what do you mean by Ada's complexity?
E.g. C++ is really complex because it has a lot of features that interact badly with each other.
Is this true for the Ada language/compiler? Or do you mean the overall complexity of the ideas included in Ada - the way the proof of the Poincaré conjecture is complex for an unprepared person?
Yes, Ada has a lot of the same kind of fractal complexity that C++ has, which derives from unforeseen interactions between features.
On top of that, as I said in another comment, features are extremely overspecified. The standard specifies what has to be done in every edge case, often in a way that is not very practical to implement efficiently.
I think so. Who writes something and why are important context for what we do with the information. It's an issue with the lack of disclosure, not AI in general.
Most longform readers will assume an author has deep expertise and spent a lot of time organizing their thoughts, which lends their ideas some legitimacy and trust. For a small blog, an 8,000-word essay is a passion project.
But if AI is detected in the phrasing and not disclosed, it begs a lot of questions. Did AI write the whole thing, or were there just light edits? Are the facts AI-generated, too, and not from personal experience? What motivated someone to produce this content if they were going to automate parts of its creation? Why would they value the output more than the process?
> But if AI is detected in the phrasing and not disclosed, it begs a lot of questions.
absolutely zero questions from me.
If I see two pieces of writing that are exactly the same - one by a human, another by AI - it doesn't matter to me.
> Most longform readers will assume an author has deep expertise and spent a lot of time organizing their thoughts, which lends their ideas some legitimacy and trust.
That assumption by "most readers" is incorrect. Even before AI there were enough ways to throw together a long read. So AI isn't really a gamechanger here.
why?
Before the comments about LLMs, I didn't notice this. Afterwards I compared pre-LLM and post-LLM posts, and it looks like AI was used to write/edit this article.
But... why should it matter to me? Why does my ignorance of this fact bother you so much?
the one feature I have never seen in a VCS is the ability to attach a commit message not just to the whole changeset, but to a specific line/hunk.
It seems very intuitive to me - I don't want to invent a "reference style" to say something about a specific changed line in my commit message; I want to comment on that line directly.
Just to be clear: this editor is not suitable for the work I do.
It lacks many features that make me more efficient. In vim's case, that is solved by a few plugins. It's not about being an IDE but rather about being a well-fitted tool. I wish LSP were all I needed, but it's not enough for my work.
Do you mean why I care that I have to call the free function at every exit point of the scope? That's easy: because it's error prone. Defer is much less error prone.
Of course people do "virtual functions" in C, but I don't think that is an argument against C.
I've noticed that making a function virtual in C++ is sooo easy that people start abusing it. This makes reading/understanding/debugging the code much harder (especially if they mix it up with templates).
And here C shows a way - it allows "virtual" but makes it cumbersome. So you will think twice before using it.
can someone explain the security considerations of placing scripts in $HOME?
Some time ago I moved all my scripts to /usr/local/bin, because I felt that was better from a security perspective.
There are no negative security implications - quite the contrary.
It is objectively cleaner to keep your user scripts in your home directory: that way they are only in _your_ PATH, whereas putting them in /usr/[local/]bin implicitly adds them to the PATH of every [service] user on the machine, which I can see creating obscure undesired effects.
Not to mention the potential issues with packages that could overwrite your scripts at install time, unexpected shadowing of service binaries, setuid security implications, etc.
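A common way to get the per-user setup described above is a small config fragment like this (assuming a bash-like shell; `~/bin` is just a conventional directory choice):

```shell
# In ~/.bashrc or ~/.profile: keep personal scripts in ~/bin and
# prepend it to PATH, so only this user's sessions can run them.
mkdir -p "$HOME/bin"
export PATH="$HOME/bin:$PATH"
```

Nothing outside your own login sessions picks these scripts up, and no package manager will ever write into that directory.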