It really really really depends on how you are using it and what you are using it for.
I can get LLMs to write most of the CSS I need by treating them like a slot machine and pulling the handle until they spit out what I need, but this doesn't cause me to learn CSS at all.
I find LLMs a lot more useful for diving into bugs involving multiple layers and versions of third-party dependencies. These are deep issues where, when I see the answer, I completely understand what the LLM did to find it and what the problem was (so in essence I wouldn't have learned anything by diving deep into the issue myself), but it was able to do so much more efficiently than me cross-referencing code across multiple commits on GitHub, docs, etc.
This lets me focus my attention on important learning endeavors, things I actually want to learn and am not forced to learn simply because a vendor was sloppy and introduced a bug in v3.4.1.3.
LLMs excel when you can give them a lot of relevant context; they behave like an intelligent search function.
Indeed, many if not most bugs are intellectually dull. They're just lodged within a layered morass of cruft and require a lot of effort to unearth. It is rarely intellectually stimulating, and when it is as a matter of methodology, it is often uninteresting as a matter of acquired knowledge.
The real fun of programming is when it becomes a vector for modeling something, communicating that model to others, and talking about that model with others. That is what programming is: modeling. There's a domain you're operating within. Programming is a language you use to talk about part of it. It's annoying when a distracting and inessential detail derails this conversation.
Pure vibe coding is lazy, but I see no problem with AI assistants. They're not a difference in kind, but of degree. No one argues that we should throw away type checking because it reduces the cognitive load of inferring the types of expressions in your head, as you must in dynamic languages. The reduction in wasteful cognitive load is precisely the point.
Quoting Aristotle's Politics, "all paid employments [...] absorb and degrade the mind". There's a scale, arguably. There are intellectual activities that are more worthy and better elevate the mind, and there are those that absorb its attention, mold it according to base concerns, drag it into triviality, and take time away from higher pursuits.
I agree with your definition of programming (and I’ve been saying the same thing here), but
> It's annoying when a distracting and unessential detail derails this conversation
there are no such details.
The model (the program) and the simulation (the process) are intrinsically linked, as the latter is what gives the former its semantics. The simulation apparatus may be noisy (when its own model blends into ours), but corrective and transformative models exist (abstraction).
> No one argues that we should throw away type checking,…
That’s not a good comparison. Type checking helps with cognitive load when verifying correctness, but it increases it when you’re not sure of the final shape of the solution. It’s a bit like pen vs. pencil in drawing: pen is more durable and cleaner, while pencil feels more adventurous.
As long as you can pattern match to get a solution, an LLM can help you, but that does require having encountered the pattern before in order to describe it. It can remove tediousness, but any creative usage is problematic, as it has no restraints.
Qua formal system, yes, but this is a pedantic point, as the aim - the what - of a system is more important than the how. This makes the distinction between domain-relevant features and implementation details more conspicuous. If I wish to predict the relative positions of the objects of our solar system, then in relation to that end and that domain concern, it matters not whether the underlying model assumes a geocentric or heliocentric stance (that, tacitly, is the deeper value of Copernicus's work: he didn't vindicate heliocentrism, he showed that a heliocentric model is just as explanatory and preserves appearances equally well, and I would say that this mathematical and even philosophical stance toward scientific modeling is the real Copernican revolution, not all the later pamphleteer mythology).
Of course, in relation to other ends and contexts, what were implementation details in one case become the domain in the other. If you are, say, aiming for model simplicity, then you might prefer heliocentrism over geocentrism with all its baroque explanatory or predictive devices.
The underlying implementation is, from a design point of view, virtually within the composite. The implementation model is not of equal rank and importance as the domain model, even if the former constrains the latter. (It's also why we talk about rabbit-holing; we can get distracted from our domain-specific aim, but distraction presupposes a distinction between the domain-specific aim and something that isn't.) When woodworking, we aren't talking about quantum mechanical phenomena in the wood, because while you cannot separate the wood from the quantum mechanical phenomena as a factual matter - distinction is not separation - the quantum is virtual, not actual, with respect to the wood, and it is irrelevant within the domain concerning the woodworker.
So, if there is a bug in a library, that is, in some sense, a distraction from our domain. LLMs can help keep us on task, because our abstractions don't care how they're implemented as long as they work and work the way we want. This can actually encourage clearer thinking. Category mistakes occur in part because of a failure to maintain clear domain distinctions.
> That’s not a good comparison. Type checking [...]
It reduces cognitive load vis-a-vis understanding code. When I want to understand a function in a dynamic language, I often have to drill down into composing functions, or look at callers, e.g., in test cases, to build up a bunch of constraints in my mind about what the domain and codomain are. (This can become increasingly difficult when the dynamic language has some form of generics, because if you care about the concrete type/class in some case, you need even more information.)
This cognitive load distracts us from the domain. The domain is effectively blurred without types. Usually, modeling something using types first actually liberates us, because it encourages clearer thinking upfront about the what instead of jumping right into how. (I don't pretend that types never increase certain kinds of burdens, at least in the short term, but I am talking about a specific affordance. In any case, LLMs play very nicely with statically-typed languages, and so this actually reduces one of the argued benefits of dynamic languages as ostensibly better at prototyping.)
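A minimal sketch of that affordance, using hypothetical functions and a made-up `LineItem` type (none of this comes from any codebase discussed here): the unannotated version forces the reader to chase callers to recover the domain and codomain, while the annotated one states them at the boundary.

```python
from dataclasses import dataclass

# Without annotations, a reader must drill into composing functions or
# callers to infer what `total` accepts and returns.
def total(items):
    return sum(i.price * i.qty for i in items)

# With annotations, the constraint-gathering happens once, in the
# signature, instead of in the reader's head at every call site.
@dataclass
class LineItem:
    price: float
    qty: int

def total_typed(items: list[LineItem]) -> float:
    return sum(i.price * i.qty for i in items)
```

Both bodies are identical; only the typed version carries the domain model (`list[LineItem] -> float`) at the interface, which is the "clearer thinking upfront about the what" the comment describes.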
> As long as you can pattern match to get a solution [...]
Indeed, and that's the point. LLMs work so well precisely because our abstractions suck. We have a lot of boilerplate and repetitive plumbing that is time-consuming and tedious and pulls us away from the domain. Years of programming research and practice have not resolved this problem, which suggests that such abstractions are either impractical or unattainable. (The problem is related to the philosophical question of whether you can formalize all of reality, which you cannot, and certainly not under one formal system.)
I don't claim that LLMs don't have drawbacks or tradeoffs, or require new methodologies to operate. My stance is a moderate one.
> Yes but that’s why you ask it to teach you what it just did.
Are you really going to do that though? The whole point of using AI for coding is to crank shit out as fast as possible. If you’re gonna stop and try to “learn” everything, why not take that approach to begin with? You’re fooling yourself if you think “ok, give me the answer first then teach me” is the same as learning and being able to figure out the answer yourself.
I would consider this a benefit. I've been a professional for 10 years and have successfully avoided CSS for all of it. Now I can do even more things and still successfully avoid it.
This isn’t necessarily a bad thing. I know a little CSS and have zero desire or motivation to know more; the things I’d like done that need CSS just wouldn’t have been done without LLMs.
I find it intellectually exhausting to describe to a machine what I want, when I could build something better in the same amount of time, and it isn't for lack of understanding how the LLM works.
It takes a lot of cajoling to get an LLM to produce a result I want to use. It takes no cajoling for me to do it myself.
The only time "AI" helps is in domains that I am unfamiliar with, and even then it's more miss than hit.
> I find it intellectually exhausting to describe to a machine what I want, when I could build something better in the same amount of time, and it isn't for lack of understanding how the LLM works.
I don’t even bother. Most of my use cases have been when I’m sure I’ve done the same type of work before (tests, CRUD queries, …). I describe the structure of the code and let it replicate the pattern.
For any fundamental alteration, I bring out my vim/emacs-fu. But after a while, you start to have good abstractions, and you spend your time more on thinking than on coding (most solutions are a few lines of code).
It is better than doomscrolling on Instagram for hours like the new generations do. At least the brain is active, creating ideas or reading text nonstop to keep itself engaged.
I think we have a skewed perception of ability, now that we’re connected via the internet to the whole rest of humanity.
Nature is ableist in a lot of ways… if you had diabetes in some ancient tribe you’d basically be screwed. But if you had an amputated leg, I dunno. You could still be the best flint knapper in the tribe.
I mean, think of your extended circle of friends and acquaintances, your 100 “closest” friends. In particular, think back on the community you grew up in, if you’ve subsequently moved to a tech hub that accumulates rare talent. I bet that, picking through your hobbies, there are a couple of potentially useful things you’d be a top-tier contributor in. Most people are really bad at almost everything they don’t do, after all.
I’m not going to engage in some prehistoric idealism. The past was pretty brutal. But we’d probably all die by getting little infected cuts, not because we were abandoned by our tribes.
The early nod to the Agora Institute’s mission of “building stronger global democracy,” followed by bemoaning USAID cuts, makes me wonder if the author is deliberately missing one of the most glaring examples of this.
How can we have a "stronger global democracy" if we don't currently have "global democracy" to begin with? "Global" suggests it is worldwide, whereas we know a number of countries out there are not democratic, or are barely democratic (due to corruption, war, and other issues).
This is all debatably valid, except for the fact that the entrenched system produced massive fraud, money laundering, wagging-the-dog and worst of all, a decade of domestic propaganda and anti-democratic schemes in an attempt to protect the machine from widespread exposure.
Except all of that was widely recognized and reported on at the time. People just didn't care. Lots of people will argue about this stuff until they're blue in the face, but no one is "surprised" by any of the evidence. The malfeasance was going to happen anyway, it's an inevitable consequence of global realpolitik. There's no Rule of Law on the high seas, as it were.
My point really isn't that cynical, it's more optimistic: if you're going to do all that stuff (and let's be honest and admit upfront: we were 100% going to do all that stuff) you might as well feed a bunch of people and garner some good will along the way.
On the contrary ... we LOVE the perennial underdog who stays in the fight. Like the British, once you start winning consistently you quickly earn our contempt.