It should definitely be concerning to the makers of genAI!
Like I was telling someone else, what we've made here is the ultimate double-edged sword. Use it right and there's great glory. Use it wrong and you're a lifeless, empty husk. In this case, though, people get the far greater dopamine hit from using it wrong.
Algorithmic social media is like this and we already see people rotting out on the infinite scroll. And genAI makes social media look like 70s weed. The question is: on the whole, which side of that double-edged blade is going to be doing the slicing?
Maybe you should consider that they have good reasons to feel that way? That the "broad public" isn't necessarily wrong, stupid, and ignorant on every single topic?
I feel like this can be explained in part by “I like using it for myself, but I hate when others use it.”
When you use ChatGPT yourself, you have a sense that what you're seeing may be made up; when someone you trust uses it and presents the output as if it were their own, you're left doing much more complex social math to figure out whether your trust in that person or entity can hold. It gets exhausting, personally.
Well, they simply don't. Everybody is forced to use it at my (>100k employee) company. They are tracking it - if you don't use it every single day, you get flagged. Tons of people hate this.
I use it constantly. Everybody I work with uses it constantly. We use it voluntarily (we are the software folks). And we would all give it up in a heartbeat if there was a way to destroy it.
I hear this sentiment constantly online and from friends at other large companies. You are not observing the general reality.
That's your experience, perhaps. I use it extensively in my work, and some in my personal life. I also think it's a net negative for society, will likely be highly destructive, and I would be happy to give up my use of it in exchange for its elimination.
How many of those billion users are using it because they keep being told to become proficient with AI or risk being left behind? I'd guess a decent slice, as the FUD campaign gets more persistent every day.
That’s not really an accurate measure even if it’s true. I, for one, have an OpenAI account but have never used it, not once, and you can imagine plenty of people are the same; the rest maybe use it casually, or only on the free tier.
Just try calculating how many RTX 5090 GPUs would fit, by volume, in the rectangular bounding box of a small sedan, and you will understand how.
A 2026 Honda Civic sedan has an exterior bounding box of 184.8” (L) × 70.9” (W) × 55.7” (H), which comes to ~12,000 liters.
An RTX 5090 is 304mm × 137mm, with roughly 40mm of thickness for a typical 2-slot reference/FE model, giving a bounding box of ~1.67 liters.
Do the math, and you will find that a single Honda Civic is equivalent to ~7,180 RTX 5090 GPUs by volume. And that’s a small sedan, significantly smaller than the average or median car on US roads.
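For anyone who wants to check the napkin math, here it is as a few lines of Python (every number above is approximate, so treat the result as a rough estimate):

    # Rough volume comparison: Honda Civic bounding box vs. RTX 5090 boxes
    IN3_TO_L = 0.0163871                       # liters per cubic inch
    civic_l = 184.8 * 70.9 * 55.7 * IN3_TO_L   # exterior box, ~11,960 L
    gpu_l = (304 * 137 * 40) / 1_000_000       # mm^3 to liters, ~1.67 L per card
    print(round(civic_l / gpu_l))              # ~7,180 GPUs per Civic, by volume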
I didn’t do the napkin math earlier because I don’t believe it really matters for the point I was making.
I don’t care about looking up real numbers, so I will just overestimate heavily. Let’s say that for a large enough number of GPUs, the overhead of all the surrounding equipment would be around 20% (amortized).
So you can just take the number of GPUs I calculated in my previous comment, multiply by 0.8, and you get your answer.
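If you want it spelled out, continuing the Python sketch above (same rough numbers):

    # ~20% of the volume goes to surrounding equipment, so ~80% is GPUs
    print(round(7_180 * 0.8))   # ~5,744 GPU-equivalents per Civic of volume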
Would you rather consume a bowl of soup with a fly in it, or a 50 gallon drum with 1,000 flies in it? In which scenario are you more likely to fish out all the flies before you eat one?
Alas, the flies are not floating on the surface. They are deeply mixed in, almost as if the machine that generated the soup wanted desperately to appear to be doing an excellent job making fly-free soup.
I think the dynamic is different - before, they were writing and testing the functions and features as they went. Now, (some of) my coworkers just push a PR for the first or second thing copilot suggested. They generate code, test it once, it works that time, and then they ship it. So when I am looking through the PR it's effectively the _first_ time a human has actually looked over the suggested code.
Anecdote: in the 2 months after my org pushed Copilot down to everyone, the number of warnings in our main project's codebase went from 2 to 65. I eventually cleaned those up and created a GitHub Action that rejects any PR that emits new warnings, but it created a lot of pushback initially.
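(The action is just a ratchet check. For the curious, a minimal sketch of the script it runs; the log path, the baseline file, and the deliberately naive warning regex are all placeholders to adapt to your own toolchain:)

    #!/usr/bin/env python3
    # Fail the build if this PR's log has more warnings than the baseline on main.
    import re, sys
    from pathlib import Path

    baseline = int(Path("warning-baseline.txt").read_text().strip())  # committed on main
    log = Path("build.log").read_text().splitlines()                  # this PR's build output
    current = sum(1 for line in log if re.search(r"\bwarning\b", line, re.IGNORECASE))

    if current > baseline:
        sys.exit(f"FAIL: {current} warnings vs. baseline {baseline}; fix the new ones.")
    print(f"OK: {current} warnings (baseline {baseline})")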
Then, once you've spent an hour being the first person to understand how their code works from top to bottom, and you point out the obvious bugs, problems, and design improvements, your questions are answered with an immediate flurry of commits and it's back to square one. (No, I don't think this component needs 8 useEffects added to it that deal exclusively with global state only relevant 2 layers down, effectively treating React components as an event-handling system for data. Don't believe people who tell you LLMs are good at React: if you see a useEffect with an obvious LLM comment above it, it's likely buggy or unnecessary.)
Yep, and if you're lucky they actually paste your comments back into the LLM. A lot of times it seems like they just prompted for some generic changes, and the next revision has tons of changes from the first draft. Your job basically becomes playing reviewer to someone else's interactions with an LLM.
It's about as productive as people who reply to questions with "ChatGPT says <...>" except they're getting paid to do it.