
ChatGPT has a billion users so surely not all of the public hates it

I use LLMs and I think they should be nuked from orbit. [Speaking figuratively, NSA.]

Helpful, sure. Would humanity be better off without generative AI? Definitely.


> I think they should be nuked from orbit

This kind of hyperbolic stance among the broad public is sadly concerning


It should definitely be concerning to the makers of genAI!

Like I was telling someone else, what we've made here is the ultimate double-edged sword. Use it right and there's great glory. Use it wrong and you're a lifeless, empty husk. In this case, though, people get the far greater dopamine hit from using it wrong.

Algorithmic social media is like this and we already see people rotting out on the infinite scroll. And genAI makes social media look like 70s weed. The question is: on the whole, which side of that double-edged blade is going to be doing the slicing?


Maybe you should consider that they have good reasons to feel that way? That the "broad public" isn't necessarily wrong, stupid, and ignorant on every single topic?

I feel like this can be explained in part by “I like using it for myself, but I hate when others use it.”

When you use ChatGPT for yourself, you may have a sense that what you see is made up; when someone else that you trust uses it and pronounces the output in a way that suggests it is their own, you are left doing much more complex social math to figure out if your trust in this person or entity can hold. It gets exhausting, personally.


Usage != positive feelings.

Why? I think both generally track. And I don't appreciate category errors such as calling it similar to smoking.

>I think both generally track

Well, they simply don't. Everybody is forced to use it at my (>100k employee) company. They are tracking it - if you don't use it every single day, you get flagged. Tons of people hate this.

I use it constantly. Everybody I work with uses it constantly. We use it voluntarily (we are the software folks). And we would all give it up in a heartbeat if there was a way to destroy it.

I hear this sentiment constantly online and from friends at other large companies. You are not observing the general reality.


That's your experience, perhaps. I use it extensively in my work, and some in my personal life. I also think it's a net negative for society, will likely be highly destructive, and I would be happy to give up my use of it in exchange for its elimination.

Yep, anyone familiar with the Red Queen's race will recognize the importance of staying on top of AI advancements.

But that position is orthogonal to whether they like having to participate in the race to begin with.


How many of those billion users are using it because they keep being told to become proficient with AI or risk being left behind? I guess a decent enough slice, as the FUD campaign is getting more persistent every day.

So, seven billion non-users.

That’s not really an accurate measure, assuming it’s true. I, for one, have an OpenAI account but have never used it, not once, and you can imagine a lot of people are the same; the rest maybe use it casually, or only on the free tier.

> ChatGPT has a billion users

And their company's leadership is famous for compulsively lying. Pardon me if I suspect they might be arriving at that number using creative math.


X has more active users than it ever has in its history


Passkeys are auth garbage. Normal users do not benefit from overly complex auth.


You tap your finger and you're done. Faster than a password paste. How is that complex or difficult UX?


I’ve used AWS for almost 20 years and I can tell you it’s more stable than Azure


I have zero doubts.


Depends how many 3090s you have


How many do you need to run inference for 1 user on a model like Opus 4.5?


8x 3090.

Actually better make it 8x 5090. Or 8x RTX PRO 6000.


How is there enough space in this world for all these GPUs


Just try calculating how many RTX 5090 GPUs by volume would fit in a rectangular bounding box of a small sedan car, and you will understand how.

Honda Civic (2026) sedan has 184.8” (L) × 70.9” (W) × 55.7” (H) dimensions for an exterior bounding box. Volume of that would be ~12,000 liters.

An RTX 5090 GPU is 304mm × 137mm, with roughly 40mm of thickness for a typical 2-slot reference/FE model. That makes a bounding box of ~1.67 liters.

Do the math, and you will discover that a single Honda Civic is the volume equivalent of ~7,180 RTX 5090 GPUs. And that’s a small sedan, significantly smaller than the average or median car on US roads.
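The napkin math above, as a runnable sketch. All inputs are the comment's own estimates (2026 Civic exterior bounding box, 2-slot 5090 FE card):

```python
# Napkin math: how many RTX 5090 bounding boxes fit, by raw volume,
# into the bounding box of a Honda Civic. Numbers are the comment's
# own estimates, not measurements.
CIVIC_IN = (184.8, 70.9, 55.7)   # L x W x H, inches
GPU_MM = (304, 137, 40)          # L x W x thickness, mm

IN3_TO_L = 0.016387064           # one cubic inch in liters

civic_l = CIVIC_IN[0] * CIVIC_IN[1] * CIVIC_IN[2] * IN3_TO_L
gpu_l = GPU_MM[0] * GPU_MM[1] * GPU_MM[2] / 1_000_000  # mm^3 -> liters

print(f"Civic bounding box: {civic_l:,.0f} L")        # ~12,000 L, as above
print(f"RTX 5090 bounding box: {gpu_l:.2f} L")        # ~1.67 L
print(f"GPUs per Civic, by volume: {civic_l / gpu_l:,.0f}")  # ~7,180
```

Volume-only, of course: no boards, chassis, power, cooling, or airflow, which is what the replies below pick at.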


What about what's around the GPU? Motherboard etc.


I didn’t do the napkin math on it earlier, because I don’t believe it really matters for making the point I was making.

I don’t care about looking up real numbers, so I will just overestimate heavily. Let’s say that for a large enough number of GPUs, the overhead of all the surrounding equipment would be around 20% (amortized).

So you can just take the number of GPUs I calculated in my previous comment, multiply by 0.8, and you get your answer.
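Taking the ~7,180 figure from the earlier comment and the assumed 20% (amortized) overhead at face value, the adjustment is a single multiply:

```python
# Parent comment's estimate: ~7,180 GPUs per Civic by raw volume,
# with an assumed 20% amortized overhead for boards, chassis, etc.
gpus_by_volume = 7180
overhead = 0.20

usable = gpus_by_volume * (1 - overhead)
print(f"{usable:,.0f} GPUs after overhead")  # 5,744
```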


This is not 20%, it's 100%+.


Now factor in power and cooling...


Don’t forget to lease out idle time to your neighbors for credits per 1M tokens…


Milk crates and fans, baby. Party like it’s 2012.


48x 3090s, actually.


None, if you have time to wait, and a bit of memory on the computer.


Is that the claim the OP is making?


The food pyramid was published by the Department of Agriculture; it’s always been propaganda.


Thanks to the dedicated work of Edward Bernays... nephew of Sigmund Freud ... and the Creel Committee

https://en.wikipedia.org/wiki/Committee_on_Public_Informatio...

When Lucky Strike needed more women to smoke cigarettes in the late 1920s, it turned to Bernays.


It doesn't sound like your firm does any diligence that would actually prevent you from buying a vendor that has security flaws.


Your coworkers were probably writing subtle bugs before AI too.


Would you rather consume a bowl of soup with a fly in it, or a 50 gallon drum with 1,000 flies in it? In which scenario are you more likely to fish out all the flies before you eat one?


Easier to skim 1000 flies from a single drum than 100 flies from 100 bowls of soup.


Alas, the flies are not floating on the surface. They are deeply mixed in, almost as if the machine that generated the soup wanted desperately to appear to be doing an excellent job making fly-free soup.


… while not having a real distinction between flies and non-fly ingredients.


No, I think it would be far easier to pick 100 flies out of 100 single bowls of soup than to pick all 1,000 flies out of a 50 gallon drum.

You don’t get to fix bugs in code by simply pouring it through a filter.


I think the dynamic is different - before, they were writing and testing the functions and features as they went. Now, (some of) my coworkers just push a PR for the first or second thing copilot suggested. They generate code, test it once, it works that time, and then they ship it. So when I am looking through the PR it's effectively the _first_ time a human has actually looked over the suggested code.

Anecdote: In the 2 months after my org pushed copilot down to everyone the number of warnings in the codebase of our main project went from 2 to 65. I eventually cleaned those up and created a github action that rejects any PR if it emits new warnings, but it created a lot of pushback initially.
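A minimal sketch of that kind of gate, assuming you can capture compiler output to a log in CI; the baseline number and gcc/clang-style log format here are hypothetical, not the actual action:

```python
import re

# Hypothetical CI gate: fail the build if the log contains more warnings
# than a committed baseline. Assumes gcc/clang-style lines like
# "file.c:12:5: warning: ...".
WARNING_RE = re.compile(r"\bwarning:")

def count_warnings(log_text: str) -> int:
    return sum(1 for line in log_text.splitlines() if WARNING_RE.search(line))

def gate(log_text: str, baseline: int) -> int:
    """Return a process exit code: 0 if within baseline, 1 if new warnings."""
    n = count_warnings(log_text)
    if n > baseline:
        print(f"FAIL: {n} warnings (baseline {baseline})")
        return 1
    print(f"OK: {n} warnings (baseline {baseline})")
    return 0

if __name__ == "__main__":
    sample_log = (
        "main.c:12:5: warning: unused variable 'x'\n"
        "main.c:40:1: note: expanded from macro\n"
        "util.c:7:9: warning: implicit conversion\n"
    )
    code = gate(sample_log, baseline=2)
    # in a real CI step, `code` would become the process exit status
```

Ratcheting the baseline down over time (rather than pinning it) is what eventually gets you back to zero.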


Then, when you've taken an hour to be the first person to understand how their code works from top to bottom, and you point out the obvious bugs, problems, and design improvements, your questions about it are answered with an immediate flurry of commits, and it's back to square one. (No, I don't think this component needs 8 useEffects added to it which deal exclusively with global state that's only relevant 2 layers down, effectively treating React components like an event-handling system for data. Don't believe people who tell you LLMs are good at React; if you see a useEffect with an obvious LLM comment above it, it's likely to be buggy or unnecessary.)

Who are we speeding up, exactly?


Yep, and if you're lucky they actually paste your comments back into the LLM. A lot of times it seems like they just prompted for some generic changes, and the next revision has tons of changes from the first draft. Your job basically becomes playing reviewer to someone else's interactions with an LLM.

It's about as productive as people who reply to questions with "ChatGPT says <...>" except they're getting paid to do it.


pg_ system tables aren’t built for direct consumption. You typically have to massage them quite a bit to measure whatever statistic you need.
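One example of the massaging: the statistics views expose raw counters, not the statistic you actually want. A sketch of deriving a per-table cache hit ratio from `pg_statio_user_tables` (column names are from the standard PostgreSQL statistics views; the driver call in the comment is illustrative):

```python
# pg_statio_user_tables gives raw block counters (heap_blks_hit,
# heap_blks_read), not a ratio -- you compute it yourself and guard
# against division by zero with NULLIF.
CACHE_HIT_SQL = """
SELECT relname,
       heap_blks_hit::float
         / NULLIF(heap_blks_hit + heap_blks_read, 0) AS cache_hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC;
"""

# e.g. with a driver: cur.execute(CACHE_HIT_SQL); rows = cur.fetchall()
print(CACHE_HIT_SQL.strip())
```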

