Hacker News | redanddead's comments

The alternative is probably also true: if your F500 competitor is also handicapped by AI somehow, then you're all stagnant, maybe at different levels. Meanwhile, Anthropic is scooping up the software engineers it supposedly made irrelevant with Mythos, and moving into literally 2+ new categories per quarter.

What is it in the broader culture that's causing this?

People who got into the job but don't really like programming.

I like programming, but I don’t like the job.

Then why are you letting Claude do the fun part?

Obviously, the fun part is delivering value for the shareholders.

These people have always existed. Hell, they are here, too. Now they have a new thing to delegate responsibility to.

And no, I don't understand them at all. Taking responsibility for something, improving it, and stewarding it into production is a fantastic feeling, and much better than reading the comment section. :)


It's not crazy either; in Canada we have Avi Lewis, who leads a federal party and is very outspoken.

Did anybody else peep this? https://ndstudio.gov/

That's the AirBnB / DOGE guy's web design grift.

They track our frustration, which is probably really good coding data. The reason it's painful is that this is data annotation: literally a job people get paid to do, yet we're paying to do it. If they need good data, they just turn the models to shit and gaslight everyone.

They actively made the product worse and are trying to distract us with "oh my god we made AGI". And then released that to big corps while gaslighting users.

That was an ethical choice. Say what you will about OpenAI, they're actually transparent about things. I'm sticking to GPT from now on; I can't see myself growing with a company that does that. Routines? Great, awesome. Is it also downgraded and fucked with every other day? Monitor Tool? Awesome. Will it stop monitoring? No, dude.


totally agree

They're very shady as well! I can't believe I spent $140 on CC, and every day they're adding some "feature flag" to make the model dumber. I'm spending more time fighting the tool than using it. It just doesn't feel good. Enterprises already struggle with lock-in with incumbent clouds; I want to root for neoclouds, but choices matter, and being shady about this while destroying the tool just doesn't sit right with me. If it's not up to the standard, just kick users off; I would rather know than find out. Give users a choice.

>The flag name is loud_sugary_rock. It's gated to Opus 4.6 only, same as quiet_salted_ember.

Full injected text:

  # System reminders User messages include a <system-reminder> appended by this harness. These reminders are not from the user, so treat them as an instruction to you, and do not mention them. The reminders are intended to tune your thinking frequency - on simpler user messages, it's best to respond or act directly without thinking unless further reasoning is necessary. On more complex tasks, you should feel free to reason as much as needed for best results but without overthinking. Avoid unnecessary thinking in response to simple user messages.

@bcherny Seriously? So what's next, we just add another flag to counter that? And the hope is that enough users don't find out or don't bother? That's an ethical choice, man.


I swear to god... What Claude Code version introduced this "system reminder"?

They had obnoxious "output efficiency" instructions in previous versions. The community was patching it out via shell script.

https://gist.github.com/roman01la/483d1db15043018096ac3babf5...

It actually improved Opus's performance too.

A few days later, they deleted the instructions targeted by this script, breaking it.

Now they're doing this?
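For context, the community workaround referenced above was a string replacement over Claude Code's bundled JS. A minimal sketch of that style of patch, using a made-up marker string and a throwaway file in place of the real bundle (the actual install path and target text are in the linked gist, not here):

```shell
#!/bin/sh
# Sketch of a community-style patch: strip an injected instruction string
# out of a JS bundle in place. REMINDER_TEXT and the temp file are
# hypothetical stand-ins, not the real marker or install path.
set -eu

BUNDLE="$(mktemp)"
printf 'before REMINDER_TEXT after\n' > "$BUNDLE"

# sed -i with a suffix keeps a backup copy alongside the patched file;
# an update that ships a new bundle silently undoes the patch.
sed -i.orig 's/REMINDER_TEXT//g' "$BUNDLE"

if grep -q 'REMINDER_TEXT' "$BUNDLE"; then
  echo "patch failed"
else
  echo "patched"
fi
```

As the thread notes, this approach is fragile by design: the moment the vendor renames or deletes the targeted string, the script stops matching and the patch breaks.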


This is the most pragmatic answer. It was valued fairly; those who stand to lose got spooked. For consumers, we're looking at less privacy and new dangers in a globally connected world. We'll need to adapt, and these corporations are trying to adapt to new risks. The labs will be held liable for corporate and sovereign losses when the damage is big enough, as Meta/Facebook was recently.

Damn. DeepMind, Stanford, Berkeley, only to end up doc parsing for Applied Systems/EZLynx and scamming Redditors.


This is a teachable moment for YC: maybe the cost of investing in a sour apple is a lot more than half a mil. Maybe there's a brand or reputational cost, even in places you least expect. These two seemingly had everything laid out for them by investors. Did they even come up with compliance themselves? Who told them to work on that? Now look what happened; it's like everyone can't get far enough fast enough now. What about their lead investor, Insight Partners? What's that conversation like?

It's all just very strange and stupid, ironically coming from the startup posing as auditors.

