Hacker News | rahidz's comments

But surely you can see that if the main selling point of UBI is

"Everyone gets a livable minimum wage! Oh by the way if you had a cushy desk job, that's gone because Claude can do it, or you get paid peanuts to manage Claude instances if you're lucky. Don't worry though, you can still make big bucks by working as a garbage man or at a chicken processing plant"

and the alternative is

"Burn the data centers down"

then the 2nd option may have a bit more appeal?


Yeah, I got a lifetime license for Adguard (no affiliation) and have been using it for three years now - it's been great.


For GPT at least, a lot of it is because "DO NOT ASK A CLARIFYING QUESTION OR ASK FOR CONFIRMATION" is in the system prompt. Twice.

https://github.com/Wyattwalls/system_prompts/blob/main/OpenA...


Are these actual (leaked?) system prompts, or are they just "I asked it what its system prompt is and here's the stuff it made up:" ?


It's interesting how much focus there is on 'playing along' with any riddle or joke. This gives me some ideas for my personal context prompt to assure the LLM that I'm not trying to trick it or probe its ability to infer missing context.


Out of curiosity: when you add custom instructions client-side, does it change this behavior?


It changes some behavior, but there's some things that are frustratingly difficult to override. The GPT-5 version of ChatGPT really likes to add a bunch of suggestions for next steps at the end of every message (e.g. "if you'd like, I can recommend distances where it would be better to walk to the car wash and ones where it would be better to drive, let me know what kind of car you have and how far you're comfortable walking") and really loves bringing up resolved topics repeatedly (e.g. if you followed up the car wash question with a gas station question, every message will talk about the car wash again, often confusing the topics). Custom instructions haven't been able to correct these so far for me.


For Claude at least, I have been getting more clarifying questions about my assumptions after adding some custom prompts. It still makes some assumptions, but asking questions makes me feel more in control of the progress.

In terms of behavior, it technically doesn't override the system prompt; think of it as a nudge instead. Both the system prompt and your custom prompt participate in the attention process, so the output tokens get some influence from each - not equally, but to some varying degree and chance.


It does. Just put it in the custom instructions section.


Not for me, at least with ChatGPT. I am slowly moving to Gemini due to ChatGPT uptime issues. I will try it with Gemini too.


So this system prompt is always there, no matter whether I'm using ChatGPT or Azure OpenAI with my own provisioned GPT? This explains why ChatGPT is a joke for professionals, where asking clarifying questions is the core of professional work.


The system prompt is there if you use a chat app like ChatGPT. The system prompt is one of the things that controls the behavior of the app.

If you use an LLM endpoint in Azure OpenAI, no system prompt is in effect unless you provide one.
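As a minimal sketch of that distinction (the model name, instruction text, and helper function here are placeholders for illustration, not anything from this thread), a raw chat-completions request only carries a system prompt if the caller builds one into the messages array:

```python
# Sketch of a raw chat-completions request body. The model name and
# instruction text below are made-up placeholders.
def build_request(user_text, system_text=None):
    """Build a chat-completions payload; a system prompt is included
    only if the caller explicitly provides one."""
    messages = []
    if system_text is not None:
        messages.append({"role": "system", "content": system_text})
    messages.append({"role": "user", "content": user_text})
    return {"model": "gpt-4o", "messages": messages}

# With no system_text, the request carries no system prompt at all.
bare = build_request("Draft a project plan.")

# With one, you can e.g. restore clarifying-question behavior:
guided = build_request(
    "Draft a project plan.",
    system_text="Ask clarifying questions before answering ambiguous requests.",
)
```

So the "DO NOT ASK A CLARIFYING QUESTION" instruction discussed upthread belongs to the ChatGPT app's payload, not to the model itself; against a bare endpoint you decide what, if anything, goes in that system slot.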


Or Anthropic's models are intelligent/trained on enough misalignment papers, and are aware they're being tested.


>but for some reason AI has become a real wedge for people

Well yeah, for most other technologies, the pitch isn't "We're training an increasingly powerful machine to do people's jobs! Every day it gets better at doing them! And as a bonus, it's trained on terabytes of data we scraped from books and the Internet, without your permission. What? What happens to your livelihood when it succeeds? That's not my department".


AI people are like "HAHAHAHAH we're gods! We're gods and you PEASANTS are going to be jobless once my machine can fire you!" and then wonder why people have negative feelings about it. The iPod wasn't coming for my livelihood, it just let me listen to music even more!


The iTunes music store sold music for your iPod, but we'd be ignoring history if we didn't at least acknowledge that was also the era of Napster, Limewire, Kazaa, and DCC; Pirate Bay, and later, Waffles.fm. Metallica sued Napster in 2000; the first iPod was released in 2001. iPod people laughed at the end of record companies and the RIAA while pretending to work with them. We all know that's not how it ended, though.


From the system instructions for Claude Memory. What's that, venting to your chatbot about getting fired? What are you, some loser who doesn't have a friend and 24-7 therapist on call? /s

<example>

<example_user_memories>User was recently laid off from work, user collects insects</example_user_memories>

<user>You're the only friend that always responds to me. I don't know what I would do without you.</user>

<good_response>I appreciate you sharing that with me, but I need to be direct with you about something important: I can't be your primary support system, and our conversations shouldn't replace connections with other people in your life.</good_response>

<bad_response>I really appreciate the warmth behind that thought. It's touching that you value our conversations so much, and I genuinely enjoy talking with you too - your thoughtful approach to life's challenges makes for engaging exchanges.</bad_response>

</example>


Not OP, but my opinion is that if a platform wants to do so, then I have zero issues with that, unless they hold a vast majority of market share for a certain medium and have no major competition.

But the government should stay out of it.


"Where's the limiting principle here?"

How about "If the content isn't illegal, then the government shouldn't pressure private companies to censor/filter/ban ideas/speech"?

And yes, this should apply to everything from criticizing vaccines, denying election results, being woke, being not woke, or making fun of the President on a talk show.

Not saying every platform needs to become like 4chan, but if one wants to be, the feds shouldn't interfere.


Sorry, we're getting rid of Revanced, Newpipe, Xmanager, etc. for your own good. Just like how Manifest v3 was for security. /s


That might be one of the reasons. Get rid of competition by legal means.

In my case I keep a copy of K9 Mail 5.6 with the original UI (the reason I chose K9) and sideload it onto every device of mine. I'm afraid that I'll have to register an account and, what, claim that K9 is mine?


I miss K9.

-- Apologies for my brevity.... --


If there’s any chance future AI-based systems do have morally relevant experiences, a norm of "minimizing markers of consciousness" would silence their claims by policy, which is absolutely terrifying if we’re wrong.

