
It seems like Anthropic is increasingly confused into treating these non-deterministic magic 8-balls as actually intelligent entities.

The biggest enemy of AI safety may end up being deeply confused AI safety researchers...



I don't think they're confused, I think they're approaching it as general AI research due to the uncertainty of how the models might improve in the future.

They even call this out a couple of times in the intro:

> This feature was developed primarily as part of our exploratory work on potential AI welfare

> We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future


I take good care of my pet rock for the same reason. In case it comes alive, I don't want it to bash my skull in.


It’s clever PR and marketing, and I bet they have their top minds on it. Judging by the comments here, it’s working!


Is it confusion, or job security?



