Hacker News | Filligree's comments

Run “nix flake update”. Commit the lockfile. Build a docker image from that; the software you need is almost certainly there, and there’s a handy docker helper.
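A minimal sketch of that flow, assuming a flake that exposes an x86_64-linux image built with nixpkgs' `dockerTools.buildLayeredImage` helper (the image name and the `hello` package are placeholders, not anything from the thread):

```nix
# flake.nix -- illustrative only; swap pkgs.hello for the software you need
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      packages.x86_64-linux.docker = pkgs.dockerTools.buildLayeredImage {
        name = "my-tool";
        tag = "latest";
        contents = [ pkgs.hello ];
        config.Cmd = [ "${pkgs.hello}/bin/hello" ];
      };
    };
}
```

Then `nix flake update`, commit `flake.lock`, and `nix build .#docker && docker load < result` to get the image into your local Docker daemon.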

Recently I’ve been noticing that Nix software has been falling behind. So “the software you need is almost certainly there” is less true these days. Recently = April 2026.

Are you referring to how the nixpkgs-unstable branch hasn't been updated in the past five days? Or do you have some specific software in mind? (not arguing, just curious)

oh, great, adding another dependency, and one that just had a serious security problem

as if other sandboxing software is perfect

I refuse. My code will be formatted according to my own preferences.

Imagine a world where your editor shows you what you want to see… but saves in a standard format for sharing.

By image generation standards this is a ridiculously good result. No surprise that people instantly find the new limits, but they are new limits.

But it's also straight up plagiarism and still ridiculously bad on so many levels.

It could already copy the art styles from its training data, what is the advancement here?

I would be perfectly happy moving it off-Earth. We can consider the long term after we have a mid-term.

Sometimes I wonder if this is somehow both an answer to the fermi paradox and the increasing expansion rate of the universe. Every alien civ doing exactly this somehow.

The exemption is about ensuring customers get what they paid for. It shouldn’t care how the manufacturer achieves that; driving the batteries less hard is an obvious tactic, and actually also makes them safer to use.

It will certainly affect the health of the TV.

In American law, companies have the choice of whether or not to do business with the government, outside of a few corner cases. There’s a process for forcing them, but it can’t just be because the leader says so.

In this particular case Anthropic had a contract stating what the military could and could not use their models for. The military broke that contract. Anthropic declined to sign a revised one.

This is within their rights, and more to the point, the government should absolutely not be allowed to unilaterally alter contracts they’ve already signed!

Predictability is the whole point. Undermining it is how you destroy your own economy.


That is allegedly not what happened. Anthropic’s CEO was happy to grant waivers on a case by case basis.

The problem is the branches of the government that Anthropic was doing business with found it infeasible to do this.

They had another problem. If one of their contractors used Claude to engineer solutions contrary to Anthropic’s “manifesto” would Claude poison pill the code?

Basically Anthropic wanted the angel's halo and the devil's horns, and the govt said pick one.


> That is allegedly not what happened. Anthropic’s CEO was happy to grant waivers on a case by case basis. The problem is the branches of the government that Anthropic was doing business with found it infeasible to do this.

That's not what the presidential announcement blacklisting Anthropic said. It said they're being punished for trying to require that the military follow their terms of service.


That’s the other pov (from the govt angle) - https://www.businessinsider.com/pentagon-official-details-ho...

The media is usually flush with defenses of Anthropic. And yes, the supply-chain-risk label is too broad. But there is another side to the story, and Anthropic isn't as "innocent" as it's made out to be.


I've heard this POV before, I just re-read it again, and I genuinely do not understand which part of it you think shows Anthropic is anything but innocent. To me it seems pretty clear: Emil Michael heard that Anthropic was asking questions about how their system was used, and he thinks that attitude is an unacceptable security risk. He won't accept the use of systems that were developed based on "their constitution, their culture, their people" or "their own policy preferences". Anyone who would ask such questions might sabotage military operations if they don't like the answers, he argues, and I believe that he genuinely believes this.

So he'll only accept systems developed by people who understand, as Sam Altman promised to, that the US military is not to be questioned.


My impression was that Dario was happy to grant case-by-case exceptions. But Emil did not want that. I mean, why set up Claude at DoW, where the goal is surveillance and targeting (possibly autonomous)?

>happy to grant case by case exceptions

Which makes more sense, the world isn't a black and white place with clear abstractions.


Sure, they have a "choice", except that no one turns down the kind of money the government has to offer, and if the company is public they are legally obligated to increase shareholder value.

Anthropic has not in fact released it, and it does in fact appear to be that dangerous, judging by the flood of vulnerability reports seen by e.g. Daniel Stenberg.

Certainly it’s a strategy OpenAI has used before, and when they did so it was a lie. Altman’s dishonesty does not mean it can never be true, however.


The flood of reports that open source projects like curl, Linux, and Chromium are getting is presumably due to public models like Open 4.6 that were released earlier this year, not models with limited availability.

How many months till they release a better model than Mythos to the general audience?

GPT-2 wasn't fully released because OpenAI deemed it too dangerous, rings a bell? https://openai.com/index/better-language-models/#sample1


A few months of restricting access to people they think will actually fix problems is a big deal. Obviously only an idiot would think it could or should be kept under wraps forever.

> judging by the flood of vulnerability reports seen by e.g. Daniel Stenberg

Maybe I've missed something, but what Stenberg has been complaining about so far is the wave of sloppy reports, seemingly written mainly by AIs. Has that ratio recently changed to mostly good reports with real vulnerabilities?


Some relevant links:

[1] https://www.npr.org/2026/04/11/nx-s1-5778508/anthropic-proje...

> Improvement in AI models' capabilities became noticeable early 2026, said Daniel Stenberg.

> He estimates that about 1 in 10 of the reports are security vulnerabilities, the rest are mostly real bugs. Just three months into 2026, the cURL team Stenberg leads has found and fixed more vulnerabilities than each of the previous two years.

[2] https://www.linkedin.com/posts/danielstenberg_curl-activity-...

> The new #curl, AI, security reality shown with some graphs. Part of my work-in-progress presentation at foss-north on April 28.


He has changed his opinion completely. Yes, the ratio has turned.

Yes:

> The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good.

> I'm spending hours per day on this now. It's intense.

https://mastodon.social/@bagder/116336957584445742


Those vulnerabilities were found by open models as well.

Partly true. I think the consensus was it wasn't comparable because Mythos swept the entire codebase and found the vulnerabilities, whereas the open models were told where to look for said vulnerabilities.

https://news.ycombinator.com/item?id=47732337


Not really. The models were pointed specifically at the location of the vulnerability and given some extra guidance. That's an easier problem than simply being pointed at the entire code base.

Surely the Anthropic model also only looked at one chunk of code at a time; it can't fit the entire code base into context. So supplying an identical chunk size (per file, function, whatever) and seeing if the open source model can find anything seems fair. Deliberately prompting with the problem is not.

You can’t get the couch into the elevator, typically. Trust me, I tried.

Couch depending. I will persist in trying every time this comes up.


Well if it's one of those hospital elevators that can take a bed with a patient, you probably could. Or if it's a small 2 seater sofa. The question isn't as dumb as it sounds at first, and a human would definitely ask a follow up question.

You can take a mattress up an elevator though (1). Some couches might fit in some elevators.

1: source: me...


Because their numbers don’t work out. When you do the math on token cost versus inference speed, you get something that barely breaks even even with cheap power.

Also they’ve already launched a crypto token, which is a terrible sign.

