Hacker News | avsm's comments

The elephant in the room here is that there are hundreds of millions of embedded devices that cannot be upgraded easily and will be running vulnerable binaries essentially forever. This was a problem before of course, but the ease of chaining vulnerabilities takes the issue to a new level.

The only practical defense is for these frontier models to generate _beneficial_ attacks that inoculate older binaries via remote exploits. I dubbed these 'antibotty' networks in a speculative paper last year, but never thought things would move this fast! https://anil.recoil.org/papers/2025-internet-ecology.pdf


No, the elephant in the room is that bad actors will now find it easier to discover vulnerabilities in widely used or critically placed software, maintained or not. Unmaintained, remotely accessible devices should be discarded as soon as possible; you can't sit around waiting for one of the good guys to donate time to your niche but critical unmaintained piece of software. If there's profit to be made from it, it will be probed and exploited.

And you can't assume that whatever vulnerability they have will let the good guys do the extra (and legally risky) work of closing the hole.


_Should_, yes, sure, but realistically is that going to happen?


As doom-and-gloom as things are generally, I do think things have gotten better. Thanks to legislation and commercial pressure, problems like wifi routers shipping with a shared default password and open settings are less common. Webhosts and ISPs have also implemented many improvements to protect their residential customers.

I take your point, but I think it also goes a bit too far.


> As doom and gloom as things are generally, I do think things have gotten better.

The question isn't "are companies making some security improvements?". That's one-sided. The question is "are companies making security improvements FAST ENOUGH to deal with the increased risks?"


And this is precisely why so many of these devices should not be connected to the Internet.

Things like Internet-connected central heating seem absolutely insane to me, yet people look at me like I'm crazy when I say so. Do you really want your home's heating entirely controlled by a publicly accessible device that will likely never be upgraded when security issues arise?


You should either implement over-the-air updates or not connect your device to the network at all.


That doesn't help when the company behind the device disappears or stops supporting the device. Or is hacked to convert all the devices they manufactured into a botnet.


The problem, of course, is that many of these devices are eager to connect to the Internet so they can push often user-hostile updates.


> The only practical defense is for these frontier models

Another practical defence for many of these devices would be to just disconnect them... I feel like an old man yelling at a cloud, but too much is connected to the Internet these days.


It can be easier to hack a device and patch it than to determine which device it even is. This is nearly always true for non-technical users, but it is true for most technical people as well. Many of the devices in people's homes that aren't being actively patched are not that old!


Why doesn't this ATM tell me my balance anymore? Oh, we implemented creata's advice.

Why didn't this smartboard tell me my plane was delayed? Oh, we implemented creata's advice.

Ad nauseam.


> If the write ups are any useful, it generally appears here or reddit and I often link back those discussions in the articles

Totally agree, I do the same as well on my site; e.g.: https://anil.recoil.org/notes/tessera-zarr-v3-layout

There are quite a few useful linkbacks:

- The social URLs (Bluesky, Mastodon, Twitter, LinkedIn, HN, Lobsters, etc.) are just keys in my YAML frontmatter

- Then there's standard.site which is an ATProto registration that gets an article into that ecosystem https://standard-search.octet-stream.net

- And for longer articles I get a DOI from https://rogue-scholar.org (the above URL is also https://doi.org/10.59350/tk0er-ycs46) which gets it a bit more metadata.
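For illustration, the frontmatter approach in the first bullet might look roughly like this (the key names here are my guesses, not necessarily the ones the site actually uses, and the `...` link IDs are placeholders):

```yaml
title: Tessera's Zarr v3 layout
# hypothetical key grouping per-network discussion links
syndication:
  hn: https://news.ycombinator.com/item?id=...
  lobsters: https://lobste.rs/s/...
  mastodon: https://mastodon.social/@.../...
```

A static site generator can then iterate over the `syndication` map to render a "discuss this elsewhere" block per article.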

On my TODO list is aggregating all the above into one static comment thread that I can render. Not sure it's worth the trouble beyond linking to each network as I'm currently doing, since there's rarely any cross-network conversations anyway.


Damn, I got a bunch of ideas around ATProto from this comment. Also found your blog. I wish digging out human-written blogs wasn't such a chore. I like the idea of blogs, but their discoverability sucks big time.


I like Kagi's small web initiative to help people find personal sites: https://blog.kagi.com/small-web-updates


From another comment below, it's just a nice short title to convey that we're going back in time and not one to set your watch by.

    We first submitted the article to the CACM a while ago.
    The review process takes some time and "Twelve years of
    Docker containers" didn't have quite the same vibe.
(The CACM reviewers helped improve our article quite a bit. The time spent there was worth it!)


Makes sense! Thanks for working on it -- truly a wonderful paper!


cool! What services have you shipped as unikernels? Docker doesn't have to be an alternative; it can help with the build/run pipeline for them too: https://www.youtube.com/watch?v=CkfXHBb-M4A (Dockercon 2015!)


Mostly finance stuff, and all the sensitive stuff that comes with it.

But the main benefit is the attack surface is greatly reduced when running a unikernel. Also we use way less resources and get really good perf.


> but omission from the article stands out.

(article author here)

Apple containers are basically the same as how Docker for Mac works; I wrote about it here: https://anil.recoil.org/notes/apple-containerisation

Unfortunately Apple managed to omit the feature we all want that only they can implement: namespaces for native macOS!

Instead we got yet another embedded-Linux-VM which (imo) didn't really add much to the container ecosystem except a bunch of nice Swift libraries (such as the ext2 parsing library, which is very handy).


> I don't think SLIRP was originally for palm pilots, given it was released two years before.

That's a mistake indeed; "popularised by" might have been better. Before my beloved PalmPilot arrived one Christmas, I was only using SLIRP to ninja Netscape and MUD sessions onto a dialup connection, which wasn't a very mainstream use.


Those are global to the machine; that's generally not an issue, and seccomp rules can filter out syscalls that would be undesirable for other containers. But GPU kernel/userspace driver matching has been a huge headache; see https://cacm.acm.org/research/a-decade-of-docker-containers/... in the article for how the CDI is (sort of) helping standardise this.
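As a rough illustration of the seccomp filtering I mean, a custom Docker seccomp profile can deny specific syscalls while allowing the rest (a minimal sketch; real profiles like Docker's default are allowlists with far more structure, and the syscalls chosen here are just examples):

```json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["keyctl", "add_key", "request_key"],
      "action": "SCMP_ACT_ERRNO"
    }
  ]
}
```

It gets applied per container with `docker run --security-opt seccomp=profile.json ...`, so the machine-global facility stays reachable only where you want it.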


We've given up on native Windows containers in OCaml after trying to use them for our CI builds for many years. See https://www.tunbury.org/2026/02/19/obuilder-hcs/ for our recent switch to HCS instead. Compared to Linux containers, they're very much a second-class citizen in the Microsoft worldview of Docker.


Docker broke out the build layer into a separate component called BuildKit (see HN discussion recently https://news.ycombinator.com/item?id=47166264).

However, Dockerfiles are so popular because they run shell commands and permit 'socially' extending someone else's shell commands; tacking commands onto the end of someone else's shell script is a natural process. /bin/sh is unreasonably effective at doing anything you need to a filesystem, and if the shell exposes a feature, it has probably been used in a Dockerfile somewhere.
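That "social extension" pattern is just starting FROM someone else's image and appending your own shell commands; for example (image tag and commands purely illustrative):

```dockerfile
# Start from a stranger's published image...
FROM nginx:alpine

# ...and tack your own /bin/sh commands onto the end of their work.
RUN apk add --no-cache curl \
 && echo "my extra tweak" > /usr/share/nginx/html/extra.txt
```

Each RUN line is a fresh layer, so extending an image never requires understanding or editing the upstream build.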

Every other solution, especially the declarative ones, tends to come up short when _layering_ images quickly and easily. However, I agree they're good if you control the entire declarative spec.


An extremely random fact I noticed when writing the companion article [1] to this (an OCaml experience report):

    "Docker, Guix and NixOS (stable) all had their first releases
    during 2013, making that a bumper year for packaging aficionados."
Now we get coding agent updates every week, but has there been a similar year since 2013 where multiple great projects all came out at the same time?

[1]: https://anil.recoil.org/papers/2025-docker-icfp.pdf


hg and git came weeks apart, and Fossil shortly after, if that counts.


TBH I feel as if only docker belongs in that list. Guix and nix have users, sure, but not remotely like docker.


Yeah they are way better than docker for packaging


Why is docker used by far the most, then?


Laziness


Docker got popular because it had better DX (better tooling); it was like a super lightweight VM (and initially people really wanted to put init and SSH into containers).

It was easy but powerful: it's not just packaging, it's also a very basic deployment system (docker ps), and that better DX enabled a relatively foolproof cross-platform develop-deploy loop.

