Uh, there is a list named "linux-distros" for exactly this purpose (and I think it covers more than just Linux; e.g. I believe it was used for the xz vuln).
Given this was announced when backports weren't ready (and given the POC was at least opaque if not obfuscated), I'm getting the vibe fixing the vuln wasn't as high a priority as making a media splash.
> Note that for Linux kernel vulnerabilities, unless the reporter chooses
> to bring it to the linux-distros ML, there is no heads-up to
> distributions.
So, no, the `linux-distros` list doesn't solve the problem.
But if you seek to replace coreutils (as at least seems to be the case with Canonical), rather than just be another POSIX userland implementation (e.g. busybox), then I would suggest you do need to be bug-compatible. I can apt/dnf/apk install busybox and use that for my user rather than coreutils, but given that a significant amount of Linux infrastructure (including likely many personal scripts) is tied to coreutils, the bar is much higher. Given the numerous quality issues Canonical has had, not just with Ubuntu but with their other "commercial" tooling, I'm not sure any rewrite/port, written in rust or otherwise, that Canonical develops, manages, or is even associated with can meet that bar.
As someone who prefers BSD, my goal would be to become something reasonably popular on Linux that isn't gratuitously different, precisely to force less reliance on the GNUisms in its core utils. Nothing wrong with GNUisms on the command line, but there are a lot of GNU assumptions in scripts that should be portable.
But are the current uutils developers the same as the 2013 developers? At least based on GitHub's graphs, that's not the case (it looks fairly bimodal to me), and so it wouldn't be unreasonable to treat the 2013-era project differently to the 2020-era project. So judging the 2020-era project for its current and ongoing failures does not seem unreasonable.
Similarly, sudo-rs dropping "legacy" features leaves a bad taste in my mouth. There are already multiple privilege escalation tools (doas being the first that comes to mind), and building something better without claiming the "sudo" name (instead providing a compat mode, a la podman for docker) would to me seem a better long-term path than causing more breakage (and as uutils shows, breakage in "core" utils can very easily lead to security issues).
I personally find uutils' lack of care concerning because I've been writing (as a very low priority side project) a network utility in rust, and while it isn't aiming to be a drop-in rewrite of anything, I would much rather not attract the same drama.
doas and sudo-rs occupy different niches: doas aims for extreme minimalism and deliberately sacrifices even more compatibility than sudo-rs, which represents a middle ground.
On revocation, check out https://bugzilla.mozilla.org/buglist.cgi?product=CA%20Progra...
I don't think there's a CA that hasn't had an issue with revocation at some point (e.g. Let's Encrypt had a major one in 2021, and refused to revoke), which is why Let's Encrypt is moving to 7-day certs (so that revocation basically isn't required; see https://www.imperialviolet.org/2011/03/18/revocation.html, which is mentioned in the article). My impression is that CRLs (and by implication current revocation methods) don't work, and browsers are effectively fudging around CA failures with custom mechanisms (e.g. allowing existing certs but no new certs from distrusted CAs).
I'm no security expert, but modern bind9 seems to handle DNSSEC with no issues when I've used it, and given that the "WebPKI" seems to be becoming more and more reliant on custom browser code, adopting DANE outside browsers might not be the worst idea.
According to https://stats.labs.apnic.net/dnssec DNSSEC validation is sitting at about 1/3, so "very few" isn't accurate. I'm not suggesting browsers should change what they do, but if WebPKI can't be used, building a new CA ecosystem would seem to me to be at least as hard as getting DANE working.
My impression was that autoupdate was not the default because the devices it runs on only have so many resources, and there's a non-trivial chance of bricking the device (given how many devices are supported)? It's not like other vendors are doing any better in this space (and I've seen enough things in the "IoT/embedded" space brick themselves with updates to be a bit wary of autoupdates).
Auto-update is also a bad idea unless you can make it really secure, which is hard to do on devices so constrained they don't even have a clock to keep track of what day it is to judge whether a certificate is still valid.
Minimizing the chance of bricking the device with an automatic update requires at a minimum having two copies of the OS, so that the running copy isn't trying to modify itself and can remain as a fallback in case of a broken update. That's not too challenging these days now that most routers are using NAND flash, but for a long time it was common to use very small NOR flash modules with the absolute minimum capacity.
Updates don’t currently have a way to ensure that user installed packages have their configurations updated appropriately, so user installed packages may break on update. Additionally, as a sibling comment pointed out, official images don’t include user packages, so you’d either need a scalable way to build custom images or the updater would need to be smart enough to reinstall packages after update.
It would still be nice to have an official automatic update feature that is opt-in for stock systems.
You also need to rebuild the firmware with the installed packages. Otherwise you end up without your packages installed. That requires a server to build the firmware for your device. Doing this automatically for everyone is resource intensive.
They have the tools and infrastructure for assembling custom firmware images on-demand, and have recently added it to the default images, so they must feel like their infrastructure is ready for significantly increased demand.
I use attended sysupgrade. I've been using OpenWrt for the last 7 years, but I've noticed that attended sysupgrade often fails at release time. And there are often point releases shortly after. I'm just skeptical that their infrastructure would handle mass auto updates at release time. I usually wait a few weeks after release, until the masses have reported various device-specific bugs, before I upgrade.
I’ve tried it in the past. This was a few years ago, so it’s possible it’s changed since then. But the reason I’m not choosing it for myself today is that it relies on either Sign in with Google (fine) or magic links to verify the user. I really don’t want to manage email delivery for this project, which is admittedly a stubborn personal choice. It just adds a lot of complexity that I don’t care to spend time on for hobby projects.
Then why rewrite coreutils in rust? TOCTOU isn't exactly some new concept. Neither are https://owasp.org/Top10/2025/ (most of which a good web framework will prevent or mitigate), and switching to rust won't (as far as I know) bring you a safer web framework than django or rails.
1. Rust is a much more pleasant language to work with.
2. You can improve the tools, adding new features, fixing UX paper cuts etc.
You're probably thinking "you can improve the GNU versions!" and in theory sure. But in practice these sorts of tools are controlled by naysayers who want everything to stay as it was in the 80s. The sorts of people that only accept patches via git send-email to a mailing list.
Hahaha I just looked up GNU Coreutils and not only do they blame poor UX on the user ("Often these perceived bugs are simply due to wrong program usage.") but they even maintain a list of rejected feature requests:
Another maintainer and I follow issues and pull requests on a GitHub mirror. But email works fine for us and many other projects.
Regarding poor UX, it is difficult to respond to that claim without a specific example. Note that a lot of the features we support are standardized by POSIX. Even if we dislike the behavior, it is better to comply with the standards so the programs don't behave differently than users expect. The sentence you quote isn't meant to put down users. These programs are often much more complex than meets the eye, and there are lots of common gotchas that people have run into (and will continue to do so) [1].
Of course we would love for these programs to be useful for everyone. However, feature requests are often incompatible with existing behavior, incompatible with other feature requests, or have existing functionality elsewhere. For those reasons we cannot accept every feature request.
Have you used busybox? The BSDs? I'm not sure adding more features to coreutils is a major help, and given rust-coreutils/uutils has:
1) more CVEs between the two latest Ubuntu releases than coreutils has had over the last 30+ years,
2) managed to break security updates, and
3) is fully compatible with neither POSIX nor coreutils,
I'm not sure why I'd ever use it? Sadly, projects like uutils have made me suspicious of rust projects, so unless I know that the project is well maintained (for which there are numerous examples, ripgrep being the obvious example, but newsboat, the various tools from proxmox, servo/firefox, and the pgrx ecosystem are ones I use regularly), it's a negative marker against that project.