> At that point, there's probably not much major input you can have into the core architecture or strategy
Sure you can? In this concrete case, Redis is very "flat" — there's the data structure implementations, and there's the commands that use them. 1+N. You could have feedback about the data structure (i.e. whether it's optimal for the use-cases); or about any of the commands (i.e. not just their impls, but also whether they're the best core API surface to lock in long-term, or even whether they're worth including at all.)
Any given feedback would necessitate fairly limited rework to address, as you're either modifying the data structure (and its tests) or a command (and its tests and docs.)
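To make the "1+N" shape concrete, here's a toy sketch of what that looks like (hypothetical names and simplified semantics, not actual Redis source): one core data structure, plus N commands that are thin wrappers over it. Feedback on the structure touches one class; feedback on a command touches one function.

```python
import bisect

class SortedSet:
    """The '1': the core data structure a PR like this would add."""
    def __init__(self):
        self._items = []  # kept sorted by (score, member)

    def insert(self, score, member):
        bisect.insort(self._items, (score, member))

    def range(self, lo, hi):
        return [m for s, m in self._items if lo <= s <= hi]

# The 'N': commands are thin shims over the structure, each independently
# reviewable (impl + tests + docs) without touching the others.
def cmd_zadd(db, key, score, member):
    db.setdefault(key, SortedSet()).insert(score, member)
    return 1

def cmd_zrangebyscore(db, key, lo, hi):
    ss = db.get(key)
    return ss.range(lo, hi) if ss else []

db = {}
cmd_zadd(db, "scores", 10, "alice")
cmd_zadd(db, "scores", 20, "bob")
print(cmd_zrangebyscore(db, "scores", 0, 15))  # ['alice']
```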
Fair point that there might be some functional changes you can suggest, but I continue to suspect that by the time this PR hit GitHub, all the most important decisions had already been finalized.
This also helps make sense of the concept of "Desk Accessories" (i.e. the things under the Apple menu in Macintosh System 1 — Lisa OS also had these.) Every Lisa OS "task" (there were no processes in the pre-emptive sense) was either a program running in the context of a document it manipulates, or a document-less accessory program running under some other task.
There was another Unix on the Lisa. I'd tell you more, but there's essentially nothing online about it, and the only guy with copies of it hasn't responded to my letter. (The company that made it dumped their Lisa Unix material on him when they went bust, because he sold Lisa-related stuff.) Other than that tidbit, I haven't found anything about it.
To be fair, the disk images are actually pretty tough to find. There've been a few times where I've spent a while looking for them because I lost the files on my computer and forgot where they were online. And that page isn't indexed on Google, so you really need to know where to look.
Accessible for development, sure. Developers and hobbyists are willing to pay $500 for an FPGA devkit, and that's been possible for a long time now.
But, more recently (last 10 years), we've seen increasingly low-LE (logic-element-count) FPGAs on increasingly-minimal FPGA breakout boards, with no educational subsidies required to make the boards cheap. There are FPGA boards you can play with for under $50 now; and some <10k-LUT FPGA BGA ICs themselves going for $10-$15. That's to the point that it's just "a thing you can choose to add" to a board you're designing, rather than something so precious that it's the constraint you're designing your board around.
"Going through things" isn't always necessary / is avoidable in some deployments. And 2.4GHz signals can propagate an okay distance between nodes if there aren't things to go through. (Globalstar's emergency SOS satellite constellation uses the n53 band, which is right above the 2.4GHz "wi-fi" band, and it propagates between handsets and LEO through 1400km of air just fine.)
So you could probably pull off a 2.4GHz mesh outdoors in rural areas? It'd be feasible in the same places a microwave or laser hilltop-to-hilltop link would be, but instead of "fast but point-to-point" it's "slow but meshed" (and with much larger tolerance for slop — you don't need to put everything on fixed masts so they have perfect line-of-sight; you can just stick them on the tops of trees or whatever, and if they wave in the wind it still works.)
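To put rough numbers on the "okay distance if nothing's in the way" claim, here's my own back-of-envelope link budget using the standard free-space path loss formula. The transmit power, antenna gains, and receiver sensitivity below are illustrative assumptions, not measurements of any particular hardware:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard formula; assumes clear line of sight)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Illustrative link budget for a 2.4GHz outdoor hop:
tx_power_dbm = 20          # 100 mW, a common regulatory ceiling at 2.4GHz
antenna_gain_db = 2 * 3    # modest 3 dBi antennas on both ends
rx_sensitivity_dbm = -120  # LoRa-style modulation trades data rate for sensitivity

for d in (1, 5, 10):
    margin = tx_power_dbm + antenna_gain_db - fspl_db(d, 2400) - rx_sensitivity_dbm
    print(f"{d} km: path loss {fspl_db(d, 2400):.1f} dB, link margin {margin:+.1f} dB")
```

With these assumptions the margin stays positive well past 10 km, which is why the "treetop nodes with sloppy line-of-sight" picture can work; the margin collapses quickly once buildings or foliage sit in the path, which is the "going through things" problem.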
Mind you, the authors' motivating use-case for the hardware seems to be their project (https://github.com/datapartyjs/MeshTNC) to (AFAICT) bridge LoRa (or some specific LoRa L2 protocol — Meshtastic, probably?) to packet radio, i.e. digital packet-switched signalling over amateur (HAM) radio bands.
In that context, trading away propagation range for higher throughput makes sense. Insofar as you're working with LoRa, and want to build and experiment with a bunch of site-local devices that mesh between themselves and interoperate with LoRa data-link protocols, you'd likely be speaking something like LoRa over 2.4GHz (LoRa itself doesn't spec a way to do that, but you could make it happen within the closed ecosystem of your own home/office.)
And in that context, you could use a MeshTNC device as something like a "LoRaLAN" router. It'd be something you'd keep somewhere central in your house (like a wi-fi router), plugged into power + an antenna (internal to your house, like a wi-fi router) and plugged into a packet-radio transceiver with its own even-bigger antenna, outside your house. (Like a wi-fi router being plugged into a gateway modem on its upstream WAN port.)
This MeshTNC device would then pick up signals from:
- regular LoRaWAN IoT devices and Meshtastic handsets in your building
- more custom devices in your building†, that you've built yourself, that use another MeshTNC module; where these other devices do their part of the meshing only on the 2.4GHz band, which means they don't need big fiddly external antennas like LoRa devices do, but can be quite compact
- and possibly, a separate bidirectional LoRa repeater (made from any existing "high-gain" LoRa module, i.e. the kind used in mains-powered LoRaWAN base stations) — which brings in LoRa mesh traffic from outside your building, and picks up and carries away "destined for elsewhere in this area" LoRa mesh traffic that your "LoRaLAN" device has emitted (either due to forwarding it from your 2.4GHz-only mesh handsets/devices, or due to forwarding it after receiving it from packet radio.)
Though keep in mind you only need that complexity for the 2.4GHz-only mesh devices, since there isn't an existing mesh to forward those packets over. But this whole setup is still also a regular LoRa mesh, and so you can still use regular LoRa (e.g. Meshtastic) handsets, and put out packets that make their way through your regional mesh, back to the packet-radio bridge in your building; and from there to who-knows-where.
† To be clear, the 2.4GHz mesh handsets would only work reliably inside your building (if the 2.4GHz antenna is inside your building); but knowing HAMs, half the point would be seeing how far away you could get from your house/office and have your 2.4GHz mesh handsets keep working. (You'd probably want to have a second MeshTNC "base station" with a building-external antenna to try that. Pleasantly, that doesn't complicate the topology; it's all still just mesh, so you can just drop that in.)
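The core behavior the "LoRaLAN" device above needs is dedupe-and-rebroadcast across its radio interfaces. Here's my own toy sketch of that logic (not MeshTNC's actual code; real mesh protocols like Meshtastic add hop limits, acks, and encryption on top):

```python
from collections import deque

class MeshBridge:
    """Toy flood-routing bridge between radio interfaces.

    Illustrative sketch only: a packet heard on any interface gets
    rebroadcast on all the others, with recently-seen packet IDs
    remembered so loops don't cause broadcast storms.
    """
    def __init__(self, interfaces):
        self.interfaces = interfaces    # name -> send callback
        self.seen = deque(maxlen=1024)  # recently seen packet IDs

    def on_receive(self, packet_id, payload, arrived_on):
        if packet_id in self.seen:
            return  # already forwarded once; drop the duplicate
        self.seen.append(packet_id)
        for name, send in self.interfaces.items():
            if name != arrived_on:
                send(packet_id, payload)

# Usage: a router bridging an indoor 2.4GHz mesh to LoRa and packet radio.
log = []
bridge = MeshBridge({
    "2.4ghz": lambda pid, p: log.append(("2.4ghz", pid)),
    "lora":   lambda pid, p: log.append(("lora", pid)),
    "packet": lambda pid, p: log.append(("packet", pid)),
})
bridge.on_receive(1, b"hello", arrived_on="2.4ghz")  # forwarded to lora + packet
bridge.on_receive(1, b"hello", arrived_on="lora")    # duplicate: dropped
print(log)
```

Because everything is "just mesh," adding the second building-external base station mentioned above is literally just one more entry in the interface table; no topology changes needed.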
It'd make a lot of sense to sandbox AF_ALG, then, wouldn't it? At least for userspace-driven invocations. Let the kernel keep its current code-path for kernel-driven invocations, but have the same source files also build a sandboxed form, to be invoked by the crypto-accelerating syscalls.
If these syscalls are used by userspace as rarely as you say, the performance impact of this kind of sandboxing wouldn't matter much. And maybe there could be a Kconfig/boot flag to switch back to using the un-sandboxed code path for userspace invocations too, for enterprises stuck with old software who really care about the performance.
---
My own thought process on how this could work below (but I'm not a kernel contributor, so you can probably immediately picture a design better than I can):
The naive way to do this, would be for the kernel build process to emit a separate AF_ALG userland IPC server as an additional build artifact; to get distros to package this IPC server as a component package of kernel packages; and to set up the sandboxed AF_ALG "kernel bridge" so that it proxies calls through to this IPC server if it exists, and errors out otherwise. (Basically like kfuse, except in this case the only "FUSE servers" are first-party.)
But that's a bit painful, organizationally. Puts a lot of work on the distro maintainers' shoulders, that they might just not bother doing. Prone to error. I think there are better alternatives.
1. Maybe the userland syscalls that rely on AF_ALG could instead ground out inside the kernel in a copy of AF_ALG that's been compiled to eBPF? Then that eBPF bytecode could just be embedded into the kernel.
2. Maybe the Linux kernel could consider a facility that would enable it to act as a hybrid microkernel (similar to macOS's XNU) — with arbitrary static sections of the kernel image/kernel modules [or perhaps standalone static ELF binaries embedded within kernel/kmod .data sections] being spawned not as supervisor-mode kthreads doing their own autonomous thing, but rather as unprivileged user-mode kernel threads, running as IPC-servers for the rest of the kernel to talk to?
- The rest of the kernel could talk to these "userspace kthreads" via some nonblocking IPC mechanism; but this mechanism wouldn't need to be exposed to userland the way macOS's XPC is; it could be kernel-to-kernel only (where these "userspace kthreads", despite being in userspace, are still fundamentally kernel threads, and so get to participate in it.)
- Also, these "userspace kthreads", when they're the active scheduled task, would have the kernel image's read-only sections [or their binary's sections, from within the kernel's .data section] mapped into their address space, since that's the binary they're executing against. But they wouldn't inherit [or the spawning mechanism would actively prune from their task struct] the rest of the kernel's mappings. So they'd have to either use the IPC mechanism, or use regular syscalls, to do anything with the kernel, just like any userspace task.
I don't see those eBPF or microkernel ideas as being particularly realistic! But there are some simple ways AF_ALG's attack surface could be reduced (as an intermediate step to disabling it entirely), like requiring CAP_SYS_ADMIN and/or limiting the algorithms to a specific list.
This is amusing to me. Is there a list of extra-naughty filenames? How invasive is the scan? If I create a new file with a cursed word, will it get locked into virus-scanner purgatory, or is the deep locking only for external media? Will it get mad if I mount a CD full of virus names?
I would point out that for the Pico (RP2040) and Pico 2 (RP2350) in particular, there's not only a datasheet, but also first-party tutorials that walk you through laying out a minimal reference board design.
These tutorials in turn contain links to KiCad schematics + layout files for the boards they walk you through developing, for you to check your work against.
(I imagine analogue RF board-level simulation is a lot more expensive than digital-logic board-level simulation. Might have been impractical way back when, such that we only used to have the digital-logic kind. But we certainly have both kinds today.)
(I know that installing apps on iOS forces installation of the equivalent watchOS apps; not sure if having a watchOS app installed/running/activating itself forces installation of a "companion" iOS app that it might rely on.)