I love having a relationship with the lowest levels of our craft. Access to an electron microscope and decapping chips to make my own reimplementations (in software) is next on my bucket list. If chip lithography weren't so prohibitively expensive I'd also try my hand at that...
Very cool! I meant more hands-on DIY fab, like how I make my own PCBs. It seems farming out my designs to be produced (HDL -> chip) at reasonable one-off costs is a plausible avenue now, which is exciting as well. I've probably been exposed to too many toxic chemicals already anyways and should readjust my bucket-list plans...
Upvotes for apricot and zrobotics for thoughtful shared experiences.
One of the continual battles I kept losing was introducing an undergraduate assembly language course. Higher-up colleagues and deans would say... too hard... nobody uses that anymore... and shut the course down. But I would always sneak it into other courses I taught: systems programming, computer languages, computer architecture. Still, I've always felt there was a hole in my students' understanding of computers.
I grew up in a time when assembly language was part of the curriculum. It helped bridge the gap between higher-level languages like C/C++ and the machine: why certain language features exist, how many language constructs actually work, and, more importantly, as pointed out by the two posters above, it gives you a way to think, one asm line at a time, about what is going on in the CPU ecosystem. That is fantastic training!
Even though I kept losing the assembly language course battles, I hope I planted enough seeds that students will take it up on their own at some point. Everyone should learn to program in at least one assembly language.
Man you should share your story. I got through a few Linux device driver labs but the more I read the less I understand. Even the keyboard driver or the tty driver are thousands of lines long.
I don’t know how people managed to write graphics card drivers back in the day. In the '80s it would have been all assembly code too, I think.
They are more black magic than the non-driver kernel components. I can at least understand the concept of kernel components such as VFS/Scheduler and read legacy kernel code without too much trouble, but drivers, even those in Linux 0.12 back in 1991, are crazily hard for me.
This piece of software has been my go-to for transcribing music for all of the instruments I play for the past 25 years. I can't recommend it enough. It has been pivotal in making me a better musician. Works on Linux, Mac, and Windows.
I've also been using Transcribe for over a decade and can vouch for how great it is for learning music and transcribing by ear. Irreplaceable for my workflow.
Very personal counterpoint: I find Stross's writing extremely bland, contrived, and badly paced.
I really, really disliked Accelerando in particular, finding it completely vacuous; the sciencey namedrops are self-aggrandising and read like attempts at reader flattery, the entire plot is telegraphed, and the characters are generic and perfectly forgettable.
It came recommended by several friends, and I only got through the whole ordeal because whenever I said "well, I'm about there and it doesn't click," they answered "no spoilers, just a dozen more pages and you'll see!"
Not a critique; again, this is just my personal experience of it. If people enjoyed it, more power to them.
Not LaTeX. Flux has its own grammar. It tokenizes Unicode math symbols like ² directly into AST nodes.
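To illustrate the idea (not Flux's actual grammar — the token and node names below are hypothetical), a tokenizer can map a superscript codepoint straight to an exponent node instead of first rewriting it as `^2` text:

```python
# Hypothetical sketch: a Unicode superscript becomes an AST node directly.
# Var/Num/Pow and parse_atom are illustrative names, not Flux's real classes.
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Num:
    value: int

@dataclass
class Pow:
    base: object
    exp: Num

SUPERSCRIPTS = {"²": 2, "³": 3}  # '²' is U+00B2, '³' is U+00B3

def parse_atom(src: str):
    """Parse one variable, optionally followed by a superscript digit."""
    base = Var(src[0])
    if len(src) > 1 and src[1] in SUPERSCRIPTS:
        # The superscript character itself carries the exponent value.
        return Pow(base, Num(SUPERSCRIPTS[src[1]]))
    return base

print(parse_atom("x²"))  # Pow(base=Var(name='x'), exp=Num(value=2))
```

The point is that the lexer never produces an intermediate `^`/`**` token; the symbol itself is the grammar.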
The shell doesn't talk to the LLM directly. They're separate processes. Alexitha monitors system state via cgroup events and adjusts scheduler weights. Flux is just the user-facing shell. They're connected through Tenet (the scheduler), not through a direct pipe.
Yes, the LLM is swappable. Alexitha is currently a fine-tuned 7B model, but the interface is not model-specific. Any model that can read a cgroup event stream and output a scheduling decision can be slotted in. I'm planning to test with smaller models (1-3B) to reduce boot overhead.
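A minimal sketch of what such a model-agnostic contract could look like — the class and field names here are my own illustration, not Alexitha's real API — is just "event in, decision out," so anything implementing that shape can be swapped in:

```python
# Hypothetical sketch of a model-agnostic scheduling interface: any object
# mapping a cgroup event to a scheduling decision satisfies the protocol.
# All names are illustrative, not the project's actual API.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class CgroupEvent:
    cgroup: str           # e.g. "/sys/fs/cgroup/user.slice"
    mem_pressure: float   # PSI-style pressure reading, 0.0-1.0

@dataclass
class SchedDecision:
    cgroup: str
    weight: int           # e.g. a cpu.weight value, 1-10000

class SchedulerModel(Protocol):
    def decide(self, event: CgroupEvent) -> SchedDecision: ...

class TinyHeuristicModel:
    """Stand-in for an LLM: any object with .decide() fits the protocol."""
    def decide(self, event: CgroupEvent) -> SchedDecision:
        # Throttle cgroups that report high memory pressure.
        weight = 50 if event.mem_pressure > 0.5 else 100
        return SchedDecision(event.cgroup, weight)

model: SchedulerModel = TinyHeuristicModel()
print(model.decide(CgroupEvent("/sys/fs/cgroup/demo", 0.8)))
```

Swapping the 7B model for a 1-3B one then only means replacing the object behind the protocol, not changing the scheduler plumbing.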
How easy do you find Unicode input? Isn't "x^2" or "x**2" (Python) much easier to type than "x²"? In the latter case, I have to look up the char code for ², which happens to be U+00B2 ("Superscript Two").
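For what it's worth, since ² is the single codepoint U+00B2, languages with escape syntax let you write it without hunting for the key:

```python
# '²' is U+00B2 (SUPERSCRIPT TWO); a \u escape sidesteps the keyboard lookup.
print("x\u00b2")   # x²
print(ord("²"))    # 178, i.e. 0xB2
```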