Speaking of portability: I'm a developer who has shipped software on Windows for over a decade, and then some on Linux. Targeting Windows is insanely easy because of the stable ABI. You compile once and there's an extremely high chance it just works on every Windows version. Not perfect, but better than any other platform ever made. Heck, I've used software from CD-ROMs where the binary was compiled 20 years ago, and it still works today without any modification.
With Linux, you have to target specific distros, do something insane like a giant bundle of everything, or static linking, or some other craziness, or open up your source code and let someone else take the headache. Oh, and I almost forgot: install scripts that detect distros and install dependencies. And god help you if you need to ship a kernel module.
>The ecosystem was not won on technical merit. OEM per-processor licensing, embrace-extend-extinguish against Java and the web, document format lock-in, and a long pattern of obstructing standardization attempts that would constrain Windows (PWI in 1994, ECMA-234 in 1995, OpenDocument later) while pushing their own through when it extended reach.
Windows has broad hardware compatibility, a stable enough application platform (see above), aggressive backward compatibility, a large developer ecosystem, and distribution through OEMs. Those are technical merits, even if they are not the only merits.
>do something insane like a giant bundle of everything, or static linking
But isn't this exactly what shipping on Windows looks like? I've just checked my Windows partition and there are 43 instances of sqlite dll and 16 instances of Qt5Core.dll because every program that uses those libs needs to include them in their "giant bundle of everything".
The issue on Linux is that the distro's package manager decides which versions of shared libraries exist system-wide, and this works well when you install everything through the package manager. Windows SxS is specifically designed to let multiple incompatible versions of the same shared component coexist, without forcing the entire Windows install onto a single version.
But okay, I accept your point. However, I'd like to point out that "the OS allows you to do something in multiple ways" is different from "this is the only way to do it".
> The issue on Linux is that the distro's package manager decides which versions of shared libraries exist system wide, and this works well when you install everything through the package manager.
Linux takes the lead here: write code that depends directly on the interfaces `kernel32.dll` exposes and you're in a world of hurt, whereas the Linux kernel's syscall ABI is kept stable ("don't break userspace").
The problem pointed out is a distro, library compatibility, packaging, or sandboxing problem, not a Linux problem.
> Windows SxS
Now that's one very good Windows idea.
Nothing should prevent your favourite packaging/sandbox tool from presenting a facade in which the file system has some specific files (your specific versions of libraries) layered over some more generic files (say, Flatpak: the freedesktop SDK; Steam Pressure Vessel: the Steam Runtime) layered over some even more generic files (your actual distro libraries).
On the other hand, almost _nobody_ and _nothing_ should be touching "libraries" or "utilities" or whatever on my base system!
>The problem pointed out is a distro, library compatiblity, packaging, or sand-boxing problem, not a Linux problem.
Are you suggesting Windows users switch to Linux and not use a popular distro that can provide the software they need? Otherwise, it's simply a pedantic argument.
>Nothing should prevent your favourite packaging/sandbox tool to present a facade that the file system has some specific files (your specific version of libraries) over some more generic files (say, Flatpak: freedesktop SDK, Steam Pressure Vessel: Steam Runtime) over some even more generic files (your actual distro libraries).
If you introduce a new library in facade 2.0, it's not going to work in facade 1.0. You can backport, but how many versions are you realistically going to support indefinitely? It's a good idea, but it doesn't solve the full problem.
>I've just checked my Windows partition and there are 43 instances of sqlite dll and 16 instances of Qt5Core.dll because every program that uses those libs needs to include them in their "giant bundle of everything".
Ouch, I got a temporary headache just trying to read and comprehend the Windows mess you describe here.
You can also simply use Flatpak with the Freedesktop Runtime. It runs everywhere regardless of the distribution. For games Steam offers something similar with the Steam Runtimes. You simply develop for that one container and the software will still be running in 20 years. Even though, of course, making software proprietary isn’t best practice. If you make everything open source from the start the various Linux distributions and users can adapt it themselves for their distribution and eventually modernize it as well.
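As a rough sketch of what "develop for that one container" means in practice, here is a minimal flatpak-builder manifest pinning the Freedesktop runtime. The app id, module name, and build commands are invented for illustration, and the runtime version shown may not be the latest:

```yaml
# Hedged sketch of a flatpak-builder manifest (hypothetical app "org.example.MyApp").
app-id: org.example.MyApp
runtime: org.freedesktop.Platform
runtime-version: '24.08'     # the pinned Freedesktop runtime branch
sdk: org.freedesktop.Sdk
command: myapp
modules:
  - name: myapp
    buildsystem: simple
    build-commands:
      - install -Dm755 myapp /app/bin/myapp
```

Every user gets the same runtime regardless of distro, which is exactly the "one container" target the comment describes.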
EOL doesn't mean you can't install or use it anymore. It simply means you shouldn't use it anymore because security updates are no longer available. You face the same problem with Windows software that is no longer updated. The only difference is that no one tells you it's no longer supported.
If my employer still paid me the same salary, I couldn't care less about the license. I'm all for outsourcing this problem to the package managers.
> With Linux, you have to target specific distros, do something insane like a giant bundle of everything,
This is what you do for Flatpak, Steam, or Docker. All of these are popular options.
> Oh and I almost forgot.. install scripts that detect distros, install dependencies.
Most distros offer tooling to make packages for their package managers. With them you declare the dependencies you want and the package manager does the rest.
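To make "declare the dependencies" concrete, here is an abbreviated Debian-style `debian/control` binary-package stanza (the package name and dependency list are made up; a real control file also needs a source stanza, maintainer, etc.):

```
Package: myapp
Architecture: amd64
Depends: libsqlite3-0, ${shlibs:Depends}
Description: Example app packaged for the distro's package manager
 The package manager resolves and installs the declared dependencies;
 ${shlibs:Depends} is filled in automatically from the linked libraries.
```

The equivalent spec files for RPM-based distros (`.spec` with `Requires:`) follow the same declare-and-let-the-tool-resolve pattern.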
> And god help you if you need to ship a kernel module.
The right way to do it is to open source it and let the installer compile the software against the kernel headers. Sysdig and VirtualBox do that.
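One common mechanism for this is DKMS, which rebuilds the module from source against the installed kernel headers on every kernel update. A minimal `dkms.conf` sketch (module name and version are hypothetical):

```
PACKAGE_NAME="mymodule"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="mymodule"
DEST_MODULE_LOCATION[0]="/kernel/extra"
AUTOINSTALL="yes"    # rebuild automatically when a new kernel is installed
```

With the source tree under `/usr/src/mymodule-1.0/`, `dkms install mymodule/1.0` builds and installs the module for the running kernel.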
>This is what you do for Flatpak, Steam, or Docker. All these are popular options.
Yes, Flatpak is decent, but it's a separate runtime with its own sandbox and permissions that can sometimes make things awkward, e.g. an IDE reaching compilers installed outside of Flatpak, or those compilers accessing project files inside the sandbox. But yes, it's nice when it works.
Docker is the nuclear option. Fundamentally, I don't see Docker as a good and legitimate way to ship software (I already ranted about the giant bundle of everything approach!). I can also image my entire dev box as a VM and ship software that way too :P
I'm eternally confused by what an ABI actually is, especially now that people say the Win32 API is the stable Linux ABI.
Genuine question: what is the difference? The two terms seem to be conflated.
A library with a stable ABI means that newer releases of that library do not break compatibility with old binaries linked against it. This is why old Windows apps still work on newer OSes.
For example in a typical C library, as long as you don't rename or remove any existing functions/exports (or change their signatures), you can continue to add new ones over time without breaking forwards compatibility (old binaries can link/call into the newer library and still work). This is also important for security reasons and not just application compatibility.
Usually projects will only make ABI-stable changes within minor versions, and leave the breaking changes to major versions where upgrading or recompiling the original application becomes necessary.
For C++ this is more complicated because it's all compiler-dependent (no language-defined ABI), and with classes you typically can't re-arrange anything (like class members) without breaking compatibility.
The Linux (and BSD) ecosystems are not geared for shipping binary-only software. Everything is designed for distributions to package software _from source_, so there's no stable ABI.
Every single program has to write logic to parse/store/query/validate those values. A common API with a single store can be type-enforced, backed up, and likely easier to work with from an internationalization perspective.
I do like dotfiles for portable apps where everything the program needs is in one folder. Personally, my need for portable apps has gone down year on year.
I like to think of LLMs as idiot savants. Exceptional at certain tasks, but might also eat the table cloth if you stop paying attention at the wrong time.
With humans, you can kind of interview/select for a more normalized distribution of outcomes, with outliers being less probable, but not impossible.
Yes, you're correct. To add: companies don't fundamentally care about the things we like to think of as "nice things" - good design, lack of dark patterns, robust security architecture, minimizing technical debt, etc.
If customers cared about reputational damage from cybersecurity incidents (sure, some do), then you would see that reflected in their priorities. Also, non-technical customers don't really know whom to blame for security anyway. They'll just blame the OS vendor or other random parties even if it's the application that is insecure.
There are various types of triggers for gene activation: some genes turn on/off all the time (housekeeping), some follow the circadian rhythm, some are immediate-response, some are specific to particular phases of cell division, some are persistently on, etc. Not sure what type of chart you're looking for.
Thanks. Those modal categories of activation are a great start for organizing a visualization. I wonder what sort of patterns would show up. For example, what role does placement on a specific chromosome play (if any!) in determining whether a gene is periodic, reactive, systemic, developmental, etc.?
> Not sure what type of chart you're looking for.
Just geek curiosity.
This is true when fresh college grads are building stuff. Experienced engineers know how to build things much more efficiently.
Also, people like to fantasize that their project, their API, their little corner of the codebase is special and requires special treatment, and that you simply can't copy the design of someone much more experienced who already solved the problem 10 years ago. In fact, many devs boast about how they solved (re-solved) that complex problem.
In other domains, professional engineers (non-SWE) know that there is no shame in simply copying the design of a bridge that is still standing after all those years.
BTW - UAC is not a security boundary, so a UAC bypass is not the same as privilege escalation, there is no bounty for it, etc. It's a common misunderstanding, probably in no small part due to Microsoft's own lack of communication around it.