Hacker News | renehsz's comments

Thank you for doing this! I really mean it. We need more developers who care about keeping websites lean and fast. There's no good reason a regular site shouldn't work on GPRS, except maybe if the main content is video.


It's mostly just text on a page. It should be snappy!


1GB RAM per 1TB storage is really only required if you enable deduplication, which rarely makes sense.

Otherwise, the only benefit more RAM gets you is better performance. But it's not like ZFS performs terribly with little RAM. It's just going to more closely reflect raw disk speed, similar to other filesystems that don't do much caching.

I've run ZFS on almost all my machines for years, some with only 512MiB of RAM. It's always been rock-solid. Is more RAM better? Sure. But it's absolutely not required. Don't choose a different file system just because you think it'll perform better with little RAM. It probably won't, except under very extreme circumstances.


ZFS doesn't really need huge amounts of RAM. Most of the memory usage people see is the Adaptive Replacement Cache (ARC), which will happily use as much memory as you throw at it, but will also shrink very quickly under memory pressure. ZFS really works fine with very little RAM (even less than the recommended 2GB), just with a smaller cache and thus lower performance. The only exception is if you enable deduplication, which will try to keep the entire Deduplication Table (DDT) in memory. But for most workloads, it doesn't make sense to enable that feature anyway.
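For what it's worth, on Linux with OpenZFS you can inspect the ARC and cap it yourself via the `zfs_arc_max` module parameter. A rough sketch (the 512 MiB cap is just an example value):

```shell
# Show the current ARC size and its target maximum (values in bytes).
awk '/^(size|c_max) / { print $1, $3 }' /proc/spl/kstat/zfs/arcstats

# Cap the ARC at 512 MiB on the running system (requires root).
echo $((512 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max

# Persist the cap across reboots.
echo "options zfs zfs_arc_max=$((512 * 1024 * 1024))" > /etc/modprobe.d/zfs.conf
```

Even without the cap, the ARC gives memory back under pressure; setting `zfs_arc_max` just makes the ceiling explicit.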


Which practically puts us back into standard time, as things should be


In my experience, writing a few lines to handle errors is really not as big of a deal as a lot of people make it out to be. However, I've seen numerous times how error handling can become burdensome in poorly structured codebases that make failure states hard to manage.

Many developers, especially those in a rush, or juniors, or those coming from exception-based languages, tend to want to bubble errors up the call stack without much thought. But I think that's rarely the best approach. Errors should be handled deliberately, and those handlers should be tested. When a function has many ways in which it can fail, I take it as a sign to rethink the design. In almost every case, it's possible to simplify the logic to reduce potential failure modes, minimizing the burden of writing and testing error handling code and thus making the program more robust.

To summarize, in my experience, well-written code handles errors thoughtfully in a few distinct places. Explicit error handling does not have to be a burden. Special language features are not strictly necessary. But of course, it takes a lot of experience to know how to structure code in a way that makes error handling easy.


SerenityOS serves as a cool side project for those who like to tinker with OS dev. I don't think it was "born" with any other goals in mind. Neither was their browser project; it just happened to turn into something a lot more serious.


This is big news. Defer can simplify control flow a lot, especially in early-return cases like handling errors. No longer do we have to write deeply nested ifs or the madness that is goto. Resource acquisition and cleanup can now be right next to each other.

Now we just have to hope that standardization goes well. The C standard moves very slowly, and that's probably a good thing. But defer is such a simple yet powerful feature that the cost/benefit ratio should easily justify its inclusion.


Thanks!

In fact, the standard reflects what is happening in the field. So, use `defer` where this is already possible, and ping your compiler vendor if yours does not have it yet. If there is enough demand, `defer` may even happen for the next standard release.


> maybe today you can still build a win10 binary with a win11 toolchain, but you cannot build a win98 binary with it for sure.

In my experience, that's not quite accurate. I'm working on a GUI program that targets Windows NT 4.0, built using a Win11 toolchain. With a few tweaks here and there, it works flawlessly. Microsoft goes to great lengths to keep system DLLs and the CRT forward- and backward-compatible. It's even possible to get libc++ working: https://building.enlyze.com/posts/targeting-25-years-of-wind...


What does "a Win11 toolchain" mean here? In the article you link, the guy is filling in missing functions, rewriting the runtime, and overall doing even more work than what I need to do to build binaries on a Linux system from 2026 that would work on a Linux from the 90s: a simple chroot. Even building gcc is a walk in the park compared to reimplementing OS threading functions...


Strongly agree with this article. It highlights really well why overcommit is so harmful.

Memory overcommit means that once you run out of physical memory, the OOM killer will forcefully terminate your processes with no way to handle the error. This is fundamentally incompatible with the goal of writing robust and stable software which should handle out-of-memory situations gracefully.

But it feels like a lost cause these days...

So much software breaks once you turn off overcommit, even in situations where you're nowhere close to running out of physical memory.

What's not helping the situation is the fact that the kernel has no good page allocation API that differentiates between reserving and committing memory. Large virtual memory buffers that aren't fully committed can be very useful in certain situations. But it should be something a program has to ask for, not the default behavior.


>terminate your processes with no way to handle the error. This is fundamentally incompatible with the goal of writing robust and stable software

Assuming that your process will never crash is not safe. There will always be freak things like CPUs taking the wrong branch or bits randomly flipping. Part of designing a robust system is being tolerant of things like this.

Another point also mentioned in this thread is that by the time you run out of memory, the system is already going to be in a bad state, and now you probably don't have enough memory to even get out of it. Memory should have been freed already, either by telling programs to lighten up on their memory usage or by killing them and reclaiming the resources.


It's not harmful. It's necessary for modern systems that are not "an ECU in a car"

> Memory overcommit means that once you run out of physical memory, the OOM killer will forcefully terminate your processes with no way to handle the error. This is fundamentally incompatible with the goal of writing robust and stable software which should handle out-of-memory situations gracefully.

Big software is not written that way. In fact, writing software that way means you will have to sacrifice performance, memory usage, or both, because you either:

* need to allocate exactly what you need at all times and free it when it gets smaller (if you want to keep the memory footprint similar), which will add latency, or
* over-allocate, and waste RAM.

And you'd end up with MORE memory-related issues, not fewer. Writing an app where every allocation can fail is just a nightmarish waste of time for 99% of the apps that are not "the onboard computer of a spaceship/plane".


> What's not helping the situation is the fact that the kernel has no good page allocation API that differentiates between reserving and committing memory.

mmap with PROT_NONE is such a reservation and doesn't count towards the commit limit. A later mmap with MAP_FIXED and PROT_READ | PROT_WRITE can commit parts of the reserved region, and mmap calls with PROT_NONE and MAP_FIXED will decommit.


That's a normal failure state that happens occasionally. Out-of-memory errors come up all the time when writing robust async job queues. There are a lot of other reasons a failure could happen; running out of memory is just one of them. Sure, I could force the system to use swap, but that would degrade performance for everything else, so it's better to let it die, log the result, and check your dead letter queue afterwards.


Even setting aside the aforementioned fork problems, disabling overcommit doesn't mean you can handle OOM correctly just by handling errors from malloc!


There's still plenty of mandatory reading. It's not unusual for high schoolers to have to read at least two books per semester. Here's the problem though: it's just too easy to... you know... not do it. Teachers have no way of reliably telling the difference between students who complete their reading assignments honestly and those who make do with summaries and AI assistance. Don't ask me how I know ;-)

