Yeah, I've seen this in more than a few places. There was a blog "running on a Wii" that, IIRC, was doing the same thing.
On the one hand I get it, TLS is pretty heavy, and it makes sense to take advantage of a VPS or Cloudflare or however you want to do it.
But once you are spinning up a VPS, the question is ... why the Pi? The VPS in the article has less RAM, but more storage. If you're already doing TLS termination on the VPS (the most RAM intensive part), you might as well just do the whole shebang there.
I know this is all for fun, I'm just wondering -- is the Pi Zero really too slow to handle TLS, especially with an optimized TLS library? In this setup, the Pi is already being directly exposed to the Internet anyway; there's no VPN being used. That ARM11 isn't "fast", but surely a 1 GHz ARM11 can handle an optimized TLS library serving some subset of TLS 1.2.
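If anyone wants to actually measure it, OpenSSL ships a rough single-core benchmark you could run on the Pi itself; treat it as a sanity check (results vary a lot with the OpenSSL build and compiler flags), not a verdict:

    # handshake-ish asymmetric ops, then bulk record encryption
    openssl speed rsa2048 ecdsap256
    openssl speed -evp aes-128-gcm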
The TLS termination isn't actually on the VPS. The article details that Tierhive has an HAProxy edge service (handling the TLS), which then has the VPS as the backend, but that VPS is just doing TCP proxying with socat to the DDNS-exposed home server FQDN. Feels like a lot of unnecessary loops. Kinda fun I guess, but... why?
Yes it is, "we plan to use our external VPS for handling the TLS termination". Edit: Ah I see you are just pointing out termination is on haproxy service not VPS. Thought you were implying it was terminating on pi, my apologies.
The VPS is running socat only and just doing TCP forwarding. There is a shared HAProxy, also run by the same host, sitting in front of the VPS and handling the TLS. I encourage you to read the article fully. They probably should have said "VPS provider" instead of "VPS" for the TLS bit.
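For anyone curious, the VPS leg amounts to roughly this one-liner (the hostname and ports here are made up; the article doesn't give the real DDNS name):

    # plain TCP relay on the VPS; no TLS happens at this hop
    socat TCP-LISTEN:80,fork,reuseaddr TCP:home.example-ddns.net:80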
This reminds me of the recent "running Doom on DNS" post which in actuality was "running Doom from DNS [as a storage device] on my PC" which is multitudes less impressive.
It reminds me of the footage of Doom running on a pregnancy test. And then it turned out it was another computer just displaying to the built-in AMOLED display.
What was supposed to be a cool achievement is rendered pointless when one of the key elements is offloaded elsewhere.
Sometimes these demos enable caching on the reverse proxy. So for these tiny demo HTML pages you request, you may not even reach the fun tiny computer it is supposed to demonstrate.
Considering that a 'base' Raspbian-type install can use something like 160 MB of RAM with openssh running and a lot of other launched-from-systemd daemons in the background, that leaves plenty of RAM available for a stock apache2 or nginx setup with TLS. No, it won't be able to serve a ton of simultaneous requests, but I'm in agreement with the other comments here that serving purely port-80 HTTP and putting it behind a secondary TLS proxy is not really "serving the website" from the Raspberry Pi.
I wouldn’t consider “the way most people do TLS in 2026” weird. That said, this isn’t all that impressive or interesting: a computer… serving a website.
Many (most?) people are hosting web applications and/or content in separate applications, and sometimes on separate servers, from where TLS (HTTPS) termination happens. HAProxy, Traefik, Caddy, and Nginx as reverse proxy and TLS termination servers are pretty common, even more so if you're containerizing the applications themselves. It dramatically simplifies the application stack.
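Caddy is probably the shortest demonstration of how routine this split is; something like the following (domain and port are placeholders) terminates TLS with automatic certs at the proxy and forwards plain HTTP to the app:

    # TLS (and cert renewal) at the proxy, plain HTTP to the backend
    caddy reverse-proxy --from example.com --to localhost:8080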
While I might make the argument that most are probably hosting and running PHP on the same server, it's not the typical approach for any custom software at this point.
It's vastly different to do TLS termination within your own network versus doing it on a rando VPS and then sending plain TCP over the internet. It's not an argument about it being on the same server.
It is more than a little weird. A Pi Zero is more than capable of handling HTTP/1.1 and TLS 1.3 for a handful of connections per second. This machine is 10x what we were running web servers on in the '90s.
Also, web pages like this end up served from RAM anyway; modern OSes automatically cache files in memory on first access.
Yeah, I ran a phpBB forum (alongside my normal static site) on a 486 in 2003 or so. It worked. It was slow, but it worked just fine for my friends and me! I remember it took multiple minutes to generate the SSH server key after the initial install, lol.
In the mid-90s, I retired my 486 hardware and brought it over to a local ISP that we were friends with.
It had a second life doing stuff like delivering mail, handling IRC, serving web pages, and whatever else a few of us wanted from it. The performance was fine.
(The Pentium-ish machines stayed on desktop duty where GUIs devoured resources.)
>This machine is 10x what we were running web servers on in the '90s.
Kind of irrelevant, since operating systems and web pages in the '90s had significantly smaller footprints; the web was mostly plain text back then. Windows XP with its GUI would run Max Payne in 128 MB of RAM. You can't do modern stuff like that today with 128 MB of RAM.
LLMs, including open ones, are really good at this, it turns out. It stands to reason: there is tons of training material out there that they have no doubt consumed and are ready to regurgitate.
Yesterday I one-shotted several interactive pages that Qwen built out of straight HTML and JavaScript. I handed it my API (source code, not even a Swagger spec, via an MCP that Qwen wrote for me), asked for a frontend, and it delivered. One page at a time to keep context down, and I might've gotten lucky on the first draw, but after the first one I told it to make the next ones like the first.
Can't say I've had that experience with backend languages & frameworks, including writing that same API, but perhaps I'm off the beaten path with those, or perhaps there's a greater breadth of things to do vs. a narrower set of acceptance criteria? IDK.
Here I was sweating that I'd have to research and learn a current-day frontend framework. It felt like a magic wand using consumer-grade AI. HTML and plain old JavaScript were plenty.
Tangent but apropos of other contemporary threads on HN, it puts a spin on supply chain threats. There's no NPM or anything, except perhaps whatever mysteries are baked into the model.
In this case, they are static elements, which can even be cached locally to share more easily.
If someone wants a massive build system to render a static HTML page, that's on them, and their personal interpretation. Increasingly, and maybe more often than not, there is more than one way to get the same outcome.
The fact that there are hundreds of downloads for a single web page is up to the constructor of that page. Still, these things can be reasonably cached: for example, host it on the Pi, then put Cloudflare in front of it or something.
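And it's easy to check whether a given request ever actually reached the Pi, since Cloudflare stamps a cache-status header on responses (hostname below is a placeholder):

    # "HIT" means Cloudflare answered from cache and the origin never saw it
    curl -sI https://example.com/ | grep -i cf-cache-status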
The Pi Zero might not be for you, and it's an easy target to pick apart. Which of these criticisms would go away if it were a regular Pi?
Even then, it's usually built before it's deployed on the server. The server is still delivering text, CSS, JS, and images, and images have always been pretty large, so your connection is tied up for a little bit longer. And while content was smaller in the '90s, connections themselves are much faster today: in the '90s you were lucky to be hosting on a T1 or faster, with clients on modems. Today you've likely got between 100 Mb and 2 Gb of uplink on your home connection, let alone business connections that generally start at 1 Gb. That's roughly 600x a T1's bandwidth for the server (1 Gb ÷ 1.544 Mb ≈ 650).
The airline flight number attached to the icon is in a dark color but should be visible. Also, it could switch to satellite view when zoomed in, since that view is dark.
Client isolation is done at L2. You can't add exceptions for IP ranges / protocols / etc. this way, because that's further up the stack. Even if devices can learn about each other in other ways, isolation gets in the way of direct communication between them.
The paper makes the point that you need to consider L3 in client isolation too; they call this the gateway bouncing attack. If you can hairpin traffic between clients at L3, it doesn't matter what protections you have at L2.
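On a Linux gateway, closing that L3 hole looks roughly like this sketch (the interface name is an assumption; real deployments vary):

    # refuse to route packets back out the interface they arrived on,
    # so isolated wireless clients can't use the gateway as a bounce
    iptables -A FORWARD -i wlan0 -o wlan0 -j DROP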
Outside of security stuff, over the years I've found this really handy for troubleshooting as well. Being able to extract detailed process info, screenshots, and a bunch of other things from a memory dump has allowed me to get a better idea of what a user was doing when a Windows BSOD occurred.
It builds a nicer picture of what was going on when paired with the user's description. Or sometimes, depending on the user, you just don't have anything else to go on besides "it crashed".
I suspect they mean that because it's uniform, you can easily find the studs through it and fasten things directly into them.
An uneven wall material (plaster on lath, or even plaster over drywall, as we have in most of our house) can make it quite a hassle to find the actual timbers/studs behind it.
Modern plaster has the same properties, and works well with stud finders.
On a related note, if you can find a strong rare earth magnet, you can use it as a stud finder. It'll be attracted to the nails used to hold up the drywall / plaster backer boards. They sell purpose-built ones with felt backs + built-in bubble levels if you want to get fancy.
And then once all references to the inode are removed (by rotating out backups), it's freed. So there's no maintenance of the deduping needed; it's all just part of how the filesystem and --link-dest work together.
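For anyone who hasn't used it, the pattern is roughly this (the paths are made up):

    # unchanged files in today's run become hard links into yesterday's tree
    rsync -a --link-dest=/backups/2024-01-01 /home/user/ /backups/2024-01-02/
    # a link count above 1 shows a file shared between backups; the inode
    # is only freed once the last backup referencing it is rotated out
    stat -c '%h %n' /backups/2024-01-02/somefile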
Yeah, give borg a look. It's faster to back up, faster to delete old backups, and easier to do restores, because as long as you have the appropriate credentials you can list the archive from any machine.
I think there's still a place/use for --link-dest and hardlinks, but as a backup system I think borg does it better.
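The day-to-day usage is pleasantly short; the repo path and retention policy below are just examples:

    borg init --encryption=repokey /backups/repo             # one-time setup
    borg create --stats /backups/repo::{now} /home/user      # deduplicated archive
    borg list /backups/repo                                  # works from any machine with the key
    borg prune --keep-daily 7 --keep-weekly 4 /backups/repo  # thin out old archives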
A weird flipside is things like the IKEA Zigbee devices. Many of these do not work right at all with 1.5 V batteries and basically require rechargeables.
The reason fringing exists is that they're rendered as multi-colored fonts. As in, instead of 1 pixel for each pixel, the glyph is rendered at 3x1 (or 1x3 for vertical) greyscale and then striped across the subpixels, using the subpixels for their spatial locations.
With DirectWrite and FreeType, subpixel rendering isn't done for colored rendering. On OS X, it also isn't done, due to the lack of subpixel support in Core Text.
I suspect what you're really asking is "why does high color contrast look weird at the edges?" Because some monitors are "exceptionally clear and sharp", and people have been selecting monitors for this trait for over a decade.
On LCDs, "good" polarizers make it hard to make out individual subpixels (which also makes subpixel rendering kinda moot on them; you'll see the fringing but the text won't look any sharper than greyscale, and noticeably less sharper than aliased); instead of "clear and sharp" they're more "natural".
OLEDs and MicroLEDs do not have polarizers, and they're the sharpest monitors I've ever seen. However, good news (at least for me): while I can see subpixels on a 24" 1080p during high color contrast (i.e., fringing fonts), I _cannot_ see them on a 24" 4K.
Even if I integer-scaled 1080p to 4k, I would be using an array that looks like...
R G B R G B
R G B R G B
... to represent pixels. I can't see subpixels in situations like this. So, the only way to avoid the problem is to just use HiDPI monitors. 32" 4K seems to be a very common size for Mac users; it causes macOS to do Retina scaling correctly, while also being about 150% the DPI of a 24" 1080p, or about 125% the DPI of a 27" 1440p (the two most common sizes).
My recommendation is also: never go below 4K on OSX. OSX's handling of sub-4K monitors is broken and usually leads to in-compositor scaling instead of letting apps natively render at LoDPI.
...yet at 4K native on macOS (OS X) I could see fringing. And it was worse than using a slightly lower resolution, scaled up by the OS.
And it's particularly bad on solid color lines and high contrast borders (not fonts). So... that doesn't work for me. Which was the point of the post; I don't like how this particular subpixel pattern OLED monitor looks and it's not for me.
Numbers.app, Autodesk Fusion, Adobe Illustrator, and Terminal.app were the first places I noticed it. And in Fusion and Illustrator it's not text that's the issue but lines/graphics.
And high contrast edges in photos in Apple Photos looked wonky.
Oof, at least two of those apps should not do that. I wonder how Fusion and Illustrator draw lines, because the last time I touched Illustrator (CS6 era), its line drawing was pretty good.
I'd like to see screenshots of these showing off the weirdness, if you don't mind.
I wanted to see what it looked like on my 24" 1080p IPS monitors (two Dell U2414H IPS, and a rebranded LG FastIPS from Monoprice). I don't own a Mac, so I can't replicate it.
All of those share similar traits: the lines are excessively soft in many cases. They're rendered in linear space and then baked to the target gamma ramp, instead of being rendered in sigmoidal space (or with some other pseudo-sharpening / pixel-aligning-without-sharpening methodology).
The font in Numbers is extremely misrendered; that's soft even by Core Text standards. It's as if it had absolutely no hinting applied, instead of the kind they use that approximates FreeType's "light" hinting.
The Terminal.app one is okay, but not amazing, as you can see slightly misshapen stems, such as in the M in "Makefile".
So, I'm not sure the monitor was at fault, but given the "clear and sharp" nature of OLEDs, it certainly magnified the effect.
Yeah, and I'm not terribly interested in getting into the details of how everything renders... I just want a display that works and doesn't make my eyes feel funny.
The PA27JCV (which I don't expect to have back from warranty repair for 3+ weeks) looked fine, and I'm now at day 5 of using the U3223QE and it's fine. So this is my solution to the problem I guess.
From what I can tell from photos of your new monitor's pixels, it has a polarizer similar in character to my Dell U2414Hs, just much newer. It aims for natural reproduction, which means pixels aren't sharply defined from their neighbors, and subpixels blend together.
I prefer monitors like these, so I can't really argue with your choice. Sadly, with a lot of younger kids being raised on phones (which have exceptionally sharp screens), modern high-end screens keep being pushed towards being sharp and clear to a fault.
Apple refuses to adjust rendering, since Apple's own taste in screens prefers natural over sharp. Even their OLEDs clearly have a film on them to hide the subpixel misalignment; the side effect of this is their brightness and contrast is lower, but eye fatigue is also lower.
Unfortunately, this is why I won't take OSX seriously: I bought a MBPr many years ago, I really tried to like OSX, I tried to understand why people like it, but it ultimately is a death by a thousand cuts, and entirely Apple's fault.
That MBPr ran OSX for a year, then Windows 10 a bit, and then Linux until it died. The text rendering was only fatigue-free on Windows and Linux, OSX had always been too fuzzy, especially with dark themes.
If you're not willing to break up with Apple, yeah, you're stuck just buying Apple-friendly monitors. A lot of OLEDs are just too clear and sharp and I don't disagree with you on sending it back.
With the advent of new RGB (three column, like most LCD) OLEDs I wonder if Apple's next high-end display is going to use that. It'd be a whole bunch of things aligning for a good ecosystem.
And I know this is a whole lot of personal preference, but I like macOS. It works well for me. It's a good UNIX(-like?) with professional-level apps.
I support/maintain/use Windows systems for a living, so I'm comfortable there as well, and I'd be mostly fine on Linux, but the lack of pro-level apps for some of my hobbies (namely, map making) and of sufficiently user-friendly equivalents for a few other apps (e.g., rubiTrack, Hazel, Photos.app) is a problem.
(A bunch of years back I made a conscious choice to do less sysadmin-ing at home, even if I have to pay a bit more. It's freed up mental capacity for using computers as a means to an end vs. an end itself. And it means I don't have the flexibility of Linux or other OSS things at times, but I've been able to work within that. But I'm getting way off topic here...)
I'd say that because the article documents my experience at this point in time, the only poor timing is when my old-ish monitor died and I went looking for a replacement. And this article documents my experience with that.
So while the content is in RAM on the Pi, a lot of the heavier lifting (TLS termination) is done elsewhere, which saves a ton of CPU load on the Pi.