Every time I see this kind of article, no one really bothers about db/server redundancy, load balancers, etc. Are we OK with just one big server that may fail and bring several services down?
You saved a lot of money but you'll spend a lot of time in maintenance and future headaches.
It depends on the service and how critical that website is.
Sometimes it's completely acceptable that a server will run for 10 years with say 1 week or 1 month of downtime spread over those 10 years, yes. That's the sort of uptime you can see with single servers that are rarely changed and over-provisioned as many on Hetzner are. Some examples:
Small businesses where the website is not core to operations and is more of a shop-front or brochure for their business.
Hobby websites don't really matter either if they go down for short periods of time occasionally.
Many forums and blogs just aren't very important either, and downtime is no big deal.
There are a lot of these websites, and they are at the lower end of the market for obvious reasons, but probably the majority of websites in fact, the long tail of low-traffic websites.
Not everything has to be high availability and if you do want that, these providers usually provide load balancers etc too. I think people forget here sometimes that there is a huge range in hosting from squarespace to cheap shared hosting to more expensive self-hosted and provisioned clouds like AWS.
Respectfully, this type of "high availability" strawman is a dated take.
This is a general response to it.
I have run hosting on bare metal for millions of users a day and tens of thousands of concurrent connections. It can scale way up by doing the same thing you do in a cloud: provision more resources.
For "downtime" you do the same thing with metal as you do with Digital Ocean: just get a second server and have them fail over.
You can run hypervisors to split and manage a metal server just like Digital Ocean, except you're not vulnerable to the shared-memory and CPU exploits of shared hosting like Digital Ocean. When Intel CPU flaws, memory flaws, or kernel exploits come out, as they have, one VM user can read the memory and data of processes belonging to other users.
Both Digital Ocean and other IaaS/PaaS providers are still running similar Linux technologies to do the failover. There are tools that even handle it automatically, like Proxmox. This level of production-grade failover and simplicity was point and click 10 years ago; it's just that no one's kept up with it.
The cloud is convenient. Convenience can make anyone comfortable. Comfort always costs way more.
It's relatively trivial to put the same web app on a metal server, with a hypervisor/IaaS/PaaS behind the same Cloudflare, to access "scale".
Digital Ocean and Cloud providers run on metal servers just like Hetzner.
The software to manage it all is becoming more and more trivial.
> This level of production grade fail over and simplicity was point and click, 10 years ago.
While some of the tools are _designed_ for point and click, they don't always work. Mostly because of bugs.
We run Ceph clusters under our product, and have seen a fair share of non-recoveries after temporary connection loss [1], kernel crashes [2], performance degradations on many small files, and so on.
Similarly, we run HA postgres (Stolon), and found bugs in its Go error checking that cause failure to recover from crashes and full-disk conditions [3] [4]. This week, we found that full-disk situations will not necessarily trigger failovers. We also found that if DB connections are exhausted, the daemon that's supposed to trigger postgres failover cannot connect to do so (currently testing the fix).
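For the full-disk case, one mitigation is a check that runs outside the HA daemon itself, so a nearly full data volume gets flagged before postgres wedges. A minimal sketch (the path and 90% threshold are made up for illustration, not from Stolon):

```python
import shutil

def disk_nearly_full(path, threshold=0.90):
    """Return True when the filesystem holding `path` is above `threshold` used."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total >= threshold
```

Wired into any out-of-band alerting (cron, node_exporter, whatever), this catches the condition even when the failover daemon itself can no longer act.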
I believe that most of these things will be more figured out with hosted cloud solutions.
I agree that self-hosting HA with open-source software is the way to go. This software is good, and the more people use it, the fewer bugs it will have.
But I wouldn't call it "trivial".
If you have large data, it is also brutally cheaper; we could hire 10 full-time sysadmins for the cost of hosting on AWS, vs doing our own Hetzner HA with Free Software, and we only need ~0.2 sysadmins. And it still has higher uptime than AWS.
It is true that Proxmox is easy to set up and operate. For many people it will probably work well for a long time. But when things aren't working, it's not so easy anymore.
I'm not arguing for cloud or against bare metal hosting, just saying there is a broad range of requirements in hosting and not everyone needs or wants load balancers etc - it clearly will cost more than this particular poster wants to pay as they want to pay the bare minimum to host quite a large setup.
I feel like 95% of the web falls into this category. Like, have you ever said "That's it, I am never gonna visit this page again!" because of temporary downtime? Unless you are Amazon and every minute costs you bazillions, you are likely gonna get the better deal by not worrying about availability and scalability. That 250€/m root server is a behemoth, complete overkill for most anything. As a bonus, you'll stay up when someone at AWS or Cloudflare touches DNS and takes down half the internet.
Exactly. I've never not bought something because the website was temporarily down. I've even bought from b&h photo!
Even if Amazon was down, if I was planning to buy, I'd wait. Heck, I've got a bunch of crap in my cart right now I haven't checked out.
Intentional downtime lets everyone plan around it, reduces costs by not needing N layers of marginal utility which are all fragile and prone to weird failures at times you don't intend.
For me at least, the only thing where availability really matters is main personal communication services. If Signal was down for an hour, I'd be a little stressed. Maybe utilities like public transportation, too, but that's because I now have to do that online.
> Intentional downtime lets everyone plan around it, reduces costs by not needing N layers of marginal utility which are all fragile and prone to weird failures at times you don't intend.
Quite frankly, I would manage if things were run "on-supply" with solar and would just go dark at night.
> Like, have you ever said "That's it, I am never gonna visit this page again!", because of temporary downtime?
That's a strawman version of what happens.
There have been times when I've tried to visit a webshop to buy something but the site was broken or down, so I gave up and went to Amazon and bought an alternative.
I've also experienced multiple business situations where one of our services went down at an inconvenient time, a VP or CEO got upset, and they mandated that we migrate away from that service even if alternatives cost more.
If you think of your customers or visitors as perfectly loyal with infinite patience then downtime is not a problem.
> Unless you are Amazon and every minute costs you bazillions, you are likely gonna get the better deal not worrying about availability and scalability. That 250€/m root server is a behemoth. Complete overkill for most anything.
You don't need every minute of downtime to cost "bazillions" to justify a little redundancy. If you're spending 250 euros/month on a server, spending a little more to get a load balancer and a pair of servers isn't going to change your spend materially. Having two medium size servers behind a load balancer isn't usually much more expensive than having one oversized server handling it all.
There are additional benefits to having the load balancer set up for future migrations, or to scale up if you get an unexpected traffic spike. If you get a big traffic spike on a single server and it goes over capacity you're stuck. If you have a load balancer and a pair of servers you can easily start a 3rd or 4th to take the extra traffic.
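For concreteness, the "pair of servers behind a load balancer" setup can be as small as an nginx config fragment like this (the IPs are placeholders; a provider's managed load balancer accomplishes the same thing):

```nginx
upstream app_pool {
    # two medium servers instead of one oversized one
    server 10.0.0.11:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:8080 max_fails=3 fail_timeout=30s;
    # server 10.0.0.13:8080;  # uncomment when a traffic spike hits
}

server {
    listen 80;
    location / {
        proxy_pass http://app_pool;
    }
}
```

Scaling out for a spike is then one more `server` line and a reload, exactly the flexibility a single box can't give you.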
> There have been times when I've tried to visit a webshop to buy something but the site was broken or down, so I gave up and went to Amazon and bought an alternative.
Great. So how much did the webshop lose in that hour of maintenance (which realistically would be in the middle of the night for their main audience) and how much would they have paid for redundancy? Also a bit hard to believe you repeatedly ran into the situation of an item sold at a self-hosted webshop and Amazon alike. Are you sure they haven't just messed up the web dev biz? You could totally do that with AWS too...
> If you're spending 250 euros/month on a server, spending a little more to get a load balancer and a pair of servers isn't going to change your spend materially.
Of course, but that's not the argument. It's implied you can just double the 250€/m server for redundancy, as you would still get an offer at the fraction of cloud prices. But really that server needs no more optimization in terms of hardware diversification. As I said, it's complete overkill. Blogs and forums could easily be run on a 30€/m recycled machine.
Well why have downtime if you can avoid it with a bit of work?
But I do agree the poster should think about this. I don't think it's 'off' or misleading, they just haven't encountered a hardware error before. If they had one on this single box with 30 databases and 34 Nginx sites it would probably be a bad time, and yes they should think about that a bit more perhaps.
They describe a db follower for cutover for example but could also have one for backups, plus rolling backups offsite somewhere (perhaps they do and it just didn't make it into this article). That would reduce risk a lot. Then of course they could put all the servers on several boxes behind a load-balancer.
But perhaps if the services aren't really critical it's not worth spending money on that, depends partly what these services/apps are.
I run internal services on DO that I've considered moving to Hetzner for cost savings.
Could I take it down for the afternoon? Sure. Or could I wait and do it after hours? Also sure. But would I rather not have to deal with complaints from users that day and still go home by 5pm? Of course!
To be fair, a lot of ppl still run this way and just have really good backups, or have an offline / truly on-prem server where they can flip the DNS switch in case of a true outage.
Yes and for many services that is totally fine. As long as you have backups of data and can redeploy easily. It's not how I personally do things usually but there is definitely a place for it.
Also, in general, you can architect your application to be more friendly to migration. It used to be a normal thing to think about and plan for.
VMware has a conversion tool that converts bare metal into images.
One could image, then do regular snapshots, maybe centralize a database being accessed.
Sometimes it's possible to create a migration script that you run over and over to the new environment for each additional step.
Others can put a backup server in between to not put a load on the drive.
Digital Ocean makes it impossible to download your disk image backups which is a grave sin they can never be forgiven for. They used to have some amount of it.
Still, a few commands can back up the running server to an image, and stream it remotely to another server, which in turn can be updated to become bootable.
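A hedged sketch of what those "few commands" could look like (the device name, host, and paths are placeholders; note a live image is only crash-consistent, so quiesce or snapshot databases first where possible):

```shell
# Stream a whole-disk image of a running server to a remote box over ssh.
dd if=/dev/sda bs=4M status=progress conv=sync,noerror \
  | gzip -c \
  | ssh backup@backup-host 'cat > /backups/server1-sda.img.gz'
```

The resulting image can later be written to a fresh disk (or attached to a VM) and fixed up to boot, which is the "updated to become bootable" step mentioned above.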
This is the tip of the iceberg in the number of tasks that can be done.
Someone with experience can even instruct LLMs to do it and build it, and someone skilled with LLMs could probably work to uncover the steps and strategies for their particular use case.
A week of downtime every decade I think still works out to a higher uptime than I've been getting from parts of GitHub lately. So I'd consider that a win.
These articles are popular where there's a mismatch between application requirements and the solution chosen. When someone over-engineers their architecture to be enterprise-grade (substitute your own definition of enterprise-grade) when really they were running a hobby project or a small business where a day of downtime every once in a while just means your customers will come back the next day, going all-out on cloud architecture is maybe not necessary. That's why you see so many comments from people arguing that downtime isn't always a big deal or that risking an outage is fine: There are a lot of applications where this is kind of true.
The confusing part about this article is the emphasis on a zero-downtime migration toward a service that isn't really ideal for uptime. It wouldn't be that expensive to add a little bit of architecture on the Hetzner side to help with this. I guess if you're doing a migration and you're paid salary or your time is free-ish, doing the migration in a zero-downtime way is smart. It's a little funny to see the emphasis on zero downtime juxtaposed with the architecture they chose, where uptime depends on nothing ever failing.
Clever architecture will always beat cleverly trying to pick only one cloud.
Being cloud agnostic is best.
This means setting up a private cloud.
Hosted servers and managed servers are perfectly capable of near-zero downtime. This is because it's the same equipment (or often more consumer-grade) that the "cloud" runs on, and the cloud plans for even more failure.
Digital Ocean definitely does not guarantee zero downtime. That's a lot of 9's.
It's simple to run well established tools like Proxmox on bare metal that will do everything Digital Ocean promises, and it's not susceptible to attacks, or exploits where the shared memory and CPU usage will leak what customers believe is their private VPS.
With a tool like Proxmox, "nothing ever failing" looks like this: install it on two servers, connect them as nodes so one VM exists on both, click high availability, and it's generally up and running. Put Cloudflare in front of it, per today's best practices.
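The point-and-click flow described above roughly corresponds to this CLI sequence (hostnames and the VM id are placeholders; note that a two-node cluster really wants a third quorum vote, e.g. a QDevice, to avoid split-brain):

```shell
# On node 1: create the cluster
pvecm create demo-cluster

# On node 2: join it, pointing at node 1
pvecm add node1.example.com

# Mark VM 100 as HA-managed so it restarts on the surviving node
ha-manager add vm:100 --state started

# Verify quorum and HA resource state
ha-manager status
```

That's the whole happy path; the earlier replies about Ceph and Stolon are a fair warning about what the unhappy paths look like.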
If you're curious about this, there's some pretty eye opening and short videos on Proxmox available on Youtube that are hard to unsee.
Sadly, hardware breaks. You still need a working backup and a working failover plan, even if it's just setting up a new server and running your Terraform / Pulumi / Saltstack scripts.
Indeed, I missed the "two servers" part; a two-node mirrored config is what I suggested myself elsewhere in the thread. It's still much less expensive than anything comparable in the cloud.
Also, don't underestimate the reliability of simplicity.
I was a Linux sysadmin for many years, and I have never seen as much downtime from simpler systems as I routinely see from the more complicated setups. Somewhere between theory and reality, simpler systems just come out ahead most of the time.
To be fair, they were using a single VM on DigitalOcean, so they didn't have the perks of a cloud provider, except maybe the fact that a VM is probably more fault-tolerant than a bare-metal server.
Usually those articles describe two situations:
- they were "on the cloud" for the wrong reasons and migrating to something more physical is the right approach
- they were "on the cloud" for the right reasons and migrating to something more physical is going to be a disaster
Here they appear to be in the first situation.
If their setup was running fine on DO and they put the right DR policies in place at Hetzner, they should be fine.
People also tend to underestimate how much compute these dedicated servers have compared to cloud offerings, and what that feels like without 100 layers of management abstraction in between. You are likely never going to choke a plenty-cored, funny-RAMed root server at a fraction of your cloud costs. This overkill resource estate can be the answer to a lot of scalability worries. It's always there, no sharing at all.
They may be making this decision based on a long history of, in fact, never really having run into "a lot of time in maintenance and future headaches".
To be fair, I migrated a VPS from Linode to Hetzner a few years ago. Minor downtime is a non-issue: personal website and email server. I approximately halved the monthly cost, and I haven't had any downtime except what I caused myself when rebooting to upgrade the kernel every now and then.
I had like... less than 10 minutes downtime on Hetzner in years (funny enough, that makes my personal containers more reliable than productionized AWS and GCP deployments with their constant partial outages).
So perhaps all that complexity (beyond maybe a backup container) isn't really necessary for companies where a bit of downtime doesn't really affect revenue?
Like, I know Leetcode tells otherwise, but most companies really don't need full FAANG stack with 99.999% uptime. A day of outage in a few years isn't going affect bottom lines.
To be fair, modern dedicated servers at Hetzner have two power supply units and come with a redundant SSD/HDD RAID-1 config. AFAIK both the SSDs and the power units have hotplug capability, so if either fails it can be replaced with zero downtime.
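On the software-RAID side (Hetzner's default install uses mdadm), a failed member of such a RAID-1 can typically be swapped without a reboot. A sketch with placeholder device names:

```shell
# Mark the dying member failed and pull it from the array
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

# Add the hot-plugged replacement and let the mirror resync
mdadm --manage /dev/md0 --add /dev/sdc1

# Watch the rebuild progress
cat /proc/mdstat
```

The box keeps serving traffic off the surviving mirror the whole time.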
Given the downtimes we saw in the past year(s) (AWS, Cloudflare, Azure, the latter even down several times), I would argue moving to any of the big cloud providers doesn't give you much better a guarantee.
I myself am a Hetzner customer with a dedicated vServer, meaning it is a shared virtual server but with dedicated CPUs (read: still oversubscribed, but some performance guarantee) and had zero hardware-based downtime for years [0]. I would guess their vservers are on similar redundant hardware where the failing components can be hotswapped.
[0] = They once within the last 3 years sent me an email that they had to update a router that would affect network connectivity for the vServer, but the notification came weeks in advance and lasted about 15 minutes. No reboot/hardware failure on my vServer though.
I was thinking the same. A managed database is just set and forget pretty much. I do NOT miss the old times where I had to monitor my email from routine security checkups hoping my database didn't get hacked by some script kiddie accompanied by blackmail over email.
What you are running on it is the only question that matters. Obviously you don't want air traffic control to go down, but some app? So what if it goes down? The backup is somewhere else, if you even need it anyway. GitHub has uptime of less than 90% according to this: https://mrshu.github.io/github-statuses/ . And the world keeps turning. Obviously we should strive for better, but let's please not keep making an uptime fetish out of it; for the vast majority of apps it absolutely doesn't fucking matter.
DO doesn't do high-availability droplets, and their migration policy amounts to "we'll try, if we detect poor health of the server before it fails."
If someone starts thinking about redundancy and load balancers, then DO's solution is to rent a second similar-sized droplet and add their load-balancing service. If you do those things with Hetzner instead, you would still be spending less than you did with Digital Ocean.
Personally, what is keeping me on DO is that no single droplet I have is large enough to justify moving on its own, and I'm not prepared to deal with moving everything.
I don't know about Hetzner but with Upcloud and Vultr my single VPS setups have been more reliable than multiregion with redundancy setups with other providers like Fly.
A few weeks ago, I tested deploying Rails apps to Hetzner and Vultr for the first time using Hatchbox to deploy Rails apps onto them. I'm still supporting clients on Heroku, but there are potential new projects in the coming months that I might deploy elsewhere. Render is decent in some cases, but you can get a lot of bang for your buck deploying on Vultr, and Hatchbox makes it easy to do, whether you have one instance or a cluster. Hatchbox also helps with putting multiple apps/domains on a single server, a concept I had to give up long ago on Heroku. I've thought about deploying to DO plenty of times over the years, but there was always Heroku, and if I had to find a new home for Rails 8, I think I'd skip it in favor of a more powerful Vultr server. Hatchbox can provision Postgres for you, but Vultr has managed Postgres which is appealing to me. Or if you're just using Sqlite with Rails 8, that's easy to do with Hatchbox but not on Render since Render has an ephemeral file system.
Vultr is goated, I've been using them since ~2020 and have never had any issues. I stopped a year or so back and recently went through the whole onboarding process again and it was dead simple even 6 years later, with barely any price increase compared to other providers. Hetzner will always be the worst to me because plainly their UX sucks; I can't imagine if I was a non-technical user trying to use it
If you have the setup within the server fully scripted and automated (bash, pyinfra, ansible, etc.) and backups are in place, then recovery isn't that hard. Downtime, sure, maybe a couple of hours, during which you can point your DNS entries to a static page while you're restoring everything.
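As a sketch of the "fully scripted" idea, a minimal Ansible playbook might look like this (all names and paths here are made up; adapt to your own roles and backup tooling):

```yaml
# rebuild.yml - rerunnable from a blank server to a working one
- hosts: web
  become: true
  tasks:
    - name: Install base packages
      apt:
        name: [nginx, postgresql, restic]
        state: present

    - name: Deploy site config
      template:
        src: templates/site.conf.j2
        dest: /etc/nginx/sites-enabled/site.conf
      notify: reload nginx

  handlers:
    - name: reload nginx
      service:
        name: nginx
        state: reloaded
```

Because the playbook is idempotent, the disaster-recovery plan is just "provision a fresh box, run it again, restore the latest backup."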
> Every time I see this kind of article, no one really bothers about db/server redundancy, load balancers, etc. Are we OK with just one big server that may fail and bring several services down?
It seems that, to start off, the original system was hosted on a single Digital Ocean droplet with 32 vCPUs and a total of 192GB of RAM.
They switched to a single Hetzner AX162-R instance.
So the blogger switched from a single droplet with 32 vCPUs to a dedicated server running an AMD EPYC 9454P and 256GB of RAM.
That should answer your question with a clear yes.
> You saved a lot of money but you'll spend a lot of time in maintenance and future headaches.
The vast majority of services are actually alright with a little downtime here and there. In exchange, maintenance is a lot simpler with less moving parts.
People underestimate how far you can go with one or two servers. In fact, what I have seen in my career is many examples of services that should have been running on one or two servers and instead went for a hugely complex microserviced approach, all in on cloud providers, with crazy reliability requirements for a scale that never came.
Downtime happens in so many different contexts of life that a website/service being knocked offline is far down the priority list for most people.
It’s amusing that the US government can shut down for days/weeks/months over budget reasons and no adult discussions take place about fixing the cause, yet the latest HN demo that 100 people will use needs all-nines reliability and gets hundreds of responses.
> You saved a lot of money but you'll spend a lot of time in maintenance and future headaches.
Look, from my perspective: I got flying colors from my owner because of the cost savings, and it boosted morale on my team, whom I really need to maintain the system, instead of laying them off.
Also in some cases that also mean new jobs opening.
I wondered the same! FWIW I'm currently migrating from managed postgres to self-managed on hetzner with [autobase](https://autobase.tech/). Though of course for high availability it requires more than one server.
Beware of Hetzner Cloud volumes, they're unusable for a database, they're too slow. I'm not sure what workloads people run on Hetzner but the low-performance volumes and unreliable load balancers don't seem like a good fit for real production stuff with traffic.
I ran some benchmarks a couple of years ago. I don't have them at hand, unfortunately, but off the top of my head, seqread 4k produced around 1500 IOPS, while seqwrite was about a third of that. The practical performance was abysmal: I moved PostgreSQL storage to a volume and it was very noticeably slower just from browsing the web app (compared to NVMe SSD storage).
For comparison, I'm now using UpCloud which uses network-attached storage for all volumes and easily hits 10k IOPS (up to 100k with some tuning).
I certainly may have missed something while testing this so I'm happy if someone else wants to contribute and correct me if I'm wrong.
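For anyone who wants to reproduce the comparison, a fio run along the lines described (the file path and sizes are placeholders) would be something like:

```shell
# 4k sequential read against the mounted volume, bypassing the page cache
fio --name=seqread --filename=/mnt/volume/fio-testfile --size=1G \
    --rw=read --bs=4k --ioengine=libaio --iodepth=32 --direct=1 \
    --runtime=60 --time_based --group_reporting
```

Swap `--rw=read` for `randread`, `write`, or `randwrite` to cover the other cases; the reported IOPS line is the number being compared above.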
I agree with you, even for the servers I am responsible for I always make decisions like putting db on supabase instead of local, hosting files on s3 with versioning/multi region etc. then of course come up with a backup and snapshot system.
I already made a comment here about testing Hatchbox. You point it to your servers and it can set up a cluster and load balancer with a few button clicks.
I’d even argue that most things operated by tech don’t need 24x7x365 availability. If it’s about life-and-death, then yes, make it super reliable and available. Otherwise, bring back scheduled downtime please.
The EU providers are simply not on par with AWS, CloudFlare, GCP, etc.
Yes, you can get cheap servers, but then you have to self-host and manage a bunch of services that you could get for pennies on the dollar in AWS.
There are hundreds of datacenter providers and yet, most are absolute garbage when it comes to customer support, problem resolution, you get really old hardware, many times you have to send an email and wait weeks because they don't have a self-service UI, SLA is a joke, etc.
You can do it, it's just gonna be a nightmare and you'll spend more time/money on it.
Check out Scaleway (France). They have by far the broadest range of managed services, with full permissions/IAM integration etc.; it is the closest EU match to AWS. Yes, if your entire existing setup depends on one specialized AWS service (e.g. DynamoDB) you will still need to go with AWS, but when building from scratch it's a different story.
Exactly. Look at Hetzner: zero services, everything they tried to do sucked, they still have no managed k8s, and their S3 attempt was horrible. This is one of the leading cloud platforms in the EU and the developer experience sucks.
They don't seem to care either. The mindset is different.
Last week Z.ai coding plan was unusable due to a lot of people abusing the coding plan with OpenClaw. This can be verified: https://openrouter.ai/z-ai/glm-5-turbo
OpenClaw managed to burn 2.46 trillion tokens just in the last 30 days.
I'm not even gonna judge why someone needs an AI assistant running 24/7; the core issue is that coding plans are being ruined because these people aren't paying for the ridiculous number of tokens burned.
Anthropic is actually making the right decision: You want a LOT of tokens for your 24/7 agent? Ok, just use the API and pay for your tokens.
I enjoy paying for a sub that I actually use to code, and what we pay today is not even enough to cover the costs of running AI servers.
For the last 2 years I can't even get an interview, despite having 14 years of experience and being up to date with development trends, libraries, languages, AI tooling, etc.
I don't think the market is flooded with new devs as many claim; I think we are in a deep, silent crisis.
I've been able to get something like 25 interviews in 2 months despite having long gaps on my resume and nothing especially impressive to my name. So I suspect you might be going about this wrong. I haven't gotten an offer yet, that's another story, but getting the interviews hasn't been hard. Applying in NYC/SF, senior-only.
I honestly have no idea. The last place I worked is pretty well-known. Not big tech, but a recognizable name to most people. I send out a lot of applications: those 25 interviews are the result of 150 applications in the last two months or so. And then I have my linkedin set to be discoverable and looking for a job. Basically just fiddle with the options under Visibility and Data Privacy in the linkedin settings and a bunch of people start reaching out to you immediately. I also think I have a nicely formatted resume, really readable.
So are the majority of these applications the result of recruiters finding you via LinkedIn, or have you been applying direct as well? What application path have most of the interviews come from?
Location has always been a huge factor in these discussions. There are usually significantly fewer opportunities outside of hubs. It’s a cart/horse problem, because companies go to those hubs to hire due to the talent pool.
IMO it’s just a depression for tech. Back then, 33% of total employment got gutted, which is probably better than tech today, or in a few years when big tech starts its AI gutting.
I don't know. The company I work at is inviting candidates for interviews, and we have to make compromises because we can't get the exact profiles we are looking for. Something about your comment does not add up to me.
Locality. People want to work close to where they live and not all places are bustling with all kind of activity. I suspect you're hybrid or on site only, right?
not GP, but we're hybrid but remote-first and 80% is remote and we have the same experience. Getting juniors is easy, getting seniors+ is very difficult.
The model I am mentioning matches with this. Speaking from my own personal experience as well, when you're junior and young, you can move anywhere, especially if you're ambitious. As you gain experience, you also settle down a bit in your life, you have a wife, kids, a house. Their jobs and schools. Moving then is a _big deal_.
Of course, there are other factors that make juniors more abundant on the current job market, namely, most companies don't want them.
That absolutely makes sense, but I'm not sure it is the reason. I mentioned we're remote first: we hire _everywhere_. I've been with this company for 7 years, and haven't traveled to HQ even once, and have worked from home or a spot of my choosing (but honestly, that spot is almost always home!) every day, that's how remote first we are - nobody has to uproot their life to work with us.
But it's still extremely hard to find senior+. I'm sure our tech stack plays a role, and naturally senior developers are much less common than juniors. But whenever I hear about the job market being super hard, I feel like I'm living in a parallel universe.
AI is not replacing anyone from my perspective, but AI might become our only hope at some point, because we're growing aggressively. I have to keep mediocre people because I can't even replace at that level easily - the only ones I'm pruning are the ones that are net-negative contributors.
Ah, sorry, I misunderstood your original post then. I interpreted "hybrid, remote first" as: you can be remote most days but you _need_ to be in the office a couple of days. This just goes to show the hybrid model has _a lot_ of variants.
Back to the point: I think I'm pretty senior, mostly embedded SW. Thankfully I still have work, but the job market seems to have cratered. I have friends who are pretty good and have been looking for jobs for about half a year now.
I'm incredibly curious now what is your tech stack. And how do you guys view people looking to switch tech stacks.
We're very boring, our stack is PHP/postgres/mysql. A lot of Symfony, a lot of Symfony-style-code on top of Wordpress (mentioning that usually puts people off but it's all PHP in the end, and you can choose to write clean code on either).
Lots of people see PHP in general as a dead end career-wise and WP specifically as almost an insult, so there aren't many that advanced their skills and have continued to work with PHP (or Wordpress, but I believe that an experienced PHP developer has no trouble picking up WP).
We're generally very neutral on how someone arrived where they are, we don't require certificates or degrees, we focus on experience and skills. I wouldn't hire someone who isn't experienced with at least one side of our stack though (unless they're extremely good) because it takes time from other developers to upskill them and that's the one resource we don't have.
I won't disclose where I work though as that would dox myself and I much prefer anonymity.
Sometimes it looks like the longer you're looking for a job, the harder it gets for some reason. That's unintuitive for me, as you should be getting more confident in interviews etc
Maybe. Probably? But I also sense a fallacy here. I could get a new job tomorrow. Maybe it took me 8 years to find that job and I didn’t realize that because I was employed the whole time.
People wonder why a candidate was passed over by everyone else before committing to them; that's all it comes down to.
Focus on what you can control, and you can control the perception of that. If you are interested in money, professional validation, and corporate structure, go that way.
You can try to alter the cultural fundamental assumptions when you're done.
That sounds like poor signaling: you think you are doing all the correct things, but all the evidence points to the contrary.
Instead of focusing on trends, you might look at qualifications like education, certifications, security clearance, skill expertise, open source contributions, and so forth. Trends exert a gravity. I recommend distancing yourself from the crowd to stand out uniquely. Then, as edge-case opportunities open up, recruiters come to you.
To some recruiters, there's a sweet spot between 5 and 10 years of experience where the applicant is good/independent enough to hit the ground running, not too expensive, and still young enough to put up with company bullshit.
A big problem we have is the sheer volume of AI-slop resumes, fake applicants, and people trying to cheat on interviews. We had to close a req for an SWE role because we had so many "people" (read: automated applicants) clogging up the pipeline. You effectively need a referral.
Referrals are also getting gamed. If your company has a referral bonus, then I promise you pretty much every single referral you have looked at is from a guy who DIDN'T know the applicant. I applied to 20 Big Tech companies last month, all from "referrals". Check out teamblind.com if you don't know it (be careful: the site is like a tech version of 4chan. Well, maybe not THAT bad). The whole game is messed up.
Both of these views are consistent with each other. Most people are honest and issue few referrals. But the few people who are dishonest issue loads of referrals. Therefore, most of the referrals a company gets are from dishonest people.
The market in the EU is strange; it doesn't matter where you live. Every role is advertised as remote, gets 200+ applicants, and it's virtually impossible to get noticed.
I blame this on people spamming fake AI CVs 24/7, no one is going to review hundreds of CVs.
I have been thinking about exactly this. CF Workers are nice, but the vendor lock-in is a massive issue in the mid to long term.
Bringing in D1 makes a lot of sense for web apps via libSQL (SQLite with read/write replicas).
Do you intend to support the current wrangler file format?
Does this currently work with Hono.js via the Cloudflare connector?
Wrangler file format: not planned. We're taking a different approach for config, but we intend to be compatible with Cloudflare adapters (SvelteKit, Astro, etc). The assets binding already has the same API; we just need to support _routes.json and add static file routing on top of workers. The data model is ready for it.
For D1: our DB binding is Postgres-based, so the API differs slightly. Same idea, different backend.
Hono should just work, it just needs a manual build step and copy paste for now. We will soon host OpenWorkers dashboard and API (Hono) directly on the runner (just some plumbing to do at this point).
I think it would be worth keeping D1 compatibility; SQLite and Postgres have different SQL dialects. Cloudflare has Hyperdrive to keep connections alive to Postgres and other DBs. What D1/libSQL/Turso brings to the table is the ability to run a read/write replica on the same machine, which can dramatically reduce latency.
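To make the dialect point concrete, here's a minimal sketch (Python's stdlib sqlite3; the table and column names are illustrative) of how the same schema is written differently for SQLite vs Postgres, which is why a Postgres-backed binding can't transparently pretend to be D1:

```python
import sqlite3

# SQLite dialect: INTEGER PRIMARY KEY AUTOINCREMENT, "?" placeholders.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
row = conn.execute("SELECT id, name FROM users").fetchone()
print(row)  # (1, 'alice')

# The equivalent Postgres DDL differs, e.g.:
#   CREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT);
# and drivers like psycopg use "%s" placeholders instead of "?".
```

So even with the same binding API, queries written against D1's SQLite dialect would need rewriting for a Postgres backend.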
This won’t replace my GL-AXT1800 which offers a lot more flexibility.
Unifi shipping without eSIM support is a big mistake imo.
I don't want to have a 5G router (which are insanely expensive) or a second smartphone with 5G.
This is a travel router without a modem. It would be super inconvenient to buy an eSIM for a device that has no modem. You might as well buy an eSIM for your toothbrush when traveling abroad; it would be equally "convenient."