I'm not going to re-write it; the TL;DR is that they are making an apples-and-oranges comparison.
Yes they "saved money" but in no way, shape or form are the two comparable.
The polite way to put it is: they saved as much money as they did because they made very heavy-handed "architectural decisions". "Decisions" that they appear to be unaware of having made.
> Sometimes you just want to click a button and the server or bucket or whatever is ready and here's the access info and it has sane defaults and if I need backups or whatnot it's just a checkbox. Your time is worth money too.
You're describing Hetzner Cloud, which has been like this for many years. At least 6.
Hetzner also offers the Hetzner Cloud API, which means we don't have to click any buttons at all and can have everything in IaC.
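As an untested sketch of what that looks like: the endpoint path is Hetzner's public Cloud API, but the token, server type, and image names below are placeholders, not recommendations.

```python
import json
import urllib.request

# Placeholder values: pick real ones from your Hetzner account/docs.
payload = {
    "name": "web-1",
    "server_type": "cx22",
    "image": "ubuntu-24.04",
}

req = urllib.request.Request(
    "https://api.hetzner.cloud/v1/servers",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer YOUR_API_TOKEN",  # placeholder token
        "Content-Type": "application/json",
    },
    method="POST",
)

# urllib.request.urlopen(req)  # uncomment with a real token
print(payload["name"])
```

In practice you'd wrap this in your IaC tool of choice rather than calling the API by hand; the point is just that nothing here requires a browser.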
> We had a massive merge at work last year where two teams diverged for months on a shared codebase.
There is no tool in the world that can save you guys from yourselves. There is a reason why Agile methodologies put such a premium on continuous integration.
No, the integration branch is not "broken"; it's just not complete until all slices have been merged INTO the integration branch. After all slices have been merged, the integration branch is complete, yet has a non-optimal history (and most likely wrong blame, because of how git resolves blame). Therefore the "kokomeco" branch is created after the slices have been merged; there, the original intended merge is done, because the outcome of the conflicts is already known from the integration + slice branch merges.
Feel free to open issues/questions in the repo if you're interested, I merely stumble by ycombinator
> No, the integration branch is not "broken"; it's just not complete until all slices have been merged INTO the integration branch
What do you call an incomplete branch that is missing slices?
> After all slices have been merged, the integration branch is complete, yet has a non-optimal history (and most likely wrong blame, because of how git resolves blame)
What is the value proposition then? Broken integration branches that leave a suboptimal history? What am I missing?
> Feel free to open issues/questions in the repo if you're interested, I merely stumble by ycombinator
I don't think there is a compelling reason to use this tool. It messes up commit history and leaves integration branches in a broken state? Not a great selling point. The alternative would be to stay in sync using standard flows such as rebases and merges from base branches. You don't need a tool for that, only a hello-world tutorial on Git.
> What do you call an incomplete branch that is missing slices?
"incomplete", evidently. I don't see a real alternative here - you need some working space for an in-progress merge, and if you want to do the merge collaboratively, you'll want it on a branch. Just don't run CI on that branch till the merge is complete.
> Broken integration branches that leave a suboptimal history? What am I missing?
You appear to be missing the next step, where they use the merge resolutions from that suboptimal history to replay the original merge, giving you back nice clear history (and at this point, the integration branch can be discarded, presumably)
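That "replay using known resolutions" step maps closely onto git's built-in `rerere` (reuse recorded resolution). I don't know whether this tool uses it under the hood, but a minimal sketch of the idea (hypothetical repo, branch, and file names) looks like:

```python
import os
import subprocess
import tempfile

repo = tempfile.mkdtemp()

def git(*args):
    # Run git inside the demo repo; merge conflicts are expected,
    # so don't raise on non-zero exit codes.
    return subprocess.run(["git", *args], cwd=repo,
                          capture_output=True, text=True)

def write(text):
    with open(os.path.join(repo, "f"), "w") as fh:
        fh.write(text)

git("init", "-q", "-b", "main")
git("config", "user.email", "demo@example.invalid")
git("config", "user.name", "demo")
git("config", "rerere.enabled", "true")  # record conflict resolutions

write("base\n"); git("add", "f"); git("commit", "-qm", "base")
git("checkout", "-qb", "slice"); write("slice\n"); git("commit", "-qam", "slice")
git("checkout", "-q", "main"); write("main\n"); git("commit", "-qam", "main")

# First merge: conflict, resolved by hand; committing the resolution
# makes rerere record it.
git("merge", "slice")
write("resolved\n"); git("add", "f"); git("commit", "-qm", "integration merge")

# Redo the "real" merge from a clean state: rerere replays the recorded
# resolution, so the same conflict no longer needs hand-editing.
git("reset", "--hard", "HEAD~1")
git("merge", "slice")
content = open(os.path.join(repo, "f")).read().strip()
print(content)
```

The second merge hits the identical conflict, but the working tree ends up with the previously recorded resolution, which is exactly the "replay the original merge with known outcomes" workflow described above.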
> weve basically been brainwashed to think we need kubernetes and 3 different databases just to serve a few thousand users. gotta burn those startup cloud credits somehow i guess.
I don't think it makes any sense to presume everyone around you is brainwashed and you are the only soul cursed with reasoning powers. Might it be possible that "we" are actually able to analyse tradeoffs and understand the value of, say, having complete control over deployments, with out-of-the-box support for things like deployment history, observability, rollback control, and infrastructure as code?
Or is it brainwashing?
Let's put your claim to the test. If you believe only brainwashed people could see value in tools like Kubernetes, what do you believe are reasonable choices for production environments?
i think you missed the "on day 1" part of my comment. k8s, iac, and observability are incredible tools when you actually have the scale and team to justify them.
my point is strictly about premature optimization. ive seen teams spend their first month writing helm charts and terraform before they even have a single paying user. if you have product-market fit and need zero-downtime rollbacks, absolutely use k8s. but if youre just validating an mvp, a vps and docker-compose (or sqlite) is usually enough to get off the ground.
> i think you missed the "on day 1" part of my comment. k8s, iac, and observability are incredible tools when you actually have the scale and team to justify them.
No, not really. It's counterproductive and silly to go out of your way to set up your whole IaC in a tool you know doesn't fit your needs just because you have an irrational dislike for a tool that does. You need to be aware that nowadays Kubernetes is the interface, not the platform. You can easily use things like minikube, k3s, microk8s, etc., or even have sandbox environments on local servers or cloud providers. It doesn't matter if you target a box under your desk or AWS.
It's up to you to decide whether you want to waste your time making your life harder. Those who you are accusing of being brainwashed seem to prefer getting stuff done without fundamentalism.
You can avoid the overhead of working with the database. If you want to work with json data and prefer the advantages of text files, this solution will be better when you're starting out. I'm not going to argue in favor of a particular solution because that depends on what you're doing. One could turn the question around and ask what's special about SQLite.
If your language supports it, what is the overhead of working with SQLite?
What's special about SQLite is that it already solves most of the things you need for data persistence without adding the same kind of overhead or trade-offs as Postgres or other persistence layers, and it saves you from solving those problems yourself in your json text files...
Like by all means don't use SQLite in every project. I have projects where I just use files on the disk too. But it's kinda inane to pretend it's some kind of burdensome tool that adds so much overhead it's not worth it.
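To make the "overhead" point concrete: in Python, for example, the stdlib module is the entire setup. A minimal sketch (table and values are illustrative):

```python
import sqlite3

# The whole "database install" is one call; use a file path instead of
# ":memory:" to get durable on-disk persistence.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
con.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))
row = con.execute("SELECT body FROM notes").fetchone()
print(row[0])
```

No server, no connection pool, no migration tooling required on day one, which is roughly the same operational footprint as reading and writing a json file.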
Battle-tested, extremely performant, easier to use than a homegrown alternative?
By all means, hack around and make your own pseudo-database file system. Sounds like a fun weekend project. It doesn't sound easier or better or less costly than using SQLite in a production app though.
> I suppose it’s there to avoid round-trip to the DB.
That assumption is false. The article states that the DB is hit either way.
From the article:
> The reason behind having a checksum is that it allows you to verify first whether this API key is even valid before hitting the DB,
This is absurdly redundant. Caching DB calls is cheaper and simpler to implement.
If this were a local validation check, where the API key signature would be verified against a secret to avoid a DB roundtrip, then I could see the value in it. But that's already well into the territory of an access token, which in turn would be enough to reject the whole idea.
If I saw a proposal like that in my org I would reject it on the grounds of being technically unsound.
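For reference, the local-validation variant described above is only a few lines. This is a hedged sketch, not a vetted design: the secret, key format, and signature truncation are illustrative assumptions, and the DB remains the source of truth for revocation.

```python
import hashlib
import hmac
import secrets

SERVER_SECRET = b"example-secret"  # assumption: held only server-side

def issue_key() -> str:
    # Key = random id + HMAC signature over that id.
    key_id = secrets.token_hex(8)
    sig = hmac.new(SERVER_SECRET, key_id.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{key_id}.{sig}"

def plausibly_valid(key: str) -> bool:
    # Cheap local check that rejects garbage before any DB lookup;
    # a passing key must still be checked against the DB for revocation.
    try:
        key_id, sig = key.split(".")
    except ValueError:
        return False
    expected = hmac.new(SERVER_SECRET, key_id.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(sig, expected)

k = issue_key()
print(plausibly_valid(k), plausibly_valid("garbage.nope"))
```

This only filters out malformed or forged keys; unlike a plain checksum, an attacker without the secret can't mint keys that pass the local check.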
If it's an internet-required smarthammer without a handle that instead swings on voice prompt, sometimes with too little or too much force, sometimes knocking the nail aside and punching a hole, and sometimes hitting you in the face, then yeah
> If it's an internet-required smarthammer without a handle that (...)
A suitable comparison would be being handed a nailgun and proceeding to criticize it on the grounds that it doesn't have a handle, it doesn't pull nails, and it requires electricity to run.
While you complain about those details, those using nailguns are an order of magnitude more productive at the same task, and can still carry a hammer in their toolbelt.
Well, no, the examples are basically the same. A wifi nailgun that sometimes shoots a nail straight through the board, doesn't shoot one at all, shoots it randomly to the side, shoots you with it, etc.
You can use a brick as a hammer. That doesn't mean a brick makes a good hammer, or that a person who tells you a brick is a bad hammer that doesn't work for them is a bad workman.
> In that: if it fails, it is only considered evidence that you were not doing it enough.
I think you are purposely omitting the fact that those failures have root causes that come from violating key Agile principles.
Things like "we started a project without a big design upfront and we accepted all of the product owner's suggestions, but whe were overworked and ran out of time, and the product owner refused to accommodate requests, and when we finally released the product owner complained the deliverable failed to meet the requirements and expectations".
This scenario is very mundane, and it showcases a set of failures that are clearly "not doing enough Agile" because it violates basically half of Agile's principles.
> The solution can never be at fault, it's your execution, or your devotion to the process (in this case) that was faulty.
Agile is a set of principles focused on the process and its execution. What compels you to talk about Agile and pretend that processes and execution are anything other than the core topic?
If your stakeholders change requirements but don't change allocated resources and fail to stay up to date in the daily affairs to monitor and adapt, how is this anything other than glaring failures to adhere to basic Agile principles?
How much latency does this add?