> What code is truly about is precision: code is unambiguous, even when it’s abstract. It’s easy to conflate ambiguity and abstraction—both refer to “a single statement that could have multiple meanings.” But the meanings of an ambiguous statement are entirely unconstrained.
I used to believe this, but after working at a successful SaaS company I have come to believe that correctness and unambiguity are not strictly necessary for a successful software product.
It was a very sad realization that systems can be flaky as long as there are enough support people to solve edge-case problems, and that features can be delivered while breaking other features, as long as few enough users land in the middle of that Venn diagram.
The fact is, it always comes down to economics: your software can afford to be exactly as broken and unpredictable as your users will tolerate while still being willing to pay money for it.
Right. One thing I learned over the years is that you can carry an arbitrarily high level of tech debt and still effectively maintain and enrich a successful software system, as long as you throw enough warm bodies at it.
The overhead will get absurd: you'll end up with a 10x or greater increase in engineers working on the system, all of them making slow progress while spending 90% of their time debugging, researching, or writing docs and holding meetings to work out this week's shared understanding of the underlying domain semantics. But progress they will make, the system will be kept running, and new features will be added.
If the system is valuable enough to the company, the economic math actually adds up. I've seen at least one such case personally: a system that survived decades and multiple attempts at redoing it from scratch, and keeps going strong, fueled by a massive amount of people-hours spent in meetings.
Adding AI to the mix today mostly just shifts the individual time balance toward more meetings (Amdahl's Law meets Parkinson's Law). But ironically, the existence of such systems, and the points made in the article, actually reinforce the idea that AI is key to, if not improving this, then at least keeping it going: it'll help shorten the time needed to re-establish consensus on the current global semantics, and update code at scale to stay consistent.
This is the realization that pretty much all engineers go through as they become more senior. The area where the business really can't afford issues is very small: usually only accounting/billing, and even that is solved not by great code or design but by auditing it once and never touching it again.
In the end, most of the challenges holding a business back from better code quality are organizational, not technical.
> In the end, most of the challenges holding a business back from better code quality are organizational, not technical.
This is true. And I get sad every time it is used as an argument not to improve tooling. It feels like a sort of self-fulfilling prophecy: an organizational problem that prevents us from investing in technical improvements... is indeed an organizational problem.
Well, yeah, but I'm not sure why that's sad. One can't find all the edge cases at the beginning, only through usage of the app, fixing them over time. Be glad at least someone is using the app, because that means the role of software is being fulfilled: to be a tool that helps people accomplish some goal. Much of the software that gets written isn't ever used by a single person.
You can also run a successful manufacturing company without considering environmental impact. A successful tobacco company without considering public health. A successful social media persona without considering cultural impact. A mobile app slop farm that floods the app store. If economics is your priority, then everything comes down to that.
That comment seems off-topic, but just to illustrate:
In your example, even though the interface of those products is unstable (UI that changes all the time, a slightly broken API), the products themselves are coded in a language like C++ or Java, which benefits from compiler error checking. The seams where a product connects with other systems are where it's unstable. That's the point of this blog post.
Yeah, that's true: a product can be successful with truly bad code, but it also makes developers' lives miserable every time they need to add a new feature, solve a bug, or simply understand how that entangled mess works.
Management and sales may not appreciate good software design and good code, but the next developer who has to work on the system will.
I wonder if the Facebook redesign also consumed a lot of manual labor, and whether it is now mostly done, so they no longer need as many people to maintain that product.
There is also a huge surface area of security problems that can't happen in practice because of how other parts of the code work. A classic example is unsanitized input being used somewhere that untrusted users can't inject any input into.
Being flooded with these kinds of reports can make the actual, real problems harder to see.
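As a toy sketch of that pattern (all names here are invented for illustration, not taken from any real report): a scanner might flag the string interpolation below as SQL injection, but the interpolated value is checked against a closed allowlist first, so no untrusted input can ever reach it:

```typescript
// Hypothetical example: a scanner sees SQL built by interpolation and
// reports injection, but the value can only be one of these constants.
const REPORT_TABLES: readonly string[] = ["daily_sales", "weekly_sales"];

function buildReportQuery(table: string): string {
  // Arbitrary strings are rejected before reaching the interpolation.
  if (!REPORT_TABLES.includes(table)) {
    throw new Error(`unknown table: ${table}`);
  }
  // Looks unsafe in isolation; unexploitable given the check above.
  return `SELECT * FROM ${table} LIMIT 100`;
}
```

Whether such a finding counts as a "vulnerability" is exactly what the reply below debates; the point here is only that the exploitability depends on code far from the flagged line.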
They wouldn't be classed as vulnerabilities then, since, you know, there is no vulnerability. Unless you have evidence that most of these issues are unexploitable, but I would be surprised to hear that they were considered vulnerabilities in that case.
This actually came up at multiple companies I worked at in Sweden. Apparently the law here is quite strict: you _can_ use your work computer for personal matters, and your employer is not allowed to spy on you in those matters.
So they can monitor your email and Slack server-side, but not client-side activity that doesn't touch their servers. However, if you use the company VPN, they can also monitor your DNS requests and every website you visit. Client-side telemetry is limited to a few things, though those things can, for security reasons, include which applications you have installed (like Spotify) or which USB sticks are plugged in.
I have also been using Bazzite since March on my home desktop, and you are spot on. I think the main thing that makes Linux difficult for the average person these days is laptops with weird hardware configurations.
I use macOS at work, and although it is miles better than Windows, if I had a choice I would use Linux for work as well.
It is absurd that there is no standardized UI toolkit, or rather that the web browser _is_ the standard, with its characteristic _lack_ of user-interaction idioms.
The fact that there are multiple platforms for UIs* is a huge failure of the industry as a whole. Apple, Microsoft, and Google could have sat down together at any point in the last 20+ years to push some kind of standard, but they decided not to in order to protect their walled gardens.
*: a standardized UI platform doesn't necessarily mean a standardized platform. Just standardization of UI-related APIs and drawing.
My guess 10 or so years ago was that Google would be the first to bake Material UI into the browser with web components, and that every other browser would essentially reuse that and extend it with whatever style they wanted. It really seemed like the way the web (and Google) was heading. Instead we got bad Material UI knock-offs in about 45 different UI frameworks.
I just migrated to Linux (Bazzite) in March; I have an RTX 3080. The only issue I ran into is that Display Stream Compression is not supported on Linux, so I can't run 1440p at 165 Hz with HDR on, because my monitor doesn't support HDMI 2.1. I either need to turn off HDR or lower the refresh rate to 120 Hz.
I saw a teardown of a Ukrainian drone a while ago and was surprised how similar the setup was to the IoT project I worked on. I could set up a good chunk of the software side of a similar system myself, and I am not that specialized an engineer.
> Each of these having only one consumer means they’re the equivalent of inline code but cost us more to acquire (npm requests, tar extraction, bandwidth, etc.).
It costs FAR more than dependency install time. There is a runtime cost too, and in frontend code that goes through a bundler it also costs extra bundle space and extra build time.
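As a hypothetical illustration (the helper below is invented, not from the thread): a single-consumer micro-dependency can often be replaced by the few lines its one call site actually needs, eliminating the install, audit, and bundle cost entirely:

```typescript
// Instead of `import leftPad from "left-pad"` (one more package to
// fetch, audit, and bundle for a single call site), inline the logic:
function leftPad(s: string, width: number, fill = " "): string {
  return s.length >= width ? s : fill.repeat(width - s.length) + s;
}

console.log(leftPad("42", 5, "0")); // "00042"
```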
Not really related to the topic, but I recently set up a baby cam with ffmpeg by just telling it to stream to the broadcast address on my home network, and I can now open the stream in VLC on any device in the household.
A very heavy-handed solution, but super simple: a single one-liner. Just thought I'd share a weird trick I found.
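The original command isn't shown, but a minimal sketch of such a one-liner might look like this (the camera device, codec settings, port, and the /24 broadcast address 192.168.1.255 are all assumptions about a typical setup):

```shell
# Stream a V4L2 webcam as MPEG-TS over UDP to the LAN broadcast address;
# every host on the subnet receives the packets.
ffmpeg -f v4l2 -i /dev/video0 \
  -c:v libx264 -preset ultrafast -tune zerolatency \
  -f mpegts 'udp://192.168.1.255:8000?broadcast=1'
```

Any device on the network can then open `udp://@:8000` in VLC.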
It would be a little more efficient to use a multicast address. Even if you don't have any special multicast routing set up, machines that haven't joined the group should be able to discard the traffic a bit earlier in the pipeline.
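Concretely, this means pointing the sender at a multicast group instead of the broadcast address. A sketch, assuming an administratively scoped group in 239.0.0.0/8 (the range reserved for local use) and a hypothetical V4L2 camera setup:

```shell
# Send to a multicast group instead of broadcast; NICs on hosts that
# haven't joined the group can drop the packets early, often in hardware.
# ttl=1 keeps the traffic from leaving the local segment.
ffmpeg -f v4l2 -i /dev/video0 \
  -c:v libx264 -preset ultrafast -tune zerolatency \
  -f mpegts 'udp://239.0.0.1:8000?ttl=1'
```

Receivers join the group simply by opening `udp://@239.0.0.1:8000` in VLC.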
Yeah, I tried that for a bit but didn't manage to get multicast to work. My internal network is fine even with the extra broadcast traffic, and this isn't a permanent installation; eventually I won't need the baby cam anymore.
I saw some other solutions that use nginx to serve the stream, but that was much more complicated; just broadcasting is a one-liner (in a systemd unit).