Why are we, in 2026, still talking about IPv6? It is time to give it up and start over. Yes, it is unlikely we can agree on an IPv4 successor. But at this point we should be able to agree IPv6 is not going to be it.
What do you think should be done instead? If IPv4 but with longer addresses (which is what IPv6 is) is not to your satisfaction, what would be? You want to completely overhaul the internet with some IPv8, and you think IPv8 wouldn't have the exact same problems and thousands more, just because it's never been deployed and nobody's encountered them yet?
IPv6 is over 30 years old. The world has not embraced it. It isn’t going to. It is time to call it and try to figure out something that will work.
You do understand that one need not have a better idea to make that observation? The observation is self-evident and can exist in the world regardless of what we think about IPv6 or the feasibility of figuring out something people will want to use.
Getting defensive and playing “then you come up with something better”-games is not getting us anywhere. It is just part of the problem. It is unproductive.
That would indeed be a disaster, because there is a lot of IPv6 usage, and a lot of support from operating systems and hardware had to exist first. Starting again with something else would introduce a third unsuccessful attempt with... what benefits?
So now please get to step 2 of the argument: why do you think a world that did not embrace IPv6 (a false premise by the way, as 50% of internet traffic is IPv6) would embrace IPvBBORUD?
Again, you are not getting the argument. IPv6 is dead. It is better to give up and see what else can be done. Stop and think about what I'm saying before you react.
Are you suggesting we spend another 30 years beating a dead horse?
Java bent a lot to accommodate. There are lambdas and records which violate the old OOP rules, and now virtual threads which were sorely missing for things like web backends.
I have been keeping an eye on the outages. This is why I am looking more deeply into what I can do with self-hosted models. When I see people who want to build products on top of these services I can't help but think that people are mad. We're still a long way from these services being anywhere near stable enough for use in a product you'd want to sell someone.
Did they ever work? No, seriously. I've had a couple of them and the few times I really could have used them I discovered that they represented the worst backup solution I've ever had the misfortune to deal with. Slow, very hard to use beyond their primary integration with the OS (which isn't good to begin with), there's really no good way to keep an eye on how they are doing (what's actually backed up, if it is still there) and the performance is worse than any hand rolled solution I've ever used.
They never supported it properly in the first place and then it just meh'ed out of existence.
I hope "the new Apple" is going to take software seriously.
35 years in the tech industry has taught me one thing: incumbents that have been around for a long time are almost always more clueless and more full of shit than you think. What they do isn't as hard as they claim, and you can probably do better in a fraction of the time they spent, just because you don't have legacy systems to worry about and because technology and tooling have moved on.
Incumbents thrive on the myths about what they do being hard and impossible to replicate.
Yes, it is a lot of work to replace what you can get off the shelf today. But it isn't like the basic tech itself is all that hard to replicate step by step, if you accept that it takes time and that the first N development stages will give you something that isn't as feature-rich and polished. And if one makes it open source, interoperability becomes easier to do something about.
Perhaps some of the analysis tools/services you can buy today will be hard to replicate, but I doubt they are that hard either. And it is better to have slightly suboptimal results for a couple of seasons than to be on the receiving end of a hostage situation.
But yes, it is certainly a huge effort to get what you actually need.
The Pareto principle applies. For highly complex systems it's easy to build most of what the incumbents have. It's the last 20% where it is hard to catch up, just because the incumbents have decades of a head start and the momentum. And even more so here, because it's not just software; it's very science- and hardware-heavy.
For farming, it's even tougher because the market has a really uneven distribution. Usually the best place to tackle huge incumbents is the midmarket: customers big enough to need your automation, but small enough to take a risk to save some money, and for whom the features you haven't built yet aren't blockers.
But there's basically no midmarket in farming; farms are pretty much either really big or really small.
I would be surprised if this doesn't exist already in some nascent form?
This is an area where you would probably need an entire ecosystem of systems: onboard systems for the tractors, but also for the various implements you hook up to them to monitor sowing, fertilizing, spraying etc. Including backend systems that you can either self-host or subscribe to from some service that doesn't have awful terms.
It shouldn't take an immense amount of capital to make some real progress towards something that can make a difference.
Now that everyone seems to be discovering Hetzner I guess the countdown clock for enshittification has started ticking, so we have to start looking for the next place to escape.
And then you introduce two extra levels of nested loops and suddenly "i", "j", and "k" don't make any sense on their own, but "ProductIndex", "BatchIndex" and "SeriesIndex" do.
ijk for indices in loops are actually clearer than random names in nested loops precisely because it is a *very common convention* and because they occur in a defined order. So you always know that "j" is the second nesting level, for instance. Which relates to the visual layout of the code.
You may not have known of this convention or you are unable to apply "the principle of least astonishment". A set of random names for indices is less useful because it communicates less and takes longer to comprehend.
Just like most humans do not read text one letter at a time, many programmers also do not read code as prose. They scan it rapidly looking at shapes and familiar structures. "ProductIndex", "BatchIndex" and "SeriesIndex" do not lend themselves to scanning, so you force people who need to understand the code to slow down to the speed of someone who reads code like they'd read prose. That is a bit amateurish.
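For instance, a conventional triple nesting (a minimal Go sketch; products, Batches, Series and process are made-up names echoing the parent comment):

for i := 0; i < len(products); i++ {
    for j := 0; j < len(products[i].Batches); j++ {
        for k := 0; k < len(products[i].Batches[j].Series); k++ {
            // the indentation alone tells you k is the third level
            process(products[i].Batches[j].Series[k])
        }
    }
}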
> ijk for indices in loops are actually clearer than random names in nested loops precisely because it is a very common convention and because they occur in a defined order. So you always know that "j" is the second nesting level, for instance. Which relates to the visual layout of the code.
In problem domains that emphasize multidimensional arrays, yes.
More often nowadays I would see `i` and think "an element of some sequence whose name starts with i". (I tend to use `k` and `v` to iterate keys and values of dictionaries, but spell `item` in full. I couldn't tell you why.)
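In Go terms that habit would look something like this (a tiny sketch; ages and items are made-up collections):

for k, v := range ages {
    fmt.Println(k, v)
}
for _, item := range items {
    fmt.Println(item)
}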
I partly agree, and partly don't. When ijk really is unambiguous and the order is common (say you're implementing a well-known algorithm), I totally agree: the convention aids understanding.
But nesting order often doesn't control critical semantics. In my experience it has much more often implied a heuristic about the lengths or types (map, array, linked list) of the collections (i.e. mild tuning for performance, not correctness), and the loops could be done in any order with different surrounding code. There the letters are meaningless, or possibly worse, because you can't expect similar code elsewhere to use the same nesting order.
I think I know what you mean. Let's assume a nesting structure like this:
Company -> Employee -> Device
That is, a company has a number of employees who have a number of devices, and you may want to traverse all devices. If you are not interested in where in the list/array/slice a given employee or device is, the index is essentially a throwaway variable. You just need it to address an entity. You're really interested in the Employee structure -- not its position in a slice. So you'd assign it to a locally scoped variable (pointer or otherwise).
In Go you'd probably say something like:
for _, company := range companies {
    for _, employee := range company.Employees {
        for _, device := range employee.Devices {
            // .. do stuff with device
        }
    }
}
ignoring the indices completely and going for the thing you want (the entity, not its index).
Of course, there are places where you do care about the indices (since you might want to do arithmetic on them). For instance if you are doing image processing or work on dense tensors. Then using the convention borrowed from math tends to be not only convenient, but perhaps even expected.
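For instance, a row-major grayscale buffer (a sketch; pixels, width and height are made up), where the indices feed straight into offset arithmetic:

for y := 0; y < height; y++ {
    for x := 0; x < width; x++ {
        // the offset arithmetic is the whole point of having the indices
        pixels[y*width+x] = 255 - pixels[y*width+x] // invert
    }
}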
I think this may be related to how people read code. You have people who scan shapes, and then you have people who read code almost like prose.
I scan shapes. For me, working with people who read code like prose is painful because their code tends to have less clear "shapes" (more noise) and reads more like a verbal description.
For instance, one thing I've noticed is a preference for "else if" chains rather than switch structures, because they reason in terms of words. And convoluted logic that almost makes sense when you read it out loud, but not when you glance at it.
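To make that concrete (a sketch with made-up handler names), the same dispatch written both ways:

if status == 200 {
    serveOK()
} else if status == 404 {
    serveNotFound()
} else {
    serveError()
}

versus

switch status {
case 200:
    serveOK()
case 404:
    serveNotFound()
default:
    serveError()
}

The switch reads as one regular shape; the chain has to be parsed clause by clause.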
This is also where I tend to see unnecessarily verbose code like
func isZero(a int) bool {
    if a == 0 {
        return true
    } else {
        return false
    }
}
strictly speaking not wrong, but many times slower to absorb. (I think most developers screech to a halt and their brain goes "is there something funny going on in the logic here that would necessitate this?")
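The equivalent shape a scanner expects is simply:

func isZero(a int) bool {
    return a == 0
}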
I deliberately chose to learn "scanning shapes" as the main way to orient myself because my first mentor showed me how you could navigate code much faster that way. (I'd see him rapidly skip around in source files and got curious how he would read that fast. Turns out he didn't. He just knew what shape the code he was looking for would be).
> strictly speaking not wrong, but many times slower to absorb. (I think most developers screech to a halt and their brain goes "is there something funny going on in the logic here that would necessitate this?")
I agree with this, but I can't see how it applies to variable naming. Variable names can be too long, sure, but in my opinion very short, non-obvious variable names also make scanning and reading harder, since they are not familiar shapes the way more complete words are. Additionally, when trying to understand more deeply, you have to stop and read the code more often if a variable's meaning is not clear.
That said, 1-2 char variable names work well in short scopes, like in some lambda, or when using 'i' for an index in a loop (nested loops would depend on the situation), but those are the exception.
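For instance (a Go sketch; xs is a made-up slice), a throwaway comparator where i and j live for exactly one line:

sort.Slice(xs, func(i, j int) bool {
    return xs[i] < xs[j]
})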
Like always, this is probably subjective too. And a well-organized codebase probably helps keep functions shorter, but there's often not much I can do about an existing codebase having overgrown functions all over.
> I think this may be related to how people read code. You have people who scan shapes, and then you have people who read code almost like prose.
I think this is an astute observation.
I think there is another category of "reading" that happens: whether what you're reading is for "interaction" or "isolation".
Sure, c.method is a scannable shape, but if your system deals with Cats, Camels, Cars, and Crabs, that same c.method, when it's an abstract API call divorced from the underlying representation, might not be as helpful.
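For instance (made-up types), once the call sits behind an abstraction, the shape alone no longer tells you what you're interacting with:

type Mover interface {
    Move()
}

func step(c Mover) {
    // c could be a Cat, Camel, Car, or Crab here;
    // the c.Move() shape no longer hints at the representation
    c.Move()
}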
I would think that we would have more and better research on this, but the only paper I could find was this: https://arxiv.org/pdf/2110.00785 (a meta-analysis of 57 other papers; a decent primer, but nothing groundbreaking).
> I scan shapes. ... verbal description.
I would be curious whether you frequently use a debugger, because I tend to find the latter (descriptive) style much more useful in that context.
I think this is pretty insightful, and I might add this as another reason LLM code looks so revolting. It's basically writing prose in a different language, which makes sense - it's a _language_ model, it has no structural comprehension to speak of.
Whereas I write code (and expect good code to be written) such that most information is represented structurally: in types, truth tables, shape of interfaces and control flow, etc.
Furniture maker, house framer, finish carpenter are all under the category of woodworking, but these jobs are not the same. Years of honed skill in tool use makes working in the other categories possible, but quality and productivity will suffer.
Does working in JS on the front end teach you how to code? It sure does. So does working in an embedded system. But these two jobs might be further apart than any of the ones I highlighted in the previous category.
There are plenty of combinations of systems and languages where your rule about a screen just isn't going to apply. There are plenty of problems that create scenarios where "ugly loops" are a reality.
I didn't say it was an absolute. But once a scope grows to the point where you have to navigate around to absorb a function or a loop, both readability and complexity tend to worsen. As does your mental processing time. Especially for people who "scan" code rapidly rather than reading it.
The slower "readers" will probably not mind as much.
This is why things like function size are usually part of coding standards at a company or on a project (look at Google, Linux etc).
Another point of view: ideally it would just be "Age". But in languages that don't have the ability to "open" scopes, one might be satisfied with p.Age, it being "the age". I've also seen $.age and it.age in languages with constructs that automatically break out "it" anaphora.
Imagine what this is going to look like in 2 years.