That's definitely true for some of them, but for others, like the Apollo or Manhattan projects, it's not so clear. Those of course also have lasting impact, but it's more in terms of knowledge, which, at least arguably, we are also accruing with these data centers.
RS-25 - designed as the HG-3 during the 60s for the Saturn V, manufactured for the Space Shuttle, refurbished for SLS, and just launched last month.
Vehicle Assembly Building - built for Saturn V launches, it has been in active use ever since and continues today.
Crawler-transporters - Hans and Franz were built in 1966 for Apollo and are still used for launches.
There are plenty of other examples from the Apollo program of actual hardware being repurposed and used for later missions.
Among other mega space projects, Hubble is still doing active research 35 years after launch, and Voyager is still sending data close to 50 years later.
Whether they should still be used, how NASA is funded, and why programs like SLS or the Shuttle are so expensive is a whole other topic.
The point is that these mega projects had a long lifetime of value, albeit with higher maintenance costs for the tech-heavy ones like Apollo than, say, a bridge or a dam.
The user's name is the name of an Anton Labs project [0]. Furthermore, the fact that the user is inconsistently bad at formatting, punctuation and capitalisation makes me suspect the user may itself be agentic-LLM-based, with a badly calibrated layer of "obfuscation" to pretend to be an oh-so-imperfect human.
Doesn't continuous time basically mean "this is what we expect for sufficiently small time steps"? Very similar to how one would, for example, take the first-order Taylor dynamics and use them for "sufficiently small" perturbations from equilibrium. Is there any other magic to continuous-time systems that one would not expect to be solved by sufficiently small time steps?
You should look into condition numbers & how that applies to numerical stability of discretized optimization. If you take a continuous formulation & naively discretize you might get lucky & get a convergent & stable implementation but more often than not you will end up w/ subtle bugs & instabilities for ill-conditioned initial conditions.
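To make that concrete, here is a minimal sketch (a standard textbook example, not taken from the linked analysis): explicit Euler applied to the stiff ODE y' = -λy. The discrete update multiplies y by (1 - hλ) every step, so it diverges whenever |1 - hλ| > 1, even though the continuous solution decays to zero for any λ > 0.

```python
def euler_decay(lam, h, steps, y0=1.0):
    """Explicit Euler for y' = -lam * y: each step multiplies y by (1 - h*lam)."""
    y = y0
    for _ in range(steps):
        y += h * (-lam * y)
    return y

# Stable regime: h*lam = 0.5, factor 0.5 per step -> decays like the true solution.
stable = euler_decay(100.0, 0.005, 400)

# Unstable regime: h*lam = 5, factor -4 per step -> |y| grows like 4^n.
unstable = euler_decay(100.0, 0.05, 400)

print(abs(stable) < 1e-100)   # True: decays toward 0
print(abs(unstable) > 1e100)  # True: blows up despite a "small" step of 0.05
```

The stiffer the problem (larger λ, i.e. worse conditioning), the smaller the step you need just to stay stable, independent of how accurate you want the answer to be.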
I understand that much, but it seems like "your naive timestep may need to be smaller than you think or you need to do some extra work" rather than the more fundamental objection from OP?
The translation from continuous to discrete is not automatic; there is a missing verification in the linked analysis. The mapping must be verified for stability for the proper class of initial/boundary conditions. Increasing the resolution from 64-bit floats to 128-bit floats doesn't automatically give you a stable discretized optimizer from a continuous formulation.
The abstract formulation is different from the concrete implementation. It is precisely b/c the abstractions do not exist on computers that the abstract analysis does not automatically transfer the necessary analytical properties to the digital implementation. Cauchy sequences & Dedekind cuts are abstract & do not exist on digital computers.
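A quick way to see that precision is not the issue (a toy illustration, my own, not from the linked analysis): run the explicit-Euler recursion y_{n+1} = (1 - hλ)y_n for y' = -λy at 50 significant digits using Python's decimal module. With hλ = 5 the step factor is exactly -4, so the iterate grows as 4^n no matter how many digits you carry - the divergence comes from the discretization itself, not from rounding.

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # ~166 bits of working precision, far beyond float64

h, lam = Decimal("0.05"), Decimal(100)
y = Decimal(1)
for _ in range(400):
    y = y * (1 - h * lam)  # the factor is exactly -4 here: no rounding error at all

print(abs(y) > Decimal(10) ** 100)  # True: still diverges at high precision
```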
Infinity has properties that finite approximations of it just don't have, and this can lead to serious problems for certain theorems. In the general case, the integral of a continuous function can be arbitrarily different from the sum of a finite sequence of points sampled from that function, regardless of how many points you sample - it's even possible that the discrete version is divergent while the continuous one is convergent.
I'm not saying that this is the case here, but there generally needs to be some justification to say that a certain result that is proven for a continuous function also holds for some discrete version of it.
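A toy illustration of that gap (my example, not the parent's): sample cos(2πnx) at the n uniform grid points of [0, 1). Every sample lands on a point where the function equals 1, so the discrete average is 1, while the true integral over [0, 1] is 0 - and this aliasing trick works for any fixed number of samples n.

```python
import math

def riemann_avg(f, n):
    """Average of f over n left-endpoint samples of [0, 1)."""
    return sum(f(k / n) for k in range(n)) / n

n = 1000
f = lambda x: math.cos(2 * math.pi * n * x)  # oscillates n times on [0, 1]

avg = riemann_avg(f, n)
print(abs(avg - 1.0) < 1e-9)  # True: the samples say "constant 1", yet the integral is 0
```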
For a somewhat famous real-world example, it's not currently known how to produce a version of QM/QFT that works with discrete spacetime coordinates; the attempted discretizations fail to maintain the properties of the continuous equations.
What really gets me is that the time between Windows 95 and now is longer than between Voyager launching and Windows 95. Same for the Moon landings, for that matter.
Are you sure that availability of resources was a limiting factor during a large part of human evolution?
i.e. what has driven human population growth - a fundamental change in the availability of natural resources, or a fundamental change in how humans exploited them?
I'd argue it's the latter, and that it's driven by accumulated knowledge - and before writing, the key repository of that knowledge was old people.
Humans have selective adaptations to reduce resource competition between older and younger members of populations - examples are menopause and testosterone levels.
Part of the reason it benefited us that some but not all people become old is that people require more attention during two phases of their lives. Our biological evolution has prioritized care for the very young over the very old, with respect to limited resources (like attention), effectively until the modern age. In some cultures, for instance, those with teeth had to pre-chew food for those without, or members were expected to engage in ritual suicide at a certain age.
I think it's a (common) mistake to view any organism at a point in time as perfectly adapted.
It's like saying a car's pistons are designed to wear out - because they do, and since the car is perfectly designed (the mistake), it must be for a reason.
Also take menopause - as it happens, a female already has all the oocytes (eggs) she will ever have at birth. Menopause happens when they run out.
What you are arguing is that the number at birth is optimised via a very indirect feedback loop - as opposed to the very direct one of how many resources to set aside for eggs, in terms of maximising the number of direct children versus resources used. Occam's razor suggests the latter is going to be stronger.
If what you say were true - think about it - old people wouldn't gradually crumble due to wear and tear; they would have evolved some much more efficient death switch. i.e. women don't suddenly die post-menopause.
Sure - though the tuned behaviour around turning the innate immune system up and down is probably dominated by the more recent part of that long history.
Observationally, ad spamminess is inversely correlated with user intent and platform prestige, so I suspect it will take quite a while before things get this bad on the premium platforms.
1) start with a notification that ads are coming (already there)
2) adding 1 ad to start with
3) slowly increase ads
4) make it a huge part of the experience (like Google now)