Hacker News | timmg's comments

Not the person you were replying to, but: did you read that letter?

It is horribly argued. It's mostly poor analogies and non-sequiturs. It's no wonder Buzzfeed was the only place they could get to publish it.


This tweet shows it as a percentage of US GDP:

https://x.com/paulg/status/2045120274551423142

Makes it a little less dramatic. But also shows what a big **'n deal the railroads were!


GDP adjustments are warranted, but the picture is starker than either estimate suggests.

The megaprojects of the previous generations all had decades-long depreciation schedules. Many 50-100+ year old railways, bridges, tunnels, dams and other utilities are still in active use with only minimal maintenance.

Amortized year over year, the current spend would dwarf everything before it, given the reported depreciation schedule of 6(!) years for the GPUs - the largest line item.
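To make the amortization point concrete, here's a minimal back-of-envelope sketch (the capex figures are purely illustrative placeholders, not sourced numbers):

    # Straight-line amortization: same headline spend, very different annual burden.
    def annual_depreciation(capex, useful_life_years):
        return capex / useful_life_years

    gpu_capex = 300e9    # hypothetical AI buildout capex, dollars
    rail_capex = 300e9   # hypothetical railroad-era capex (adjusted), dollars

    print(annual_depreciation(gpu_capex, 6))    # ~5.0e10/yr on a 6-year GPU schedule
    print(annual_depreciation(rail_capex, 60))  # ~5.0e9/yr on a 60-year infrastructure life

Same nominal spend, but the 6-year schedule makes the recurring year-over-year cost an order of magnitude larger.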


The side effects of spending funds on these mega projects are also something to consider. NASA spending has created a huge pile of technologies that we use day to day: https://en.wikipedia.org/wiki/NASA_spin-off_technologies.

> NASA spending has created a huge pile of technologies that we use day to day

We're a little too early to know if that's the case here too. I do foresee a chance at a reality where AI is a dead end, but after it we have a ton of cheap GPU compute lying about, which we all rush to somehow convert into useful compute (by emulating CPUs or translating traditional algorithms into GPU-oriented ones or whatever).


If all AI progress somehow immediately halted, the models that have currently been built will still have more economic impact than the Internet.

Not least because the slower the frontier advances, the cheaper ASICs get on a relative basis, and therefore the cheaper tokens at the frontier get.

We have a massive scaffolding capability overhang, give it ten years to diffuse and most industries will be radically different.

Again, all of this is obvious if you spend 1k hours with the current crop; this isn't making any capability gain forecasts.

Just for a dumb example, there is a great ChatGPT agent for Instacart: you can share a photo of your handwritten shopping list and it will add everything to your cart. Just following through on the obvious product conclusions of this capability for every grocery vendor's app - integrating with your fridge, learning your personal preferences for brands, recipe recommendation systems, logistics integrations with your forecasted/scheduled demand, etc. - is, I contend, going to be equivalent in engineering effort and impact to the move from brick and mortar to online stores.


i feel a lot of people in tech have this incuriously deterministic attitude about llms right now… previous <expensive capital project> revolutionized the world, therefore llms will! despite there really being nothing to show for it so far other than that writing rote code is a bit easier and still requires active babysitting by someone who knows what they are doing

You have to agree that it's totally possible that none of those things you are envisioning getting built out actually end up working as products, right?

AI (LLM) progress would stop, and then everything people try to do with those last and most capable models would end up uninteresting or at least temporary. That's the world I'm calling a "dead end".

No matter how unlikely you think that is, you have to agree that it's at least possible, right?


> then everything people try to do with those last and most capable models would end up uninteresting

I believe that some of my made up examples won’t end up getting built, but my point is that there is _so much_ low hanging fruit like this.

Of course, anything is _possible_, but let’s talk likelihood.

In my forecast the possible worlds where progress stops and then the existing models don’t end up making anything interesting are almost exclusively scenarios like “Taiwan was invaded, TSMC fabs were destroyed, and somehow we deleted existing datacenters’ installed capacity too” or “neo-Luddites take over globally and ban GPUs”, all of this gives sub-1% likelihood.

You can imagine 5-10% likelihood worlds where the growth rate of new chips dramatically decreases for a decade due to a single black-swan event like Taiwan getting glassed, but that’s a temporary setback not a permanent blocker.

Again, I’m just looking at all the things that can obviously be built now, and just haven’t made it to the top of the list yet. I’m extremely confident that this todo list is already long enough that “this all fizzles to nothing” is basically excluded.

I think if model progress stops then everyone investing in ASI takes a big haircut, but the long-term stock market progression will look a lot like the internet after the dot com boom, ie the bloodbath ends up looking like a small blip in the rear view mirror.

I guess, a question for you - how do you think about coding agents? Don’t they already show AI is going to do more than “end up uninteresting”?


> Of course, anything is _possible_, but let’s talk likelihood.

The problem with talking likelihood is that it's an interpretation game. I understand you think it's wholly unlikely that it all fizzles out, I could read that from your first post. I hope it's also clear that I do think it's likely.

That's the point where we have to just agree to disagree. We have no rapport. I have no reason to trust your judgment, and neither do you mine.


I agree to disagree.

However I do feel a lot of this comes down to facts about the world now, eg whether Claude Opus is doing anything interesting, which are in principle places where you could provide some evidence or ideas, along the lines of the detail that I gave you.

My read so far is you are just saying “maybe it fizzles out” which is not going to persuade anyone who disagrees. Sure, “maybe”, especially if you don’t put probabilities on anything; that statement is not falsifiable.

> The problem with talking likelihood is that it's an interpretation game

I am open to updating my model in response to a causal argument, if you care to give more detail. I view likelihoods as the only way to make these sorts of conversations concrete enough that anyone could hope to update each other’s model.


Even if chatbot LLMs stop at their current capability, there's a whole ecosystem of scientific language models (in drug discovery, chemistry, materials design, etc.) and engineering language models (software, chip design, etc.) that are very valuable in their fields.

And even if chatbot LLMs seem to be a dead end, they and other machine learning algos will be happy to use the data centers to create/discover a lot of stuff.


AI progress may fizzle out, but everything it produced so far would still be there. Models are just big bags of floats - once trained, they're around forever (well, at least until someone deletes them), same is true about harnesses they run in (it's just programs).

But AI proliferation is not stopping soon, because we've not picked up even the low hanging fruits just yet. Again, even if no new SOTA models were to be trained after today, there's years if not decades of R&D work into how to best use the ones we have - how to harness the big ones, where to embed the small ones, and of course, more fundamental exploration of the latent spaces and how they formed, to inform information sciences, cognitive sciences, and perhaps even philosophy.

And if that runs out or there is an Anti AI Revolution, we can still run those weather models and route planners on the chips once occupied by LLMs - just don't tell the proles that those too are AI, or it's guillotine o'clock again.


> there's years if not decades of R&D work into how to best use the ones we have - how to harness the big ones, where to embed the small ones, and of course, more fundamental exploration of the latent spaces and how they formed, to inform information sciences, cognitive sciences, and perhaps even philosophy.

I think my sense of "dead end" would entail none of those directions panning out into anything interesting. You would "explore the latent spaces" only to find nothing of value. Embedding the LLM models wouldn't end up doing anything useful for whatever reason, and philosophy would continue on without any change.


e.g. the climate models that could be run on some of these systems would dwarf anything we've been able to do so far.

I think there is little chance it is a "dead end" - it's here to stay - but LLMs at least seem to have hit the diminishing-returns curve already, despite what investors might think, and so far none of the big providers actually makes money for all that investment.

I think for many, if LLMs and AI only improve marginally in the next 5-10 years, it is effectively a dead end. The capital expenditure necessitates that AI do something exponentially more valuable than what it does now.

I think we are saying the same thing. I just think the pullback on AI will be dramatic unless something amazing happens very soon.


I just don’t see it. Both professionally and personally I’m producing so much more now. Back burner projects that weren’t worth months of my time are easily worth a few hours and $20 or whatever.

Why would I pull back?


You’re probably already experienced at your job and using AI to enhance that, or at least using that experience to keep the AI results clean. That’s something you or a company would want to pay for but it has to be a lot more than today’s prices to make it profitable. Companies want to get more out of you, or get a better price/performance ratio (an AI that delivers cheaper than the equivalent human).

But current gen AIs are like eternal juniors, never quite ready to operate independently, never learning to become the expert that you are; they are practically frozen in time at the capabilities gained during training. Yet these LLMs replaced the first few rungs of the ladder, so human juniors have a canyon to jump if they want the same progression you had. I'm seeing inexperienced people just using AI like a magic 8 ball. "The AI said whatever". [0] LLMs are smart and cheap enough to undercut human juniors, especially in the hands of a senior. But they're too dumb to ever become a senior. Where's the big money in that? What company wants to pay for the "eternal juniors" workforce when whatever it saves on payroll goes to procuring external seniors, which it is no longer producing internally?

So I’m not too sure a generation of people who have to compete against the LLMs from day 1 will really be producing “so much more” of value later on. Maybe a select few will. Without a big jump in model quality we might see “always junior” LLMs without seniors to enhance. This is not sustainable.

And you enhancing your carpentry skills for your free time isn’t what pays for the datacenters and some CEO’s fat paycheck.

[0] I hire trainees/interns every year, and pore through hundreds of CVs and interviews for this. The quality of a significant portion of them has gone way down in the past years, coinciding with LLMs gaining popularity.


You're forgetting that the $20 is not a sustainable price point. Would your back-burner personal app thingy be worth $200?

Yes, I have often paid more than that to have someone else develop a personal side project.

Compute capacity for the same workload always gets cheaper over time.

No piece of compute capacity (in the form of equipment) I have bought this year has been in any way cheaper than last year.

So what. Fluctuations over a year or two are meaningless. Do you really believe that the constant-dollar price of an LLM token will be higher in 20 years?

This is thoroughly debunked at this point. The frontier labs are profitable on the tokens they serve. They are negative when you bake in the training costs for the next generation.

I’ve wondered about this too.

Perhaps if we used something exotic like solid gold cookware, there might be some amazing benefits that people would love.

But it would be far from practical without being wildly subsidized…

With AI, it feels too much like the “grownups” are acting worse than the kids…


Lol are people like you going to be enough to support the large revenues? Nope.

A firm that sees rising operating expenses but not enough increase in revenue will start to cut back on spending on LLMs and become very frugal (e.g. rationing).


Before they cut back on human programmers?

when ai is dead we can use all those gpus for zucc's metaverse xD s

The shovels and labour used to make those things were not depreciated.

The GPUs are the shovels, not the project. AI at any capability will retain that capability forever. It only gets reduced in value by superior developments. Which are built upon technologies that the previous generation developed.


Calling the GPUs the shovels is bonkers because a) shovels are cheap, GPUs are not. And b) when you build a bridge the bridge doesn’t need shovels to be passable. Without GPUs, the datacenter is useless, the model is useless, etc.

If anything, the GPUs are the steel that the bridge is made of. Each beam can be replaced, but if too many fail the bridge is impassible. A bridge with a 6 year lifespan for each beam is insane.


You’re taking the metaphor way too literally. The people who made the most profit weren’t literally selling shovels, they were the ones providing logistics and support services to the gold miners, like hauling tons of equipment over tens of miles of mountain or providing the sales channel for the gold. They siphoned off most of the profit from the ventures that depended on them (like LLMs depend on GPUs) because the miners had no other choice, to the point where even the most productive mines often weren’t profitable at all.

A less literal example is the conquistadors: their shovels were ships, horses, gunpowder, and steel. You can look at Spanish records from the Council of the Indies archive and any time treasures were discovered, the price of each skyrocketed to the point where only the wealthiest hidalgos and their patrons could afford to go on such adventures. I.e. the cost of a ship capable of a cross Atlantic voyage going from 100k pieces of eight to over a million in the span of only a few years (predating the treasure fleet inflation!)

Gold rushes create demand shocks, and anyone who is a supplier to that demand makes bank, regardless of whether it's GPUs or "shovels".


> You can look at Spanish records from the Council of the Indies archive and any time treasures were discovered, the price of each skyrocketed to the point where only the wealthiest hidalgos and their patrons could afford to go on such adventures.

Today this is real estate. And it's something people keep forgetting when arguing that ${whatever breakthrough or just more competition} will make ${some good or service} cheaper for consumers: prices of other things elsewhere will rise to compensate and consume any average surplus. Money left on the table doesn't stay there for long.


GPUs don't really have six year lifespans, though. The hardware itself lasts far longer than that, even hardware that's been used for cryptomining in terrible makeshift setups is absolutely fine for reuse.

Each of these GPUs pulls up to a kilowatt of power. The average commercial power cost is 13.4 ¢/kWh. That means running a single H100 full tilt 24/7 has a power operating cost of roughly $1,100 per card per year.

In three years the current generation of GPUs will be 50% or more faster. In six years you're talking more than 100% faster. For the same energy costs.

If you're running a GPU data center on six year old GPUs, your cost to operate per sellable unit of work is double the cost of a competitor.
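For what it's worth, the arithmetic behind the power figure roughly checks out. A minimal sketch, assuming a sustained ~1 kW draw and the 13.4 ¢/kWh rate quoted above:

    power_kw = 1.0              # assumed sustained draw per card
    hours_per_year = 24 * 365   # 8760
    price_per_kwh = 0.134       # average commercial rate, $/kWh

    annual_power_cost = power_kw * hours_per_year * price_per_kwh
    print(round(annual_power_cost))  # ~1174, in line with the ~$1,100/card/year figure

    # If a newer card does the same work at 2x perf/watt, energy cost per unit of work halves:
    print((annual_power_cost / 1.0) / (annual_power_cost / 2.0))  # 2.0 - the "double the cost" claim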


One thing I am not entirely sure about is whether there will be huge efficiency gains. Just looking at TDP, that is the power consumption of, say, a 3090 vs a 5090, the increase is substantial; then compare it to performance and the performance lift stops looking that great...

A 3x increase in compute for a 1.5x increase in TDP is pretty good considering the underlying process has barely changed. In any case, consumer GPUs aren't a good metric as they operate with different economic constraints.

H100 to GB200 saw a 50x increase in efficiency, for example.


https://www.nvidia.com/en-us/data-center/gb200-nvl72/

Nvidia only advertises 25x efficiency. And that is their word...


Sure. But if that fully depreciated, $1100/year GPU produces $20k of economic benefit, would you decommission it as long as there is demand?

If my data center sells a pflop at $5 because of our electricity use and the data center a state over with newer GPUs sells it at $2.50/pflop, it doesn't matter how much economic benefit it generates, my customers are all going to the data center a state over.

I want to see math on how a single GPU will pull down that much revenue, because that seems like a dubious outcome.

Fair, I was hand waving to make a point. “If it generates more than $1100 + (resale price * WACC) + opportunity cost from physical space/etc” would have been more accurate.

But the point is — you don’t decommission profit generators just because a competitor has a lower cost structure. You run things until it is more profitable for you to decommission them.
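A tiny sketch of that rule, with made-up numbers (the ~$1,100 power cost from upthread, everything else hypothetical):

    def keep_running(revenue, power_cost, resale_price, wacc, space_opportunity_cost):
        # Keep the old card while its marginal revenue beats the cost of keeping it.
        return revenue > power_cost + resale_price * wacc + space_opportunity_cost

    print(keep_running(revenue=5_000, power_cost=1_100,
                       resale_price=4_000, wacc=0.10,
                       space_opportunity_cost=2_000))  # True -> don't decommission yet

Even if a competitor undercuts you on price, the old card stays in service until that inequality flips.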


That all depends on if you're running your own hardware (unlikely) or renting.

In the context of datacenters running AI workloads, it's cheaper to replace them after a few years with faster, more energy-efficient ones, because power cost is a major factor.

> A bridge with a 6 year lifespan for each beam is insane.

Not necessarily. Depends entirely on the value of the transport that the bridge enables.


You need to separate training and inference usage of GPUs for this analysis.

"Inference consumes 60–90% of total AI lifecycle costs." So shovel is not the right analogy, more like GPU = coal burning engine. And yes, coal was a big railroad expense, more so than financing construction debt.

> retain that capbibilty forever

Not really. The base training data cutoff will quickly render models useless as they fail to keep up with developments.

Translating some Farsi news articles about the war was hilarious, Gemini Pro got into a panic. ChatGPT either accused me of spreading fake news, or assumed this was some sort of fantasy scenario.


Karpathy - and others - consider the pre-training knowledge as much a liability as an asset. If we could just retain the emergent reasoning and language capability without the hazy recollections the models would likely be stronger.

That's GPT4 thinking. New models use tools to look at current events or latest versions, and rely very little on weight knowledge.

You can pull new information into the context via RAG, but that is expensive and only gives very shallow understanding compared to retraining.

Not really.

For coding I care mostly about reasoning ability which is uncorrelated with cut off


That's definitely true for some of them, but for others it's not so clear, like the Apollo or Manhattan projects? Those of course also have lasting impact but it's more in terms of knowledge, which at least arguably we are also accruing with these data centers.

Not just knowledge.

RS-25 - designed as the HG-3 during the '60s for the Saturn V, manufactured for the Space Shuttle, refurbished for SLS, and just launched last month.

Vehicle Assembly Building - built for Saturn V launches, it has been in active use ever since and continues today.

Crawler-transporters - Hans and Franz were built in 1966 for Apollo and are still used for launches.

There are plenty of other examples from Apollo program of actual hardware being repurposed and used for later missions.

In other mega space projects, Hubble is still doing active research 35 years after launch, and Voyager is sending data close to 50 years later.

It is a whole other topic whether they should still be used, how NASA is funded, why programs like SLS or the Shuttle are so expensive, and so forth.

The point is these mega projects had a long lifetime of value, albeit with higher maintenance costs for the tech heavy ones like Apollo than say a bridge or a dam does.


I’m not sure tax depreciation rates are the best measure here. Those GPUs will be used for much longer than 6 years, and the returns from the businesses built on them will last an order of magnitude longer.

Azure ran K-80/P-100 fleets a bit longer, for 8-9 years. Google does 9 years for TPUs.

In the current generation there are plenty of questions around:

- viability of training-to-inference cascades (the key to extended life), given custom ASICs hitting production like Cerebras did early this year.

- energy efficiency of older chips in tight energy environments; grid capacity constraints favor running newer, more efficient chips, ignoring perhaps a short-term (< 1 year) price shock due to war.

- MTBF: compared to older GPUs, modern nodes are 8-GPU clusters built on 2/3 nm processes and dependent on HBM memory; the tolerances are much lower, especially for training.

- new DCs being spun up are being built under less-than-ideal conditions due to permitting, parts supply and other constraints, which will impact the operating environment.

Notwithstanding all these issues, and even taking a generous 10-year useful life, the expenses dwarf every mega project before it.


The jury is still out on this. Those tax-based depreciation schedules are largely a relic of traditional data centers, where workloads are fairly moderate compared to AI use cases. Additionally, power and rack space constraints can complicate things quite a bit. If next-gen chips are significantly more efficient and you are currently constrained by power availability, you might pull your old servers and replace them with the newer ones regardless of how much useful life you have left.

> Those GPUs will be used for much longer than 6 years

Will it be worth the cost of electricity to run them if newer chips deliver far better flops/watt?


Actually, the physical lifetime (not financial depreciation) of AI data center GPUs is even lower (3 to 4 years).

Like, they break? Or it just becomes more profitable for the data center to replace them?

It will become more expensive to fix than to replace, and more energy-intensive to operate than the newer generation. MTBF matters: the older the fleet gets, the higher the failure rates.

A typical node today is an 8-GPU node; you have to keep replacing failed GPUs, at ever higher frequency, by cannibalizing parts from other GPUs, as nobody is selling new GPUs of that model anymore.

In addition to outright failure, there are higher error rates in computation; in graphics this tends to show up as flickers or screen artifacts and so on.

Azure operated K-80s and P-100s for 9 and 7 years respectively, but those were running as 2-GPU nodes and of course were much simpler compared to today’s HBM behemoths on 2/5 nm process nodes. Google operates their custom ASIC TPUs for about 8-9 years.

With custom inference ASICs like Cerebras hitting production, cascading NVIDIA chips from training to inference to get the 5-6 year useful life is also not clear.


I think there's more nuance to it. The real asset is the models that are being created.

Imagine this world: the bubble "pops" in a couple years. The GPUs stick around for a few more years after that. At the end, we pretty much don't train new foundation models anymore - no one wants to spend the money on the hardware needed to make a real advance.

People continue to refine, distill, and optimize the existing foundation models for the next century or two, just like people keep laying new track over old railway rights-of-way.


Only half of the rail capacity that existed during the railroad boom times was still in use by the 1970s. Lots of it was never really used at all after various railroads went bankrupt. But your point still stands.

That said, I'm pretty sure in a compute-hungry AI world you aren't going to retire GPUs every 6 years anymore. Even if compute capacity jumps such that current H100s only represent 10% of total compute available in 6 years, you're still running those H100s until they turn to dust.

I just think it's hard to compare localized railroad infrastructure to globalized AI capacity and say one was more rational than the other on a % of GDP basis until the history actually plays out.

If you compare global investment in nuclear weapons it would dwarf the manhattan project and AI thus far, and yet, 99.99999% of nuclear weapons investment is just "wasted" capacity in that it has never been "used." But the value it has created in other ways (MAD-enabled peace) has surely been profitable on net. Nobody would have predicted this at the time.

Playing armchair internet pessimist about the "new thing" always makes you feel smart but is usually not a good idea since you always mis-price what you don't know about the future (which is almost everything).


Great point!

Also railways would always have alternative uses at that time - e.g. logistics in warfare.

What other uses do GPU's have that are critical...? lol

In addition to your points, this is why I always laugh when people do backward comparisons. What characteristics do they share in common? Very little.


GPUs do have a use in warfare though. I mean, LLMs are basically offensive weapons disguised as software engineers.

Sure, LLMs can kind of put together a prototype of some CRUD app, so long as it doesn’t need to be maintainable, understandable, innovative or secure. But they excel at persisting until some arbitrary well defined condition is met, and it appears to be the case that “you gain entry to system X” works well as one of those conditions.

Given the amount of industrial infrastructure connected to the internet, and the ways in which it can break, LLMs are at some point going to be used as weapons. And it seems likely that they’ll be rather effective.

FWIW, people first saw TNT as a way to dye things yellow, and then as a mining tool. So LLMs starting out as chatbots and then being seen as (bad) software engineers does put them in good company.


Imagine comparing something that has a useful life of 100+ years vs a thing that wears out, is much less durable, needs replacing much more often and can become obsolete from innovation within its own product category.

Comical. China can continue innovating on GPUs and all this existing spend to stock up on compute is a waste. Again, comical. Moreover, China has energy capacity that the US does not. Meaning all those GPUs that deliver less performance per watt? Yep, going in the bin.

So yeah.. carry on telling me how this is going to yield some supreme advantage lmao.


> GPUs do have a use in warfare though.

Unclassified public cloud GPUs are completely useless when your warfighting workloads are at the SECRET level or above.


They’re unclassified public cloud GPUs today, much the same as the massive industrial base of the United States was churning out harmless consumer widgets in 1939. Those widget makers happened to be reconfigurable into weapon makers, and so wartime production exploded from 2% to 40% of GDP in 5 years [1]. But the total industrial output of course didn’t expand by nearly that much.

I think it’s maybe plausible that private compute feels similar in the next do-or-die global war.

[1] https://eh.net/encyclopedia/the-american-economy-during-worl...


The United States has almost no domestic capability to produce advanced semiconductors. There is no abundance of industrial capacity cranking out GPUs that can be quickly diverted from AI companies into weapon systems.

Even if private compute was at a level of maturity where you could use it for classified workloads, knowing that the infrastructure is being managed by someone in India or China, securely getting data into and out of that infrastructure is still a mostly unsolvable problem.


My point is the existing private DCs can be reconfigured for a different use. Building new gpus is not required to on-shore compute. We already have it. Obviously if the military started contracting out compute onto the hyperscalar clusters it would involve a host of changes. I wasn’t aware that they were letting India and China manage their infrastructure… That seems exceedingly unlikely? That relationship would obviously be severed if the compute was reconfigured for the military.

The US is probably second only to Taiwan in terms of capacity to build advanced semiconductors, and the gap is now closing as Intel gets back on track.

The US is one of the very few countries with the ability to produce advanced semiconductors.

wut? Intel with 18A can do it

Its low yields and tiny volumes are part of what gets the US from “no capacity” to “almost no capacity.”

Yields are constantly improving on a monthly basis (around 7% per month, according to executives), so the capability is definitely there, but yields still need some time.

On the topic of warfare, wars are fought differently now. Compute will be mentioned in the same breath as total manufacturing output if a global war between superpowers erupts. In highly competitive industries this is already the case. Compute will be part of industrial mobilization in the same way that physical manufacturing or transportation capacity were mobilized in WWII. I’m not an expert on military computing but my intuition is that FLOPS are probably even more easily fungible into wartime compute than widget makers, and the US was able to go widgets->weapons on an unbelievable scale last time.

There are plenty of military uses for computing, but I also find it hard to believe anything but a handful of datacenters are or could be a major factor in anything but a completely one-sided war. They are very vulnerable targets that are easy to locate and require large amounts of power and cooling. I also just don't see the application: encryption capabilities far exceed the compute available for decryption, and computing precision and speed with even 20-year-old tech far exceeds the precision of anything you would want to control. Even with tangible benefits, say 10% more or fewer casualties than there would be otherwise, in an exchange with anything resembling a peer military force I'm not sure it matters, because everybody already loses.

Is that in terms of data centres or chips on the battlefield? Surely the latter is most important. Or will war always have perfect connectivity?

You could argue that compute was a decisive factor in World War II even (used in code breaking and designing nuclear weapons).

> What other uses do GPU's have that are critical...? lol

GPUs are essential to every kind of scientific and engineering simulation you can think of. AI-accelerated simulations are a huge deal now.


GPUs that have lives of..?

Now compare that with the life of a railroad. Amusing.


Some of those railroad bridges might never have been constructed without those simulations.

This seems to show the railroads peaking around 9% of GDP. While that's lower than some of the other unsourced numbers I've seen, it's much higher than the numbers I was able to find support for myself at

https://news.ycombinator.com/item?id=44805979

The modern concept of GDP didn't exist back then, so all these numbers are calculated in retrospect with a lot of wiggle room. It feels like there's incentive now to report the highest possible number for the railroads, since that's the only thing that makes the datacenter investment look precedented by comparison.


But doesn't that overstate it in the other direction? Talking about investments in proportion to GDP back when any estimate of GDP probably wasn't a good measure of total economic output?

We're talking about the period before modern finance, before income taxes, back when most labor was agricultural... Did the average person shoulder the cost of railroads more than the average taxpayer today is shouldering the cost of F-35? (That's another line in Paul's post.)


The F-35 case is interesting. Lockheed Martin can, given peak rates seen in 2025, produce a new F-35 approximately every 36 hours, as they fill orders for US allies arming themselves with F-35s. US pilot training facilities are brimming with foreign pilots. It's the most successful export fighter since the F-16 and F-4, and presently the only means US allies have to obtain operational stealth combat technology.

What that means for the US is this: if the US had to fight a conventional war with a near-peer military today, the US actually has the ability to replace stealth fighter losses. The program isn't some near-dormant, low-rate production deal that would take a year or more to ramp up: it's an operating line at full-rate production that could conceivably build a US Navy squadron every ~15 days, plus a complete training and global logistics system, all on the front burner.

If there is any truth to Gen Bradley's "Amateurs talk strategy, professionals talk logistics" line, the F-35 is a major win for the US.


> Lockheed Martin can, given peak rates seen in 2025, produce a new F-35 approximately every 36 hours ... it's an operating line at full-rate production that could conceivably build a US Navy squadron every ~15 days, plus a complete logistics and training system, all on the front burner.

That's amazing. I had no idea the US was still capable of things like that.

I wonder if there's a way to get close to that, for things that aren't new and don't have a lot of active orders. Like have all the equipment set up but idle at some facility, keep assembly teams ready and trained, then cycle through each weapon and activate a couple of these dormant manufacturing programs (at random!) every year, almost as a drill. So there's the capability to spin up, say, F-22 production quickly when needed.

Obviously it'd cost money. But it also costs a lot of money to have fighter jets when you're not actively fighting a war. Seems like manufacturing readiness would be something an effective military would be smart to pay for.


"I had no idea the US was still capable of things like that."

It's more than just the US though. It's the demand from foreign customers that makes it possible. It's the careful balance between cost and capability that was achieved by the US and allies when it was designed.

Without those things, the program would peter out after the US filled its own demand, and allies went looking for cheaper solutions. The F-35 isn't exactly cheap, but allies can see the capability justifies the cost. Now, there are so many of them in operation that, even after the bulk of orders are filled in the years to come, attrition and upgrades will keep the line operating and healthy at some level, which fulfills the goal you have in mind.

Meanwhile, the F-35 equipped militaries of the Western world are trained to similar standards, operating similar and compatible equipment, and sharing the logistics burden. In actual conflict, those features are invaluable.

There are few peacetime US developed weapons programs with such a record. It seems the interval between them is 20-30 years.


Now let's talk about the 155mm artillery shells

I think people were surprised to suddenly have a lot of demand for those.

Sure. Heavy industry. It's important. Maybe don't send it all to Asia because it's dirtier than software and finance.

It took a while to reach full production rate for the F-35. Partly because the supply chain (mostly US based because of the Buy American Act) had to come up to speed[0]. But also because there were running-changes being made to the plane, necessitating changes to the production line to accommodate them.

The F-22 production tooling is supposedly in storage at Sierra Army Depot. Why there and not at the boneyard at Davis-Monthan is an interesting question[1]. Spooling production of the F-22 back up will take less time than originally, but still won't be quick (a secure factory floor large enough has to be found, workforce knowledge has been lost, adding upgrades, etc.)

[0] Scattered across as many congressional districts as possible.

[1] I was at Sierra in the 80's on TDY and it was all Army and Army civilians. A USAF guy like me really stood out.


We do—our automotive assembly lines. F-22 is more of a deterrent. If we need more, it’s failed.

> Lockheed Martin can, given peak rates seen in 2025, produce a new F-35 approximately every 36 hours

Until we run out of materials

https://mwi.westpoint.edu/minerals-magnets-and-military-capa...


That's the problem with going too far using "money" or "GDP" - you can roughly compare the WWII 45% of GDP spent with today - https://www.davemanuel.com/us-defense-spending-history-milit... because even by WWII much was "financialized" in such a way that it appears on GDP (though things like victory gardens, barter, etc would explicitly NOT be included without effort - maybe they do this?).

As you get further and further into the past you have to start trying to measure it using human labor equivalents or similar. For example, what was the cost of a Great Pyramid? How does the cost change if you consider the theory that it was somewhat of a "make work" project to keep a mainly agricultural society employed during the "down months" and prevent starvation via centrally managed granaries?


You don't even need to go that far back to run into issues: when I read Pride and Prejudice, I think Mr. Darcy was one of the richest people in England at around £10,000/year, but if you calculate his wealth in today's terms it wasn't some outrageous sum (Wikipedia is telling me ~£800,000/year). The thing is that the economy was totally different back then -- labor cost practically nothing, but goods like furniture for instance were really expensive and would be handed down for generations.

With £800K today, you may not even be able to afford the annual maintenance for his mansion and grounds. I knew somebody with a biggish yard in a small town and the garden was ~$40K/yr to maintain. Definitely not a Darcy estate either.

Thinking about it, an income of £800K is something like the interest on £10m.


£10,000 per year for Mr Darcy is 10,000 gold sovereigns per year. A gold sovereign at spot price today is about $1,100. So that’s over 10 million dollars per year in gold-equivalent wealth. Plenty to maintain his estate with.

Alternatively, £10,000 is 200,000 sterling silver shillings per year (20 shillings per pound) for him. A sterling shilling today is about $13.50 at spot price. So that’s $2.7million per year in silver-equivalent wealth. Still plenty!


Newsflash, old antique furniture from around that time is still really expensive even today. It was a hand-crafted specialty product, not run-of-the-mill IKEA stuff. If you compare the prices of single consumer goods while adjusting for inflation, they generally check out at least wrt. the overall ballpark. The difference is that living standards (and real incomes) back then for the average person were a lot lower.

Inflation is by definition the change in prices of a general basket of goods. Some things will outrun the basket and some things will underrun it. In general consumer durables have underrun, things like TVs and yes, sofas, are way way cheaper now than ever before. I'm not really sure why you would exclude IKEA type furniture, in most cases it's probably as good or better than a really old hand crafted one. If back then you needed to get an ultra luxury sofa but now you can get an IKEA one for the same general quality then that's a massive win for affordability even if the ultra luxury category still exists.

~£800,000/year when compared to median value in current UK? Outrageous is relative sure, but for most people out there it should be no surprise they would feel that as an outrageously odd distribution of wealth.

https://en.wikipedia.org/wiki/Income_in_the_United_Kingdom


The point is that ~£800,000/year is high, even possibly "very high" but it is not "most wealthy man in Britain" high, and certainly nowhere near "hire as many people as worked for Darcy".

It's more like making 800k per year today in India, where a lot of people make much less, so you can have servants.

The big change is the end of any sort of backing in money. The Minneapolis Fed calculated consumer price index levels since 1800 here. [1] Of course that comes with all the asterisks we're speaking of here for data going back that far, but their numbers are probably at least quite reasonable. They found that from 1800 to 1950 the CPI never shifted more than 25 points from the starting base of 51, so it always stayed within +/- ~50% of that baseline. That's through the Civil War, both World Wars, Spanish Flu, and much more.

Then from 1971 (when the USD became completely unbacked) to present, it increased by more than 800 points, 1600% more than our baseline. And it's only increasing faster now. So the state of modern economics makes it completely incomparable to the past, because there's no precedent for what we're doing. But if you go back to just a bit before 1970, the economy would have of course grown much larger than it was in the past but still have been vaguely comparable to the past centuries.

And I always find it paradoxical. In basic economic terms we should all have much more, but when you look at the things that people could afford on a basic salary, that does not seem to be the case. Somebody in the 50s going to college, picking up a used car, and then having enough money squirreled away to afford the down payment on their first home -- all on the back of a part-time job -- was a thing. It sounds like make-believe but it's real, and certainly a big part of the reason boomers were so out of touch with economic realities. Nowadays a part-time job wouldn't even be able to cover tuition, which makes one wonder how it could be that labor cost practically nothing in the past, as you said. Which I'm not disputing - just pointing out the paradox.

https://www.minneapolisfed.org/about-us/monetary-policy/infl...


And yet the homeownership rate in 1950 was 53% (an all-time high up to that point) compared to 65% today: https://www.huduser.gov/portal/sites/default/files/pdf/Housi... Only 80% of units had private indoor toilets or showers.

It is notable that the median monthly rent was $35/month on a median income of $3000, so ~15% of income spent on rental housing. But it's interesting reading that report because a significant focus was on the overcrowding "problem". Housing was categorized by number of rooms, not number of bedrooms. The median number of rooms was 4, and the median number of occupants >4 per unit (or more than 1 person per room). I don't think it's a stretch to say that the amount of space and facilities you get for your money today is roughly equivalent. Yes, a greater percentage of your income goes to housing, and yet we have far more creature comforts today than back in 1950--multiple TVs, cellphones, appliances, and endless amounts of other junk. We can buy many more goods (durable and non-durable) for a much lower percentage of our income.

There's no simple story here.


What an interesting paper you found! Home ownership stats in contemporary times are quite misleading because of debt. Most home owners now are still paying rent in the form of a mortgage to a bank. In the 50s most home owners genuinely owned their homes 'free and clear'. The exact rate was 56% in 1951 per your paper (which was a local low), and now it's at 40% which is a local high. And the contemporary demographics are all messed up - it's largely driven by older to elderly individuals in non-urban low-income states.

As for number of occupants, the 50s had a sustainable fertility rate. That means, on average, every woman was having at least 2 kiddos. So a median 4 occupant house would be husband, wife, and 2 children living in a place with a master bedroom, kids room, a combined kitchen/dining room, and a living room. Bathrooms, oddly enough, did not count as rooms. So in modern parlance it'd mostly be a 2/2 for up to 14% of one person's median income, and 0% in most cases as most people 'really' owned their homes.

We definitely have lots more gizmos, but I feel like that's an exchange that relatively few people would make in hindsight.


I sometimes feel that the facts are all out there, but half the people pick one half the facts as causal and the other half pick the other half. Are home prices rising because people have fewer kids (and therefore more to spend on housing) or are people having fewer kids because house prices are rising (and therefore less to spend on kids)?

I suspect that it's a complex mixture of all possibilities, and you can only really look at trends and your own life - the one thing you can have something resembling understanding and control.


> Are home prices rising because people have fewer kids (and therefore more to spend on housing) or are people having fewer kids because house prices are rising (and therefore less to spend on kids)?

Maybe a false dichotomy? My suspicion is that home prices rise because more credit becomes available (and not only homes prices but the price of other assets). If you think about it in broader terms this explains what happens to the fruits of our increased productivity - lenders extend more credit as productivity rises thereby claiming the benefit for themselves. The working person is still stuck with a 40 hour week because despite being more productive they have more debt to service.


There's something there, definitely - reading "ordinary man's guide to the financial life" from different eras is informative; many of the older ones work really hard to convince you that a home loan is something worth getting and "you'll pay it off faster than you think" - now we have guides talking about "good debt" and "never pay it off".

I posted just that on the Twitter feed, but then I realized that the railroads started at the beginning of an industrial revolution, when labor was a far larger portion of GDP compared to industrial production. So it kind of makes sense that the first enabling technology consumed far more GDP than current investments do, even on a marginal basis.

Wild graphic. US spending on one flying killing machine (the F-35) is comparable to total spending on the Marshall plan to reconstruct Europe after WWII, or the interstate highway system, or all datacenters combined. Priorities!

I don't think that's right - the scale is logarithmic. The Marshall Plan is 20 times as expensive

And this is why I hate log scale graphs. Even in the cases where it does have a useful effect, 90%+ of people are still going to interpret it in a linear way and therefore make it massively misleading.

It’s hazardous to blend fixed and variable costs.

> Makes it a little less dramatic. But also shows what a big *'n deal the railroads were!

It also makes it more dramatic, consider the programs on the list and what they have in common.

* The Apollo program. A government-funded science project. No return on investment required.

* The Manhattan Project. A government-funded military project. No return on investment required.

* The F-35 program. A government funded military project. No return on investment required.

* The ISS. A government funded science project. No return on investment required.

* The Interstate Highway System. A government funded infrastructure project. No return on investment required.

* The Marshall Plan. A government funded foreign policy project. No return on investment required.

The actual return on investment for these projects is in the very long term of decades; Economic development, national security, scientific progress that benefits the entire country if not the entire world.

Consider the Marshall Plan in particular. It's a massive money sink, but its nature as a government project meant it could run at losses without significant economic risk and could aim for extremely long term benefits. It kept paying dividends until January last year; 77 years.

And that dividend wasn't always obvious; goodwill from Europe towards the US is what has prevented Europe from taking similar actions as China around the US' Big Tech companies. Many of whom relied extensively on 'dumping' to push European competitors out of business; a more hostile Europe would've taken much more protectionist measures and ended up much like China, with its own crop of tech giants.

And then there are the two programs left out. The railroads and AI datacenters. Private enterprise that simply does not have the luxury of sitting on its ass waiting for benefits to materialize 50 years later.

As many other comments in this thread have already pointed out: When the US & European railroad bubbles failed, massive economic trouble followed.

OpenAI's timeline for (partial) return on investment is as short as this year, or their IPO risks failure. And if they don't deliver, similar massive economic trouble is assured.


European railroad bubble failed?

Can you explain that? I really have no idea what you are referring to?


The search term is the "Railway Mania", which predominantly describes the UK's railroad bubble, with smaller similar booms on mainland Europe. (You will have to look up French and German sources for the best info on those.)

The bubble failed in the sense that massive commitments for new railways were made, and then the 1847 economic crisis caused investment to dry up, which collapsed the bubble and put a halt to the railroad construction boom. Those railway commitments never materialized, and stock market crashes followed.

I'm also being a little cheeky with what "massive economic trouble" entails; While the stock market was heavy on railroads and crashed right into a recession, the world in the mid-1800s was much less financialized so the consequences in absolute terms were less pronounced than a similar bubble-collapse would be today. As such, the main historical comparison is structural.

(Similarly, the AI bubble is likely to burst "by itself" unless OpenAI's IPO is truly catastrophically bad. What's more likely is that a recession happens and then the recession triggers a stock market collapse, which then intensify each other. And so these historical examples of similar situations may prove illustrative.)


> and then the 1847 economic crisis ... the world in the mid-1800s was much less financialized so the consequences in absolute terms were less pronounced than a similar bubble-collapse would be today.

And yet 1848 was a very interesting year! Revolutionary even.


This is actually an interesting piece of history I haven't heard about. Thanks for the pointers

You're actually arguing those highly technical engineering projects provided nothing to humanity investing labor in them because they were not a financial success?

Just confirms my suspicion HN is not a forum for intellectual curiosity. It's been entirely subsumed by MBAs and wannabe billionaires.


> You're actually arguing those highly technical engineering projects provided nothing to humanity investing labor because they were not a financial success?

No. Re-read the comment.

I specifically say "No return on investment required" not "Has no return on investment". It didn't matter whether these projects earned back their money in the short term, or whether it takes the longer term of many decades.

The ISS hasn't earned back its $150 billion, and it won't for a pretty long time yet. Doesn't mean it's not a good thing for humanity. Just means that it'd be a bad idea to have the project run & funded by e.g. SpaceX. The project would've failed; you just can't get ROI on $150 billion within the timeframe required. SpaceX barely survived the cost of developing its rockets. (And observe how AI spending is currently crushing the profitability of the newly-merged SpaceX-xAI.)

I'm not even saying "AI doesn't provide anything to humanity", I was saying that AI needs trillions of dollars in returns that do not appear to exist, and so it's likely to collapse.


The railroads and the interstate arguably had the biggest and broadest impact, especially in 2nd order effects (everything West of the Mississippi would be vastly different economically without them).

I am not an ai-booster, but I would not be surprised at AI having a similar enabling effect over the long term. My caveat being that I am not sure the massive data center race going on right now will be what makes it happen.


I agree that AI will probably have bigger effects that we could possibly predict right now. But unlike past booms/bubbles, I suspect the infrastructure being built now won't be useful after it resolves. The railroads, interstate system, and dotcom fiber buildout are all still useful. AI will need to get more efficient to be useful as established technology, so the huge datacenters will be overbuilt. And almost none of the Nvidia chips installed in datacenters this year will still be in use in 5 years, if they're even still functional.

The era of the AI data center will be brief because the models will get better and the computers will get more powerful, particularly on the desktop, laptop and phone/tablet . The transition will be like going from mainframe computers to personal computers.

No one's going to be able to afford that if you hollow out the consumer base by replacing people with AI

Early railroads didn't have a lot of standardization, so plenty of that investment did get written off.

All of the trucks and carts and tools used to build the railroads don't exist anymore. Just like the GPUs won't either.

In that analogy, the GPUs are like if the railroad tracks only lasted 5 years.

Is there really that much inefficiency in our distribution of goods and services such that AI could have this much impact?

I think the bet is more labor replacement, not saying that's particularly reasonable either

> I would not be surprised at AI having a similar enabling effect over the long term.

The big difference is that the current AI bubble isn't building durable infrastructure.

Building the railroads or the interstate was obscenely expensive, but 100+ years down the line we are still profiting from the investments made back then. Massive startup costs, relatively low costs to maintain and expand.

AI is a different story. I would be very surprised if any of the current GPUs are still in use only 20 years from now, and newer models aren't a trivial expansion of an older model either. Keeping AI going means continuously making massive investments - so it had better find a way to make a profit fast.


GPUs are consumables, not infrastructure. Model weights are the lasting thing.

It's always like that with software. You can still run an OS or a program made 20 years ago, in some cases that program may in fact have no modern replacements available (think niche domains) - meanwhile, in those 20 years, you've probably churned through 5-10 generations of computing hardware.


This is completely false - GPUs are not consumables, they are factors of production.

Models are technologies. Without the GPUs the technology is not accessible.

You sound like someone who thinks they have a strong understanding of economics when they don't.


I had to look up "factors of production" to see what this was about.

Looks to me like, as with a drill bit, a GPU could be reasonably classified as either a consumable or a factor of production.

This is because GPUs wear out and fail; the smaller the features, the faster electromigration kills them.


And I'm not an AI doomer, but hell no, give me another space program/station over this every single time and pretty please. We are not pioneering new engineering science or creating a pipeline of hard research and innovation that will spread in and better our everyday lives for the decades to come. We are overbuilding boring data centers packed with single-purpose chips that WILL BE obsolete within a couple years, for what? For the unhinged hope that LLM chatbots will somehow develop intelligence, and/or that people by the billions will want to pay a hefty price for dressed-up plagiarism machines. There is no indication that LLMs are a pathway to meaningful and transformative AI. Without that, there is no technical merit for the data centers being built currently to constitute future-proof infrastructure like highways and railroad networks did. There is no economical framework in which this somehow trickles down to or directly empowers the individual. This is a sham of ludicrous proportions, a sickening waste.

>There is no indication that LLMs are a pathway to meaningful and transformative AI.

Reality check, they are already astoundingly meaningful and transformative AI. They can converse in natural language, recall any common fact off the top of their heads, do research online and synthesize new information, translate between different human languages (and explain the nuances involved), translate a vague hand wavey description into working source code (and explain how it works), find security vulnerabilities, and draw SVGs of pelicans on bicycles. All in one singularly mind-blowing piece of tech.

The age of computers that just do what you tell them to, in plain language, is upon us! My God, just look at the front page! Are we on the same HN?


> Reality check, they are already astoundingly meaningful and transformative AI

The onus of proof regarding their meaningful and transformative nature is on you.

The largest niche LLMs have so far managed to carve out for themselves is software code, with the jury still on the fence as to whether the productivity needle has actually moved in one direction or the other. Meanwhile the other, literal jury has enshrined the fact that vibe-coded software is not copyrightable and becomes a public good, which should give any company living off selling software or software-related services pause as to whether they want to poison their well.

Web search hasn't been disrupted very much either, with users quick to realise how hallucination-prone LLM summaries are (the fact that this is baked into the tech and practically unsolvable is one of the reasons I don't consider LLMs a significant stepping stone towards actual AI).

The age of computers that respond to voice commands was 10 years ago, with Siri, Alexa, and Google Assistant; nobody could have cared less then, and the fact that the same systems became less capable after re-inventing themselves on top of LLMs probably won't make people care more now.


We are in such different universes that I fear that this will not be a productive discussion; to my eyes LLMs are the most obviously socially transformative technology in my lifetime, up there with "internet" and "smartphones".

You say the largest niche is software production. Okay, let's talk about that. If the jury is still out then the jury is asleep. When ChatGPT first came out - the GPT3 days, years ago, before "vibe code" was even a term - an artist friend of mine who never wrote a line of code in his life straight-up vibe coded 3d visuals to accompany a performance of the band he was in. In Processing, which he'd never heard of until ChatGPT suggested it to him. Do you realize what this means? Normies can use computers now. Actually use, not just consume. You can describe what you want and the computer will do it - will even ask you for clarification if your specification is too ambiguous. Hell, it will even educate you about the subject matter, meeting you at exactly your level, in your favorite writing style.

If you are still thinking in terms of whether vibe coded software is "copyrightable" or whether LLMs are useful for "selling software", you are a blacksmith scoffing that cars are pointless because they don't need horseshoes. Your entire framework is obsolete.


You are so focused on productivity that you missed the boat on the shape of the problem.

Vibe-coded apps are just throwaway code that you don't understand and can't maintain. Most of our technology isn't about creating new things but about incremental improvement.

You are so focused on productivity, when programming's bottleneck is never how many features you implement but how well you can understand your codebase.

Nobody cares about your internet slop, but they care about verification of facts, which unfortunately requires human judgement.

LLMs are just a different version of the library code we already have, except without quality control by default.


>I am not an ai-booster, but I would not be surprised at AI having a similar enabling effect over the long term. My caveat being that I am not sure the massive data center race going on right now will be what makes it happen.

Maybe? It seems as if the tech is starting to taper off already and AI companies are panicking and gaslighting us about what their newest models can actually do. If that's the case the industry is probably in trouble, or the world economy.


> AI companies are panicking and gaslighting us about what their newest models can actually do

I think they have been gaslighting us from the beginning.


Bernie Madoff and his ilk made way for Sam Altman and his friends.

Like Madoff, they’re desperate to pump their Ponzi scheme for as long as they can.


Depreciation schedule:

Tulips: weeks

GPUs: 6 years

Fiber: 20-50 years

Rail, roads, bridges: 50-100+ years

Hyperscalers closer to tulips than other hard infra.
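
A back-of-the-envelope sketch of why the schedule matters (straight-line depreciation; the dollar figures are made-up placeholders, not estimates of actual spend):

    # Illustrative only: how the depreciation schedule drives the annualized
    # cost of just keeping the asset base in place. Capex numbers are invented.
    assets = {
        "GPUs":       {"capex_usd": 400e9, "useful_life_years": 6},
        "Fiber":      {"capex_usd": 100e9, "useful_life_years": 30},
        "Rail/roads": {"capex_usd": 100e9, "useful_life_years": 75},
    }

    for name, a in assets.items():
        annual = a["capex_usd"] / a["useful_life_years"]
        print(f"{name:10s} ~${annual / 1e9:.0f}B/year just to stand still")

Same upfront dollar, wildly different steady-state burn.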


What rail, road or bridge in the US lasts 50 years? The maintenance of rail over 6 years costs more than replacing all the GPUs in a data center, even at their current markup.

Look up the deterioration curve and maintenance curve (J-shaped) for hard infra. TLDR is the asset stays in good condition for 75% of its lifetime, i.e. decades, with light maintenance (the flat part of the J). By roads I mean highways, where most of the expense/work is in building out the base/sub-base (i.e. ballast for rail), which lasts decades. The US is uniquely bad at maintenance/prevention, but even then major assets do not deteriorate on a GPU timeline.

have you seen our rails, roads and bridges?!? 50 year old ones in many places are being referred to as “new ones” :)

the only reason any "maintenance" on them is expensive is corruption, which at the municipal level rivals the current administration in some places


I’m surprised there is no broadband rollout or telecom network on there. I guess it’s hard to quantify the cost within a specific event?

Indeed. Or for that matter, electrification?

The railroad buildout was a lot more, idk, tangible. Most of that money was spent employing millions of people to smelt iron, lay track, build bridges, blow up mountains, etc. It’s a lot more exciting than a few freight loads of overpriced GPUs.

Also a good point - railroads for sure brought a lot more optimism.

LLMs+Data centres on the other hand...


As sibling comments mentioned, the comparison is deceptive as well. How about comparing as a percentage of gross energy output? https://www.sciencedirect.com/science/article/abs/pii/S09218...

It seems a little silly to put 71 years of private-and-public-sector infrastructure development alongside something highly targeted like the Manhattan Project. It might make more sense to compare the Manhattan Project to the first transcontinental railroad, as a similar targeted but enormously ambitious project amounting to a major technical milestone.

Likewise I don't think it makes sense to compare post-ChatGPT hyperscaler data center construction with all 19th-century US railroad construction. Why not include the already considerable infrastructure of pre-AI AWS/Azure? The relevant economic change isn't "AI," it's having oodles of fast compute available online and a market demanding more of it. OTOH comparing these data centers to the Manhattan Project is wrong in the opposite direction: we should really be comparing a specific headline-grabber like Stargate.

This categorization is just a confusing mishmash. The real conclusion to draw here is that we tend to spend more on long-term and broadly-defined things than we do on specific projects with specific deadlines. Indeed.


Were? How else do you expect to get goods around by land?

> most people who are keen on making such an argument, or who are identifying racial genetic differences as the primary takeaway of studies like this, are doing so to justify racism, either implicitly or explicitly.

That may certainly be true.

(Not OP, but) I always shudder when we want to deny scientific results because they might be "helpful" for someone making a racist argument.

My personal belief is that truth is the goal of science. Even in cases where the truth is uncomfortable.


It's very nice to believe in a pure system that exists outside of politics, but that's simply not how the world works, and it never will be.

There is no scientific breakthrough that has occurred sans politics. Politics chooses the winners and the losers, and the realm of science is no exception.

All science is political, because the scientific institutions are made up of people, who are political. Your research project lives and dies by politics, as does your dissertation, who gets published, who receives awards, etc.

So when it comes to research of limited utility that has a nasty cadre waiting in the wings to pounce upon it, the wise person would think twice.


As I said to another person on this thread: if scientists let their political views override their pursuit of truth, the public will (rightly) lose faith in science.

So when you tell them to "trust the science" -- be it vaccinations, climate change or something else -- they have no reason to trust that science.


People don't have faith in inanimate things like science. They have faith in their leaders, who then lead the way in what to believe.

If those leaders believe in the integrity of the scientific institutions, their flock will follow. If they're anti-vax, their flock will follow. If they believe in some medical quackery, their flock will follow. If they believe in eugenics, their flock will follow. It's happened before.

What was fringe yesterday can become mainstream today, with the right leaders.


I enjoy that you are framing this as something that "may" happen in the future.

There are a few scientific topics that are too easily manipulated by bad actors who ignore all the nuance. You have to tread very, very carefully on those and ask yourself what good vs. what harm can come from it. We know from history that giving opportunist leaders a chance to classify humans into distinct sub-groups based on intelligence and other key traits ends in catastrophe.

I understand what you are saying and I don't disagree with the idea that bad actors will use science in bad ways.

But I think going down this path of denying (or hiding) science that can be used for bad ideas ends up causing (rightly, imho) a distrust of science -- which is far worse.

A distrust of science (not saying it was caused by this particular issue) is how we ended up with so much anti-vax sentiment in the US. And that is the reason we are seeing outbreaks of diseases that used to be minimal.

I think if you want people to "trust the science", you have to trust the people.


it seems like you are simultaneously arguing for a science that holds itself outside public opinion, and one that is beholden to it.

no, wait, I get it.

all scientists should expect mistrust because of perceptions of bias of any of them, regardless of how well founded. that seems at the very least unproductive.


> it seems like you are simultaneously arguing for a science that holds itself outside public opinion, and one that is beholden to it.

Apologies if I did a bad job explaining my opinion. But I was attempting to argue the exact opposite of that.

My view is that science should be the search for truth. And that if the truth is inconvenient for some political (or other) reason, so be it. The truth is the goal. Full stop.

My feeling is that if scientists stop pursuing truth in cases where it doesn't fit their politics, they will (rightly, IMHO) lose the trust of the public. (Of course, in particular, those in the public who have different politics.)


so, because science as a whole is not pursuing the idea that people with different genetics as a population are inferior in some ways to others with sufficient vigor, we should expect a justifiable general distrust of science, including completely unrelated results like global warming. I don't see how this is prescriptive in any way, except maybe to ... I guess find scientists that are willing to accept funding for ideas that are popular with some people? do you think that would help if they found those ideas to be meritless? or even if they didn't?

Unironically yes. Because it means that scientists are willing to lie or suppress results that offend their moral and political sensibilities, and this should affect your credence in literally any scientific result reported by the institutional scientific research system.

Both sides of this thread are arguing based on fantastical versions of scientific practice that fit their priors. Scientists aren't avoiding studying this for fear of the harm it would do; they're not avoiding it at all.

It doesn't necessarily mean they lie or suppress results, it can just mean they don't pursue areas of study where the outcome is either a) nothing happens or b) bad actors use your results to "other" a whole group of people. What good can come from yet another study on race and IQ? Be specific.

Just saying, "We should do science for science's sake" is not enough. We've done that. Go read The Bell Curve and knock yourself out. What people like you seem to want is continued, motivated hammering of the issue.


you're asking science to give you some excuse for treating some people worse than others. maybe that's just not a very well formed question for a scientist to answer. if we just strip away the race nonsense and ask a more... meaningful question like 'what is the genetic basis for intelligence', then no one is shirking that question because of what the answer might be. it's just a really hard and also pretty fuzzy question.

but you still won't be satisfied with the answer, because even if one set of genes gets you a 5% higher 'intelligence' score, that still doesn't justify an apartheid state. do you think we should have different rules for people with different IQ scores?

you're saying that because science as a whole isn't particularly interested in assuming _your_ biases, the whole enterprise is meaningless and corrupt, and thus we can't trust anything those white coats say.


> the fairly vague conclusion that some SNPs possibly linked to traits were selected for

Interesting. I find that part of the paper the most exciting. We always knew selection would happen for valuable traits. But seeing demonstrations of it in the timelines we have is pretty important.


Makes you wonder what is being selected for currently.

FWIW, there is some controversy around the “methodology” and honesty in that film. Not saying you should change your view of McDonald’s, but possibly of that movie.

Actually pretty interesting to think: in a few years you might buy a Raspberry Pi-style computer board with an extra chip on it running one of these embodiment models, and you could slap it in a rover or something.

Props for the great name!

> Keep in mind that this case is about a minor, not an adult.

This obviously means that tech is going to have no choice but to do "age verification". And I don't think there's much of a way to do that that wouldn't be uncomfortable for a lot of us.


I would prefer Meta make their products less addictive for children, with the side-effect that they're perhaps less stimulating for adults, than for Meta to keep their products the way they are, gatekept behind a system that allows them access to even more of my personal data.

I understand why they would want the opposite. They can f*ck right off.


Oh, corporations pushed for age verification, so of course they will have no choice now. But before that they could have just stopped being addictive, regardless of age.


There are ways, like double-blind age verification, in which the website knows nothing other than "yes, >18", and the verifier knows nothing other than "I was asked if user X is >18, checked, yes". The website doesn't know the actual age; the verifier doesn't know which website it is or what action the request was for.

In fact it's even in the EU Commission's official guidance on how it should be done : https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:C... (point 46).
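
A minimal sketch of that flow, under stated assumptions (toy code with hypothetical names; a real scheme would use asymmetric signatures or zero-knowledge proofs rather than a shared key, plus one-time tokens to prevent linking and replay):

    import hmac, hashlib, secrets

    VERIFIER_KEY = secrets.token_bytes(32)  # held by the age-verification service

    def verifier_issue_token(user_is_over_18: bool):
        # The verifier checks age out of band and, if over 18, returns a random
        # nonce plus a MAC over the bare claim. It never learns which website asked.
        if not user_is_over_18:
            return None
        nonce = secrets.token_bytes(16)
        tag = hmac.new(VERIFIER_KEY, nonce + b"over18", hashlib.sha256).digest()
        return nonce, tag

    def website_accept(nonce: bytes, tag: bytes) -> bool:
        # The website only ever learns "over 18: yes/no" -- never age or identity.
        # (A real deployment would verify a public-key signature instead of
        # sharing the verifier's secret key.)
        expected = hmac.new(VERIFIER_KEY, nonce + b"over18", hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    nonce, tag = verifier_issue_token(True)
    print(website_accept(nonce, tag))  # True, with no age or identity disclosed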


Or assign responsibility to…parents and legal guardians…who are not children.


Meta is not blameless here. Responsibility can be shared when Meta (and others) are essentially preying on children. It’s an uphill battle for parents by Meta’s design.


They’re not Meta’s kids, they’re freemium customers.


Sure, parents do bear some responsibility here too. But we are talking about a platform that is engineered to be addictive to adults too. So it’s not as if the platform isn’t still predatory even if we find a way to parent every child on the internet.


Doesn't this lawsuit (essentially) prove otherwise?


It would work if parents had legal recourse to seek justice against corporations that stalk, groom, and manipulate their children against their wishes.


I’m trying to figure out how this affects weekly limits, since those overlap peak hours. My observation is that it doesn’t. But I could be wrong.

If they are doing it “right”, I think any off-peak usage should count 50% toward your weekly limits.

Edit: it does look like they are doing it the "right" way.


> Does bonus usage count against my weekly usage limit?

> No. The additional usage you get during off-peak hours doesn’t count toward any weekly usage limits on your plan.


So the first 100% of the 5-hour usage is billed against weekly usage at normal rates, but the additional 100% is not counted?


I just watched my "weekly limit" get used while I ran a claude code command.

I'm not sure how to square that with the quote you gave.


Did you exhaust the five-hour usage limit already? As I understand it, the ”additional usage” refers to anything beyond the standard five-hour usage limit.


> Does bonus usage count against my weekly usage limit?

> No. The additional usage you get during off-peak hours doesn’t count toward any weekly usage limits on your plan.


Oops! Looks like we posted at the same time.


all weekend is off-peak


Somewhat orthogonal, but: when do we expect "volunteer" groups to provide training data for LLMs for free, for (like) hobbyist kinds of things? (Or do we?)

Like wikipedia probably provides a significant amount of training for LLMs. And that is volunteer and free. (And I love the idea of it.)

But I can imagine (for example) board game enthusiasts wanting to have training data for the games they love. Not just rules but strategies.

Or, really, any other kind of hobby.

That stuff (I guess) gets in training data by virtue of being on chat groups, etc. But I feel like an organized system (like wikipedia) would be much better.

And if these sets were available, I would expect the foundation model trainers would love to include it. And the results would be better models for those very enthusiasts.


Some of this exists already in pockets (Common Crawl, The Pile, and RedPajama are all volunteer/open efforts). I suppose there's no equivalent of the "edit this page and see the impact" loop we have with Wikipedia. Contributing to an open dataset has no feedback loop if the training infrastructure that would consume it is closed... seems like a feedback problem.
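
As a thought experiment, a community hobby dataset could be as simple as a shared repo of JSONL records, something like the entirely hypothetical format below (no lab consumes this today; the fields and values are placeholders):

    import json

    # Hypothetical contribution record for a volunteer-maintained hobby dataset.
    record = {
        "domain": "board-games",
        "game": "ExampleGame",
        "topic": "opening strategy",
        "text": "Example strategy note written by an enthusiast contributor.",
        "license": "CC-BY-SA-4.0",
        "contributor": "anonymous",
    }
    print(json.dumps(record))

The hard part, as you note, isn't the format; it's giving contributors a feedback loop when the models that would train on it are closed.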


