Hacker News | tqi's comments

Four(?) years ago this could have been a PhD thesis project

> solve tons of problems

I'm skeptical that air taxis could ever meaningfully reduce traffic congestion to / from JFK. Compared to cars, these would seem to require a significantly larger landing pad and passenger unloading space and need much more safety margin in-between drop offs. Maybe this is competitive vs the private helicopter market?


I wonder what % of traffic is to/from JFK. The subway decently connects much of the city to the JFK air train, but it's a fairly inconvenient journey. Toronto's UP express has made travel to YYZ significantly easier, but I doubt it's possible to construct something similar in NYC.

I love aviation, but I also don't see air travel as being a scalable/affordable solution to this problem. Then again, it's only meant to alleviate traffic burden for a certain segment of the population.


The problem with the train is that it stops... at every train stop. In New York specifically, there are several networks (New Jersey Transit, the MTA), and some lines are 100+ years old.

In general, if you have an affordable enough option, you'd never haul several pieces of luggage into the subway for a longer trip. The train is a decent plan B.


> if you have an affordable enough option, you'd never haul several pieces of luggage into the subway for a longer trip

I'm moderately wealthy and lived in New York for a decade. I take the train between JFK and Manhattan. (Specifically, the LIRR.) It's faster, more reliable, and (for me) more comfortable than taking a car. (It's also safer.) If I have my cat with me or I feel like having fun, I'll take a Blade, but that's realistically only shaving like 20 minutes off the travel time.


The LIRR is not a dirty MTA train :) And a noisy, shaky helicopter is not an electric air taxi with 6+ motors, which gives you more stability with far less noise and flies on wings after takeoff.

Cars for sure are less convenient.


> The LIRR is not a dirty MTA train :) And a noisy, shaky helicopter is not an electric air taxi with 6+ motors, which gives you more stability with far less noise and flies on wings after takeoff

I've also taken the A from Harlem to JFK once. It was fine. It's tougher to read a book than on the LIRR, mostly because the frequency of stops means constantly staying aware of your belongings.

And agree on helicopters. We already have helicopters. Switching them to eVTOLs is a move forward.


If your air taxi is pilotless and electric, why can't it be scalable?

How many people do you think enter/exit JFK arrivals and departures every hour? Where are you going to land all those air vehicles? Is this a shuttle service with many seats? How do you plan for the air traffic for that many people?

About 7000 on average, but let's say 10000 since demand varies. And let's consider doing 10% of them with helicopters. If we average 3 people per helicopter, that's 170 groups in and 170 groups out. If each landing needs 5 minutes of pad time, that's 14 pads. Make it 20 to handle variation.
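As a sanity check, the estimate above works out as follows (a rough sketch; it assumes passengers split evenly between arrivals and departures and 5 minutes of pad time per landing, with departures sharing the same pad dwell):

```python
import math

# Back-of-envelope pad count for serving 10% of JFK passengers by air taxi.
passengers_per_hour = 10_000   # generous hourly figure (average is ~7,000)
air_taxi_share = 0.10          # fraction of passengers served by air
people_per_vehicle = 3         # average group size per vehicle

# Split evenly between arrivals and departures.
groups_each_way = passengers_per_hour * air_taxi_share / 2 / people_per_vehicle

pad_minutes_per_landing = 5
pads_needed = math.ceil(groups_each_way * pad_minutes_per_landing / 60)

print(round(groups_each_way))  # ~167 groups in and 167 out per hour
print(pads_needed)             # 14 pads, before any buffer for demand swings
```

Padding that 14 up to 20 covers the variation in demand mentioned above.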

Wow, that makes it sound significantly more feasible than I would have guessed.


Where do you put those 20 pads in the city?

I'm not sure but there's a whole lot of city available. That half of the problem is easy on a technical level.

Those are all reasonable questions, but if any entity is equipped to answer them, I'd think it's JFK, an airport.

The JFK AirTrain carries about 30K passengers per weekday in 2025. How many landing pads would be needed to carry a meaningful % of that traffic alone?

Travel time is 5-10 minutes, versus 40 minutes to 2 hours otherwise.

Yes, it is better than a helicopter: cheaper, with less noise. That means you can use it in more applications, for less money.


Well, it's 5-10 minutes once you get to the West 30th St heliport, which can easily take 20 minutes within Manhattan. Add in boarding, takeoff clearance, and potential backups at the landing pads, and I suspect the gains are much smaller in practice.

How long does it take to get from a helipad to the terminals at JFK?

The time part reminded me of the old WWVB radio time signals.

If/when that goes away, I wonder if it will be cheaper to use a GPS chip to make "self-setting" clocks, or if everything will just be WiFi-connected.


GPS works well as a time source for smart watches and other mobile devices. But indoor clocks have trouble picking up GPS signals.

> The report estimates that training the latest frontier large language models, such as xAI’s Grok 4, can generate over 72,000 tons of carbon-equivalent emissions.

That seems pretty trivial relative to the ~38 billion tons emitted globally per year?


The training of one LLM produces as many emissions as 17,000 people do over a year. Which, according to the article, is 8 times more than last year, and may be underestimated by a factor of 2.

That does not cover the whole footprint: the hardware, the crawlers that collect training data, the prompts, etc. And there are now many models of this size, plus thousands and thousands at smaller sizes. And some of these figures are still increasing.

AI is estimated to emit more than 80 million tons of CO2-equivalent this year, more than entire countries like Austria or Israel emit. Is that trivial?


How much does that come out to per user or per request? And how does that compare to anything else those people do, like driving for 10 minutes or eating a burger?

These numbers keep being presented as large in absolute terms, but that's misleading for the average person, who has no way to compare them to something relevant in their life.


Another way to put it: if training a model cost 72,000 tons of carbon, and it then gets used by 100 million people (typical of major models), the cost per person is 0.00072 tons.

Per the article, the average human uses over 5 tons per year (Americans: 18). Adding 0.00072 to 5 is not really noticeable.

(There is also the cost of inference, of course.)
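The back-of-envelope amortization above, using the figures as given (training emissions only, inference excluded):

```python
# Amortize one training run's emissions across its user base.
training_tons = 72_000        # CO2-equivalent for one frontier training run
users = 100_000_000           # typical user base of a major model

tons_per_user = training_tons / users
print(tons_per_user)          # 0.00072 tons, i.e. 0.72 kg per user

# Compare with the per-capita annual emissions cited in the article.
global_avg_tons = 5
share = tons_per_user / global_avg_tons
print(share)                  # 0.000144, about 0.014% of a person's yearly footprint
```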


Yeah it's basically nothing despite the fact that xAI seemed to intentionally crank up the carbon intensity for no reason.

Also, it's hilarious to select two major models from 2025 and have them both be Grok, almost certainly the least useful, least used, and least interesting of that year.


> Claude doesn't ask you to define an attribution model. It doesn't open a whiteboard. It runs [query]

Why is that a good thing? Claude didn't ask any obvious follow-up questions, like what determined whether a user got an email or not. It is using A/B-test terminology in Step 3 without any kind of confirmation that this is, you know, a valid test.


This is maybe beside the point but it annoys me how many CEOs wax poetic about "locking in" and "grindset" but then seem to have infinite time for bullshit side projects like owning an NBA team.


Indeed. So much so that the richest CEO of all is spread across so many companies that there's no way he's actually doing any real "work".


Doesn't sound like there was any incentive to get the answer right, so why would anyone bother fact-checking the AI's answers? These marketing researchers are basically trying to rebrand the path of least resistance as a new thing?

On brand for Wharton I guess.


> These marketing researchers are basically trying to rebrand the path of least resistance as a new thing?

Or they're trying to show how the "path of least resistance" applies to AI use, but you took the path of least resistance and made an uncharitable interpretation of their paper :)


Not really; the same study could be done by giving people a calculator to do long division. How much participants bother to check the work is a function of 1) their expectations for how accurate the tool is, 2) how much time they are afforded, and 3) what the upside of additional accuracy is. Just because they largely default to accepting the answers at face value doesn't mean they are experiencing "cognitive surrender", and it doesn't mean calculators are some fourth system of thinking.


Despite what the folks here like to believe about themselves, I think the reality is we are as attuned to what is in fashion and on trend as everyone else, just about different stuff. Last year it was ChatGPT; this year Claude is the new hotness. Things move so fast we barely have time to form our own opinions, so we fall back on what we read or hear from others. In 12 months, who knows what it will be... Gemini? ¯\_(ツ)_/¯

Long term, my feeling is that Anthropic's focus on enterprise is the most obviously lucrative but also least defensible application of LLMs. If (more likely, when) open-source models reach the point of being "good enough", then it's a race to the bottom on pricing. Maybe it will be like AWS vs. GCP et al., but I kind of doubt it.


Where would you put 24x7 political content?


A little further down than social media apps, but mostly the same; after all, it's the main source of outrage bait for those apps. If we're talking about Fox News or CNN, there's less specific user targeting and the delivery mechanism is more constrained.


That's more like perversion...


> When a caller asks something that isn’t in the knowledge base, the AI doesn’t guess.

I've seen a number of instances of this type of thing in the wild, and under the hood it's usually some prompt that asks the LLM to gauge confidence in its own answer. From my admittedly naive understanding of how this all works, that seems extremely unlikely to be accurate. But I'm curious if that's still the case, or if I'm operating on pre-2026 information?
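For reference, the pattern I've seen looks roughly like this. This is a sketch, not any vendor's actual implementation: `call_llm` would be a stand-in for whatever completion API the product uses, and the 0.8 threshold is arbitrary:

```python
# Sketch of the "ask the model to grade its own confidence" pattern.
# The prompt template and threshold below are illustrative assumptions.

CONFIDENCE_PROMPT = (
    "Answer the question using ONLY the knowledge base below.\n"
    "Then rate your confidence from 0.0 to 1.0.\n"
    "Respond as:\nANSWER: <text>\nCONFIDENCE: <number>\n\n"
    "Knowledge base:\n{kb}\n\nQuestion: {question}"
)

def parse_confidence(response: str) -> float:
    """Pull the self-reported confidence out of the model's reply."""
    for line in response.splitlines():
        if line.startswith("CONFIDENCE:"):
            return float(line.split(":", 1)[1].strip())
    return 0.0  # no score reported: treat as no confidence

def answer_or_escalate(response: str, threshold: float = 0.8) -> str:
    """Gate the answer on the model's own self-assessment."""
    if parse_confidence(response) >= threshold:
        return response
    return "ESCALATE_TO_HUMAN"

# The catch: the confidence number is itself generated text, so the
# model can be confidently wrong and this gate never notices.
```

The gate only filters on a number the model made up about itself, which is exactly why I'm skeptical it's accurate.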


Top of the line models still routinely get this wrong. This is just "make no mistakes" with more steps.


That is my understanding as well, but I see it so often and stuff changes so quickly it's hard to tell what is real.

