adamddev1's comments | Hacker News

What are the big benefits of the runtime (BEAM) that are drawing people?

The same as ever: lightweight processes with isolated heaps, per-process garbage collection, and the native message-passing style.
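
For anyone who hasn't seen the model, here's a minimal sketch of the shape using Python's multiprocessing as a stand-in. It's only an analogue: BEAM processes are vastly lighter-weight and each gets its own garbage collector, so this shows the isolation and message passing, not the performance.

    # Rough analogue of the BEAM model: OS processes with isolated heaps
    # that communicate only by message passing. Real BEAM processes are
    # far lighter and individually garbage-collected; this is just the shape.
    from multiprocessing import Process, Queue

    def worker(inbox: Queue, outbox: Queue) -> None:
        # This process has its own heap; nothing is shared with the parent.
        for msg in iter(inbox.get, "stop"):
            outbox.put(msg.upper())

    if __name__ == "__main__":
        inbox, outbox = Queue(), Queue()
        p = Process(target=worker, args=(inbox, outbox))
        p.start()
        inbox.put("hello")
        print(outbox.get())  # "HELLO" -- state crossed only as a message
        inbox.put("stop")
        p.join()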

I'm so glad I switched to NeoVim. I've got the good LSP and auto-complete stuff, a nicer grepping experience, semantic moving and selecting with treesitter textobjects, and absolutely ZERO LLM AI stuff. (I still use LLMs outside my editor for some searching and questions, but may try to cut that down too.)

Call me a Luddite, but we are up against something extra insidious with this new AI wave, and the cracks of the psychosis are starting to show.


I was talking to a teacher and she was explaining how everyone reaches for AI to have everything explained to them. "I'm too dumb to understand things" is the basic assumption people are now growing up with; they reach for AI summaries all the time without trying to understand anything themselves.

Instead of trying to understand things, people are reaching for better tools to have the thinking done for them. We are losing something huge.


Every major leap forward triggers Luddism in those prone to histrionics.

You have to offload cognition in order to recognize the next abstraction. That's always been how we tackle harder problems.

A good explanation is foreplay, not a replacement for the act itself. If people stop there, that's a premature-pedagogy problem, not an AI problem.

Somewhere, an AI is summarizing this comment for someone right now, and that person understands the issue better than you do.


Offloading cognition is what one does when they use abstractions that other people made through intense cognition. And it's fine to do that; people can build great things with great abstractions. A woodworker doesn't have to design and construct a tool to make great things with it.

But developing the people [who can build great new abstractions or the people who can build those abstractions into ergonomic tooling] involves a lot of cognitive struggle through which these people learn how to push knowledge forward.

Forming the mental models for how things work takes struggle. Debugging errors in your code forces you to figure out the disconnect between your mental model and reality.

Claude can figure out most errors I show it much faster than I can, but when we're building something I could build from scratch, I regularly find that even Opus 4.7 provides vastly overcomplicated and inferior solutions and I have to redirect it. I assume this is also the case when we're building stuff that's new to me; I just can't recognize all of the overcomplicated, suboptimal solutions until I get to testing the behaviors I need to be correct. If I had gotten a tool like this at the start of my career or education, I don't know how I wouldn't have ended up completely stunted.


This is not just another abstraction. It is something fundamentally different, because it is a jump away from deterministic, transparent processes to a probabilistic black box. It's not like the jump from orality to books to digital media, or from handwritten arithmetic to calculators to programs. These abstractions were solid and dependable and could be relied upon to tackle harder problems. This abstraction is beyond leaky.
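
A toy way to see the contrast (the sampled "answerer" below is a made-up stand-in, not any real model):

    # Toy contrast: a deterministic abstraction vs. a probabilistic one.
    # sampled_answer is a made-up stand-in for sampling-based generation.
    import random

    def calculator(a: int, b: int) -> int:
        return a + b  # same inputs -> same output, every time, inspectable

    def sampled_answer(prompt: str) -> str:
        # Same input, varying output; nothing to audit from the outside.
        return random.choice(["4", "4", "4", "about 4", "5"])

    assert all(calculator(2, 2) == 4 for _ in range(1000))
    print({sampled_answer("2 + 2?") for _ in range(1000)})  # several distinct answers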

The assumption that "that person understands the issue better than you" is bold when even the best AI summaries are often completely false on any given issue.


You can make many of the same criticisms about television, or smartphones, or the internet. And indeed, all of those and many other technologies have had terrible effects. But the way to deal with that is to learn and teach how to use them responsibly, with conscious intent. It's certainly true that left to themselves, most people won't do that.

> These abstractions were solid and dependable and could be relied upon to tackle harder problems. This abstraction is beyond leaky.

Much like humans? This is an example of what I'm referring to. Unless you spend the effort to learn how to use the models effectively, you're going to have to wait until others do that for you. In the meantime, a disconnect arises: you're not seeing the benefits because you're not able to use the models effectively, while other people are using them effectively and seeing significant benefits.


All those examples are fundamentally different, because they are hard-coded, deterministic programs/algorithms/libraries.

> The problem is a management pattern: removing people and organizational slack because they don’t generate immediate profit, and then expecting the knowledge to still be there when it’s needed.

Exactly. In direct contrast to this would be how Xerox and Bell funded laboratories just to pursue knowledge, without demands of profit. They ended up creating incredibly profitable things when driven by knowledge, not profit.

I also read a book about math in which the author argued that the Greeks, who were driven to pursue truth for truth's sake, ended up being far more productive and innovative, while the Romans, who were driven to work on solutions to immediate practical needs, ended up being much less so. He used this as a defense of efforts in pure math that seem to have no immediate application but end up being massively, surprisingly powerful for practical applications down the road. I think the same could be said for software development focused on truth and correctness rather than immediate productivity.


> It is like hiring an army of accountants that have never done math on paper and exclusively let turbotax do all the work.

It's not though. It's fundamentally different, because TurboTax still works with clear, deterministic algorithms. We need to see that the jump to AI is not a jump from handwritten math to calculators. It's a jump from understanding how the math works to another world of depending on magic machines that spit out numbers that sort of work 90% of the time.


Imagine if math calculators were just subtly wrong some percentage of the time, for operations people perform dozens or hundreds of times a day. If you could punch in the same formula 100 times and get more than one answer, most people wouldn't trust a calculator for serious work.

They probably wouldn't think the calculator makes them faster, either.
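
To put a number on that intuition (the 90% figure is purely illustrative, echoing the comment above): if each answer is independently right 90% of the time, a day of heavy use is almost guaranteed to contain silent errors.

    # Illustrative arithmetic: a calculator that's right 90% of the time,
    # used n times a day. P(entire day error-free) = 0.9 ** n.
    for n in (10, 50, 100):
        print(f"{n:3d} uses/day -> P(no errors) = {0.9 ** n:.4%}")
    #  10 uses/day -> ~34.87%
    #  50 uses/day -> ~0.52%
    # 100 uses/day -> ~0.0027%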


If calculators did work that way, I'm afraid that people would nevertheless take them up because "it saves so much time", and would develop fancy heuristics to plausibility-test for errors.
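
One such heuristic, sketched under the same made-up 90%-accurate assumption: ask three times and keep the majority answer. In the worst case where wrong answers agree, that cuts the per-answer error rate from 10% to about 2.8% (3 × 0.1² × 0.9 + 0.1³) — better, but never back to a real calculator's determinism.

    # Sketch of a plausibility heuristic for a hypothetical flaky
    # calculator: ask three times, keep the majority answer. The 90%
    # accuracy figure is illustrative, carried over from above.
    from collections import Counter
    import random

    def flaky_add(a: int, b: int) -> int:
        # Right 90% of the time; off by one otherwise (worst case: errors agree).
        return a + b if random.random() < 0.9 else a + b + 1

    def majority_add(a: int, b: int) -> int:
        votes = Counter(flaky_add(a, b) for _ in range(3))
        return votes.most_common(1)[0][0]

    trials = 100_000
    errors = sum(majority_add(2, 2) != 4 for _ in range(trials))
    print(f"error rate with majority-of-3: {errors / trials:.2%}")  # ~2.8%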


I don't know how many times I've seen some Google AI summary or ChatGPT answer with references that, when I checked, did not say what the AI summary claimed. If a high school student falsified references in a paper like this, they would get a bad or failing grade. This is bad, not acceptable, the teacher would say.

But we are being sold these constantly falsified AI summaries as the go-to source of "truth" at all levels of society. We're trading truth for an illusion of short-term gains. This will not have good consequences.


You should be grateful to have gotten working links back.


It's almost like people don't actually want LLMs all over their core tools...


I lived in Calgary for 4 years before we had smart phones w/ maps. The grid system was amazing, it was like being able to give easily processed human GPS coordinates. "Let's meet at 7th Ave and 9th Street." Done!
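
Purely for illustration (the address format and block math below are made up, and real Calgary addresses also carry NE/NW/SE/SW quadrants): a grid address parses straight into a coordinate, so the navigation math fits in your head.

    # Illustrative only: a grid address like "7th Ave and 9th St" is
    # effectively a coordinate. The format is simplified; real Calgary
    # addresses also include a quadrant (NE/NW/SE/SW), ignored here.
    import re

    def parse(addr: str) -> tuple[int, int]:
        ave, st = map(int, re.findall(r"\d+", addr))
        return ave, st

    def blocks_between(a: str, b: str) -> int:
        (a1, s1), (a2, s2) = parse(a), parse(b)
        return abs(a1 - a2) + abs(s1 - s2)  # Manhattan distance, in blocks

    print(blocks_between("7th Ave and 9th St", "3rd Ave and 4th St"))  # 9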


I gotta admit I used to think I just had a great sense of direction.

Then I moved back to Europe and realized it was a lie. It was just that the grid systems of the places I lived in the US were much easier to navigate.


Drivers for laptops. Do all the sound cards work flawlessly? Is the power usage/battery life comparable? Sadly, this is a big part of what holds it back.

