Hacker News | highfrequency's comments

Why do you prefer the laptop to be thicker and heavier?

Nobody said that.

MacBooks of that period made compromises for a useless gain in thinness. You can't say with a straight face that the butterfly mechanism was a good tradeoff for 0.3 mm.


I don't want to think about how long I used that MacBook where the keycaps would come off under my fingers as I typed, the switches were that broken.

It's like thinking about how much time I lost using a 2010 10" Atom netbook for development as a poor student, where I'd close down all my apps just to watch a YouTube video, and "rails server" took five minutes to boot on a hello-world app.


That's a false dichotomy; there are plenty of keyboards that don't require recalls due to issues like the butterfly ones but also don't have the issues you're describing.

Luckily there are two lines: the Air and the Pro.

The issue people had was that from 2016 to 2019 the MacBook Pros sacrificed a lot of usability for thinness, when that tradeoff should be reserved for the Airs.


I'd be fine with a thinner and lighter laptop if it came without compromises.

But a shitty keyboard and the loss of the HDMI port weren't worth it.


Right? What was the point of a laptop with no "ugly ports" if everyone instead needed to carry around a stupid dongle to hang off it?

I think the preference is for a battery that can run a CPU that's compiling, AI-ing, or rendering for an entire day (16+ hours) without having to worry about where an outlet is, being tethered to a wall, or being thermally throttled. Right now that's a volume tradeoff. If there were something that ran as fast for as long and was MacBook Air (or last-Intel-generation) thin, I don't think anyone would complain.

My old ThinkPad was thicker but not heavier. Way more ports, no dongles needed.

I believe that's why 90% of the focus in these firms is on coding. There is a natural difficulty ramp-up that doesn't end anytime soon: you could imagine LLMs creating a line of code, a function, a file, a library, a codebase. The problem gets harder and harder and is still economically relevant very high into the difficulty ladder. Unlike basic natural language queries which saturate difficulty early.

This is also why I don't see the models getting commoditized anytime soon - the dimensionality of LLM output that is economically relevant keeps growing linearly for coding (therefore the possibility space of LLM outputs grows exponentially) which keeps the frontier nontrivial and thus not commoditized.

In contrast, there is not much demand for 100 page articles written by LLMs in response to basic conversational questions, therefore the models are basically commoditized at answering conversational questions because they have already saturated the difficulty/usefulness curve.


> the dimensionality of LLM output that is economically relevant keeps growing linearly for coding

Doubt. Yes, there was a point at which it suddenly became useful to write code in a general sense. But I have seen almost no improvement in the departments of architecting, operations, and gaslighting. In fact, gaslighting has gotten worse: entire outputs based on a wrong assumption that the model hid, almost intentionally. I had to build very dedicated, non-agentic tools to combat this.

And all of this with latest Opus line.


I’ve started to pick up on some of the “unwilling to dig deeply into the human's perspective” and “provide ideation and then run with it” behavior in 4.7. I actually think it’s consistent with confabulation, now that they’ve removed most of the model's ability to observe its own reasoning in 4.7.

The effect is over-complicated engineering that takes far more time to review for whether it's right-sized for the job.

Feels like hiding things, however.


Agreed. The proprietary nature of these tools is a huge impediment to their usefulness.

An intelligence plateau will happen sooner or later (my bet is on sooner), and when it does the open models will catch up. Then everybody will be using open models and open-source agents, because they're so much more flexible.


Also doubt, but most likely because of organizational inertia. After a while, you're mostly focused on small problems and big features are rare. Your solution is quasi-done. But now each new change is harder, because you don't want to break assumptions that have become hard requirements.

Agreed, but are you also implying that the process of iteratively "programming something that's not it, and then replacing it" multiple times is not in the scope of what LLMs can/will do?

Most of the time taken during this process is spent getting feedback, processing it, and learning that it's not it. So even if LLMs drive the build time to zero, they won't speed up the process very much at all. Think a 10% improvement, not a 10x improvement.

I'd even argue LLMs can speed up this iterative process.

Interestingly, non-coding improvements seem less clear. In the Virology uplift trial, Mythos does about as well as Opus 4.5, and Opus 4.6 is notably much worse than Opus 4.5 (p. 27).


"The window described above is not a metaphor. It is an arithmetic deadline...The window is not a metaphor for someone at your coordinates. It is a number."

9 to 1 bet that this is written by AI (perhaps proving the writer's point that he is being displaced).

I think AI is great, but I wish people would just post the prompt they gave instead of the full length decompressed essay. Much more efficient information transfer!


I think it was AI-assisted at the very least. Nothing wrong with that, but it's always a good idea to make another pass to identify and remove LLM "tropes".


I think writers may stop using "It's not x, it's y" constructions in their writing, because every time they do, someone calls it out as AI. Same with em dashes and such.


“Your objective is to find the weirdest niche possible that still has the potential to change everything.”

“Virality based social media is inherently homogenizing.”

Some nice nuggets in here!


> I’ll tell the LLM my main goal (which will be a very specific feature or bugfix e.g. “I want to add retries with exponential backoff to Stavrobot so that it can retry if the LLM provider is down”), and talk to it until I’m sure it understands what I want. This step takes the most time, sometimes even up to half an hour of back-and-forth until we finalize all the goals, limitations, and tradeoffs of the approach, and agree on what the end architecture should look like.

This sounds sensible, but also makes me wonder how much time is actually being saved if implementing a "very specific feature or bugfix" still takes an hour of back and forth with an LLM.

Can't help but think that this is still just an awkward intermediate phase of development with adolescent LLMs where we need to think about implementation choices at all.
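For reference, the feature being specced in that quoted back-and-forth (retries with exponential backoff) is small enough to sketch directly. This is a generic illustration, not the author's actual Stavrobot code; the function and parameter names are made up:

```python
import random
import time


def call_with_retries(fn, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Call fn(); on failure, retry with exponentially growing, jittered delays."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: propagate the last error
            # Delay doubles each attempt (1s, 2s, 4s, ...), capped at max_delay,
            # plus random jitter so concurrent clients don't retry in lockstep.
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, delay / 2))
```

Wrapping the LLM provider call in something like `call_with_retries(lambda: provider.complete(prompt))` would cover the "retry if the provider is down" case described above.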


Small features or bugfixes generally take a minute or two of conversation.


Can you be more specific about which math results you are talking about? It looks like a significant improvement on FrontierMath, especially for the Pro model (most inference-time compute).


FrontierMath, GPQA Diamond, and BrowseComp are the benchmarks I noticed this on.


Are you maybe comparing the Pro model to the non-Pro model with thinking? Granted, it's a bit confusing, but the Pro model is 10 times more expensive and probably much larger as well.


Ah yes, okay that makes more sense!


Given how good Apple Silicon is these days, why not just buy a spec'd-out Mac Studio (or a few) for $15k (512 GB RAM, 8 TB NVMe), and maybe pay for S3 only to sync data across machines? No talent required to manage the gear. AWS EC2 costs for similar hardware would break even in something ridiculous like 4 months.
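The break-even claim is easy to sanity-check. The $15k figure is from the comment above; the EC2 hourly rate below is an assumed placeholder for a comparable high-memory on-demand instance, not a quoted AWS price:

```python
# Hypothetical break-even calculation: one-time hardware purchase vs. renting.
mac_studio_cost = 15_000   # USD, from the comment above
ec2_hourly_rate = 5.00     # assumed USD/hour for a comparable instance, running 24/7

hours_to_break_even = mac_studio_cost / ec2_hourly_rate
months_to_break_even = hours_to_break_even / (24 * 30)
print(round(months_to_break_even, 1))  # 4.2 months at these assumed rates
```

At an assumed ~$5/hour, the purchase pays for itself in roughly four months of continuous use, consistent with the estimate above; a cheaper instance or intermittent use stretches that out proportionally.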


Principles aren’t tested until they bump into conflicting incentives.


This. Super important.

A pre-commitment means nothing unless you have the mechanisms in place to enforce it.

A pre-sacrifice would be more effective.

