Hacker News | leptons's comments

Apple hardware isn't exceptional, it's maybe slightly above average, sometimes, and still costs more than it should - even the new ones with a paltry 8GB of RAM. Apple has had plenty of hardware problems and design foibles. So many.

Their software is equally average in most respects, and has a far smaller market share worldwide across all form factors they support.

iOS is the reason I'll never own another iPad.

I mean, it's fine that you like it, but "spoilt" seems like an exaggeration.


> Apple hardware isn't exceptional, it's maybe slightly above average

Is this a serious statement? If Apple's hardware is maybe slightly above average - what's above it? It's an easy company to hate on - but no other platform is integrated as well as Apple's right now, IMO. Unless you maybe count Huawei.

Edit: I think I may be referring more to the holistic picture. But still curious what hardware you think is better.


> Is this a serious statement? If Apple's hardware is maybe slightly above average - what's above it?

The only hardware I can think of that's consistently higher quality than Apple's is niche stuff at a much higher price point.

e.g. the Seneca keyboard, Sennheiser Orpheus headphones, etc.


Yeah, no way that is a serious statement. My Macbook Air from 2011 still works perfectly, and the original iPhone that I found stashed inside one of my dad's cabinets charged, turned on without issues and was fully usable as well. That's a device that's almost two decades old. If that's not hardware quality, I don't know what is.

I am forced to use a MacBook for work and I think it is shit. I particularly abhor its reflective screen and the toy keyboard. I hate that it has only four USB-C ports, which force me to carry a dongle.

I can't fathom why people like that crap apart from looking slick.

Mac OS is shit, and I would unironically prefer to be forced to use Windows at work.

Style over substance. Pure crap to me.


The only thing a vibe coder should be able to copyright, is the prompt text they wrote. Not the output of the LLM, only the text they wrote to instruct the LLM what to do. And even that is pretty iffy, because most of it like "put a button on a page" is not copyright-able.

Apple also shouldn't force you to use Safari if you install Chrome on iOS, but so far the DOJ hasn't followed through with the antitrust lawsuit started under the previous administration. I guess gold participation trophies are enough to work around lawsuits depending on who is in charge.

https://www.justice.gov/archives/opa/media/1344546/dl?inline


There is no way in hell I would give an LLM direct access to a database to write whatever query it wants. Just no way.

I'll create some safe APIs that I give the LLM access to where it can interact with a limited set of things the database can do, at most.
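A minimal sketch of that "safe API" layer, assuming a SQLite backend (table and function names are hypothetical): the LLM is only ever offered a whitelist of named functions, each backed by a parameterized query, and never sees raw SQL or the connection itself.

```python
import sqlite3

def make_safe_api(conn: sqlite3.Connection):
    """Build the only functions an LLM tool-call layer would be allowed to use."""

    def lookup_order(order_id: int):
        # Parameterized query: an LLM-supplied value cannot change the SQL shape.
        row = conn.execute(
            "SELECT id, status FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        return dict(zip(("id", "status"), row)) if row else None

    def recent_orders(limit: int = 10):
        limit = max(1, min(int(limit), 100))  # clamp to a sane range
        rows = conn.execute(
            "SELECT id, status FROM orders ORDER BY id DESC LIMIT ?", (limit,)
        ).fetchall()
        return [dict(zip(("id", "status"), r)) for r in rows]

    # This whitelist is the entire surface the LLM can touch.
    return {"lookup_order": lookup_order, "recent_orders": recent_orders}

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [(1, "shipped"), (2, "pending")])
    api = make_safe_api(conn)
    print(api["lookup_order"](2))  # {'id': 2, 'status': 'pending'}
```

The point of the design: even if the model is prompt-injected, the worst it can do is call one of these read-only functions with bad arguments.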


What makes you think the people dealing with the LLMs' code won't also be using LLMs to "deal with it"?

We're all now basically junior coders who have no idea what is in the codebase. Without LLMs, we won't be able to "deal" with any of it.

And I don't like it one bit.


Because you can’t assume everyone else is as indifferent about wasting people’s time as you are. Some of us don’t want to actively make our colleagues/customers miserable. That decision forces me to decide if I will be a part of the problem even if I generally do good work I can stand behind. You’re forcing me into a decision making process purely out of your desire to not do the bare minimum when working. That’s not right.

I also may be staring at consequences you are not. It’s passing the buck with no regard for who is left to deal with the results at the end.

What if we are working on, say, accessibility tasks? If I see your work won't actually help those in society who seriously need these features, what am I supposed to do? My kneejerk is 1) fix it (more work for me, selfish on your part), 2) kick it back to your lazy hands that clearly don't see this as an issue, or 3) send it up the chain where someone else has to ask these questions or - worse - it gets shipped and people who need this stuff are screwed. This is basic ethics.


Maybe you missed the part where I said "And I don't like it one bit"?

Yes, I think so as well. I didn't read your intent correctly. My bad.

That's all well and good, but what happens when the price to run these AIs goes up 10x or even 100x?

It's the same model as Uber, and I can't afford Uber most of the time anymore. It's become cost prohibitive just to take a short ride, but it used to cost like $7.

It's all fun and games until someone has to pay the bill, and these companies are losing many billions of dollars with no end in sight for the losses.

I doubt the tech and costs for the tech will improve fast enough to stop the flood of money going out, and I doubt people are going to want to pay what it really costs. That $200/month plan might not look so good when it's $2000/month, or more.


Why not try it yourself? Inference providers like BaseTen and AWS Bedrock have perfectly capable open source models as well as some licensed closed source models they host.

You can use "API-style" pricing on these providers, which is more transparent about costs. It's very likely to end up at more than $200 a month, but the question is: are you going to see more than that in value?

For me, the answer is yes.


What makes you think I haven't tried it myself?

The "costs" are subsidized, it's a loss-leader.


Bedrock and other third-party hosted open-weight model costs are not subsidized. What could possibly be the investment strategy for being one of twelve fly-by-night openrouter operators hosting the latest Qwen?

It's an important concern for those footing the bill, but I expect companies really in the face of being impacted by it to be able to do a cost-benefit calculation and use a mix of models.

For the sorts of things GP described (iptables whatever, recalling how to scan open ports on the network, the sorts of things you usually could answer for yourself with 10-600 seconds in a manpage / help text / google search / stack overflow thread), local/open-weight models are already good enough and fast enough on a lot of commodity hardware to suffice. Whereas now companies might say just offload such queries to the frontier $200/mo plan because why not, tokens are plentiful and it's already being paid for, if in the future it goes to $2000/mo with more limited tokens, you might save them for the actual important or latency-sensitive work and use lower-cost local models for simpler stuff. That lower cost might involve a $2000 GPU to be really usable, but it pays for itself shortly by comparison.

To use your Uber analogy, people might have used it to get to downtown and the airport, but now it's way more expensive, so they'll take a bus or walk or drive downtown instead -- but the airport trip, even though it's more expensive than it used to be, is still attractive in the face of competing alternatives like taxis/long-term parking.

Not focusing on pokemon for a start. Maybe use something more people can recognize and evaluate. I have zero knowledge of pokemon, I see it as a niche thing for ultra-nerdy people, and not something everyone is familiar with. Nothing about that test can be evaluated by anyone but a pokemon expert. Sorry, but pokemon isn't as mainstream as some people might think it is.

I think you underestimate how popular Pokemon is.

By most objective measures it's the largest entertainment franchise in all of history.

Would you also object to any other pop-culture reference for the same reason?


>I think you underestimate how popular Pokemon is.

No, I think you are overestimating how popular pokemon is.

>By most objective measures it's the largest entertainment franchise in all of history.

I don't care? Only a small set of pokemon fans would be able to gain anything from this "test".

>Would you also object to any other pop-culture reference for the same reason?

Yes.


I don't know but I won't touch anything Elon owns with a 10,000 foot pole.

Maybe in 2010 or 2015, but in 2026? Nobody is quitting their high paying job when the job market is this rough. A bubble has burst and there just are not the tech jobs out there that there used to be.

And employers know this, so they are enacting all kinds of draconian policies because they know employees know that they can't just leave the job and also keep their families fed.


The job market is at 2019 levels. This rhetoric is nice, but it doesn't stack up. Yes, it's not at 2021 levels, which is when they over-hired and took on a bunch of people they would not have hired before then.

This really depends on where you are. In the Bay Area it may be 2019 levels, in other parts of the country it is way worse than 2019.

The tech job market was about 2019 levels a year ago. It's materially worse now.

We are at 2001 dot-com bubble burst levels now, as far as I'm concerned.

If only there was some way where workers in this profession could form some type of JOIN (but like a vertical version?) between different sets of workers, even crossing company boundaries, so that workers could coordinate to ensure that everyone would be quitting at once, and therefore have any power at all to block anti-worker edicts.

So, like an intersection of workers?

They aren't in "our" control, they are in the control of the very wealthy, who will not let the thought of natural disaster and mass extinctions affect their bottom line until there's no resource left for them to exploit.
