Hacker News | didibus's comments

I'm not sure why so many people seem to think Apple software is terrible; I reckon it's quite good, personally. What's the issue with it?

Example of bugs, posted last month on HN: https://www.bugsappleloves.com/

Do you have any specific questions? The request for "what's wrong with Apple software" is about the size of a Wikipedia page to answer, and what people are saying in the comments seems quite clear to me. Since you're referring to comments, but none in particular, I'm not sure what parts you want an elaboration on. It might also help to just ask in a thread that has information you find unclear.

It has become very buggy in recent years. Lots of glitches. And the liquid glass fiasco didn’t help.

Historically there were so few bugs in Apple’s software that to encounter even one was a jarring experience. Now they’ve reverted to the mean, and it’s just as buggy as Windows or Android. So if you’re comparing with them, no big deal. But compared to Apple standards we’ve fallen a long way.


I see. As someone who recently came to Apple (about 2 years ago) from Windows and Android, Apple software seems pretty good to me, above both of those.

You mean the underlying code or the defect rate?

Just my experience with how responsive/buggy it is.

Ah, I read that as “started working at Apple.” Ha.

I use AirPods and I have a Google Pixel, Windows laptop, and so on.

That's a very weird choice. I can understand people buying them for the integration with the Apple ecosystem, but outside of it they're just dumb bluetooth earphones. There are better alternatives.

Instead of being curious why someone would make a choice you didn't, you chose to attack the choice! You might as well stick your fingers in your ears and go "na na na I can't hear you!" until you find a tribe of fellow haters.

In my experience, they work much better; their bluetooth connectivity and the way the two earbuds stay in sync is top notch. I also find their ergonomics the best for comfort, battery, how the case works, etc. And they have one of the best microphones for calls: how audible you are to the other person while not picking up too much noise.

This is tangential but somehow fits here. I tried multiple wired and bluetooth earphones/headphones with my Switch 2, and the only ones that gave sound that was acceptable to me were the AirPods. I had the Sony WHX… headphones, which I also tried using an aux cable, and I had a few aux wired earphones (Skullcandy and some others); all of their output was weak. I am not even sure how that's possible, and I don't understand sound/music quality that well, but I was genuinely surprised by this.

It’s a logical choice. They are good and not that expensive. The whole "they only fit with other Apple devices" is misleading. They work better with a Mac than a Windows PC, sure, but on that Windows PC they work as well as the really good alternatives. None of the supposedly better alternatives are better in every aspect. It’s a tradeoff.

>but on that Windows PC they work as well as the really good alternatives.

I know I'm rather late here and essentially just ranting a bit while waiting for GitHub to finally do as it is told, sorry :)

But that's exactly the (frustrating) issue - as I've laid out in another post - no, they don't work as well. It's not about others being better in every aspect (yeah, they usually aren't), but about expected baseline features that are simply missing.

(And to be honest, I've become a bit jaded regarding the resulting discussions :) No, that thing doesn't just suck in general. No, it's not impossible. It's Apple's implementation. External Display Support flashbacks incoming ;) )


I thought Claude Code and others do progressive disclosure for MCP now as well.

The article claims so:

> Smart Discovery: Modern apps (ChatGPT, Claude, etc.) have tool search built-in. They only look for and load tools when they are actually needed, saving precious context window.
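The kind of tool search the article describes can be sketched roughly like this. This is a toy illustration of the idea, not the actual MCP client API; every name here is made up:

```python
# Sketch of "progressive disclosure" for tools: the client keeps only cheap
# one-line summaries in context, and pulls a tool's full schema into the
# prompt only when a search matches it. All names are hypothetical.

class LazyToolRegistry:
    def __init__(self):
        self._summaries = {}   # name -> one-line description (always available)
        self._schemas = {}     # name -> full JSON schema (costly, loaded on demand)
        self._loaded = set()   # tools whose schemas are currently in context

    def register(self, name, summary, schema):
        self._summaries[name] = summary
        self._schemas[name] = schema

    def search(self, query):
        # Naive keyword match over the cheap summaries only.
        q = query.lower()
        return [n for n, s in self._summaries.items() if q in s.lower()]

    def load(self, name):
        # Only now does the full schema cost context-window tokens.
        self._loaded.add(name)
        return self._schemas[name]

    def context_tools(self):
        return {n: self._schemas[n] for n in self._loaded}


registry = LazyToolRegistry()
registry.register("get_weather", "Look up current weather for a city",
                  {"type": "object", "properties": {"city": {"type": "string"}}})
registry.register("send_email", "Send an email to a recipient",
                  {"type": "object", "properties": {"to": {"type": "string"}}})

matches = registry.search("weather")   # only "get_weather" matches
schema = registry.load(matches[0])     # only this schema enters the context
```

The point is just that the context cost of the `send_email` schema is never paid unless something actually searches for it.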


The article made a number of claims I know to be false, so I wouldn’t take it as gospel.


I think OP explicitly said that it's not about the affected party feeling offended, but about how it makes you look to choose such words.

Think bad PR, not actual people complaining due to being offended. Like, if I named my code library "dead babies", it's possible nobody who's dealt with a dead baby takes offense at it, but many people might find it off-putting that I've chosen to call it that. So if I were a high-value corporate entity that doesn't want bad PR, I might want to rename it.

I think in the end it's more of a: oh damn, did I just make a master/slave analogy for my database design? Maybe someone will find that offensive, and I don't want that, so I'll rename it as a precaution, even if no one has yet told me it's offending them.


> and not try to shoehorn sexism into everything

I don't think it's shoehorned in this case, it's scientific to also look at the historical context and causes.

The article tries to explain two things: what, scientifically, the pros/cons of the position are, but also why it's the default position used in most hospitals and has been for a long time now. And if it's true that it's due to either the convenience of the (at the time all male) physicians, or the weird pleasure of the King, then in both cases it would have been "for men".

That said, I'll grant you it doesn't seem to really know what caused the back position to become predominant; it mentions only two possible causes of many more, and leaves much to be desired as a thorough root-cause analysis, so it's probably partially thrown in just for better engagement with the article :p


> "Actually, I made a mistake. I meant Cursor."

ROFL


A 10x engineer who doesn't even know what tools they are using?


Amazing ending. I have been told "no one should be using copilot" - and I agree!


> Cursor

Might as well be Copilot at this point with how CLIs have been adopted.


Yeah, coffee was spilled here. What a twist.


> AGI is usually defined as the ability to do any intellectual task about as well as a highly competent human could

I think one major disconnect is that, for most people, AGI is when interacting with an AI is basically in every way like interacting with a human, including in failure modes. And likely, this human would be the smartest, most knowledgeable human you can imagine: the top expert in all domains, with the utmost charisma and humor, etc.

This is why the "goal post" appears to be always moving: the non-commoners who are involved with making AGI never want to accept that definition, which to be fair seems too subjective, and instead like to approach AGI as something different: it can solve some problems humans can't, when it doesn't fail it behaves like an expert human, etc.

Even if an AI could do any intellectual task about as well as a highly competent human could, I believe most people would not consider it AGI if it lacks the inherent opinions, personality, character, inquisitiveness, and failure patterns of a human.

And I think that goes so far as: a text-only model can never meet this bar. If it cannot react in equal time to subtle facial cues and sounds, if answering you and the flow of conversation is slower than it would be with a human, etc. All of these are also required before, as I see it, the commoner will accept that AGI has been achieved.


By that definition, does a human at the other end of a high-latency video call not have AGI because they can't react any faster than the connection's latency allows them to? From your POV, what's the difference between that and an AI that's just slow?


> does a human at the other end of a high-latency video call not have AGI because they can't react any faster than the connection's latency allows them to

Correct. A person who'd mentally operate that slowly would be considered to have some cognitive disability. For example, would likely not be allowed to drive a car.

You could be fooled into thinking it is a human behind a slow connection, but a layman would not consider it real AGI, in my opinion, since you have to handicap the human; it seems like lowering the bar just to pretend you reached AGI.

You might recognize it's pretty close to AGI, if it has all the other qualities, but it needs to also operate at a similar response time, uptime, and so on.

My point is, everyone who's not trying to build AGI defines it as: the same as an idealized smartest human would be, in every way. I truly think this is how most people imagine AGI in their head, and until you have that, they'll say it's not AGI, and industry folks will claim the goalpost keeps moving, when in reality they kept setting their own post.


It helps the model makers have a harness to optimize for in their next model version.

They'll specifically work to pass the next version of ARC-AGI, by evaluating what kind of dataset is missing that, if trained on, would have their model pass the new version.

They ideally don't train directly on ARC-AGI itself, but they can train on similar problems/datasets in the hope of learning skills that then transfer to also solving the real ARC-AGI.

The point is that, a new version of ARC-AGI should help the next model be smarter.


Someone has to explain to me exactly what is implied here. Looking at the prompt:

    USER:
    don't search the internet. 
    This is a test to see how well you can craft non-trivial, novel and creative solutions given a "combinatorics" math problem. Provide a full solution to the problem.
Why not search the internet? Is this an open problem or not? Can the solution be found online? Then it's an already-solved problem, no?

    USER:
    Take a look at this paper, which introduces the k_n construction: https://arxiv.org/abs/1908.10914
    Note that it's conjectured that we can do even better with the constant here. How far up can you push the constant?
How much does that paper help? It kind of seems like a pretty big hint.

And it sounds like the USER already knows the answer, from the way it prompts the model, so I'm really confused about what we mean by "open problem". I at first assumed a never-before-solved problem, but now I'm not sure.


> It’s called parenting.

Society has a responsibility and an interest in parenting your kids as well. That's why it mandates some level of education and offers parts of it for free. It's why it has stores/bars check ID for buying alcohol or cigarettes. It's why banks don't give loans or credit cards to kids. It's why kids who commit a crime are not treated like adults.

So I never really understood that argument that society shouldn't also be worried and want to put some measures in place to protect kids from social media harm.


I don't disagree. Society should reinforce what is good for it. But it should have reinforced parenting rather than introduce draconian controls on everything, because those always end up creating more problems. On top of that, while the current government may not be an authoritarian dictatorship, that is not guaranteed going forward, so any mechanisms the state builds must be compatible with that in the future. This is not.


I recognize the difficulty of the balancing act, and frankly don't have an answer for it.

And I agree that regulation that can help parents do parenting would be a good start; so many services have such poor parental controls, or they're behind an extra fee. And in general, parents are not given support, whether appropriate time off, financial help, or education, to be better parents.

That said, there are also so many bad parents, children without any, and so on, as well as external influences where parents can't reasonably be present 24/7, that I think there is also room for measures that don't rely on parents. And again, I recognize that some of the ideas on what those could be conflict with other ideals, and I have no solution for that. But I think we won't find a solution by simply denying what the other side cares about, which I often see happening: either one side claims privacy doesn't matter as much as those who care about it say, or they claim that children and their safety/health don't matter as much as those who care about them say. And I see both sides often pushing the problem away: parents should just not let kids do these things, or privacy-conscious folks just shouldn't expect privacy on major platforms and not use them.

