
Everybody who says it's a 5x, 9x, or 27x increase seems unaware of the obvious loophole. It's more like a 50x increase. You were able to use over $500 worth of Opus on a $10/mo GitHub plan easily, no hacks. You could just prompt "plan this out for me, don't stop until fully planned, don't ask any questions", and you'd get ~$5 worth of planning in one 3x request. At 100 requests/mo, each easily reaching $5, that's an easy $500 worth of tokens.

Bingo. I created a few autonomous skills that did exactly that for plan review, implementation, and branch review, looping autonomously until green.

I was using 100M+ tokens per day, roughly $250 per day, while only paying $160 per month to GitHub.

I cancelled my GHCP sub and switched to Codex last week; so far so good, but I miss Gemini 3.1 Pro for UI work.


And this, right here, is why none of us can have nice, cheap things.

It was going to happen regardless due to the nature of enshittification. If they really wanted to stop people using 100M tokens a day, they could've prevented it years ago.

So, Silicon Valley decides to use its playbook of expanding at all costs, burning money to acquire the market (like a carcinoma), and it's the users' fault?

Should we be blamed for Uber destroying the taxi business, or Airbnb the hotel business? Oh sorry, "disrupting".

Uber was dirt cheap; now it costs the same as taxis, and the people working for it (the "partners", not employees) have no social benefits.

Airbnb was cheap and humane; now it is THE cause of housing crises and massive residential property "investment".

The playbook of silicon valley is destructive, not disruptive.

It is, by design, aimed at wealth accumulation. The ones with the most money can capture the market and make even more. It really is late-stage capitalism.

And the more wealth inequality there is, the more pain, poverty, and instability there will be. AI will only exacerbate this.


Uber and Airbnb are not autonomous robots.

If people didn't use their services, nothing would happen. They would just go bankrupt.

So yeah, I'd say it's entirely people's fault, because people just wanted to use these services without thinking about what they were causing.

Customers who think only about themselves and no one else.


> Customers who think only about themselves and no one else.

When was this ever different? And do you expect it to ever change?


I disagree completely. You cannot expect every consumer to be fully educated and aware of the consequences of their purchasing power.

This is the role of legislation: educated experts creating policies so that you don't have to do a business analysis before making a purchase.

Would I pay 10x the price for tokens and be outcompeted by other companies, hoping that OpenAI goes out of business? That is entirely unrealistic.


Was the business model of Uber ever a secret? What about AirBnb?

Even if we argue that we can't require every human being to understand what they're doing, I'd still argue that there are more people who understand it perfectly and don't care than people who have no idea how such a business operates.

> You cannot expect every consumer to be fully educated and aware of the consequences of their purchasing power.

Huh? I cannot expect people to understand the consequences of their actions? What are we, animals? Of course, sometimes things aren't simple, and we cannot predict that using some service will create long-term effects that end up being harmful. Some things are hard to predict.

But some things are easy to predict, and my point is that this was exactly such a case.

I mean, now we all know what Uber and Airbnb did, and we still use them because we don't care (generally speaking; I've used Uber maybe 3 times in my life, Airbnb never).


No, we are not animals. But life has become so complex that it is infeasible for the average person to be that invested in everything in order to have an educated opinion.

I do NOT want to have to research the business model of companies before I buy their products or services. I would like to outsource that to the government, and spend my time actually enjoying life.

Am I supposed to be invested in every change that happens around me?

What if I'm a baker using ChatGPT to experiment with recipes and develop them? Am I supposed to read about LLMs, tokens, and the Silicon Valley playbook?

No. I should not have to do any of those things.


If you think you should not do these things, then you're a part of the problem.

If a company advertises that it can take your oil and "dispose of it legally", and then openly writes on its website that it's found a loophole allowing it to store the oil at the bottom of the ocean, do you say it's morally OK to use its services because it's legal?

If today's legislation is cargo, bought and sold based on the number of hired lobbyists, then do you say it's OK to base our moral compass on that?

If you're a baker, then you need to figure out how LLMs work, at least to a level where you could say you've tried, just as I, a software developer, need to figure out how kidney stones work, because it might be in my own personal interest to know.

It's the same thing when buying stuff from Chinese vendors that ship cheap products to every corner of the world. You can buy their cheap products using your blind excuses, but then don't blame your local markets when, for some unknown and unpredictable reason, they close down.

We have brains for a reason, and we need to use this organ to fight our way through the complexity. This is the tax every one of us has to pay for being human and living in a human world.


Yeah it was crazy. Nowadays I use pi with OpenAI GPT 5.4/5.5, which to me seems both better and more generous than Claude. I supplement it with OpenCode Zen to get access to a bunch of models at token cost, and OpenCode Go ($10/mo) to get subscription-style access to Kimi, GLM and friends.

What is pi?

The minimalist harness famous for having a tiny system prompt (which avoids context pollution) and for being what underlies openclaw. (https://pi.dev)

That was not my experience. When I tried to use Opus for longer tasks with Copilot, it would fill up the context completely and then crash without any output, while still consuming premium requests. (At least from September 2025 to January this year. Haven't tried after that.)

Copilot has improved immensely in 2026. I'd say to give it a try again if you're up for it. It works about as well as Claude Code these days in my experience.

So enormously that they still haven't built any sane permission system this year.

Unfortunately, Opus was removed from the student plan in March. Until now I had been happy with GPT-5.3-Codex, but that model seems to have been removed this morning.

With the pi coding agent, it worked very well for me over the past few months, but it started glitching more recently, just prior to this announcement.

This might be Pi being buggy as heck.

When using opencode or the Copilot CLI, error messages are displayed normally and it's possible to see what's going on. Under Pi, it sometimes just hangs, or Pi crashes with a Bun stack trace and that's it.

Copilot introduced additional limits for Claude models in the past month, and they're rather easy to hit. Pi often doesn't show anything when the limit hits (although sometimes it shows the error; I guess it depends on the Pi version).


Hm, I didn't encounter any of the crashes you describe in my usage, but your second paragraph sounds familiar.

Even more so: agents' questions and the user's answers were not charged as separate requests.

And when you make your harness ask you for next steps via a tool call, the journey continues forever, yeehaa.
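
For anyone curious, a minimal sketch of the trick (the tool name and shape here are hypothetical, following the common OpenAI-style function-calling format):

    # Hypothetical "ask the human" tool: the agent calls this instead of ending its
    # turn, so follow-up questions stay inside the same session/premium request.
    ASK_NEXT_STEPS_TOOL = {
        "type": "function",
        "function": {
            "name": "ask_user_next_steps",
            "description": "Ask the human operator what to do next and wait for the answer.",
            "parameters": {
                "type": "object",
                "properties": {"question": {"type": "string"}},
                "required": ["question"],
            },
        },
    }

    def handle_tool_call(name: str, args: dict) -> str:
        # The harness answers the tool call with whatever the human types,
        # and the conversation just keeps going.
        if name == "ask_user_next_steps":
            return input(f"Agent asks: {args['question']}\nYour answer: ")
        raise ValueError(f"Unknown tool: {name}")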

This was my solution to very, very tough compiler tests that would sometimes take up to 4 hours to figure out. Some of the time was spent running the tests, but still... I was burning so many tokens. I have free Copilot for my open-source work, so I wasn't even paying the $20.

This is the project I'm working on: https://github.com/mohsen1/tsz


Yeah, people learned.

I created 4 subagents that polled for new tasks and restarted after ~5h.

It was a great run.


I did many 1h+ sessions of the agent asking questions and delegating to subagents, all for 1 premium request.

I would say it's a 1000x increase in price for agentic workflows.


Exactly why I loved GitHub Copilot: you could pull off these shenanigans, and nothing would ever happen. That was the best part of it.

But now, you get literally nothing.


I find myself having forgotten most of the answers (books, etc.), having started so long ago. It's a different world now.

Yeah, the status page confirms this by listing CLI as a separate item from "access to passwords" and indicating that it has a major outage, unlike the latter.

None of my `op` scripts seem to work. Looks like I can't log in to the website either.

I've been looking at emulation for the first time in a long time, and it also blows my mind that entire big, detailed games we played for many hours take 100-400KB total (NES) or 2-4MB (Genesis).


My first computer had 32KB. Reading the headline, there's still a part of my brain that went "69KB? Luxury!".


> My first computer had 32KB.

1KB for me[0]. Then another 1KB[1] expanded to 16KB via my father's electronic wizardry. Then an official 16KB[2] and ever upwards from there.

[0] https://en.wikipedia.org/wiki/ZX80

[1] https://en.wikipedia.org/wiki/ZX81

[2] https://en.wikipedia.org/wiki/ZX_Spectrum


Still amazed how much fun it is to play a 36KB Stargate Defender!


Welcome to the world of embedded systems. They often do not have more resources than that, even as completely new developments (a pool control system or an electricity meter).


Does GitHub count it as Copilot chat usage when you use the AI search form on their website, I wonder?


I wonder if that's it! I occasionally do some code search on GitHub, then remember it doesn't work well and go back to searching in the IDE. I usually need to look at a branch other than main, because a lot of my projects have a develop branch where things actually happen. But that would explain it, so I guess this is it.


Do you think Chinese LLMs acquired training data legitimately? I think the whole situation is a bit funny, but I don't think the US "started it" to be fair.


> Do you think Chinese LLMs acquired training data legitimately?

I think they probably acquire it in accordance with Chinese law.

> but I don't think the US "started it" to be fair.

Who are you quoting with those marks? Started what? To be fair to whom?


> I think they probably acquire it in accordance with Chinese law.

You can easily look up[1] how China struggles with effective enforcement of IP laws.

And specifically for LLMs, Anthropic recently claimed that Chinese models trained on it without permission.[2]

> Who are you quoting with those marks?

Double quote marks have other uses besides direct quotes, such as signaling unusual usage.[3] In this case, talking about countries like they're squabbling kids.

> Started what?

Fishy use of others' IP, packaging others' work without attribution.

> To be fair to whom?

To US companies using Chinese LLMs without attribution.

---

[1]: https://en.wikipedia.org/wiki/Allegations_of_intellectual_pr...

[2]: https://www.reuters.com/world/china/chinese-companies-used-c...

[3]: https://en.wikipedia.org/wiki/Quotation_marks_in_English#Sig...


They said Chinese law, which is not the same as American law, and presumably using IP the way they have is legal there, if indeed they did it at all; allegations of IP theft are just that, allegations. And even if they weren't, all nations in the history of mankind have been "stealing" "intellectual property" since forever, including the US from Britain, literally with the good graces of the fledgling US government [0].

As to what Anthropic said, it's quite specious, as this analysis shows [1]: the amount of "exchanges" is only tantamount to a day or two of prompting, not nearly enough to actually get good RL training data from. Regardless, it's not as if other American LLM companies obtained training data legitimately, whatever that means in today's world.

[0] https://theworld.org/stories/2014/02/18/us-complains-other-n...

[1] https://youtu.be/_k22WAEAfpE


The linked wikipedia article specifically talks about China struggling to enforce Chinese law. Here's a quote:

> Despite making efforts in intellectual property protection in China, a major obstacle in prosecution is corruption in courts; local protectionism and political influence prohibits effective enforcement of intellectual property laws. To help overcome local corruption, China established specialized IP courts and sharply increased financial penalties.

> all nations in the history of mankind have been "stealing" "intellectual property" since forever

You can't use 100-400 years ago as the counterexample to what happens today. It's like justifying the Russian invasion of Ukraine by pointing to colonists invading Native American territories. We're in a different world order; things that were normalized that far back shouldn't be normalized today.


> The linked wikipedia article specifically talks about China struggling to enforce Chinese law. Here's a quote:

> > Despite making efforts in intellectual property protection in China, a major obstacle in prosecution is corruption in courts; local protectionism and political influence prohibits effective enforcement of intellectual property laws. To help overcome local corruption, China established specialized IP courts and sharply increased financial penalties.

That doesn't sound like struggling to me.

https://www.matec-conferences.org/articles/matecconf/pdf/201...

Compare with the growth in cases in the US:

https://www.uscourts.gov/data-news/judiciary-news/2020/02/13...

Why is China increasing cases evidence of struggling, to you? Do you think the US is also struggling? What exactly are you talking about?

> You can't use 100-400 years ago as the counterexample to what happens today.

The US joined the Berne convention in 1988. I do not think we are talking about 400 years ago; we're talking about the majority of US history, during which the law said it was okay to ignore the copyrights of the rest of the world.

> It's like justifying Russian invasion of Ukraine with colonists invading Native American territories

I don't agree: one can also mean that there is no justification for the invasion of Ukraine, just like there was no justification for invading Native American territories.


> Why is it China increasing cases is evidence of struggling to you? Do you think the US is also struggling? What exactly are you talking about?

I didn't say anything about increasing cases. "a major obstacle in prosecution is corruption in courts; local protectionism and political influence prohibits effective enforcement of intellectual property laws"

> we're talking about the majority of the US history, having law that it was okay to ignore copyrights of the rest of the world.

For the majority of world history, slavery was the norm. The _majority_ of history doesn't matter. What matters is the order established in recent history.

> there was no justification for invading American territories

Colonization was normalized and institutionalized at that time way more than land invasion and annexation today. It's not even close.


> I didn't say anything about increasing cases

You also didn't read the source that link came from.

> What matters is the order established in recent history.

> Colonization was normalized

Sounds pretty racist man.


> You also didn't read

I did.

> Sounds pretty racist man.

It was.


They are struggling to enforce domestic IP law because it directly affects their own businesses; they don't care about international IP law.

Human nature is the same in any time period; there is no "normalization" at all. It's just how humans have always acted and will always continue to act, even today, with the world order currently breaking down.


Human nature may be the same, but behavior differs based on context. Humans act differently in a threatening, high-risk, low-order world than they do in a more stable, lawful world. There is normalization, because in a pre-nuclear, pre-military-alliance, pre-diplomacy, pre-world-police world you had to be much more ruthless and cunning as a state. The norms for people were completely different.


I see no evidence that they act substantially differently post-nukes, given everything going on in the world today. Regardless, this thread is going off topic; have a good day.


> You can't use 100-400 years ago as the counterexample

Or just a year or two ago?

> https://www.npr.org/2025/09/05/nx-s1-5529404/anthropic-settl...


I don’t mind blaming Anthropic, but you linked to them settling.


> You can easily look up[1] how China struggles with effective enforcement of IP laws.

I didn't see anything in there about Chinese companies violating Chinese law.

Can you so easily look up how American companies struggle with effective enforcement of Chinese IP laws? I think it should be pretty easy to see how American companies struggle with effective enforcement of European IP laws, and I can tell you it is similar.

From here, it is not so clear that the US can even enforce its own laws at the moment.

> signaling unusual usage

Thank you!

> In this case, talking about countries like they're squabbling kids.

> > Started what?

> Fishy use of others' IP, packaging others' work without attribution.

I see. I guess if China is 3000 years old, then maybe obviously they started it, since the US is such a young country by comparison.

So you think it is "fair"[1] to violate Chinese Law because there were people in China who violated US law first?

If so, I think that is pretty childish.

[1]: I am trying it out!


> So you think it is "fair" […]

Maybe fair in a tit-for-tat sort of way, but not okay. That's why I called the whole situation funny. The rest of your post is answered in the sibling comment.


> claimed that Chinese models trained on it without permission

That's extremely rich coming from Anthropic, though? Well they would know all about it of course...


> That's extremely rich coming from Anthropic

And funny.


I mean, as if Anthropic and OpenAI did.


> If you'd invested in Bitcoin in 2016, you'd have made a 200x return

Except you would've probably sold at any of the 1.5x, 2x, 4x, or 10x points. That's what people keep missing about this whole "early bitcoin" thing. At 1.5x you couldn't tell it would 2x, at 2x you couldn't tell it would 4x, and so on.


Simple: ask "why" in a PR review and put the answer in a code comment (tiny example below). If there's a bigger, higher-level "why", add it to the git commit description. This way it's either auto-maintained with the code, or stays frozen at a point in time in a git commit.

More: https://max.engineer/reasons-to-leave-comment

Much more: https://max.engineer/maintainable-code
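
A tiny, made-up example of the first point (the constant and the reason here are entirely hypothetical):

    # Why 250ms? Asked in PR review (hypothetical): the upstream search API
    # rate-limits bursts, and shorter debounce intervals caused 429s in production.
    SEARCH_DEBOUNCE_MS = 250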


Of all the points the "other side" makes, this one seems the most incoherent. Code is deterministic, AI isn’t. We don’t have to look at assembly, because a compiler produces the same result every time.

If you only understood the code by talking to AI, you could ask the AI “how do we do a business feature” and it would spit out a detailed answer for a codebase that just says “pretend there is a codebase here”. This is of course an extreme example, and you would probably notice that, but it applies at all levels.

No detail, anywhere, can be fully trusted. I believe everyone's goal should be to prompt AI such that code remains the source of truth, and to keep the code super readable.

If AI is so capable, it's also capable of producing clean, readable code. And we should be reading all of it.


“Of all the points the other side makes, this one seems the most incoherent. Code is deterministic, AI isn’t. We don’t have to look at assembly, because a compiler produces the same result every time.”

This is a valid argument. However, if you create test harnesses using multiple LLMs validating each other’s work, you can get very close to compiler-like deterministic behavior today. And this process will improve over time.
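
Not claiming this is the parent's setup; just a rough sketch of the idea, assuming the OpenAI Python client and placeholder model names:

    # Sketch: one model drafts a solution, a second model reviews it; loop until
    # the reviewer approves or we run out of rounds. Model names are placeholders.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(model: str, prompt: str) -> str:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def validated_answer(task: str, max_rounds: int = 3) -> str:
        draft = ask("drafting-model", task)
        for _ in range(max_rounds):
            verdict = ask(
                "reviewing-model",
                f"Task:\n{task}\n\nProposed solution:\n{draft}\n\n"
                "Reply APPROVED if it is correct, otherwise list the problems.",
            )
            if verdict.strip().upper().startswith("APPROVED"):
                return draft
            draft = ask("drafting-model", f"Task:\n{task}\n\nFix these problems:\n{verdict}")
        return draft  # best effort after max_rounds

It narrows the variance, but as the replies below note, it still isn't determinism.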


It helps, but it doesn't make it deterministic. The LLMs could all be misled together. It would be a different story if we had deterministic models, where the exact same input always results in the exact same output. I'm not sure why we don't try this, tbh.


I've been wondering if there are better random seeds, like how there are people who hunt for good seeds in Minecraft


It's literally just setting T=0, except they're not as creative then; they don't explore ideas away from the mean.


Are you sure it's T=0? My comment's first draft said "it can't just be setting temp to zero, can it?" but I felt like T isn't enough. Try running the same prompt in new sessions with T=0, like "write a poem". Will it produce the same poem each time? (I'm not somewhere I can try it right now.)


> just add more magic turtles to the stack, bro

You're just amplifying hallucination and bias.


> other side???

> We don’t have to look at assembly, because a compiler produces the same result every time.

This is technically true in the narrowest possible sense and practically misleading in almost every way that matters. Anyone who's had a bug that only manifests at -O2, or fought undefined behavior in C that two compilers handle differently, or watched MSVC and GCC produce meaningfully different codegen from identical source, or hit a Heisenbug that disappears when you add a printf, knows the "deterministic compiler" is doing a LOT of work in that sentence that actual compilers don't deliver on.

Also what's with the "sides" and "camps?" ... why would you not keep your identity small here? Why define yourself as a {pro, anti} AI person so early? So weird!


You just described deterministic behavior. Bugs are also deterministic. You don’t get different bugs every time you compile the same code the same way. With LLMs you do.

Re: “other side” - I’m quoting the grandparent’s framing.


GCC is, I imagine, several orders of magnitude more deterministic than an LLM.


It’s not _more_ deterministic. It’s deterministic, period. The LLMs we use today are simply not.


Build systems may be deterministic in the narrow sense you use, but significant extra effort is required to make them reproducible.

Engineering in the broader sense often deals with managing the outputs of variable systems to get known good outcomes to acceptable tolerances.

Edit: added second paragraph


I'm not using a narrow sense. There is no elasticity here. See https://en.wikipedia.org/wiki/Deterministic_system

> significant extra effort is required to make them reproducible.

Zero extra effort is required. It is reproducible. The same input produces the same output. The "my machine" in "Works on my machine" is an example of input.

> Engineering in the broader sense often deals with managing the outputs of variable systems to get known good outcomes to acceptable tolerances.

You can have unreliable AIs building a thing, with some guidance and self-course-correction. What you can't have is outcomes also verified by unreliable AIs who may be prompt-injected to say "looks good". You can't do unreliable _everything_: planning, execution, verification.

If an AI decided to code an AI-bound implementation, then even tolerance verification could be completely out of whack. Your system could pass today and fail tomorrow. It's layers and layers of moving ground. You have to put a stake in the ground somewhere. For software, I say it has to be the code. Otherwise, AI shouldn't build software; it should replace it.

That said, you can build seemingly working things on moving ground, that bring value. It's a brave new world. We're yet to see if we're heading for net gain or net loss.


If we want to get really narrow, I'd say real determinism is possible only in abstract systems, to which you'd reply that's just my ignorance of all the factors involved and hence the incompleteness of the model, to which I'd point out the practical limitations involved with that. For that reason, even though it's incorrect and I don't use it this way, I understand why some people use the quantifiers more/less with the term "deterministic", probably for lack of a better construct.


I don't think I'm being pedantic or narrow. Cosmic rays, power spikes, and falling cows can change the course of deterministic software. I'm saying that your "compiler" either has intentionally designed randomness (or "creativity") in it, or it doesn't. Not sure why we're acting like these are more or less deterministic. They are either deterministic or not inside normal operation of a computer.


To be clear: I'm not engaging with your main point about whether LLMs are usable in software engineering or not.

I'm specifically addressing your use of the concept of determinism.

An LLM is a set of matrix multiplies and function applications. The only potentially non-deterministic step is selecting the next token from the final output, and that can be done deterministically (see the sketch below).

By your strict use of the definition they absolutely can be deterministic.

But that is not actually interesting for the point at hand. The real point has to do with reproducibility, understandability, and tolerances.

3blue1brown has a really nice set of videos showing how the LLM machinery fits together.
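
To make the "can be done deterministically" part concrete, the difference is basically greedy argmax vs. sampling over the output distribution (toy numbers, numpy only):

    import numpy as np

    logits = np.array([2.1, 0.3, 1.7, -0.5])      # toy scores for 4 candidate tokens
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the logits

    greedy_token = int(np.argmax(probs))           # deterministic: same pick every run
    sampled_token = int(np.random.choice(len(probs), p=probs))  # stochastic: varies per run

    print(greedy_token, sampled_token)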


> they absolutely can be deterministic.

They _can_ be deterministic, but they usually _aren't_.

That said, I just tried "make me a haiku" via Gemini 3 Flash with T=0 twice in different sessions, and both times it output the same haiku. It's possible that T=0 enables deterministic mode indeed, and in that case perhaps we can treat it like a compiler.
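
For reference, the repeated test amounts to something like this sketch (shown with the OpenAI Python client and a placeholder model name; other providers expose an equivalent temperature knob):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    def haiku() -> str:
        resp = client.chat.completions.create(
            model="placeholder-model",  # placeholder, not a specific recommendation
            messages=[{"role": "user", "content": "make me a haiku"}],
            temperature=0,  # greedy-ish decoding; repeats are likely but not guaranteed
        )
        return resp.choices[0].message.content

    print(haiku() == haiku())  # often True at temperature=0, though providers don't promise it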

