Hacker News | pell's comments

I tried that afterwards in a new session. Asking about the virus itself was fine but as soon as I asked about developing a vaccine, the chat got flagged again.

Does resuming with Sonnet help? I wonder if it's an Opus-specific limitation.

While I agree with you that agentic coding still has quite a way to go and doesn't always produce the quality I would want from it, I can say quite confidently that its baseline is well above some of the production code in many applications people use every day. It really isn't the case that code before agents was primarily written with taste and beautiful structure in mind. Your average code base is a messy hell of quick fixes that have turned into all kinds of debt over the years.

I took the previous post, with its mention of the ball of mud, to be about complexity.

“Taste”, I suspect, is used in many cases to give a name to the collection of practices and strategies developers use to keep their code and projects at a manageable level of complexity.

LLMs don’t seem to manage complexity. They’ll just blow right past manageable and keep on going. That’s a problem. The human has to stay in the loop because LLMs only build what we tell them to build (so far).

BTW, the essay that introduced the big ball of mud pattern to me didn’t hold it up as something entirely bad to be avoided. It pointed out how many projects — successful or at least on-going projects — use it, and how its passive flexibility might actually be an advantage. Big ball of mud might just be the steady state where progress can be made while leaving complexity manageable.


I think there are at least two factors behind ye olde ball of mud that LLMs should be able to help with:

1. Lack of knowledge of existing conventions, usually caused by churn of developers working on a project. LLMs read very quickly.

2. Cost of refactoring existing code to meet current best practices / current conception of architecture. LLMs are ideal for this kind of mostly mechanical refactoring.

Currently, though, they don't seem to be much help. I'm not sure if this is a limitation in their ability to use their context window, or simply that they've been trained to reproduce code as seen in the wild, with all its flaws.


Keeping complexity down is always a conscious act, because you need to go beyond the scope of the current problem and think about how it affects the whole project. It's not a matter of convention, nor of refactoring. It's mostly prescience (born of experience) that a solution, even if correct and easy to implement, will be harmful in the long term.

Architecture practices are how you avoid such harmful consequences. But they're costly and often harmful themselves, so you need to know which to pick and when to start applying them. LLMs won't help you there.


I agree. I do wonder if what I'm seeing is a limitation of the reasoning power of LLMs or if it's just replicating the patterns (or lack thereof) in the training data.

Indeed. On GitHub, I wonder what the proportion is of well-engineered big systems versus throwaway/student projects.

Even for the well-engineered stuff, I suspect there is a strong bias toward standalone projects over larger multi-component systems.


If you're traveling via Europe, you can clear US border control in Ireland (preclearance) right before departure and won't be checked again on arrival.

Very much. Try to start a union in China and see how communist that country is. China is essentially a right-wing hypercapitalist country run by a dictatorship.


To be fair, I don't know where starting a union under Mao would have gotten you.


AI will, for the most part, also just run some query or grep commands against it; it has no magical way of finding connections that a human can't.


No. But it can also automate some of the tedium away. Maybe there's some level of organic linking that you do yourself, but it can also go through and be more thorough. I guess it depends on where you derive the value: if you want to be the one discovering the connections and making them, then obviously less so.


Interestingly, contamination of the forensic equipment was considered early on. However, due to the geographic spread of the findings and initial negative control tests using fresh swabs, they ruled it out.


The way you describe the alternative option doesn't seem to be in very good faith.


Obama actually did this in 2016.

https://www.cnn.com/2016/08/03/politics/us-sends-plane-iran-...

> Washington CNN — The Obama administration secretly arranged a plane delivery of $400 million in cash on the same day Iran released four American prisoners and formally implemented the nuclear deal, US officials confirmed Wednesday.


Secretly-ish: it was announced publicly seven months prior (January 2016), and it was the first instalment of a legal settlement, not just some random (or ransom) payment.

Obviously Republicans decried it with bad faith bullshit because reality and sanity don't matter to them.


"reality and sanity"? The reality is the US gave them cash to improve their living standards and enrich their country, not their uranium.

With that money they chose to massacre their own people and fund terrorism across the region.

https://en.wikipedia.org/wiki/2026_Iran_massacres


Again, it was legally owed money: a decades-old arbitration claim from an arms deal.

Now we're spending a multiple of that literally every day for this war. And screwing the global economy in the process. Is this a better deal?


> With that money[...]

Delivered in August 2016.

> ...they chose to massacre their own people...

In 2025-26 according to your link.

I dunno, that's a big chronological gap to bridge for the implied causal relationship to work.


Account created 52 days ago and working overtime ever since to defend Trump and the regime. No submissions. Color me skeptical.


Skeptical of what, exactly?


While the optics of this may look bad, the same thing happens after armed conflict too; the US has spent boatloads of money in Afghanistan on top of all the military costs, and we're basically in the same situation as before.


Hmmm… $400M and we got a nuclear deal?

We're looking at 5x that this time around (so far), and no deal in sight. I'm not sure this admin is doing the smart thing.


We got a nuclear deal Iran violated so badly that we had to take military action.


And the bad faith keeps on rolling. We get it, you're a MAGA true believer, it's not like you're being subtle. But besides trying to troll the good people at HN, what is your point?


Was I not clear?

"I'm glad someone is finally doing something about it rather than sending palettes of cash on an jet to radical Muslims."

Point is you can mock Trump with your minesweeper game and jeer from the sidelines, but it's a better policy than sending bad guys money.


Yeah, war in the Middle East is great policy, very popular, and definitely what he campaigned on.

The corruption and incompetence are both unprecedented, but you keep doing your dance!


So far, the EU's track record on privacy is definitely a lot better though. Not saying it'd always stay that way of course.


The emphasis of "domestic" surveillance is definitely concerning.


>Not Google.

Google's main revenue source (~75%) is advertising. They will absolutely try to shove ads into their AI offerings. They simply don't have to do it this quickly.


But they haven't. Last time I checked, you don't get ads shoved in your face if you pay for the product.


That's true but OpenAI also isn't introducing ads to the Plus accounts as far as I'm aware.

