I tried that afterwards in a new session. Asking about the virus itself was fine but as soon as I asked about developing a vaccine, the chat got flagged again.
While I agree with you that agentic coding still has quite a way to go and is not always producing the quality that I would want from it, I can say quite confidently that its baseline is way above some of the production code in many applications many people use today. It really isn’t that code before agents was primarily written with taste and beautiful structure in mind. Your average code base is a messy hell full of quick fixes that turned into all kinds of debt over the years.
I took the previous post, with its mention of the ball of mud, to be about complexity.
“Taste” is used, in many cases, I suspect, to give a name to the collection of practices and strategies developers use to keep their code and projects at a manageable level of complexity.
LLMs don’t seem to manage complexity. They’ll just blow right past manageable and keep on going. That’s a problem. The human has to stay in the loop because LLMs only build what we tell them to build (so far).
BTW, the essay that introduced the big ball of mud pattern to me didn’t hold it up as something entirely bad to be avoided. It pointed out how many projects — successful or at least on-going projects — use it, and how its passive flexibility might actually be an advantage. Big ball of mud might just be the steady state where progress can be made while leaving complexity manageable.
I think there are at least two factors behind ye olde ball of mud that LLMs should be able to help with:
1. Lack of knowledge of existing conventions, usually caused by churn of developers working on a project. LLMs read very quickly.
2. Cost of refactoring existing code to meet current best practices / current conception of architecture. LLMs are ideal for this kind of mostly mechanical refactoring.
Currently, though, they don't seem to be much help. I'm not sure if this is a limitation in their ability to use their context window, or simply that they've been trained to reproduce code as seen in the wild, with all its flaws.
Keeping complexity down is always a conscious act. Because you need to go past the scope of the current problem and start to think about how it affects the whole project. It’s not a matter of convention, nor refactoring. It’s mostly prescience (due to experience) that a solution, even if correct and easy to implement, will be harmful in the long term.
Architecture practices are how you avoid such harmful consequences. But they're costly and often harmful themselves, so you need to know which ones to pick and when to start applying them. LLMs won't help you there.
I agree. I do wonder if what I'm seeing is a limitation of the reasoning power of LLMs or if it's just replicating the patterns (or lack thereof) in the training data.
Very much. Try to start a union in China and see how communist that country is. China is essentially a right-wing hypercapitalist country run by a dictatorship.
No. But it also can automate some of the tedium away. Maybe there's some level of organic linking that you do, but it can also go through and be more thorough. I guess it depends on where you derive the value - if you want to be the one discovering the connections and making them, then obviously less so.
Interestingly, contamination of the forensic equipment was considered early on. However, due to the geographic spread of the findings and initial negative control tests using fresh swabs, they ruled it out.
> Washington (CNN) — The Obama administration secretly arranged a plane delivery of $400 million in cash on the same day Iran released four American prisoners and formally implemented the nuclear deal, US officials confirmed Wednesday.
Secretly-ish - it was announced publicly 7 months prior (Jan 2016) and it was the first instalment of a legal settlement, not just some random or ransom payment.
Obviously Republicans decried it with bad faith bullshit because reality and sanity don't matter to them.
While the optics of this may look bad, the same thing happens after armed conflict too; the US has spent boatloads of money in Afghanistan on top of all the military costs, and we're basically in the same situation as before.
And the bad faith keeps on rolling. We get it, you're a MAGA true believer, it's not like you're being subtle. But besides trying to troll the good people at HN, what is your point?
Google's main revenue source (~75%) is advertising. They will absolutely try to shove ads into their AI offerings. They simply don't have to do it this quickly.