I try hard not to care, but subconsciously, spelling errors and grammar issues scream low-quality work to me. They're the kind of mistakes that are easiest to correct, and they didn't bother.
The phrase "missing comma" is missing an article: you need "a" or "the" before it. As a result, when reading your comment, I subconsciously think of it as low quality.
But it’s okay. HN comments aren’t supposed to be high quality anyway. I know mine aren’t. But official product documentation ought to be.
Between you, me, and the Deepseek team, so far as I'm aware, only one entity has caused the Western frontier model companies to panic by delivering an open model that competes far more cheaply, to the point where people are running versions of it at home.
So they spelled "software" wrong. So what? Outside of this being the mental equivalent of a too-scratchy sweater for the kinds of people sensitive to that sort of thing, I don't see why it matters.
Those of us who have spent a lot of time programming with non-native English speakers (the majority of software engineers on Earth) learned long ago that English ability has no correlation with engineering ability.
It may be a sign DeepSeek isn't "only for" Americans. Billions of non-native speakers communicate in "flawed" versions of English, and similarly for other languages. Circling back to polish the instructions for the picky among the Americans... hmm.
If it tickles anyone's subconscious feelings, it would be their internal guiding myth of exceptionalism.
With their recent forays into authoritarianism, it's becoming ever harder to paper over the reality.
There’s no exceptionalism. I’m not even American. I just happened to have a string of English teachers in high school who rejected grammar mistakes in student essays with the same vigor they rejected bad arguments, logical fallacies, and the rest. It’s a classical-style education: the trivium comprises grammar, logic, and rhetoric, so that was how the teachers evaluated student essays.
I despise American exceptionalism myself. This is entirely an issue about the quality of the language, not the nationality of the person behind it.
This is cool! I ended up also inventing my own syntax to place at the top of one-off scripts to specify deps. (For single-file Python scripts, vs one with a full project dir that has pyproject.toml) I will adopt this instead.
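For anyone who hasn't seen it, the standardized version of this idea is PEP 723 inline script metadata: a comment block at the top of a single-file script that runners like `uv run` and `pipx run` read in order to install dependencies on the fly. A minimal sketch (the `requests` entry is just an illustrative dependency, and the script body is a placeholder):

```python
# /// script
# requires-python = ">=3.9"
# dependencies = [
#     "requests",
# ]
# ///
# The comment block above is PEP 723 inline metadata: runners such as
# `uv run` or `pipx run` parse it and install the listed dependencies
# into a throwaway environment before executing the script.

def main() -> str:
    # Placeholder body; a real one-off script would import and use
    # the declared dependencies here.
    return "hello from a self-contained script"

if __name__ == "__main__":
    print(main())
```

With this header, `uv run script.py` resolves the dependencies automatically; no pyproject.toml or project directory needed.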
Sounds a lot like paying for online ads: they supposedly don't work because you're not paying enough, when in reality bots, scrapers, and now agents are just running up all the clicks.
You pay more to try and get above that noise and hope you'll reach an actual human.
The new "fast mode" that burns tokens at six times the rate is just scary, because that's what everyone will soon say we all need to be using to get results.
Here I am mostly writing code by hand, with some AI assistant help. I have a Claude subscription but only use it occasionally, because it can take more time to review and fix the generated code than it would to hand-write it. Claude only saves me time on the minority of tasks where it's faster to prompt than to hand-write.
And then I read about people spending hundreds or thousands of dollars a month on this stuff. Doesn't that turn your codebase into an unreadable mess?
I'm not getting results. That's the point. Claude doesn't fucking work without human intervention. When left to its own devices it makes bad decisions. It writes bad code. It needs constant supervision to stop it from going off the rails and replacing working code with broken code. It doesn't know what it's doing!
It's about as far as you can get from being able to work independently.
Yegge is an entertainer. Gas Town is performance art, it's not meant to be taken seriously.
Why is everyone obsessed with Mac Minis? They're awesome, but for the work these people are attempting to do? It just seems... nonsensical. Renting a server is cheaper and still just as "local" as any of this (they want "self-hosted"; I don't think anyone cares about local. Like, are people air-gapping networks? lol).
And a senior director at Nvidia had several Mac Minis? I really gotta imagine a Spark is better... at least it'll be a bit smarter of a cat. (I'm pretty suspicious he used an LLM to help write that post.)
It seems like the monkeys-and-ladder story. Someone probably just had one sitting around and it worked, or needed to do something Apple-specific, and that message got lost along the way.
I've been thinking about this recently, and it seems like the most enthusiastic boosters always suggest that any difference in results is a skill issue. But I feel like there are four factors that multiply out to influence how much value someone gets:
- The quality of model output for _your particular domain / tech stack_. Models will always do better with languages and libraries they see a lot of than with esoteric or proprietary ones.
- The degree to which "works" = "good" in your scenario. For a one off script, "works" is all that matters, for a long lived core library, there are other considerations.
- The degree to which "works" can be easily (better yet, automatically) verified.
- Techniques, existing code cleanliness, documentation etc.
Boosters tend to lay all the different experiences at the feet of this last factor, yet I'd argue the others are equally significant.
On the other hand, if you want to get the best results you can given the first 3 (which are generally out of one's control) then don't presume there's nothing you can do to improve the 4th.
Theoretically I'd want a totally different model cross checking the work at some point, since much like an individual may have blind spots, so will a model.
Not really. Sure, you can find something on eBay, but some parts can be hard to find (I collect old SPARC machines).
More importantly, you can't go back to a time when you could interact with the world using the older machines. My SPARCstation 10 ran a web browser marvelously in 1993, but that combination isn't going to browse any website today.