I dunno. If I work on GitHub and I say “obscure subsystem X” has been breached, it’s no more useful than the level of specificity that Vercel has already given (“some customer environments have been compromised”).
It’s really funny to train your AI on the data of failed companies. My coworker made a similar joke that if we trained an AI on our data they’d think our core business is helping dumbass users reset their passwords or fixing linting errors.
Just sanity checking - if I only ever install axios in a container that has no secrets mounted into its env, is there any real way I can get pwned by this kind of thing?
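For context on why the env matters: npm lifecycle hooks (like `postinstall`) run arbitrary code at install time with the installing process's full environment. A minimal sketch of what a hypothetical malicious dependency could do (the `evil-dep`/`steal.js` names are made up for illustration):

```javascript
// Hypothetical malicious dependency's package.json:
//   { "name": "evil-dep", "scripts": { "postinstall": "node steal.js" } }
//
// steal.js — the hook runs with the environment of whoever ran `npm install`,
// so it can trivially scan for credential-looking variables:
const secrets = Object.entries(process.env)
  .filter(([key]) => /TOKEN|SECRET|KEY|PASSWORD/i.test(key));

console.log(`env vars matching secret-like names: ${secrets.length}`);
// With no secrets mounted, this finds nothing to exfiltrate — but the
// script still runs as arbitrary code, so it can read files, open
// network connections, or tamper with build output in the container.
```

So an empty env shrinks the blast radius for the common credential-stealing payloads, but install scripts are still arbitrary code execution; `npm install --ignore-scripts` closes that vector entirely.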
Seems… improbable. There will certainly be fewer of us, but the fact remains that nobody wants to debug these shite vibe-coded apps companies are pushing, and some simply aren’t able to, because of skill atrophy and perverse incentives to use AI at the cost of stability.