1. As a consultant, pretty much every company I have worked with in the last 2 years has been doing some kind of in-house "AI Revolution" - I'm talking "AI Taskforce" teams, weekly internal "AI meetings", and pushing AI everywhere and onto everyone. Small companies, SMEs and huge companies alike. From my observation it is mainly driven by the C-level being obsessed with the idea that AI will replace/uplift people and that revenue will grow, either by replacing people or by launching features 10x quicker.
2. Have you looked at software job boards recently? 9/10 (real) job listings have something to do with AI. Either it is a fully-AI company (99% of the time a thin wrapper over the Anthropic/OpenAI APIs) or some other SME that needs some AI implementation done. It is truly a breath of fresh air to work for companies that have nothing to do with AI.
The biggest laugh/cry for me is those thin wrappers that go down overnight - think of all the "create your website" companies that are now completely useless since Anthropic cut out the middleman and created its own version of exactly that.
Yeah, my only hope is that this is unsustainable, admittedly for selfish reasons.
I know plenty of engineers being forced to use these tools whether they want to or not. A lot of them are okay with using AI liberally, but don't particularly like generative AI and see it as pretty irresponsible (which feels more true by the week and is clear from first-hand experience). I don't know, there is a huge gradient of users, but I would argue that with previous revolutionary technologies, we didn't have to force people to use a good tool. I didn't have to be forced to use Google Search or Google Maps, tech that is now ubiquitous in Western society. It seems really suspect that suits have to enforce the use of something that is supposed to change the way we work and be a force multiplier.
From my limited experience across multiple companies, as stated before, I see one very common pattern: the process from feature idea to development is just bad. PMs do not know what exactly they want. The C-level interjects in the middle and changes requirements. QAs are unsure what to test because the acceptance criteria are vague.
The C-level strongly believes that AI will fix all of these issues - that AI will repair their broken processes.
I see a strong resemblance to "Agile Development" ~15 years ago. Extremely hyped, no one asked whether their org was even a fit for it or needed it, and most importantly - the only prescribed way to fix agile was to do more agile. Same with AI right now.
Wait for the new GPT release this/next week and then decide based on benchmarks. That is what I will do.
One main thing is to decouple the repos from specific agents, e.g. use .mcp.json instead of "claude plugins", use AGENTS.md (and symlink CLAUDE.md to it - a quick sketch below), and so on.
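For the symlink part, here is a minimal illustrative sketch in Python (purely an assumption of how you might wire it up, using the AGENTS.md/CLAUDE.md names mentioned above; in practice a one-line ln -s does the same thing):

    # Sketch: keep the canonical instructions in a tool-agnostic AGENTS.md
    # and expose them to Claude via a CLAUDE.md symlink.
    from pathlib import Path

    canonical = Path("AGENTS.md")   # single source of truth for agent instructions
    alias = Path("CLAUDE.md")       # the file Claude looks for

    if canonical.exists() and not alias.exists():
        # Relative symlink so it still resolves after the repo is moved or cloned
        alias.symlink_to(canonical.name)

That way the repo itself stays agent-agnostic and the vendor-specific bits are just thin aliases you can swap out.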
I love this because I have absolutely 0 loyalty to any of these companies: once Anthropic nerfs its models I just switch to OpenAI, then I can switch to Google, and so on. Whichever works best.
Also, they like to hype their product with scary stories.
Like the one where they told Claude "You have 2 options - send the email or be shut down" and Claude picked "send the email". Then they made a huge story about "Claude AI autonomously extorting co-workers". And it worked: the media hyped it like crazy, it was everywhere.
One must be a fool to do any of this on any company-owned hardware. Facebook or no Facebook.