I would argue it couldn't be more different. I can dive into the source code of any library and inspect it. I can assess how reliable and how popular a library is. Bugs aside, libraries are deterministic. I don't see why this parallel keeps getting made over and over again.
I can dive into the source code of LLM-generated code too. Arguably it's even easier, because you have tools to document it more thoroughly than a third-party library you merely consume.
One path is to try to transition within your current org. This should be particularly easy in startups, which is where you say your experience is, since they have a lot less rigidity in roles/responsibilities and you could contribute to infrastructure efforts to build up context/knowledge.
From there, you can leverage that into an "official" infra role.
I do wish they'd focus on closing the gap to JetBrains by implementing the QoL features that are missing. I understand they have to do what VCs want to see, but this agentic stuff is so tiring.
I give them a try about twice a year. I write a lot of Rust which should be squarely in their wheelhouse.
This last time I was pleasantly surprised to find they mostly fixed their SSH remote editing support. But then it started truncating rustc inline error messages and I couldn’t figure out how to view the whole thing easily. When you’re just trying to get something done little bits like this can add up quickly. Punted back to Cursor for now.
I don't like the way remote editing works with plugins. IIRC, the remote agent pulls the plugins from the connecting client. I get why it's done like that, but I'd way rather have it go the opposite direction.
I want a setup where I can have an immutable devcontainer with local copies of everything I need to develop 100% offline: dependencies, tools, etc. Having my local editor pull plugins from a devcontainer for the project seems to make more sense to me.
I didn't dig in too much. Maybe there's a way to make it work somehow.
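For what it's worth, here's a rough sketch of the direction I mean, using the standard devcontainer spec (the image name, user, and comments are placeholders, not anything Zed or JetBrains actually ships today):

```jsonc
// .devcontainer/devcontainer.json — everything baked into the image at
// build time, nothing fetched from the connecting client at attach time
{
  "name": "offline-dev",
  "build": { "dockerfile": "Dockerfile" },
  // keep the running container immutable: read-only rootfs,
  // writable scratch space only under /tmp
  "runArgs": ["--read-only", "--tmpfs", "/tmp"],
  // tools, toolchains, and editor plugins live in the image itself,
  // so the container works 100% offline
  "remoteUser": "dev"
}
```

The missing piece is the direction of plugin flow: the editor would need to discover and load plugins from the container image rather than pushing its own copies into it.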
Yeah, Zed is why next year I might not renew with JetBrains at all. They hiked their price for AI stuff I didn't really ask for, especially when Claude Code does everything I want and need.
How long will this boring line keep getting posted? I literally use FSD every time I drive my Tesla and browse Twitter while it drives. At most I have to glance up every 3 minutes to avoid the alert, which I'm guessing they are obligated to show by law.
I'm in the same boat. Much like Elon also promised, I'm using my Tesla as a robotaxi when I'm not in it myself; having it earn passive income for me has been a massive success.
I have unfortunately found myself doing stuff like this too, although maybe not as egregious.
I think part of the problem is that our brains are wired to look for the path of least resistance, and so shoving everything into an LLM prompt becomes an easy escape hatch. I'm trying to combat this myself, but finding it not trivial, to be honest. All these tools are kind of just making me lazier week over week.
There’s some kind of new failure mode here. People seem to determine a tool’s applicability for a task by whether its interface allows for their request to be entered. An open ended natural language input field lets people enter any request, regardless of the underlying tool’s suitability.
I'm still struggling most with the fact that my day-to-day work involves a git first platform like GitHub.
Although jj does feel better as a VCS, working with git through it still feels like a chore. To be fair, I only gave it a day before going back to git.
Does anyone have any good resources on how to adapt a git workflow on a git-first hosting platform so it works smoothly while still reaping the benefits of jj?
Hmm, it was a while back so I'm struggling to recall, but I remember feeling like I was going against the grain of easily using GitHub. I followed this exact tutorial at the time, and it looks like there are now sections on how to work with GitHub.
Perhaps I need to force myself to commit for longer...
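If it helps anyone else trying this, the basic jj-on-GitHub loop I've seen looks roughly like the below (commands are from jj's own CLI; the repo URL, change description, and bookmark name are made up, and older jj releases say `branch` where newer ones say `bookmark`):

```shell
# Clone via jj's git backend; jj keeps a colocated .git repo,
# so GitHub tooling still sees a normal git repository
jj git clone https://github.com/example/repo.git
cd repo

# Start a new change on top of the remote main
jj new main
# ...edit files; jj snapshots the working copy automatically...
jj describe -m "Fix the thing"

# Point a bookmark (jj's branch equivalent) at the change and push it,
# then open the PR on GitHub as usual
jj bookmark create my-fix -r @
jj git push --bookmark my-fix
```

The friction mostly shows up after review: force-pushing rewritten changes is normal in jj, which GitHub's PR UI tolerates but doesn't love.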
Seems extremely disingenuous to say that one year ago models could barely write a working function. In fact, there were plenty capable of writing a working function with the right context fed in, exactly as today.