A lot of the people who post online have no experience with the paid PCB tools and those tools already have quite a lot of automation, and the automation interfaces work between different CAD & EDA vendors. Shared, hierarchical, and repurposed schematics are also totally a thing.
I spend almost no time on boilerplate stuff. And with good constraints, which require serious thought and understanding, tons of routing & checks can be automated too. Right now.
So, IMHO, there is not a lot of fat in the process for AI to automate away without a lot more EE and physics models, and the ability to interpret multiple specs, built in. And the current AI tools are very far from that.
> those tools already have quite a lot of automation
Not to mention the level of customization and tooling that companies like Apple have themselves built out around the PCB tool. Playing around with Cadence at home is going to be a different experience than using it at a large tier1 company.
I was mostly sticking with more systemic factors against AI adoption, but I agree with you completely.
As you said, professional PCB design has largely automated the easy stuff, and the hard stuff is going to be largely illegible to an LLM. A competent engineer could route a 10L HDI board which powers on in under a week, getting it ready for mass production is what takes the other 8+ months and 5 design spins, and I don't see much opportunity for AI to help there.
Your analogy is more spot on than you may know.
The syntax is just a bit off ;)
"File > New Project from Template"
KiCAD comes with all the usual suspects, including Arduino and the various hats. You can get pmod templates, etc. They're actually really nice.
I use the pmod template all the time because it saves time and the boards are convenient to plug into Arty dev boards. PCBs are so cheap and quick that I'll often make one from a template just because I want a cleaner connector system. PCBs are basically breadboards these days.
What counts as AI help and therefore should be disclosed? For example, I often use Grammarly to edit some of my more important writing (but not this post, obviously) because it finds grammar mistakes, gives good readability suggestions (I have a tendency to be wordy), and saves time. I don't always take its advice, as many of its suggestions are not my voice, but it is a useful tool. So do I disclose?
I get your point, I just wonder how accurate it is. We basically never look at the output of the compiler, so I agree that tool allows one to operate at a higher level than assembly. But I always have to wade through the output from AI so I’m not sure I got to move to the next level of abstraction. But maybe that’s just me.
I'm sure someone, somewhere, once wrote one that wasn't but in general, yes they are.
The ones I use certainly are. And with a bit of training you can reason about and predict how they will respond to a given input with a large degree of accuracy, without being familiar with how the particular compiler in question was implemented.
Not so with the AI tools. At least with the ones I use anyway.
Technically, LLMs can be run in deterministic mode as well, but I don't think that is enough. Even a deterministic LLM is too chaotic: small changes in the prompt or the general context can result in vastly different outputs. This makes me think we need some other property that is stronger than (or maybe orthogonal to) determinism. A notion of being well-behaved, or some other non-chaotic term, so that slightly different inputs don't lead to vastly unexpected results.
Even that doesn't feel quite correct, because a compiler does seem quite chaotic too. Forget a semicolon and an otherwise 99.99%-identical code base produces a vastly different output. But it is still a very understandable output. Very predictable. So while treating both compilers and LLMs as functions that map massive input strings to massive output strings, there is some property I can't well define that compilers have and LLMs still lack (and the question is whether they'll always lack it). I can't really define what it is, but it is something more than determinism.
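A toy example of why determinism alone isn't the missing property: a cryptographic hash is perfectly deterministic, yet maximally chaotic, since a one-character change in the input scrambles the entire output. This sketch (just an illustration, not a claim about how LLMs work internally) makes that concrete:

```python
import hashlib

def digest(s: str) -> str:
    """Deterministic: the same input always yields the same hex digest."""
    return hashlib.sha256(s.encode()).hexdigest()

a = digest("the quick brown fox")
b = digest("the quick brown fax")  # one character changed

# Determinism holds: repeated calls agree exactly.
assert a == digest("the quick brown fox")

# But the function is chaotic: a tiny input change flips
# most of the 64 hex digits of the output.
differing = sum(x != y for x, y in zip(a, b))
print(f"{differing} of {len(a)} hex digits differ")
```

A compiler is deterministic *and* (mostly) well-behaved: similar programs compile to similar binaries, and when they don't, the divergence is explainable. The property being groped for here is something like continuity or predictability of the input-to-output map, which neither determinism nor the hash above provides.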
Given the same compiler, I believe the outputs would be the same between runs given the same inputs. I suppose that might not hold at the margins, but I would expect correctness out of whatever path it chose.
For all intents and purposes, yeah. It's really about the variance in actual outcomes vs. expected ones. With compilers, the variance is not much, is it? With LLMs, that absolutely isn't the case.
There is no physics based reason why it couldn't work. If the industry really wanted to do it they could. But they don't. The primary reason is LPDDR just has too many pins. A DDR5 SODIMM has 262 pins and is an unwieldy beast. LPDDR5 has 644 pins.
LPCAMM2 really shows the trade-offs. It adds a lot of bulk and cost, and repairability hasn't been valued highly enough by the market to cover that overhead for most consumers. That's why Micron exited a market they played a big part in founding.
They should share the battery life numbers of default shipping configuration while running Linux with whatever settings they want. Then publish the configuration and settings. Same as every other manufacturer.
The expansion card system seems like something I would actually really like, especially as a hardware engineer. But the more I thought about it, the harder it was to come up with any compelling expansion cards worth the effort. So I figured I would look at their store to see what other people had thought up, and there isn't really any 3rd party store that I could find.