In a study like this, there’s also a difference in motivation. An AI will mechanically “take the study seriously.” I’m not convinced the doctors will.
But when making decisions about a real patient’s care, a doctor will be operating under different motivations.
They can also refer patients to a specialist, defer a diagnosis until they have more information, use external resources, and consult with other doctors.
Doctors aren’t chatbots. They are clinical care directors.
Presuming there are no issues with information leakage, it’s genuinely impressive that AI can achieve this level of success at a specific doctoring skill. That doesn’t make it a replacement for a doctor. It does make it a useful tool for a doctor or a patient, which is exactly what we’re seeing in practice.
PCR (the chemical reaction) isn’t near optimal, but thermocyclers (the device) are hard to improve on.
For PCR, one of the innovations I’m excited about is the development of PCR that preserves chemical properties of the input DNA, like CG methylation. This is a critical epigenetic mark on cytosines (C DNA bases). When a cytosine is followed by a guanine (G base), forming the sequence CG, the complementary sequence on the opposite strand is also CG. There’s an enzyme called a maintenance methyltransferase that copies CG methylation from the template ssDNA strand to the newly synthesized strand during DNA replication.
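To make that symmetry concrete, here’s a tiny Python sketch (my own illustration, not part of the original point): CG is its own reverse complement, so a methylated CG on one strand always sits opposite a CG on the other strand, which is exactly the site a maintenance methyltransferase acts on.

```python
# Why CG methylation can be mirrored across strands: the dinucleotide CG is
# its own reverse complement, so a methylated CG on one strand always faces
# a CG on the opposite strand.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence (uppercase A/C/G/T)."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

print(reverse_complement("CG"))  # "CG" -- self-complementary, mark can be copied across
print(reverse_complement("CA"))  # "TG" -- a typical dinucleotide is not
```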
Normally this mark gets diluted into invisibility during PCR, because there’s no maintenance methyltransferase to preserve it as the input DNA is copied. A thermostable maintenance methyltransferase can preserve CG methylation throughout PCR. This is brand new technology that’s just making its way into the scientific marketplace now. It’s the kind of PCR innovation my lab’s excited about.
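For a rough sense of how fast the mark dilutes, here’s a back-of-the-envelope sketch (my own numbers, assuming perfect doubling from one fully methylated duplex, not figures from the original comment): without a maintenance enzyme, only the two original input strands stay marked, so the methylated fraction halves every cycle.

```python
# Back-of-the-envelope: fraction of strands still carrying CG methylation
# after n PCR cycles, starting from one fully methylated duplex and assuming
# perfect doubling each cycle.
def methylated_fraction(cycles: int, thermostable_maintenance: bool) -> float:
    if thermostable_maintenance:
        # The enzyme copies the mark onto each newly synthesized strand.
        return 1.0
    # Only the 2 original strands out of 2**(cycles + 1) total remain marked.
    return 2 / (2 ** (cycles + 1))

for n in (5, 15, 30):
    print(n, methylated_fraction(n, False))
# After 30 cycles, roughly 1 strand in a billion still carries the mark --
# effectively invisible. With a thermostable maintenance methyltransferase
# in the reaction, the fraction stays at ~100%.
```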
I just want to point out that there’s a huge difference between thoroughly investigating the family after abuse of this magnitude has been proven, and making parents legally culpable for any harm that comes to their children in general.
We can react to the fact that mothers can do more to protect their children from abuse in many ways. We can give them better access to information and support in getting away from abusers. We can create better links between police and communities they serve. We can create more pathways for children to be exposed to healthy adult behavior and connections with healthy adults, even when the family is dysfunctional.
But when we find evidence that existing supports have failed, deeply investigating why is critical.
The investigators will be able to calculate how many rounds of abuse the victim suffered. The more it happened, the less likely it is the mother was unaware. And of course, the victim can tell us directly whether the mother knew. If so, she deserves a decade of her life in prison as well.
Good design allows systems to work without anyone knowing how the whole thing works.
AI and humans are labor that can be put to work designing and vetting such systems. The problem with AI isn’t that it builds things we don’t understand. It’s that we do not have much experience with its failure modes, limitations and risks. There are many unknown unknowns.
It’s directly analogous to the problem of hiring, management, outsourcing and contracting. Sure, we know that labor can produce massive, highly reliable systems nobody fully understands. But how do we coordinate labor, AI and human, to successfully produce the systems we actually need? What failure modes and advantages does AI introduce into the mix for specific projects?
That’s where the uncertainty comes from, not the lack of comprehensive knowledge of the systems themselves.
Academia is a huge place, and no, its basic function is absolutely not to suck up to power. Every academic I know sees their function as the opposite, even if few take advantage of the limited protections afforded by tenure to speak truth to power.
But Tyler Cowen’s calling in particular is ABSOLUTELY to suck up to power.
Only in a monopoly situation. If you have several companies with comparable models you can easily switch between, all desperate for revenue to recoup their massive capex, you’re fine.