It’s unlikely that any LLM given a prompt involving medical records would fail to address separation of concerns on its own. The type of data involved is a worst-case scenario, and cramming everything into one JS file is also a worst-case scenario. That’s why it may feel manufactured. If it is true, they truly deserve to be put on blast.
I can 100% imagine natural-sounding prompts that would never hint at the medical nature of the data being processed. It could be as simple as saying "customer" instead of "patient."
Computational semiotics has been empirically proven. Model releasing soon. In the meantime, for the love of god, someone recognize this and help blow these numbers out of the water.
I was one of the publishers caught in the war with Facebook. I lost 10 million fans and any respect I had left for FB.
By the time an FB doomscroller makes it to you outside of Facebook, they are cooked: zero attention, zero retention, diminishing returns (if you paid for it).
It’s the bifurcation of meaning: we speak mutually unintelligible languages at each other using the exact same vocabulary. I developed the Semiotic-Reflexive Transformer, which empirically proves this and provides the solution. No more black box. Computational semiotics is the most underrated technology of 2026.