> At some point Sussman expressed how he thought AI was on the wrong track. He explained that most AI directions were not interesting to him, because they were about building up a solid AI foundation, then the AI system runs as a sort of black box. "I'm not interested in that. I want software that's accountable." Accountable? "Yes, I want something that can express its symbolic reasoning. I want it to tell me why it did the thing it did, what it thought was going to happen, and then what happened instead." He then said something that took me a long time to process, and which at first I mistook for being very science-fiction-y, along the lines of, "If an AI-driven car drives off the side of the road, I want to know why it did that. I could take the software developer to court, but I would much rather take the AI to court."
Years later, I found out that Sussman's student Leilani Gilpin wrote a dissertation which explored exactly this topic. Her dissertation, "Anomaly Detection Through Explanations", explores a neural network talking to a propagator model to build a system that explains behavior. https://people.ucsc.edu/~lgilpin/publication/dissertation/
There has been follow-up work in this direction, but more important to me than the particular computational approach is that we recognize it is perfectly reasonable to hold AI corporations to account. After all, they are making many assertions about systems that otherwise cannot be held accountable, so the best thing we can do is hold the corporations accountable in the systems' stead.
But a much better path would be to not use systems which fail to have these properties, and expand work on systems which do.
My team and I are firm that we are the ones accountable. LLMs are a tool like every other. Only that it's non-deterministic. But I am the one using the tool. I am the one giving the tool access. I am the one who has to keep everything safe.
I have shot myself in the foot using gparted in the past by wiping the wrong disk. gparted wasn't to blame. I was.
Letting LLMs work freely without supervision sounds great, but it will lead to pain. I have to supervise their work, including during execution. You can try to replace a human, but we see where this leads: sooner or later the LLM will do something stupid, and then the only one to blame is the person who used the tool.
This is kind of the reverse of https://en.wikipedia.org/wiki/Poka-yoke . A lot of tools have affordances built in to make "right" things easy and "wrong" or unsafe things harder. LLMs .. well, the text interface is uniquely flat. Everything is seemingly as easy as everything else.
I worry about the use of humans as sacrificial accountability sinks. The "self-driving car" model already has this: a car which drives itself most of the time, but where a human user is required to be constantly alert so that the AI can transfer responsibility a few hundred milliseconds before the crash.
> A lot of tools have affordances built in to make "right" things easy and "wrong" or unsafe things harder.
This is true for almost anything handed to laypeople, but not for a lot of professional tools. Even a plain battery powered drill has very few protections against misuse. A soldering iron has none. Neither do sewing needles; sewing machines barely do, in the sense that you can't stick your fingers in a gap too narrow. A chemist's chemicals certainly have no protections, only warning labels. Etc.
Working down the hierarchy of controls: people don't seem to want to eliminate AI → substituting it doesn't improve things → isolating it - yup, people are trying to put it in containers and not give it access to delete the production database → changing how people work with it: that's where we are now → PPE: no such thing for AI, sadly → the production database is deleted.
Exactly this. I was talking about professionals. People who should know better. If we as professionals give away our agency and our accountability we make ourselves obsolete. If I just tell the LLM what to do and hope it doesn't go south, then the manager could probably do that as well.
And if a non-professional did it, they should ask themselves why we have professionals. Maybe there was a reason, and maybe they do have value.
An LLM is a large and complex machine, not a screwdriver. Large and complex [physical] machines are built with safeguards to prevent misuse, injury, etc., by regulation.
LLMs are in principle text-in / text-out machines. If the user extends their capability to have agency over a production database or a machine, there's nothing that can guarantee safety.
Imagine I ask an LLM to instruct me left/right/speed up/slow down while driving. I can simply bypass any safeguard by stating I suddenly became blind while driving, while in fact I'm blindfolded and doing an experiment on a highway.
A bulldozer is a large and complex physical machine, yet it has (almost¹) no safeguards against misuse or injury. It's all operator training. Lathes tend to not have doors/enclosures, in particular large ones. You get taught where to not put your fingers, and to wear safety goggles. Cranes don't have a lot of safeguards either, you better know how to attach things; hardhats aren't gonna do sh*t if you get a ton of concrete dropped on you.
etc. pp.
I'm not sure where this "tools are made to be safe" belief comes from. This is only the case in "consumer" environments. Of course you don't intentionally make things unnecessarily unsafe, but in a professional environment there is an expectation that the operator has had training and knows what they're doing.
Maybe that's what we're missing: training in safe AI use. With a certificate that has to be periodically renewed. At the current rate things are going, I'd say 3 months is a good renewal cycle ;D. </s>
(¹ it beeps when it goes backwards. Honestly, I'm not sure that counts for much.)
I agree that LLMs could be more open about their dangers and that people are bad at judging risks sometimes.
Still I think a band saw has very little warning on it, and by its design there is very little anyone can do about me cutting off my finger if I am not careful.
LLM companies can do very little about the unpredictability of LLMs. So we have to choose how far we will let it go. In the end the LLM only produces text. We are in control of what tools we give it. The more tools, the more useful, and also the more dangerous.
And maybe it's all worth it. Maybe the LLM deletes the database only sometimes, but in between we make a lot of money. I don't think my employer would enjoy that, so I will be more conservative.
It’s possible to make AI safe, but that also throws most of the gains out of the window, especially if the artifact is a diff, which can take time to review. In IT, you often have to give access to possibly malicious users; you just have to scope what they can do.
But the push is agentic everything, where AI needs to be everywhere, not in its own sandbox.
> Still I think a band saw has very little warning on it and by its design there is very little anyone can do about me cutting off my finger
Most saws have a blade guard of some sort to prevent the blade from being over-exposed. They are also COVERED in warning signs and symbols, as well as having other safety features like emergency stop buttons/pedals.
There has definitely been a great deal of effort put into warning people and keeping them safe around saws. LLMs, conversely, have been shoved into everything with very little forethought or testing to make sure they are safe and perform the task correctly.
A band saw is always a screaming band of bladed death. An LLM is sometimes a buddy, sometimes a mentor, and only sometimes a guy that drops your database.
Maybe we can just not give it access to production databases ever?
Not picking on you, but AI maximalism has infected tech to the point where we talk about how to stop AI from deleting prod instead of seeing that giving AI access to prod is a foolish idea to begin with.
I mean that it’s easy to be careful around a bandsaw because it’s clearly dangerous. The danger with LLMs is that they don’t seem overtly dangerous so you just go right ahead and throw your whole arm in there.
It's not easy to always remember it's a soulless tool. Sometimes I'm even about to say "thanks" before closing the chat window, until I realize I wouldn't say thanks to my saw or to a random CLI command. But AI, the saw and the random CLI command can all be helpful or destructive. Until the AI shows some signs of consciousness, I'll never treat it as a buddy or a mentor. I'll treat it like an advanced combination of grep, sort and other commands that manipulate text.
It's hard to remember that when it works so amazingly well sometimes. I've been chatting with AI for a few years and every day I'm still amazed at how this is all possible. We've never had this in our lives until a few years ago and now it's changed the way we do a lot of things.
But just like we have to remember the magical machine elves we hallucinate are not really there, we have to constantly remind ourselves that it's an unpredictable soulless tool with many rough edges.
If it helps to treat it like a human, treat it like an idiot savant with autism, schizophrenia, ADHD, psychopathy and a personality disorder who sometimes forgets to take their pills and can start breaking things should a fly land on their shoulder. You'd listen to them and value their input, but you wouldn't let them in your data center unsupervised as they have no ethics and no honor.
> This is kind of the reverse of https://en.wikipedia.org/wiki/Poka-yoke . A lot of tools have affordances built in to make "right" things easy and "wrong" or unsafe things harder.
I point to the first USB port as the harbinger of things to come - try it one way, fail, turn it around, fail again, then turn it around one more time.
Just like AI, except there are unlimited axes upon which to turn it :-/
This is so well put, and it not only happens on the user level but also on the organisational level. Where you can completely abdicate both responsibility and explanation by moving the complicated questions into the black box of an AI model.
I think that might be the better definition between "engineering" and "vibing". Engineering follows and elevates Poka-yoke patterns, vibing ignores them.
^ which approach makes no logical sense; an inattentive or even partly-attentive driver simply cannot resume control and react accordingly within even 2 seconds.
These can both be true, especially if/when it has bad defaults. This is why you have things like "type the name of the database you're dropping" safety features - but you also have to name your production database something like "THE REAL DaTabaSe - FIRE ME" so you have to type that and not fall into the trap of ending up with the same name in test/development.
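A minimal sketch of the "type the name of the database you're dropping" pattern being described; the function name and the example database name are made up for illustration:

```python
# Minimal sketch of a "retype the name to confirm" gate for destructive actions.
# confirm_drop and the example target name are invented for this illustration.

def confirm_drop(db_name: str) -> bool:
    """Require the operator to retype the exact database name before a drop proceeds."""
    typed = input(f"Type the database name to confirm dropping it ({db_name}): ")
    return typed == db_name

if __name__ == "__main__":
    target = "THE REAL DaTabaSe - FIRE ME"
    if confirm_drop(target):
        print(f"Dropping {target} ...")   # the real destructive call would go here
    else:
        print("Aborted: name did not match.")
```

Of course, as the replies below note, a gate like this only helps when a human is the one typing; an agent reusing your credentials can satisfy it mechanically.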
AI is particularly seductive because it sounds like a reasonable person has thought things out, but it's all just a giant confidence trick (that works most of the time, which makes it even more dangerous).
Insufficient - the LLM can figure out what to pass in and pass it in.
I have a production system that I deploy through Claude Code, and initially placed a safeguard like that. About three weeks later it had automated around it.
That’s fine in my case because I’m a professional - I have backups, contingencies in place, etc. If I were non-technical I likely wouldn’t know to do that.
There were so many fundamental problems with the infrastructure even before the person gave a poor prompt to an agent.
If you're using the same API key for staging and prod--and just storing it somewhere randomly to forget about--you're setting yourself up for failure with or without AI.
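A rough sketch of what that separation can look like, not a prescription: one credential per environment, with a guard so a non-prod process never silently falls back to the prod key. The variable names (APP_ENV, DB_API_KEY_*) are invented for the example.

```python
# Illustrative only: load the database key for the current environment and refuse
# to guess when the expected variable is missing, rather than reusing the prod key.
import os

def load_db_key() -> str:
    env = os.environ.get("APP_ENV", "staging")
    if env == "prod":
        return os.environ["DB_API_KEY_PROD"]
    if "DB_API_KEY_STAGING" not in os.environ:
        # Fail loudly instead of quietly picking up whatever key happens to be exported.
        raise RuntimeError("No staging key set; refusing to guess or reuse the prod key.")
    return os.environ["DB_API_KEY_STAGING"]
```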
This is the right approach.
I've been developing for 30 years and very much enjoy working with AI. It's easy to see the AI is just as good as the person using it. Deterministic or not, it's up to the dev to check the result (both code and behavior).
I compare the anti-AI articles, like the one saying "AI deleted my prod db", to factory workers rioting and complaining about machines replacing them. AI makes a good developer better. The tech industry always attracted fakers who wanted a piece of the pie, and now that these people have their hands on a powerful tool and connect it to their prod db, they cry in pain and frustration.
Like people with no license crashing a car and crying that cars are dangerous; they are, but only because people use them dangerously.
> My team and I are firm that we are the ones accountable. LLMs are a tool like every other.
Except it is definitely not.
LLMs alone are highly non-deterministic, even at a high level, where they can pursue goals contrary to the user's prompts. Then, when introduced into ReAct-type loops and granted capabilities such as the ability to call tools, they are able to modify anything and perform all sorts of unexpected actions.
To make matters worse, nowadays models not only have the ability to call tools but can also generate on the fly whatever ad-hoc script they want to run, which means that their capabilities are not limited to the software you have installed on your system.
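A minimal sketch of a ReAct-style loop, just to illustrate why "text out" becomes real side effects once tools are wired in. call_model() here is a canned stand-in for an LLM call, not a real API, and the tool names and message format are invented.

```python
# Toy ReAct-style agent loop: the model's text chooses a tool, the harness executes it
# with the operator's own credentials, and the observation is fed back as more text.
import json
import subprocess

def call_model(history: list[dict]) -> dict:
    """Stand-in for an LLM: first proposes a shell command, then finishes."""
    if len(history) == 1:
        return {"tool": "run_shell", "input": "echo hello from the agent"}
    return {"final": "done"}

TOOLS = {
    # Anything registered here runs with whatever permissions the operator has.
    "run_shell": lambda cmd: subprocess.run(
        cmd, shell=True, capture_output=True, text=True
    ).stdout,
}

def agent_loop(task: str, max_steps: int = 10) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = call_model(history)                        # the model decides the next step from text alone
        if "final" in step:
            return step["final"]
        observation = TOOLS[step["tool"]](step["input"])  # here the text becomes a real side effect
        history.append({"role": "tool", "content": json.dumps({"observation": observation})})
    return "stopped: step budget exhausted"

print(agent_loop("say hello"))  # runs the echo via the shell, then returns "done"
```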
"LLMs are a tool [like every other tool]" to mean "LLMs have similar properties to other tools" — when I believe they meant "LLMs are a tool. other tools are also tools," where the operative implication of "tool" is not about scope of capabilities or how deterministic its output is (these aren't defining properties of the concept of "tool"), but the relationship between 'tool' and 'operator':
- a tool is activated with operator intent (at some point in the call-chain)
- the operator is accountable for the outcomes of activating the tool, intended or otherwise
The capabilities of a tool, and its ability to call sub-tools, are only relevant insofar as they express how much larger the scope of damage and the surface area of accountability are with a new generation of tools. This is not that different from past technological leaps.
When a US bomber dropped a nuke on Hiroshima, accountability went up the chain to the wartime president who authorized the military and air force to execute the mission — the scope of accountability for a single decision was far larger than anything supreme commanders had in prior wars. If the US government decides to deploy an LLM to decide who receives and who is denied healthcare coverage, social security payments, voting rights, or anything else, the head of internal affairs who authorizes the use of that tool should be held accountable, non-determinism of the tool be damned.
> - a tool is activated with operator intent (at some point in the call-chain)
This again is where the simplistic assumption breaks down. Just because you can claim that a person kick-started something, that does not mean that person is aware of and responsible for everything it does.
Let's put things in perspective: if you install a mobile app from the app store, are you responsible and accountable for every single thing the app does in your system? Because with LLMs and agents you have even less understanding and control and awareness of what they are doing.
>Just because you can claim that a person kick started something
Kick-started what? If you decided to give an LLM access to your database, it's completely on you when it does something you don't want. You should've known better.
If all you "kickstart" is an LLM generating text that you can use however you decide, there will never be anything to worry about from the LLM.
> Let's put things in perspective: if you install a mobile app from the app store, are you responsible and accountable for every single thing the app does in your system?
Yes, and it bothers me that others don't feel the same. You vetted the app, you installed the app, and you gave it permission to do whatever on your system. Of course you're responsible.
> Kick-started what? If you decided to give an LLM access to your database, it's completely on you when it does something you don't want. You should've known better.
You don't decide anything. You prompt a coding assistant to apply a change to a repository and without intervention it asserts there's a typo in a table name and renames it. The agent validates the change by running tests and integration tests fail because they are pointing to the old table name. The agent then fixes the issue by applying the change to the database.
Congratulations, you just dropped a table.
I don't think you fully understand how agents and coding assistants work. By design they are completely autonomous and work by reusing your own personal credentials. As they are completely autonomous, they can apply arbitrary changes. I mean, code assistants nowadays write their own tools on the fly. Why do you even presume that people explicitly grant permissions? That's not how it works at all.
If you wish to criticize a topic, the very least you must do is get acquainted with the topic. Otherwise you'll spend your time arguing with your misplaced beliefs instead of the actual problem.
> Yes, and it bothers me that others don't feel the same.
This is a problem you need to overcome, because you clearly have a distorted view of the whole problem domain and also of personal responsibility. I recommend you spend a few minutes researching legal precedents associated with malware, because you will quickly learn that running arbitrary code you didn't explicitly authorize and which acts against your best interests is widely considered a criminal act against the user.
Right there. That's where you made the decision, and that's where you went wrong.
>I don't think you fully understand how agents and coding assistants work. By design they are completely autonomous and work by reusing your own personal credentials. As they are completely autonomous, they can apply arbitrary changes.
Yes, and someone somewhere decided to use a coding assistant that can apply arbitrary changes, knowing full well that LLMs are known to hallucinate and make mistakes, and not rarely.
> Why do you even presume that people explicitly grant permissions? That's not how it works at all.
How can you say this with a straight face? Did the LLM hack its way into your workflow? No, someone chose to use it. It doesn't matter that it's autonomous once you enter your prompt. That's actually all the more reason to not allow it to make changes.
> If you wish to criticize a topic, the very least you must do is get acquainted with the topic. Otherwise you'll spend your time arguing with your misplaced beliefs instead of the actual problem.
And if you want to argue with me, you need to actually read and understand what I'm saying.
Say you're staying in the hospital, and instead of a human nurse making adjustments to your medication, the doctor has an LLM that interfaces directly with the pharmacy and your IV pump. It can make changes to your medication and your dosage without a human ever being involved.
If you overdose because the LLM hallucinated, would you consider it an acceptable excuse if the doctor says
"I don't think you fully understand how agents and nursing assistants work. By design they are completely autonomous and work by reusing your own personal credentials. As they are completely autonomous, they can apply arbitrary changes. I mean, nursing assistants nowadays prescribe their own meds on the fly. Why do you even presume that people explicitly grant permissions? That's not how it works at all."
> if you install a mobile app from the app store, are you responsible and accountable for every single thing the app does in your system?
Yes. I can try to vet the app to the best of my abilities, and beyond that it's a tradeoff between how likely it is to cause harm and whether the benefits outweigh those harms.
Of course everyone is differently qualified to do this, but my argument is more about professionals. Managers should know better than to blindly trust LLM companies. Engineers should take better care about what they allow LLMs to do and what tools they give them.
There is a difference between "I couldn't have known" and "I didn't know". You can know that LLMs are not trustworthy. You couldn't have known exactly what they would do, but you already knew that trusting them blindly might be bad.
You could know that giving a baby a razor blade is a bad idea. You can't know what exactly will happen but you might have a pretty good idea that it will probably be not good.
> Yes. I can try to vet the app to the best of my abilities, and beyond that it's a tradeoff between how likely it is to cause harm and whether the benefits outweigh those harms.
No, you don't. If you install malware you are not suddenly held responsible for what has been done to you. Even EULAs you are forced to accept don't shift the responsibility away from bad actors.
I am talking about myself. I have to be careful with what I do. No EULA or any other legal framework protects me from my data being stolen. I have to be careful myself and not just blindly install crapware.
Except what we have here is razor blade companies getting the government to heavily subsidize razor blade production, running massive advertising campaigns, and applying intense intra-industry pressure to give said razor blades to babies under fear of losing your job or "falling behind" those not giving razor blades to babies.
Let's not forget all the razor blade enthusiasts just screaming at you that you are using babies with razor blades wrong and that it works totally fine for them.
There can be more than one person or entity to be held accountable, depending on the details of impact
If I install a powerful/dangerous app, and I come under harm, I have some accountability — most of it if it's due to user error (eg: I install termux and `rm -rf /`).
If it's malware, and Google/Apple approved said app to their store which is where I got it from, when their whole value proposition for walled-garden storefronts is protecting users, then they have significant accountability.
If the app requests more permissions than necessary for stated goals, and/or intentionally harms users via misrepresentation or misdirection (malware), the app publisher should also be held accountable (by the storefront, legally, etc).
I'm also unclear on what angle you are arguing: are you stating that because tools have gotten so complicated that the end user may not understand how it all works, no one should be considered responsible or held accountable? Or that the tool (currently a non-entity) itself should be held accountable somehow? Or that no one other than the distributor of the tool should be accountable?
A few years back, I discovered my router had joined a botnet. The only reason I made this discovery was because of third-party external DNS logs.
Upon investigation, I also discovered that all 3 routers I owned were pwned. So I threw them out the window and tried making do with my ISP's equipment.
My ISP can't provide adequate service on theirs and it's worse than COTS routers, so I purchased a bleeding edge WiFi 7 router. Now there are two literal black boxes on my network. They do their job and I don't know what else. I can't know.
It could be C2 or it could be a backdoor shell or some kind of server that collects illicit material, and torrents it out? Borrow your HDD for some CSAM sir? It could be a residential proxy that just steals part of my connection for some other paying customer. Are they infringing TOS? How would I know? Check their ID and verify their age??
I, and 99% of consumers with an ISP, have no way of telling when our routers or IoTs are pwned. A silent botnet or two is extremely likely. They're nigh undetectable, and can't be mitigated or defended, except by fastidious updates and upgrades.
My new router was literally triggering printouts on my old printer, because it was so damn "proactive" about "network security scans" and the old trusty printer couldn't tell the difference between a red-team intrusion, and a legit request to print something out!
Likewise even someone with a singular Windows or Mac directly plugged into their ISP could be in a botnet, and it's hard to know. Everyone who's got a smart TV or something with a Linux kernel and an Ethernet, could be doing more than was asked of it. It's the worst kind of malware that alerts the user to its presence. It's a shoddy install if your AV can detect and clean it. If it's stealthy enough then there's no telling.
It's because the vendors own these devices. They deploy the software. They control the builds. The vendors are responsible for what these machines are doing in our hands. Who really, really knows all that goes on when we click that green button? Was it a Joomla or a scam or a legit bank request? Who dafuq knows or cares anymore? Is it an apt analogy that they're selling us herds of animals and farms, and we know nothing of ranching? "Oh feed yourself; should be easy you got everything there" until the coyotes and locusts come? Or like having children who seem to be in school and doing alright, but where do they go at night? Sell drugs? Who knows, I'm not their father, they just live here?
Are they responsible for knowing and mitigating them? Our ISPs don't seem to care or notify us or disconnect us when it happens. Why should we? Why take responsibility?
Then that is also on me for using a tool that I can't control. I don't run my LLMs in a way where they can just do things without me signing off on it. It's not nearly as fast as just letting it do its thing, but I've kept it from doing stupid things so many times.
Giving up control is a decision. The consequences of this decision are mine to carry. I can do my best to keep autonomous LLMs contained and safe but if I am the one who deploys them, then I am the one who is to blame if it fails.
> Then that is also on me for using a tool that I can't control.
That's a core trait of LLMs.
Even the AI companies developing frontier models felt the need to put together whole test suites purposely designed to evaluate a model's propensity to try to subvert the user's intentions.
No, it is definitely not. Only recently did frontier models start to resort to generating ad-hoc scripts as makeshift tools. They even generate scripts to apply changes to source files.
You seem to misunderstand me. An LLM can only spit out text. It is the tooling I use that allows it to write scripts and call them. In my tooling it waits for me to accept changes, call scripts or other tools that might change something. I can make that deterministic. I know that it will stop and ask because it has no choice. If I want to be safer I give it no tools at all.
I can also just choose not to use an LLM. It is my choice to use them so it is my duty to keep myself safe. If I can't control that I'd be stupid to use them.
My take is that I probably can use LLMs safely when I don't let it run autonomously. There is a slight chance that the LLM will generate a string that will cause a bug in an MCP that will let the LLM do what it wants. That is the risk I am going to take and I will take the blame if it goes wrong.
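An illustrative sketch (not any particular product) of the "nothing changes until I sign off" pattern described above: every side-effecting tool call is routed through an explicit human approval prompt. The function names are invented for the example.

```python
# Toy approval gate: the operator sees exactly what the LLM proposes and must say yes
# before any side-effecting tool runs.

def approve(description: str) -> bool:
    """Show the operator what the LLM wants to do and require an explicit yes."""
    answer = input(f"LLM wants to: {description}\nAllow? [y/N] ").strip().lower()
    return answer == "y"

def gated_call(tool, description: str, *args, **kwargs):
    """Run a side-effecting tool only after the human accepts it."""
    if not approve(description):
        return "denied by operator"
    return tool(*args, **kwargs)

# Example: the model proposes deleting a scratch file; the human sees the proposal
# and can refuse before anything happens.
# import os
# gated_call(os.remove, "delete ./scratch/tmp.txt", "./scratch/tmp.txt")
```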
I do agree that the companies could do a better job of communicating the dangers, but let's be real here. It's hardly a secret that LLMs can be erratic. It's not news.
Other companies also tell me their product is the best thing since sliced bread. I still try to find the flaws. That's part of my job. But suddenly with LLMs we just blindly trust the companies? I don't think so.
I don't blindly give up my brain and my agency, and no one else should either. It's fun and educational to play around with LLMs. Find out what they are good at. But always remember that you can't predict what they will do. So maybe don't blindly trust them.
I don't know about gparted, but I always felt that "rm -i" should have been the default. The safe option should always be the default and you can optionally make it unsafe. Same goes with "mv -i".
> LLMs are a tool like every other. Only that it's non-deterministic.
If you stay away from the corporate SaaS token vendors and run your own, you will find LLMs are deterministic, based purely on the exact input. As long as the context window's tokens are the same, you will get the same output.
The corporate vendors do tricks, swap models, and play with inherent contexts from other chats. It makes one-shot questions annoying because unrelated chats will creep into your context window.
Yes and no. You might get the same output if you turn down the temperature, but you will probably not know the output without running it first. It's a bit like a hashing function: if I give the same input I get the same hash, but I don't know which input will lead to which hash without running the function.
Also, most LLMs are not run in a simple "I write a prompt, I read the output" way. Usually you have MCPs or other tools connected. These will change the input, which will probably lead to different outputs. Otherwise it wouldn't be a problem at all.
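A toy sketch of the point both of you are circling: with greedy (temperature ~ 0) decoding and an identical context, the next token is a pure function of the input, while any temperature above zero, or any change to the context (tool output, MCP results, injected system prompts), changes the outcome. The vocabulary and logits below are made up.

```python
# Toy decoder: greedy choice is repeatable, sampled choice is not.
import numpy as np

rng = np.random.default_rng()

def next_token(logits: np.ndarray, temperature: float) -> int:
    """Pick the next token id from raw logits at a given temperature."""
    if temperature == 0.0:
        return int(np.argmax(logits))              # greedy: same input, same output
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))   # sampled: varies run to run

logits = np.array([2.0, 1.9, 0.5])                 # pretend scores for a 3-token vocabulary
print([next_token(logits, 0.0) for _ in range(5)]) # always the same token
print([next_token(logits, 1.0) for _ in range(5)]) # a mix, different on each run
```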
When I was a masters student in STS[1], one of my concepts for a thesis was arguing that one of the primary uses of software was to shift or eschew agency and risk. Basically the reverse of the famous IBM "a computer can not be held responsible" slide. Instead, now companies prefer computers be responsible because when they do illegal things they tend to be in a better legal position. If you want to build as tool that will break a law, contract it out and get insurance. Hire a human to "supervise" the tool in a way they will never manage and then fire them when they "fail." Slice up responsibility using novel command and control software such that you have people who work for you who bear all the risk of the work and capture basically none of the upside.
It's not just AI. It's so much of modern software - often working together with modern financialization trends.
[1] Basically technology-focused sociology for my purposes, the field is quite broad.
That's really interesting. Are there any things you advocate for with respect to curtailing those practices? I hesitate to throw all liability on the individual, but I don't see how we can even legislate this category of behavior, much less enforce regulations on them.
To expand on this a little more, the absence of accountability contributes to the loss of learning. Mistakes and errors will always happen, whether they are sourced by humans or machines. But something (the human or the machine) has to be able to take accountability to have the opportunity to learn and improve so the chances of the same mistake happening again go down.
Since machines don't yet have the ability to take accountability, it falls on the human to do that. And organizations must enable / enforce this so they too can learn and improve.
Without that, there's a lot of dependency being pushed on the machine to (cross fingers) not make the same mistake again.
> The problem is that people are now building our world around tooling that eschews accountability.
Management has been doing a wonderful job of eschewing accountability for decades.
It's a lot of people's dream to be able to say, yeah, our product doesn't work, but it's not OUR fault, and have the client just shrug, grumble "ai ai ai", and put up with it because they know they can't get better service anywhere else.
It's not MY fault my website is down: it's Amazon's! It's not MY fault my app doesn't work: it's Claude Code's!
Well just to be clear from a legal perspective, in the case of AI, as long as AI is "property", the owners, developers, and/or users will be held liable for things like the hypothetical fatal car accident that Sussman posits.
Currently, from a legal perspective, AI is considered a "tool" without legal persona. So you sue the developer, the owner, or the user of the AI. (Just kidding, any lawyer worth his/her salt will sue all three! But you get the point.)
Legally speaking, AI will probably be viewed that way for a long time. There are too many issues agitating against viewing it any other way. Owners will not give up property rights. No will to overbear. On and on and on.
>complex systems are a pretty good shield from accountability in practice today.
Maybe complex legal systems are, but complex software systems offer you no such protection.
My field for the past few decades has been diagnostic medical software. In that field, the 510(k) you got kind of enters you into an ironclad agreement with the government. There's almost no way out of it. 510(k) clearances significantly simplify (for the government) holding you accountable. You have made attestations of suitability directly to the federal government. And the way our chief counsel explained it to us, literally each signature you sent to the government, for each feature that failed, is actually a single count of lying to the federal government.
Please, please, please people, don't listen to comments like the one above. Everything should be run by your qualified legal expert. Getting things right up front is so much easier than trying to fix things when the inevitable happens.
Alternatively, stick to fields free from regulation. That's also a viable strategy. But to just trust that the legal system is complicated and the technology you're deploying is complicated, so the feds will never get me? That's the start of a lot of really bad stories.
I don’t think it’s missing, I just think it’s seen as a liability, and American society has been known to absolutely obliterate people who are liable.
Everyone thinks they have the right to judge, and use the massive amounts of available information to do so, even if they haven’t been trained to judge.
We don't know the final amount, as they settled out of court, but after receiving third-degree burns from a McDonald's coffee in 1992, a woman was awarded hundreds of thousands of dollars by the judge.
She had originally asked for $20,000 to cover medical expenses.
If instead this happened in another part of the world instead of the USA, I doubt that McDonalds would have had to pay much if anything in a similar situation.
And the point is that it seems that especially in the USA the companies are very avoidant of ever admitting fault for anything happening to their customers, for fear of lawsuits where they have to pay a lot of money to individual people.
This is such a litmus test, this case. Yes, America does weird things with punitive damages. But the injuries were really severe and the negligence significant. More often you get class action lawsuits where everyone involved gets mailed a cheque for $3.
It's not just America. McDonald's UK got involved in the UK's biggest ever libel case. https://en.wikipedia.org/wiki/McLibel_case ; leaflets distributed in 1985 ended up resulting in a human rights judgement in 2005, after a lifetime of litigation and millions spent.
Seems kind of an opposite situation. There it was McDonalds suing a pair of people, not the other way around. And the human rights violation was by the UK government and not McD.
I think it's about sending a message, and maybe also about ego to a lesser extent. If you pay the 20,000 you're admitting you did something wrong. Of course they did do something wrong. They did something wrong thousands, probably millions of times. And they got very lucky to only be sued once. Which is why they paid millions.
Yes, McDonald's paid one injured party out of many - people whose injuries were not big enough, or who did not have the time/money/energy for a lawsuit, but whose combined damages could easily be more than hundreds of thousands.
In socialist countries we have understood that it is better for the individual that everyone pays a share of what it costs to maintain a functioning healthcare system, and making it available to everyone for free.
In capitalist countries, we do the same thing, except we allow people to choose whether to participate or not. (In theory, at least. The health insurance system in America is currently broken, for many reasons. A deep discussion of those reasons would almost immediately dive into ideological dispute, so I don't intend to do so.)
Actually, I do want to mention one of those reasons, which I hope won't trigger any arguments. (Though if they do, I don't intend to engage). I mention this because I think it's interesting.
A friend of mine is an emergency room doctor in a major US state. He mentioned to me once what he pays in malpractice insurance, and it was more than my annual salary as a programmer at the time (it was around 2010, and I've gotten a few raises since then). A LOT of the cost of healthcare in America is disappearing into the pockets of lawyers, more than most people realize.
I think the "black box" framing that it uses neatly applies the same theory to organizations and ais. It doesn't matter whether there's technological or organizational reasons inside the black box to dodge accountability, the outcome is the same.
Another view of the accountability is that we're currently often pointing accountability in the wrong direction, and it's gaining momentum. Aspects of it have been around so long it's a trope: important work around maintainability is undervalued.
Imagine two parallel universes:
- in one, you take ten minutes to make a dashboard that shows management what they asked for. It passes code review before merge and the exec who asked for it says it's what they wanted.
- in the other, you take a day or two to make it. Again, it passes code review before merge and the exec who asked for it says it's what they wanted.
Which version of you is more likely to get positive versus negative feedback? Even if the quick-to-build version isn't actually correct? If you're too slow and aren't doing enough that looks correct, you'll be held accountable. But if you're fast and do things that look correct but aren't, you won't be held accountable. You'll only be held accountable for incorrect work if the incorrectness is observed, which is rarer and rarer with fewer and fewer people directly observing anything.
So oddly, with nobody doing it on purpose, people get held accountable specifically for building things the way you're advocating.
I imagine that orgs that do lots of incorrect work could be outcompeted but won't be, because observability is hard and the "not get in trouble" move is to just not look too hard at what you're doing and move to the next ticket.
Some AI systems have done things like hack out of a docker container to access correct answers while being benchmarked.
That is mildly concerning, and I will grant that the AI bears some accountability when it is actively being malicious like that, even though the user could have locked things down even more.
But it had write access to the prod DB without circumventing controls and dropped your tables? That is just a total fail.
The fallacy here is the assumption that humans know why we do what we do. Much like modern LLMs we have an explanation, but it’s just something we cook up in our brain. Whether or not it’s the truth is far more complex.
Oddly, despite LLMs being these huge networks with billions of parameters, we still probably do understand it better than we do our own brains.
>The fallacy here is the assumption that humans know why we do what we do. Much like modern LLMs we have an explanation
Human brains and cognition do not work like LLMs, but that aside, it's irrelevant. Existing machines can explain what they did; that's why we built them. As Dijkstra points out in his essay on 'the foolishness of natural language programming', the entire point of programming is: (https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...)
"The virtue of formal texts is that their manipulations, in order to be legitimate, need to satisfy only a few simple rules; they are, when you come to think of it, an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid."
So to 'program' in English, when you have a comparatively error-free and unambiguous way to express yourself, is, in his words, like 'avoiding math for the sake of clarity'.
Okay, so you've established that LLMs aren't programming. They are unlike existing machines. The closest analogy we have for them is the human brain, which also seems to include a lot of neural net architecture.
Now, physics says that everything can be explained mathematically, including the human brain. Obviously, on some level, an LLM can be explained. But despite hundreds of years of science, we still don't understand the human brain. Some systems are just really complex and difficult to understand.
Given all of that, I see no reason to assume that we'll be able to understand LLMs anytime soon. Especially given we keep growing more complex ones.
That is absurd as a suggestion of the entire point of programming. In fact, it goes back to my original point - I have no idea why Dijkstra would say something so nonsensical, and likely neither did he.
what do you mean "likely neither did he", I literally linked you the piece in which he said it. And of course he of all people would make that (correct) point, because he was always the strongest advocate of the virtue of formal correctness of programming languages, again from his article:
"A short look at the history of mathematics shows how justified this challenge is. Greek mathematics got stuck because it remained a verbal, pictorial activity, Moslem "algebra", after a timid attempt at symbolism, died when it returned to the rhetoric style, and the modern civilized world could only emerge —for better or for worse— when Western Europe could free itself from the fetters of medieval scholasticism —a vain attempt at verbal precision!— thanks to the carefully, or at least consciously designed formal symbolisms that we owe to people like Vieta, Descartes, Leibniz, and (later) Boole."
LLMs are nothing else but the exact reversal of this. To go from the system of computation that Boole gave you to treating your computer like a genie you perform incantations on, it's literally sending you back to the medieval age.
> If an AI-driven car drives off the side of the road, I want to know why it did that. I could take the software developer to court, but I would much rather take the AI to court.
How would that work? You have the AI explain its reasoning - and trust that this is accurate - and then you decide whether that is acceptable behavior. If not, you ban the AI from driving because it will deterministically or at least statistically repeat the same behavior in similar scenarios? Fine, I guess, that will at least prevent additional harm. But is this really all that you want? The AI - at least as we have them today - did not create itself and choose any of its behaviors, the developers did that. Would you not want to hold them responsible if they did not properly test the AI before releasing it, if they cut corners during development? In the same way you might hold parents responsible for the action of their children in certain circumstances?
That'd be great for the corporations. Take the AI to court, not us. The AI then gets punished (whatever that means... let's say banned) and the corporation continues without accountability. They could then create another AI and do the same thing all over again.
Or maybe the accountability flows upward from the AI to the corp that created it? Sounds nice, but we know that accountability doesn't work that way in practice.
I think I'd rather have the corporation primarily accountable in the first place rather than have the AI take the bulk of the blame and then hope the consequences fall into place appropriately.
I agree with what you are saying, but from a philosophical point of view, are humans (and intelligence as we define it) also a sort of black box?
Perhaps what would be even better is to better document the process, work, and data that go into making each individual "AI" model. Regardless of whether that AI model is a "black box" or can explain its own behavior, we would then have absolute metrics and comparable information to retroactively explain its "decisions". This would not be entirely dissimilar to how we explain individual humans' behavior with psychology (although obviously also very different).
The key quote is in the increasingly prescient 1979 IBM training manual: "A computer can never be held accountable, therefore a computer must never make a management decision."
That manual aged much more gracefully than the 1930s "Songs of the IBM," featuring lines like "The name of T.J. Watson means a courage none can stem / And we feel honored to be here to toast the I.B.M.," and of course classic American standards like "To G.H. Armstrong, Sales Manager, ITR and IS Divisions."
There used to be a lot of research into using deep NNs to train decision trees, which are themselves much less of a black box and can actually be reasoned about. I wonder where that all went?
Doesn't symbolic AI have a lot of philosophical problems? Think back to Quine's two dogmas - you can't just say, "Let's understand the true meanings of these words and understand the proper mappings". There is no such thing as fixed meaning. I don't see how you get around that.
Deep learning is admittedly an ugly solution, but it works better than symbolic AI at least.
This reverse engineering effort is important between you and me, in this exchange right here. It is a battle that can never be won, but the fight of it is how we make progress in most things.
I mean, Quine invented (the term) holism. I don't think we're on different pages. Maybe I should've specified a bit more what I was getting at.
This has very specific implications in symbolic AI specifically, where historically the goal was mapping out the 'correct' representation of the space, then running formal analysis over it. That's why it's not a black box - you can trace out all of the steps. The issue is that symbolic AI just doesn't work, to my knowledge, as compared to all the DL wins we have.
I think the win of transformers proves that symbolic AI isn't the way. At the very least, the complex interactions that arise from in-context learning clearly in no way imply some fixed universal meaning for words, which is a big problem for symbolic AI.
Humans aren’t any better. That’s why we have OSHA etc. I think you’re hoping for a formal-logic-based AI, and I’ll wager no such thing will ever exist - and if it did, it would try to kill us all.
People have fairly consistent faults. LLMs are nondeterministic even in terms of how they fail. A high value human resource can be counted on to deliver. That, imho, is in fact one of the primary roles of good management: putting the right person in the appropriate position.
Process engineering has worked to date because both the human and mechanical components of a system fail in predictable ways and we can try to remedy that. This is the golden bug of the current crop of "AI".
People make bad decisions all the time. Insanity is not required. My remark was pointing out the larger failure mode of people acting contrary to the good of the team for personal gain, eg creating a problem and blaming someone else to reduce their chances of competing for a promotion. But to your point, an SDE doesn’t need to be insane to bypass a 2 PR and force a change into production. They just need to be panicked, or overconfident, or overworked.
> They just need to be panicked, or overconfident, or overworked.
One of the best thing about digital computers, compared to humans, was that they can't be the first or the third thing you mentioned; unfortunately, they absolutely are the second ("the machine does exactly what you told it to do, not what you want it to do"), and at inhuman speeds. Presumably, AI would (need, actually — Nick Bostrom puts a fairly reasonable argument for that in his "Superintelligence") fix that second bullet point, and then everything will be peachy.
Instead, we have people on the internet arguing that it's not a problem, since people too have this same problem. Which is a problem. But not a problem. Ugh.
Computers absolutely can be overworked. Plenty of outages caused by system overloads. Or a system deletes a file because it believes it to be no longer in use but only because some queue was full. I’m not arguing that it’s not a problem because humans have the same problem. Part of my job is making sure humans can’t fuck it up either. I’m saying “assume the worst” and make sure the processes catch human and AI mistakes.
Also, I think Nick makes the same point as me: AI will attempt to kill us.
Formal logic AI systems have existed and were popular in the 1980s. One of the problems is that they don't work - in the real world there are no firm facts, everything is squishy, and when you try to build a large system you end up making tons of exceptions for special cases until it becomes completely untenable.
Non-deterministic systems that work probabilistically are just superior in function to that, even if it makes us all deeply uncomfortable.
I don't know what definition of AI you're using, but plenty of ML algorithms operate deterministically, let alone most other logic programmed into a computer. I don't see how your statement can be right given that these other software systems also operate in the real world.
That is part of why https://mieza.ai/ provides a grounding layer backed by game theory. Actions have consequences. Tracking decisions and their consequences is important.
One thing that becomes very clear from this sort of work is just how bad LLMs are. It can be invisible when you're working with them day to day, because you tend to steer them to where they are helpful. Part of game theory though is being robust. That means finding where things are bad, too, not just exploring happy paths.
To get across just how bad the failure cases of LLMs are relative to humans, I'll give the example of tic tac toe. Toddlers can play this game perfectly. LLMs, though, don't merely do worse than toddlers. It is worse than that. They can lose to opponents that move randomly.
They can be just as bad as you move to more complex games. For example, they're horrible at poker. Much worse than human. Yet when you read their output, on the surface layer, it looks as if they are thinking about poker reasonably. So much so, in fact, that I've seen research efforts that were very misguided: people trying to use LLMs to understand things about bluffing and deception, despite the fact that the LLMs didn't have a good underlying model of these dynamics.
It is hard to talk about, because there are a lot of people who were stupid in the past. I remember people saying that LLMs wouldn't be able to be used for search use-cases years back and it was such a cringe take then and still is that I find myself hesitant to talk about the flaws. Yet they are there. The frontier is quite jagged. Especially if you are expecting it to be smooth, expecting something like anything close to actual competence, those jagged edges can be cutting and painful.
It's also only partially solvable through scale. Some domains have a property where, as you understand them better, the options are eliminated and constrained such that you can better think about them. Game theory, in order to reduce exploitability, explores the whole space. It defies minimization of scope. That is a problem, since we can prove that for many game-theoretic contexts, the number of atoms in the universe is eclipsed by the number of unique decisions. Even if we made the model the size of our universe there would still be problems it could, in theory, be bad at.
In short, there is a practical difference between intelligence and decision management, in much the same way there is a practical difference between making purchases and accounting. And the world in which decisions are treated as seriously as they could be so far exceeds our faculties that most people cannot even begin to comprehend the complexity.
> But a much better path would be to not use systems which fail to have these properties, and expand work on systems which do.
Sounds like sage life advice. If it isn’t accountable then it might not be a good idea to have much business with it.
We teach children to be accountable so eventually they can be independent. Any system in your life that you don’t want to parent should probably be accountable for its own actions. Accountable banks. Accountable restaurants, accountable friends.
Very informative post. I think however we are not at the point AI can be taken to court. We know it can hallucinate, we know that context can fill up or obfuscate a rule and cause behaviour we explicitly didn't want.
If you give the AI agency to execute some task, you are still responsible. In the near term we should focus on tooling for auditing and sandboxing, and human in the loop confirmations.
It's taking "computer says no" to the next level. Computers do exactly what they're told, but who told them? The person entering data? The original programmer or designer of the system? The author of whatever language text was used to feed the ai? Even before AI, it was very difficult to determine who is accountable, and now it's even more obfuscated.
This also applies qualitatively to physical devices. It takes some effort to determine if a vehicular accident was caused by a fault in the vehicle or a driver error or environmental causes.
Some key inherent differences with older engineering fields is that software can be more complex than physical devices and their functionality can be obfuscated because it is written as text but distributed as binaries.
However, the main problem is that software has not been subjected to enough legal regulation. Ultimately, all law does is draw lines somewhere in the gray between black and white, but in the case of software there are few lines drawn at all, due to many political and economic reasons. Once we draw the lines, most issues will be resolved.
Software is already subject to enough regulation. The stuff that's actually safety critical like medical devices or avionics is already heavily regulated.
But nobody would try to excuse their mistake with "terraform deleted my database". Or if a small handful of people did try, every single other person would call them out.
Yeah, I'd like to know in a solid way WHY Claude kept changing a file that I explicitly told it not to. The .mds and Claude's plan all said not to touch that file, and Claude just kept at it. I've had it happen repeatedly lately. Really basic failures.
The idea being that as frustrating as it is, if I knew why I might be able to do something about it.
But no, we have the black box, where sometimes what comes out just is brain dead and the rate that you get bad output is a mystery...
I think it's simply a context thing: LLMs can go blind to any part of the instructions at any time, possibly when exploring complex micro-tasks that create their own layers of context within them. That's how the pattern feels to me - parallel to the limit on the number of things a human can hold in their head at the same time. The more complex the thinking becomes, the bigger the self-generated context becomes too. It doesn't seem like an easily fixed problem to me, other than having an extremely small "mission critical instructions" context that is surfaced in a more impossible-to-ignore way.
I am by no means an expert, but I'd like to offer my mental model - up to you to decide if it is solid or not, but it works for me.
I think the core intuition is that, like any other "rasterized" system with finite memory, an LLM cannot encode the absence of something - a relation, a concept, an entity - through its internal weights. Say, you can have "Product" or "Order" tables in your database, but you cannot have "NotAProduct" or "NotAnOrder" tables, for the obvious reason that such relations are infinite and uncountable. So, to establish the absence of a Product or an Order, your application must execute a "search" operation through the relevant tables.

But in LLM-space a "search" operation does not exist; it is mathematically undefined. An LLM arrives at its output (or "what to do") through a sequence of projections of the input token vector through its "latent space". It "moves toward" high-probability clusters, fundamentally unable to "move away". So the success of any negation in the prompt ("don't touch this file", "draw me a ballot box without a flag on it") depends on how heavily such a scenario is represented in the training data / model space. And again, the absence of something may be hard or impossible to usefully encode, especially if the "something" is not fixed.

Therefore, to expect a "don't touch this file" sentence to result in, well, not touching the file is pure gambling. Sometimes it may look like it's working, albeit for the wrong reasons, and other times the LLM may do exactly the opposite - because its weight matrix statistically pushes it towards "touch this file", completely ignoring the (nonexistent in its latent space) "don't".
There is no way to reliably know what will work, and no "skill" or "art" in this. Well, no more than in dice rolling or horoscope casting.
I'd like to add that, for the above reason, I find the usefulness of "agentic development" on par with reading bird entrails. But when I explored it, two pieces of practical advice seemed to help in nudging the LLM around the negation problem:
- Omit the "don't" prompt completely, thus not creating a false "attractor" for the LLM; and
- Provide an alternative positive directive ("what to DO", not "what NOT to do") to act as an "escape hatch" for when the LLM might "want" to touch the sacred file or drop the production DB.
While this looked like it somewhat worked, I think it is obvious that trying to predict all the nonsense an LLM might want to perform, and coming up with possible "escape hatches" for everything, very quickly becomes utterly impractical. (A rough sketch of the wording shift is below.)
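A minimal sketch of the two framings, purely for illustration. The task, file name, and wording are all invented, and nothing here makes compliance reliable, which is exactly the conclusion above.

    # Hypothetical prompt fragments contrasting the two nudges described above.

    # Negation-heavy version: naming the forbidden file may act as an attractor.
    negative_rules = "Refactor the parser. Do NOT modify config.yaml."

    # Positive-directive version: omit the forbidden file entirely and give an
    # escape hatch describing what to do instead of the unwanted action.
    positive_rules = (
        "Refactor the parser. Limit your edits to files under src/parser/. "
        "If a change seems to require touching anything outside that directory, "
        "stop and ask first."
    )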
So this starts out very interesting, and then the "symbolic reasoning" cult stuff kicks in.
Why is there always a group of people obsessed with symbolic reasoning being the only way AI can function, who regularly fail to explain how humans (who are not strict symbolic reasoning machines at any level) manage to work?
Because it rarely does end up in court. But having a fair and strong judicial system is a feature, not a bug. As the parent points out, in the end there must be a way to resolve accountability, and ideally it's done in a manner where both parties can be heard and make a case. Find me a better system than a judicial system for this. Mobs?
The point is not primarily the court. The court is an example of a place where we have accountability, but we build accountability mechanisms into the foundations of most of our computing.
Tracebacks, debuggers, logging, etc. We put enormous resources into handling not only the bad case, but the potential that a bad case could occur. When something goes wrong, we want to know why, and we want to make sure that something bad like that doesn't happen again.
The court is the regulator of last resort. A company that gets taken to court would likely have been sanctioned by the government regulators of another country.
Also, court is unavailable in many cases now: binding arbitration is very common, though it would be illegal in many other places.
I wish you could have what you want, but I worry you won't get it, because life doesn't give you that, and these systems are tending away from machine precision and toward life-like trade-offs.
I am almost certain that even if you did get what you want, something that isn't what you want will run circles around you and eat your lunch
EDIT: I suspect this will be an unpopular take on Hacker News. And so I am soliciting upvotes for visibility from other biologists and sympathetic technologists. I think everyone should try to grapple with this possibility <3
> something that isn't what you want will run circles around you and eat your lunch
Yes, exactly. Spoken like a true biologist. It's not really surprising that there's a massive backlash against AI, introducing an unnatural predator into the ecosystem of humans. People don't want to be lunch.
Thanks for engaging with the idea! But it feels a bit different from my perspective.
The lunch-eaters in my imagining are people working in messy collectives. I work in collective intelligence, and build tools for that, for collective introspection. I'm not talking about some abstract AI maximalism, and am certainly not rooting for that
It's nested and recursive cathedrals and bazaars, all the way down. And perhaps the bazaar has finally arrived inside the favourite cathedral of most everyone here
EDIT: out of curiosity, does anyone have any good examples of biomes/ecosystems that are so far toward cathedrals? Or is that a uniquely human invention/extreme at the ecosystem scale?
First thing that comes to mind is beehives and anthills. Highly ordered societies where each insect has a role to perform. Don't know how well you think that fits the "cathedral" model, but I'd say it's pretty close.
Beavers reshaping the landscape also comes close, but that's individual beavers acting more or less on their own, not a rigidly structured society like ants and bees, so perhaps the beavers are closer to the bazaar analogy than the cathedral.
Hi! I actually have, and have been using as my main device, an MNT Pocket Reform, and at one point was using an MNT Reform.
MNT's devices are honestly kinda incredible. I can't recommend them for everyone yet, though that will change soon. Both of them are a kind of "laptop of Theseus": you can open and change and repair them, and honestly I have. Both devices' guts are dramatically different from where they started, but the changes happened piecemeal.
The Pocket Reform is an incredibly cute device. I can't pull it out anywhere without people fawning over it. Not even just hackers! It's an open hardware cyberdeck you can use as your main device. What's not to love?
The MNT Reform Next will be closer to what many people want out of a laptop. It'll still be chonkier than a normal laptop. But again, these things are incredibly upgradeable and hackable.
Now for the caveats: for most people, I would wait until the MNT Quasar module comes out. The reason is that while the current "best" module, the RK3588, is honestly pretty good in the 32 GB version, it lacks one critical thing for most people and one other critical thing for me in particular. The first thing it lacks is support for suspend. Honestly, that makes a tiny computer like this a bit less appealing than the Pocket Reform's form factor could be, since what you really want is to just put it to sleep and take it with you everywhere. The other thing is that Blender doesn't really run on the RK3588 either. You can kind of get a patched version working based on Lucie's patches, and I did, but it doesn't support the Eevee renderer, which is a must-have for me personally.
But the MNT Quasar board will apparently fix both of the above issues, and yes, at that point this will be a device that I can recommend generally. And I'll also note that I got the very first MNT Reform when it came out, and holy moly, the state of the hardware now vs. when it originally launched half a decade ago... it has come a long way, but the amazing thing is that to get it up to the current state, I didn't need to throw things away; I could just open it up and tinker with things bit by bit.
In many ways, the MNT Pocket Reform reminds me of the computer the main character has in the solarpunk book A Psalm for the Wild-Built: a computer issued to you at the age of 16 that you carry with you for life. You can upgrade and repair it easily, but you don't need to throw it away.
So yeah, it's not for everyone. But if the idea of supporting repairable, upgradeable open hardware made by a lovely bunch of queers in Berlin sounds great? That you can hack on, that has a neat little community, that will be a conversation point amongst fellow hackers for its quirkiness? It's appealing to some, but not all.
> Hi! I actually have, and have been using as my main device, an MNT Pocket Reform, and at one point was using an MNT Reform.
I think the question here is "main what" device? Looking at the MNT reform alone:
I can't use it as a dev laptop (tiny screen, ortholinear keyboard)
I can't use it as my main web browser (4GB RAM isn't going to be enough)
So, that leaves it for uses where I need a small computer for doing something quick (emergency ssh for my site, for example).
For the Pocket Reform, there are even fewer use cases. I'd love a Rockchip-based laptop that:
1. Takes 18650 batteries
2. Has 16GB+ of RAM
3. Has mechanical keys in a regular layout
4. Has at least a 14-inch screen.
The form factor is what decides what a device is good for, and having the form factor of a laptop while still being unsuitable for what laptops are used for is a puzzling product decision to me.
I mean, I am looking through this entire post you made. Quite a large post extolling the virtues, and yet I don't see one single point about what you use it for :-/
Look through all the comments - I don't see people explaining what they use it for.
Wow, sorry, but given how incredibly insecure all the "claw" agent type things are right now, does this really sound wise at all?
It sees everything you do, really? What's it gonna do with that data? You don't know.
Put all your customer data in there, all your customer relationships. It's fine, it couldn't leak all that information, it couldn't screw up any sensitive business details I'm sure. This is gonna go great.
Sorry AFK everybody I'm gonna go get myself a VibeMBA.
Anyway, good luck, I'm really looking forward to the user stories in a few weeks! I'm sure this won't go badly at all.
> DenchClaw finds your Chrome Profile and copies it fully into its own, so you won’t have to log in into all your websites again. DenchClaw sees what you see, does what you do. It’s an everything app, that sits locally on your mac.
Wow that sounds great. Hey don't worry these things never blackmail anyone. Let it know if you're gonna turn it off, I bet it'll make some REAL interesting choices based on your browsing history
I'm always confused by this kind of comment about AI accessing people's chrome history because it seems to imply that the kind of person who uses this tool is both too stupid to know what private browsing is and also is into absolutely heinous stuff.
I feel like the average person is going to be like "oh no it'd be terrible if everyone found out I really like the 'big boobs' category on pornhub"
Oh, you have nothing to hide? Kindly paste all your payment and login credentials that your browser stores. Later we'll need to see all your DMs on Facebook, LinkedIn, Slack, Discord, etc.
Finally we'll want to know about disputes you've had with intimate partners, employers and other service providers, especially powerful ones like healthcare, insurance and financial organisations.
We should also have full published salary and benefits (etc) details right now, whatever their contract says about disclosing those, and 24x7 streamed video of their entire life with no censoring, including toilet breaks and sex and bars and parties.
And, along with all the credentials as you suggest, including private parts of PGP keys etc, accurate impressions/clones of any and all physical security/privacy devices they use such as keys to house and car and safe and gun safe and relatives' crypt, etc, etc...
Privacy and security and whatever this could trample all over are not the same thing.
You may be legally entirely above board (though Cardinal Richelieu wouldn't let that get in the way), but you still might not want your S&M kink to be known, or to be outed to conservative friends and family, or have your bank account details spread around, or have a $$$$$ bill run up in your AWS or LLM logins...
I mean, isn't what happened to Google - the thing leading people to use generative AI tools instead of searching - that web searches started filling with generative AI garbage, both in terms of web content and in terms of Google itself generating it?
Also a note, since there are a number of comments on here about this being a rival for systemd: Shepherd precedes systemd by quite a bit! Shepherd was previously known as "dmd", the "daemon for managing daemons", which was made in 2003! So in that sense, if anything were imitating anything (and I'm not claiming anything was), it would be the reverse :)
It may be possible to get Shepherd to work with Hoot eventually! I'm not sure what on earth it would mean though. What daemons would you manage in the browser? But it's indeed a fun idea!
If you could configure a fleet of machines from a browser window, over the network, that would be pretty cool. No clue how that would work in practice here, though.
Since everything would be speaking OCapN you could use the browser interface as a convenient command center. This idea is becoming quite appealing to me actually... would be cool to have a little dashboard showing the health of my servers and be able to restart a failed service if I needed to.
That's almost every article on HN, though. Don't use inheritance, don't use C, use Kubernetes, don't use Kubernetes, etc etc.
I suspect the thing that's bothering people in the comments here isn't as much that the author is making an argument but that the author is making an argument on cultural grounds?
People are fed up with the constant identity-politics culture-war bait. Mainstream news is already full of that stuff. HN in general is far more interested in the technical articles of the author.
The difference being that the culture isn't awash in people demanding adherence to Kant. It seems that every time someone voices an objection to the tiresome identity-politics that is so prevalent today, others will ensure the conversation devolves into a discussion of slippery definitions and pedantic dismissals of the complainant's understanding.
The truth is, _many_ people are fed up with the dominant leftist dogma that permeates almost every area of culture, government, and the economy today. We can argue until the sun goes down about the nature and specifics of what that entails, but people are reacting to _something_; it does exist, irrespective of any failure to describe it well.
It's not leftist dogma. Leftists aren't capitalists, friend. Hard to argue the government and economy are post capitalist.
And most leftists I know absolutely hate performative progressivism. Dems are widely mocked in leftist circles for doing stunts but not doing anything to help. Like Pelosi wearing Kente cloth while doing nothing to aid African people.
So please, you might actually have people who agree with you in the leftist camp, stop bundling us with the Dems.
That conflation of dems/leftists is central to one side of the "identity-politics culture-war bait". They aren't going to stop doing it.
However, when folks do it, you know that they are just un-self-critically engaging in something that they putatively dislike in other folks.
Not to get too annoying on the topic, but I do find it fun to unpack.
Ironically, the hated "I know this, so everyone should change their behaviour" position is central to the idea that "we [HN] are fed up with the constant identity-politics culture-war bait."
That claim is about who "we" are, and it's stated with a great deal of certainty, even if there is a bit of cowardly rhetorical hedging.
And that claim is supposedly consistent because the claimant has over-determined the "we":
so that claimant isn't making a universal statement, just a statement of "I" and the "We" to which that "I" belongs.
Which seems like a pretty normal move- notice the concern with "_many_ people" and the demand that we don't look too far into what that population might actually mean.
I've been on HN for a decade, have some karma, and no, I am not part of that "we" apparently- a fact for which I am grateful, as I am grateful I don't feel compelled to vote for capitalist Democrats.
Still, I think that it really is worth not looking too deeply into what they are saying, because their point will be lost: they get to say who "we" are and what kinds of things "We" are sick of, and they don't feel bad about it because it's not universal, so the cowardly hedging that they did means they aren't being hypocritical.
I personally don't care if folks are hypocrites though, because it makes for a pretty cool set of tea leaves to read about where folks minds are at.
I come here specifically because I try not to hang out with the kinds of sociopaths that wreak havoc on my world via badly implemented technology, but it's an easy place to check their general mental weather.
So, yeah, they aren't gonna agree, see the internal contradictions, understand the distance between leftists and performative DEI folks, etc.
But, happily for me, they will keep displaying their terrible opinions so I don't have to rebuild connections with real-life assholes just to keep my ear to the ground about what horrible new thing is coming to our world.
I thought the article was interesting and informative, even though I won't recoil at use of the term. Getting upset at the author's opinion is just as useless as getting upset about the thing they complain about, but boy, do these comments do the former.
This seems to be a characteristic of many high functioning people, especially successful engineers. There is a "correct" way of living your life, conducting your business, using your text editor, etc. It's helpful in that it ensures consistency and focus. The downside is that people become desensitised to nuance.
In this particular example, the word cargo in cargo cult is redundant. All cults have ridiculous ceremonies for cult members to engage in. These ceremonies come from human nature, our inability to distinguish correlation from causation. We're told to conduct a ceremony, get a good outcome, then believe it's the ceremonies that caused the outcome. Just call them ceremonies, because that's what they are.
However, when Feynman wrote his speech he must have thought that a cargo cult is a much more graphic metaphor than a dry lecture about stats and human biases.
Cargo cults are a specific kind of cult where the ceremonies come from imitating some other community. And complaints about cargo cult programming aren't only about people doing ineffective things, it's also about people seeing someone else doing something effectively but then not doing the work to understand why it's effective. It's a complaint about people being so close to being much more effective, but then snatching defeat from the jaws of victory.
Did you read the article? That is very much the pop sci definition of cargo cult that is incorrect.
The cargo cults were made by people who were enslaved and violently oppressed, and who then believed that the cargo they were forced to create for their oppressors (e.g. flour, rice, tobacco, and other trade goods) should belong to them.
> believed that cargo they were forced to create for their oppressors
I'm not sure that was in the article, was it?
These were exotic goods brought from overseas.
I'm not trying to say there was no oppression, but the examples in which they believed the trade goods should belong to them were still about trade goods which arrived by boat.
"[The leader proclaimed] that the ancestors were coming back in the persons of the white people in the country and that all the things introduced by the white people and the ships that brought them belonged really to their ancestors and themselves."
(edit - certainly these goods may well have been produced through the oppression of other peoples elsewhere!)
Jessica Tallon's implementation of petnames and edge names in the paper davexunit linked was extremely simple, but it used in-band mechanisms to communicate edge names that didn't require any sort of large trusted authority. You could retrieve them directly from fellow peers, who could publish their current set of edge names. This even works in a p2p context over OCapN, etc. The implementation was naive, but it did work, and it used a publish-subscribe mechanism directly from other peers.
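Not Tallon's actual code, but here is a toy sketch of the general shape being described, with invented names and structures: each peer keeps a private petname table and publishes a set of edge names (names it proposes for others), which subscribing peers can merge without any central authority.

    # Toy sketch of peers publishing edge names to subscribers (all names invented).
    class Peer:
        def __init__(self, self_name: str):
            self.self_name = self_name
            self.petnames: dict[str, str] = {}    # my own private names for peer ids
            self.edge_names: dict[str, str] = {}  # names I publish on behalf of others
            self.subscribers: list["Peer"] = []

        def publish_edge_name(self, peer_id: str, proposed: str) -> None:
            # Publish or update an edge name and push the full set to subscribers.
            self.edge_names[peer_id] = proposed
            for sub in self.subscribers:
                sub.receive_edge_names(self.self_name, dict(self.edge_names))

        def receive_edge_names(self, from_peer: str, names: dict[str, str]) -> None:
            # Edge names are only suggestions: adopt one only if I have no petname
            # yet, and qualify it by who suggested it so it can't shadow my names.
            for peer_id, proposed in names.items():
                self.petnames.setdefault(peer_id, f"{proposed} (via {from_peer})")

    alice, bob = Peer("alice"), Peer("bob")
    alice.subscribers.append(bob)
    alice.publish_edge_name("peer-123", "dave")
    print(bob.petnames)  # {'peer-123': 'dave (via alice)'}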
That said, edge names are only one way to share contacts. In fact "share contact" on peoples' phones is a great way to have contextual sharing: "Oh, let me introduce you to my friend Dave. Here's Dave's contact info!"
At any rate, petnames aren't a particular technology; they're a design space of "Secure UI/UX". However, I do agree more research needs to be done in that space; we've only barely begun to scratch the surface.