> One big reason I don't think LLMs are (currently) conscious is because they are static
It is true that the LLM itself is static. However, its context window is self-modifiable, based on its inputs and outputs.
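A toy sketch of the distinction, in Python; `generate()` here is a made-up stand-in, not any real model API:

```python
# Toy sketch: the weights never change, but the context does.
# generate() is a stand-in for a frozen LLM call, not a real API.

def generate(context: str) -> str:
    """Pretend frozen model: fixed weights, output depends only on context."""
    return f"[reply conditioned on {len(context)} chars of context]"

context = "System prompt.\n"
for user_turn in ["Hello", "What did I just say?"]:
    context += f"User: {user_turn}\n"
    reply = generate(context)            # the static part
    context += f"Assistant: {reply}\n"   # the self-modifying part
print(context)
```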
> I think they need some kind of temporal awareness... and some mechanism for self-modification or active learning based on their input.
Why?! (besides, they do, see above)
I keep bringing this example up because it's clear evidence, in humans, that neither of these things is required for consciousness, and it's one I deal with in my own home. People with dementia who have no memory and are no longer able to learn suffer from a different issue than a lack of consciousness.
> If an experience flows through them and leaves them completely unchanged, are they actually conscious of the experience?
This line of thinking implies that dementia patients with no retention of memory are not conscious.
I agree that having an experience and being conscious of that experience are two different things, though.
> Transformers have approximate knowledge of many things. Is this not 'general'?
Of course not. That's like saying the Encyclopedia Britannica is AGI.
> What does AGI mean to you?
I would define AGI as human-like machine intelligence (or superior).
This is difficult for some people to grasp because they don't understand what "human-like" means in the first place. Neuroscientists would be able to set some of these wayward computer scientists straight on this question.
I can see how that would be implied by my comments so you're right to question that.
The principles found in the brain are what qualify something as "AGI", not the brain itself, so it's possible there are other architectures that would qualify.
A few observations on LLMs that give the game away:
- They require releases. You get a single binary blob and that blob is forever stuck at its so-called "intelligence" level. It never learns anything new.
- They're stuck approaching the limit of human intelligence, because the technique cannot exceed human intelligence. I realize that OpenAI has made claims to the contrary, saying things like "oh, our model found a proof that was never proven before". That doesn't count; it's a side effect of training on the Internet. In fact, that proof probably did exist (in pieces) somewhere on the Internet; it just wasn't widely publicized.
So, you'll know it's AGI when you no longer see companies releasing new models. AGI won't require new models, because the architecture will be what matters: whatever models you have will be constantly updating themselves in real time, just like the human brain does (and every other brain).
And, you'll start to see the AIs actually outsmarting the smartest humans on the planet in every subject.
> - They require releases. You get a single binary blob and that blob is forever stuck at its so-called "intelligence" level. It never learns anything new.
True. But learning isn't the same thing as intelligence. My father, who has dementia and is unable to learn anything new due to memory issues, is still 'intelligent'.
> - They're stuck approaching the limit of human intelligence.
Is general intelligence > human intelligence then? Is there some static 'human level' that I should be measuring myself against?
There is considerable overlap between the smartest bear and the dumbest human. The same is true of LLMs and humans now.
What you seem to be describing isn't AG(eneral)I, but artificial greater intelligence.
> What you seem to be describing isn't AG(eneral)I, but artificial greater intelligence.
If you ignore what I said in answer to you earlier then perhaps it would make sense to draw this conclusion. But if you take the full context of what I said then no, it's clear that I am not referring to "artificial greater intelligence".
Just in the previous comment I said that rats would qualify, because the architecture is what matters.
Your example with dementia is clever, but that's an example of the biological architecture breaking down. Please forgive the crude analogy, but it's like asking if a house is still a house after it has partially burned down. I suppose part of it is still a house.
FWIW there are other definitions of intelligence that are wholly immaterial.
Spirits are considered intelligent even though they have no body because they are composed of pure non-physical consciousness. Plants are intelligent even though they also have no brain.
That fundamental sort of living conscious intelligence isn't what I see discussed much in these contexts though.
What you will notice about it though is that unlike frozen LLMs, this type of intelligence also has the capacity to change, interact, and learn from its environment.
If we go with this definition instead, then on a large enough timescale everything can be considered intelligent, even rocks.
...Let's not go with the nonsense definitions then.
I agree, systems don't need a brain to be intelligent, and, on a related point, I don't think systems need to be conscious to be 'intelligent'.
You are excluding this system (llm+harness) that learns (separately), can modify its surrounding environment via a shell interface (including setting up a nightly training loop to reweight itself based on its daily actions and interactions), from being intelligent. Do I have that right? Or are you thinking in terms of 'only' the LLM?
I do call openclaw-style agents "living agents", although they might be closer to a kind of zombie. Living agents like openclaw et al. do have a self-modifying property of sorts thanks to their memory, and so that system might be more AGI-ish, but, still, it has a fundamental cap to its potential, which remains frozen at the LLM.
> (including setting up a nightly training loop to reweight itself based on its daily actions and interactions) from being intelligent
I'd have a harder time arguing that sort of system isn't AGI.
My point is that learning may be required to create intelligence, but not to 'run' intelligence. And LLMs do 'learn' during their training, no? That it happens at a different time doesn't truly matter.
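For concreteness, here's roughly the shape of the llm+harness system being discussed. Every function and name below is a made-up stand-in, not a working trainer; only the loop's shape is the point: act all day with frozen weights, reweight on the day's log at night.

```python
# Sketch of the llm+harness loop. All functions are stand-ins for
# whatever model/runtime you actually use.
import json

def run_agent(weights: dict, task: str) -> str:
    """Stand-in for the frozen LLM acting through a shell harness."""
    return f"did '{task}' with weights v{weights['version']}"

def nightly_finetune(weights: dict, log: list[str]) -> dict:
    """Stand-in for reweighting on the day's actions and interactions."""
    return {"version": weights["version"] + 1, "examples": len(log)}

weights = {"version": 0}
for day in range(3):  # three simulated days
    log = [run_agent(weights, task) for task in ["read mail", "fix a bug"]]
    weights = nightly_finetune(weights, log)  # the self-modification step
    print(f"day {day}:", json.dumps(weights))
```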
Agreed. The widespread anthropomorphizing is getting so tiring.
I blame it on the big companies in the space, but seeing intelligent folks regularly attributing intelligence to a complex autocomplete system is disappointing.
Okay. For that to be a reasonable take, curing deafness must then destroy culture.
Would that ACTUALLY happen, though? I challenge that assumption.
Take this hypothetical scenario: magic... magically, all deafness is gone, suddenly and instantly. Would this destroy friendships? Would this erode personal relationships? Would this destroy (the very useful invention of) sign language? Would this destroy books or media? Would this financially devastate members of this community? Would this kill anyone?
Well, besides the secondary effects of suddenly hearing, potentially leading to accidents. Do you actually think any of the above would happen?
I don't actually see anything like that happening. This is conservatism dressed up in a minority's hat: staunch resistance to change, out of fear of losing the familiar experience, using a gross comparison to prevent reasonable analysis.
But I also believe in personal choice. Mandating conversion is not a power I want to give the government in any capacity. I just do not see the 'genocide' argument.
This is an example (like Christianity) of how horrible ideas attach themselves to identity to prevent their excision from their host. If you don't think Christianity is a good idea: suddenly it's a personal affront to them. If you don't think being deaf is an advantage or neutral: suddenly it's a personal affront to them. Be wary of anything attaching itself like this to your identity: you usually get infected when you are too young to have defenses.
I understand the concern. I want sign language to continue to exist and be used. It's far too useful for communication in a loud/silent area. I think it should be adopted much more widely as a second language.
I however do not wish children to be subjected to the will of the parents when that will dictates that the child's variety of sensory experience is intentionally limited by withholding medical intervention. That is cruel.
There will never not be deaf people, and we should build society so hearing isn't a requirement.
> When it comes to children, then, the question is not just "do I want my child to hear better than I can", but also "do I want my child to speak the same language and belong to the same culture that I do" - something most parents want very much.
That's simply: 'what is best for my child' vs 'what is best for my relationship with my child'. Only one of those actually has the best interests of the child at heart. Only one of those opinions is respectable. Growing up with the latter leads to resentment towards the parent generally.
It's more like making that choice and having it be permanent: the child can never visit the city. You can always make someone deaf (unethical), but you can't always reverse hearing loss at an advanced age.
Pedantic, but relevant. If they lost the voice samples, they wouldn't have them for training new models. If the samples were merely copied, then they have lost nothing in terms of training.
When a model is trained on multiple contexts, some growing over time as conversations do now, and some rolling at various sizes (as in, always on), such as a clock, a video feed, an audio feed, data streams, and tool calling, we no longer have to 'pollute' the main context with a bunch of repetitive data.
But this is going in the direction of 1 agent = 1 mind, when much more likely human cognition (and maybe all cognition) requires 'ghosts' and subprocesses. It is much more likely that an agent is a configurable building block of a(n alien) mind.
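A toy sketch of the multi-context idea, for what it's worth; all buffer names and sizes below are invented for illustration, and nothing here is a real model API:

```python
# Toy sketch: separate rolling context buffers per stream, so repetitive
# feed data never pollutes the main conversation context.
from collections import deque

contexts = {
    "conversation": deque(),          # grows over time
    "clock": deque(maxlen=1),         # always-on, rolling
    "audio": deque(maxlen=100),
    "tool_calls": deque(maxlen=20),
}

def observe(stream: str, event: str) -> None:
    """Route an event into its own buffer instead of one big context."""
    contexts[stream].append(event)

observe("clock", "2025-01-01T09:00")
observe("conversation", "user: hello")
observe("tool_calls", "shell: ls -> 3 files")

# A model trained across these buffers would attend over all of them,
# without the rolling feeds crowding out the conversation.
for name, buf in contexts.items():
    print(name, list(buf))
```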
> Hating on Tesla is the logical result of vehicles with door handles that won't open from the inside when the power is cut.
Hating on Tesla is easy because they are STILL led by a man-child who has chosen to sieg-heil behind the presidential podium. And he's still in charge of Tesla. At some point, it's on Tesla too for continuing to have that person as CEO.
It is a large subsection, but still a subsection, that rallies against both capitalism and AI. I haven't found people from the '1$$$% capitalism great' crowd who hate AI... which I do find ironic: but most things tend to fall into irony on that side of the spectrum, so I don't find it surprising.
Oh no. I looked at a screen. There goes all my joy...
/s
Objectively worse in some vectors, objectively better in others. Being able to get medical advice quickly. Being able to communicate with vastly different people, broadening your horizons. And yes, more comparisons to make (the thief of joy).
b. There are almost no behavioral similarities between cats and Claude
d. Therefore Claude cannot be conscious.
You are missing: c. Everything conscious must behave like a cat.
This logic is clearly not sound. I don't think your position is a coherent one.