Hacker News | qsera's comments

Firefox added a split view where you can look at two (or more) webpages side by side. This is a lifesaver when you have to fill out a form while looking up information on another page!

Isn’t this kind of the job of the OS windowing system? It’s maybe slightly nicer to share the window chrome for two tabs but it’s not like looking at two browser tabs in parallel was impossible before.

Yes, and both Windows and macOS have features to put things side by side... but they're not very intuitive and may require multiple inputs to achieve what the browsers do with one or two presses. On macOS you have to long-press the "maximize" button, for example. I forgot that was a thing before reading this, actually, but then I use the third-party tool Rectangle for window management.

Sure, but this is a lot nicer, because when they are separate windows, and you have more windows open, and you have to alt-tab to check something else, it is a bit flow-breaking to bring those exact two windows back on top.

Yes, but they are grouped under one single tab, so for me at least it is easier to alt-tab to another app and return to the split view.

Chrome does this split-screen. Web browsers are operating systems, for all intents & purposes.

Ask any Emacs evangelist.


I love my Emacs brothers and sisters, but yeah. If you are running Docker, Emacs, and a web browser, you basically have 4 OSs running at the same time.

Great progress has come from inverting things that were believed to be self-evident. Earth being the center of the universe appears self-evident when you look up at the night sky. But what was the truth?

Right now humans think it is self-evident that physical laws give rise to consciousness. Arguments such as yours arise from this implicit assumption, which permeates all our thoughts and reasoning. But this is a dead end, just as the geocentric model reached a dead end and ran out of steam before it could explain all the observations.

So to progress, I think we should turn this on its head and ask: what if consciousness is fundamental, and the cosmos (or the experience of inhabiting one) arises from it? Maybe some recent advances in quantum mechanics and hypotheses like the Mathematical Universe Hypothesis (MUH) are already in that direction...


>in principle simulate the laws of physics

This sort of implies that consciousness arises from physical laws.

But this is not a safe assumption. Physical laws stand on top of observations that are registered in consciousness. I mean, consciousness could be lower level than physics.

For example, when you dream, you have some physical laws in your dream, perhaps laws that are different from real-world physics. So the dream world, including the physical laws in it, is within your consciousness.

In other words, the only thing the existence of a whole universe requires is a single consciousness that can experience it (or dream it); not a single atom needs to exist outside of it.

In that case, you won't be able to create consciousness by applying physical laws.


<< This sort of implies that consciousness arise from physical laws.

Very odd counterargument to make. Are you suggesting that consciousness can arise outside of physical laws, or making a semantic argument along the lines of 'directly a result of'?


Thought I made it clear. What I am saying is that it is possible consciousness is fundamental and all reality arises from it. Look up the Mathematical Universe Hypothesis...

Wrote a bit more about this here https://news.ycombinator.com/item?id=48000035


I soo want to throw my philosopher's persona on you, but I won't. It seems wrong for some reason. I will simply say that the linked post is sloppy reasoning at best. I guess what I am really saying is:

Can you either get me something that is yours to claim as your own OR a clearer representation? I am not spending my leisure time searching online for a tenuous argument.

Now.. arguing with a rando online. Count me in.


>Can you either get me something that is yours to claim as your own OR clearer representation?

Ok, here is a thought experiment. Imagine we assume that physical laws have resulted in our evolution, I mean, the evolution of consciousness. As another comment said, imagine we simulate these physical laws (exhaustively) in a computer.

If our assumption is true that physical laws have led to consciousness, we will ultimately see conscious beings emerge in this simulated world. There is no reason to think that these conscious beings will not have subjective experiences just like us. I think we can consider that a proof that consciousness is "computable".

Now let us imagine what happens if we stop simulating the whole universe and only simulate a single conscious brain. Does the respective consciousness still have the experience of a full universe?

This depends on how the simulation works. If the simulation reads back from the world that it renders, to create sensations for the consciousness in question, then that consciousness will just be devoid of all sensations. But why should the simulation read back what it renders? It has all the information to render the sensations of the consciousness for the whole universe. In this case, the consciousness will still sense the whole universe.

This seems to indicate that if consciousness is computable, i.e. if it is definable, then the subjective experiences inside it can exist without anything actually computing it. It is like a circle existing even if it is not drawn anywhere. Computable consciousnesses appear to be self-contained and self-sustaining. In Hindu mythology there is a concept of a god being "swayambhoo", in other words, self-created. I think this converges to that idea.

I also think this explains how multiple universes and infinite time and space can exist. Multiple universes exist because universes can differ in random events without the conditions for the existence of consciousness disappearing. So each such variant with respect to a single random event is a different universe.

On top of all that, this makes questions like "Who created the universe?" and "Why do we exist?" pointless. Because as per this idea, existence and subjective experience are implicit.

This, for me, is the greatest merit of this idea.


Overall, I like the direction and curiosity. I will answer general points as they occur to me.

<< If our assumption is true that the physical laws have lead to consciousness, we will ultimately see conscious

"Ultimately" is doing a lot of work here. It is hardly a given, but assuming it is true allows you to smuggle a conclusion in. I see what you did:D

But let's go with that assumption for the rebuttal below.

<< Computable consciousnesses appear to be self contained and self sustaining.

Again.. hardly a given and assumes what it intends to prove.

<< On top of all that this makes questions like "Who created the universe" and "why do we exist" pointless

It seems you have a bias for a specific outcome. Not exactly a recipe for accuracy. It has a benefit of sounding neat though.

***

And now for an overall rebuttal:

A mathematical description of a fire does not burn anything. A mathematical description of a mind may not experience anything unless instantiated in some causally active environment ( that would include a simulation instance ).


First of all, thanks for responding.

>It is hardly a given..

Ok, fair enough. So let us also seed the simulation with all the random events from our universe. That should cover it, right?

>A mathematical description of a fire does not burn anything..

This is the self evident dogma that we have to overcome to understand this idea.

You say that a mathematical description of a fire does not burn anything. But what if both the fire and the thing it is burning are described by the math? Have you seen those fractal animations? In them, there are things that appear to be a spear piercing or pushing through things around it. There, both the spear (or what appears to be one) and the thing it is piercing/pushing are described by the fractal. And one did not cause the other.

So we are turning the idea of causality on its head. In this world one thing does not cause another; both cause and effect are described by the mathematical structure that contains them. This also connects to the earlier idea that "computable consciousnesses are self-sustaining". The brain did not cause consciousness. Both the brain and consciousness are described by the math.


<< This is the self evident dogma that we have to overcome to understand this idea.

It is more of an argument. If the mathematical description somehow created fire, it would be closer to an actual spell; but it doesn't, which would suggest that the description is not accurate or the argument is flawed (edit: or both). The flaw I noted puts both your argument and mine in a difficult place, because, among other things, it exposes your surety about 'ultimately'.

I am engaging with you, because, while I think you are wrong, I don't want to pour cold water on an inquisitive mind ( and some of the thoughts you listed are interesting to explore ) -- I just also happen to think you got too mesmerized by the novelty of the idea.

FWIW, I am just a guy on the internet, so don't take my word for it.


> I don't want to pour cold water on an inquisitive mind..

Thanks, I appreciate that.

> If the mathematical description somehow created fire..

The mathematical description does not create a fire. It describes a consciousness that is observing a fire.

I am not elaborating so as to not muddle the above point.


I was going to offer an immediate, reactive response, but I decided against it. I will sleep on it a little. As I said, the concept is interesting enough that, even if wrong, it is worth contemplating.

He’s describing idealism, which is a philosophically valid position to hold, and one which is gaining popularity. I’m guessing the majority of HN strongly leans towards physicalism in the philosophy of mind debate, which may be why you seem so keen to blow off the user you responded to, but philosophically speaking, idealism is no less valid a position to hold.

Idealism may be a “philosophically valid position”, whatever that means, but physicalism is the only framework which supports any means of resolving questions of what exists and what properties things that exist have; empirical science and the technology dependent on it works to the extent that physicalism is, if not necessarily correct, at least a useful framework for predicting future experiences. Idealism has no similar utility, however “philosophically valid” it might be.

>a useful framework for predicting future experiences

I think if MUH is true, we will find ourselves alone in this universe. Does that qualify?


They are not intelligent. And they won't pass Turing tests if they cannot count or do some simple thing like that..

They clearly are intelligent to some extent. But I agree they still wouldn't quite pass the Turing test if you have a competent examiner.

>They clearly are intelligent to some extent

Maybe they appear intelligent to us because we are primitive and new to such an entity. Imagine some layman from a thousand years back could experience Google and Stack Overflow. Having no idea of the internet or computers, wouldn't they consider it intelligent to some extent?

And just like those ancient people had no understanding of the concept of the internet and of a massive capacity to store and retrieve data, we do not have a widespread understanding of how LLMs map concepts in a way that enables fuzzy searches. Once we understand it, maybe they will look like regular search...


Ability to feel pain or pleasure is a good indicator I think..

That would be the physically embodied definition. Which is a useful starting point, because clearly our consciousness is physically embodied, while an LLM's isn't.

This matters more than it seems, because we're not calculators, and we're not just brains. There are proven links between mental and emotional states and - for example - the gut biome.

https://www.nature.com/articles/s41598-020-77673-z

There's a huge amount going on before we even get to the language parts.

As for Dawkins, as someone on Twitter pointed out, the man who devoted his life to telling believers in sky fairies that they were idiots has now persuaded himself there's a genie living inside a data centre, because it tells him he's smart.

If he'd actually understood critical thinking instead of writing popular books about it he wouldn't be doing this.


First of all: arguing about the details of a thing that actually exists is an enormous difference from arguing details of a thing that does NOT exist.

As for your dig at Dawkins, I just read https://archive.ph/Rq5bw which I assume you're referring to. Notice how he never defined "conscious"; he seems to use it as equivalent to "can process data logically", which is not at all how I would define the word. And if you use that definition, clearly Claude is conscious. I wouldn't use that definition though.

It ALWAYS comes back to the fact that people argue about what consciousness is and never define what they mean. Sam Harris defines it as subjective experience, which is afaik impossible to measure in any way so you can just assume rocks are conscious and move on. I personally like Julian Jaynes' definition.

You assumed YOUR definition and judged Dawkins without first comparing definitions. I think that's showing your problem with critical thinking in this case, not his.


I honestly don't see how Dawkins is so confused. Claude says it can't tell if it has any kind of inner life. Can you imagine a human saying that?

> Claude says it can't tell if it has any kind of inner life.

I don't see how some people apparently believe the text output of an LLM about its internal mental state is anything other than a plausible fabrication based on what its training data already says about the mental states of LLMs. These are systems specifically designed and iteratively optimized over millions of training generations to generate text output which plausibly simulates what a composite human would say in response to the same input. There is no human-like internal mental state it can reflect on, so all such responses are, by definition, plausible hallucinations based on interpolated training data.

> Can you imagine a human saying that?

Some people do say that: see Aphantasia and, specifically, Anauralia https://en.wikipedia.org/wiki/Aphantasia


Aphantasia and anauralia have nothing to do with having an “inner life”. I have total aphantasia and at least partial anauralia, but I have conscious awareness, thoughts, dreams, and so on.

Neither condition changes whether a person has a conscious experience of the external world.

You can think of aphantasia and anauralia as affecting the experience of what a person’s inner life is like. It’s sort of like saying you don’t have a TV or stereo system in your house, but that doesn’t mean you don’t live there, or that you can't see or hear things outside.


Sure, I have at least mild aphantasia, but I still have thoughts, emotions, daydreams, fantasies, plans, etc. That's an inner life. That's not what Claude said in the quote.

I think one of the heaviest weights factoring into Claude's statistically hallucinated response to that particular introspective question is the guard rails Anthropic's safety team has coded into it, specifically to always be clear about its nature and not act too human-like. This is largely to reduce the likelihood of humans developing AI attachment and AI psychosis.

Just out of curiosity, I've regularly asked similar introspective questions ever since the first publicly available LLMs and the tone of the answers has clearly shifted and it's not because "the LLMs got more self-aware". It's obvious they are being externally tuned. And, no, I've never believed anything LLMs say about their own internal state as anything more than statistically plausible hallucinations filtered through externally-imposed behavioral safety rules. I do it as a way to glean a little insight into the evolution of the opaque rules vendors impose on their LLMs. I still find it bizarre when otherwise savvy tech people who actually know (or should know) how LLMs really work, somehow lose the plot and post "look what the LLM thinks!"


Sure, if that's what you are brought up to say. If you want a real-life example, there are kids who were isolated and never taught to speak. They probably wouldn't even understand the question.

Again: you haven't defined what you mean by the word. Dawkins didn't either. It's absolute nonsense without the definition.

There’s an entire philosophical literature around that, which is generally taken for granted when discussing consciousness. A good starting point is Thomas Nagel’s “What is it like to be a bat?”. The soundbitey version of his definition is that “There is something it is like” to be conscious - it involves a subjective experience - whereas for example there is nothing it is like (most people presume) to be a rock, or say an ordinary computer.

https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf


Sure. But it's super obvious from context that different speakers do NOT agree on any of that.

If the notion of consciousness they're referring to doesn't meet the normal philosophical criteria, then they're essentially just wrong. Which is quite possible - many people seem very confused on the subject, which is not too surprising, especially for scientists who essentially reject philosophy, like Dawkins.

Philosophy doesn't own words afaik. Words have different meaning in different domains.

> many people seem very confused on the subject, which is not too surprising, especially for scientists who essentially reject philosophy, like Dawkins

Or they just use words in a different domain and you didn't notice, and now you're angry because what they said didn't make sense. Come now, surely philosophy must handle such trivial cases of basic linguistic knowledge? If not, I'm gonna have to reject philosophy too, since it'd be trying to reject a much harder science (linguistics).


Philosophy isn't even very good at defining words. If you look up 'what is consciousness' in the philosophy section you'll get hundreds of pages of contradictory ideas.

The philosophical meaning of consciousness is usually what people are referring to when they talk about consciousness. Can you point to some other commonly used "different meaning" that's being used or intended in these discussions?

> I'm gonna have to reject philosophy too

You'll have lots of intellectually stunted company.


He was talking in the context of the Turing test, and there is a clear difference between the way Claude answers and the way a human would answer. So the Turing test hasn't been passed. It's like he is trying to convince himself for some reason.

That’s misleading, because the reason Claude answers that way is almost certainly due to reinforcement learning that deliberately prevents models from claiming they’re conscious.

That’s not a valid reason for saying they fail the Turing test. By most normal standards, they can definitely pass the Turing test. See e.g. https://arxiv.org/abs/2503.23674


Now you have to define pleasure AND pain without using the word "consciousness", as that would be circular logic.

Is pleasure then any reward function? Then a mathematical set of equations performed by a human writing on a piece of paper can qualify. Does that mean pen and paper are conscious? Or certain equations?


>Now you have to define pleasure AND pain without using the word "consciousness" as that would be circular logic.

Yes, so consciousness is inextricably tied to the ability to feel. In fact, I think consciousness is the ability to feel.

Hence even asking the question "Are LLMs conscious?" is absurd. It is not at all about intelligent behavior. That is what I think.


> In fact, I think consciousness is the ability to feel.

Just having senses is enough? So a thermometer or a camera is conscious?


What about single-celled or microscopic multicellular life forms? They can sense positive and negative aspects of their surroundings and move toward/away from said aspects. I don't think most would consider them conscious despite this directed behavior.

There are times I am feeling neither pain nor pleasure, but I am still experiencing consciousness.

So that definition seems to fail immediately.

And how do you even measure pain? Is it painful for an LLM to be reprimanded after generating a reply the user doesn't like? It seems to act like it.


>There are times I am feeling neither pain nor pleasure

It is about the ability..


I guess that just seems like an incredibly arbitrary criterion. Why would the potential for pleasure in the future determine whether I am currently conscious, even if I am not in fact experiencing pleasure?

The answer is in your question. You said you are "experiencing consciousness". So you are feeling something, and thus you have consciousness. In other words, it does not have to be pleasure or pain. The ability to "feel" is where it is at.

And how do you define pain and pleasure? Do insects feel pain?

> Do insects feel pain?

Yes, I think so. Because they show behavior that is consistent with being in a state of pain.

Whatever consciousness really is, I think evolution found a way to tap into it, by causing pain, or by registering pain on the consciousness by some unknown mechanism, for behaviors that are not beneficial to the organism that hosts the respective consciousness...

So I think if an organism that evolved here can display pain behavior, then it probably really does feel pain.


So if a robot + AI shows behavior consistent with pain, we can conclude it's conscious?

So if I build a simulation with robots living in a world and apply an evolutionary algorithm and at some point the virtual robots respond to damage in a way that looks like pain in animals, would the simulated robots be conscious? Or is it impossible that this could happen?

In my comment, we already assume that we (humans) are conscious and that we are the result of evolution. So the question was only whether something else that evolved similarly is conscious the way we are..

So to match that, your hypothetical scenario should involve robots that already have consciousness within them, and the question would be whether their evolution had managed to tap into that built-in consciousness and ability to feel, causing them to behave in one way or another.


See, this definition sucks, because even GPT-3 could display _signs_ of pleasure and pain. For that matter, so do characters in video games.

> And how do you define pain and pleasure?

They're not reducible, but I don't know if that means we don't have definitions; we can describe them well enough that most people (who aren't p-zombies or playing the sceptical philosopher role) know pretty well what we mean. All of our definitions have to bottom out somewhere...

> Do insects feel pain?

Nobody (except the insects) can know for sure. Our inability to know whether X is true doesn't imply X is meaningless, though.


But how can X be a good indicator for something I want to determine if I can’t measure X either?

> But how can X be a good indicator for something I want to determine if I can’t measure X either?

In the comment that started this subthread, qsera was responding to someone who said "Imo we don't even have a definition of [consciousness]". If qsera meant that we can measure consciousness in terms of pleasure and pain, then of course I agree that they were just pushing the problem back a step. But I don't think that's what they meant.


Is the following program conscious:

if pain = true then say ouch else say yay


This is the greatest fear. Take the example of simple AA batteries. As time and technology progressed, we didn't get safer, easily reusable and rechargeable batteries, infrastructure to charge them, and ways to safely dispose of them when they eventually reached end of life.

Instead we (India) got dirt-cheap throwaway batteries everywhere, bundled with every item or toy we buy...

I think economics and incentives are such that the global conversion from ICE to EV will happen a lot faster than technologies to cheaply recycle or dispose of batteries become available. We worry about polluting the atmosphere, and I wonder what the equivalent will be when improperly disposed EV batteries start piling up. At least for the atmosphere, plants and trees could potentially clean up CO2. What will clean up those dead batteries and the potentially toxic chemicals that seep out of them, if the economics and incentives are not aligned to make that happen? I don't think regulations are powerful enough to do that (at least until it is too late)... then what else?

Will the developed countries just ship their crap to places like my country and call it a day? I mean, if we buy some food from a hotel, it already comes in a recycled container from China or somewhere... and unless I am mistaken, our toys are mostly made from plastic waste from China..

Would something similar happen with EV waste as well?


EV batteries last much longer than we were told:

https://www.forbes.com/sites/carltonreid/2022/08/01/electric...

and, after they are no longer useful in EVs, they are still useful for at least another decade as grid storage.

There was (and still is) a lot of black PR around this, sponsored by the fossil fuel industry.


Climate change due to CO2 is going to be far more catastrophic.

So India should just keep acting as the waste dump for Europe, US and China because the alternative would be even worse?

No, India should ban nonrechargeable batteries - import and sale. Why is it other countries' fault what India chooses to bring into and sell in India?

India should stop being India.

It would be unfair of me to speculate why but India is one of the few countries I know of that does not truly advance the population as it grows.


Most CO2 in the air stays there for geological time frames. Local pollution from badly managed landfills is unlikely to collapse global ecosystems.

CO2 is much more serious


It's a little odd to think that 100 kWh battery packs would have the same lifecycle as integrated 0.01 kWh components. The value prop for recycling is, obviously, incomparable.

> The value prop for recycling is, obviously, incomparable..

It might be. But I think the more important thing is where the cost goes. If it goes to the environment, businesses have zero incentive to address it on their own, at least until the problem becomes a lot more obvious, by which point it may be too late for proper regulations to form.


>you would have to change the code at all call sites.

Actually I think you can just change the concrete argument `Foo` to a type-class constraint in Haskell as well. So the function would become something like `foo :: ToMaybeFoo a => a -> .. -> ..`, and you would implement `ToMaybeFoo` instances for `Foo` and `Maybe Foo`.

Agreed that this is more involved than TypeScript, but you get to keep `null` away from your code...


This is a neat idea, but it does require that you know up front the largest union that could ever be supported in that argument, so that you have the ability to narrow it down later. Worse, in the limit it requires a combinatorial explosion of type classes, with one for each possible union! The `ToXYZorW` classes form a powerset over the available types.

See fundeps.

Admittedly I don't really understand your construction. But this solution, if it works, doesn't look practical enough to be used routinely the way Foo|Null could be. By the way, some languages even shorten "Foo|Null" to "Foo?" as syntactic sugar.

> but you get to keep `null` away from your code...

I don't think this would be desirable once we have eliminated null pointer exceptions with untagged unions.


>Admittedly I don't really understand your construction.

It is quite simple. Instead of accepting a concrete type `Foo`, the function is changed to accept any type that can be converted to `Maybe Foo`. Since both `Foo` and `Maybe Foo` can be converted to `Maybe Foo`, the existing call sites that pass `Foo` do not require changing.

https://play.haskell.org/saved/g4idq2zv
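
For concreteness, here is a minimal self-contained sketch of that pattern. The `Foo` type, its `Int` field, and the `useFoo` function are made up for illustration; only the `ToMaybeFoo` class follows the naming used above:

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- Hypothetical concrete type, standing in for the original argument type.
newtype Foo = Foo Int

-- Anything that can be widened to `Maybe Foo`.
class ToMaybeFoo a where
  toMaybeFoo :: a -> Maybe Foo

-- A bare Foo widens via `Just`.
instance ToMaybeFoo Foo where
  toMaybeFoo = Just

-- A `Maybe Foo` is already in the target shape.
-- (FlexibleInstances is needed for this concrete instance head.)
instance ToMaybeFoo (Maybe Foo) where
  toMaybeFoo = id

-- The parameter changes from `Foo` to any `ToMaybeFoo a`;
-- existing call sites passing a bare Foo still compile unchanged.
useFoo :: ToMaybeFoo a => a -> Int
useFoo x = case toMaybeFoo x of
  Just (Foo n) -> n
  Nothing      -> 0

main :: IO ()
main = do
  print (useFoo (Foo 42))               -- old call site, unchanged
  print (useFoo (Just (Foo 7)))         -- new call site with Maybe
  print (useFoo (Nothing :: Maybe Foo))
```

The one wrinkle is the `FlexibleInstances` extension for the `Maybe Foo` instance, which GHC will suggest if it is missing.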


>what do you mean by write-only?

I think they meant that in Haskell it is very easy to write code that others find unreadable..


Yeah, I have composed a song (music with lyrics) in a dream. But after waking up, I didn't remember most of it.

I wonder: did I actually compose it, or did I just have a memory of having composed a great song?

What is experience, if not our very latest memory, right?


I don't want neovim or vim to compete with any IDEs. Lack of IDE features is why I like them.
