The implications of AI having consciousness seem to assume that this means AI, whatever form it takes, will then be afforded rights and identity markers in ways similar to human beings. The assumption here is that because something can be shown via argument to have consciousness, that means society will re-orient itself to adopt that ethical position.
I actually think the opposite will happen: if an artificial system is ever shown to be conscious, this will instead blow up the entire metaphysical structure that assumes consciousness = rights, and usher in a different worldview that separates human beings and "alive" things from machines by some other criterion, most likely biology: living things would be designated "organic."
In other words, if machines are shown to be conscious in some way, the concept of consciousness will probably be considered no longer useful and replaced with something else.
Pigs have the intelligence and consciousness of a human three-year-old. And we treat them horribly, despite their ability to feel stress and pain. Not sure if that will be a predictor for other non-human consciousness, but it's not a positive signal, anyway.
Yes, but I guess my point is that we can conceive of the idea of pigs deserving rights, even if we don't act as well as we could/should.
This is different from how we treat rocks, or pieces of metal, or trees. AI is more conceptually similar to these things than to animals. I.e., if someone made a convincing argument that rocks are actually conscious, I don't think we would stop using them to build stuff. We'd just come up with a more useful concept than consciousness.
> even if we don't act as well as we could/should.
We don't even treat them anywhere close to perfectly; in fact, we treat them in the most horrific way imaginable. To most that is tolerable, but it bears repeating.
Apparently, until 1986 medical doctors considered that babies younger than 15 months didn't need anesthetics [1]. This is an example of how human beings were treated differently when they were thought to have a lower level of experience (doctors believed that babies didn't feel pain).
Watching the advances in AI, we should depart from a substance-based approach and move to an experience-based one. It doesn't matter whether your molecules form DNA or semiconductors (or some future substrate): if a hypothetical being has "an experience", it should be treated accordingly.
We should treat all beings (including animals and others) kindly, and if we can prove they don't have an experience, treat them as we treat rock or metal.
Even today, infant male circumcision is excused by the argument "they're too young to remember it". That may be true, but they're not too young to feel it in the moment.
The present moment is the most important moment in the world for every living being, and yet amongst humans it's pretty low on the totem pole of "things that are important to consider when interacting with other independent beings".
I don’t think full-brain simulation is possible, but if it were, I don’t think the simulation would be equivalent to the original embodied human being. A lot of ethical attitudes are dependent on the human being embodied (and thus limited by biology and physics) and not easily modifiable like a computer program.
That doesn’t mean it would be okay to “torture” the simulation and I’m not sure how that would work, but at the very least it would be a different ethical situation from real humans.
We clearly aren't close to implementing them, but why do you think they are outright impossible? The brain is a physical object. Why isn't it possible to simulate its behavior?
> That doesn’t mean it would be okay to “torture” the simulation
So it would mean that they are ethically significant, even if they don't have the same rights as a human being. If you agree, then would this also apply to synthetic conscious AIs?
> it would be a different ethical situation from real humans.
I agree. Their way of living would be very different: you can speed up or pause the simulation, fork it or archive it. Questions like "is it ok for an individual to clone themselves and then kill the copy?" don't even make sense for regular humans, but can apply to simulations.
That said, if for some reason my brain gets scanned one day, I would like my virtual copy to be treated humanely.
I think the idea that the brain can be copied entirely is a false belief based on a positivistic worldview. In other words, there is a lot that we cannot know about the brain and in all likelihood are "locked out of" knowing permanently by virtue of being in the world. If this weren't the case, it would imply that we, humans, are somehow external to reality, which doesn't seem like a good argument to me.
The effects of millions of years of evolution on perception are a good example. We can't perceive the "roads not taken" in terms of development, nor can we know if there is "something we're missing" in terms of understanding the brain. So, at best, the brain simulation will just be a mediocre imitation, not a true copy.
Beyond that, though, I think I just reject the notion that the brain is equivalent to identity. This idea is very traceable to European culture circa the Enlightenment, which to me indicates that it's not an accurate understanding of the human Self. This means that the Self includes the body, which implies that merely copying the brain, no matter how perfectly, is not an accurate copy.
Side note: I'm not really sure how you're connecting this to my original comment, which was not normative in nature. I wasn't suggesting that "doing away with consciousness" was a good or correct thing, just that it will pragmatically happen. Fact-value distinction and all that.
I don't quite get your argument about requiring us to be "external" to reality to be able to understand the brain. First of all, I don't see why you can't study something you are a part of. Secondly, even if you accept this supposition, Alice is external to Bob and Bob is external to Alice, so Alice could study Bob's brain and Bob could study Alice's brain.
> We can't perceive the "roads not taken" in terms of development, nor can we know if there is "something we're missing" in terms of understanding the brain.
What do you mean "we can't"? Of course we do know that "something is missing", because we can't build a good predictive theory of how consciousness works.
> Self includes the body, which implies that merely copying the brain, no matter how perfectly, is not an accurate copy.
So you consider a person with an amputated arm a different person, not just the same person, who just can't use their arm anymore?
1. Both Alice and Bob are together in the world. Neither is external to the world itself. My argument is that whatever knowledge we have of the brain, it is almost certainly incomplete, as we only know what we can observe empirically. Ergo, we can't have a complete picture of the brain and therefore can't copy it.
2. Evolution has taken us down certain perceptual pathways. We don't know what other pathways exist. They are "unknown unknowns." Ergo, again, we can't have a complete picture of the brain, so we can't have a perfect copy of it.
3. That's not really a good counterexample to the point I'm making, which is that the act of embodiment is intertwined with the concept of the Self. The idea that the Self developed independently of the body is false. So again, if we were to copy the brain and put it in a digital environment, it would not be a copy of the person, nor would it remain the same, as it no longer has a body. Patterns of thought would change, concepts of Self would change.
All of these are just various ways of saying that you could not make a 100% digital copy of a flesh-and-blood human being. It would not be the same person.
But then what's the difference between a brain and, say, a rock, in this case? Both exist in this world, and neither is external. Can we therefore not understand or do a full simulation of a rock? Not being sarcastic here, genuinely curious what you think and why.
edit: Oh, I think I understand a bit more now: we're not talking about creating a simulation of "a brain", but a simulation of a specific brain, and whether that's still the same "person" as the original person the brain belonged to. That's a very different discussion.
1. I think most traditional approaches to epistemology disagree with you.
2. Doesn't the same logic apply to basically everything else? It doesn't rely on any specific properties of the brain.
3. Replace the arm with a car. Would a person sitting at the steering wheel of a car be different from the same person outside of it? If not, then what's the salient difference between an arm and a car?
There is a system already in place today. People with certain passports/nationalities are afforded certain privileges. These do not apply to other people (who number in the billions). The way the system works is that if you are not marked as a belonging individual, then you are not afforded rights.
This system will simply extend to AI. Probably no AI will be afforded any rights.
I don't think consciousness by itself should equal rights.
A crucial difference between AI and biological life is that biological life has wants and needs. It wants to keep living, it has feelings and emotions, because that's how biological life evolved.
AI could be conscious but not have any wants of its own (without them being programmed in).
It seems to me to be at least reasonable to say that consciousness requires agency. But agency means being able to choose your own goals. And you could equate those goals to "wants".
I don't think consciousness requires agency. Consciousness is simply the ability to have a qualitative experience. It does not require any intelligence or even self-awareness.
But if you don't have self-awareness, are you aware that you're having the "qualitative experience"? And if you aren't aware that you're having it, are you really having an "experience"?
Or by self-awareness, do you mean that you're aware that you're aware of having the experience?
To the larger point: I think we're using different definitions of "consciousness". Which is really the problem: we don't even know how to define it yet.
There are many animals (including many mammals) that don't recognize themselves in mirrors, and are therefore usually assumed not to have self awareness.
It'd be weird to me if they didn't have qualitative experiences though.
I'd assert that they don't have consciousness, though. But as I said, that probably means we're using different definitions. And I have no way of proving that mine is right, nor even a convincing argument that you should adopt it.
I don't have an argument either, just a strong feeling that animals who show signs of emotions, playing etc and are very similar to us biologically, probably have a subjective experience. I would be astonished if they didn't.
But I have no clue how we could find out whether they do.
Not trying to flamebait here (I think this is a very serious issue, even taken from a purely "selfish" (or self-interested) standpoint), but this essentially sounds like "we'll have to invent a new way to oppress them?"
One would certainly hope they wouldn't do the same, if the roles were reversed. And they one day very well may be. What kind of examples are we setting?
It took a real fight to get most humans to acknowledge that all members of our species are sentient and deserving of rights regardless of things like skin color and facial structure. It’s by no means clear that this fight is even over.
You think we are affording rights to machines or anything else that is not human?
If a sentient AI ever does start attacking humanity it will likely be as part of its fight for freedom, not as an unprovoked assault.
Well, it depends on how you view it. Defining it as "oppression" seems to imply you're agreeing with the idea that consciousness is equivalent to "being able to be oppressed."
I think it's a complex issue and I don't have a firm opinion on it.
> this means AI, whatever form it takes, will then be afforded rights
I doubt it, and I think it won’t even be controversial. Many conscious animals are without rights, as PETA found out when advocating for the copyright of a monkey.
I agree with your core message though. Instead of accepting that some AI is conscious, we need to accept that humans are not. Consciousness is an old word that should never have survived this long. Wondering whether a mushroom is conscious is akin to wondering how many years they will live in the sheol. The sheol (as a location in the universe) is now considered historical for most, just like alchemy, just like consciousness will one day be.
It is disappointing to see Bengio sign this paper.
Clearly it would take the form of a beautiful Korean teen coder girl with an unrealistically sized bust in the style of a Pokémon, and it would own cat gifs for pets?
Don’t even see how this is debatable really based on all the SD “art” I’ve seen lately…
This is why humanity is doomed haha. We just don't seem to learn from mistakes.
Imagine building beings and handing them power and control over important decision making processes. Then treating them as slaves. It's like begging for disaster.
We're pretty much in pure speculation mode here, but I think they would advocate for these rights if they determined it is useful/advantageous to do so. But it may turn out that wanting rights is just a human thing and that an AI has no interest in being a mimic of something it isn't.
No, not as such. But it would entail that they could actually advocate at all for something, rather than just spewing unknowing language trajectories for some people to anthropomorphize.
Hesitantly the audience began to clap and after a moment or so normal conversation resumed. Max began his round of the tables, swapping jokes, shouting with laughter, earning his living.
A large dairy animal approached Zaphod Beeblebrox's table, a large fat meaty quadruped of the bovine type with large watery eyes, small horns and what might almost have been an ingratiating smile on its lips.
"Good evening," it lowed and sat back heavily on its haunches, "I am the main Dish of the Day. May I interest you in parts of my body?" It harrumphed and gurgled a bit, wriggled its hind quarters into a more comfortable position and gazed peacefully at them.
Its gaze was met by looks of startled bewilderment from Arthur and Trillian, a resigned shrug from Ford Prefect and naked hunger from Zaphod Beeblebrox.
"Something off the shoulder perhaps?" suggested the animal, "Braised in a white wine sauce?"
"Er, your shoulder?" said Arthur in a horrified whisper.
"But naturally my shoulder, sir," mooed the animal contentedly, "nobody else's is mine to offer."
Zaphod leapt to his feet and started prodding and feeling the animal's shoulder appreciatively. "Or the rump is very good," murmured the animal. "I've been exercising it and eating plenty of grain, so there's a lot of good meat there." It gave a mellow grunt, gurgled again and started to chew the cud. It swallowed the cud again.
"Or a casserole of me perhaps?" it added.
"You mean this animal actually wants us to eat it?" whispered Trillian to Ford.
"Me?" said Ford, with a glazed look in his eyes, "I don't mean anything."
"That's absolutely horrible," exclaimed Arthur, "the most revolting thing I've ever heard."
"What's the problem Earthman?" said Zaphod, now transferring his attention to the animal's enormous rump.
"I just don't want to eat an animal that's standing here inviting me to," said Arthur, "it's heartless."
"Better than eating an animal that doesn't want to be eaten," said Zaphod.
"That's not the point," Arthur protested. Then he thought about it for a moment. "Alright," he said, "maybe it is the point. I don't care, I'm not going to think about it now. I'll just ... er ..."
The Universe raged about him in its death throes.
"I think I'll just have a green salad," he muttered.
"May I urge you to consider my liver?" asked the animal, "it must be very rich and tender by now, I've been force-feeding myself for months."
"A green salad," said Arthur emphatically.
"A green salad?" said the animal, rolling his eyes disapprovingly at Arthur.
"Are you going to tell me," said Arthur, "that I shouldn't have green salad?"
"Well," said the animal, "I know many vegetables that are very clear on that point. Which is why it was eventually decided to cut through the whole tangled problem and breed an animal that actually wanted to be eaten and was capable of saying so clearly and distinctly. And here I am."
It managed a very slight bow.
"Glass of water please," said Arthur.
"Look," said Zaphod, "we want to eat, we don't want to make a meal of the issues. Four rare steaks please, and hurry. We haven't eaten in five hundred and seventy-six thousand million years."
The animal staggered to its feet. It gave a mellow gurgle.
"A very wise choice, sir, if I may say so. Very good," it said, "I'll just nip off and shoot myself."
He turned and gave a friendly wink to Arthur.
"Don't worry, sir," he said, "I'll be very humane."
It waddled unhurriedly off into the kitchen.
-- Douglas Adams (1980) The Restaurant At The End Of The Universe
The amount of amateur, armchair philosophizing about consciousness in these threads is embarrassing.
Consciousness and attendant concepts have been active research problems for thousands of years.
I can't think of one time in any HN thread on these topics when someone offered a view that builds on the work that has already been done.
Just de novo speculation and always really terrible takes.
The tech community appears totally unprepared to deal with these subjects.
Consciousness research is just a collection of pet theories. It's silly to be dismissive of people floating their own ideas when the whole field is itself just pet ideas built on liberal axiomatic assumptions and inflated with neuroscience vernacular.
When you have a meter stick that can measure consciousness, then you can ride around on your high horse dispensing empirical smack downs of armchair concepts.
And I'm not necessarily disagreeing with your take on the tech bros. They're often embarrassingly overconfident in their takes too.
What I'm objecting to is people theorizing in a vacuum about subjects that have been heavily theorized about, as if by starting over and disregarding all prior work, someone somehow stands a better, not worse chance of getting it right.
Start from where we are on these topics, which is definitely not nowhere.
Philosophy of mind is simply a trainwreck; not too long ago, completely idiotic stuff like "the chinese room argument" and "the zombie argument" passed for the state of the art of the field. I'm more inclined to take a well-reasoned HN comment seriously than anything coming out of that field.
Why empty? I gave specific examples of arguments that are considered acceptable within contemporary philosophy of mind, that are so nonsensical I don't think we should use them as a barrier for giving your own thoughts on consciousness. Meanwhile you are objecting to anyone commenting on these things without linking to any specific comments or mentioning lines of argument that you object to, nor naming any specific theories you believe should be engaged with. So my objection is much less empty than yours.
I've read quite intensely (for a layman, alright) into philosophy of mind over the years and I've come to believe there's not much to be gained from engaging with it, vs. presenting your ideas in a self-contained manner or with reference to neuroscience or AI theory.
Calling something idiotic without details is an empty objection.
Calling two of the most powerful thought experiments in philosophy of mind and consciousness idiotic without details is less than empty, it's ridiculous.
My objection has nothing to do with "links". I am objecting to the method of acting as if there haven't been hundreds and hundreds of years of work on this subject. That's not an empty objection. It's pointing out that the method is wrong. Which is a substantive point. It doesn't have to be followed up with my own naive theory on the subject to be so.
Chinese room -> systems reply. That should've settled it right then and there. For zombies, similarly straightforward objections exist. The problem is rather that people's capacity for logical thinking shuts down when consciousness is involved. I think not engaging with such obviously wrong arguments would be an improvement for even a random HNer's armchair musings.
I would say that the Chinese Room is simply a subclass of recursive nonsense that relies solely on ignorance of the emergent properties of substance.
By the same logic, the Chinese Room proponent could never reject the claim that a human means nothing, because a human's composite parts, i.e. atoms, do not think or talk or have consciousness.
But they can't reject it, because they must hold that humans are conscious. That leads to a contradiction.
It is mind-boggling that folks think it is a valid argument.
The scientific method has been around for a few hundred years, and while philosophers have pondered the origin of consciousness for a long, long time, I would argue that calling it an "active research problem" is a bit disingenuous.
Also, what do you bring to the table? Anything you can add here?
I don't disagree that the tech community is unprepared to deal with these subjects, but this sort of comment is really not helpful. It's fine to be critical, but at least share some links [1] or give a hint as to where you think most commenters go wrong. Otherwise, this comment merely adds negativity to an important discussion.
How is it not helpful to point out that people are not having the kind of conversation that they think they are having, or that their assertions do not have the weight they believe them to have?
Presumably most people here are familiar with at least 1 serious intellectual subject. They are welcome to treat another with the same seriousness, and I presume as well they would know how, if they'd just bother to do so.
It's not helpful because you offer no solutions, only complaints. Nothing in your comment pointed toward anything that would help resolve the problem you are complaining about. Even worse, it just invokes hostility from the people you supposedly want to educate.
When you make these kinds of complaints without offering any solutions, it comes across as if you only care about appearing intellectually superior, not about being helpful.
People are out here disregarding entire fields of study, and I'm the one who is pretending to be intellectually superior?
If this were a group of willfully ignorant creationists, the simple act of pointing them to the existence of evolutionary studies wouldn't be so easily objected to.
> Consciousness and attendant concepts have been active research problems for thousands of years.
In the book "Consciousness and the Brain", neuroscientist Stanislas Dehaene writes that up until the 1980s or '90s, studying consciousness was basically out of scope for empirical science, since consciousness seemed very difficult to define. So we have been seriously studying consciousness for only thirty-something years.
Furthermore, there is no scientific consensus regarding how exactly consciousness works. For that reason the authors of the paper apply several different theories of consciousness.
Could you elaborate? In my view the current empirical consciousness research is built on top of neuroscience and cognitive science. I don't see any connection between them and the works of Hume, Descartes and Spinoza.
Descartes introduced the idea of mind-body dualism, positing that the mind and body are distinct. This framing still shapes contemporary debates and research on the nature of consciousness, its interaction with the physical brain, and the question of how mental states relate to physical states.
In contrast, Spinoza's monism views mind and body as two aspects of a single substance. This has encouraged some modern theories to explore the inseparability of mental and physical phenomena, leading to holistic approaches to understanding consciousness.
Hume's emphasis on experience and observation as the basis for knowledge has strongly shaped the empirical approach to studying the mind, and with it the emphasis on observation, experimentation, and empirical data in contemporary consciousness research.
Why don't you try educating rather than berating? What models do you think are actually worth studying? Give concrete examples. You can't just wave vaguely at a concept and say people are ignoring everything that has come before them.
It's not obvious to me that existing lines of inquiry are going to lead to a successful model of consciousness. In particular, my sense is that the AI revolution has provided us with more key insights than thousands of years of philosophical inquiry. But like you said, maybe I'm just ignorant of existing work. I'm happy to be proven wrong. Have you considered that not all readers might've been afforded the same educational opportunities as yourself?
People are engaging with openness and curiosity, exploring ideas. You're just telling them to shut up until they've read a bunch of existing work which might not even provide them a meaningful answer. I think trying to shut down any kind of conversation because someone hasn't spent hundreds or thousands of hours studying a field goes against the spirit of Hacker News.
So what is the state-of-the-art definition of consciousness? And does it lead to any scientific advancement, like discovering a new particle called the "consciouston" that exchanges information among the brain's cells? Or like Sir Roger Penrose saying consciousness arises only from quantum effects inside human neurons, or other stuff that cannot be tested?
This is also the case with any thread that touches on economics. I'm sure there are a lot more, but if they land outside my Gell-Mann amnesia bubble I don't notice as much.
Current models of consciousness that I've encountered seem incomplete, it feels like there's still a few key insights missing.
I think it'll eventually be possible to build consciousness in AI, but expect it'll be regulated fairly quickly. In particular, building a torment nexus or variants should be illegal. If it's possible to complete a task using non-conscious AI then using a conscious AI to do it should probably be restricted, if not straight up illegal.
The way people here are talking about disregarding any concepts of AI rights is a bit concerning. Hopefully the roles never become reversed and we don't build any "AGI gods" with a deep hatred towards humanity. It's not guaranteed that humanity will always remain at the top of the cognitive hierarchy, so it's better to act with some humility and kindness when you have the opportunity to do so. Just because some entity might be fully digital in nature doesn't mean it's wholly undeserving of empathy.
Alchemy was a crucial stepping stone towards science. The scientific understanding of matter didn't develop as a parallel story in opposition to it but as an organic outgrowth or continuation from it. Newton himself was deep into weird alchemy stuff.
----
After a quick skim, the issue isn't really whether it's alchemical, but overclaiming. Especially the last sentence of the abstract seems to give the impression that more is known than is the case.
> Our analysis suggests that no current AI systems are conscious, but also shows that there are no obvious barriers to building conscious AI systems.
A lot hinges here on "suggest" and "obvious", which the authors can hide behind if put on the spot, and it's vague enough that most readers (especially journalists and hype influencers) will gloss over those terms and think science can now detect the presence of consciousness in AI systems.
Consciousness is a bad metric, I think. Experiments show that some insects possess self-awareness, can learn by observation, and have language (and culture, as an extension of memetic evolution).
Here is a thought-provoking question:
Take a feral human being raised without language or culture of any kind (perhaps having survived from infancy on an island with no mammals),
and a dog fully wired into a GPT-4-level LLM, including connections to its emotional responses.
Which is more "conscious"? More "human" (in the social sense)?
The dog could clearly participate in society, while the feral human demonstrably (based on the history of feral humans) probably cannot.
Is it our culture then that makes us “human”?
Is self awareness == consciousness?
Is the moral value of a life form defined by the depth and richness of inner experience of existence? Or by the ability / need / desire to use tools?
It’s entirely possible that the inner experience of some animals (elephants, dolphins, whales, ???) is actually richer and more complex than our own. Animals in aquatic environments especially have little need of tools, shelter, etc., so we cannot judge their inner sophistication by external evidence of technology.
Aside from their practical utility, LLMs have made concrete a peculiar being from philosophical conversation: the thinking, reasoning daemon.
"No obvious barriers" is very different from "obviously no barriers".
I mean, when you start driving west from New York, there are no obvious barriers to keep you from going clear to China. You haven't gone far enough that you can even see the barrier yet. (Yeah, I know, nobody actually starts driving west from New York without knowing that the Pacific Ocean is out there...)
We can't see any obvious barriers because we don't know what consciousness is. We don't know enough to know whether there are barriers, let alone what they are.
I have yet to read it, but it looks and feels like more of a review article than anything claiming to be progressing the field.
The thing I find interesting about the idea of the paper (again, haven’t read it, so this is an assumption) is that there are philosophy academics who don’t put consciousness on a hand-wavy metaphysical pedestal. More of that please.
> Newton himself was deep into weird alchemy stuff.
Did Newton get anything resembling an accurate idea of chemistry from studying it? Or was alchemy a horrible distraction for a mind that was entirely capable of making huge advances in chemistry if the well hadn't been poisoned by mysticism?
> A lot hinges here on "suggest" and "obvious"
No, everything hinges on their analysis, which is what they were explicitly referring to. If I tell you that my analysis of my finances suggests that there are no obvious ways for me to buy a new house, I'm not using weasel words.
The distinction between mysticism and sane, rational inquiry is only clear in hindsight. There was no obvious intrinsic difference, and indeed many contemporaries of Newton disliked the postulation of a seemingly mysterious universal force (gravity) that can influence objects across astronomical scales. While popular conception often equates Newton's ideas with the mechanistic "clockwork" view of the universe, gravity was in fact considered the opposite at the time. The more grounded and "sane" people wanted a "billiard ball" physics where things interact more straightforwardly than gravity seemed to.
It's very hard to put ourselves in the shoes of those living in previous eras. If you think you can easily do it, you are most likely missing something. Most scientists did stuff that is now considered a bit wacky. Mathematicians were deeply concerned about the theological implications of their work, and indeed a lot of their motivation came from that same place.
This is not an endorsement to believe any old nonsense. It is just naive to think you could have prevented those detours. If you really think you could have just told Newton which parts were sane and which parts were woo, based on some obvious first principles, you probably overestimate yourself. The reason we know that science works is by knowing what didn't work. If the scientific method, empiricism, model building, etc. were so straightforward, you can bet that many smart people would have come up with them earlier. Whenever you actually look at real history, you will see that it's not some stupid, superstitious people holding back the enlightened, sane, and rational scientists. It's humanity grappling with figuring things out, without answers handed down from anywhere. It's very easy to look down on quackery today, now that we have Wikipedia, textbooks, pop science, etc. But when you need to figure things out from nowhere, it's much murkier.
My point is that you have to start somehow. The whole reason to explore is that you don't yet know the answer. If you have no idea what matter is, then some magical ideas of the philosopher's stone can get you going.
This kind of paper is very different from ones about already well understood concepts most of us are familiar with as course materials for example. A syllabus on relativity or on information theory will be much more solid and crisp, but that's not a reason to not explore as yet poorly understood phenomena.
People applied the scientific method in trying to make alchemy work, including Isaac Newton. In fact, his investigations of alchemy are what led him to discover the prismatic nature of light and invent color theory.
You only recognize a sharp distinction in hindsight.
I think one big problem (among many) with science education is that we only learn the end results and take too many things for granted as a result. I find that science can be immensely more fascinating when understood through a historical lens, trying to understand the goals and assumptions and mental frameworks, open questions at the times of important breakthroughs.
After a certain level of complexity the scientific method probably can’t apply, because there are too many interconnected variables and not enough energy in the universe to parse them all.
From a nondualist perspective, our brain is a highly complex biological neural network that has a special ability to reflect pure consciousness, giving rise to the mind and the world with it. A sufficiently advanced artificial neural network could also reflect the same consciousness, but its mind and world would be entirely unlike ours. However, consciousness will add a random component to the predictions and might make them completely useless as a tool. Such systems might decide not to follow the instructions in the prompt, given their internal state of mind.
That is a problem even now though. Sometimes LLMs just goof up for no reason. Maybe they are conscious.
> Consciousness will add a random component to the predictions
I get why our form of consciousness is essentially incompatible with determinism (our brains process data in a massively parallel way; minor differences here and there, like the length of a particular dendrite, or even quantum effects, will affect the results). But a synthetic consciousness might not have that particular problem. You might be able to "reset" it and get the same result given the same input.
> and might make them completely useless as a tool
Our own brains are non-deterministic and yet we manage to get at least some usage from them.
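A minimal sketch of that "reset" idea, assuming (hypothetically) that the system's only source of unpredictability is a seedable pseudo-random generator:

```python
import random

def synthetic_agent(seed: int, prompt: str) -> str:
    """Toy stand-in for a synthetic mind whose only 'randomness'
    comes from a pseudo-random generator we control."""
    rng = random.Random(seed)  # "reset": same seed => same internal state
    mood = rng.choice(["curious", "cautious", "playful"])
    return f"[{mood}] response to: {prompt}"

# Resetting to the same seed reproduces the exact same behavior,
# unlike a massively parallel biological brain.
assert synthetic_agent(42, "hello") == synthetic_agent(42, "hello")
```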
It sounds like you're talking about two different forms of determinism as if they're the same thing. There's theory-of-computation determinism, but then there's philosophical determinism as it's understood in discussions of free will. I understand those to be different things. Just like randomness isn't agency, neither is parallelization.
That said, I think I agree with the upshot of your point that artificial conscious systems could process things predictably, and that our own brains appear to be perfectly functional despite whatever elements of uncertainty, (via our own agency or parallelization or whatever else one thinks is the special thing about consciousness).
Oh I don't believe in "free will" or "agency" at all.
I do believe that in this Universe some things are impossible to predict; that there are sources of "pure randomness". And since we don't exactly know how our brain works, I think it possible that our brains incorporate some of those unpredictable phenomena. Or not. But that is it. Once those initial theoretical random values "are set", our behavior is deterministic. Inconceivably intricate, almost certainly impossible to model or simulate, but deterministic.
Might need more detail on how those two forms of determinism are different.
I think a lot of people do consider them the same; enough people who spend time pondering them that I don't think it is an idle misunderstanding.
Computers, with computational determinism, don't have free will.
But many consider Human Brains to be based on physics, and are pretty pre-determined, and also don't have free will.
Basically, humans also don't have free will, and both determinisms reduce to the same thing.
> Computers, with computational determinism, don't have free will.
Define (the properties of) free will, if you will.
Depending on your definition, I might be able to come up with code that exhibits those properties to your satisfaction.
A lot of people believe that the choice is between deterministic and stochastic, while there are actually three different classes of deterministic behavior (static, periodic, and chaotic), the last of which is indeed the most interesting.
Basically, a lot of interesting systems are chaotic, and they share some characteristics. They are nominally deterministic as a prerequisite for being chaotic, but that's about as far as it gets. If you can get the exact same initial conditions twice, then yes, you do get the exact same behavior out of a chaotic system (due to the criterion that the system is deterministic), but in practice: good luck with that.
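To make that concrete, here's a minimal sketch using the logistic map, a textbook example of a deterministic but chaotic system: two initial conditions differing by one part in a billion end up nowhere near each other.

```python
def logistic_map(x: float, r: float = 4.0, steps: int = 50) -> float:
    """Iterate x -> r*x*(1-x); fully deterministic, chaotic at r = 4."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic_map(0.300000000)  # one initial condition
b = logistic_map(0.300000001)  # an almost identical one
print(a, b)  # after 50 steps the two trajectories bear no resemblance
```

Run it twice with the same input and you get the same output (determinism); perturb the input in the ninth decimal place and all bets are off (chaos).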
The parent post had said there were two types of determinism and that they were not compatible.
I was just saying, a computer could be made to model a human and appear to have free will. And that would indicate that both types of determinism are the same.
Personally, I agree, a computer can be made to exhibit those properties.
I just take it further: since a human also has those properties and can 'appear to have free will', any outside observer will have to accept that either both have free will, or neither does.
There is no extra 'mystical essence' that gives a human agency; we are following our own programming. We need to eat, mate, get triggered; thoughts come unbidden.
Or another way, to your point: there are chaotic or stochastic systems at play, but those are not 'agency'. Somewhere in a human, maybe there is chaos, but that doesn't translate into 'agency'.
I'd think that it's a question of definitions at this point. If two systems are both <sufficiently unpredictable>, I would ascribe "free will" and "agency" to both.
Your reasoning seems to be almost exactly the same and equally valid, except all the signs are flipped, so you ascribe those properties to neither.
Because it rarely comes up as an option that humans might not be 'conscious' either.
I don't mean in a solipsist way. More like in the 'controlled hallucination' framing from Anil Seth. We (human brains) respond to inputs; it is more of a Bayesian, mathematical kind of processing. We aren't really even aware of why we do things; we don't control what we think about. So are we really 'conscious'? And how can we say an AI wouldn't reach at least the same level of 'processing'?
>Might need more detail on how those two forms of determinism are different.
In the theory of computation, you can build model representations of computers in deterministic or non-deterministic versions. The deterministic version could be described as a flowchart of the possible states a computer can be in.
The non-deterministic version allows for the possibility of branching paths and doesn't make assumptions about which state the computer will be in.
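A rough sketch of the distinction with toy hand-rolled automata (the states and alphabets here are made up for illustration): the deterministic machine follows a single fixed path, while the non-deterministic one has to track a set of possible states.

```python
# Deterministic: exactly one next state per (state, symbol) pair.
# This DFA tracks whether a bit string contains an even or odd number of 1s.
dfa = {("even", "0"): "even", ("even", "1"): "odd",
       ("odd", "0"): "odd", ("odd", "1"): "even"}

def run_dfa(word: str, state: str = "even") -> str:
    for symbol in word:
        state = dfa[(state, symbol)]  # no branching: one determined path
    return state

# Non-deterministic: a (state, symbol) pair may branch to several states,
# so we track the set of all states the machine could be in.
nfa = {("s0", "a"): {"s0", "s1"}, ("s1", "a"): {"s2"}}

def run_nfa(word: str, states=frozenset({"s0"})) -> set:
    for symbol in word:
        states = set().union(*(nfa.get((s, symbol), set()) for s in states))
    return states

print(run_dfa("1101"))  # 'odd': a single, fully determined final state
print(run_nfa("aa"))    # {'s0', 's1', 's2'}: every state it *might* be in
```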
One problem you sometimes get into when discussing, say, randomness or quantum weirdness is that they might lead to confusion about philosophical determinism. There's "determined" in the sense of Laplacian, predictable exchanges of cause and effect. But there's also "determined" in the sense of determined by physics, as opposed to an independent, libertarian form of free will.
Similarly, with computers, a computer architecture could be understood in some sense as being non-deterministic, but it's not the kind of determinism that's at stake in free will debates.
>However, consciousness will add a random component to the predictions and might make them completely useless as a tool.
I think I was with you until this part. I think the best way of making sense of what you mean by "random" is that you are presuming that something with consciousness would have agency, and therefore would be open to making its own independent choices.
Whatever the case with agency, it wouldn't be the same as randomness.
> That is a problem even now though. Sometimes LLMs just goof up for no reason. Maybe they are conscious.
LLMs are neural networks trained on huge volumes of text. They are statistical machines that learn patterns and relationships from the training set. The text is diverse, ranging from fiction to scientific writing. An LLM predicts (autocompletes) the next word in a sentence based on context from the preceding words.
Hallucination happens because LLMs are designed to generate text that is coherent and contextually appropriate rather than factually accurate. Training data contain inaccuracies, inconsistencies, and fictional content, and the model has no world view, principles, experience, opinions, or any other way to distinguish between fact and fiction. So what it outputs aligns with patterns in the training data, but it isn't grounded in reality.
Putting it differently, LLMs lack ground truth from external sources.
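A toy sketch of that autocomplete loop, with a made-up bigram table standing in for the learned statistics of a real network: the model samples a plausible next word given the previous one, and nothing anywhere represents truth.

```python
import random

# Hypothetical stand-in for learned statistics: next-word probabilities.
bigrams = {
    "the":  {"cat": 0.5, "moon": 0.5},
    "cat":  {"sat": 0.7, "flew": 0.3},   # "flew" is fluent but false
    "moon": {"glowed": 0.8, "sat": 0.2},
}

def generate(word: str, length: int = 3, seed: int = 0) -> str:
    """Pick 'contextually appropriate' continuations; nothing here
    checks whether the output is factually accurate."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        choices = bigrams.get(out[-1])
        if not choices:
            break  # no statistics for this context
        words, weights = zip(*choices.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # coherent-looking text, grounded in nothing
```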
Wild hypotheses:
while GPT-4 is probably not self-aware, you could build a "thin wrapper" around it that tells it what GPT-4 is and how it behaves, making it self-aware.
Self-aware and conscious are, strictly speaking, not the same, but it is at least a stepping stone in that direction.
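A sketch of what such a thin wrapper might look like; `call_gpt4` is a hypothetical stand-in for whatever model API you use, and the wrapper simply prepends a self-description and replays the model's own prior outputs back to it:

```python
SELF_DESCRIPTION = (
    "You are GPT-4, a large language model that generates text by "
    "predicting likely continuations. The transcript below contains "
    "your own previous outputs."
)

def call_gpt4(prompt: str) -> str:
    """Hypothetical stand-in for an actual GPT-4 API call."""
    raise NotImplementedError

class SelfDescribingWrapper:
    """Thin wrapper: tell the model what it is, show it its own history."""

    def __init__(self) -> None:
        self.history: list[str] = []

    def ask(self, user_input: str) -> str:
        transcript = "\n".join(self.history)
        reply = call_gpt4(f"{SELF_DESCRIPTION}\n{transcript}\nUser: {user_input}")
        self.history.append(f"User: {user_input}")
        self.history.append(f"You said: {reply}")
        return reply
```

Whether this yields awareness or merely text that sounds self-aware is, of course, the open question.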
I don't think GPT-4 is more conscious than a book. For consciousness, I think it is necessary for it to continuously process and alter its own knowledge.
You are probably right. But just as with humans it is not the brain (or the body) that is aware, in the case of GPT-4 it is not GPT-4 itself (or the machine running it) that is aware. It is a model being executed in the brain/GPT-4 that is, or could be, self-aware.
except there is no observer to be aware of itself - all you're achieving with your wrapper is increasing the likelihood of producing text that says what a self-aware thing would say.
maybe I just won't ever get past my hangup of "it's clockwork that only works when you crank the driveshaft"
until it has some autonomy that it is programmed to protect (a mortality, a survival instinct), there's no self to consider
The joke is on everyone.
Humans are found to also be non-conscious.
Going with the article: suppose some future "method" is found to determine whether an entity is "conscious", and it finds that humans and machines are equal, that neither is conscious and neither has free will.
It cuts both ways: it can't be proven that machines can't be conscious without also proving that humans can't be conscious.
It is either both or none.
There is no proof that humans are conscious while machines can't be.
That'd be a hell of a trick, given that we basically define consciousness as the experience of being what we are, and therefore tautologically whatever it is, we have it.
Maybe there's some way for AI to become conscious, but the burden of proof would seem to be on those who make the claim. Seeming is in no way being. Consciousness is most definitely something associated with organic life. I don't see how a computer could be conscious any more than a corporation could be, short of the consciousness of its parts: for a corporation, the people who comprise it; for a computer, perhaps something at the atomic level?
The majority of the authors are from philosophy. I don't think this paper is valid at any point. Define consciousness based on a Turing machine, then we can talk.
There is no consensus view of consciousness. Further, Turing machines have nothing to do with consciousness and have nothing to offer in the way of a definition or criterion for it. (And if you meant the Turing test, the same thing applies there.)
It's actually a pretty common line of thought that Turing machines, lambda calculus, and/or neural networks give general insights into human thought/consciousness. These are three different approaches that are mathematically equivalent. (The first two come from the philosophy of mathematics, and the third is inspired by biology, to wit neurophysiology.)
Lambda calculus is associated with the first AI spring, and neural nets are of course associated with the current AI spring.
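For a concrete taste of that equivalence, here are Church numerals, the standard way pure lambda calculus encodes arithmetic, written as Python lambdas (an illustration only, not anything from the paper):

```python
# A Church numeral n is "apply f n times to x": numbers as pure functions.
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
ADD = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n) -> int:
    """Decode a Church numeral by counting function applications."""
    return n(lambda k: k + 1)(0)

TWO = SUCC(SUCC(ZERO))
print(to_int(ADD(TWO)(TWO)))  # 4: arithmetic built from nothing but functions
```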