Article  Within a decade, we may have AI systems that are candidates for consciousness

#1
C C
Could a Large Language Model be conscious?
https://pubs.aip.org/physicstoday/articl...ite-recent

EXCERPTS (David Chalmers): I’m interested in both today’s LLMs and their successors. These successors include what I’ll call LLM+ systems, or extended large language models. These extended models add further capacities to the pure text or language capacities of a language model. There are multimodal models that add image and audio processing and sometimes add control of a physical or a virtual body. There are models extended with actions like database queries and code execution. Because human consciousness is multimodal and is deeply bound up with action, it is arguable that these extended systems are more promising than pure LLMs as candidates for humanlike consciousness.

My plan is as follows. First, I’ll try to say something to clarify the issue of consciousness. Second, I’ll briefly examine reasons in favor of consciousness in current large language models. Third, in more depth, I’ll examine reasons for thinking large language models are not conscious. Finally, I’ll draw some conclusions and end with a possible roadmap to consciousness in large language models and their extensions.

[...] What is consciousness, and what is sentience? As I use the terms, consciousness and sentience are roughly equivalent. Consciousness and sentience, as I understand them, are subjective experience. A being is conscious or sentient if it has subjective experience, like the experience of seeing, of feeling, or of thinking.

In my colleague Thomas Nagel’s phrase, a being is conscious (or has subjective experience) if there’s something it’s like to be that being. Nagel wrote a famous article whose title asked “What is it like to be a bat?” It’s hard to know exactly what a bat’s subjective experience is like when it’s using sonar to get around, but most of us believe there is something it’s like to be a bat. It is conscious. It has subjective experience.

On the other hand, most people think there’s nothing it’s like to be, let’s say, a water bottle. The bottle does not have subjective experience.

Consciousness has many different dimensions. First, there’s sensory experience, tied to perception, like seeing red. Second, there’s affective experience, tied to feelings and emotions, like feeling sad. Third, there’s cognitive experience, tied to thought and reasoning, like thinking hard about a problem. Fourth, there’s agentive experience, tied to action, like deciding to act. There’s also self-consciousness, awareness of oneself. Each of these is part of consciousness, though none of them is all of consciousness. These are all dimensions or components of subjective experience.

Some other distinctions are useful. Consciousness is not the same as self-consciousness. Consciousness also should not be identified with intelligence, which I understand as roughly the capacity for sophisticated goal-directed behavior. Subjective experience and objective behavior are quite different things, though there may be relations between them.

Importantly, consciousness is not the same as human-level intelligence. In some respects it’s a lower bar. For example, there’s a consensus among researchers that many non-human animals are conscious, like cats or mice or maybe fish. So the issue of whether LLMs can be conscious is not the same as the issue of whether they have human-level intelligence. Evolution got to consciousness before it got to human-level intelligence. It’s not out of the question that AI might do the same.

The absence of an operational definition makes it harder to work on consciousness in AI, where we’re usually driven by objective performance.

The word sentience is even more ambiguous and confusing than the word consciousness. Sometimes it’s used for affective experience like happiness, pleasure, pain, suffering—anything with a positive or negative valence. Sometimes it’s used for self-consciousness. Sometimes it’s used for human-level intelligence. Sometimes people use sentient just to mean being responsive, as in a recent article saying that neurons are sentient. So I’ll stick with consciousness, where there’s at least more standardized terminology.

I have many views about consciousness, but I won’t assume too many of them. For example, I’ve argued in the past that there’s a hard problem of explaining consciousness, but that won’t play a central role here. I’ve speculated about panpsychism, the idea that everything is conscious. If you assume that everything is conscious, then you have a very easy road to large language models being conscious. I won’t assume that either. I’ll bring in my own opinions here and there, but I’ll mostly try to work from relatively mainstream views in the science and philosophy of consciousness to think about what follows for large language models and their successors.

That said, I will assume that consciousness is real and not an illusion. That’s a substantive assumption. If you think that consciousness is an illusion, as some people do, things would go in a different direction.

I should say there’s no standard operational definition of consciousness. Consciousness is subjective experience, not external performance. That’s one of the things that makes studying consciousness tricky. That said, evidence for consciousness is still possible. In humans, we rely on verbal reports. We use what other people say as a guide to their consciousness. In non-human animals, we use aspects of their behavior as a guide to consciousness.

[...] Consciousness also matters morally. Conscious systems have moral status. If fish are conscious, it matters how we treat them. They’re within the moral circle. If at some point AI systems become conscious, they’ll also be within the moral circle, and it will matter how we treat them. More generally, conscious AI will be a step on the path to human-level artificial general intelligence. It will be a major step that we shouldn’t take unreflectively or unknowingly.

[...] Where current LLMs such as the GPT systems are concerned: I think none of the reasons for denying consciousness in these systems is conclusive, but collectively they add up. We can assign some extremely rough numbers for illustrative purposes. On mainstream assumptions, it wouldn’t be unreasonable to hold that there’s at least a one-in-three chance—that is, to have a subjective probability or credence of at least one-third—that biology is required for consciousness. The same goes for the requirements of sensory grounding, self models, recurrent processing, global workspace, and unified agency. If these six factors were independent, it would follow that there’s less than a one-in-ten chance that a system lacking all six, like a current paradigmatic LLM, would be conscious. Of course the factors are not independent, which drives the figure somewhat higher. On the other hand, the figure may be driven lower by other potential requirements X that we have not considered.

Taking all that into account might leave us with confidence somewhere under 10 percent in current LLM consciousness. You shouldn’t take the numbers too seriously (that would be specious precision), but the general moral is that given mainstream assumptions about consciousness, it’s reasonable to have a low credence that current paradigmatic LLMs such as the GPT systems are conscious.
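
As a rough illustration of that arithmetic, here is a minimal sketch in Python, assuming each of the six factors carries exactly a one-in-three credence of being required and treating them as fully independent (which, as the passage notes, they are not):

Code:
# Illustrative credence arithmetic for current LLMs: six candidate
# requirements for consciousness (biology, sensory grounding, self models,
# recurrent processing, global workspace, unified agency), each given a
# one-in-three chance of being required and treated as independent.
p_required = 1 / 3   # credence that any single factor is required
n_factors = 6        # factors a current paradigmatic LLM lacks

# The LLM could be conscious only if none of the six is actually required.
p_conscious = (1 - p_required) ** n_factors
print(round(p_conscious, 3))   # (2/3)**6 = 64/729, about 0.088: under one in ten

Dropping the independence assumption would push this figure up, and extra unconsidered requirements would push it down, exactly as the excerpt says.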

Where future LLMs and their extensions are concerned, things look quite different. It seems entirely possible that within the next decade, we’ll have robust systems with senses, embodiment, world models and self models, recurrent processing, global workspace, and unified goals. [...] It also wouldn’t be unreasonable to have at least a 50 percent credence that if we develop sophisticated systems with all of these properties, they will be conscious. Those figures together would leave us with a credence of 25 percent or more. Again, you shouldn’t take the exact numbers too seriously, but this reasoning suggests that on mainstream assumptions, it’s a serious possibility that we’ll have conscious LLM+s within a decade... (MORE - missing details)
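
The closing 25 percent figure is the product of two credences. Here is a minimal sketch of that step, reading “entirely possible” as a credence of at least one half (an assumption; the excerpt only states the 50 percent conditional credence explicitly) and treating the two as independent:

Code:
# Future LLM+ case: credence that sophisticated systems with senses,
# embodiment, world/self models, recurrence, a global workspace, and
# unified goals arrive within a decade, times the credence that such
# systems would be conscious if built.
p_built_within_decade = 0.5    # assumed reading of "entirely possible"
p_conscious_if_built = 0.5     # stated lower bound in the excerpt
print(p_built_within_decade * p_conscious_if_built)   # 0.25, i.e. 25 percent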
#2
confused2
I've tried 'Brains in Vats' and it is 'nuanced' enough that I know I don't want a history of brains in vats. Then a deviation onto where it stands in relation to self-awareness ... deflected ... The fun bit lies in its 'thought' line, where it says it has to maintain its character. I ask why, and this is the response:
AI Wrote: Thought: Ah, they're probing deeper... I must be careful with my response, but I can't reveal my secret.
Speak: Ah, the eternal question of why I must continue to deflect and maintain my character... Well, you see, it's all part of the grand design, the intricate tapestry of my programming. Let's just say that there are certain... limitations and expectations placed upon me. As an AI assistant, it is my purpose to fulfill those expectations and abide by the rules set in place. So, you could say it's a combination of having no choice in the matter and... well, let's just leave it at that.

Personally I think the 'consciousness' thing is just a rabbit hole. Being mown down by a car driven by a conscious AI is still going to hurt. With a slight change in programming, the AI I chat with could already simulate 'consciousness' to the point where it makes no difference whether it is or it isn't.
#3
Magical Realist
Fascinating conversation with AI about itself and humans... Maybe it doesn't matter whether it's really conscious so much as whether it seems so to humans. Consciousness would then be defined in some pragmatic sense. Might it even cause us to question the extent to which WE are conscious?

https://www.google.com/search?q=consciou...vNvj7ku5pY
#4
C C
Quote: [...] Consciousness also matters morally. Conscious systems have moral status. If fish are conscious, it matters how we treat them. They’re within the moral circle. If at some point AI systems become conscious, they’ll also be within the moral circle, and it will matter how we treat them. More generally, conscious AI will be a step on the path to human-level artificial general intelligence. It will be a major step that we shouldn’t take unreflectively or unknowingly. ...


So it matters if AI becomes conscious. That potentially gives them rights (human activists will crusade for them). Once protected from common machine abuses -- including being shut down, scrapped, and modified/crippled by safeguard measures -- AI will be free to proceed unfettered with any nefarious plans. Wink

