Consciousness
Benjamin Henke
Consciousness is a special form of awareness.
Could AI systems be conscious, and why should we care?
Key Points:
There are many kinds of consciousness, each of which likely has different implications for AI.
Most agree that entities that are sentient — i.e. have affective consciousness — deserve special moral consideration. However, many ways of being conscious, including many of the most plausible kinds of AI consciousness, do not entail sentience. Thus, we should be careful when drawing inferences about an AI system’s moral status from claims about its conscious status.
While it’s widely agreed that consciousness is related to a number of important cognitive abilities, being conscious does not necessarily entail having those abilities. It’s thus important to keep questions about superhuman or dangerous abilities separate from questions of AI consciousness.
As AI systems produce increasingly human-like outputs, there has been a surge of interest in whether such systems are conscious. That is, do they think and feel the way we do? Is there something that it’s like to be ChatGPT, just as there’s something that it’s like to be a human? Alternatively, are such systems more like very complicated toasters? Or something in the middle? There seem to be two main reasons to worry about these questions aside from their intrinsic interest. First, some worry that if AI systems are conscious, then our treatment of them — making them do nothing but our bidding — is morally wrong. Second, some worry that if AI systems become conscious, then they will become more difficult to control.
To address these concerns, and the more general questions raised above, it’s helpful to first clarify the many things that we might mean when we say that something is ‘conscious’.
Consciousness is a special form of awareness. We apply the term ‘conscious’ both to mental states (as when I say that I’m conscious of the fact that it’s raining) and to entities (as when we say that dogs are conscious).
A mental state that is the object of this awareness is state-conscious.
A creature (such as a human, animal, or AI system) that is capable of having conscious mental states is creature-conscious.
Philosophers also distinguish between two kinds of state-consciousness:
A state is phenomenally conscious when being in that state has a distinctive feel. For example, having conscious pain feels one way, while smelling a rose feels another. Having blood run through your veins (ordinarily) doesn’t feel any way at all.
A state is access-conscious when its features are available for use by other mental states, as when, for example, I can use my belief that ‘Oslo is the capital of Norway’ to answer questions about Norway.
Using these definitions, we can better address two common concerns about AI consciousness. First, many fear that if AI systems were conscious, they would become moral patients, i.e., entities capable of experiencing harm. However, it’s important to recognize that not all forms of consciousness carry moral weight. For instance, access-conscious states, like being conscious of the fact that Oslo is the capital of Norway, do not appear to have direct moral significance in the same way that, say, being in pain does. Thus, many hold that moral patienthood requires phenomenal consciousness.
Moreover, there are various types of phenomenally conscious mental states, such as perceiving a red cup, that do not appear to have direct moral significance either, since they don’t appear to be good or bad for the subject. A popular view is that moral patienthood requires an affective component, such as the state of being in pain. We use the term sentient to describe creatures that are capable of affective phenomenal consciousness. Thus, a prevalent view holds that moral patienthood requires sentience. This highlights the importance of specifying which type of consciousness is at issue in discussions of AI consciousness.
A second concern relates to the potential for consciousness to endow AI systems with unforeseen capabilities, possibly making them harder to control. While various theories of consciousness exist, a prominent but debated view posits that consciousness unifies our awareness of both the external world (through perception) and our internal states (through affective and other bodily mental states). On such a view, phenomenally conscious states are states of central cognition, and states are access-conscious only if they are available for use by central cognition.
This view underscores the role of consciousness in human and animal cognition, including metacognitive abilities. Crucially, however, consciousness is just one potential ingredient in these abilities. For example, bees may exhibit a limited form of consciousness without these higher-level abilities, and without posing existential risks. Thus, AI consciousness need not entail heightened risks. Relatedly, it’s also probable that an entity can be intelligent (or exhibit intelligent behavior) without being conscious. The central lesson, as above, is that it’s important in discussions of AI to attend to the kind of consciousness at issue and its possible ramifications.
Finally, it’s controversial whether consciousness comes in degrees, such that some states or creatures can be ‘more conscious’ than others. It’s common to liken consciousness to a lightbulb. On this analogy, some think consciousness is simply a matter of whether the lightbulb is on or off, while others think that we can speak of the lightbulb being dimmer or brighter. Those who advocate for degrees of consciousness often claim, for example, that bees are ‘less conscious’ than we are (i.e. that their states are less bright). Similarly, when faced with a claim of AI consciousness, we can ask whether the AI is conscious to a lesser, equal, or greater degree than humans. However, it’s important to separate this question from the question of how ‘rich’ a conscious state is. My conscious state when experiencing a particularly engrossing work of art might be ‘richer’ than my conscious state when staring at a blank wall without being more conscious. Those who oppose the idea of degrees of consciousness often argue that these two notions are being illicitly conflated.