Artificial General Intelligence
Anandi Hattiangadi
An Artificial Intelligence (AI) is an artefact, created by a human agent, that is capable of behaviour we would view as intelligent if it were human. An Artificial General Intelligence (AGI) is an AI whose intelligence equals or surpasses human intelligence in both level and generality.
Are AI chatbots AGIs? How can we tell? Why does it matter?
Key Points:
AI chatbots may qualify as AGIs according to some conceptions of AGI but not others. They may also qualify as AGIs to some degree, but not fully.
Different methods of evaluating whether a system is an AGI may be appropriate to different conceptions of AGI.
Our assessment of risks and benefits of developing AGI depends on which conception of AGI we are working with.
It was once thought that if a computer could beat a human at chess, it would possess AGI. But this assumption was called into question in 1997, when IBM’s supercomputer, Deep Blue, beat the then world chess champion, Garry Kasparov. One reason for questioning the assumption was that it was abundantly clear that Deep Blue couldn’t do anything other than play chess, whereas humans are capable of doing many different things. So, Deep Blue was viewed as a narrow AI. Another reason was that Deep Blue did not seem to have any of the general capacities, such as reasoning, grasping abstract concepts, and planning, that seem distinctive of human intelligence, and to which AGI had long been indexed. These two reasons reflect different ways of thinking about AGI: in terms of how many things a system can do, or in terms of whether a system’s intelligence is in some important respects like that of a human.
Defining AGI
Since the field’s inception, the development of AGI has been considered the North Star goal of AI research. However, there is no consensus on how to define AGI. Some define AGI operationally, in terms of directly observable phenomena such as inputs and outputs, while others define it theoretically, in terms of capacities that underlie observable phenomena.
A well-known example of an operational definition is due to Alan Turing, who argued that the question of whether a machine could think could be replaced by the question of whether a machine could perform well enough in a conversational exchange to fool a human judge into thinking it was human. Ned Block criticised this approach by considering Blockhead, a hypothetical system in which sequences of prompts and responses are pre-programmed, so that it can mechanically work through these sequences and engage in sensible conversational exchanges without thinking, or indeed possessing any intelligence at all. This case speaks in favour of defining AGI theoretically.
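To make the Blockhead case concrete, here is a minimal sketch, in Python, of such a lookup-table system; the prompts and responses are invented for illustration. Every exchange is pre-programmed, so the system produces sensible-seeming conversation without anything one could call thought.

```python
# A minimal sketch of a Blockhead-style system: every conversational
# exchange is pre-programmed in a lookup table, so the system can produce
# sensible-seeming responses without any reasoning at all.
# The example prompts and responses here are invented for illustration.

BLOCKHEAD_TABLE = {
    "Hello, how are you?": "I'm well, thank you. How are you?",
    "What is your favourite poem?": "I have always admired 'Ozymandias'.",
    "Can machines think?": "That is a question philosophers still debate.",
}

def blockhead_respond(prompt: str) -> str:
    """Return the pre-programmed response; no inference or learning occurs."""
    return BLOCKHEAD_TABLE.get(prompt, "I'm not sure what to say to that.")

if __name__ == "__main__":
    print(blockhead_respond("Can machines think?"))
```

However large the table grows, nothing in the architecture changes: the system only retrieves, it never reasons.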
Though the Turing Test is too imprecise to be applied to current systems, many operational definitions of AGI are still assumed, if at times only implicitly. For example, AGI is sometimes defined operationally in terms of scope:
A system has wide scope if it is able to produce outputs in response to task-related inputs that are at or above the level of human responses on a wide range of tasks of which humans are capable, from writing poetry to coding. Scope can be assessed by tests designed for humans, such as college entry aptitude tests.
For example, whereas humans are capable of doing many different kinds of things, Deep Blue has a narrow scope, since it is only capable of playing chess.
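To illustrate how an operational definition in terms of scope might be assessed, the following sketch compares a system’s test scores against human baselines across several task domains. The task names, baseline figures, and the 90% coverage threshold are assumptions made purely for illustration, not an established benchmark.

```python
# A hedged sketch of how an operational 'scope' criterion might be scored:
# compare a system's test results to human baselines across many task
# domains. The task names, scores, and the 90% coverage threshold are all
# illustrative assumptions, not part of any standard benchmark.

HUMAN_BASELINES = {"poetry": 0.70, "coding": 0.65, "maths": 0.60, "chess": 0.80}

def has_wide_scope(system_scores: dict[str, float],
                   coverage_threshold: float = 0.9) -> bool:
    """True if the system meets or exceeds the human baseline on at least
    `coverage_threshold` of the tested task domains."""
    met = sum(1 for task, baseline in HUMAN_BASELINES.items()
              if system_scores.get(task, 0.0) >= baseline)
    return met / len(HUMAN_BASELINES) >= coverage_threshold

# A Deep Blue-like profile: superhuman at chess, nothing else -> narrow scope.
print(has_wide_scope({"chess": 0.99}))  # False
```

On this toy criterion, a Deep Blue-like profile fails for the reason given above: superhuman performance in a single domain does not amount to wide scope.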
It is also common to find theoretical definitions of AGI in terms of the capacities that are thought to underlie performance on tasks. For example, AGI is sometimes defined in terms of the possession of a suite of domain general capacities:
Domain general capacities are easily transferred from one context to another, such as from the training environment to novel situations, or from learning to achieve a goal in one way to discovering new ways to achieve it. The set of domain general capacities includes perceptual-spatial processing, quantitative reasoning, commonsense knowledge, creativity, abstract thought, understanding natural language, logical reasoning, learning from experience, and rational decision-making.
Context Matters
Different definitions of AGI are relevant in different contexts. For instance, companies may be most interested in an AI with wide scope, while academics may be most interested in systems with domain general capacities. Moreover, a system may satisfy one definition of AGI but not another. For example, a system that unites a set of narrow AIs in a single device, such as a smartphone, might qualify as an AGI in terms of scope, while lacking any domain general capacities. AGI may also come in degrees, reflecting either the range of tasks a system can perform, or its level of competence in exercising domain general capacities.
What is Intelligence?
The term ‘intelligence’ designates a set of cognitive capacities for acquiring knowledge and putting it to use to achieve goals. Since AGI is often indexed to human intelligence, as opposed to that of other animals, it is often thought to involve cognitive capacities for acquiring knowledge and putting it to use that are distinctively human.
What is Artificial?
What we consider to be artificial forms of intelligence is not as clear-cut as it may seem.
Benchmarking AGI
The ‘shallow’ method of benchmarking AGI is simply to observe a system’s performance on tests designed for humans. This method is well suited to testing whether a system satisfies an operational definition of AGI, but ill suited to testing whether it satisfies a theoretical definition. For instance, a system could have wide scope while lacking any domain general capacities, just as Blockhead can engage in sensible conversation without thinking. To probe theoretical conceptions of AGI, one needs ‘deep’ methods of benchmarking, which discriminate between competing hypotheses regarding the capacities underlying a system’s performance.
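The contrast between the two methods can be pictured with a small sketch, again with invented test items: a shallow benchmark records only a score on a fixed test, while a deep probe re-tests the same items after a systematic perturbation. A Blockhead-style memoriser passes the first and collapses on the second, which is one way of discriminating between the hypothesis that a system has a domain general capacity and the hypothesis that it has merely memorised answers.

```python
# A minimal sketch contrasting 'shallow' and 'deep' benchmarking, under
# illustrative assumptions. The shallow check only records test performance;
# the deep check re-tests the same items after a systematic perturbation,
# since a lookup-table system (like Blockhead) collapses on novel variants
# while a genuinely general capacity should transfer.

from typing import Callable

def shallow_benchmark(system: Callable[[str], str],
                      items: list[tuple[str, str]]) -> float:
    """Fraction of test items answered correctly; says nothing about *how*."""
    return sum(system(q) == a for q, a in items) / len(items)

def deep_probe(system: Callable[[str], str],
               items: list[tuple[str, str]],
               perturb: Callable[[tuple[str, str]], tuple[str, str]]) -> float:
    """Performance on perturbed variants of the same items: a large gap
    between this score and the shallow score suggests memorisation rather
    than a domain general capacity."""
    return shallow_benchmark(system, [perturb(item) for item in items])

if __name__ == "__main__":
    items = [("2 + 3", "5"), ("6 + 1", "7")]
    blockhead = lambda q: dict(items).get(q, "?")           # lookup-only system
    swap = lambda qa: (qa[0].replace("+", "plus"), qa[1])   # same task, new surface form
    print(shallow_benchmark(blockhead, items))              # 1.0
    print(deep_probe(blockhead, items, swap))               # 0.0
```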
AGI Benefits and Risks
The emergence of AGI promises significant benefits to humanity, but also tremendous risks. A system that possesses AGI to a high degree might have the cognitive and communicative capacities of a human, but with the memory and computational power of a supercomputer. Such a system might benefit humans by performing difficult, dangerous, or tedious tasks; advancing science or discovering cures for human diseases; and solving the many problems facing humanity. At the same time, there is a risk that AGIs will begin to autonomously set goals that do not align with human values, or become so powerful that they outcompete humans and ultimately render us extinct.