Reasoning
Nick Shea
Reasoning is a mental process in which a thinker moves step-by-step through a series of thoughts in the service of working out what is the case or what to do.
AI systems produce intelligent outputs, but do they, like humans, engage in reasoning?
Key Points:
Researchers long struggled to design AI systems that can carry out the kind of reasoning people readily perform in natural language.
A recent step-change is the advent of large language models whose outputs look like natural-language reasoning: they match what a person would say when reasoning through a problem step by step (see the sketch after this list).
It is as yet unclear whether the internal computational processes used by large language models and other deep neural networks are anything like human reasoning.
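To illustrate this step-change, here is a minimal sketch of the kind of step-by-step output at issue. The function query_model is a hypothetical placeholder rather than a real API, and its canned reply simply mimics the style of answer a large language model typically produces when prompted to reason step by step.

```python
# Minimal sketch of step-by-step ("chain-of-thought") output.
# `query_model` is a hypothetical stand-in for a call to a real large
# language model; the canned reply illustrates the style of output.

def query_model(prompt: str) -> str:
    # A real implementation would send `prompt` to an LLM and return
    # its generated text.
    return (
        "Step 1: A bat and a ball cost $1.10 together.\n"
        "Step 2: The bat costs $1.00 more than the ball.\n"
        "Step 3: Let the ball cost x; then x + (x + 1.00) = 1.10.\n"
        "Step 4: So 2x = 0.10, giving x = 0.05.\n"
        "Answer: the ball costs 5 cents."
    )

print(query_model(
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost? Think step by step."
))
```

Outputs in this style match what a person would say when reasoning aloud, which is precisely why the question of what happens inside the model is pressing.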
An AI chatbot offers you some advice about what film to see this weekend. But why did it suggest Love at First Swipe? We want to know its reasoning. Did it infer you like sappy rom-coms just because you're female? Whether the computer engaged in reasoning to reach its outputs is of course important to understanding how it works. It is also crucial if we want to assess what it's doing - is it objectionable? does it make sense? - and hold it to account.
When humans reason well, each reasoning step is an inference in which the new thought follows from the meaning of the previous thoughts. Reasoning through a series of steps is typically effortful and reliant on working memory, hence subject to interference by a concurrent task that also draws on working memory. That is, reasoning generates and is subject to cognitive load. Psychologists call this a ‘type 2’ process (sometimes called ‘system 2’).
All reasoning is in some sense logical, but it need not be deductive (where the conclusion is guaranteed to be true if the premises are true). It includes inductive reasoning, inference to the best explanation, and other forms of commonsense reasoning, where the conclusion is only made more probable by the premises.
Some issues that arise:
1. Outputs vs. internal processes
Although the outputs of some advanced large language models display (or at least approximate) reasoning, this does not imply that the way information is processed within the AI model also constitutes reasoning. An active area of current research investigates the internal processes responsible for output behaviour. One important question is whether the inferences performed by a model internally amount to reasoning.
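One family of methods used to investigate these internal processes trains a simple classifier (a 'probe') to decode information from a model's hidden activations. The sketch below is illustrative only: the activations and labels are random stand-ins rather than recordings from a trained network, and scikit-learn supplies the classifier.

```python
# Minimal sketch of a linear "probe" on hidden activations.
# The data here are random stand-ins; in real interpretability work the
# activations would be recorded from a trained network and the labels
# would mark some property of the inputs (e.g. a premise being true).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
hidden = rng.normal(size=(200, 64))       # stand-in hidden-state vectors
labels = (hidden[:, 0] > 0).astype(int)   # stand-in property to decode

# Train on the first 150 examples, test on the held-out 50.
probe = LogisticRegression().fit(hidden[:150], labels[:150])
print("held-out probe accuracy:", probe.score(hidden[150:], labels[150:]))
```

High probe accuracy shows that the information is linearly decodable from the activations; it does not by itself show that the model uses that information in anything like a reasoning process.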
2. Type of representations involved
Many psychological and computational processes make transitions between representations on the basis of their meanings. This is true of the way information is processed in early visual areas of the brain, for instance. It is also true of the way distributed representations are processed in deep neural networks (DNNs), the currently dominant form of AI. While all these transitions might be described as inferential, the category of reasoning is usually considered to be restricted to representations that are constructed out of concepts. Conceptual representations have the same kind of structure as natural language sentences and can support inferences that depend on that structure, for example: from ‘all diamonds are durable’ and ‘that is a diamond’ to ‘that is durable’. It is an open question whether representations in DNNs have meaningful structure and, if so, whether they have the kind of structure on which reasoning can be performed.
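The diamond inference can be written out in a formal system, making explicit how the conclusion follows from the structure of the premises alone. Below is a minimal sketch in Lean 4; the type and predicate names are illustrative placeholders for the example above.

```lean
-- From "all diamonds are durable" and "that is a diamond" to
-- "that is durable": the inference depends only on the logical
-- structure of the premises, not on what diamonds are.
variable (Thing : Type) (Diamond Durable : Thing → Prop)

example (allDurable : ∀ x, Diamond x → Durable x)
    (d : Thing) (hDiamond : Diamond d) : Durable d :=
  allDurable d hDiamond
```

Whether DNN representations support structure-sensitive transitions of this kind is exactly what remains open.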
3. Norms
Norms apply to human reasoning: in probabilistic reasoning, for example, the norm is that the conclusion should be likely to be true if the premises are true (see rationality). Everyday reasoning exhibits biases and other systematic errors, and deploys heuristics that allow us to approximate normative standards but that also fail in certain contexts. A question for AI systems is whether normative standards apply to their putative reasoning processes. For example, is it problematic if an AI system uses error-prone heuristics? If so, where do these norms come from; for instance, do they always derive from the intentions of the human designer or user?
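A concrete instance of such a norm is Bayes' rule, which fixes how probable a conclusion should be given the premises and the evidence; a heuristic that neglects base rates violates it. The numbers in the sketch below are purely illustrative.

```python
# Normative probabilistic reasoning via Bayes' rule, contrasted with a
# base-rate-neglect heuristic. All numbers are illustrative.
prior = 0.01            # P(hypothesis): the base rate
p_e_given_h = 0.90      # P(evidence | hypothesis)
p_e_given_not_h = 0.10  # P(evidence | not hypothesis)

p_evidence = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
posterior = prior * p_e_given_h / p_evidence
print(f"normative posterior: {posterior:.3f}")  # ~0.083

# Base-rate neglect: answering with P(evidence | hypothesis) alone.
heuristic = p_e_given_h
print(f"heuristic answer:    {heuristic:.3f}")  # 0.900, far too high
```

The heuristic's answer overstates the normative posterior by an order of magnitude because it ignores how rare the hypothesis is; this is the kind of systematic deviation that normative standards make visible.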
4. Functional role
While cognitive science has tended to focus on reasoning as a tool for an individual to draw new conclusions, a recent influential line of thought argues that the capacity for reasoning evolved in a social context, as a way of persuading others or checking what they are telling us. Thus, it is important for AI designers to consider the role of reasoning in interactions. A central case is where a human user interacts with an AI system and asks it to explain and justify the outputs it produces.