Representation

Alex Grzankowski

A representation is something that stands for another thing. 

Did the AI system consider my gender or race when reaching this output?

Key Points:

  • To interpret, in human-understandable terms, how an AI system reaches an output, it is essential to determine what the system represents as it solves a problem.

  • A representation is something, such as a word, diagram, or model, that stands for something else and is used by a system to complete a task because of what it stands for.

  • Representation should not be conflated with mere indication. For example, while the rings of a tree indicate its age, they do not represent it in the sense of 'representation' typically used in the mind sciences: the tree does not use the rings for any goals or tasks.

We think of AI systems such as chatbots as working through a request to reach a useful output – we might even say the system is “thinking through” the request. To do this, a system must be able to represent information and manipulate it. But what is it that AI systems represent? Do they represent anything at all, or do they simply manipulate numerical values?

A representation is something that stands for something else. Familiar representations include words and sentences, sculptures and photographs, and maps and diagrams. Representations are typically isolatable entities such as ink on a page, a string of 1s and 0s, or a hunk of shaped clay, but complex states and events (e.g. a distributed state of a brain or a neural net) can be representations as well.

In AI research, understanding what information a system represents is crucial for transparency and alignment. For example, in reaching a decision on parole, we want to ensure that a system is taking into consideration the information we want it to and not irrelevant or problematic information: we might want it to set aside race while paying close attention to recidivism. More generally, part of explaining what an AI system is doing, and knowing why it is doing it, turns on knowing what (if anything) the system is representing when reaching an output.

A second reason representation is important for AI research is that it is fundamental to discussions about, and theories of, learning, intelligence, and cognition. Genuine AGI requires an intelligent, thinking machine, and it is commonplace to unpack such properties in terms of representations and their manipulation.

‘Representation’ should not be used too loosely. If it is, we run the risk of attributing capacities to systems too easily. Part of the reason for this is that it is easy to run together representation and indication. The rings of a tree co-vary with the tree’s age and so indicate the age of the tree, but the rings do not represent the age of the tree in the sense of ‘represents’ typically used in the mind sciences. The fact that the rings co-vary with age is not used by the tree to achieve any goals or complete any tasks, and so we should not say that the tree, by virtue of having such and such a number of rings, represents its age. We, sophisticated thinkers, might use the rings to represent age or time, but our ability to use the rings to track age doesn’t mean the tree represents time or age.

In AI, we must differentiate between what we, as users, can read off of a system and what information the system itself actually uses. For instance, the vast data a modern large language model (LLM) is trained on might include complex grammar rules, but the model might or might not use those rules when completing a task.

Discerning what information and which algorithms a system is actually using is the sort of work one finds in human and animal Cognitive Science, as well as in “naturalistic” approaches to the mind in Philosophy. Recent work inspired by these approaches aims to interpret AI systems using similar methods. Exactly how to apply these methods from Cognitive Science and Philosophy is not yet widely agreed upon amongst AI researchers, but it is an active area of research under the umbrella of “AI Interpretability”.
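One common tool in this line of research is the “probing classifier”: a simple model trained to predict some property of interest from a system’s internal activations. The sketch below is a minimal, hypothetical illustration in which random arrays stand in for real activations; it is not drawn from any particular system or study.

```python
# A minimal sketch of a probing experiment, using synthetic stand-in data in
# place of real model activations. The array shapes and the property being
# probed are hypothetical examples.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend these are hidden-layer activations collected from a model on 1,000
# inputs, along with a binary label for some property of those inputs
# (e.g. whether the input sentence is in the past tense).
activations = rng.normal(size=(1000, 256))
property_labels = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    activations, property_labels, test_size=0.2, random_state=0
)

# Train a simple linear "probe" to predict the property from the activations.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Accuracy well above chance would show the property is decodable from the
# activations (indication); further, intervention-style experiments would be
# needed to show the system actually uses that information (representation).
print("Probe accuracy on held-out data:", probe.score(X_test, y_test))
```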

As a final point of clarification and caution, the properties of representations should be distinguished from the properties they represent. A model (i.e. a complex representation) of a storm is not typically wet or windy but can be used to inform us about wetness and windiness. Likewise, a model of a human brain or a brain process might inform us about thinking and cognition without itself implementing or instantiating thought or any other cognitive processes – a model of thinking and reasoning might not itself think or reason. 
