Common-Sense Psychology

Carson Johnston

Common-sense psychology is the implicit framework people use to interpret behaviour by appealing to mental states, such as beliefs, desires, and intentions, in order to explain and predict the actions of ourselves, others, animals, and, more recently, AI systems.

Can a chatbot have beliefs, desires, and intentions?

Key Points:

  • Common-sense psychology is an intuitive framework that helps us make sense of human, animal, and, increasingly, AI behaviour by appealing to mental states like beliefs, desires, and intentions.

  • Researchers are exploring whether AI systems have, or could develop, an implicit common-sense psychology and mental states akin to those of humans, including whether they can predict human behaviour, or navigate interactions with other agents, by appealing to a similar process.

  • Applying common-sense psychology to AI is a double-edged sword. AI systems equipped with a common-sense psychology might be safer, more reliable, and easier to interact with, but treating them as minded might mislead us about their capabilities, creating a false sense of trust and muddying their status as entities deserving moral consideration.

The Standard View

Common-sense psychology, often called "folk psychology," is the implicit framework people use to interpret others' behaviour and mental states, such as beliefs, desires, and intentions (see intention). For human-to-human interactions, this framework encompasses the intuitive, everyday assumptions we hold about how mental states drive observable behaviours. It is both self-reflective, concerning one's own mental processes, and other-reflective, concerning the inferred mental processes of others. At its core, common-sense psychology enables humans to understand behaviour by presuming underlying mental states as causes. For instance, we might infer sadness from a frown, or interpret someone grabbing a coat as signalling an intention to keep warm outside.

Beyond its application to humans, the concepts of common-sense psychology are often applied to entities that display human-like behaviours, like certain primate species, and increasingly AI systems. When interacting with conversational AI, such as ChatGPT or Claude, people frequently attribute mental states—such as beliefs, desires, or intentions—to these systems, even when aware that these machines might lack genuine mental states.

This tendency arises because we perceive machine outputs as having internal causes, such as mental states. While some see this as merely metaphorical, it can represent a genuine attempt to understand how a machine's internal processes shape its actions. However, perspectives vary significantly: some view mental state attributions to machines in a realist spirit, implying that the machine possesses mental states akin to those of humans, while others adopt a “fictionalist” approach, suggesting that attributing mental states serves a narrative, pragmatic, or explanatory function without presuming they truly exist in the machine.

Mind-Reading & Mental Models

Whether applied to humans, animals, or AI, the interpretive process above aligns with what philosophers call "mind-reading"—the act of observing behaviour, such as body language or speech, and inferring internal mental states from these cues. Mind-reading abilities, argued to begin in early childhood, develop as mechanisms for navigating social interactions, helping individuals predict others' behaviour by hypothesising about their mental states. Philosophers suggest that in doing so, we construct a mental model, or representation, of another’s mental states, enabling us to handle a range of complex social interactions. This capacity to model others' mental states is often referred to as "Theory of Mind" in the philosophy of mind.

When we extend Theory of Mind to AI, we may assume that machines harbour something analogous to mental states that influence predictable behaviours, allowing us to "mind-read" these processes based on observable cues like generated text. However, some challenge this extension, noting pragmatic problems with attributing mental states to AI systems: doing so risks anthropomorphizing them. Treating machines as if they had human-like minds can distort interpretations of AI behaviour, overstate their capabilities, and foster excessive reliance or unwarranted trust. Careful consideration is therefore essential to avoid misinterpretation and to support transparent, reliable interactions with AI.

Extending the Reach of Common-Sense Psychology to AI

Can AI systems read our minds, in the sense of "mind-reading" above? Some argue that embedding an analogous form of common-sense psychology within AI systems, enabling them to "mind-read" and construct mental models of users, could enhance human-machine interaction, multi-agent systems, and machine interpretability. Such capacities could improve decision-making, prediction, flexible cooperation, and value alignment, which would support safer and more intuitive AI applications.
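To make this idea concrete, here is a minimal sketch, in Python, of what such a user-modelling loop might look like: the system observes a cue (an utterance), updates an explicit model of the user's beliefs and desires, and reads a behavioural prediction off that model. The class and rule names (UserModel, observe, predict_action) are hypothetical illustrations, not drawn from any existing framework, and a real system would infer mental states with learned models rather than hand-written rules.

```python
from dataclasses import dataclass, field


@dataclass
class UserModel:
    """Hypothetical belief-desire model of a user (illustrative only)."""
    beliefs: dict = field(default_factory=dict)  # what the user is inferred to take as true
    desires: set = field(default_factory=set)    # goals the user is inferred to be pursuing

    def observe(self, utterance: str) -> None:
        """Map an observable cue (here, text) to hypothesised mental states."""
        text = utterance.lower().strip(" ?!.")
        if "where is" in text:
            # Asking for a location suggests the user lacks that belief
            # and desires to acquire it.
            topic = text.split("where is", 1)[1].strip()
            self.beliefs[("knows_location", topic)] = False
            self.desires.add(("learn_location", topic))

    def predict_action(self) -> str:
        """Predict behaviour by appealing to the modelled beliefs and desires."""
        for kind, topic in self.desires:
            if kind == "learn_location" and not self.beliefs.get(("knows_location", topic), True):
                return f"user will keep looking for {topic}"
        return "no confident prediction"


model = UserModel()
model.observe("Where is the train station?")
print(model.predict_action())  # -> user will keep looking for the train station
```

The point of the sketch is its structure rather than its toy rules: observable behaviour is mapped to hypothesised mental states, and predictions are derived from those states, mirroring the observe-infer-predict pattern of human mind-reading described above.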

Researchers argue that if AI systems could model both their users' and their own behaviours in terms of mental states, they might better explain their actions in ways that align with human expectations. However, the applicability of this concept to AI is debatable: it is not clear that AI systems will be capable of self-representation, and even if they could develop self-representational abilities, it is uncertain whether the cognitive mechanisms in AI align closely enough with human psychology to enable effective behavioural predictions on their part.

In summary, applying common-sense psychology to AI presents both opportunities and risks. Embedding an implicit common-sense psychology could enhance AI’s interpretability and human-machine interaction, making systems more user-friendly and transparent. Such developments may also bring us closer to a form of artificial general intelligence. By equipping AI systems with some sort of common-sense psychology or self-reflective capabilities, AI practitioners might be able to train AI systems to meta-learn (learn how to learn), which could improve their performance on common-sense tasks that machines have historically struggled with. However, as noted above, common-sense-psychology-inspired models could misrepresent AI’s actual abilities, fostering unrealistic expectations and raising ethical concerns about AI’s treatment and governance. Balancing the practical benefits of common-sense psychology with the risks of anthropomorphism and misrepresentation will be crucial as we consider how best to design and understand advanced AI systems.
