Agency
Fabio Tollon & Anandi Hattiangadi
Agency is the capacity to perform actions, where an action is something that one does, rather than something that merely happens.
Are AI systems agents, and why does it matter?
Key Points:
Agency can manifest at different levels, from ‘full-blown’ to minimal. More cognitively demanding types of agency come with more associated responsibilities.
AI systems would require advanced cognitive abilities to be full-blown agents, but would only require goal-directed behaviour to be minimal agents.
It might sometimes make sense to conceive of AI systems as ‘ersatz’ agents if it helps us predict their behaviour.
While minimal agents cannot be morally responsible for their actions, they might still be deserving of certain rights, such as protection of their well-being.
Actions are different from happenings. A rock rolling down a hill is something that happens, but it is not an action. We say things like “it happened for a reason”, but strictly speaking, happenings have nothing to do with reasons. You throwing a rock, however, is an action. It is something you do for a reason.
Recently, sustained attention has been given to the question of whether complex artefacts (such as AI systems) might be agents. One compelling and intuitive reason to think that these artefacts are not agents is that, no matter how complex they might be, they lack mental states like beliefs and desires, which are thought to be necessary to represent an agent’s reasons. However, it is not clear that all forms of agency require such sophisticated mental states. Perhaps simpler representations of reasons suffice for a simpler form of agency, such as that displayed by lizards, bees, or even amoebae. Indeed, in some cases, it matters more whether we treat a system as an agent than whether it really has the internal mental states that agency requires. Since agency can manifest itself in different ways, getting clear on what kind of agency we are talking about, and why it matters, can help us better assess the potential agential powers of AI systems. When thinking about AI, then, it is important to keep in mind which sense of agency we are interested in, and the requirements associated with each. Some useful distinctions include:
Full-blown Agency
The paradigmatic full-blown agent is a typical adult human who is capable of doing things for reasons: the beliefs, desires, and intentions that cause the agent’s behaviour and make sense of it from her own perspective. For example, suppose I open the fridge because I want a drink, and I believe that if I open the fridge, I will get a drink. In this case, the desire (for a drink) and the belief (that if I open the fridge, I will get a drink) both cause me to open the fridge, and explain what made opening the fridge seem like a sensible thing to do, from my perspective. If an AI system is a full-blown agent, it must have sophisticated cognitive states, since beliefs, desires, and intentions are abstract representations of possible future events (my opening the fridge), and they have a logical structure, which allows them to figure in reasoning (I want a drink; if I open the fridge, I will get a drink; so, I will open the fridge). A full-blown agent may be held responsible for her actions on the basis of her reasons, which are often relevant to the moral or legal status of those actions. For example, whether an act constitutes incitement to violence may depend on the intentions of the agent who performed it. To determine whether an AI system is a full-blown agent, one must determine whether it is capable of having beliefs, desires, or intentions, on the basis of which it can be held responsible for its actions.
Minimal Agency
One might think that at least some non-human animals and prelinguistic infants are agents, even though they may not be capable of having beliefs, desires, or intentions. For example, a caterpillar eating a leaf is clearly doing something, since it seems to be engaged in goal-directed, purposeful behaviour. Still, one might think that a caterpillar is too simple an organism to have beliefs, desires, or intentions. Some have suggested that such non-human animals and prelinguistic infants have minimal agency, which requires representational capacities of some kind, such as perception and memory, though not the sophisticated states required for full-blown agency. Though minimal agents may not be held responsible for their actions, they may nonetheless be bearers of well-being or rights, in virtue of having goals that may or may not be met. To determine whether an AI system is a minimal agent, one needs to determine whether it is capable of goal-directed behaviour. If it is, we may have to give it moral consideration or rights.
Agency and Moral Agency
Agency should also be distinguished from moral agency. While all moral agents are agents, not all agents are moral agents. Only full-blown agents can be moral agents. This is because only acts that are freely chosen by an agent are subject to moral evaluation; and because moral deliberation involves weighing reasons for and against action, one can only freely choose to do something if one represents one’s reasons explicitly in the form of beliefs, desires, or intentions. If a tiger kills a deer, its action is not subject to moral evaluation: though the tiger engaged in goal-directed behaviour, it did not freely choose to kill the deer. In contrast, if I decide to steal, my act will be subject to moral evaluation, because I chose to steal on the basis of some reason or other. Furthermore, the reasons for which a moral agent acts may be relevant to the moral evaluation of the action. For instance, my stealing may be wrong if I stole out of greed, but it may be permissible if I stole to prevent a nuclear war. If an AI system qualifies as a moral agent, it is capable of freely choosing to act for reasons, and can thus be held morally responsible for its actions, as well as liable to moral sanction.
Predictive or Ersatz Agency
Sometimes, we treat an entity as an agent because doing so puts us in a better position to predict its behaviour. This approach does not seek to explain what is actually happening in the mind of the entity. Rather, if we can reliably predict the behaviour of the entity by treating it as if it had a mind, then we have an instance of predictive agency. For example, even if a Roomba (an automated vacuum cleaner) lacks minimal agency, we can predict its behaviour by assigning ‘beliefs’ and ‘desires’ to it. It can be easier to understand a Roomba’s behaviour by saying that it ‘wants’ to get behind the sofa, or that it ‘believes’ the door is open. Importantly, this makes no claim about whether the entity is in fact a full-blown agent.
AI Affecting Our Agency vs AI Agency
Thus far, this entry has focused on the question of whether AI systems are themselves agents. Another important question is whether and how AI systems impact human agency. For example, we might ask to what extent medical diagnostic tools support, supplement, or replace the agency of doctors, or how social media use can increase or decrease human agency.
Collective/Group Agency
While this entry has focused on individual agency, we can also conceive of agency in collective terms. Entities such as corporations might be considered ‘collective agents’, and may be held responsible as collectives. It may be that collective agency is a better model for AI systems such as chatbots, which may develop in distinct ways on different devices.
To decide whether an AI system counts as an agent, it is important to first decide whether we are interested in full-blown agency, minimal agency, or merely predictive agency. Whether an AI system qualifies as an agent in any of these senses matters, since full-blown agents can be held responsible for their actions, while minimal agents cannot; and though minimal agents may qualify for moral consideration, ersatz agents are neither responsible for their actions nor deserving of moral status. What kind of agency an AI system actually possesses is relevant to a range of issues arising when humans and AIs interact. For instance, when AIs assist humans in making decisions, the sort of agency an AI system has will have a bearing on how we should apportion responsibility for the outcomes of those decisions. Moreover, if we are mistaken about what sort of agency AI systems possess, this could have grave consequences. For instance, if we think an AI system is merely an ersatz agent when it is in fact a full-blown agent, we run the risk that it will have desires or values that conflict with ours. Conversely, if we think that an AI system is a full-blown agent when it is not, we run the risk of giving it too much responsibility for decision-making. Finally, what kind of agency an AI possesses might have a bearing on how these systems diminish or support human agency, and on how agency might evolve as AIs become increasingly integrated into human collectives or groups.