Moral Responsibility
Ben Steyn and Fabio Tollon
Someone, or something, is morally responsible for their actions if they have sufficient knowledge of, and control over, their actions and potential outcomes, and might reasonably be the subject of, for instance, praise or blame for those actions.
Are AI systems morally responsible for their actions?
Key Points:
When discussing ‘(un)ethical / (im)moral AI’, it is helpful to distinguish between an AI that is inscribed with morally relevant instructions by its human developers and ‘an AI doing its own moral reasoning’ in some deeper sense.
One can question whether AI has the relevant faculties to engage in the deeper reflection required for it to be ascribed ‘full moral responsibility’.
Moral responsibility is distinct from legal responsibility, which is a duty to act in a certain way either imposed by the law or created by agreement. Legal responsibility for AI can be restricted to a well-specified set of parties so that it can be enforced, but moral responsibility may be more diffuse among a wide variety of actors, human and non-human.
AI systems seem poised to perform actions in the world that have moral consequences. Consider an AI-enabled nursing robot administering life-saving treatment to patients; driverless cars making snap decisions on whether to turn onto a busy pavement to avoid a hazard in the street; or autonomous bomb disposal robots deciding which of two bombs to defuse first when many casualties are at risk at once.
Morality is the system of values and principles that determines whether certain courses of action are right or wrong. Morality chiefly concerns the actions of those who are morally responsible. Someone, or something, is morally responsible for their actions if they have the capacity to exercise sufficient knowledge and control over their actions and potential outcomes, and might reasonably be the subject of, for instance, praise or blame for those actions. Moral responsibility is typically ascribed to human adults making decisions under normal conditions (e.g. when not cognitively impaired, under duress, or subject to manipulation), and to groups of humans acting collectively, such as in businesses or governments.
Importantly, this kind of responsibility is distinct from legal responsibility, a duty to act in a certain way either imposed by the law or created by agreement. Legal responsibility will typically be restricted to a well-specified set of parties, such that laws and contracts can be enforced in a practical way. But moral responsibility might be argued to be more diffuse among a wide variety of actors. In the case of AI, we may want to hold AI companies legally responsible, while acknowledging that moral responsibility is spread more widely.
AI systems are increasingly capable of performing morally relevant tasks independently of human oversight and control. The question, then, is who is responsible? Options include: AI system designers, end users, producers of training data, and/or other actors in the social systems that govern the use of AI. In determining where degrees of responsibility ought to fall, some important ideas and distinctions include:
The ‘problem of many hands’
In the modern technological world, it is often hard to attribute moral responsibility to a single actor. Consider that a plane crash might be the result of multiple failures in process, design, engineering, and policymaking. In the case of AI, the problem is amplified. An AI model might be trained on millions of human-written articles. To what extent are the writers of those articles responsible when the model’s outputs are harmful or otherwise objectionable?
Combinations of AI and Human Moral Reasoning
To be held morally responsible, an AI must be capable of exercising moral judgement. That could mean one of the following four scenarios, each more demanding than the last:
1. that an AI is inscribed with morally relevant instructions by its human developers to deal with specific cases: for instance, a driverless car is coded with a decision procedure that engages the brakes upon spotting a child suddenly running into the road, or a hospital care robot will withhold administering a drug if the patient’s heart rate is above a certain level (a minimal code sketch of this kind of inscription follows this list).
2. that an AI autonomously calculates the right course of action in specific cases, having been inscribed by human developers with a more abstract set of moral principles (e.g. ‘minimise harm to human life’, or ‘do not defy the explicit wishes of a patient at the hospital’).
3. that the AI appears to reason ‘like a human does’: it acts on the basis of a moral stance not explicitly encoded in it by humans, but derived autonomously through its understanding and experience of the world and human society. For instance, it comes to learn of its own accord, with no explicit instruction, that it is considered wrong to lie.
4. that an AI acts in the world on the basis of its own superior moral judgement, which may lead to moral behaviour not easily understood by humans.
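To make scenario 1 concrete, the following is a minimal, purely illustrative sketch in Python of what ‘inscribing a morally relevant instruction’ might look like, using the hospital care robot example from the list above. The function name, the threshold value, and the rule itself are invented for illustration and are not drawn from any real clinical system.

```python
# Illustrative sketch only: the threshold and rule below are invented for
# this example and do not describe any real medical device or guideline.

MAX_SAFE_HEART_RATE_BPM = 120  # morally relevant limit chosen in advance by human developers


def should_administer_drug(heart_rate_bpm: int) -> bool:
    """Scenario 1: the robot merely executes a rule its developers inscribed.

    The moral judgement ('withhold the drug when the heart rate is too high')
    was made beforehand by humans; the system weighs nothing for itself.
    """
    return heart_rate_bpm <= MAX_SAFE_HEART_RATE_BPM


if __name__ == "__main__":
    for reading in (85, 140):
        decision = "administer" if should_administer_drug(reading) else "withhold"
        print(f"Heart rate {reading} bpm -> {decision} drug")
```

On this picture, the inscribed rule is no different in kind from a thermostat’s setpoint: the morally relevant choice (where to put the threshold) was made by the developers, not by the system that executes it.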
Phrases like ‘ethical/moral AI’ are sometimes used to refer to the narrow inscription activity described in scenario 1. But these don’t seem to be cases where AI is exercising its own moral judgement. Consider that we also inscribe many more rudimentary technologies with morally relevant instructions: a bomb is set to explode, an x-ray machine is set to produce life-saving images, hospital triage software might follow an automated logic flow. But we wouldn’t say of the bomb, the x-ray machine, or the triage software that they are themselves morally responsible! In all these cases, and by analogy in the AI case, we would typically hold the human creators mostly responsible. Meanwhile, scenarios 2, 3, and 4 involve some amount of an AI doing its own deeper moral reasoning, and so the attribution of moral responsibility becomes progressively murkier and more distributed.
AI possessing/lacking relevant cognitive capacities
There is disagreement over whether AI systems can possess the relevant capacities for making moral judgements, as in scenarios 2-4 above. Can an AI be held responsible for decisions involving suffering if, for instance, it cannot grasp or feel what suffering is? Capacities argued to be relevant here include understanding, intentionality, common sense, emotions, and empathy. Conversely, humans’ moral judgement could arguably be compromised by things like greed or jealousy, so perhaps the very lack of these capacities could work in AI’s favour.
AI lacking free will
For centuries, philosophers have debated the relationship between free will, determinism, and moral responsibility in humans. Some doubt that a person can bear moral responsibility when they don’t have free will, understood as the true ability to make a choice. In the case of AI, there might be an additional reason to doubt the capacity for free will when the system has been coded to function according to a set of deterministic rules, although some have argued that human behaviour is equally deterministic, and others maintain that there is moral responsibility irrespective of free will.
Knowledge and Control
For someone (or something) to be appropriately held morally responsible, it is traditionally thought that two necessary conditions need to be met: the knowledge condition and the control condition. For an agent to satisfy the knowledge condition, they should be capable of knowing what they are doing and of grasping its moral significance. For an agent to have control, roughly, they must have the capacity to choose to act in the way that they do. When attributing moral responsibility in cases involving AI, then, we might ask how AI systems interact with these two conditions. For example, can a human be held responsible for an AI’s behaviour when it is operating outside of human oversight (the control condition), or when it is making decisions on the basis of information unknown to humans (the knowledge condition)? We might also ask how AI fares with respect to other accounts of moral responsibility, such as those that emphasise the role of the emotions (e.g. evaluating the good or ill will of one agent towards another).