Bias

Jackie Kay

A bias is a systematic inclination in a particular (possibly unwanted) direction.

Are AI systems biased?

Key Points:

  • AI systems, being statistical models, are susceptible to biases introduced by non-representative data and societal influences, leading to systematic prediction errors.

  • AI can exhibit cognitive biases different from human heuristics, and human developers may unintentionally embed their own implicit biases into AI systems, resulting in unintended prejudiced behaviors.

  • Variables used as proxies for sensitive characteristics, like household income for race, can cause AI systems to develop hidden biases, leading to discriminatory outcomes despite privacy and fairness regulations.

Different disciplines have different definitions of the term ‘bias’ (for instance, in neural network architecture ‘bias’ refers to the learnable parameters added to an intermediate value before applying the activation function), but in the following we consider the notion in relation to AI. 
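To make that architectural sense of the word concrete, here is a minimal sketch of a single neural network unit; the inputs, weights, and activation function are illustrative choices, not anything prescribed above.

```python
import math

def unit(x, weights, bias):
    # The 'bias' here is simply a learnable offset added to the weighted sum
    # of the inputs before the activation function is applied.
    pre_activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    return math.tanh(pre_activation)  # tanh chosen purely for illustration

print(unit([0.5, -1.0], weights=[0.8, 0.3], bias=0.1))
```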

Statistical bias is a systematic prediction error that may be introduced by non-representative data selection or by quirks of the chosen mathematical model. AI systems are statistical models, and are therefore susceptible to statistical bias. While statistical bias may result from the model’s technical design, the information produced by our society and culture is also a source of statistical bias for AI. Biases may proliferate in the AI’s training data, or in the labels produced by data workers for supervised training.
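As a toy illustration of statistical bias arising from non-representative data selection (the subgroups and numbers are invented for this sketch), consider estimating a population average from a sample that over-represents one subgroup:

```python
# Minimal sketch: systematic prediction error from non-representative sampling.
# All quantities are hypothetical.
import random

random.seed(0)

# Hypothetical population: two subgroups with different mean heights (cm).
population = [random.gauss(178, 7) for _ in range(5000)] + \
             [random.gauss(164, 7) for _ in range(5000)]

# Non-representative sample: 90% drawn from the first subgroup.
sample = random.sample(population[:5000], 900) + random.sample(population[5000:], 100)

true_mean = sum(population) / len(population)
estimate = sum(sample) / len(sample)

# The estimate is systematically off in one direction, however often the
# sampling is repeated: a statistical bias, not random noise.
print(f"population mean: {true_mean:.1f} cm, biased estimate: {estimate:.1f} cm")
```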

Psychologists and cognitive scientists study cognitive biases: habitual patterns in our reasoning that often result from the heuristics we use to streamline our thinking. While AI may sometimes imitate our cognitive biases, neural networks often form completely different heuristics for reasoning, and therefore AI may also exhibit cognitive biases that are wildly different from ours.

On the other hand, one should also be vigilant about human cognitive biases when reasoning about AI, such as anthropomorphism or black-box thinking: the tendency to ignore the inner workings of a system and accept its behavior at face value.

In humans and machines, where there is reasoning, there is probably cognitive bias. Bias is not an inherently bad thing (in fact, it might be the only thing). However, many biases are empirically, and often morally, problematic and can lead to negative consequences when acted upon. For example, social biases, also known as prejudices, can contribute to injustice and unfairness. The study of prejudice often focuses on groups that are historically underprivileged and marginalized, and it is generally agreed that equity and justice for these groups is morally relevant. One important risk of bias in AI is that these groups may be underrepresented in the data and, moreover, are often the targets of human cognitive bias. AI can hence exhibit prejudice, caused by training on data that reflects historical inequities, or by prompting the system to output socially biased content. This can lead to the algorithmic amplification of social bias, in which AI makes biased decisions or portrays prejudiced content that systematically harms certain groups.

Another concept from psychology is implicit bias, in which a person unintentionally acts on their prejudices. This may lead to a dissonance between their good intentions and the negative consequences of their actions, which can reinforce unfair social structures. Humans bring their own implicit social biases to developing, testing, and interacting with AI. Designers may not even be aware of how their biases influence the design of the system itself. When a new system is built or a new result is shown, it is important to reflect on how we might be viewing this artifact through a prejudiced lens.

Another common entry point for bias in an AI system is the use of proxies: variables that "stand in" for unobservable or immeasurable properties. Proxies can play an important role when data is hard or even impossible to collect, but they can also allow biases to creep, purposefully or accidentally, into AI systems. For example, household income might correlate with race due to systematic income inequality. If household income is then used by an AI system in decision making, say by an automated loan provider, there is a risk that the system will develop a hidden racial bias, as sketched below. When thinking about AI, it is important to consider which observable information could serve as a proxy for sensitive characteristics, and the consequences of the system's response to these proxies.
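A minimal sketch of that loan example, with entirely hypothetical data and thresholds: the decision rule never sees the sensitive attribute, yet its outcomes still differ across groups because income stands in for it.

```python
# Hypothetical illustration of a proxy variable smuggling a sensitive
# characteristic into a model's decisions. The "model" is a bare income
# threshold; group membership never appears as a feature.
import random

random.seed(1)

def make_applicant(group):
    # Assumption for illustration: structural income inequality means group B
    # has a lower average household income than group A.
    mean_income = 60_000 if group == "A" else 45_000
    return {"group": group, "income": random.gauss(mean_income, 10_000)}

applicants = [make_applicant("A") for _ in range(1000)] + \
             [make_applicant("B") for _ in range(1000)]

# The decision rule only looks at income, never at group membership...
def approve_loan(applicant):
    return applicant["income"] > 50_000

# ...yet approval rates still differ sharply by group, because income acts as
# a proxy for it. The bias is hidden in the feature, not written in the rule.
for group in ("A", "B"):
    members = [a for a in applicants if a["group"] == group]
    rate = sum(approve_loan(a) for a in members) / len(members)
    print(f"group {group}: approval rate {rate:.0%}")
```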
