Artificiality
Ben Steyn
An artificial object is an object created by an agent, typically a human. Artificial objects are to be contrasted with natural objects, which just are those objects not created by agents.
What is ‘artificial’ about artificial intelligence? Is a person with a neural AI implant a type of artificial intelligence?
Key Points:
All objects and processes lie somewhere on a continuum from natural to artificial.
Where we apply the label of artificial is vague.
Traditional definitions of artifice imply human artificers, but in the age of AI, we might have to adjust the definition to include the possibility of non-human artificers.
We all have an intuitive sense of what is and isn’t artificial. Artificial intelligence involves hardware and software, code, computers, algorithms and the like, while natural intelligence involves ‘wetware’: brains, nervous systems and so on. But as advances in human/AI interfaces and biotechnology accumulate, a categorisation problem emerges: it becomes vague where the term ‘artificial’ ought to apply. As the term ‘artificial intelligence’ becomes legally and conventionally operative, we will want to know which combinations of biology and technology fall within the scope of its rules and norms.
Consider the following cases, and for each, ask “is this an instance of artificial intelligence?”:
1. A human architect relies on pen and paper as an aide-mémoire.
2. A human architect relies on augmented reality glasses and automated computer-aided design software to create and refine their designs.
3. A human takes a therapeutic drug which substantially changes their neurochemistry, enhancing their cognitive ability in a certain domain.
4. A human is implanted with an AI chip which transmits thoughts, feelings, and sensory stimuli to the brain.
5. A piece of software, embodied in a robot, perfectly models the processes of a human brain and their impact on the robot’s actions.
6. A human mind is uploaded to the cloud and inhabits a digital world but, in other respects, retains thought processes similar to those of an embodied human.
The nature-artifact continuum
To a first approximation, to call something an artifact is to say it is man-made. The description of artificiality can apply to objects, whether physical (chairs, computers) or ephemeral (ideas, theories, stories), or to processes (artificial insemination, something’s being made in a factory). Artifice has typically been contrasted with nature (or naturalness), which is defined, again to a first approximation, by the absence of a human maker. Consider that we might describe someone as a natural talent or a natural beauty. In both cases, we are saying the attribute lacks artificiality, that is, it is not the product of some intentional human activity, such as that of a coach or a cosmetics company. Artificiality and naturalness lie on a continuum: a commercially farmed lemon in the supermarket is natural relative to a lemon-scented room fragrance, but less so than a wild lemon growing in the forests where the fruit first evolved.
Now reconsider the intelligence examples 1-6 above. We are forced to conclude that these intelligences differ in degree of artificiality, but it is not easy to draw a sharp line between what is artificial and what is natural. Where to draw the line in how we apply the term is a pragmatic question for communities to consider. In the absence of any intentional boundary-setting by policymakers and lawmakers, we might suppose the goalposts simply move with the times; consider that a modern-day ‘natural birth’ still involves the technology of a modern hospital. In a world where AI implants are universal, we may be content to reserve the term ‘artificial’ for intelligences that exist exclusively in silico.
Non-human artifice
Traditional definitions of artifice as ‘man-made’ hinge on the involvement of humans. This has remained unproblematic for hundreds of years, but AI now raises the prospect of a non-human artificer. An autonomous AI or a cyborg may itself make some object, and intuitively, we would not want to call this object natural even though it is not human-made. How, then, should we adjust the definition of artifice to accommodate something AI-made?
A knee-jerk response would be to say an artifact is just “something that is made by something else”. This is too generous to be meaningful: it would encompass the bee’s beehive, the beaver’s dam, even the moon’s tides, and indeed the entire natural world. Ideally, we want to restrict artifacts to those objects made in a certain way, by a certain type of thing.
A possible way of grounding non-human artifice is in the notions of agency and intentionality: an artifact is an object made by an agent with the intention of creating it to fulfill a certain purpose. Whether an AI can engage in artifice then hinges on whether it exhibits these characteristics of agency and intentionality. If it turns out that it lacks them, we must conclude that the artifice is still to be attributed to the AI’s human creators, rendering the AI simply a tool employed by the artificer.
The problem of multiple makers
It is not easy to be precise about artifice, as currently defined, because what it means for X to make Y is ambiguous. Consider that objects often have multiple ‘makers’ involved in their production, for instance, a team of engineers working on a rocket. Another problem concerns an object’s lineage. A human may be ‘made by’ their parents, but also, in some sense, by their grandparents, and, in another sense, they are a product of their society and environment. Meanwhile, an AI may be the product of another AI and many humans, and, in turn, of their makers, societies and so on. A possible way of establishing the most relevant artificer for a given object is to look for the most recent entities in the object’s lineage with their own agency and intentionality. So, if a human makes an agential AI which in turn makes another AI, we consider the latter AI-made. But the multiple makers problem means tracing this lineage would be very complicated in practice. This ambiguity may pose a problem where we seek to attribute moral responsibility for the behavior of certain AI systems (see the entry on moral responsibility).
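To make the heuristic concrete, here is a purely illustrative toy sketch in Python. The Entity class, its agential flag, and the most_recent_agential_makers function are invented for this example, and treating agency as a yes/no flag is a gross simplification of a contested philosophical notion. The sketch simply walks back through an object’s lineage, one generation at a time, and returns the nearest ancestors that count as agents:

from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    agential: bool                      # toy stand-in for agency/intentionality
    makers: list["Entity"] = field(default_factory=list)

def most_recent_agential_makers(obj: Entity) -> list[Entity]:
    # Step back through the lineage generation by generation and stop at the
    # first generation containing entities that themselves count as agents.
    frontier = list(obj.makers)
    while frontier:
        agential = [e for e in frontier if e.agential]
        if agential:
            return agential
        frontier = [m for e in frontier for m in e.makers]
    return []

# Example lineage from the text: human -> agential AI -> second AI
human = Entity("human engineer", agential=True)
ai_maker = Entity("agential AI", agential=True, makers=[human])
ai_product = Entity("second AI", agential=False, makers=[ai_maker])

print([e.name for e in most_recent_agential_makers(ai_product)])  # ['agential AI']

On this toy model, the second AI counts as AI-made because its nearest agential ancestor is the agential AI rather than the human. The multiple makers problem resurfaces as soon as a generation mixes agential and non-agential makers, or agency itself admits of degree.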