Introduction to issues of artificial consciousness and human dignity (11:15-12:30, room 705, 7F, Saturday, Toshi Center)

If consciousness, and by extension, an equivalent of human persons, can be created artificially, what does this portend for the notion of a unique human dignity and value of human persons?

Human dignity is an absolutely fundamental concept within contemporary rights discourse, but at the same time it lacks a precise definition. The term derives from the Latin dignitas, basically meaning a mark of honor and respect. Yet its usage and general function within rights discourse is straightforward, even if its definition is imprecise:

It functions as the prerequisite of having rights as such, and is basically a symbolic formulation of the Enlightenment-era notion of inherent, inalienable rights.

Human dignity, as a foundation of the capacity to possess rights, is predicated upon a unique human haecceity, a human thisness which both distinguishes humans from other sentient and non-sentient beings, and denotes the particular capacities that uniquely confer on human beings the status of ethical subjectivity and the ability to possess rights.

These particular capacities, as generally understood, thus not only distinguish humans as a species, but are also taken to metaphysically ground our ethical subjectivity. Ethical subjectivity can be thought of as the ability to perform ethically relevant actions, and the capacity to be affected by others’ actions. An ethically relevant action is an action for which you are responsible, generally (but not always) with regard to this action being the result of a free and rational choice, rational in the sense that the agent is capable of understanding the significance and the effects of the action. So a cat that kills a mouse, for instance, is normally not considered to perform an ethically relevant action, since the behaviour is not taken to be the product of a free and rational choice, and the cat, the agent in question, is not taken to be capable of understanding the significance and the effects of the action, even if we grant that the mouse is capable of being relevantly affected by the act in question.

This is just an example; the question of whether we can ascribe free will to animals, and what level of understanding they can possess, is a complex issue well beyond the scope of this paper. The point is simply to illustrate the generally accepted preconditions for an ethical subjectivity unique to humans.


Moving on to artificial intelligence, we generally distinguish between weak AI and strong AI. Weak AI refers to the capacity of machines that are programmed to adapt and respond to input in some general sense, rather than merely processing input in a completely predictable manner. Weak AI is exemplified, for instance, in the way that Google Translate “learns” new languages: by automatically parsing massive text corpora and encoding updated associations between words as it receives new input, which then feed back into the process.
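The learning pattern described above can be made concrete with a toy sketch. The following is not Google Translate's actual method, only a minimal illustration of the general idea of "weak AI": a program whose word-association statistics are updated by each new piece of input, so that its later responses depend on what it has previously observed, without any subjectivity being involved.

```python
from collections import defaultdict
from itertools import combinations

class AssociationLearner:
    """Toy 'weak AI': accumulates word co-occurrence counts from input text."""

    def __init__(self):
        # Maps a pair of words to the number of sentences they co-occur in.
        self.counts = defaultdict(int)

    def observe(self, sentence: str) -> None:
        """Update the association statistics with one new sentence of input."""
        words = sorted(set(sentence.lower().split()))
        for a, b in combinations(words, 2):
            self.counts[(a, b)] += 1

    def strength(self, a: str, b: str) -> int:
        """How strongly the learner currently associates two words."""
        key = tuple(sorted((a.lower(), b.lower())))
        return self.counts[key]

learner = AssociationLearner()
learner.observe("the cat chased the mouse")
learner.observe("the cat caught a mouse")
print(learner.strength("cat", "mouse"))  # -> 2
```

Each call to `observe` changes the internal statistics, and those statistics shape the output of `strength`; the behaviour adapts to input, yet everything remains a fully mechanical, predictable computation once the input history is fixed.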

Strong AI refers to artificial intelligences which exhibit actual subjectivity. The criterion is either that the machine in question must pass some form of the Turing test, and thus, to a human interacting with it, must be indistinguishable from another human person; or, an entirely different criterion: that the machine in question possesses actual phenomenal subjectivity.

What is actual phenomenal subjectivity? The first-person perspective. The capacity to actually experience something, the ability to access that which in philosophy sometimes is known as qualia. Consciousness.

The difference between seeming to possess subjectivity, in the sense of being able to fool a human into thinking they are interacting with another person, and actually possessing a first-person perspective is of course absolute. However, on a foundation of behaviourism, it has sometimes been argued that a machine which is behaviourally indistinguishable from a human subject must also be regarded as on par with humans with regard to subjectivity. On the assumption that subjectivity is something that really exists, I would argue that this is obviously false, since you cannot establish that something is actually true in reality from the mere fact that humans believe it to be true.

To be sure, however, several philosophers inspired by reductionist behaviourism, such as Dennett or Metzinger, deny that human beings actually possess a real first-person perspective and maintain that subjectivity is a form of illusion. Their position thus opens the way for the argument that, while a strong AI, just like a human, cannot actually possess a first-person perspective, it would nevertheless be behaviourally on par with human persons, and should therefore be afforded the same level of respect and dignity.

So we have two versions of strong AI – one with a behaviour so advanced that it would seem to possess subjectivity – and one where it actually would possess subjectivity in the form of a first-person perspective. Both versions, as we see, have possible relevance for issues of ethics and human dignity. The actual-subjectivity-version has an obvious and inescapable relevance, because it would entail that AIs could possibly be ethical subjects in the same sense as human beings.

But the seeming-subjectivity variant may also have an ethical relevance, in two possible ways. First, it can make it plausible, or at least seem possible, that an AI possesses actual subjectivity, and thus provide support for the other form of strong AI. Second, assuming the behaviourist interpretation, it can be used to argue that since subjectivity effectively reduces to objective behaviour, an AI that behaves like a human person also ought to be treated like a human person.

With all this said, there are two basic ways that strong AI can potentially challenge certain values and human rights – ontologically and materially. Ontologically, the very notion that an AI can possibly be on par with a human person, either because it can possess subjectivity, or via the behaviourist route I just mentioned, has the potential of displacing human uniqueness. This is an ontological shift, a change in terms of what we take to actually exist in reality, introducing artificial subjects into the basic makeup of entities in the world.
Materially, the introduction of AIs in, for instance, areas of decision-making – something which is already implemented with weak AI in many parts of the world – has the potential of effectively usurping human autonomy in the political sphere, to various degrees and in various ways. A combination of these two dimensions of influence is of course also possible, such as, for instance, a case where advanced AIs exhibiting person-like behaviour would also be part of institutional decision-making.

Addressing narratives of the possibility of strong AI

Yet my focus in our project will be on the ontological issues, not the institutional ones. My basic take is that the affirmation of the possibility of strong AI potentially entails an ontological shift that could significantly affect human rights discourse. We need to critically assess the notion of strong AI and, on a sound philosophical basis, determine exactly in what sense artificial personhood is or is not possible, and we need to scrutinize the effects of the narratives affirming strong AI, in terms of how they can influence our identity and our understanding of rights.

At the outset, I assume that there are three basic ontological outcomes of the AI-affirming narratives:

1. Artificial subjectivity is taken to be equivalent to human subjectivity
2. Artificial subjectivity is taken to be qualitatively different from human subjectivity
3. Artificial subjectivity is denied

The further implications of each of these then depend on how we can metaphysically frame the response in question. For instance, if we go with 1, the significance of the equivalence depends on what exactly we mean by human subjectivity as such. Is the AI’s subjectivity based in an immaterial Cartesian substance that is somehow generated when we produce a certain configuration of matter? Then it would in principle be possible to mass-produce the consciousness of each and every one of us, which would eliminate the uniqueness of human persons. Or is the AI’s subjectivity rather an effect of us bringing out something pre-existent, something that is simply channelled through a certain configuration of matter? Then we would have no conflict with human uniqueness and dignity. And on 3 – if we deny actual subjectivity to AIs – there would be no prima facie conflict with human uniqueness, but depending upon how this denial is explained metaphysically, it could either threaten the notion of an actually existing human consciousness, if we go by Metzinger’s and Dennett’s route, or it could reinforce it, if we anchored the denial in, for instance, Aristotelian dualism.

And there is also the concrete psychological issue of denying subjectivity to something that is very much like another human person: to many of us, this will not feel or seem right, and such a stance could, indirectly, make it easier to deny the personhood of actual human beings.