- Warning! Spoilers! Don’t read unless you’ve seen the season finale of Westworld.
There’s a great deal of discussion about the possibility of Artificial Intelligence. But what of the possibility of Artificial Consciousness? Will we ever create beings that not only walk and talk and behave just like us, but which also share our consciousness: our feelings, emotions, sensations and sensory experiences? Suppose we one day created robots akin to the hosts in Westworld. Would such creatures be conscious, and how would we ever know one way or the other?
There are hints in the show that the conscious hosts can be distinguished from those lacking consciousness by their unpredictable behaviour. Maeve seems to ‘go rogue’ after the death of her ‘daughter.’ Grief seems to override her programming, allowing her to express human-like emotional freedom, rather than blindly obeying her code. But [BEWARE SPOILER 1] all is not as it seems: in the final episode Maeve is presented with a tablet describing her apparently free actions in advance of her doing them. Are we to deduce from this that Maeve is not really conscious after all? When confronted with this evidence, Maeve angrily responds “No, these are my decisions.”
- Thandie Newton as Westworld’s magnificent Maeve: Are we humans just responding to our own programming too?
The trouble with using unpredictable behaviour as a test of consciousness is that it’s not clear that it’s a test we human beings would pass. Many scientists and philosophers are inclined to think that human behaviour is entirely determined by our physiological make-up. If there were genuinely spontaneous decisions, then surely these would show up in our empirical investigation of the brain, as random events inexplicable in terms of prior physical causes. The fact that neuroscience has never turned up such random, spontaneous brain events seems to suggest that they don’t exist, and that human behaviour is entirely determined by prior physical causes in the body and brain. In principle, our actions are as pre-determined as Maeve’s. But this clearly doesn’t show that we’re not conscious. Each of us knows from our own case that we have conscious experiences.
What we need is a theory of consciousness: a theory that tells us what it takes for a physical system to have an inner subjective life. The problem is that no one has yet come up with such a theory, nor even laid out a clear view as to how we would go about testing it. We usually test scientific theories by observation and experiment. But the feelings and experiences of others are essentially unobservable. Usually we find out whether people are conscious by asking them. But the hosts in Westworld claim that they have experiences; that they feel pleasure and pain, for example. Suppose our theory of consciousness said that the hosts were not conscious. How could we possibly test whether our theory had given us the right answer?
“We may well be in a similar position to those trying to explain the emergence of complex life before Darwin came up with the theory of natural selection.”
There is a theory of consciousness explored in the show, inspired by Julian Jaynes’ speculative theory of the ‘bicameral mind’. [BEWARE SPOILER 2] Arnold (or is it Ford?) ultimately creates consciousness in Dolores by giving her an inner monologue, which she eventually – at the moment when she achieves consciousness – accepts as the voice of her own mind. But this view commits the common mistake of conflating consciousness with self-consciousness. Horses and human babies lack a sense of their own existence. They are not self-conscious, but they are nonetheless conscious, in the sense that they have experiences and sensations. If we were to go by the ‘bicameral mind’ theory of consciousness, then we would have to infer that horses and babies are not conscious and so can’t feel pain – which few of us would be willing to accept. Perhaps the bicameral theory of mind has something to say about our awareness of our own existence. But it cannot help address the prior question of why we have experience at all.
- A mind is a terrible thing to waste: Westworld asks, is it a terrible thing to create?
So where does this leave us? Perhaps we are in a position analogous to those trying to explain the emergence of complex life before Darwin came up with the theory of natural selection. Maybe we have to wait for the ‘Darwin of consciousness’ to come up with an explanation of the emergence of consciousness. In the meantime, if something looks like it’s conscious and acts like it’s conscious, it’s probably best to err on the side of caution and treat it as though it is.
You can binge-read Prof Goff’s ideas about consciousness here.