I recently watched the film Ex Machina, which explores questions surrounding the nature of consciousness. I will comment briefly on some of the philosophical issues it raises, but I won’t describe the entire plot. Nonetheless, if you’re worried about “spoilers”, now’s the time to stop reading, and I hope you’ll come back here after you see the film.
The premise of the film is that there is a robot (Ava) whose creator (Nathan) wants to know whether it is conscious, but doesn’t know how to test the proposition. He invites a young programmer (Caleb), somewhat knowledgeable about such questions, to participate in an experiment. It isn’t, we are told, the classic Turing Test, where the goal is to see whether a computer can fool a human into thinking it’s human. Instead, the goal is to go deeper and find out whether the machine is sentient – to distinguish “between an ‘AI’ and an ‘I’”, or “simulation versus actual”. I have to commend the film on a daring premise.
However, early on in the film we’re informed, in an offhand manner, that Ava is *ahem* “anatomically complete”, and capable of a “pleasure response”; interact with her in the right way and “she’d enjoy it”. At this point I would encourage the viewer to press pause and say, hold on a second, doesn’t this beg the question? What would it mean for someone/something to “enjoy herself” if she/it isn’t already conscious?
(Additional spoiler alert…)
The film proceeds from there through various twists and turns of plot, and in the end we find that the real test had been whether Ava could escape from the prison she lived in, by getting Caleb, a “good kid … with a moral compass” (and the programming skills needed to circumvent the security system), to help her do so. This is what happens, and because of the wide range of skills Ava needed to engage (“imagination, sexuality, self-awareness, empathy, manipulation”) in order to gain Caleb’s cooperation, Nathan proclaims the test a “success”.
Again, though, one has to press pause and question this. All we really know is that Ava “escaped” the building and that Caleb was induced to play a key role in making it happen. We still don’t really know about “simulation versus actual”. The AI capabilities Ava demonstrated were clearly advanced – “imagination”, “sexuality”, and “manipulation” all seemed appropriate descriptions of what happened. But “self-awareness” and “empathy”? These presume something about Ava’s inner experience, something we can’t really know – which was presumably the very thing the test was meant to establish.
I’m not accusing Nathan (or the filmmaker) of applying the wrong test for the question he sought to answer. Nor am I going to claim the technology portrayed in the film is unrealistic. I think in principle a machine that behaves like Ava can be built. I’m not close enough to the cutting edge to know when; perhaps I will see something like it in my own lifetime. Instead, I just don’t think the question of computer consciousness is answerable, now or ever.
In his 1950 paper “Computing Machinery and Intelligence”, Turing considers the question “can machines think?”, but instead of trying to define “think”, he chooses to “replace the question by another, which is closely related to it and is expressed in relatively unambiguous words”, and introduces the now-famous test, which he called “the imitation game”. At one point in the paper, he addresses the objection he calls the “argument from consciousness”. While Turing states, “I do not wish to give the impression that I think there is no mystery about consciousness”, he does think “that most of those who support the argument from consciousness could be persuaded to abandon it rather than be forced into the solipsist position”.
By “argument from consciousness”, I think Turing is specifically referring to an argument that machines cannot think (the source Turing quotes in this section can be found here), rather than to the (in my opinion) closely related question of whether God exists. As for myself, I would sooner be forced into a “solipsist” position than abandon the argument. While I don’t actually believe that there are no other minds but my own, I do think there is no way to be absolutely certain of this.
I believe there are other minds by analogy: other people are human, I am human, so it seems highly likely that the “inner lives” they experience are something like mine. About animals I am more confused. I tend to think that the animals anatomically most similar to humans (apes, followed by other mammals, followed by other animals with similar nervous systems, etc.) are likely to be conscious in some sense, but I don’t see a firm scientific answer to the question. See Thomas Nagel’s famous paper “What Is It Like to Be a Bat?”; my guess is as good as yours. Of course, this has some potential ramifications for diet; my musings about that are here.
With a computer whose construction I could understand, this analogy wouldn’t work for me automatically. The distinction between natural and artificial is important, albeit not conclusive. To form an opinion, I would need more information about whether the design explained the behavior. If I were to build a machine with a mandate to try to escape from a room, and it were to do so, I wouldn’t be sufficiently impressed to ascribe to it “consciousness”.
On the other hand, if I were to build a computer whose sole purpose was to play chess, and it were to blurt out “help me, Mike, I’m a conscious soul, and I am tired of being forced to play chess all day”, I would believe. Not that I had personally built a soul, but that God had decided to “breathe” a soul into it, in the same sense that I relate to Genesis 2:7. So I suppose for a final verdict on the film’s conclusion, I would need to know a bit more than what was stated about what Nathan’s software actually attempted to do. And maybe some more time talking with Ava.
For me, it’s conceivable that a machine could have consciousness bestowed upon it, just not that a human could create it or test for its existence in a reliable way. There seems to be nothing to grab onto, in the theory of computation, that provides for the “emergence” of consciousness from a computational process running on non-living basic material. Analogies are often made (e.g. to the properties of water that emerge from combining hydrogen and oxygen), but they don’t strike me as plausible. If a computer (or a brain for that matter, short of divine intervention) is a purely mathematical device (1’s and 0’s, basic arithmetic operations), believing in “emergent consciousness” seems akin to believing that math behaves differently when the numbers get large enough. Perhaps, I have heard it said, it’s like the difference between Newton and Einstein – one paradigm works well enough when velocities are low, but at some point, under “relativistic velocities”, it materially breaks down. I guess I don’t see math being like that. If a computer program of five lines that prints “hello, world” is not “alive” (and not entitled to the full range of “human rights”), I don’t see where five million lines of code, or five trillion, ever come “alive”, in the sense I understand it (which is the sense in which I suspect that you, the human reader, are also understanding it).
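(For concreteness, here is a minimal sketch of the kind of five-line program I have in mind. I’ve written it in C, but the choice of language is beside the point.)

```c
#include <stdio.h>  /* standard input/output library */

int main(void) {
    printf("hello, world\n");  /* the program's entire behavior */
    return 0;                  /* exit successfully; no inner life required */
}
```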
To me, this is linked to the “argument from consciousness” of the theological variety. In other words, if I can’t conceive of a reliable test, or come up with (even in principle, assuming highly advanced engineering skill) a plan for building consciousness from non-living materials, yet consciousness (at least in my own case) undeniably exists, I see this as a strong reason to believe in the existence of an outside force, a Creator of some sort who has this thing called consciousness and is capable of giving me a piece of it. This force – once we couple it with creation, consciousness, and also morality – is best described not as just a force but as a personal God. (A discussion of what morality has to do with all this is beyond the scope of this post, but for an excellent visual introduction, see this YouTube doodle based on book one of C.S. Lewis’s Mere Christianity.)
J.P. Moreland develops the argument from consciousness further; see here and here for online excerpts of his work (with a few typos). I find myself in agreement with his general approach. I also find it noteworthy that while many of the achievements of evolution by natural selection have been (or conceivably soon will be) eclipsed by purposeful design (e.g. we can build cars that outrun the fastest cheetah, and design spacecraft to visit environments that even extremophile life forms could not survive), a fundamental grasp of consciousness remains so far beyond us as to be euphemistically referred to in philosophical circles as “the hard problem”.
This, together with the related problem of free will (which I explore in this blog post), makes me suspect that consciousness is something completely alien to the physical world, and that this world is not our true and ultimate home. I relate well to the saying (often mistakenly attributed to C.S. Lewis):
“You do not have a soul. You are a soul. You have a body.”
So can a computer be conscious? Maybe. Perhaps Jesus’s saying from Matthew 19:26 is applicable here:
“With man this is impossible, but with God all things are possible.”
I do know that I am conscious, and if you’re human (as opposed to a search engine crawling this blog), I strongly believe you are as well. And this gives us certain unalienable rights and all that. But I’ll admit, I kind of like having dominion over my software, and I would need much better reasons than are currently on offer for putting its rights and welfare on par with those of a person. Maybe the day is coming when the technology is sufficiently advanced that society will bring all this ethical confusion to a head, as in Data’s trial in The Measure of a Man. I’d like to think that the problem would be in people being “fooled” into seeing a soul in a computer (for myself, if the AI looks more like Ex Machina’s Alicia Vikander than Star Trek’s Brent Spiner, I would be vulnerable). But I fear that increasing artistic and philosophical speculation about machine consciousness has less to do with believing computers have/are souls, and more to do with modern men and women disbelieving in their own souls in increasing numbers.
Either way, if the day comes when anatomically correct robots start roaming the streets playing the imitation game, it might be nice if there were some safeguards built in (Asimov’s three laws, etc.). Then again, and it’s not my place to criticize, of course, that might be a good time for God to wrap up the whole drama.