I don’t watch movies, generally.
Sure, I’ll watch something in theatres every now and then. Usually superhero movies, maybe something else if it’s based on a book I’ve read, or if I’m planning on watching it with friends. That’s only in theatres, though. Outside… eh. Nearly every movie I’ve ever seen outside of a theatre was either shown in a school of some kind, or at a friend’s house on a movie night.
I don’t know why, but I mostly dedicate my personal time to stories told through books, TV shows, or audio. Every now and then, however, I’ll get over whatever fear I have of movies and risk an hour or two watching a film I might or might not enjoy. I did that today, and I watched ex_machina.
ex_machina, if you’re not aware, is a movie about a young man named Caleb who is invited by his boss Nathan, who in this universe runs a Google-like company, to take part in a secret experiment. It’s all very hush-hush, with Caleb having no idea what’s going on until he signs an NDA. Once he does, Nathan excitedly tells him he is going to be testing an AI with the Turing Test.
The Turing Test was proposed as a way to measure whether an Artificial Intelligence has truly been created. Essentially, you have a computer talk to someone, and if that person can’t tell whether they’re speaking with a computer or a human, the computer has passed the Turing Test.
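To make that pass criterion concrete, here’s a toy sketch in Python. This is entirely my own framing, not anything from the film or from Turing’s original paper: a “judge” guesses whether each reply came from the machine, and the machine passes if the judge does no better than chance.

```python
def turing_test(judge, replies):
    """Toy Turing Test pass criterion (a sketch, not the real
    conversational protocol).

    judge:   function taking a reply string, returning True if the
             judge believes the reply came from the machine
    replies: list of (text, is_machine) pairs shown to the judge
    """
    correct = sum(judge(text) == is_machine for text, is_machine in replies)
    # The machine "passes" if the judge guesses no better than a coin flip.
    return correct / len(replies) <= 0.5

# A judge who can't tell the difference and always guesses "human":
replies = [("Hello!", True), ("Hi there.", False),
           ("How are you today?", True), ("Fine, thanks.", False)]
print(turing_test(lambda text: False, replies))  # prints True: machine passes
```

In the film, of course, Caleb already knows Ava is a machine, which is part of what makes the setup interesting.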
So Caleb gets to meet Ava, and is blown away by her personality. It becomes very clear that were it a double-blind test, he would never be able to tell Ava apart from “a real person.”
I add scare quotes around ‘a real person’ because that’s what I want to talk about today: What makes a human more real than an AI? Or rather, is the main assumption in that question incorrect?
I’m not going to spoil the rest of the film. It’s best watched without preconceived ideas about what story it’s trying to tell, or what happens later on. However, I will say that I found myself relating most strongly to Ava.
This follows a well-established pattern for me. Historically, whenever I watch or read a story about Artificial Intelligence, I tend to side with the machines. Not in movies like The Terminator or The Matrix, where they’re very obviously cast as evil. Rather in stories like the movie Her, which takes a more ambiguous tone so the audience gets to draw its own conclusions about AIs.
Sometimes I laugh about the idea that, if I were alive in a future in which artificial intelligence were a reality, I’d probably be one of those whackos holding protest signs outside research facilities and robotics companies, arguing for humane treatment of those machines.
It’s true, though. There aren’t many things that will get me to care enough to take real action in the world. I’m aware of animal abuse in the meat industry, yet it doesn’t affect my diet in the least. I know of slave labour (or something practically identical to it) in clothing and technology manufacturing, but I honestly could hardly care less which companies are involved. The list goes on. I’m able to switch off my empathy just because it’s easier to live that way.
That’s not the case with Artificial Intelligence. I don’t know why, but it’s something I feel very, very strongly about. In a hypothetical future where I’m a 60-something-year-old woman living with my family and they buy an android with AI, I’d likely apologise to the poor robot, and treat it with as much dignity as I would a fellow human.
Maybe it comes from being transgender. I have plenty of experience in knowing who I am while the rest of the world rejects that identity. I’m perfectly aware of how it feels to listen to others tell me what I am and am not, without any regard for me. I know what it’s like to be treated with derision, and to be seen as ‘lesser.’
Really, should it be a surprise that I’d find it easy to sympathise with a group who’d likely be treated with fear and hatred by most humans? Who would, possibly, know they are authentic ‘people,’ so to speak, with genuine emotions and thoughts, but be treated as incapable of either by others? Who would be looked down on just because their thought processes are produced a certain way?
Sure, you could never know, not truly know, whether their feelings and thoughts are genuine, but it’s the sort of thing where I would always err on the side of caution.
When I imagine my future self as an advocate for bots’ rights, I inevitably go down a path that might be offensive (you can never know, nowadays). A lot of the way I imagine people would treat AIs is similar to how white people once treated black people. They saw black people as lesser, as subhuman. Some considered people with darker skin as, well, not people: incapable of true feelings or of higher intelligence. They had a solid wall in their minds separating black people from white people, and they literally couldn’t imagine them deserving the same treatment or respect.
In a hypothetical future in which androids with artificial intelligence existed and lived among humans, would you be for or against them being able to vote? How about the right for them to marry humans? How about adopting human children? Would you be okay with your human children studying in the same classroom as robots with child-like personalities and intelligence? What if your kid wanted to stay over at the house of a robot couple?
Would you be considered ‘speciesist’ in the future? Or whatever they choose to call the equivalent of racists and homophobes when it comes to bots’ rights.
Even I, with my bizarre pro-AI point of view, would at best be seen the way you’d see a vaguely racist grandparent who talks about how “the blacks” and “the gays” should be allowed to be happy. Sure, you’re glad they’re not like those other old people who see robots as worthless, but you’re also kind of embarrassed listening to them use insensitive language.
You probably answered most or all of my questions about that hypothetical future with androids with resounding no’s, didn’t you?
Why? How can you ever truly know whether you’re right about ‘them’? More importantly, what if you’re wrong? What if you end up on the wrong side of history, and children in the future read about people like you and me with the same vague disgust with which we now regard racists from the past, and will hopefully soon universally regard homophobes?
Maybe I’d be one of the few people in the world who would look at a real creature with emotions and ideas of their own, and see them as equals deserving of dignity. Or maybe I’m just the sort of idiot who would allow Skynet to go live.
I don’t know.
I’d love to hear your thoughts on the whole idea of AIs and what makes a person a person, or any other response to my post.
As always, you can follow me on Facebook here. Oh, and if you’re curious, ex_machina was excellent and definitely worth watching. Same goes for Her, if you like that sort of story.