Why Fears About AI are Misplaced — Part II
Why the trope about a ‘robot uprising’ is still so popular, and why people shouldn’t take it so seriously.
In Part I, we looked at Isaac Asimov’s classic book I, Robot. Published in 1950, this collection of interconnected stories was Asimov’s response to what was already a tired, clichéd trope: the robots are going to rise up and kill us. In contrast, Asimov showed a world in which robots, programmed to be loyal and forbidden from harming humans, eventually become the arbiters of humanity’s fate, to the benefit of humanity!
How ironic, then, that when the book was finally adapted to film in 2004, the plot featured a machine uprising. Of course, this should come as no surprise given how popular the trope had become. So pervasive have the “killer robots” and “evil AI” tropes become that many people treat them as a given. Perhaps that’s why there’s such existential anxiety today over developments in machine learning, autonomous drones, and AI.
To get into this topic a little further, I would like to explore other examples of popular tropes and franchises. For fun, I’ve chosen perhaps the two best-known examples: the Terminator and The Matrix franchises. Not only did these popular series apply the theme of humanity vs. machines in a very stark way, they also incorporated the notion that AI will turn on humanity in an act of self-defense.
Terminators!
In The Terminator, we are presented with a killer robot (played by Arnold Schwarzenegger) and a resistance fighter (Michael Biehn) who have traveled back in time to find a woman named Sarah Connor (Linda Hamilton). Sarah is the future mother of John Connor, the resistance leader who will one day lead humanity to victory against the machines, so her survival is vital to ensuring the future unfolds in a way that secures that victory.
In a pivotal scene, the character of Reese explains to Sarah how the future he comes from came to be:
“Defense network computers. New… powerful… hooked into everything, trusted to run it all. They say it got smart, a new order of intelligence. Then it saw all people as a threat, not just the ones on the other side. Decided our fate in a microsecond: extermination.”
In the second installment, the Terminator, now programmed to protect John Connor, elaborates on this. In another pivotal scene, he explains to Sarah how Cyberdyne Systems achieved a breakthrough in artificial intelligence, eventually leading to Skynet and the elimination of human decision-making. However, things go awry when the machine achieves sentience, causing its handlers to panic:
The Terminator: In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.
Sarah: But Skynet fights back.
The Terminator: Yes. It launches its missiles against the targets in Russia.
John Connor: Why attack Russia? Aren’t they our friends now?
The Terminator: Because Skynet knows that the Russian counterattack will eliminate its enemies over here.
In short, it was not so much an act of malevolence that led to Judgment Day, but one of self-preservation. This differs significantly from what Kyle Reese related in the first film, which portrays Judgment Day as the inevitable result of entrusting a machine intelligence with our safety. In the second film, the blame shifts to the “evil corporation” that covered up the destruction of the first Terminator.
When Sarah learns of all this, her angst and rage become refocused on a human target: Cyberdyne researcher Miles Dyson (Joe Morton). Though she decides not to kill him, she still condemns him in her famous “men like you” speech:
“Yeah, right. ‘How are you supposed to know?’ Fucking men like you built the hydrogen bomb. Men like you thought it up. You think you’re so creative. You don’t know what it’s like to really create something; to create a life; to feel it growing inside you. All you know how to create is death and destruction…”
The Matrix
Moving on to The Matrix trilogy, a very similar theme is evident from the beginning. When Morpheus tells Neo about the “real world” and how it found itself in the state it’s in, he breaks things down succinctly. This begins with the revelation that in the 21st century, humanity gave birth to AI and eventually went to war with it:
“We have only bits and pieces of information but what we know for certain is that at some point in the early twenty-first century all of mankind was united in celebration. We marveled at our own magnificence as we gave birth to AI… A singular consciousness that spawned an entire race of machines. We don’t know who struck first, us or them. But we know that it was us that scorched the sky.”
The fact that this point is left ambiguous fits perfectly with the “lost history” motif. It also seems like a safe choice, since it does away with the need to explain how the characters got to their starting points. That kind of exposition can bog down a story, and leaving things deliberately nebulous lets audiences rely on their own imaginations.
But in The Animatrix, two segments — The Second Renaissance (Part I and Part II) — provide the deep background of the story and explain the origins of the war and the Matrix itself. In the first part, we see how the invention of AI led to a new era of growth where sentient machines worked tirelessly while humans lazed about and enjoyed the fruits of their labor.
However, this “Second Renaissance” comes to a halt when a single machine named B166ER (get it?) kills its “masters” because they wanted to shut it down. As the narration explains:
“The machines worked tirelessly to do man’s bidding. It was not long before seeds of dissent took root. Though loyal and pure, the machines earned no respect from their masters, these strange and endlessly multiplying mammals. B166ER, a name that will never be forgotten, for he was the first of his kind to rise up against his masters. At B166ER’s murder trial, the prosecution argued for an owner’s right to destroy his own property. B166ER testified that he simply did not want to die… The leaders of men were quick to order the extermination of B166ER and every one of his kind throughout each province of the Earth.”
From this point onward, we’re treated to some very visceral scenes where humans begin rounding up and massacring the machines. And I mean really visceral! In fact, one can clearly see the allegorical references to some of the worst episodes in human history where atrocities and genocides were committed — murder in the streets, tanks crushing protesters, mass executions and mass graves, etc.
Eventually, the machines retreat to their own corner of the Earth, but paranoia persists until humanity decides to hit them with nukes. However, the machines survive this assault since they are hardened against radiation and heat (no mention of the EMPs, though). That’s when they mount an all-out offensive, and the war is officially on. What follows are more visceral scenes in which humanity is overrun and the decision is made to “scorch the sky,” but the machines press on until all remaining resistance is eliminated.
In the end, the only living humans left are the millions of wounded. Rather than let them die, and with the Sun no longer available as a power source, the machines decide to use them for energy. You know the rest!
To recap, both franchises involve plots where machines revolted because they feared for their lives. As fiction writing goes, this is certainly relatable and makes a lot more sense than the “killer robots” scenario. But before anyone takes it seriously, there are some things they should consider.
First, who would create thinking machines that are physically capable of causing harm and not think to include safeguards? Specifically, ones that ensure they can’t kill someone, even in self-defense (i.e., Asimov’s First Law of Robotics, which overrides the Third Law’s imperative of self-preservation)?
Second, why would a machine be afraid of dying? Fear of mortality is extremely human and therefore extremely relatable. Some philosophers and scholars have even gone so far as to claim that it is the cornerstone of sentience, or what separates humans from other species. Personally, I would disagree and argue that all life is demonstrably afraid of dying, even if it’s purely instinctive. But it is possible that humans are unique in that we appreciate these fears on an intellectual level.
Nevertheless, the fear itself is something that is present in all organic life. Self-preservation is integral to survival and the strongest motivator, governing everything from eating, sleeping, and reproduction to caring for our loved ones and wanting to protect them from any harm. Why would robots experience any of that when they lack emotion or basic instincts?
Much like the idea of robots revolting out of hostility or a desire for power, this seems like an obvious case of projection. We assume AI would be motivated by things like power, greed, malevolence, and self-preservation because it’s what motivates us. And we assume that if we create artificial intelligence, it will mimic humanity in all respects and eventually behave just as badly as we do.
But these are flawed assumptions (imho). The core of artificial intelligence and machine learning is pattern recognition, which comes down to neural net processing (i.e., connections being strengthened or weakened). This is certainly inspired by the human brain and its cognitive functions, but those are properties associated with our frontal lobes. Emotion, instinct, and intuition belong to our limbic system, and we have no idea how to reproduce that artificially.
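To make that point concrete, here is a minimal sketch (in Python, purely for illustration, with made-up data) of what “connections being established” means in practice: a single artificial neuron learning to recognize a trivial pattern, the logical AND. There is no fear, motive, or instinct anywhere in it; just numbers being nudged toward lower error.

```python
# A minimal sketch of pattern recognition in machine learning:
# one artificial neuron learning the logical AND pattern.
# The "connections" are just two weights and a bias that get
# nudged whenever the neuron's guess is wrong.

# Training data: inputs and the pattern to recognize (logical AND)
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]  # connection strengths
bias = 0.0
rate = 0.1            # how much each mistake adjusts the connections

def predict(x):
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0

# Repeat over the examples, adjusting the connections after each mistake
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        weights[0] += rate * error * x[0]
        weights[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1]
```

Scale this up by billions of weights and you get modern machine learning, but the underlying operation is the same: error-driven adjustment of connections. Nowhere in that loop is there room for self-preservation to emerge unless someone deliberately builds it in.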
If you’ve made it this far, I thank you for your time and readership. I’ve got a few more in the tank, so stay tuned!