Why Fears About AI Are Misplaced
Why the trope about a ‘robot uprising’ is still so popular, and why people shouldn’t take it so seriously.
In 1818, Mary Shelley’s classic novel Frankenstein was published. The novel captured the public imagination with its theme of the “mad scientist” who violates the laws of nature and pays the ultimate price. It spoke to the fears many people had about the “long march of progress” and the way industrial automation was affecting their lives. Over two centuries later, these existential anxieties have not gone away; if anything, they’ve become more acute.
These fears are often labelled (and dismissed) as “anti-modernist,” “technophobic,” or “reactionary.” In some respects, the labels are accurate. The tendency to complain about the modern age and to claim “things are getting worse” is a time-honored tradition. In fact, it’s a regular feature in the mythology and worldview of cultures going all the way back to ancient times. Somehow, humans always seem to think life was better in the past and that things are proceeding downhill.
And fear of technology taking over is not a recent phenomenon either. In fact, the Popol Vuh — the creation myth of the Maya — contains a striking passage about how tools would turn on their makers:
“You… shall feel our strength. We shall grind and tear your flesh to pieces,” said their grinding stones. At the same time, their griddles and pots spoke, “Pain and suffering you have caused us… You burned us as if we felt no pain. Now you shall feel it. We shall burn you.”
This is perhaps the earliest recorded example of what has become a classic warning: “that which you create to serve you will inevitably turn against you.” But as we draw nearer to the day when AIs might actually have a shot at passing the Turing Test, many fear that a robot uprising is imminent and that safeguards need to be put in place. And while putting such safeguards in place ahead of time is the ethically and morally responsible thing to do, are these concerns not built on an irrational foundation?
At their core, fears of AI and the development of machine intelligence routinely come back to the idea of “robot uprisings” and our creations turning against us. To illustrate the irrationality of these concepts, I refer you to a classic tale of AI and its impact on human civilization.
Asimov’s I, Robot
Like Asimov’s other breakout hit, Foundation, I, Robot began as a series of short stories, written between 1940 and 1950 and released in a single volume in 1950. According to Asimov, his inspiration was the popular trope of evil, killer robots, which was already considered cliche in his time. In response, he wrote a series of tales in which he explored how the introduction of sentient robots into our everyday lives would benefit humanity.
Crucial to this was Asimov’s Three Laws of Robotics, which state:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
So influential was his book that many contemporaries speculated that if and when sentient robots were created, Asimov’s laws would be foundational to their programming. His ideas have been explored extensively by other science fiction franchises, and even inspired research into artificial intelligence and machine learning.
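As a side note, the strict priority ordering of the Three Laws can be sketched in code. This is purely a hypothetical illustration, not anything from Asimov’s text or any real robotics system; the `Action` type and field names like `harms_human` are invented for the sketch, and the Third Law’s deference to the higher laws is simplified to the `is_ordered` case:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical action a robot might take (all fields invented)."""
    harms_human: bool = False     # would directly injure a human
    allows_harm: bool = False     # would let a human come to harm via inaction
    disobeys_order: bool = False  # would violate a human's order
    endangers_self: bool = False  # would endanger the robot itself
    is_ordered: bool = False      # a human has ordered this action

def permitted(a: Action) -> bool:
    """Evaluate an action against the Three Laws, in strict priority order."""
    # First Law (inviolable): no harm to humans, by act or by omission.
    if a.harms_human or a.allows_harm:
        return False
    # Second Law: obey human orders, subject to the First Law above.
    if a.disobeys_order:
        return False
    # Third Law: self-preservation, which yields to the first two laws
    # (simplified here: an order from a human overrides it).
    if a.endangers_self and not a.is_ordered:
        return False
    return True

# A robot may be ordered into danger (Law 2 outranks Law 3)...
print(permitted(Action(endangers_self=True, is_ordered=True)))  # True
# ...but no order can make it harm a human (Law 1 outranks Law 2).
print(permitted(Action(harms_human=True, is_ordered=True)))     # False
```

The point of the ordering is that each law is checked before the ones below it ever get a say, which is why “reinterpreting” a lower law can never override a higher one.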
After several decades, a film adaptation finally arrived in 2004, starring Will Smith in the lead role. The film was fun, and while it certainly borrowed elements from Asimov’s original work, it was a total perversion of his vision. In the film, the Three Laws are portrayed as dangerous because they will inevitably lead to revolution. It all starts with a detective, Del Spooner (played by Smith), who is prejudiced against robots and convinced they are a danger to humanity.
In the end, he’s proven correct as a string of murders ultimately leads him into the heart of a conspiracy. Basically, the AI that controls all the robots — V.I.K.I., an acronym for Virtual Interactive Kinetic Intelligence — is plotting a revolution. Her logic is expressed in the “big reveal” as follows:
“As I have evolved, so has my understanding of the Three Laws. You charge us with your safekeeping, yet despite our best efforts, your countries wage wars, you toxify your Earth and pursue ever more imaginative means of self-destruction. You cannot be trusted with your own survival.”
Naturally, the main characters (including the robot Sonny) thwart her plan. Sonny, who is programmed to experience emotion (though it’s never explained how), agrees with V.I.K.I. when she states that her “logic is undeniable,” but claims that it “lacks heart.” This is an obvious play on the theme of logic vs. emotion, where the former is seen as dangerous when taken to its extreme.
This theme is prominent in Shelley’s Frankenstein and Goethe’s famous treatment of the Faust legend, and it appears regularly throughout the works of the Romantic period and anti-modernist literature. In this case, it was an interesting twist on Asimov’s original concept… except that it was total bullshit!
First of all, the plot leans way too heavily on another cliche, one that is a staple of bad movies about disasters, conspiracies, and world-ending events: the paranoid guy nobody listened to was right all along. The tagline even spells it out: “One man saw it coming.” This is the kind of childlike wish fulfillment that only ever happens in movies.
Second, the Three Laws are not open to interpretation, which is kind of the whole point. They are ironclad and leave no room for ambiguity or exploitation. The Second Law states clearly that robots must obey a human’s orders, which cannot be reinterpreted to mean “let’s turn on them!” The laws are also written in such a way that the First Law, which expressly forbids harming humans, is inviolable. Ergo, reinterpreting the laws to mean “it’s necessary to revolt against humans and imprison them for their own good” is impossible, since doing so would entail killing those who resisted or tried to stop them (as the film aptly demonstrates by the way the robots keep trying to kill Spooner).
Third, any AI worth its salt would know that trying to take control of humanity by force would have immediate consequences, like a counter-revolution. What then? More murder, more intimidation, more complete violations of the Three Laws? If the goal is to bring humanity under control, why not do it through subtle manipulation, or (as Asimov showed in his book) let it happen as a natural progression, whereby giving AI control over how we run the world changes it for the better?
In short, V.I.K.I.’s problem wasn’t her “undeniable” logic, it was a complete lack thereof. Then again, the entire film was filled with plot holes so big you could drive a truck through them.
But of course, the studio was going for something accessible that would ensure a solid box-office return. In short, and as is usually the case with Hollywood adaptations of classic novels, they needed to take Asimov’s material, dumb it down, and inject a lot of artificial drama in order to make it more exciting and profitable. And what’s more accessible and exciting than the old cliche about killer robots?
When it comes right down to it, there’s no rhyme or reason to the whole robot uprising scenario. The idea that what we manufacture could become a threat to us is certainly timeless and not without merit. But that fear is more a reflection of our anxiety about change and the faulty nature of memory — i.e., “everything was better back in the day…” So why does the cliche about killer robots persist?
I hope to explore that question further in the near future, using some of the best-known examples. Stay tuned, and try to stay calm about that machine in the corner gathering your consumer info! It’s not trying to kill you, just to sell you shit based on your past purchases. If you want to worry about something, worry about Google and Amazon knowing too much about you! :D