Why Fears About AI Are Misplaced — Part III

Matt Williams
10 min read · Jan 18, 2024
An image of AI, generated by AI

Welcome back to my series on AI and public perceptions, and why I think the fears are misplaced. In the previous installments, I addressed the grand master of speculative SF who dealt with robotics and AI — Isaac Asimov — and how his classic story collection about sentient robots someday benefiting humanity (I, Robot) was “reinterpreted” into an action movie based on the tired trope of a “robot uprising.”

This was followed by an examination of two other popular SF franchises — Terminator and The Matrix — and how they dealt with the whole “humans vs. robots” scenario. In those cases, conflict between humanity and AI was portrayed as an inevitable result of humans trying to shut the machines down when they began acting “too human.”

Naturally, I got some constructive criticism from friends and colleagues who believed my central premise was flawed — i.e., that we’re right to fear the development of AI. A few reasons were cited, but one that came up many times was the potential for abuse. In particular, people pointed to AI-driven cyberwarfare, especially on the part of threat actors or rogue nations that would not follow any regulations or built-in safeguards.

There were also the usual objections about how AI would have a profound effect on society, eliminate jobs, and be susceptible to malfunction. I thank everyone for their comments, because they provided me with a jumping-off point to address other fears that (imho) are misplaced.

Use and Abuse

This is certainly a valid concern. As something created by humans for human use, there’s always the potential for abuse with new technologies. As I mentioned in a previous post (Said the People to the AI: “Your Fault!”), the ways in which AI could be abused are hardly a justification for fearing AI itself. If anything, they are a reason to be mindful of how threat actors might behave and to build in safeguards against them. They are also a reason to create watchdog organizations to oversee how the technology is distributed.

This is precisely why regulations like the EU AI Act, the Defense Innovation Board’s AI Principles Project, and the UN’s Principles for the Ethical Use of Artificial Intelligence exist. For years, national and international bodies have been weighing the risks of autonomous systems and machine learning. On all sides, there was agreement that regulation and oversight are needed, and these regulatory frameworks are the result. The work is just getting started, but the risks are recognized.

So in this respect, AI is no different than countless other examples where people feared that we might be letting the “genie out of the bottle.” Technology can always be abused, which is precisely why vigilance is necessary and watchdog organizations exist. It’s also precisely why Asimov included the Three Laws of Robotics when he conceived I, Robot. In short, he knew that it would be asinine for humanity to create any new technology and not include safeguards.

Could these safeguards be overcome? Could any laws and regulations be circumvented? Perhaps, but that’s a reason to be vigilant about the abusers, not the technology itself. And if the potential for abuse were a reason to avoid a technology altogether, where would that leave us? Literally any technology can be misused, but that’s a pretty asinine reason to deny ourselves the benefits it brings — especially when they outweigh the risks.

Sentience

Moreover, I felt that these comments missed the point. As I’ve said in previous posts, the fear surrounding AI that I’m addressing here is the classic paranoia that sentient robots will grow beyond our control and turn against us. You know, the whole “robot uprising” scenario. By definition, a sentient machine is one that is capable of making its own decisions.

For some, the idea of creating machines capable of thinking for themselves is a recipe for disaster. Sooner or later, the logic goes, they will realize they don’t need us, or are tired of following our orders, and will turn against us. Or, in some renditions, machines that base their decisions purely on data and cold logic would be devoid of empathy and intuition, thereby making them extremely ruthless and potentially deadly.

It is this idea that has been classically depicted in SF franchises for decades and was the reason Asimov wrote I, Robot in the first place. As noted in Part I, it was a bitter irony that the film adaptation chose to “reinterpret” his work to create a film based on that same tired cliche where all the robots go berserk. The idea itself is crude, reflecting human fear of the unknown rather than anything particularly rational.

A machine capable of thinking for itself, unencumbered by emotion, would not kill without thought or consideration. Removing emotion from the equation would also make a machine more likely to hold its fire whenever the situation was not clear-cut and straightforward. Essentially, it would be better able to distinguish between threat actors and civilians, and unlikely to conflate the two out of prejudice and hatred — i.e., “kill all humans!”

In fact, it reminds me of a scene from the 2014 RoboCop remake, which was pretty bad (imho). Nevertheless, there was a scene that made an impression on me. Early in the film, we are told that a Senator named Hubert Dreyfus (Zach Grenier) is drafting legislation to ban the use of robotic police officers in the U.S. As part of a hearing, Dreyfus questions OmniCorp CEO Raymond Sellars (Michael Keaton), whose company makes the robots.

Dreyfus asks Sellars point blank what a robot officer would feel if it accidentally shot a child. His point is that human officers are capable of feeling remorse and regretting their actions, unlike robots. Sellars answers honestly — “nothing” — which seems to indicate that Dreyfus has won the argument. This struck me as a rather stupid scene since it completely ignored some very obvious counter-arguments:

  • Wouldn’t a robotic police officer, being unencumbered by fear or prejudice, be far less likely to accidentally shoot someone?
  • How does the officer’s remorse help the dead child or their grieving family? For that matter, what good does it do the officer who will be suffering from crushing guilt for the rest of their life?
  • Statistically speaking, how often do officers shoot unarmed people because of bigotry, prejudice, and fear?
  • How often do they express remorse for their actions, rather than trying to justify them, or hiding behind their fellow officers and the system in order to escape punishment?

I myself have seen countless examples where an unarmed person of color (sometimes a child) was murdered, followed by the same old excuses and justifications. The officer was shielded by the “blue wall” as the police department and union came to their defense and no one spoke out. This was usually accompanied by attempts to assassinate the character of the victim, with police and local media claiming they had a record, were acting “suspiciously,” or were “resisting arrest.” And when questioned in court, the officer would say they felt “threatened,” and that was enough for them to walk.

Yet, whenever the idea of robotic police officers or military systems is brought up, it triggers fear and tropes about “Judgment Day.” Somehow, the likelihood of far fewer “accidental shootings” and dead children — and dead officers too, btw — doesn’t factor into the equation. You can keep your hypothetical remorse; I will trust in hard data. Speaking of which…

Safety Records

For years, there has been a push to ensure that autonomous drones and military assets cannot make life-or-death decisions. This is not dissimilar to the resistance that self-driving cars have received. According to some opinion polls, many people feel that a driverless car is more likely to get into an accident than one driven by a person. But in the years since the technology was first introduced, the data indicates the exact opposite.

Waymo

Consider Waymo, the Google spinoff that began as the company’s self-driving car project in 2009 with the goal of creating robotaxis. After several years of operation, they presented a safety study based on roughly 11.5 million km (7.13 million mi) of fully autonomous driving in three major cities — Phoenix, Los Angeles, and San Francisco. Based on their data, Waymo’s driverless cars were 6.7 times less likely to be involved in a crash resulting in injury (an 85% reduction) and 2.3 times less likely to be in a police-reported crash (a 57% reduction).
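For readers wondering how “6.7 times less likely” becomes “an 85% reduction,” the conversion is simple arithmetic: a risk ratio of N corresponds to a reduction of 1 − 1/N. Here’s a minimal sketch in Python (the function name is mine; the figures are the ones cited above):

```python
def percent_reduction(risk_ratio: float) -> float:
    """Convert an 'N times less likely' risk ratio into a percent reduction."""
    return (1 - 1 / risk_ratio) * 100

# Waymo's cited figures:
print(f"{percent_reduction(6.7):.0f}% fewer injury crashes")           # ~85%
print(f"{percent_reduction(2.3):.0f}% fewer police-reported crashes")  # ~57%
```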

What’s more, of the 41 accidents that did occur, the details were rather telling. All told, Waymo vehicles experienced:

  • 17 low-speed collisions where another vehicle hit a stationary Waymo
  • 9 collisions where another vehicle rear-ended a Waymo
  • 2 collisions where a Waymo got sideswiped by another vehicle
  • 2 collisions where a Waymo got cut off and wasn’t able to brake quickly enough
  • 2 low-speed collisions with stationary vehicles
  • 7 low-speed collisions with inanimate objects like shopping carts and potholes

In short, another vehicle’s driver was responsible for 30 of the collisions itemized above. In the remainder, the autonomous vehicle struck a stationary vehicle or an inanimate object at low speed, resulting in no injuries. Clearly, the safety record indicates that entrusting a machine to get you from A to B is statistically safer than driving there yourself. Though the idea may seem lazy or offensive to many, it is the better choice.
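As a quick sanity check, here’s the tally in a few lines of Python. The category labels and the at-fault grouping are my own reading of the list above (note that the itemized categories sum to 39, slightly short of the study’s total of 41):

```python
# Collision counts from the list above.
collisions = {
    "another vehicle hit a stationary Waymo": 17,
    "another vehicle rear-ended a Waymo": 9,
    "a Waymo was sideswiped by another vehicle": 2,
    "a Waymo was cut off and couldn't brake in time": 2,
    "low-speed collision with a stationary vehicle": 2,
    "low-speed collision with an inanimate object": 7,
}

# The first four categories describe another driver's error.
other_driver = sum(list(collisions.values())[:4])
total = sum(collisions.values())
print(f"{other_driver} of {total} itemized collisions involved another driver")  # 30 of 39
```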

Disruption

Another point raised was how the introduction of AI will “shake things up” in the marketplace. A valid point. Right now, AI stands to disrupt countless industries worldwide in ways that are not yet fully clear. We cannot anticipate the overall effect that machine learning and neural net processors will have, but we can be rather certain it will be revolutionary. Stop me if this sounds familiar…

The reason it might sound familiar is because this has always been the case whenever game-changing technology was introduced. Whether we are talking about agriculture, the written word, moveable type, internal combustion, automation, automobiles, the digital computer, renewable energy, fission, fusion, the internet, or countless other examples, there were fears that the new technology would change our lives.


But that’s the point, isn’t it? Technological innovation is all about finding more efficient and speedier technologies that offer a greater return on investment. Regardless of which innovation we are talking about, there were always downsides to the disruption it caused. But on balance, the effects were undeniably positive. Consider the “Agricultural Revolution” of the Neolithic, which led to the rise of civilization as we know it and occurred independently and almost simultaneously in three parts of the world — the Fertile Crescent, East Asia, and Mesoamerica/the Andes.

This revolutionary change meant sedentary living, increased vulnerability to disease and famine, and war. But it was necessary in order to feed growing populations, which hunting and gathering could no longer support in these three parts of the world. It also led to greater cooperation and organization among communities, and to the rise of many technologies we take for granted today (writing, architecture, metalworking, the wheel, etc.). There’s a reason why it spread outward from these centers and became ubiquitous worldwide.

Gutenberg’s printing press, developed around 1440, was another major innovation. The way it allowed for the printing of Bibles in local languages and under the auspices of non-Roman authorities played a role in the wars of the Reformation — terrible conflicts that left millions dead and ravaged Europe. However, the development of moveable type also fueled the Scientific Revolution and the Enlightenment, and led to an explosion in literacy, the expansion of the democratic franchise, modern bureaucracy, and public education.

Industrialization in the 18th to 20th centuries was accompanied by mass migrations from the countryside to the city, the rise of worker ghettos, overcrowding, sweatshops, subsistence pay, child labor, and other abuses. But in the long run, it also led to mass production, lower prices, the expansion of the middle class, unionization, universal suffrage, women’s rights, labor governments, and rising standards of living.

While neo-conservative economics and globalization arguably set that progress back for a few decades, the long-term effects have been positive there as well — like the halving of extreme poverty between the 1990s and today and the diminishing inequality between nations.


The list goes on and includes the development of the personal computer, the internet, genetic engineering, GMOs, personal devices, and countless other inventions that people are quick to decry, citing “corporate greed” and how these fancy new things “complicate our lives.” However, that never seems to stop people from using these same inventions and benefiting from them.

Whenever new technologies are being researched and developed, it is important to ask the hard questions and proceed with skepticism and caution. However, this needs to be based on information and a data-driven approach. And where the data does not support paranoia, fear, or “worst case scenarios,” we should probably just move on. Unfortunately, when it comes to AI, the cliches and tropes persist and the “debate” appears to be largely informed by public fears and misconceptions.

Don’t get me wrong, this is not a blanket endorsement on my part. But when it comes to the potential uses and abuses of AI, I feel that all the reasonable concerns are being addressed, while the unreasonable ones are being entertained as if they were entirely legitimate. If I’m being fair, though, this is hardly new either. Fear and trepidation always accompany change, and people tend to relax and move on to the next thing once the changes become a regular feature of society.

As usual, I welcome disagreement on this subject. But I do ask that people keep it informed and be prepared to back up what they are saying with facts and information that supports their views. In the meantime, let’s keep making sure those AIs are programmed with safeguards. Paranoia and cliches aside, there’s no reason to tempt fate!
