Programming Morality: Isaac Asimov’s Three Laws of Robotics

September 6, 2019

If you ask enough people about the fusion of AI and robotics, chances are you’re going to get a few answers about The Terminator.

The fear of humanity's Icarian hubris leading to the end of our civilization is nothing new; indeed, the history of robotics is littered with tales of inventions turning against their masters or otherwise being used for nefarious purposes. But is this outcome inevitable? Must we, like the Greek Titan Kronos, be usurped by our creations and cast aside like so much refuse? Or is there a way to ensure harmony between ourselves and the increasingly sophisticated automata that continue to shape our world?

The Three Laws of Robotics

Science fiction author Isaac Asimov set out to answer these questions with the development of his Three Laws of Robotics. First introduced in the 1942 short story "Runaround" and later collected in I, Robot (1950), the Laws were conceived as a rational way to govern intelligent automatons so they could exist in harmony with their organic creators.

The Three Laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later amended these with what is often called the Zeroth Law: a robot may not harm humanity or, by inaction, allow humanity to come to harm.

Asimov presents these laws as the ultimate governing principles for developing complex intelligent robots: “The three laws are the only way in which rational human beings can deal with robots — or with anything else.” The Laws are unequivocally rooted in the fear of destruction that has informed so many narratives throughout human history. In many ways, Asimov distilled our timeless paranoia into a form relevant to our world today: a safeguard against the potential threat of superintelligent AI.
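To make that hierarchy concrete, here is a deliberately toy sketch in Python of how the Laws' strict priority ordering might be encoded as an action filter. Every name here (the Action type, its fields, the permitted function) is invented for illustration, and the First Law's "through inaction" clause is simplified away; no real robot is programmed this way.

```python
from dataclasses import dataclass

# A toy model of a candidate action. All fields are illustrative
# inventions; the "through inaction" clause of the First Law is
# deliberately omitted, since modeling inaction would mean comparing
# every alternative the robot could have taken.
@dataclass
class Action:
    harms_human: bool       # would this action injure a human being?
    ordered_by_human: bool  # was this action commanded by a human?
    endangers_self: bool    # does it risk the robot's own existence?

def permitted(action: Action) -> bool:
    """Check an action against the Three Laws in strict priority order."""
    # First Law outranks everything: never injure a human.
    if action.harms_human:
        return False
    # Second Law: obey human orders; the conflict-with-First-Law case
    # was already ruled out above.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, but only after Laws 1 and 2.
    return not action.endangers_self

# A human orders the robot into danger to help someone: obeying is
# permitted even though it endangers the robot (Law 2 outranks Law 3).
rescue = Action(harms_human=False, ordered_by_human=True, endangers_self=True)
print(permitted(rescue))  # True
```

The entire scheme hinges on the order of those checks, which is exactly what the comic below plays with.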

Problems with the Laws

While the Laws outwardly prevent robots from behaving in ways harmful to humanity, there are myriad ways in which they prove inadequate as a strict moral code. Indeed, Asimov himself explored how these laws could be bent or misinterpreted to create chaos or cause harm.

Comic courtesy of XKCD

For example, let’s examine the language used by each of the laws: they revolve around protecting a broadly defined “humanity”. But who decides what constitutes a human? History is littered with examples of groups of people dehumanizing other groups. With reports emerging earlier this year about AI’s racial and gender bias, it’s not hard to imagine frightening scenarios in which this otherization is taken a step further.

Is it possible for a robot — regardless of what set of principles were used to program it — to be truly moral if its creators are themselves flawed, irrational, and often immoral beings? The unfortunate reality is that it’s impossible to say unless we devise some way to do the impossible and step outside the lens of the human mind.

The technological singularity: do we need the Laws?

The idea of astronomically intelligent AI unfettered by morality is indeed terrifying. However, this possibility is reliant on a key event taking place at some point in our future — the Technological Singularity.

Tip: The Technological Singularity is the potential future event where the advancement of technology becomes uncontrollable and irreversible by human means.

The Singularity would hypothetically occur when an intelligent machine, possessing faculties beyond that of the most gifted human, would devise a machine more intelligent than itself, and the process would repeat until the most advanced AI possible was developed.
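As a loose illustration (the numbers and the improvement rule are pure invention, not a prediction), the hypothesis amounts to a recursive loop like this:

```python
# A toy caricature of the "intelligence explosion": each generation
# designs a successor smarter than itself. The starting value, the
# 1.5x improvement factor, and the cutoff are all made up.
def intelligence_explosion(intelligence=1.1, human_level=1.0, generations=10):
    if intelligence <= human_level:
        return intelligence  # below human level, the recursion never starts
    for gen in range(generations):
        intelligence *= 1.5  # the whole hypothesis hides in this one line
        print(f"generation {gen}: intelligence {intelligence:.2f}")
    return intelligence

intelligence_explosion()
```

Everything interesting is buried in that one multiplication: whether each machine really can design a smarter successor, and at what rate, is precisely what the hypothesis assumes rather than demonstrates.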

The problem with this assumption is that whatever “psychology” computers and AI possess is entirely artificial. We don’t yet understand even human consciousness, and it remains unproven that AI can achieve self-awareness in any meaningful sense.

Current AI can only pursue goals given to it by its creators. Even when a system such as one of Google DeepMind’s agents does something impressive like teaching itself to walk, at the end of the day it was only ever handed the goal of walking. While AI is being developed that can set new goals for itself, these machines are a far cry from being fully self-aware.
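To see what “goals given to it by its creators” means in practice, here is a minimal sketch in plain Python — no real machine-learning library, and nothing to do with DeepMind’s actual systems. The agent optimizes, but the objective itself is hard-coded by a human.

```python
import random

# The "goal" is whatever objective the designer writes down; the agent
# never chooses it. Here, by fiat, larger positions are "better".
def objective(position):
    return position

def hill_climb(steps=1000):
    """Random hill climbing: keep any move that improves the objective."""
    position = 0.0
    for _ in range(steps):
        move = random.uniform(-1.0, 1.0)
        if objective(position + move) > objective(position):
            position += move
    return position

print(f"final position: {hill_climb():.1f}")  # the agent drifts right
```

Swap in a different objective function and the same loop “wants” something else entirely; the wanting never came from the machine.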


So, even with a “constructed psychology” that far surpasses human capabilities, true morality might be impossible to achieve, as an AI has no wants, desires, or “intelligence” in the human sense — only raw computing power. If that is the case, a system as strict as Asimov’s Laws may not be necessary.

Interested in using current AI technology to enhance your business endeavors? Check out G2’s real user reviews of the best Artificial Neural Network Software.


Making a mind

Regardless of what the future may hold, scrutinizing questions of morality is imperative to the safe deployment of new technology, as each advancement raises both the potential for abuse and the stakes of it. Asimov’s Laws, flawed though they may be, are aimed at the preservation of humanity in the face of an overwhelming force we can only begin to imagine. Fortunately, the kinds of AI and robots we have access to today are still a far cry from the machines that could pose an existential threat to humanity as we know it.

Now that we’ve explored the future of AI and robotics, consider learning more about the types of robots we see in the world today!
