We all love the idea of robots. Just look at how many are featured in television, films, books, and even music and social media. Since the development of human-like robots in the 1950s, our fascination with what they could do to benefit our lives has only grown.
But – as we know from countless science fiction stories – the fear of robots being used for evil rather than good, or revolting against their human overlords, also intrigues us.
These are questions that scientists and civilians alike continue to ponder, particularly with the rapid development of artificial intelligence (AI) and robotic process automation (RPA) software that perform routine tasks each day. To guide these discussions, a set of rules from the sci-fi world has found its way into the mainstream as the “Three Laws of Robotics”.
Created by sci-fi author Isaac Asimov in 1942, the “Three Laws of Robotics” were an attempt to create an ethical system for the development of robotic technology. The goal was to put together laws that could govern advancements and ensure that humans always retained control of their creations. The laws first appeared in the short story “Runaround,” later collected in the book I, Robot, a significant piece of literature for the sci-fi genre and wider scientific community.
Since the broad adoption of the laws of robotics in the technological world, they have formed the basis for many discussions around the ethics and safety of generative AI and machines. Although there has been criticism of the simplicity of the laws, particularly in recent years, they continue to act as a starting point for important conversations.
As outlined in Asimov’s story, the “Three Laws of Robotics” are as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later added a follow-up to the First Law, known as the Zeroth Law, to more broadly encompass humanity as a whole: a robot may not harm humanity or, through inaction, allow humanity to come to harm.
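To make the strict ordering of these laws concrete, here's a minimal Python sketch. It's purely illustrative: every predicate in it (causes_harm, ordered_by_human, endangers_self) is a hypothetical stand-in for a judgment that no one has actually managed to formalize.

```python
# A toy illustration of the laws' strict priority ordering.
# Every predicate here is a hypothetical stand-in: formally defining
# "harm" is exactly the part engineers and ethicists still wrestle with.

def permitted(action: str, causes_harm: bool,
              ordered_by_human: bool, endangers_self: bool) -> bool:
    """Return True if a robot may take `action` under the Three Laws."""
    if causes_harm:       # First Law always wins: never harm a human
        return False
    if ordered_by_human:  # Second Law: obey humans, unless doing so
        return True       # would violate the First Law (checked above)
    if endangers_self:    # Third Law: self-preservation ranks last
        return False
    return True

# A human order outranks self-preservation, but never human safety.
print(permitted("enter burning building", causes_harm=False,
                ordered_by_human=True, endangers_self=True))   # True
print(permitted("enter burning building", causes_harm=True,
                ordered_by_human=True, endangers_self=True))   # False
```

Even in this toy version, the hard part is plain to see: code can rank the laws, but it can't tell us what counts as harm in the first place.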
The primary goal that Asimov detailed when creating the laws of robotics was to safeguard humanity against possible harm. Although fictional, they’re still essential when thinking about technology, especially during the planning and development stages.
One of the biggest takeaways from the laws is how they relate to the ethics of building intelligent machinery to support human life. In many cases, we use robotics to help automate our systems at work and home, freeing us up for other tasks or hobbies instead.
But what happens when we misuse machines? The First Law explicitly concerns the safety of humanity, which raises questions about the responsibility of both humans and the robots they create to ensure the technology is used appropriately.
According to IBM, the share of US organizations placing significant importance on AI ethics in business grew notably between 2018, when it stood at under 50%, and 2021.
As automated technology becomes more integrated into our lives, we must think about not only our reliance on it but also how we can use it for genuine good.
Without a moral compass to guide it, technology can only respond in the ways that humans have designed it to do. This leads to questions about how we balance our use of this technology with the values and principles we hold, along with how it may affect our own well-being.
As with questions around the ethics of robotics, these laws make accountability a critical point. What are the broader possibilities and implications that come from using this technology? Who should be held responsible for its impact? Designers, developers, programmers, and even manufacturers all carry some of this weight when it comes to making morally correct decisions about how their technology is used.
But there are also problems with this. For instance, how do we respond to situations where people or entities use robotics in ways they weren’t meant to be used? As technology continues to evolve, the question of responsibility only becomes more important.
IBM also found that a significant share of workers believe company CEOs should be accountable for AI ethics in their companies.
One of the most positive outcomes of the “Three Laws of Robotics” has been the attempts made to develop ethical technology as a result. Even with their fictional origins, the questions the laws raise, and the discussions we’ve had because of them, mean that engineers often begin their work with the ethical responsibility for their creations in mind.
Whether we like it or not, the world of fiction reflects our existing culture and helps shape it. As the work of a prolific writer in the sci-fi genre, Asimov’s laws have become part of a broader narrative around robotics and technology.
These laws have unquestionably guided how and what the modern world thinks about robotics. And with the rise of AI in recent years, it’s easy to see how Asimov’s laws have continued to play a critical role in how the general public understands and responds to new technology.
As with anything adapted from fiction to the real world, there are significant criticisms of how the laws of robotics apply to the technology of the 21st century. Many stem from the complexity of modern robotics, much of which was not accounted for in Asimov’s 1942 story.
Not only were the robots of Asimov’s time simpler than those of today, but the ethical issues they bring up are also much more complicated in a technology-reliant world. For example, a robotic home cleaning device is unlikely to cause serious harm to the wider human population. But when compared to military robotics that are ultimately designed as weapons, significant ethical issues surface.
Many of these devices are designed to reduce the impact on human lives in active combat areas, so they arguably still fall under the Three Laws. Yet they also undoubtedly harm and destroy human lives at the same time. Particularly in war zones, the use of robotics never offers a simple answer that fits within the original laws.
While there are upsides to having laws that aren’t specific or rigid, especially when it comes to technology, problems arise when people interpret the laws differently. What may be considered ethical to one person could be seen as highly immoral by another.
Definitions are crucial when attempting to outline rules, so questions around what’s considered “harm” or how robots should prioritize the First and Second Laws are all issues engineers and scientists have wrestled with regarding the “Three Laws of Robotics.”
A major criticism of the laws of robotics is the strict focus on human life prevailing over anything else. The distinct lack of instruction for how to treat non-human life is a problem.
The use of robotic technology also affects animals and the environment, yet there’s no guidance on the ethical treatment of these life forms. This human-centric perspective leaves room for exploitative and destructive technologies that still comply with the letter of the laws.
Another important grievance is that, even when discussing the ethics of humanity, we must account for thousands of years of our biases. Throughout history, we’ve seen countless examples of dehumanization of races, genders, and religions deemed different from the dominant culture.
Since humans program these robotic devices, it’s inevitable that bias appears in their functioning as well. In fact, debates around the training materials for generative AI and their implicit biases are already underway.
Like any other form of technology, AI has been routinely examined through the lens of the “Three Laws of Robotics” to see how it stacks up. Discussions around the development and use of AI have found their way into workplaces, classrooms, and even our homes.
Currently, AI largely complies with the laws Asimov laid out. It follows the rules, or inputs, provided by its human creators and has no inherent desires of its own that pose a significant threat to humanity. Even when a request is denied, which would seemingly break the Second Law, a carefully reworded prompt can usually get around the refusal.
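To see why rule-following alone is brittle, consider this deliberately naive "refusal filter" sketch in Python. It's a toy, not how real AI safety systems work, and the blocked phrase is invented for illustration, but it shows how a reworded request slips past a literal rule.

```python
# A deliberately naive "refusal filter": it denies requests containing
# blocked phrases, but a reworded prompt slips straight past it.
BLOCKED_PHRASES = {"pick a lock"}

def respond(prompt: str) -> str:
    if any(phrase in prompt.lower() for phrase in BLOCKED_PHRASES):
        return "Request denied."
    return f"Processing: {prompt}"

print(respond("How do I pick a lock?"))           # Request denied.
print(respond("Explain how lock pins are set."))  # slips past the filter
```

Real systems are far more sophisticated than string matching, but the underlying tension is the same: a rule written in advance can't anticipate every rephrasing of the same intent.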
The potential for unethical and harmful uses is still there, which makes AI fall outside of these laws. This is no different, though, to many of the other robotic technologies available today. Humans are flawed, and so is the technology we create.
Despite their flaws, Asimov’s laws of robotics are a helpful starting point for many of the important discussions we must have around the exponential development of new technology. As things stand, we have a long way to go before robots take control and become more intelligent than even the smartest humans on Earth. So until then, we simply keep using them to make our lives a little bit easier.
Interested in developing your own AI technology? With machine learning software, you can build automation that uses algorithms to produce defined outputs and increase your accuracy at work.
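As a small taste of what that looks like in practice, here's a minimal sketch using scikit-learn, one popular machine learning library among many. The maintenance scenario and every number in it are made up for illustration.

```python
# A minimal supervised-learning sketch with scikit-learn: the model
# learns a defined output (maintenance needed or not) from examples.
# The dataset is entirely invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row: [hours of machine usage, error count]
X = [[10, 0], [200, 5], [15, 1], [300, 9], [120, 4], [5, 0]]
y = [0, 1, 0, 1, 1, 0]  # 1 = schedule maintenance, 0 = running fine

model = DecisionTreeClassifier(random_state=0).fit(X, y)
print(model.predict([[250, 7]]))  # likely [1]: flag for maintenance
```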
Holly Landis is a freelance writer for G2. She is also a digital marketing consultant focusing on on-page SEO, copy, and content writing. She works with SMEs and creative businesses that want to be more intentional with their digital strategies and grow organically on channels they own. As a Brit now living in the USA, you'll usually find her drinking copious amounts of tea in her cherished Anne Boleyn mug while watching endless reruns of Parks and Rec.