Artificial general intelligence could be the best or worst thing that ever happens to us.
Artificial intelligence (AI) is a buzzword that stirs up plenty of conversation, both about improving the quality of human life and about the risk of worldwide annihilation. Even the sci-fi movies we love and hate repeatedly warn us of an AI-driven dystopia.
Separating fact from fiction, the AI systems we currently have aren't capable of self-awareness or reasoning, let alone dominating the world. However, at the current pace of technological advancement, creating sentient, self-aware machines may not be as far-fetched as it sounds.
In this article, we'll discuss artificial general intelligence, an AI with the same intelligence level as humans, how far we are from achieving it, and whether it will be our best innovation or our greatest threat.
Artificial general intelligence, or AGI, is an AI agent capable of learning, perceiving, comprehending, and functioning just like humans. Such an artificial intelligence system can experience consciousness and find solutions to unfamiliar problems.
However, it's a hypothetical AI and might take decades to become a reality – if ever. AGI is also referred to as true intelligence, strong AI, or full AI, and can plan, communicate, reason, make judgments, and solve puzzles.
In other words, AGI can virtually do any intellectual task a human could ever do – and more. Achieving strong AI also means that the machine can produce objective thoughts, be self-aware, and have the ability to feel, observe, and experience subjectively.
Just like a child, a strong AI would have to learn through experiences, mistakes, and inputs, continuously improving its abilities over time. The learning process might take significantly less time than for a human child, as machines don't have to rest and don't have the biological limitations of the human brain.
It's the next level beyond narrow AI, the AI we currently have. Strong AI has human cognition, while narrow AI merely tries to mimic it. It might still be built from components like machine learning, deep neural networks, and natural language processing, but probably far more advanced versions of them.
It's also with the arrival of this type of AI that many fear losing their jobs to machines, even knowledge-intensive, skill-demanding jobs. Since strong AI would have human-level intelligence and be far less prone to errors, it could complete high-skilled tasks faster and more accurately than we can.
Although we're making steady strides with artificial neural networks, the technological approach to mimicking the human brain, defining what makes something intelligent is still a challenge.
In short, some of the characteristics of artificial general intelligence are discussed below.
Logically, possessing human intelligence and cognitive skills could also mean that it might feel emotions just like us. If so, it would be just as vulnerable as humans and may strive for self-preservation, even if that means going against the will and interests of its creator.
Although some AI researchers predict AGI will become a reality within a few decades, others feel it might be a thing of the next century. And some researchers think that creating AI systems capable of thinking and acting like humans is virtually impossible.
It’s safe to say that the peculiarities of a thinking machine would be closely related to that of humans. After all, these AGI systems are trying to recreate what we gained through centuries of evolution.
Looking at the characteristics of a strong AI will also give you a better understanding of why this level of machine intelligence is complex.
Strong AI would be autonomous, meaning it wouldn't require human intervention or maintenance to work. This also means that strong AI would rely on unsupervised learning, a machine learning technique in which models find structure in unlabeled training data without a human providing the answers.
Along with unsupervised learning, strong AI would also engage in self-supervised learning, generating its own training signal and learning about the unknown through exploration and experimentation.
AGI would also exhibit goal-directedness, meaning its learning would be directed toward achieving specific goals, either programmed by its creators or self-generated. Goal-directedness also implies that the AI system would learn selectively.
Also, AGI systems must be able to learn by consuming information from multiple sources, like how-to videos, books, and blog posts – just like humans.
Additionally, humans write with specific assumptions about the reader's knowledge. For example, it doesn't have to be explicitly stated that a tiger is a carnivorous animal or that the Empire State Building is in New York, as this is considered general knowledge. Intelligent systems must be able to apply such knowledge and bring the same common sense to their learning.
It’s safe to say that the inability to create consciousness artificially is one of the prominent reasons we're still stuck with narrow AI. If we ever achieve AGI, artificial consciousness (AC) will be one thing we can expect from it.
Emotion is also a crucial ingredient of intelligence. Along with feeling emotions, machines must also be capable of deciphering the emotions of other living beings.
For example, humans tend to share food if they're full, or if they don't feel like having anything, or if they care for the individual to whom the food is being offered. Although a machine can quickly compute that food being offered is an act of sharing, the reason behind it may have to be picked up from the person's facial expressions, voice, the topic being discussed previously, and more.
Since the act of sharing may change depending on the social setting and many other factors, the machine must be able to comprehend the right emotion behind the action to respond correctly.
The ability to manage one's own and others' emotions is referred to as emotional intelligence and is one of the hallmarks of human intelligence. It's another hurdle AI must cross to attain general intelligence.
Another aspect of consciousness is the ability to recall and relive memories and to dream about the future. If a machine could dream by itself, without being explicitly programmed to do so, that would be a fascinating indicator of artificial general intelligence.
Artificial consciousness also raises numerous moral and ethical questions. If a machine achieves consciousness, should it be treated like a human being? If so, would shutting it off be akin to killing it, and therefore an evil thing to do?
Also, would it still make sense to use the pronoun "it", or would such machines be assigned gender pronouns? Finally, would machines have equal rights to humans, and would they be subject to the law just like us?
Both AI researchers and ethics experts have more questions than answers and are striving to create a workable framework for artificially conscious machines and humans to coexist. Hopefully, it will be ready before AGI becomes a reality.
AI robots already have a terrible reputation – thanks to sci-fi movies. Having human-level intelligence also means that robots running AGI programs must be social entities. Just like humans, they must get along with people and have no trouble making conversation.
They must also be able to understand human emotions by interpreting facial expressions or changes in voice tone, and comprehend contradictory statements, like ones that carry sarcasm.
They must also be able to empathize with others and decide, based on context, where to draw the line with jokes that might get personal. If an intelligent machine becomes too witty and excessively talkative, few would appreciate it.
Also, the machine must know when and when not to start a conversation – a robot that cracks jokes at a funeral would make its creator look naive.
Feliks Zemdegs holds the world record for solving a Rubik's Cube by hand: 4.22 seconds. An MIT robot recently solved the cube in 0.38 seconds, making the human record look like rookie numbers. But can the same robot drive a car, play the violin, or even fill a cup with water? Probably not.
The reason is that the robots we know today are made for specific tasks and aren't capable of doing anything else. Of course, humanoid robots like Sophia resemble humans, but they aren't dexterous like us.
Although a strong AI program wouldn't always have to be housed in a human-like body, if it were programmed into a humanoid, it would have excellent motor skills and dexterity. It could move around and act like humans, stopped only by wear and tear rather than the aches and pains of humans. Think of the humanoid robots of the film I, Robot.
Strong AI would have human-like sensory perception capabilities, called machine perception. Machine perception allows machines to take in sensory information in a way similar to humans – and potentially with greater accuracy. For example, while speaking over the phone, humans can easily distinguish a person's voice from background noise and form a rough idea of the environment the person is in. Machine perception will also be one of the many elements that grant machines sentience: the ability to feel, understand, and experience subjectively.
Machine perception will be made of numerous sub-components, including computer vision, machine hearing, machine smelling, and machine touch. As the name suggests, these components will allow a machine to see, hear, smell, and feel.
Of course, we currently have elementary versions of these components, and they require years of development before they can complement a strong AI. For example, the computer vision of self-driving cars can be fooled simply by placing stickers on stop signs.
Strong AI would also be able to cope with the overabundance of data that comes with new senses. The real world is an enormous, unending dataset with massive amounts of micro and macro detail.
Just like how humans don't try to learn deeply about everything presented in front of them, the AI will streamline its learning process by considering contextually relevant information.
One of the industries AI is yet to dominate is entertainment, which includes art, poetry, films, video games, and books, to name a few. Although you can watch numerous YouTube videos whose scripts were written by AI, they lack the human touch and logic.
Also, such scripts are created by feeding instructions and pre-existing works to the AI, which then uses algorithms to analyze patterns and combine words into sentences. These AI-generated scripts are mostly heaps of randomness; creativity is out of the question.
However, the same wouldn't be true for strong AI. Its creativity would be similar or superior to that of human beings, and it would probably be able to craft never-before-heard stories in minutes or less. Everything from painting to producing movies would be a piece of cake for general AI.
In short, AGI could do virtually anything humans can do. Its deep learning capabilities would be so advanced they might even outshine our naturally gained abilities. Although we can't be sure of the heights of its potential, here are some things it could do.
Narrow AI is already remarkable at performing numerous monotonous tasks: being available to customers 24/7 in the form of chatbots, analyzing and categorizing vast volumes of data, and, of course, driving cars.
AGI could also perform tedious yet critical tasks like garbage collection, construction, filling supermarket shelves, and even household chores. Robots with excellent motor skills will also be useful for logistics.
Anything biological needs rest, and humans are no exception. AGI robots, however, while as intelligent as humans, could work long hours without breaks and without losing concentration or accuracy.
Along with monotonous tasks, such robots could perform highly skilled work, such as doctors conducting surgeries or nurses assisting patients. Along with strengthening the security of websites and networks, such robots could also be stationed at locations that require physical protection.
Mining environments are generally hot, humid, and arguably inhumane. They pose serious health risks to workers and harm the environment, yet mining operations continue.
With the advent of AGI, such dangerous jobs could be performed by robots that are as intelligent and dexterous as humans. Even if they're damaged while working, their parts could be easily repaired or replaced, unlike a human body.
AGI robots would also make asteroid mining – a safer mining alternative – highly feasible. Requiring only energy in the form of electricity, these machines could roam beyond the solar system and bring back vast amounts of valuable resources.
Strong AI could also be our key to interstellar exploration. Intelligent machines with human-like minds could travel farther than humans with far fewer resources. They could also help us find habitable planets, or even planets inhabited by extraterrestrial lifeforms.
Although narrow AI already does a remarkable job predicting natural disasters, strong AI would be far more accurate. Trained on information about past occurrences of disasters, AGI could warn authorities about impending disasters and suggest the best evacuation plans.
Robots running general AI programs could also help in disaster relief by rescuing people from locations otherwise inaccessible to humans. They could also improve disaster response times by quickly analyzing incoming reports and allocating resources efficiently.
Although strong AI promises numerous benefits for humankind, there are good reasons to believe it might also spell the end of our species. One way of explaining such an unfavorable outcome is through the concept of technological singularity.
The technological singularity, or simply the singularity, is a theoretical point in time at which technological advancement becomes uncontrollable and, most importantly, irreversible, causing unforeseeable and unfavorable changes to human civilization.
AGI is frequently associated with singularity as it might make the most popular singularity hypothesis – intelligence explosion – a reality.
An intelligence explosion is the most plausible outcome of achieving strong AI. It's a hypothetical scenario in which an intelligent agent analyzes and understands the processes that produce its own intelligence, improves them, and then creates a successor that repeats the cycle.
After a few generations of such self-improvement cycles, the AI system might create an artificial superintelligence, or ASI, which would surpass human intelligence and capabilities and might even redefine the term "intelligence." And yes, the majority of sci-fi movies set in dystopian futures use the singularity, or AI going rogue, as their premise.
If AI systems evolve and attain superintelligence, there's little reason why they should listen to us – a species with inferior intelligence from their perspective. In short, achieving AGI would most probably mean being surrounded by machines as intelligent as us, or significantly more so.
According to Stephen Hawking, strong AI could mean the end of the human race. Once we create such an AI, it would take off on its own and redesign itself at an ever-increasing rate. Humans, restricted by slow biological evolution, wouldn't be able to compete and would be superseded.
Elon Musk holds similar views on machine intelligence. According to him, AI could be more dangerous than nuclear warheads, and the speculation that AI could become a million times more intelligent than humans is actually an understatement. He has also called strong AI our biggest existential risk.
The fact is, intelligence isn't directly measurable the way weight or speed is. Although an IQ test is a well-established way to gauge intelligence, it covers only limited aspects of it. It omits aspects such as the ability to hold a conversation, to learn and adapt, and to perform tasks that require motor skills.
Considering such points, here are four tests formulated by researchers to test whether general intelligence has arrived.
The Turing test is the first-ever proposed test for determining whether an AI system can think and exhibit human intelligence. Proposed in a paper published by the English mathematician Alan Turing in 1950, the test was originally known as the Imitation Game.
The principle behind the test is that if a machine can engage in a conversation with a human without being exposed as a machine, then it demonstrates human-level intelligence.
The Imitation Game consists of three players: two humans and the computer being tested. One human plays the interrogator and is isolated from the other human player and the computer.
The interrogator asks questions of both players and tries to figure out which of the two is the machine. The computer attempts to pass itself off as human and, if intelligent, may even answer complicated mathematical problems incorrectly to seem human.
The entire conversation takes place over a text-only channel, and the interrogator must make a rational guess as to whether each respondent is a human or a machine. If the interrogator fails to distinguish the two players' answers, the computer passes the test and could be considered to exhibit human-level intelligence.
However, many experts argue that the Turing test isn't a foolproof way to test for strong AI, because it examines only a single skill: producing convincing text. Since general AI would be capable of performing many kinds of tasks, testing a system on just one task doesn't prove much.
Another way to point out the flaws of the Turing test is with the Chinese room argument (CRA). Created by John Searle in 1980, CRA can be explained in the following scenario:
Imagine an individual who doesn't speak Chinese sitting in a closed room. The individual is given a book containing Chinese language rules, instructions, and phrases. Another individual, fluent in Chinese, sends notes written in Chinese into the room.
With the help of the rule book, the individual inside the room can choose the right responses, even though the person doesn't speak or understand Chinese. What's happening is merely a simulation of understanding: matching statements with appropriate answers.
According to Searle, the same holds for the Turing test: the AI in question could simulate conversation, but just as in the CRA, that proves nothing about consciousness or human-level intelligence. He also argues that in order to have consciousness or understanding, the machine must have an actual mind similar to a human's.
Although AI coffee machines exist, walking into a house, finding the ingredients, and then making coffee isn't something machines can currently do. With that in mind, Steve Wozniak, co-founder of Apple, put forward the coffee test, which judges an AI machine on its ability to make coffee.
To pass the coffee test, an AI machine has to enter an average American home, find everything needed to make a cup of coffee – the coffee itself, a coffee machine, water, and a mug – and then push the right buttons to brew it.
Locating ingredients and mixing them in the right amounts at an unfamiliar location is a difficult task and requires human intelligence. If an AI machine can do this without errors, it's highly likely that the machine possesses general intelligence.
Put forward by Ben Goertzel in 2012, the robot college student test advocates that if a machine can enroll in a human university, take classes, and get its degree in the same way as humans, then the machine is driven by a strong AI.
AI-MATHS, an AI system created by Chengdu Zhunxingyunxue Technology in China, completed two math tests from China's national college entrance exam, but only barely passed them.
Proposed by AI researcher Nils J. Nilsson, the employment test evaluates an AGI by how well it can perform jobs usually done by humans. Whether the AI in question qualifies as an AGI is judged by the fraction of those jobs it completes satisfactorily.
Since the inception of artificial intelligence, scientists have been experimenting and searching for methods to mimic the human brain. After all, the human brain is the most powerful hub of cognition we've ever come across. And even after half a century of innovation, we're yet to discover the recipe to create an AI system capable of learning, thinking, and acting like humans.
It's safe to say that when John McCarthy and Marvin Minsky, the founding fathers of AI, sparked the revolution back in 1956, they aspired to create an artificial intelligence system that could think and act like humans. Of course, at the time, AI research was in its infancy, and the technological feasibility of such a system was still in question.
In 1970, Minsky predicted that we would achieve artificial general intelligence within three to eight years, and that a few months after that, its powers would be incalculable. More than half a century later, an AI machine that can outsmart humans is still science fiction.
As previously mentioned, some AI researchers predict a few decades, some suggest the next century, and some feel it's practically impossible.
A large share of participants in the Future Progress in Artificial Intelligence survey think that AGI is likely to arrive before 2075 (source: Nick Bostrom).
Brains differ from computers and can't be compared directly. However, the human brain is estimated to operate at one exaFLOP – a billion billion calculations per second – many times faster than the fastest supercomputers we have today.
Since AGI's intelligence level would equal or exceed the human brain's, machines must be able to operate at one exaFLOP or higher – beyond the capabilities of current technology.
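To put the gap in rough numbers, here is a short sketch. The ~148.6 petaFLOPS figure often quoted for a top supercomputer of this era is an assumption for illustration, not a benchmark claim:

```python
# 1 exaFLOP = 10**18 floating-point operations per second.
brain_flops = 10 ** 18

# Assumed for illustration: ~148.6 petaFLOPS, a Linpack figure often
# quoted for a leading supercomputer (1 petaFLOP = 10**15 FLOPS).
supercomputer_flops = 148.6 * 10 ** 15

ratio = brain_flops / supercomputer_flops
print(f"The brain estimate is roughly {ratio:.1f}x such a machine")
```

Even under this generous comparison, a single brain-scale machine would need several of today's fastest supercomputers combined.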
Similarly, running machines at one exaFLOP or higher would require incredible amounts of energy, which might not be feasible without breakthroughs in renewable energy. The energy issue becomes even more acute for strong AI robots that move around like humans, due to energy storage constraints.
Of course, developments in quantum computing, along with ways to power machines with nuclear energy – such as the hydrogen fuel cells of the T-800 in The Terminator – would be huge wins and would increase the feasibility of AGI manifold.
In 2014, with the help of neural networks, scientists replicated the brain of a one-millimeter roundworm consisting of 302 neurons. The human brain, by contrast, contains roughly 86 billion neurons – another way of saying it might take many years before strong AI becomes a reality.
When talking about artificial intelligence, many assume that every researcher in the field is working toward strong AI, and super AI after that. Nothing could be further from the truth: most researchers are working on perfecting specific applications of AI, such as natural language processing, deep learning, and computer vision – even if those efforts may eventually converge toward AGI and then ASI. Still, there are things that can be done to reach AGI faster, and here are some of them.
As mentioned earlier, unsupervised learning is a machine learning technique in which models find structure in unlabeled training data. To put that into perspective, consider the supervised learning model that many narrow AI systems use.
In supervised learning, as the name suggests, learning takes place under a supervisor acting as a teacher. For example, consider a basket filled with different kinds of fruit that you want to teach a machine to differentiate.
The first step of the teaching process is to train the machine on a labeled dataset – for example, rows of fruit features (color, shape, size), each tagged with the fruit's name.
After training on such a dataset, the machine can use the acquired knowledge to identify any fruit that comes its way – provided the fruit's features were represented in the training data.
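The supervised setup described above can be sketched in a few lines of Python, using a hypothetical hand-labeled fruit dataset and a simple 1-nearest-neighbor rule – an illustrative toy, not a production classifier:

```python
# Hypothetical labeled training set: each example pairs measured features
# (weight in grams, texture score 0=smooth..1=rough) with a teacher-given label.
training_data = [
    ((150, 0.10), "apple"),
    ((160, 0.10), "apple"),
    ((120, 0.90), "orange"),
    ((110, 0.80), "orange"),
]

def classify(features):
    """Predict the label of the closest training example (1-nearest neighbor)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    _, label = min(training_data, key=lambda ex: dist(ex[0], features))
    return label

print(classify((155, 0.15)))  # an unseen fruit whose features sit near the apples
```

The "supervision" is entirely in the labels: the machine never discovers what an apple is; it only maps new measurements onto answers a human already provided.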
In the case of unsupervised learning, there is no teacher to train the machine nor any labeled datasets. Instead, the machine has to group unsorted information based on patterns, similarities, and differences.
Improving machines' ability to learn without supervision is a critical step toward AGI. It would allow them to learn and adapt at an exponential rate without human assistance. Additionally, unsupervised learning is closer to the way humans learn through experience.
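By contrast, here is an unsupervised sketch: the same kind of fruit measurements, but with no labels, handed to a toy k-means implementation that must discover the groupings on its own (illustrative only, with a fixed random seed):

```python
import random

# Unlabeled fruit measurements: (weight in grams, texture score).
# No labels are given - the machine must discover the groupings itself.
points = [(150, 0.1), (160, 0.1), (155, 0.2),
          (120, 0.9), (110, 0.8), (115, 0.85)]

def kmeans(points, k, iterations=10):
    """Naive k-means: alternate assignment and centroid-update steps."""
    random.seed(0)  # deterministic start for the illustration
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[j])))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl
                     else centroids[j] for j, cl in enumerate(clusters)]
    return clusters

for cluster in kmeans(points, k=2):
    print(cluster)  # the heavy-smooth and light-rough fruits separate
```

No one told the algorithm "apple" or "orange"; the structure in the data alone drives the grouping, which is the essence of unsupervised learning.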
Although tech giants, including IBM, Google, and Microsoft, are aggressively investing in this technology, quantum computing is still in its infancy. Unlike conventional computers, which rely on zeros and ones, or bits, quantum computers use quantum bits or qubits.
Qubits are made possible by principles of quantum mechanics such as superposition and entanglement and can exist in multiple states at the same time. In short, while a classical bit is always in one of two states, a register of qubits can hold a superposition of many states at once, which is what makes certain computations dramatically faster.
Unlike conventional computers, which roughly follow Moore's law, quantum computers appear to follow Neven's law, which states that quantum computing power is growing doubly exponentially relative to classical computing.
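Both claims lend themselves to quick arithmetic: an n-qubit register spans 2^n basis states, and "doubly exponential" means the exponent itself grows exponentially:

```python
# A classical n-bit register is in exactly one of 2**n states at a time;
# an n-qubit register can hold a superposition over all 2**n basis states.
def basis_states(n):
    return 2 ** n

print(basis_states(60))  # 60 qubits span about 1.15 * 10**18 basis states

# Doubly exponential growth (Neven's law): compare 2**n with 2**(2**n).
for n in range(1, 6):
    print(n, 2 ** n, 2 ** (2 ** n))
```

By n = 5 the doubly exponential column has already passed four billion, which is why even modest increases in qubit counts translate into enormous jumps in capability.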
Such growth in computing power would be a win for general AI and could help us reach and exceed the one-exaFLOP mark of the human brain. Quantum computing could also spot patterns in big data in remarkably little time.
It has been estimated that just 60 qubits would be enough to encode an amount of data equal to that produced by all of humanity in a year. Quantum computing would also improve the deep learning capabilities of AI, which are essential for achieving cognitive abilities similar to the human mind's.
Since cognition is one of the biggest hurdles we need to cross for achieving AGI, scientists are exploring a new concept called embodied cognition.
According to this concept, robots will have to learn from their surrounding environment, just like a human child does. Only then will they be able to obtain human-level cognition, and it will be a step-by-step process.
If AGI arrives, the world won't be the same anymore. Not only will AGI change the world around us, but it might change the way we see ourselves.
If we succeed in coexisting with strong AI, we might even evolve into an advanced species that combines carbon-based life with robotics. If we fail, AGI might decide our fate in nanoseconds and spell the very end of humankind.
Since AGI is set firmly in the future, want to know how artificial intelligence is already shaping lives in the 21st century? Check out how AI is influencing the banking sector.
Amal Joby is a Content Marketing Specialist at G2. He's fascinated by the human mind and hopes to decipher it in its entirety one day. In his free time, you can find him reading books, obsessing over sci-fi movies, or fighting the urge to have a slice of pizza.