Artificial intelligence (AI) is used as a broad catchall term covering many subfields, and AI itself is a subset of computer science.
While perusing the world of AI, you might feel like you’re plunging headfirst into shark-infested waters. The field has so many components and sub-topics that navigating them without guidance can be difficult. To sound knowledgeable about a topic, it helps to first learn some important applications of the overall field.
Applications of artificial intelligence
Each term below is described in more detail, along with an example of its present-day applications.
- Artificial narrow intelligence (ANI)
- Artificial general intelligence (AGI)
- Big data
- Computer vision
- Data mining
- Machine learning
- Deep learning
- Neural networks
- Natural language processing (NLP)
Artificial narrow intelligence (ANI), or weak AI, is a type of artificial intelligence that can only focus on one specific task or problem at a time. This is our current, widely understood definition of artificial intelligence as a whole. Narrow AI is programmed to complete a single task, such as telling the weather or playing a game.
Narrow AI is neither self-aware nor sentient. Though it may seem highly capable, ANI is bound by strict programming for singular tasks. ANI is considered weak because it does not have the capacity to meet or exceed human intelligence or learn and adapt as other formats of AI can.
Despite this, ANI machines may seem more knowledgeable and sophisticated when they surpass human knowledge or skill on the individual tasks for which they were programmed; however, these systems are operating as programmed, not actively learning new information.
An example of narrow AI is smartphone assistants like Bixby or Siri. Even though they can “communicate” with human users, their responses are limited by a lack of understanding of words and phrases beyond those they were programmed to interpret.
GIF courtesy of F. Martin Ramin via amysboyd.com
Artificial general intelligence (AGI), or strong AI, is the inverse of ANI. AGI refers to machines that can successfully perform the range of intellectual tasks a human can. This type of intelligence is considered “human-like,” given that general AI can strategize, reason, learn, and communicate in a manner aligned with human functions and processes. In addition, some AGI machines are able to see (by means of computer vision) or manipulate objects.
Currently, AGI is in its preliminary stages, with real-life applications still hypothetical but on the foreseeable horizon.
|RELATED: Learn more about intelligent apps and how they are used to automate simple tasks or give users important data.|
Big data describes extremely large volumes of structured and unstructured data. It is a field that analyzes and extracts information from datasets too large and complex to be handled by standard data-processing software.
An example of big data in product development is Netflix. With a user base of more than 100 million people, Netflix uses big data to build predictive models that improve the user experience. Whenever you get a recommendation for a show or movie that might interest you based on what you’ve watched before, Netflix is utilizing its mass of user data and preferences to curate a selection of likely matches for individual users.
GIF courtesy of Ramy Khuffash via uimovement.com
Netflix gathers big data in a multitude of ways: tracking how a user discovers a program or movie (search function, suggestion); star ratings; search queries; when or whether users pause or stop watching a show; the date(s) the content was watched; and more. Netflix uses this data to recommend new content to users and to show users “what’s trending,” which may influence some to watch a hot new program just to be in the know.
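To make the preference-matching idea concrete, here is a bare-bones sketch of one way viewing histories could drive a recommendation: suggest titles watched by the most similar other user. This is a toy illustration, not Netflix’s actual system; all user names and viewing data below are invented.

```python
# Toy sketch of recommendation from viewing data: suggest a title to a
# user based on the watch history of the most similar other user
# (a bare-bones collaborative filter). All viewing data is invented.

histories = {
    "ana":  {"Dark", "Ozark", "Mindhunter"},
    "ben":  {"Dark", "Ozark", "Narcos"},
    "cara": {"Bridgerton", "The Crown"},
}

def recommend(user):
    """Suggest unseen titles from the most similar user's history."""
    seen = histories[user]

    def overlap(other):
        # Similarity = how many titles the two users have both watched.
        return len(seen & histories[other])

    most_similar = max((u for u in histories if u != user), key=overlap)
    return sorted(histories[most_similar] - seen)

print(recommend("ana"))  # → ['Narcos']
```

Real systems weigh far more signals (ratings, pauses, dates watched, and so on), but the core idea is the same: mine what similar users did and surface what they liked.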
Computer vision is when a machine processes raw visual input, such as an image file (JPEG) or a camera feed, and not only “sees” the image but understands what it is seeing. It’s easiest to think of computer vision as the part of the human brain that processes the information received by the eyes, not the eyes themselves. In practice, the user inputs an image into the system, and the output can include quantitative and qualitative features of the image, such as color, shape, size, and classification.
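The image-in, features-out idea can be sketched in a few lines of Python. This is a deliberately tiny illustration, not a real vision library: the “image” is just a grid of RGB pixel tuples, and the function and its crude dominant-color “classification” are invented for this example.

```python
# Toy sketch of computer vision's input/output: the "image" is a grid of
# (r, g, b) pixels, and the output is a few quantitative features (size,
# average color) plus a crude qualitative label. Illustrative only.

def describe_image(pixels):
    """Return simple features for a grid of (r, g, b) pixels."""
    height = len(pixels)
    width = len(pixels[0])
    n = height * width
    avg = tuple(
        sum(px[c] for row in pixels for px in row) // n
        for c in range(3)
    )
    # Crude "classification": which color channel dominates on average?
    label = ("red", "green", "blue")[avg.index(max(avg))]
    return {"width": width, "height": height,
            "avg_color": avg, "dominant": label}

# A tiny 2x2 image that is mostly red.
image = [
    [(200, 10, 10), (180, 20, 30)],
    [(220, 5, 15), (190, 15, 25)],
]
print(describe_image(image))
```

A real system replaces the hand-written averaging with learned models, but the shape of the task is the same: pixels go in, structured descriptions come out.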
An example of computer vision is the imagery Tesla’s self-driving cars process. The system must not only recognize objects by shape, type, and color, but also process that information extremely quickly, since it is performing actions in real time.
GIF courtesy of Steph Davidson via Tesla
|RELATED: Check out 7 Roles for Artificial Intelligence in Education to see how AI is being integrated beyond the tech sphere in educational learning settings.|
Data mining is the process of sorting through large sets of data to identify recurring patterns and establish problem-solving relationships. A blended subset of computer science and statistics, data mining uses these techniques, often aided by AI, to turn raw data into useful information.
Examples of data mining occur in e-commerce, with Amazon spearheading the data-collection game. Amazon targets its customers by showing buyers recommended products “others” have bought in relation to the consumer’s intended purchase (i.e., if you’re considering buying this, people usually also purchase that). Amazon uses customer data (what people purchased plus what people said about their purchases) to identify buying patterns and infer what customers may like based on other users’ data.
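The “people who bought this also bought that” pattern above can be mined with nothing more than counting. The sketch below, with made-up orders, tallies how often product pairs appear in the same purchase and recommends the most frequent partners of an item; real retailers do this at vastly larger scale.

```python
from collections import Counter
from itertools import combinations

# Toy sketch of co-purchase mining: count how often pairs of products
# appear together in past orders, then recommend the most frequent
# partners of a given item. The orders below are invented.

orders = [
    {"camera", "sd_card", "tripod"},
    {"camera", "sd_card"},
    {"camera", "tripod"},
    {"laptop", "mouse"},
]

pair_counts = Counter()
for order in orders:
    for pair in combinations(sorted(order), 2):
        pair_counts[pair] += 1

def also_bought(item):
    """Products most often purchased alongside `item`."""
    partners = Counter()
    for (a, b), count in pair_counts.items():
        if a == item:
            partners[b] += count
        elif b == item:
            partners[a] += count
    return [product for product, _ in partners.most_common()]

print(also_bought("camera"))  # sd_card and tripod co-occur most often
```

This is the simplest form of frequent-itemset mining; production systems layer statistics on top (support, confidence, ratings data), but the counting step is the heart of it.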
Machine learning focuses on developing programs that access and use data on their own, leading machines to learn for themselves and improve from learned experiences without explicitly being programmed.
Many examples of machine learning in day-to-day life currently exist, including targeted advertisements on social media, virtual voice assistants on cell phones, facial recognition software on social media websites, and commuting predictions from apps like Google Maps or cellphone GPS data.
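To show what “learning from data without explicit programming” means, here is a minimal sketch of a 1-nearest-neighbor classifier. No rule about the classes is written anywhere; the prediction comes entirely from labeled examples. The fruit data and function names are invented for illustration.

```python
# Minimal sketch of learning from data: a 1-nearest-neighbor classifier.
# Nothing about the classes is explicitly programmed; the prediction
# comes entirely from the labeled examples. Data is invented.

def predict(samples, query):
    """Label `query` with the class of the closest training sample."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(samples, key=lambda s: dist(s[0], query))
    return nearest[1]

# (feature vector, label) pairs: (weight in g, diameter in cm) of fruit.
training = [
    ((150, 7), "apple"),
    ((160, 8), "apple"),
    ((10, 2), "grape"),
    ((12, 2), "grape"),
]

print(predict(training, (14, 3)))  # → grape
```

Add more labeled examples and the predictions improve, with no change to the code — which is exactly the “improve from experience” property the definition above describes.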
Image courtesy of vigilantsolutions.com
Deep learning is a machine learning technique that teaches computers to learn by example. In other words, deep learning allows machines to mimic how a human mind learns, classifying text, sound, and images into categories.
Examples of deep learning are found in various existing technologies, such as driverless cars and voice assistants. These systems learn from hundreds, if not thousands, of hours of video, images, and audio samples, teaching themselves to recognize patterns.
For instance, driverless cars learn how to drive and navigate roads from studying road patterns and driving habits of existing human drivers and other vehicles. Similarly, voice assistants listen to endless hours of speech data from people with different voice types, languages, and speech patterns in order to learn how to replicate human speech.
A neural network is modeled after the human brain: a pattern-recognizing algorithm built from layers of interconnected artificial “neurons.” This algorithm allows a computer to learn from and interpret sensory data in order to classify and cluster that data.
For example, a common task for neural networks is object recognition. Object recognition is when a neural network is given a large number of similar objects (street signs, images of animals, etc.) to inspect and analyze. It then interprets what the objects are while learning to identify patterns within said objects, eventually figuring out how to categorize future content.
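The pattern-recognition idea above can be boiled down to a single artificial neuron, a perceptron, learning a rule purely from labeled examples. This is a deliberately tiny sketch (real object recognition uses networks with millions of such units); the training data here is just the logical AND function, chosen because a lone neuron can learn it.

```python
# Toy sketch of a neural network's core idea: a single artificial neuron
# (perceptron) adjusts its weights from labeled examples until its
# outputs match the targets. Real networks stack many such units.

def train_perceptron(data, epochs=20, lr=0.1):
    """Learn weights and a bias for inputs labeled 0 or 1."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, target in data:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out
            # Nudge each weight toward reducing the error.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learn the AND function from examples rather than explicit rules.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([classify(w, b, x) for x, _ in data])  # → [0, 0, 0, 1]
```

The network is never told the AND rule; it discovers weights that reproduce it, which is the same learn-to-categorize loop that, at far larger scale, lets networks recognize street signs or animals.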
GIF courtesy of www.analyticsindiamag.com
Convolutional neural networks (CNNs) are a type of neural network created specifically for analyzing, classifying, and clustering visual imagery by passing images through stacked layers of learned filters. CNNs aid in recognizing objects within scenes (think: objects within a larger image, not just a standalone object) as well as digitized or handwritten text, as in optical character recognition (OCR) tools.
Generative adversarial networks are a type of neural network that can generate seemingly authentic photographs, at least on a superficial scale to human eyes. GAN-generated images take elements of photographic data and shape them into realistic-looking images of people, animals, and places.
A recent example is presented in an NVIDIA paper, A Style-Based Generator Architecture for GANs (StyleGAN). StyleGAN produces artificial imagery gradually, starting from a pixelated, low-quality image that grows into a realistic, high-resolution image of a person at https://thispersondoesnotexist.com/ or a cat at https://thiscatdoesnotexist.com/.
StyleGAN modifies features of what a person (or a cat) would look like, borrowing from actual images of existing people and cats and assigning features and physical properties at a high level of detail (e.g., skin tone, pores, hairstyle, eye color, facial hair, and more).
GIF courtesy of https://arxiv.org/pdf/1710.10196.pdf
Natural language processing (NLP) helps computers process, interpret, and analyze human language and its characteristics by using natural language data. NLP aims to close the gap between humans and computers, helping them converse with and understand one another.
An example of NLP can be seen in the speech-to-text transcription of voicemails.
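One of the first steps in almost any NLP pipeline is turning raw text into tokens a program can count and compare. The sketch below shows that step on a made-up voicemail transcript; the sentence and the simple regex tokenizer are illustrative, not from any particular NLP library.

```python
import re
from collections import Counter

# Toy sketch of an NLP building block: turn raw text into word tokens
# and simple counts a program can analyze. The transcript is invented.

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

transcript = ("Hi, it's Dana. Call me back about the meeting. "
              "The meeting moved to Friday.")
tokens = tokenize(transcript)
counts = Counter(tokens)

print(counts.most_common(2))  # "the" and "meeting" dominate
```

From counts like these, systems build upward: filtering common filler words, detecting topics (this voicemail is about a meeting), and eventually full conversational understanding.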
Sifting through the jargon
Now that you’ve learned about some of the most important applications of AI, you can breathe a sigh of relief and wipe the sweat from your brow – you did it! You’re on your way to becoming knowledgeable in all things related to artificial intelligence.
Related: Want to continue growing your expertise about artificial intelligence beyond the basics? Check out our guide on the history of AI!