
Unsupervised Learning: How Machines Learn on Their Own

March 19, 2021


Unsupervised learning lets machines learn on their own.

This type of machine learning (ML) grants AI applications the ability to learn and find hidden patterns in large datasets without human supervision. Unsupervised learning is also considered an important step toward artificial general intelligence.

Labeling data is labor-intensive, time-consuming, and in many cases impractical. That's where unsupervised learning makes a difference: it grants AI applications the ability to learn without labels or supervision.

What is unsupervised learning?

Unsupervised learning (UL) is a machine learning technique used to identify patterns in datasets containing unclassified and unlabeled data points. In this learning method, an AI system is given only the input data and no corresponding output data.

Unlike supervised learning, unsupervised machine learning doesn't require a human to supervise the model. The data scientist lets the machine learn by observing data and finding patterns on its own. In other words, this sub-category of machine learning allows a system to act on the given information without any external guidance.

Unsupervised learning techniques are critical for creating artificial intelligence systems with human-like intelligence. That's because intelligent machines must be capable of making independent decisions by analyzing large volumes of untagged data.

Compared to supervised learning algorithms, UL algorithms are better suited to complex tasks. However, supervised learning models tend to produce more accurate results because the labeled data explicitly tells the system what to look for. In unsupervised learning, the results can be far less predictable.

Artificial neural networks, which make deep learning possible, might seem to be inherently backed by unsupervised learning. While they can indeed learn without labels, neural networks can also be trained in a supervised fashion if the desired output is already known.

Unsupervised learning can be a goal in itself. For example, UL models can be used to find hidden patterns in massive volumes of data and even for classifying and labeling data points. The grouping of unsorted data points is performed by identifying their similarities and differences.

Here are some reasons why unsupervised learning is essential:

  • Unlabeled data is in abundance.
  • Labeling data is a tedious task requiring human labor. However, the very process can be ML-powered, making labeling easier for the humans involved.
  • It's useful for exploring unknown and raw data.
  • It's useful for performing pattern recognition in large datasets.

Unsupervised learning can be further divided into two categories: parametric unsupervised learning and non-parametric unsupervised learning.

How unsupervised learning works

Simply put, unsupervised learning works by analyzing uncategorized, unlabeled data and finding hidden structures in it.

In supervised learning, a data scientist feeds the system with labeled data, for example, images of cats labeled as cats, allowing it to learn by example. In unsupervised learning, the data scientist provides just the photos, and it's the system's responsibility to analyze the data and work out whether they're images of cats.

Unsupervised machine learning requires massive volumes of data. In most cases, the same is true for supervised learning as the model becomes more accurate with more examples.

The process of unsupervised learning begins with the data scientists training the algorithms using the training datasets. The data points in these datasets are unlabeled and uncategorized.

The algorithm’s learning goal is to identify patterns within the dataset and categorize the data points based on the patterns it finds. In the example of cat images, the unsupervised learning algorithm can learn to identify the distinct features of cats, such as their whiskers, long tails, and retractable claws.

If you think about it, unsupervised learning is how we learn to identify and categorize things. Suppose you've never tasted ketchup or chili sauce. If you're given two "unlabeled" bottles of ketchup and chili sauce each and asked to taste them, you'll be able to differentiate between their flavors. 

You'll also be able to identify the peculiarities of both the sauces (one being sour and the other spicy) even if you don't know the names of either. Tasting each a few more times will make you more familiar with the flavor. Soon, you'll be able to group dishes based on the sauce added just by tasting them.

By analyzing the taste, you can find specific features that differentiate the two sauces and group dishes. You don't need to know the sauces' names or that of the dishes to categorize them. You might even end up calling one the sweet sauce and the other hot sauce.

This is similar to how machines identify patterns and classify data points with the help of unsupervised learning. In the same example, supervised learning would be someone telling you the names of both the sauces and how they taste beforehand.

Types of unsupervised learning

Unsupervised learning problems can be classified into clustering and association problems.

Clustering

Clustering, or cluster analysis, is the process of grouping objects into clusters. Items with the most similarities are grouped together, while the rest fall into other clusters. An example of clustering would be grouping YouTube users based on their watch history.

Depending on how they work, clustering methods can be categorized into four groups:

  • Exclusive clustering: As the name suggests, exclusive clustering specifies that a data point or object can exist only in one cluster.
  • Hierarchical clustering: Hierarchical clustering tries to create a hierarchy of clusters. There are two types: agglomerative and divisive. Agglomerative clustering follows a bottom-up approach; it initially treats each data point as an individual cluster and merges pairs of clusters as they move up the hierarchy (see the sketch after this list). Divisive clustering is the opposite: all data points start in a single cluster, which is split as you move down the hierarchy.
  • Overlapping clustering: Overlapping clustering allows a data point to belong to two or more clusters.
  • Probabilistic clustering: Probabilistic clustering uses probability distributions to create clusters. For example, "green socks," "blue socks," "green t-shirt," and "blue t-shirt" can be grouped either into the categories "green" and "blue" or into "socks" and "t-shirts".
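
As a minimal illustration of agglomerative (bottom-up) hierarchical clustering, here's a sketch using SciPy; the toy 2-D points and the choice of Ward linkage are illustrative assumptions, not part of the original example.

```python
# A minimal agglomerative (bottom-up) hierarchical clustering sketch.
# The toy 2-D points and the Ward linkage choice are illustrative assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Six unlabeled points forming two loose groups
X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
              [5.0, 5.2], [5.1, 4.9], [4.8, 5.1]])

# Repeatedly merge the closest clusters, building the hierarchy bottom-up
Z = linkage(X, method="ward")

# Cut the hierarchy into two flat clusters
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # e.g. [1 1 1 2 2 2] -- the points are grouped without any labels given
```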

Association

Association rule learning (ARL) is an unsupervised learning method used to find relations between variables in large databases. Unlike some machine learning algorithms, ARL is capable of handling non-numeric data points.

In a simpler sense, ARL is about finding how certain variables are associated with each other. For example, people who buy a motorcycle are likely to also buy a helmet.

Finding such relations can be lucrative. For example, if customers who buy Product X tend to buy Product Y, an online retailer can recommend Product Y to anyone buying Product X.

Association rule learning uses if/then statements at its core. These statements can reveal associations between otherwise independent data points. The strength of these if/then patterns, or relationships, is measured using support and confidence.

Support specifies how often the items in the if/then relationship appear together in the database. Confidence measures how often the relationship holds true, that is, the fraction of transactions containing the "if" items that also contain the "then" items.
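
To make support and confidence concrete, here's a small worked sketch; the five toy transactions and the motorcycle-helmet rule are illustrative assumptions.

```python
# Worked example: support and confidence for the rule {motorcycle} -> {helmet}.
# The five toy transactions below are illustrative assumptions.
transactions = [
    {"motorcycle", "helmet", "gloves"},
    {"motorcycle", "helmet"},
    {"motorcycle", "lock"},
    {"bicycle", "helmet"},
    {"bicycle", "lock"},
]

n = len(transactions)
both = sum(1 for t in transactions if {"motorcycle", "helmet"} <= t)   # 2 transactions have both
antecedent = sum(1 for t in transactions if "motorcycle" in t)         # 3 transactions have a motorcycle

support = both / n              # pair appears in 2 of 5 transactions -> 0.40
confidence = both / antecedent  # helmet follows motorcycle in 2 of 3 cases -> 0.67

print(f"support={support:.2f}, confidence={confidence:.2f}")
```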

Market basket analysis and web usage mining are made possible with association rule learning.

Unsupervised learning algorithms

Both clustering and association rule learning are implemented with the help of algorithms.

The Apriori algorithm, the ECLAT algorithm, and the frequent pattern (FP) growth algorithm are some of the notable algorithms used for association rule learning. Clustering is made possible by algorithms such as k-means, often paired with principal component analysis (PCA) for dimensionality reduction.

Apriori algorithm

The Apriori algorithm is built for data mining. It's useful for mining databases containing a large number of transactions, for example, a database containing the list of items bought by shoppers in a supermarket. It's used for identifying adverse effects of drugs and in market basket analysis to find the sets of items customers are more likely to buy together.
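
As a quick sketch of market basket analysis with Apriori, the example below assumes the open-source mlxtend and pandas libraries are installed; the transactions and the thresholds are made-up illustrations.

```python
# Minimal Apriori sketch using mlxtend (assumed installed: pip install mlxtend pandas).
# The transactions and the support/confidence thresholds are illustrative.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

transactions = [
    ["bread", "milk", "butter"],
    ["bread", "butter"],
    ["milk", "butter"],
    ["bread", "milk", "butter", "eggs"],
]

# One-hot encode the transactions into a boolean DataFrame
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

# Frequent itemsets appearing in at least 50% of transactions
frequent = apriori(onehot, min_support=0.5, use_colnames=True)

# Rules such as {bread} -> {butter}, filtered by confidence
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```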

ECLAT algorithm

Equivalence Class Clustering and bottom-up Lattice Traversal, or ECLAT for short, is a data mining algorithm used to mine itemsets and find frequent items.

The Apriori algorithm uses a horizontal data format and therefore needs to scan the database multiple times to identify frequent items. ECLAT, on the other hand, follows a vertical approach, mapping each item to the list of transactions that contain it, and is generally faster as it needs to scan the database only once.
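
ECLAT implementations are less common in mainstream Python libraries, so here's a hand-rolled sketch of its vertical tid-list idea rather than a production implementation; the transactions and the support threshold are assumptions.

```python
# Sketch of ECLAT's vertical representation: each item maps to the set of
# transaction IDs (a tid-list) containing it. Frequent pairs are then found by
# intersecting tid-lists instead of re-scanning the whole database.
from itertools import combinations

transactions = [["bread", "milk"], ["bread", "butter"],
                ["bread", "milk", "butter"], ["milk", "butter"]]
min_support = 2  # minimum number of transactions (illustrative threshold)

# A single pass over the database builds the vertical tid-lists
tidlists = {}
for tid, items in enumerate(transactions):
    for item in items:
        tidlists.setdefault(item, set()).add(tid)

frequent_items = {item: tids for item, tids in tidlists.items() if len(tids) >= min_support}

# Frequent 2-itemsets via tid-list intersection
frequent_pairs = {}
for (a, ta), (b, tb) in combinations(frequent_items.items(), 2):
    common = ta & tb
    if len(common) >= min_support:
        frequent_pairs[frozenset((a, b))] = len(common)

print(frequent_pairs)  # e.g. {frozenset({'bread', 'milk'}): 2, ...}
```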

Frequent pattern (FP) growth algorithm

The frequent pattern (FP) growth algorithm is an improved version of the Apriori algorithm. This algorithm represents the database in the form of a tree structure known as a frequent pattern tree (FP-tree).

This FP-tree is used for mining the most frequent patterns. While the Apriori algorithm needs to scan the database n+1 times (where n is the length of the longest frequent pattern), the FP-growth algorithm requires just two scans.
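
If mlxtend is available, its fpgrowth function can be swapped in for the apriori call in the earlier sketch (same assumed, one-hot encoded DataFrame):

```python
# Drop-in replacement for apriori() in the earlier sketch (mlxtend assumed installed)
from mlxtend.frequent_patterns import fpgrowth

frequent = fpgrowth(onehot, min_support=0.5, use_colnames=True)
```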

K-means clustering

Many variations of the k-means algorithm are widely used in the field of data science. Simply put, the k-means clustering algorithm groups similar items into clusters. The number of clusters is represented by k. So if the value of k is 3, there will be three clusters in total.

This clustering method divides the unlabeled dataset so that each data point belongs to only a single group with similar properties. The key is to find the k centers, called cluster centroids.

Each cluster has one cluster centroid, and on seeing a new data point, the algorithm determines the closest cluster to which the data point belongs based on metrics like Euclidean distance.
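
Here's a minimal k-means sketch with scikit-learn; the synthetic blobs and the choice of k=3 are illustrative assumptions.

```python
# Minimal k-means sketch with scikit-learn (assumed installed).
# The synthetic blobs and k=3 are illustrative assumptions.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Unlabeled data: 300 points generated around 3 hidden centers
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)

print(kmeans.cluster_centers_)  # the 3 learned centroids
print(kmeans.predict(X[:5]))    # closest cluster for new (or unseen) points
```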

Principal component analysis (PCA)

Principal component analysis (PCA) is a dimensionality reduction method generally used on large datasets. It works by converting a large number of variables into a smaller set of components that retains most of the information in the original dataset.

Reducing the number of variables might affect the accuracy slightly, but it could be an acceptable tradeoff for simplicity. That's because smaller datasets are easier to analyze, and machine learning algorithms don't have to sweat much to derive valuable insights.
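
A minimal PCA sketch with scikit-learn is shown below; using the bundled Iris dataset and keeping two components are illustrative choices.

```python
# Minimal PCA sketch with scikit-learn: compress 4 features down to
# 2 principal components. Using the Iris data here is an illustrative choice.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data                  # 150 samples x 4 features
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)      # 150 samples x 2 components

print(X_reduced.shape)                # (150, 2)
print(pca.explained_variance_ratio_)  # share of variance (information) kept per component
```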

Supervised vs. unsupervised learning

Supervised learning is similar to having a teacher supervise the entire learning process. The labeled training dataset is like having the correct answers to each problem you're trying to solve.

It's easier to understand whether your answer is correct or not, and the teacher will also correct you when you make a mistake. In the case of unsupervised learning, there's no teacher or right answers.

From a computational perspective, unsupervised learning is more complicated and time-consuming than supervised learning. However, it's useful for data mining and to get insights into the structure of the data before assigning any classifier (a machine learning algorithm that automatically classifies data).

Despite being useful when unlabeled data is abundant, unsupervised learning can cause some inconvenience for data scientists. Since the validation dataset used in supervised learning is also labeled, it's easier for data scientists to measure a model's accuracy. The same isn't true for unsupervised learning models.

In many cases, unsupervised learning is applied before supervised learning. This helps to identify features and create classes.

Unsupervised learning can often take place online, which allows UL algorithms to process data in real time, whereas supervised learning typically involves an offline training phase on a prepared, labeled dataset.

While unsupervised learning problems are divided into association and clustering problems, supervised learning can be further categorized into regression and classification.

Apart from supervised and unsupervised learning, there's semi-supervised learning and reinforcement learning.

Semi-supervised learning is a blend of supervised and unsupervised learning. In this machine learning technique, only a fraction of the training data is labeled, giving the system a high-level overview, and the remaining data is left unlabeled.
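
As a quick illustration of this setup, here's a sketch using scikit-learn's LabelSpreading, where unlabeled points are marked with -1; the synthetic data and the 10% labeled fraction are assumptions for demonstration.

```python
# Semi-supervised sketch: only a small fraction of points carry labels; the rest
# are marked -1 (scikit-learn's "unlabeled" convention) and receive propagated labels.
# The synthetic data and the 10% labeled fraction are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.semi_supervised import LabelSpreading

X, y_true = make_blobs(n_samples=200, centers=2, random_state=0)

# Hide 90% of the labels by setting them to -1
rng = np.random.default_rng(0)
y = y_true.copy()
y[rng.random(len(y)) < 0.9] = -1

model = LabelSpreading().fit(X, y)
print((model.transduction_ == y_true).mean())  # fraction of points labeled correctly
```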

In reinforcement learning (RL), the artificial intelligence system will encounter a game-like environment in which it has to maximize the reward. The system must learn by following the trial and error method and improve its chance of gaining the reward with each step.

Here's a quick look at the key differences between supervised and unsupervised learning.

| Unsupervised learning | Supervised learning |
|---|---|
| It’s a complex process that requires more computational resources and time. | It’s relatively simple and requires fewer computational resources. |
| The training dataset is unlabeled. | The training dataset is labeled. |
| Generally less accurate, though not necessarily. | Highly accurate. |
| Divided into association and clustering. | Divided into regression and classification. |
| Measuring the model’s accuracy (and its uncertainty) is cumbersome. | Measuring the model’s accuracy is easier. |
| The number of classes is unknown. | The number of classes is known. |
| Learning can take place online, in real time. | Learning typically takes place offline. |
| Common algorithms include Apriori, ECLAT, k-means clustering, and frequent pattern (FP) growth. | Common algorithms include linear regression, logistic regression, Naive Bayes, and support vector machines (SVM). |

Examples of unsupervised machine learning

As mentioned earlier, unsupervised learning can be a goal in itself and can be used to find hidden patterns in vast volumes of data – an unrealistic task for humans.

Here are some real-world applications of unsupervised machine learning:

  • Anomaly detection: It's a process of finding atypical data points in datasets and, therefore, useful for detecting fraudulent activities.
  • Computer vision: Identifying objects in images (image recognition) and splitting images into regions (image segmentation) are essential for self-driving cars and valuable in the healthcare industry.
  • Recommendation systems: By analyzing historical data, unsupervised learning algorithms recommend the products a customer is most likely to buy.
  • Customer persona: Unsupervised learning can help businesses build accurate customer personas by analyzing data on purchase habits.

Leaving algorithms to their own devices

The ability to learn on its own makes unsupervised learning the fastest way to analyze massive volumes of data. Of course, choosing between supervised and unsupervised (or even semi-supervised) learning depends on the problem you're trying to solve, the time available, and the volume and nature of your data. Nevertheless, unsupervised learning can make your entire effort more scalable.

The AI we have today isn't capable of world domination, let alone disobeying its creators' orders. But it makes incredible feats like self-driving cars and chatbots possible. It's called narrow AI but isn't as weak as it sounds.

