What Is NLP (Natural Language Processing)?

Rebecca Reynoso  |  May 31, 2019

Natural language processing is something we encounter on a regular basis – but you probably didn’t know that unless you’re already well-versed in all things AI.

NLP, or natural language processing, is used in many ways and applications we take for granted: smartphone assistants, word processors, translation apps, and automatic voice response on customer service calls among other things.

You interact with NLP all the time without even realizing it.

Smartphones come with pre-installed voice assistants that use NLP to understand and interpret human speech and provide text- or voice-based responses to user queries. Word processors check written user input for accuracy in grammar, syntax, and logic. Translation apps process one language and convert it, in writing or speech, into another. And interactive voice response (IVR) applications listen to verbal requests and commands over the phone and guide people to the correct human customer service representative.

Though all of the above examples are used with regularity, few people understand how they’re able to function. The answer is NLP.  

What is NLP?

Natural language processing draws from multiple disciplines, including computer science and linguistics, and has its roots in artificial intelligence. As human dependence on computers has grown over the past few decades, so has the need to understand how to communicate with them in a universally comprehensible language.

The purpose of NLP is to bridge the gap between computer and human communication. In short, NLP was developed to make it easier for computers to understand us and for us to understand them.

To simplify, humans speak a native language (e.g. English, Spanish, or German). The same is true for computers – except their language consists of ones and zeroes that linguistically mean nothing to us unless and until they are translated. 

A brief historical overview of NLP

Natural language processing was theorized as far back as the 1600s by René Descartes and Gottfried Wilhelm Leibniz, who proposed codes that could relate words between languages. Due to technological barriers, however, actionable outcomes regarding NLP did not come to fruition until much later.

1950s: the early days of modern NLP

It wasn’t until 1954 that the Georgetown-IBM experiment took place. This experiment was the first of its kind to demonstrate automatic machine translation (MT), converting more than 60 Russian sentences into English.

Where MT once seemed an impossible feat for average people to fathom, myriad machine translation tools exist today.

After the success of the Georgetown-IBM experiment, a combination of advancements in artificial intelligence and machine translation propelled further research into NLP. Findings in linguistics – particularly Noam Chomsky's work on universal grammar and standardized rules – were then applied to machine translation systems.

Applying these rules to MT systems made it possible for computers to adhere to a standard understanding of language from which they could learn how to read and interpret text or speech.

1960s: NLP takes a major step forward

In 1969, AI theorist and cognitive psychologist Roger Schank developed the conceptual dependency theory model for natural language understanding. Schank’s goal was to make meaning (intent) independent from input (the words actually written). In other words, as long as the intent behind the words was the same, the particular way they were typed into a computer system should not matter.

For example, the sentences “John gave Mary a book.” and “John gave a book to Mary.” mean the same thing. Humans can immediately tell that the intent is identical, but computer systems need to be trained to understand that intent can be identical even if word arrangements differ – hence the necessity of teaching systems to comprehend logical inference.
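To make that idea concrete, here is a toy Python sketch – not Schank's actual formalism – in which both word orders map to the same canonical "frame," so whatever reads the frame downstream only ever sees the intent:

    # Toy illustration only (not Schank's actual formalism): two differently
    # worded sentences map to the same conceptual "frame" of intent.
    def to_frame(tokens):
        """Reduce a simple 'give' sentence to an action/actor/object/recipient frame."""
        if "to" in tokens:                              # "John gave a book to Mary"
            actor, recipient = tokens[0], tokens[-1]
            obj = tokens[tokens.index("gave") + 2]      # skip the article "a"
        else:                                           # "John gave Mary a book"
            actor, recipient = tokens[0], tokens[2]
            obj = tokens[-1]
        return {"action": "TRANSFER", "actor": actor, "object": obj, "recipient": recipient}

    s1 = "John gave Mary a book".split()
    s2 = "John gave a book to Mary".split()
    assert to_frame(s1) == to_frame(s2)                 # same intent, different wording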

TIP: Read an in-depth explanation of the conceptual dependency theory here.

1970s: NLP sees continued growth

NLP researcher William A. Woods (better known as Bill Woods) introduced the augmented transition network (ATN) in 1970. ATNs represent natural language input and can theoretically analyze sentence structure regardless of complexity.

ATNs are an extension of RTNs, or recursive transition networks. Both are graph-theoretic schematics used to represent the rules of a context-free grammar – the kind of grammar applied to programming languages, NLP, and lexical analysis. ATNs build on the idea of parsing sentences efficiently with finite-state (Markov-style) models that capture a language's regular, consistent grammatical structures, making it easier for systems to understand language inputs.
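As a rough illustration of the context-free grammars that ATNs extend (not an ATN implementation itself), the following sketch parses a sentence with NLTK's chart parser and a tiny hand-written grammar:

    # A tiny context-free grammar parsed with NLTK's chart parser - an illustration
    # of the grammars ATNs extend, not an ATN itself. Requires: pip install nltk
    import nltk

    grammar = nltk.CFG.fromstring("""
        S  -> NP VP
        NP -> 'John' | 'Mary' | Det N
        VP -> V NP NP | V NP PP
        PP -> P NP
        Det -> 'a'
        N  -> 'book'
        V  -> 'gave'
        P  -> 'to'
    """)

    parser = nltk.ChartParser(grammar)
    for tree in parser.parse("John gave a book to Mary".split()):
        print(tree)   # prints the sentence's parse tree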

Throughout the ’70s, programmers began writing conceptual ontologies – in layman’s terms, formal representations of the categories within a domain and the relationships among them.

1980s: NLP expands to new models

For much of the 1980s, NLP systems were based primarily on handwritten rule-based models, until machine learning (ML) algorithms for language processing became more prevalent. The earliest of these ML algorithms were decision trees that produced “if X, then Y” rules similar to the complex sets of handwritten rules that came before.
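The spirit of those "if X, then Y" systems can be sketched as a hand-written rule cascade, like this toy part-of-speech guesser (purely illustrative, not a historical system):

    # Toy cascade of hand-written "if X, then Y" rules for guessing a word's
    # part of speech - purely illustrative, not a historical system.
    def guess_pos(word):
        if word.endswith("ing"):
            return "VERB"
        elif word.endswith("ly"):
            return "ADVERB"
        elif word.endswith("ed"):
            return "VERB (past tense)"
        elif word[0].isupper():
            return "PROPER NOUN"
        else:
            return "NOUN (default guess)"

    for w in ["running", "quickly", "jumped", "Chicago", "book"]:
        print(w, "->", guess_pos(w))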

TIP: Check out what machine learning software is available today!

In contrast, modern NLP systems focus on statistical models that evaluate data inputs – even inputs that contain errors – with a higher level of accuracy than previous systems. Recent NLP research also focuses on supervised and unsupervised learning algorithms, which can learn from data that has not been annotated by hand, or from a combination of hand-annotated and unannotated data.
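A minimal sketch of the modern statistical approach – here a supervised Naive Bayes classifier from scikit-learn, trained on a tiny made-up set of labeled sentences – looks something like this:

    # Minimal supervised statistical sketch: a Naive Bayes text classifier from
    # scikit-learn trained on a tiny, made-up labeled dataset.
    # Requires: pip install scikit-learn
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    texts = ["great product, works well", "terrible, broke after a day",
             "love it, highly recommend", "awful experience, do not buy"]
    labels = ["positive", "negative", "positive", "negative"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)                 # learn word-frequency statistics from examples

    print(model.predict(["works great, recommend it"]))   # most likely ['positive']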

2000s and beyond: NLP as a modern commodity

Nowadays, we see NLP in our everyday lives. The technological boom of the new millennium has given way to more applications for NLP than ever before. With AI poised for continued growth, it should be no surprise that NLP will continue to play a key role in the future of tech.

Rule-based vs. statistical modeling in NLP systems

As mentioned previously, early NLP systems were developed using rule-based models that were handwritten and hand-coded. However, the strict, fixed nature of these models makes them highly inferior to the statistical models used in modern NLP systems.

Statistical models call on machine learning to infer and interpret language rules by analyzing large datasets of real-world examples. This approach is more flexible in handling the linguistic variances that naturally occur in human speech patterns.

Because the machine learning algorithms in statistical modeling are pattern- and inference-based, they can infer and interpret language to a greater degree than rule-based models. ML algorithms are programmed to learn from recurring patterns, so they can learn to automatically focus on certain areas of input text. The same cannot be said for rule-based models; in order to “learn,” a rule-based model's handwritten rules have to be altered by hand, leaving room for mistakes.

Thus, statistical models are superior and more efficient due to their automated ability to adapt to changes with speed and dexterity in a way that rule-based models cannot. In sum, statistical models based on ML data can be made more accurate by increasing the data input whereas rule-based models can only be made more accurate by increasing the complexity of handwritten rules, which is laborious and prone to error. 
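The contrast is easy to see in a toy example: a fixed keyword rule misreads negation, and fixing that means editing the rules by hand, whereas a statistical model like the one sketched earlier can learn such patterns from more labeled data:

    # Toy rule-based sentiment check: fixed keyword rules are brittle.
    POSITIVE = {"good", "great", "love"}
    NEGATIVE = {"bad", "terrible", "hate"}

    def rule_based_sentiment(text):
        words = set(text.lower().split())
        if words & POSITIVE:
            return "positive"
        if words & NEGATIVE:
            return "negative"
        return "unknown"

    print(rule_based_sentiment("the battery life is not good at all"))
    # -> "positive" (wrong): the rule fires on "good" and ignores "not".
    # A statistical model trained on enough labeled examples can pick up the
    # "not good" pattern without anyone editing rules by hand.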

How does NLP actually work?

Natural language processing offers a variety of language-interpretation techniques, as described above: machine learning algorithms, statistical modeling, and rule-based modeling. More often than not, an amalgamation of these techniques is used to help computer systems process human language data.

NLP was created with the intent of breaking down large sets of human language data into smaller, shorter, more logical components so that the semantic and syntactic purpose of our spoken and written language can be understood. NLP also serves to identify the relationships within, and the meaning behind, our linguistic choices.
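For instance, breaking a sentence into smaller syntactic components might look like the following sketch, which uses NLTK's tokenizer and part-of-speech tagger:

    # Breaking a sentence into smaller components: tokenization and part-of-speech
    # tagging with NLTK. Requires: pip install nltk
    import nltk
    nltk.download("punkt")                        # tokenizer data (newer NLTK may also need "punkt_tab")
    nltk.download("averaged_perceptron_tagger")   # POS tagger data (name varies slightly by NLTK version)

    sentence = "Natural language processing helps computers understand people."
    tokens = nltk.word_tokenize(sentence)         # split the sentence into word-level units
    print(nltk.pos_tag(tokens))                   # label each token with its part of speech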

Some aspects of NLP include text-to-speech or speech-to-text conversion; machine translation from one language to another; categorizing, indexing, and summarizing written documents; and identifying mood and opinions within text- and voice-based data.
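One of those aspects – identifying mood and opinion in text – can be sketched with NLTK's built-in VADER sentiment analyzer:

    # Identifying mood and opinion in text with NLTK's VADER sentiment analyzer.
    # Requires: pip install nltk
    import nltk
    nltk.download("vader_lexicon")                # VADER's word-sentiment lexicon
    from nltk.sentiment import SentimentIntensityAnalyzer

    sia = SentimentIntensityAnalyzer()
    print(sia.polarity_scores("I love this phone, but the battery is terrible."))
    # -> a dict of negative/neutral/positive/compound scores for the sentence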

The overarching purpose is to take language inputs and use algorithms to transform them into data of greater value.

Why is NLP important?

The complexity and diversity of human language are astonishing – so vast that even humans themselves cannot comprehend it all. Given this, modern machines and computer systems are tasked with the labor of understanding text and speech, interpreting it, and making sense of intent.

It helps computers analyze data faster

With such large volumes of language-based data and so many intelligent systems in existence today, it is crucial for computers and other systems to be able to communicate with their human counterparts. Machines equipped with ML algorithms can analyze and understand more language data than humans can because they have the ability to learn from patterns found in stored data.

It helps with expedited tech growth

As more devices are introduced into society, the necessity of natural language processing for intelligent systems will continue to grow. The difficulty behind language data analysis for NLP systems comes from unstructured data. Within the world's thousands of languages are subsets of regional dialects, slang, and made-up words.

In text-based input, the same is true. People write in slang, with emojis and symbols, abbreviations, and without proper grammatical standards or punctuation. These variances can cause confusion for NLP systems because they lack structure. 

NLP systems were developed to help bring semantic understanding to languages so that communication between man and machine can result in logical, positive interactions. Overall, NLP systems help resolve confusing, ambiguous language by adding structure (by means of speech recognition and text analysis) to the data they receive.
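A toy example of adding that structure: a small normalization pass that lowercases text, strips symbols and emojis, and expands slang before any further analysis (the slang dictionary below is made up for illustration):

    # Toy normalization pass that adds a little structure to slangy, unpunctuated
    # text before further analysis. The slang map is made up for illustration.
    import re

    SLANG = {"u": "you", "gr8": "great", "thx": "thanks", "btw": "by the way"}

    def normalize(text):
        text = text.lower()
        text = re.sub(r"[^\w\s]", " ", text)             # strip punctuation, symbols, emojis
        words = [SLANG.get(w, w) for w in text.split()]  # expand known slang/abbreviations
        return " ".join(words)

    print(normalize("BTW thx, u did gr8!! 🎉"))
    # -> "by the way thanks you did great"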

Examples and use cases of NLP

Identifying real-world examples of things that use natural language processing is a lot easier than breaking down the intricacies of the science behind it. In our day-to-day lives, we use NLP on a regular basis. 

The most obvious use case of NLP is voice assistants like Alexa, Google Home, Cortana, Bixby, and Siri. Every time you say a “wake” word to your smart home device or ask your phone to look something up for you, natural language processing is being used. In fact, voice search statistics show that voice assistant usage is on the rise for Generations X, Y, and Z. 

Other examples of NLP in action include email filtering (e.g. spam mail keyword identification) and phone-to-text transcription (i.e. when you get a voicemail on your phone, but instead of dialing to listen to the message, a transcription of the voice message appears on your screen). 

Are you still processing this information?

As you’ve been reading about NLP and learning the ins and outs of the topic, you’ve been absorbing information in much the same way machine learning algorithms do. How? Well, you’ve read, understood, and extracted this guide – and hopefully you’ll share it to help others learn what you just have.

Natural language processing might seem intimidating at first, but when broken down into bite-sized pieces, you’ll realize you’ve kind of been doing it this whole time…just not as well as a computer can!

Want to continue learning? Read about the future of machine learning to help grow your expertise!

Rebecca Reynoso
Author

Rebecca Reynoso is a Content Marketing Associate at G2. Her passion for writing led her to study English, receiving a BA and MA from UIC and DePaul, respectively. In her free time, she enjoys watching and attending Blackhawks games as well as spending time with her family and cat.