The Tao of Text Normalization

Why bother?

Text documents are noisy. You will realize this brutally when you switch from tutorial datasets to real-world data. Cleaning up misspellings, abbreviations, emoticons, and similar noise will consume most of your time. But this preprocessing step is crucial for providing quality samples to later analysis.

This article provides a gentle introduction to some techniques for normalizing text documents.


We will discuss a couple of techniques that can be used immediately. The plan for the following sections is as follows:

  1. Basic processing
  2. Stemming
  3. Lemmatization
  4. Non-standard words mapping
  5. Stopwords

For experimentation purposes, an environment with Python 3.5 and the NLTK module is used. If you have never used it before, check this first.

All examples assume that the basic modules are loaded and that there is a helper function capable of reconstructing the whole text from tokens.
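For illustration, such a helper can be as simple as joining the tokens with spaces; this is a minimal sketch, and the name `tokens_to_text` is mine, not from the original article:

```python
def tokens_to_text(tokens):
    """Rebuild a readable string from a list of tokens (hypothetical helper)."""
    return " ".join(tokens)
```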

As a playground, a review of the new Apple Watch 2 (from Engadget) is used.

Basic processing

Feature processing starts with bringing all characters into lowercase and tokenizing the text using RegexpTokenizer. In this case, the regexp \w+ extracts only word characters.

For all the consecutive examples we assume that the text is processed this way.
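A minimal sketch of this step; the sample sentence is mine, standing in for the review text:

```python
from nltk.tokenize import RegexpTokenizer

# Illustrative sentence, not the actual Engadget review.
text = "The Apple Watch 2 is finally here, and it's great!"

# \w+ keeps only runs of word characters, dropping punctuation.
tokenizer = RegexpTokenizer(r"\w+")
tokens = tokenizer.tokenize(text.lower())
print(tokens)
```

Note that contractions like "it's" are split into two tokens ("it", "s") by this pattern, which may or may not be what you want.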


Stemming

Stemming is the process of reducing inflected (or sometimes derived) words to their word stem, base or root form—generally a written word form. The stem need not be identical to the morphological root of the word; it is usually sufficient that related words map to the same stem, even if this stem is not in itself a valid root. ~ Wikipedia

NLTK provides several stemmers, e.g. Snowball, Porter, and Lancaster. You should try them yourself and see which one works best for your use case.
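As a sketch, the three stemmers can be compared side by side on the same tokens (the token list is illustrative):

```python
from nltk.stem import PorterStemmer, SnowballStemmer, LancasterStemmer

tokens = ["running", "watches", "quickly", "generously"]

# Each stemmer applies different (and differently aggressive) rules,
# so the resulting stems can differ between them.
for stemmer in (PorterStemmer(), SnowballStemmer("english"), LancasterStemmer()):
    print(stemmer.__class__.__name__, [stemmer.stem(t) for t in tokens])
```

Lancaster is generally the most aggressive of the three, Porter the most conservative.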


Lemmatization

Lemmatisation is the algorithmic process of determining the lemma for a given word. ~ Wikipedia

In NLTK you can use the built-in WordNet lemmatizer. It tries to match each word to an entry in WordNet. Mind that this process returns a word in its initial form if no match is found, and it is much slower than standard stemming.

Non-standard words mapping

Another normalization task is to distinguish non-standard words such as numbers, dates, etc. Each such word should be mapped to a common value, for example:

  • Mr, Mrs, Dr, ... → ABR
  • 12/05/2015, 22/01/2016, ... → DATE
  • 0, 12, 45.0 → NUM
  • ...

This process makes it easier to summarize a text document further and to derive new features (for example, counting how many times a number appears).
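One possible sketch of such a mapper, using simple regular expressions; the rule set, patterns, and the name `map_nonstandard` are illustrative, not exhaustive:

```python
import re

# Hypothetical rules: each pattern maps a class of tokens to a placeholder.
RULES = [
    (re.compile(r"^(mr|mrs|dr|prof)$", re.IGNORECASE), "ABR"),
    (re.compile(r"^\d{2}/\d{2}/\d{4}$"), "DATE"),
    (re.compile(r"^\d+(\.\d+)?$"), "NUM"),
]

def map_nonstandard(token):
    """Replace a non-standard token with its common placeholder, if any rule matches."""
    for pattern, placeholder in RULES:
        if pattern.match(token):
            return placeholder
    return token

tokens = ["dr", "smith", "paid", "45.0", "on", "12/05/2015"]
mapped = [map_nonstandard(t) for t in tokens]
# → ['ABR', 'smith', 'paid', 'NUM', 'on', 'DATE']
```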

Stop words removal

Stop words usually refer to the most common words in a language. They carry little informative value and are usually removed. Notice, however, that when you are generating features with bigrams, stop words might still provide useful insights.

There are built-in lists for many languages that you can use (or extend).

Let's see what a lemmatized version with stop words removed looks like:


Pre-processing makes text data more specific: it is cleaned of things humans consider important but that provide no value for machines. Very often a positive byproduct of normalization is a reduction in the number of potential features used in later analysis, which makes all computation significantly faster (see the "curse of dimensionality"). You should also keep in mind that some of the data is irreversibly lost.


Let's combine software craftsmanship and data engineering skills to produce clean and understandable code.