Recently I had the pleasure of giving two talks at the PyData Wrocław meetup - about reproducible data science and about explaining predictions of any classifier using the LIME project. The meetup takes place each month, enabling attendees to discuss issues they encounter in their projects or simply share knowledge.
Reproducible data science
Assuring reproducibility is one of the most important issues in any scientific project. See what techniques and tools you can use on a daily basis.
Why have you done this to me?
Explaining predictions of any classifier
Very often it's nearly impossible to explain the decision made by a black-box classifier. But there is a new open source library solving this problem (LIME). Learn what's possible by seeing it in action.
You can watch the whole presentation below (28:12):
You can download the notebook and data files used in the examples here.
The LIME project (Local Interpretable Model-agnostic Explanations) aims to reveal the motivations behind any black-box classifier's decision.
At the moment explanations are possible for text and tabular data (with continuous, discrete, or mixed features). The project is constantly evolving and you can expect many more improvements over time.
All you need to provide is an algorithm that outputs a probability for each class.
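To make that concrete, here is a minimal sketch of LIME's tabular explainer - the iris dataset and random forest are just placeholders; any model exposing per-class probabilities would do:

```python
import lime.lime_tabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Any classifier works, as long as it can output per-class probabilities
iris = load_iris()
model = RandomForestClassifier(n_estimators=100).fit(iris.data, iris.target)

explainer = lime.lime_tabular.LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=iris.target_names,
    discretize_continuous=True)

# Explain one prediction: which features pushed the model towards its answer
explanation = explainer.explain_instance(iris.data[0], model.predict_proba, num_features=4)
print(explanation.as_list())
```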
Just watch the promo-video for the project (2:55 min):
Quiver is a kick-ass tool for doing interactive visualization of Keras convolutional network features.
The way it works is that you build a Keras model and feed it into Quiver. Then, with just one line of code, you start an embedded web server with the app (built with React and Redux) and open it in your browser.
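In code it boils down to something like this - a sketch based on my reading of Quiver's README, where `model` is your compiled Keras model and the folder and port are just example arguments:

```python
from quiver_engine import server

# Launches the embedded web server with the visualization app;
# point your browser at localhost:5000 and drop sample images into ./imgs
server.launch(model, input_folder='./imgs', port=5000)
```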
Watch the video how to explore layer activations on all the different images (1:47 min):
TPOT utilizes genetic algorithms to automatically create and optimize machine learning pipelines. It will explore thousands of possibilities and get back to you with the best one.
To show you this magic I have prepared a short (3:50 min) video (loading a Kaggle dataset, configuring TPOT and training it for 60 minutes). Click if you're curious to see what will happen.
It can be used either as a CLI tool or from Python code. All you need to do is prepare some good quality data and write a little script to start the computations (see the examples). After some time (or number of iterations) the script stops, providing you with a Python snippet (based on scikit-learn) containing the best configuration found.
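A minimal sketch of the Python API looks roughly like this - the digits dataset is just a stand-in, and `max_time_mins=60` mirrors the 60-minute run from the video:

```python
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25)

# Let TPOT evolve pipelines for up to an hour, printing progress as it goes
tpot = TPOTClassifier(max_time_mins=60, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))

# Writes out an sklearn-based Python snippet with the best pipeline found
tpot.export('best_pipeline.py')
```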
Text documents are noisy. You will realize it brutally when you switch from tutorial datasets to real-world data. Cleaning things like misspellings, various abbreviations, emoticons, etc. will consume most of your time. But the feature processing step is crucial for providing quality samples for later analysis.
This article will give you a gentle introduction to some techniques for normalizing text documents.
We will discuss a couple of techniques that can be immediately used. The plan for the following sections is as follows:
Non-standard words mapping
For experimentation purposes, an environment with Python 3.5 and the NLTK module is used. If you have never used it before, check this first.
All examples assume that basic modules are loaded, and there is a helper function capable of presenting the whole text from tokens.
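The setup I assume is roughly the following; the helper's name is my own choice, not necessarily the one from the original notebook:

```python
import nltk

# Required NLTK data (download once):
# nltk.download('wordnet'); nltk.download('stopwords')

def tokens_to_text(tokens):
    """Glue a list of tokens back into a readable string."""
    return ' '.join(tokens)
```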
As a playground, a review of new Apple Watch 2 is used (from Engadget).
When the Apple Watch first came out last year, Engadget published not one but two reviews. There was the "official" review, which provided an overview of the device's features and, more important, attempted to explain who, if anyone, should buy it. Then there was a piece I wrote, focusing specifically on the watch's capabilities (actually, drawbacks) as a running watch. Although we knew that many readers would be interested in that aspect of the device, we were wary of derailing the review by geeking out about marathoning.
This year, we needn't worry about that. With the new Apple Watch Series 2, the company is explicitly positioning the device as a sports watch. In particular, the second generation brings a built-in GPS radio for more accurate distance tracking on runs, walks, hikes, bike rides and swims. Yes, swims: It's also waterproof this time, safe for submersion in up to 50 meters of water.
Beyond that, the other changes are performance-related, including a faster chip, longer battery life and a major software update that makes the watch easier to use. Even so, the first-gen version, which will continue to be sold at a lower price, is getting upgraded with the same firmware and dual-core processor. That means, then, that the Series 2's distinguishing features are mostly about fitness. And if you don't fancy yourself an athlete, we can think of an even smarter buy.
Feature processing starts with bringing all characters into lowercase and tokenizing the text using RegexpTokenizer. In this case, the regexp \w+ will extract only word characters.
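Assuming the review above is stored in a variable called `text` (my naming), the tokenization could look like this:

```python
from nltk.tokenize import RegexpTokenizer

tokenizer = RegexpTokenizer(r'\w+')        # keep only word characters
tokens = tokenizer.tokenize(text.lower())  # lowercase first, then tokenize
```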
For all the consecutive examples we assume that the text is processed this way.
Stemming is the process of reducing inflected (or sometimes derived) words to their word stem, base or root form—generally a written word form. The stem need not be identical to the morphological root of the word; it is usually sufficient that related words map to the same stem, even if this stem is not in itself a valid root. ~ Wikipedia
NLTK provides several stemmers, e.g. Snowball, Porter, Lancaster. You should try them on your own and see which one works best for your use case.
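A quick way to compare them side by side, working on the `tokens` produced above:

```python
from nltk.stem import SnowballStemmer, PorterStemmer, LancasterStemmer

stemmers = {
    'snowball': SnowballStemmer('english'),
    'porter': PorterStemmer(),
    'lancaster': LancasterStemmer(),
}

# Print the first few stems produced by each algorithm for comparison
for name, stemmer in stemmers.items():
    print(name, [stemmer.stem(token) for token in tokens[:10]])
```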
Lemmatisation is the algorithmic process of determining the lemma for a given word. ~ Wikipedia
In NLTK you can use the built-in WordNet lemmatizer. It will try to match each word to an instance within WordNet. Mind that this process returns the word in its initial form if it cannot be found, and is much slower than standard stemming.
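In code, again working on the `tokens` from the earlier step:

```python
from nltk.stem import WordNetLemmatizer

# Requires the WordNet corpus downloaded via nltk.download('wordnet')
lemmatizer = WordNetLemmatizer()
lemmatized = [lemmatizer.lemmatize(token) for token in tokens]
```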
Another normalization task is to distinguish non-standard words - for example, numbers, dates, etc. Each such word should be mapped to a common value, for example:
Mr, Mrs, Dr, ... → ABR
12/05/2015, 22/01/2016, ... → DATE
0, 12, 45.0 → NUM
This process makes it easier to summarize a text document later and to derive new features (for example, counting how many times a number appears), as sketched below.
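A rough sketch of such a mapping is shown here; the patterns are deliberately simplistic (and the date pattern assumes dates survive tokenization intact), so treat it as an illustration rather than a complete rule set:

```python
import re

def map_non_standard(token):
    """Replace numbers, dates and known abbreviations with common placeholders."""
    if re.fullmatch(r'\d{2}/\d{2}/\d{4}', token):
        return 'DATE'
    if re.fullmatch(r'\d+(\.\d+)?', token):
        return 'NUM'
    if token in {'mr', 'mrs', 'dr'}:
        return 'ABR'
    return token

mapped = [map_non_standard(token) for token in tokens]
```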
Stop words removal
Stop words usually refer to the most common words in a language. They do not provide any informative value and should be removed. Notice however that when you are generating features with bigrams, stop words might still provide some useful insights.
There are built-in lists for many languages that you can use (or extend).
Let's see what a lemmatized version with stop words removed looks like:
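A sketch of that combination, reusing the lemmatized tokens and the little `tokens_to_text` helper from the setup:

```python
from nltk.corpus import stopwords

# Built-in English stop word list (downloaded via nltk.download('stopwords'))
stop_words = set(stopwords.words('english'))
cleaned = [token for token in lemmatized if token not in stop_words]
print(tokens_to_text(cleaned))
```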
Pre-processing text data makes it more specific. It gets cleaned of things humans consider important but which do not provide any value for machines. Very often a positive byproduct of normalization is the reduction of potential features used in later analysis, which makes all computations significantly faster ("curse of dimensionality"). You should also keep in mind that some of the data is irreversibly lost.
I bet you have all heard that in order to stay fit you should consider eating 5 meals per day. This roughly means eating every 3 hours!
Inspired by a talk given by Tim Ferris, I decided to conduct a conscious experiment to track each meal I was consuming. Just for fun.
It all took about 2 months to complete, but the outcomes are very thought-provoking. I got acquainted with the brutal truth about myself.
What's more interesting - the experiment is fully repeatable. At the end of the post, I will give you some Python scripts that will be helpful to replicate the whole process and obtain your own personalized results.
First and foremost, you need some data. I used the DietSnaps app. Its purpose is to take a photo of each consumed meal. You can get it from the AppStore.
Even though the app provides an option to export all data (i.e. CSV file), I decided to take a manual approach. Each dish was labeled using the following categories:
The first thing I wanted to know is how much beer I drink each day. Let's try to visualize it with the following plot - the average number of meals and beer bottles consumed per weekday.
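If you want to replicate this for yourself, a hedged pandas sketch of such a plot could look like the following; the file name and columns (`date`, `category`) are hypothetical stand-ins for however you record your meals:

```python
import pandas as pd
import matplotlib.pyplot as plt

meals = pd.read_csv('meals.csv', parse_dates=['date'])  # one row per logged item
meals['weekday'] = meals['date'].dt.day_name()

# Average number of logged items per weekday = total count / distinct dates seen
counts = meals.groupby('weekday').size()
days_seen = meals.groupby('weekday')['date'].apply(lambda s: s.dt.date.nunique())
(counts / days_seen).plot(kind='bar', title='Average items logged per weekday')
plt.show()
```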
Oh, that's interesting. I would have bet that most beers are drunk on the weekend - but ... it's Wednesday (the exhausting middle of the week). On the other hand - my training days are Mondays and Thursdays (less consumption). Hopefully, I was eating more on those days.
Let's proceed with another question.
The whole experiment took nearly 8 weeks. The fact of taking photos of each meal has obviously made me more conscious about the quality of food. I should be eating better with each meal, right?
Everything was going well until week 3. After that, fast-food consumption kept growing. The overall number of meals with fruits is also very depressing.
"What gets measured gets managed" ~ Peter Drucker
Maybe there were few fruits and vegetables, but the dishes were overall quite healthy. I can calculate some proportions for each day (green means super-healthy eating, red - mega-unhealthy).
Mondays tend to be healthier than other days (new week begins with extra powers). Tuesdays and Thursdays are also quite ok (due to workouts). There are also some bad periods - see last three days of the fourth week. Awww.
Finally, let's try to answer how often I eat. Am I following the rule of "a meal every 3 hours"? To visualize this we will use a great concept called a time-map (you can read more about it here).
A time-map is very good for recognizing how events relate to each other in time (whether they occur in quick succession or rather slowly). Each event is plotted on an XY plot, where the axes show the time since the previous event and the time until the next event.
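A minimal sketch of the computation, assuming `timestamps` is a chronologically sorted list of meal datetimes from your own log:

```python
import numpy as np
import matplotlib.pyplot as plt

# Gaps between consecutive meals, expressed in hours
times = np.array(timestamps, dtype='datetime64[m]')
gaps = np.diff(times) / np.timedelta64(1, 'h')

before = gaps[:-1]  # time elapsed since the previous meal
after = gaps[1:]    # time until the next meal

plt.scatter(before, after, alpha=0.5)
plt.xlabel('Hours since previous meal')
plt.ylabel('Hours until next meal')
plt.show()
```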
And this is where the drama starts.
All of the plots are fully interactive. If you zoom in on the purple rectangle you will see how many meals were eaten in a healthy fashion.
It turns out that I ate roughly FIVE meals that were preceded and followed by about a 3-hour break. That's exactly 1.86% of all meals. How the hell was I supposed to build muscle if only 1.86% of all meals during 8 weeks were consumed properly?
You might be thinking that you're living well. But these beliefs should also be verified from time to time.
Painful truth: numbers don't lie.
Plotting the results is useless if it is not followed by understanding the data and coming up with some action to make a change.
If you are curious about your own performance feel free to use this Jupyter Notebook. It will generate all of the plots presented above for you.