{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Random Forests - Data Exploration\n", "===\n", "***\n", "\n", "##Introduction\n", "\n", "Now we're going to use a large and messy data set from a familiar source object and then prepare it for analysis using Random Forests.\n", "Why do we want to use Random Forests? This will become clear very shortly.\n", "\n", "We will use a data set of mobile phone accelerometer and gyroscope readings to create a predictive model. The data set is found in R Data form [1] on Amazon S3 and raw form at the UCI Repository [2] The data set readings encode data on mobile phone orientation and motion of the wearer of the phone.\n", "\n", "The subject is known to be doing one of six activities - sitting, standing, lying down, walking, walking up, and walking down.\n", "\n", "##Methods\n", "\n", "Our goal is to predict, given one data point, which activity they are doing.\n", "We set ourself a goal of creating a model with understandable variables rather than a black box model. We have the choice of creating a black box model that just has variables and coefficients. When given a data point we feed it to the model and out pops an answer. This generally works but is simply too much \"magic\" to give us any help in building our intuition or giving us any opportunity to use our domain knowledge.\n", "\n", "So we are going to open the box a bit and we are going to use domain knowledge combined with the massive power of Random Forests once we have some intuition going. We find that in the long run this is a much more satisfying approach and also, it appears, a much more powerful one.\n", "\n", "We will reduce the independent variable set to 36 variables using domain knowledge alone and then use Random Forests to predict the variable ‘activity’. \n", "This may not be the best model from the point of view of accuracy, but we want to understand what is going on and from that perspective it turns out to be much better.\n", "\n", "We use accuracy measures Positive and Negative Prediction Value, Sensitivity and Specificity to rate our model." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Data Cleanup\n", "\n", "* The given data set contains activity data for 21 subjects. \n", "* The data set has 7,352 rows with 561 numeric data columns plus 2 columns ‘subject’, an integer, and ‘activity’, a character string. \n", "* Since we have 563 total columns we will dispense with the step of creating a formal data dictionary and refer to feature_info.txt instead\n", "\n", "* Initial exploration of the data shows it has dirty column name text with a number of problems:\n", " * Duplicate column names - multiple occurrences. \n", " * Inclusion of ( ) in column names.\n", " * Extra ) in some column names.\n", " * Inclusion of ‘-’ in column names.\n", " * Inclusion of multiple ‘,’ in column names\n", " * Many column names contain “BodyBody” which we assume is a typo." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Populating the interactive namespace from numpy and matplotlib\n" ] } ], "source": [ "%pylab inline\n", "import pandas as pd\n", "df = pd.read_csv('../datasets/samsung/samsungdata.csv')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* We change ‘activity’ to be a categorical variable\n", "* We keep ‘subject’ as integer\n", "\n", "\n", "We want to create an interpretable model rather than use Random Forests as a black box. So we will need to understand our variables and leverage our intuition about them.\n", "\n", "To plan the data exploration, the documentation of the data set from the UCI website [2] is very useful and we study it in detail. \n", "Especially the file feature_info.txt is very important in understanding our variables. It is, in effect, the data dictionary which we have avoided listing here.\n", "Also the explanation for terminology which we use is in feature_info.txt. So going through it in some detail is critical." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise\n", "Do each of the above data cleanup activities on the data set.\n", "i.e.\n", "\n", "* Identify and remove duplicate column names - multiple occurrences. \n", "* Identify and fix inclusion of ( ) in column names. How will you fix this?\n", "* Identify and fix extra ) in some column names. How will you fix this?\n", "* Identify and fix Inclusion of ‘-’ in column names. How will you fix this?\n", "* Identify and fix Inclusion of multiple ‘,’ in column names. How will you fix this?\n", "* Identify and fix column names containing “BodyBody”." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "--------\n", "\n", "#### Note to students\n", "_The major value of this data set is as follows_ \n", "\n", "It teaches the implicit lesson that \n", "\n", "* You can just use blind brute force techniques and get useful results OR \n", "* You can short circuit a lot of that and use domain knowledge. This data set highlights the power you get from domain knowledge. \n", "* It also nudges us out of our comfort zone to seek supporting knowledge from semanticaly adjacent data sources to empower the analysis further. \n", "* This underlines the fact, obvious in restrospect, that you never get the data and all supporting information in a neat bundle. \n", "* You have to clean it up - we learnt that earlier, but we also may have to be willing to expand our knowledge, do a little research to enhance our background expertise. \n", "\n", "\n", "So this particular data set may seem a little techy but it could easily be in the direction of bio, or finance or mechanics of fractures or sports analytics or whatever - a data scientist should be willing to get hands *and* mind dirty. The most successful ones are/will be the ones that are willing to be interdisciplinary. \n", "\n", "__That's__ the implicit lesson here. \n", "\n", "----------" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Aside from understanding what each variable represents, we also want to get some technical background about the meaning of each variable. \n", "So we use the Android Developer Reference [3] to educate ourselves about each of the physical parameters that are important.\n", "In this way we extend our domain knowledge so that we understand the language of the data - we allow it to come alive and figuratively speak to us and reveal it's secrets. The more we learn about the background context from which the data comes, the better, faster, and deeper our exploration of the data will be.\n", "\n", "In this case see that the variables have X, Y, Z prefixes/suffixes and the Android Developer Reference [3] gives us the specific reference frame with which these are measured. They are vector components of jerk and acceleration, the angles are measured with respect to the direction of gravity or more precisely the vector acceleration due to gravity.\n", "We use this information and combine it with some intuition about motion, velocity, acceleration etc." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Variable Reduction\n", "\n", "So we dig into the variables and make some quick notes.\n", "\n", "Before we go further, you'll need to open a file in the dataset directory for the HAR data set.\n", "There is a file called feature_info.txt. This file describes each feature, it's physical significance and also describes features that are derived from raw data by doing some averaging, or sampling or some operation that gives a numerical results.\n", "\n", "We want to look at \n", "\n", "a) all the variable names \n", "b) physical quantities\n", "\n", "and take some time to understand these." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once we spend some time doing all that, we can extract some useful guidelines using physical understanding and common sense.\n", "\n", "* In static activities (sit, stand, lie down) motion information will not be very useful.\n", "* In the dynamic activities (3 types of walking) motion will be significant.\n", "* Angle variables will be useful both in differentiating “lie vs stand” and “walk up vs walk down”.\n", "* Acceleration and Jerk variables are important in distinguishing various kinds of motion while filtering out random tremors while static.\n", "* Mag and angle variables contain the same info as (= strongly correlated with) XYZ variables \n", "* We choose to focus on the latter as they are simpler to reason about. \n", "* This is a very important point to understand as it results in elimination of a few hundred variables.\n", "* We ignore the *band* variables as we have no simple way to interpret the meaning and relate them to physical activities.\n", "* -mean and -std are important, -skewness and -kurtosis may also be hence we include all these.\n", "* We see the usefulness of some of these variables as predictors in Figure 1. \n", "which shows some of our exploration and validates our thinking." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "\n", "Figure 1. Using a histogram of Body Acceleration Magnitude to evaluate that variable as a predictor of static vs dynamic activities. This is an example of data exploration in support of our heuristic variable selection using domain knowledge.\n", "\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Eliminating confounders\n", "\n", "In dropping the -X -Y -Z variables (cartesian coordinates) we removed a large number of confounding variables as these have information strongly correlated with Magnitude + Angle (polar coordinates). There may still be some confounding influences but the remaining effects are hard to interpret. \n", "\n", "From common sense we see other variables -min, -max, -mad have correlations with mean/std so we drop all these confounders also.\n", "The number of variables is now reduced to 37 as below. \n", "\n", "----\n", "\n", "Note to reviewers - we do some tedious name mapping to keep the semantics intact since we want to have a \"white box\" like model.\n", "If we don't we can just take the remaining variables and map them to v1, v2 ..... v37. This would be a couple of lines of code and explanation but we would lose a lot of the value we derived from retaining interpretability using domain knowledge. So we soldier on for just one last step and then we are into the happy land of analysis\n", "\n", "----" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Name transformations\n", "\n", "To be able to explore the data easily we rename variables and simplify them for readability as follows.\n", "\n", "We drop the \"Body\" and \"Mag\" wherever they occur as these are common to all our remaining variables.\n", "We map ‘mean’ to Mean and ‘std’ to SD\n", "\n", "So e.g.\n", "\n", "tAccBodyMag-mean -> tAccMean \n", "fAccBodyMag-std -> fAccSD \n", "etc. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Results\n", "\n", "The reduced set of selected variables with transformed names is now (with meaningful groupings): \n", "\n", "* tAccMean, tAccSD tJerkMean, tJerkSD\n", "* tGyroMean, tGyroSD tGyroJerkMean, tGyroJerkSD\n", "* fAccMean, fAccSD, fJerkMean, fJerkSD,\n", "* fGyroMean, fGyroSD, fGyroJerkMean, fGyroJerkSD,\n", "* fGyroMeanFreq, fGyroJerkMeanFreq fAccMeanFreq, fJerkMeanFreq\n", "* fAccSkewness, fAccKurtosis, fJerkSkewness, fJerkKurtosis\n", "* fGyroSkewness, fGyroKurtosis fGyroJerkSkewness, fGyroJerkKurtosis\n", "* angleAccGravity, angleJerkGravity angleGyroGravity, angleGyroJerkGravity\n", "* angleXGravity, angleYGravity, angleZGravity\n", "* subject, activity " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Conclusions\n", "\n", "Now after all these data cleanup calisthenics we raise our weary heads and notice something pleasantly surprising and positively encouraging.\n", "\n", "These variables are primarily magnitudes of acceleration and jerk with their statistics, along with angle variables. This encourages us to think that our approach of focusing on domain knowledge, doing some extra reading and research and using some elementary physical intuition\n", "seems to be bearing fruit. \n", "\n", "This is a set of variables that is semantically compact, interpretable and relatively easy to reason about.\n", "\n", "We can do another round of winnowing down the variables, because we might have a feeling that 37 variables is too many to hold in our mind at one time - and we would be right. But at this point we bring in the heavy artillery and let the modeling software do the work, using Random Forests on this variable set." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## References\n", "\n", "[1] \n", "[2] Human Activity Recognition Using Smartphones \n", "[4] Random Forests \n", "[5] Code for computation of error measures \n" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "\n", "" ], "text/plain": [ "" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from IPython.core.display import HTML\n", "def css_styling():\n", " styles = open(\"../styles/custom.css\", \"r\").read()\n", " return HTML(styles)\n", "css_styling()" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.1" } }, "nbformat": 4, "nbformat_minor": 0 }