{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Random Forests - Analysis.\n", "===\n", "***\n", "\n", "## Introduction\n", "\n", "Our goal for this phase is to use the reduced variable data set from our exploration phase to create a model predicting human activity, using Random Forests.\n", "\n", "To remind ourselves, the variables we will use are:\n", "\n", "* tAccMean, tAccSD tJerkMean, tJerkSD\n", "* tGyroMean, tGyroSD tGyroJerkMean, tGyroJerkSD\n", "* fAccMean, fAccSD, fJerkMean, fJerkSD,\n", "* fGyroMean, fGyroSD, fGyroJerkMean, fGyroJerkSD,\n", "* fGyroMeanFreq, fGyroJerkMeanFreq fAccMeanFreq, fJerkMeanFreq\n", "* fAccSkewness, fAccKurtosis, fJerkSkewness, fJerkKurtosis\n", "* fGyroSkewness, fGyroKurtosis fGyroJerkSkewness, fGyroJerkKurtosis\n", "* angleAccGravity, angleJerkGravity angleGyroGravity, angleGyroJerkGravity\n", "* angleXGravity, angleYGravity, angleZGravity\n", "* subject, activity \n", "\n", "Of these, \n", "\n", "* all except the last two are numeric. \n", "* 'subject' is an integer identifying a person, one of 21 from 1 to 27 with some missing. \n", "* 'activity' is a categorical variable - one of six activities identified earlier - \n", "* 'sitting', 'standing', 'lying', 'walking', 'walking up', 'walking down'. \n", "\n", "Why do we use Random Forests? We are using Random Forests [4] in our model due to the relatively high accuracy of this method and the complexity of our data.\n", "\n", "These are two major reasons to bring out the heavy artillery of Random Forests, especially when we have too many attrubutes even in a simplified set of attributes. \n", "\n", "## Methods\n", "\n", "\n", "### Expository Segue on Experiment design \n", "\n", "\n", "Typically in analysing such data sets we are creating a model that uses the data we are given. How do we know the model will work for other data? The real answer is \"We don't\". And there's no way we can be sure that we can create a model that will work for new data. \n", "\n", "But what we **can** do is reduce the chances that we are creating an \"overfitted\" model. That is a technical term for a model that works wonderfully on the given data (fitted to it) and fails on the next data set that comes along. There's a way to design our modeling experiment so we avoid that trap. Here's how. \n", "\n", "We take the data set and we keep some of the given data aside and we don't use it for modeling at all. This \"held out\" set is called the test set.\n", "\n", "Then we take our remaining data and we further divide it so that we have a larger set called the training set and a smaller set we call the validation set.\n", "Then we create our model using the training set and look at how well it performs on the validation set (i.e. not counting the \"held out\" data). \n", "We are allowed to tweak our modeling as much as we want using the training and validations sets but we are **not** allowed to look at the held out, test set until we are ready to declare we are done modeling. Then we apply the model to our held out test data -- when that test data also shows an acceptable error rate we have a good model.\n", "\n", "However if we get a bad error rate from the test data we have a problem. We cannot keep tweaking the model to get a better test result because then we are simply overfitting again. So what do we do? We are allowed to mix up all the data, hold out a new test set which has to be different at least in part from the old one, and then we repeat the exercise. 
In some cases, when we are given a data set by a third party, we are not shown the held-out set at all, and we have to submit our model without testing it against the held-out set.\n", "\n",
"The third party then applies our model to the held-out test set, and we don't get to mix it all up. We only get one shot. We're going to do that here and see how well we do." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Our experiment design\n", "\n",
"We hold out the last 4 subjects in the data as a test set, and the rest are used for our modeling. Why do we do this? The data set, if we look at the supporting docs, suggests that we use the last 4 as test subjects, so in our first pass we might as well just follow the instructions. All rows relating to these 4 subjects will be held out and not used during modeling at all.\n", "\n",
"Of the 17 remaining subjects, we use the first 12 as the training set and the remaining 5 as the validation set. Why this proportion? Typically 30% of the training data is used as the validation set and 70% is used for actual training. The validation set serves as our \"internal\" test set: it is not used in modeling and is held out for each validation step. The difference between the actual test set and the validation set is that we are allowed to keep tuning our model, as long as we keep mixing up the data after each attempt and re-extracting a validation set.\n", "\n",
"There is also a methodology that takes this even further and does n-fold validation. The training set is divided into n (usually 10) equal parts, and each part in turn is used as a validation set while the rest are used for training, so that n such modeling exercises are conducted.\n",
"Then some averaging is done to create the best model.\n", "\n",
"We do not do n-fold validation here.\n", "\n",
"We divided our data based on the 'subject' variable because we want to keep all data for the test individuals separate from the data used for modeling. What does this mean? The test data should be data about which we have no information at all - i.e. it needs to be independent of the training data. So suppose we did not separate out the data on the 4 test individuals, but instead decided to mix up all the rows and take, say, 20% as test data, chosen randomly.\n", "\n",
"Note that we have some 7,000-plus rows, so we have a few hundred rows for each individual. If we mixed it all up and chose randomly, we would most probably get data from all 21 individuals in our test set, and from all 21 in our training set. The test set would not be independent of the training set, as both would contain somewhat similar mixtures of data.\n",
"Thus the held-out set would not really provide a useful reality check - we would already have statistically similar information in our training set, i.e. the test set would have leaked into the training set.\n", "\n",
"This would be similar to the situation where we had a homework exercise which was later solved in class the next day, and then received a quiz with questions very similar to the homework, with just some numbers changed. It would not really test our understanding of the subject matter, only our ability to understand the homework (= overfitting).\n", "\n",
"So when we set aside as our test set all rows for certain individuals, we know that the training set has no leaked information about those individuals. It is important to be this diligent about the test data, so that we can have some confidence that our model is not overfitting our sample data."
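] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make the experiment design concrete, here is a minimal sketch of how such a subject-based split might be coded. The file path and the DataFrame name `samsungdata` are hypothetical stand-ins for the cleaned data; the actual split files used below were produced during the exploration phase and saved as CSVs." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [
"# Illustrative sketch only (not executed here): split rows by subject so that no individual\n",
"# appears in more than one of the training, validation and test sets.\n",
"# The file name below is hypothetical; it stands for the cleaned data from the exploration phase.\n",
"import pandas as pd\n",
"\n",
"samsungdata = pd.read_csv('../datasets/samsung/samsungdata.csv')  # hypothetical cleaned data file\n",
"\n",
"subjects = sorted(samsungdata['subject'].unique())\n",
"test_subjects = subjects[-4:]     # the last 4 subjects form the held-out test set\n",
"train_subjects = subjects[:12]    # the first 12 subjects form the training set\n",
"val_subjects = subjects[12:-4]    # the remaining 5 subjects form the validation set\n",
"\n",
"samtrain = samsungdata[samsungdata['subject'].isin(train_subjects)]\n",
"samval = samsungdata[samsungdata['subject'].isin(val_subjects)]\n",
"samtest = samsungdata[samsungdata['subject'].isin(test_subjects)]"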
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Results" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Training\n", "\n", "We now run our RandomForest modeling software on our training set, described earlier, and derive a model along with some parameters describing how good our model is. " ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Populating the interactive namespace from numpy and matplotlib\n" ] } ], "source": [ "%pylab inline\n", "# We pull in the training, validation and test sets created according to the scheme described\n", "# in the data exploration lesson.\n", "\n", "import pandas as pd\n", "\n", "samtrain = pd.read_csv('../datasets/samsung/samtrain.csv')\n", "samval = pd.read_csv('../datasets/samsung/samval.csv')\n", "samtest = pd.read_csv('../datasets/samsung/samtest.csv')\n", "\n", "# We use the Python RandomForest package from the scikits.learn collection of algorithms. \n", "# The package is called sklearn.ensemble.RandomForestClassifier\n", "\n", "# For this we need to convert the target column ('activity') to integer values \n", "# because the Python RandomForest package requires that. \n", "# In R it would have been a \"factor\" type and R would have used that for classification.\n", "\n", "# We map activity to an integer according to\n", "# laying = 1, sitting = 2, standing = 3, walk = 4, walkup = 5, walkdown = 6\n", "# Code is in supporting library randomforest.py\n", "\n", "import randomforests as rf\n", "samtrain = rf.remap_col(samtrain,'activity')\n", "samval = rf.remap_col(samval,'activity')\n", "samtest = rf.remap_col(samtest,'activity')" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import sklearn.ensemble as sk\n", "#rfc = sk.RandomForestClassifier(n_estimators=500, compute_importances=True, oob_score=True)\n", "rfc = sk.RandomForestClassifier(n_estimators=500, oob_score=True)\n", "\n", "train_data = samtrain[samtrain.columns[1:-2]]\n", "train_truth = samtrain['activity']\n", "model = rfc.fit(train_data, train_truth)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.98174904942965779" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# use the OOB (out of band) score which is an estimate of accuracy of our model.\n", "rfc.oob_score_" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[(0.041132016576230841, 'Unnamed: 0'),\n", " (0.04533818370346588, 'tAccMean'),\n", " (0.047482776788317374, 'tJerkMean'),\n", " (0.053521653973290867, 'tGyroJerkMagSD'),\n", " (0.058512886654378288, 'fAccMean'),\n", " (0.044070497298412975, 'fJerkSD'),\n", " (0.13459418484705651, 'angleGyroJerkGravity'),\n", " (0.17695771341608663, 'angleXGravity'),\n", " (0.043735641768225926, 'angleYGravity')]" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "### TRY THIS\n", "# use \"feature importance\" scores to see what the top 10 important features are\n", "fi = enumerate(rfc.feature_importances_)\n", "cols = samtrain.columns\n", "[(value,cols[i]) for (i,value) in fi if value > 0.04]\n", "## Change the value 0.04 which we picked empirically to give us 10 variables\n", "## try running this code after changing the value up and down so you get 
more or fewer variables.\n",
"## Do you see how this might be useful in refining the model?\n",
"## Here is the code in case you mess up the line above:\n",
"## [(value,cols[i]) for (i,value) in fi if value > 0.04]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We apply our model's predict() function to the validation set and the test set, and get the following results from our analysis of the prediction errors." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [], "source": [
"# The pandas data frame has a spurious index column ('Unnamed: 0') in position 0, hence we start at column 1.\n",
"# The 'subject' column and the 'activity' target occupy the last two columns, hence [1:-2] drops them.\n",
"\n",
"val_data = samval[samval.columns[1:-2]]\n",
"val_truth = samval['activity']\n",
"val_pred = rfc.predict(val_data)\n",
"\n",
"test_data = samtest[samtest.columns[1:-2]]\n",
"test_truth = samtest['activity']\n",
"test_pred = rfc.predict(test_data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Prediction Errors and Computed Error Measures" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "mean accuracy score for validation set = 0.834385\n", "mean accuracy score for test set = 0.894276\n" ] } ], "source": [
"print(\"mean accuracy score for validation set = %f\" %(rfc.score(val_data, val_truth)))\n",
"print(\"mean accuracy score for test set = %f\" %(rfc.score(test_data, test_truth)))" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, "outputs": [], "source": [
"# use the confusion matrix to see how observations were misclassified as other activities\n",
"# See [5]\n",
"import sklearn.metrics as skm\n",
"test_cm = skm.confusion_matrix(test_truth,test_pred)" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# visualize the confusion matrix" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [
"import pylab as pl\n",
"pl.matshow(test_cm)\n",
"pl.title('Confusion matrix for test data\\n' + ' ')\n",
"pl.colorbar()\n",
"pl.show()" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# compute a number of other common measures of prediction goodness" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now compute some commonly used measures of prediction \"goodness\". 
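These are all computed from the counts of correct and incorrect predictions on the test set. As a quick reminder (these are the standard definitions, restated here for reference), for each activity class $c$:\n",
"\n",
"$$\\mathrm{precision}_c = \\frac{TP_c}{TP_c + FP_c}, \\qquad \\mathrm{recall}_c = \\frac{TP_c}{TP_c + FN_c}, \\qquad F1_c = \\frac{2\\,\\mathrm{precision}_c\\,\\mathrm{recall}_c}{\\mathrm{precision}_c + \\mathrm{recall}_c}$$\n",
"\n",
"where $TP_c$, $FP_c$ and $FN_c$ are the true positives, false positives and false negatives for class $c$. Accuracy is simply the fraction of observations classified correctly, and the scikit-learn functions below summarise each of the per-class measures as a single number by averaging across our six activities.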
\n", "For more detail on these measures see\n", "[6],[7],[8],[9]" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Accuracy = 0.896296\n" ] } ], "source": [ "# Accuracy\n", "print(\"Accuracy = %f\" %(skm.accuracy_score(test_truth,test_pred)))" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Precision = 0.898533\n" ] } ], "source": [ "# Precision\n", "print(\"Precision = %f\" %(skm.precision_score(test_truth,test_pred)))" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Recall = 0.896296\n" ] } ], "source": [ "# Recall\n", "print(\"Recall = %f\" %(skm.recall_score(test_truth,test_pred)))" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "F1 score = 0.896497\n" ] } ], "source": [ "# F1 Score\n", "print(\"F1 score = %f\" %(skm.f1_score(test_truth,test_pred)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Conclusions\n", "\n", "We can make the following concrete conclusions looking at the above results.\n", "\n", "Random Forests give us satisfactory error rates and predictive power in this scenario.\n", "\n", "Using domain knowledge it is possible to get surprisingly high values of predictive measures, and low error rates on validation and test sets. \n", "\n", "This is supported by the results, i.e. ~90% on predictive measures, OOB error estimates ~2%.\n", "\n", "We only did this once and did not go back and forth tweaking the models. Note that we stuck to the rules here and did not see the test set until we were done modeling.\n", "\n", "Focusing on magnitude and angle information for acceleration and jerk in the time and frequency domains gives us a model with surprising predictive power. It's possible that a brute force model will give better predictive power but it would simply show us how to blindly apply software. If for some reason the model misbehaved or failed, we would not have any insight at all as to why. Instead we used domain knowledge to focus on insight and in the process created a model with interpretive value.\n", "\n", "Model performance on the test set is better than on the validation set as seen in the two “Total” rows above and each individual activity.\n", "\n", "Let's see how we might be able to improve the model in future. It's always good to note the possible ways in which our model(s) might be deficient or incomplete or unfinished so we don't get overconfident about our models and overpromise what they can do.\n", "\n", "### Critique\n", "\n", "* Our model eliminated a number of Magnitude related attributes such as -mad, -max, -min also a number of Gyro related variables during the variable selection process using domain knowledge. These may be important but this was not tested. We may want to look at that the next time we do this.\n", "\n", "* Variable importance should be investigated in detail - i.e. we really ought to look at how we can use the smaller number of attributes identified as important, to create the model and see what the difference is. Computationally this would be more efficient. 
We could even use simpler methods, such as Logistic Regression, to do the classification from that point on, using only the reduced set of variables." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise\n", "\n",
"Instead of using domain knowledge to reduce variables, use Random Forests directly on the full set of columns. Then use variable importance to sort the variables.\n", "\n",
"Compare the model you get with the model you got from using domain knowledge.\n",
"You can short-circuit the data cleanup process as well by simply renaming the variables x1, x2, ..., xn, y, where y is 'activity', the dependent variable.\n", "\n",
"Now look at the new Random Forest model you get. It is likely to be more accurate at prediction than the one we have above. It is a black-box model, where there is no meaning attached to the variables.\n", "\n",
"* What insights does it give you?\n",
"* Which model do you prefer?\n",
"* Why?\n",
"* Is this an absolute preference or might it change?\n",
"* What might cause it to change?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## References\n", "\n",
"[1] Original dataset as R data https://spark-public.s3.amazonaws.com/dataanalysis/samsungData.rda  \n",
"[2] Human Activity Recognition Using Smartphones http://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones  \n",
"[3] Android Developer Reference http://developer.android.com/reference/android/hardware/Sensor.html  \n",
"[4] Random Forests http://en.wikipedia.org/wiki/Random_forest  \n",
"[5] Confusion matrix http://en.wikipedia.org/wiki/Confusion_matrix  \n",
"[6] Mean Accuracy http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1054102&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D1054102  \n",
"[7] Precision http://en.wikipedia.org/wiki/Precision_and_recall  \n",
"[8] Recall http://en.wikipedia.org/wiki/Precision_and_recall  \n",
"[9] F Measure http://en.wikipedia.org/wiki/Precision_and_recall" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "\n", "" ], "text/plain": [ "" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from IPython.core.display import HTML\n", "def css_styling():\n", "    styles = open(\"../styles/custom.css\", \"r\").read()\n", "    return HTML(styles)\n", "css_styling()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.8" } }, "nbformat": 4, "nbformat_minor": 0 }