{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "import numpy as np" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Training and Testing Data\n", "=====================================\n", "\n", "To evaluate how well our supervised models generalize, we can split our data into a training and a test set:\n", "\n", "" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from sklearn.datasets import load_iris\n", "\n", "iris = load_iris()\n", "X, y = iris.data, iris.target" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Thinking about how machine learning is normally performed, the idea of a train/test split makes sense. Real world systems train on the data they have, and as other data comes in (from customers, sensors, or other sources) the classifier that was trained must predict on fundamentally *new* data. We can simulate this during training using a train/test split - the test data is a simulation of \"future data\" which will come into the system during production. \n", "\n", "Specifically for iris, the 150 labels in iris are sorted, which means that if we split the data using a proportional split, this will result in fudamentally altered class distributions. For instance, if we'd perform a common 2/3 training data and 1/3 test data split, our training dataset will only consists of flower classes 0 and 1 (Setosa and Versicolor), and our test set will only contain samples with class label 2 (Virginica flowers).\n", "\n", "Under the assumption that all samples are independent of each other (in contrast time series data), we want to **randomly shuffle the dataset before we split the dataset** as illustrated above." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "y" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we need to split the data into training and testing. Luckily, this is a common pattern in machine learning and scikit-learn has a pre-built function to split data into training and testing sets for you. Here, we use 50% of the data as training, and 50% testing. 80% and 20% is another common split, but there are no hard and fast rules. The most important thing is to fairly evaluate your system on data it *has not* seen during training!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from sklearn.model_selection import train_test_split\n", "\n", "train_X, test_X, train_y, test_y = train_test_split(X, y, \n", " train_size=0.5,\n", " test_size=0.5,\n", " random_state=123)\n", "print(\"Labels for training data:\")\n", "print(train_y)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Labels for test data:\")\n", "print(test_y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Tip: Stratified Split**\n", "\n", "Especially for relatively small datasets, it's better to stratify the split. Stratification means that we maintain the original class proportion of the dataset in the test and training sets. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Tip: Stratified Split**\n", "\n", "Especially for relatively small datasets, it's better to stratify the split. Stratification means that we maintain the original class proportions of the dataset in the training and test sets. For example, after we randomly split the dataset as shown in the previous code example, we have the following class proportions in percent:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "print('All:', np.bincount(y) / float(len(y)) * 100.0)\n", "print('Training:', np.bincount(train_y) / float(len(train_y)) * 100.0)\n", "print('Test:', np.bincount(test_y) / float(len(test_y)) * 100.0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So, in order to stratify the split, we can pass the label array as the `stratify` argument to the `train_test_split` function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "train_X, test_X, train_y, test_y = train_test_split(X, y,\n", "                                                     train_size=0.5,\n", "                                                     test_size=0.5,\n", "                                                     random_state=123,\n", "                                                     stratify=y)\n", "\n", "print('All:', np.bincount(y) / float(len(y)) * 100.0)\n", "print('Training:', np.bincount(train_y) / float(len(train_y)) * 100.0)\n", "print('Test:', np.bincount(test_y) / float(len(test_y)) * 100.0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By evaluating our classifier's performance on data that has been seen during training, we could get false confidence in the predictive power of our model. In the worst case, the model may simply memorize the training samples but completely fail to classify new, similar samples - we really don't want to put such a system into production!\n", "\n", "Instead of using the same dataset for training and testing (this is called \"resubstitution evaluation\"), it is much better to use a train/test split in order to estimate how well your trained model performs on new data." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from sklearn.neighbors import KNeighborsClassifier\n", "\n", "classifier = KNeighborsClassifier().fit(train_X, train_y)\n", "pred_y = classifier.predict(test_X)\n", "\n", "print(\"Fraction Correct [Accuracy]:\")\n", "print(np.sum(pred_y == test_y) / float(len(test_y)))" ] },
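{ "cell_type": "markdown", "metadata": {}, "source": [ "To see why resubstitution evaluation is misleading, one simple check is to compare the accuracy on the training data itself with the accuracy on the held-out test data, for example via the classifier's `score` method - the training score is usually the more optimistic of the two:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Accuracy on the data the model was trained on (resubstitution) vs. on unseen test data.\n", "# The training score tends to be an overly optimistic estimate of generalization.\n", "print(\"Training set accuracy:\", classifier.score(train_X, train_y))\n", "print(\"Test set accuracy:\", classifier.score(test_X, test_y))" ] },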
{ "cell_type": "markdown", "metadata": {}, "source": [ "We can also look at the indices of the correctly classified samples ..." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "print('Samples correctly classified:')\n", "correct_idx = np.where(pred_y == test_y)[0]\n", "print(correct_idx)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "... as well as those of the misclassified samples:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Samples incorrectly classified:')\n", "incorrect_idx = np.where(pred_y != test_y)[0]\n", "print(incorrect_idx)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Plot two of the four feature dimensions: sepal width vs. petal length\n", "for n in np.unique(test_y):\n", "    idx = np.where(test_y == n)[0]\n", "    plt.scatter(test_X[idx, 1], test_X[idx, 2], label=\"Class %s\" % str(iris.target_names[n]))\n", "\n", "# Overplot the misclassified test samples in dark red\n", "plt.scatter(test_X[incorrect_idx, 1], test_X[incorrect_idx, 2], color=\"darkred\")\n", "\n", "plt.xlabel('sepal width [cm]')\n", "plt.ylabel('petal length [cm]')\n", "plt.legend(loc=3)\n", "plt.title(\"Iris Classification results\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that the errors (dark red points) occur in the area where the points of class 1 (Versicolor) and class 2 (Virginica) overlap. This gives us insight into which features to add - any feature that helps separate class 1 and class 2 should improve classifier performance." ] },
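{ "cell_type": "markdown", "metadata": {}, "source": [ "Another way to see which classes get mixed up is a confusion matrix, where rows correspond to the true labels and columns to the predicted labels (here using `confusion_matrix` from `sklearn.metrics`):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import confusion_matrix\n", "\n", "# Rows: true class, columns: predicted class; off-diagonal entries count misclassified samples\n", "print(confusion_matrix(test_y, pred_y))" ] },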
\n", " EXERCISE:\n", " \n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# %load solutions/04_wrong-predictions.py" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.6" } }, "nbformat": 4, "nbformat_minor": 2 }