{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Cross-Validation and scoring methods" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the previous sections and notebooks, we split our dataset into two parts, a training set and a test set. We used the training set to fit our model, and we used the test set to evaluate its generalization performance -- how well it performs on new, unseen data.\n", "\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "However, often (labeled) data is precious, and this approach lets us only use ~ 3/4 of our data for training. On the other hand, we will only ever try to apply our model 1/4 of our data for testing.\n", "A common way to use more of the data to build a model, but also get a more robust estimate of the generalization performance, is cross-validation.\n", "In cross-validation, the data is split repeatedly into a training and non-overlapping test-sets, with a separate model built for every pair. The test-set scores are then aggregated for a more robust estimate.\n", "\n", "The most common way to do cross-validation is k-fold cross-validation, in which the data is first split into k (often 5 or 10) equal-sized folds, and then for each iteration, one of the k folds is used as test data, and the rest as training data:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This way, each data point will be in the test-set exactly once, and we can use all but a k'th of the data for training.\n", "Let us apply this technique to evaluate the KNeighborsClassifier algorithm on the Iris dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.datasets import load_iris\n", "from sklearn.neighbors import KNeighborsClassifier\n", "\n", "iris = load_iris()\n", "X, y = iris.data, iris.target\n", "\n", "classifier = KNeighborsClassifier()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The labels in iris are sorted, which means that if we split the data as illustrated above, the first fold will only have the label 0 in it, while the last one will only have the label 2:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To avoid this problem in evaluation, we first shuffle our data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "rng = np.random.RandomState(0)\n", "\n", "permutation = rng.permutation(len(X))\n", "X, y = X[permutation], y[permutation]\n", "print(y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now implementing cross-validation is easy:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "k = 5\n", "n_samples = len(X)\n", "fold_size = n_samples // k\n", "scores = []\n", "masks = []\n", "for fold in range(k):\n", " # generate a boolean mask for the test set in this fold\n", " test_mask = np.zeros(n_samples, dtype=bool)\n", " test_mask[fold * fold_size : (fold + 1) * fold_size] = True\n", " # store the mask for visualization\n", " masks.append(test_mask)\n", " # create training and test sets using this mask\n", " X_test, y_test = X[test_mask], y[test_mask]\n", " X_train, y_train = X[~test_mask], y[~test_mask]\n", " # fit the classifier\n", " classifier.fit(X_train, y_train)\n", " # compute the score and 
record it\n", " scores.append(classifier.score(X_test, y_test))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's check that our test mask does the right thing:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "plt.matshow(masks, cmap='gray_r')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And now let's look a the scores we computed:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(scores)\n", "print(np.mean(scores))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, there is a rather wide spectrum of scores from 90% correct to 100% correct. If we only did a single split, we might have gotten either answer." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As cross-validation is such a common pattern in machine learning, there are functions to do the above for you with much more flexibility and less code.\n", "The ``sklearn.model_selection`` module has all functions related to cross validation. There easiest function is ``cross_val_score`` which takes an estimator and a dataset, and will do all of the splitting for you:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import cross_val_score\n", "scores = cross_val_score(classifier, X, y)\n", "print('Scores on each CV fold: %s' % scores)\n", "print('Mean score: %0.3f' % np.mean(scores))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, the function uses three folds by default. You can change the number of folds using the cv argument:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cross_val_score(classifier, X, y, cv=5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are also helper objects in the cross-validation module that will generate indices for you for all kinds of different cross-validation methods, including k-fold:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import KFold, StratifiedKFold, ShuffleSplit" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By default, cross_val_score will use ``StratifiedKFold`` for classification, which ensures that the class proportions in the dataset are reflected in each fold. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "There are also helper objects in the cross-validation module that will generate indices for you, for all kinds of different cross-validation methods, including k-fold:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import KFold, StratifiedKFold, ShuffleSplit" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "By default, ``cross_val_score`` will use ``StratifiedKFold`` for classification, which ensures that the class proportions in the dataset are reflected in each fold. If you have a binary classification dataset with 90% of the data points belonging to class 0, this means that in each fold, 90% of the data points will belong to class 0.\n", "If you used plain ``KFold`` instead, you would likely generate splits that contain only class 0.\n", "It is generally a good idea to use ``StratifiedKFold`` whenever you do classification.\n", "\n", "``StratifiedKFold`` would also remove our need to shuffle ``iris``.\n", "Let's see what kinds of folds it generates on the unshuffled iris dataset.\n", "Each cross-validation class is a generator of sets of training and test indices:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cv = StratifiedKFold(n_splits=5)\n", "for train, test in cv.split(iris.data, iris.target):\n", "    print(test)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, each fold contains a couple of samples from the beginning of the dataset, a couple from the middle, and a couple from the end.\n", "This way, the class ratios are preserved. Let's visualize the split:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def plot_cv(cv, features, labels):\n", "    # plot one row per CV fold, marking the test samples in black\n", "    masks = []\n", "    for train, test in cv.split(features, labels):\n", "        mask = np.zeros(len(labels), dtype=bool)\n", "        mask[test] = True\n", "        masks.append(mask)\n", "\n", "    plt.matshow(masks, cmap='gray_r')" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_cv(StratifiedKFold(n_splits=5), iris.data, iris.target)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "For comparison, here is the standard ``KFold``, which ignores the labels:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_cv(KFold(n_splits=5), iris.data, iris.target)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Keep in mind that increasing the number of folds will give you a larger training dataset, but will also lead to more repetitions, and therefore a slower evaluation:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_cv(KFold(n_splits=10), iris.data, iris.target)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Another helpful cross-validation generator is ``ShuffleSplit``. This generator repeatedly splits off a random portion of the data, which allows the user to specify the number of repetitions and the training set size independently:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_cv(ShuffleSplit(n_splits=5, test_size=.2), iris.data, iris.target)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "If you want a more robust estimate, you can just increase the number of splits:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_cv(ShuffleSplit(n_splits=20, test_size=.2), iris.data, iris.target)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "You can use all of these cross-validation generators with the ``cross_val_score`` function:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cv = ShuffleSplit(n_splits=5, test_size=.2)\n", "cross_val_score(classifier, X, y, cv=cv)" ] },
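{ "cell_type": "markdown", "metadata": {}, "source": [ "If you also want timing information alongside the scores, more recent scikit-learn versions (0.19 and later; an assumption about your installed version) provide ``cross_validate``, which returns a dictionary of arrays rather than a single score array. A minimal sketch:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import cross_validate\n", "\n", "# the result is a dict that includes 'test_score', 'fit_time' and 'score_time' arrays\n", "results = cross_validate(classifier, X, y, cv=5)\n", "print(results['test_score'])" ] },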
\n", " EXERCISE:\n", " \n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# %load solutions/13_cross_validation.py" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.2" } }, "nbformat": 4, "nbformat_minor": 2 }