{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Model Evaluation, Scoring Metrics, and Dealing with Imbalanced Classes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the previous notebook, we already went into some detail on how to evaluate a model and how to pick the best model. So far, we assumed that we were given a performance measure, a measure of the quality of the model. What measure one should use is not always obvious, though.\n", "The default scores in scikit-learn are ``accuracy`` for classification, which is the fraction of correctly classified samples, and ``r2`` for regression, with is the coefficient of determination.\n", "\n", "These are reasonable default choices in many scenarious; however, depending on our task, these are not always the definitive or recommended choices.\n", "\n", "Let's take look at classification in more detail, going back to the application of classifying handwritten digits.\n", "So, how about training a classifier and walking through the different ways we can evaluate it? Scikit-learn has many helpful methods in the ``sklearn.metrics`` module that can help us with this task:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "np.set_printoptions(precision=2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.datasets import load_digits\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.svm import LinearSVC\n", "\n", "digits = load_digits()\n", "X, y = digits.data, digits.target\n", "X_train, X_test, y_train, y_test = train_test_split(X, y, \n", " random_state=1,\n", " stratify=y,\n", " test_size=0.25)\n", "\n", "classifier = LinearSVC(random_state=1).fit(X_train, y_train)\n", "y_test_pred = classifier.predict(X_test)\n", "\n", "print(\"Accuracy: {}\".format(classifier.score(X_test, y_test)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, we predicted 95.3% of samples correctly. For multi-class problems, it is often interesting to know which of the classes are hard to predict, and which are easy, or which classes get confused. One way to get more information about misclassifications is ``the confusion_matrix``, which shows for each true class, how frequent a given predicted outcome is." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import confusion_matrix\n", "confusion_matrix(y_test, y_test_pred)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A plot is sometimes more readable:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.matshow(confusion_matrix(y_test, y_test_pred), cmap=\"Blues\")\n", "plt.colorbar(shrink=0.8)\n", "plt.xticks(range(10))\n", "plt.yticks(range(10))\n", "plt.xlabel(\"Predicted label\")\n", "plt.ylabel(\"True label\");" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that most entries are on the diagonal, which means that we predicted nearly all samples correctly. The off-diagonal entries show us that many eights were classified as ones, and that nines are likely to be confused with many other classes. 
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another useful function is the ``classification_report`` which provides precision, recall, fscore and support for all classes.\n", "Precision is how many of the predictions for a class are actually that class. With TP, FP, TN, FN standing for \"true positive\", \"false positive\", \"true negative\" and \"false negative\" repectively:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Precision = TP / (TP + FP)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Recall is how many of the true positives were recovered:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Recall = TP / (TP + FN)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "F1-score is the geometric average of precision and recall:\n", "\n", "F1 = 2 x (precision x recall) / (precision + recall)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The values of all these values above are in the closed interval [0, 1], where 1 means a perfect score." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import classification_report\n", "print(classification_report(y_test, y_test_pred))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These metrics are helpful in two particular cases that come up often in practice:\n", "1. Imbalanced classes, that is one class might be much more frequent than the other.\n", "2. Asymmetric costs, that is one kind of error is much more \"costly\" than the other." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's have a look at 1. first. Say we have a class imbalance of 1:9, which is rather mild (think about ad-click-prediction where maybe 0.001% of ads might be clicked):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.bincount(y) / y.shape[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a toy example, let's say we want to classify the digits three against all other digits:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X, y = digits.data, digits.target == 3" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we run cross-validation on a classifier to see how well it does:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import cross_val_score\n", "from sklearn.svm import SVC\n", "\n", "cross_val_score(SVC(), X, y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our classifier is 90% accurate. Is that good? Or bad? Keep in mind that 90% of the data is \"not three\". So let's see how well a dummy classifier does, that always predicts the most frequent class:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.dummy import DummyClassifier\n", "cross_val_score(DummyClassifier(\"most_frequent\"), X, y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Also 90% (as expected)! So one might thing that means our classifier is not very good, it doesn't to better than a simple strategy that doesn't even look at the data.\n", "That would be judging too quickly, though. Accuracy is simply not a good way to evaluate classifiers for imbalanced datasets!" 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.bincount(y) / y.shape[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "ROC Curves\n", "=======\n", "\n", "A much better measure is using the so-called ROC (Receiver operating characteristics) curve. A roc-curve works with uncertainty outputs of a classifier, say the \"decision_function\" of the ``SVC`` we trained above. Instead of making a cut-off at zero and looking at classification outcomes, it looks at every possible cut-off and records how many true positive predictions there are, and how many false positive predictions there are.\n", "\n", "The following plot compares the roc curve of three parameter settings of our classifier on the \"three vs rest\" task." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import roc_curve, roc_auc_score\n", "\n", "X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)\n", "\n", "for gamma in [.05, 0.1, 0.5]:\n", " plt.xlabel(\"False Positive Rate\")\n", " plt.ylabel(\"True Positive Rate (recall)\")\n", " svm = SVC(gamma=gamma).fit(X_train, y_train)\n", " decision_function = svm.decision_function(X_test)\n", " fpr, tpr, _ = roc_curve(y_test, decision_function)\n", " acc = svm.score(X_test, y_test)\n", " auc = roc_auc_score(y_test, svm.decision_function(X_test))\n", " label = \"gamma: %0.3f, acc:%.2f auc:%.2f\" % (gamma, acc, auc)\n", " plt.plot(fpr, tpr, label=label, linewidth=3)\n", "plt.legend(loc=\"best\");" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With a very small decision threshold, there will be few false positives, but also few false negatives, while with a very high threshold, both true positive rate and false positive rate will be high. So in general, the curve will be from the lower left to the upper right. A diagonal line reflects chance performance, while the goal is to be as much in the top left corner as possible. This means giving a higher decision_function value to all positive samples than to any negative sample.\n", "\n", "In this sense, this curve only considers the ranking of the positive and negative samples, not the actual value.\n", "As you can see from the curves and the accuracy values in the legend, even though all classifiers have the same accuracy, 89%, which is even lower than the dummy classifier, one of them has a perfect roc curve, while one of them performs on chance level.\n", "\n", "For doing grid-search and cross-validation, we usually want to condense our model evaluation into a single number. A good way to do this with the roc curve is to use the area under the curve (AUC).\n", "We can simply use this in ``cross_val_score`` by specifying ``scoring=\"roc_auc\"``:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import cross_val_score\n", "cross_val_score(SVC(gamma='auto'), X, y, scoring=\"roc_auc\", cv=5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Built-In and custom scoring functions\n", "=======================================" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are many more scoring methods available, which are useful for different kinds of tasks. You can find them in the \"SCORERS\" dictionary. The only documentation explains all of them." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics.scorer import SCORERS\n", "print(SCORERS.keys())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is also possible to define your own scoring metric. Instead of a string, you can provide a callable to as ``scoring`` parameter, that is an object with a ``__call__`` method or a function.\n", "It needs to take a model, a test-set features ``X_test`` and test-set labels ``y_test``, and return a float. Higher floats are taken to mean better models.\n", "\n", "Let's reimplement the standard accuracy score:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def my_accuracy_scoring(est, X, y):\n", " return np.mean(est.predict(X) == y)\n", "\n", "cross_val_score(SVC(), X, y, scoring=my_accuracy_scoring)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", " EXERCISE:\n", " \n", " ![](figures/average-per-class.png)\n", "
  • \n", " Given the following arrays of \"true\" class labels and predicted class labels, can you implement a function that uses the accuracy measure to compute the average-per-class accuracy as shown below?\n", "
  • \n", "
    " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y_true = np.array([0, 0, 0, 1, 1, 1, 1, 1, 2, 2])\n", "y_pred = np.array([0, 1, 1, 0, 1, 1, 2, 2, 2, 2])\n", "\n", "confusion_matrix(y_true, y_pred)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# %load solutions/16A_avg_per_class_acc.py" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.2" } }, "nbformat": 4, "nbformat_minor": 2 }