{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Probability Calibration curves\n\nWhen performing classification one often wants to predict not only the class\nlabel, but also the associated probability. This probability gives some\nkind of confidence on the prediction. This example demonstrates how to\nvisualize how well calibrated the predicted probabilities are using calibration\ncurves, also known as reliability diagrams. Calibration of an uncalibrated\nclassifier will also be demonstrated.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Dataset\n\nWe will use a synthetic binary classification dataset with 100,000 samples\nand 20 features. Of the 20 features, only 2 are informative, 10 are\nredundant (random combinations of the informative features) and the\nremaining 8 are uninformative (random numbers). Of the 100,000 samples, 1,000\nwill be used for model fitting and the rest for testing.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.datasets import make_classification\nfrom sklearn.model_selection import train_test_split\n\nX, y = make_classification(\n n_samples=100_000, n_features=20, n_informative=2, n_redundant=10, random_state=42\n)\n\nX_train, X_test, y_train, y_test = train_test_split(\n X, y, test_size=0.99, random_state=42\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Calibration curves\n\n### Gaussian Naive Bayes\n\nFirst, we will compare:\n\n* :class:`~sklearn.linear_model.LogisticRegression` (used as baseline\n since very often, properly regularized logistic regression is well\n calibrated by default thanks to the use of the log-loss)\n* Uncalibrated :class:`~sklearn.naive_bayes.GaussianNB`\n* :class:`~sklearn.naive_bayes.GaussianNB` with isotonic and sigmoid\n calibration (see `User Guide `)\n\nCalibration curves for all 4 conditions are plotted below, with the average\npredicted probability for each bin on the x-axis and the fraction of positive\nclasses in each bin on the y-axis.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\nfrom matplotlib.gridspec import GridSpec\n\nfrom sklearn.calibration import CalibratedClassifierCV, CalibrationDisplay\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.naive_bayes import GaussianNB\n\nlr = LogisticRegression(C=1.0)\ngnb = GaussianNB()\ngnb_isotonic = CalibratedClassifierCV(gnb, cv=2, method=\"isotonic\")\ngnb_sigmoid = CalibratedClassifierCV(gnb, cv=2, method=\"sigmoid\")\n\nclf_list = [\n (lr, \"Logistic\"),\n (gnb, \"Naive Bayes\"),\n (gnb_isotonic, \"Naive Bayes + Isotonic\"),\n (gnb_sigmoid, \"Naive Bayes + Sigmoid\"),\n]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig = plt.figure(figsize=(10, 10))\ngs = GridSpec(4, 2)\ncolors = plt.get_cmap(\"Dark2\")\n\nax_calibration_curve = fig.add_subplot(gs[:2, :2])\ncalibration_displays = {}\nfor i, (clf, name) in enumerate(clf_list):\n clf.fit(X_train, y_train)\n display = CalibrationDisplay.from_estimator(\n clf,\n X_test,\n y_test,\n n_bins=10,\n name=name,\n ax=ax_calibration_curve,\n color=colors(i),\n )\n calibration_displays[name] = 
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\nfrom matplotlib.gridspec import GridSpec\n\nfrom sklearn.calibration import CalibratedClassifierCV, CalibrationDisplay\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.naive_bayes import GaussianNB\n\nlr = LogisticRegression(C=1.0)\ngnb = GaussianNB()\ngnb_isotonic = CalibratedClassifierCV(gnb, cv=2, method=\"isotonic\")\ngnb_sigmoid = CalibratedClassifierCV(gnb, cv=2, method=\"sigmoid\")\n\nclf_list = [\n    (lr, \"Logistic\"),\n    (gnb, \"Naive Bayes\"),\n    (gnb_isotonic, \"Naive Bayes + Isotonic\"),\n    (gnb_sigmoid, \"Naive Bayes + Sigmoid\"),\n]" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig = plt.figure(figsize=(10, 10))\ngs = GridSpec(4, 2)\ncolors = plt.get_cmap(\"Dark2\")\n\nax_calibration_curve = fig.add_subplot(gs[:2, :2])\ncalibration_displays = {}\nfor i, (clf, name) in enumerate(clf_list):\n    clf.fit(X_train, y_train)\n    display = CalibrationDisplay.from_estimator(\n        clf,\n        X_test,\n        y_test,\n        n_bins=10,\n        name=name,\n        ax=ax_calibration_curve,\n        color=colors(i),\n    )\n    calibration_displays[name] = display\n\nax_calibration_curve.grid()\nax_calibration_curve.set_title(\"Calibration plots (Naive Bayes)\")\n\n# Add histogram\ngrid_positions = [(2, 0), (2, 1), (3, 0), (3, 1)]\nfor i, (_, name) in enumerate(clf_list):\n    row, col = grid_positions[i]\n    ax = fig.add_subplot(gs[row, col])\n\n    ax.hist(\n        calibration_displays[name].y_prob,\n        range=(0, 1),\n        bins=10,\n        label=name,\n        color=colors(i),\n    )\n    ax.set(title=name, xlabel=\"Mean predicted probability\", ylabel=\"Count\")\n\nplt.tight_layout()\nplt.show()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Uncalibrated :class:`~sklearn.naive_bayes.GaussianNB` is poorly calibrated\nbecause the redundant features violate the assumption of feature\nindependence and result in an overly confident classifier, indicated by the\ntypical transposed-sigmoid curve. Calibrating the probabilities of\n:class:`~sklearn.naive_bayes.GaussianNB` with isotonic regression can fix\nthis issue, as can be seen from the nearly diagonal calibration curve.\nSigmoid calibration also improves calibration slightly,\nalbeit not as strongly as the non-parametric isotonic regression. This can be\nattributed to the fact that we have plenty of calibration data, so that the\ngreater flexibility of the non-parametric model can be exploited.\n\nBelow we make a quantitative analysis considering several classification\nmetrics: :func:`~sklearn.metrics.brier_score_loss`,\n:func:`~sklearn.metrics.log_loss`, precision, recall, F1 score and\nROC AUC.\n\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from collections import defaultdict\n\nimport pandas as pd\n\nfrom sklearn.metrics import (\n    brier_score_loss,\n    f1_score,\n    log_loss,\n    precision_score,\n    recall_score,\n    roc_auc_score,\n)\n\nscores = defaultdict(list)\nfor i, (clf, name) in enumerate(clf_list):\n    clf.fit(X_train, y_train)\n    y_prob = clf.predict_proba(X_test)\n    y_pred = clf.predict(X_test)\n    scores[\"Classifier\"].append(name)\n\n    for metric in [brier_score_loss, log_loss, roc_auc_score]:\n        score_name = metric.__name__.replace(\"_\", \" \").replace(\"score\", \"\").capitalize()\n        scores[score_name].append(metric(y_test, y_prob[:, 1]))\n\n    for metric in [precision_score, recall_score, f1_score]:\n        score_name = metric.__name__.replace(\"_\", \" \").replace(\"score\", \"\").capitalize()\n        scores[score_name].append(metric(y_test, y_pred))\n\n# Build the summary table once, outside the loop, and display it rounded.\nscore_df = pd.DataFrame(scores).set_index(\"Classifier\")\nscore_df.round(decimals=3)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Notice that although calibration improves the\n:func:`~sklearn.metrics.brier_score_loss` (a metric composed of a\ncalibration term and a refinement term) and the\n:func:`~sklearn.metrics.log_loss`, it does not significantly alter the\nprediction accuracy measures (precision, recall and F1 score).\nThis is because calibration should not significantly change prediction\nprobabilities at the location of the decision threshold (at x = 0.5 on the\ngraph). Calibration should, however, make the predicted probabilities more\naccurate and thus more useful for making allocation decisions under\nuncertainty.\nFurther, ROC AUC should not change at all because calibration is a\nmonotonic transformation. Indeed, no ranking metric is affected by\ncalibration.\n\n" ] },
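{ "cell_type": "markdown", "metadata": {}, "source": [ "We can check the last claim directly. The sketch below (not part of the\noriginal comparison) applies a strictly monotonic but miscalibrating\ntransformation to the logistic regression probabilities: ROC AUC is\nunchanged because the ranking of the samples is preserved, while the Brier\nscore degrades.\n\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Sketch: a strictly monotonic transformation preserves the ranking of the\n# predicted probabilities, hence ROC AUC, but it destroys calibration.\nprob_pos = lr.predict_proba(X_test)[:, 1]\ndistorted = prob_pos**3  # monotonic on [0, 1], so ranks are unchanged\n\nprint(\"ROC AUC:\", roc_auc_score(y_test, prob_pos), roc_auc_score(y_test, distorted))\nprint(\"Brier  :\", brier_score_loss(y_test, prob_pos), brier_score_loss(y_test, distorted))" ] },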
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Linear support vector classifier\n\nNext, we will compare:\n\n* :class:`~sklearn.linear_model.LogisticRegression` (baseline)\n* Uncalibrated :class:`~sklearn.svm.LinearSVC`. Since SVC does not output\n  probabilities by default, we naively scale the output of the\n  :term:`decision_function` into [0, 1] by applying min-max scaling.\n* :class:`~sklearn.svm.LinearSVC` with isotonic and sigmoid\n  calibration (see the [User Guide](https://scikit-learn.org/stable/modules/calibration.html))\n\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n\nfrom sklearn.svm import LinearSVC\n\n\nclass NaivelyCalibratedLinearSVC(LinearSVC):\n    \"\"\"LinearSVC with `predict_proba` method that naively scales\n    `decision_function` output for binary classification.\"\"\"\n\n    def fit(self, X, y):\n        super().fit(X, y)\n        # Remember the range of the decision function on the training set so\n        # that `predict_proba` can rescale new decision values into [0, 1].\n        df = self.decision_function(X)\n        self.df_min_ = df.min()\n        self.df_max_ = df.max()\n        # Scikit-learn estimators are expected to return self from `fit`.\n        return self\n\n    def predict_proba(self, X):\n        \"\"\"Min-max scale output of `decision_function` to [0, 1].\"\"\"\n        df = self.decision_function(X)\n        calibrated_df = (df - self.df_min_) / (self.df_max_ - self.df_min_)\n        proba_pos_class = np.clip(calibrated_df, 0, 1)\n        proba_neg_class = 1 - proba_pos_class\n        proba = np.c_[proba_neg_class, proba_pos_class]\n        return proba" ] },
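{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check of this naive scaling (the name `svc_demo` below is\nused only for this sketch), the scaled scores are valid probabilities in the\nsense that they lie in [0, 1] and each row sums to one, but nothing forces\nthem to be calibrated:\n\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Sketch: the min-max scaled decision values form a valid probability\n# vector per sample, but there is no guarantee of calibration.\nsvc_demo = NaivelyCalibratedLinearSVC(max_iter=10_000).fit(X_train, y_train)\nproba = svc_demo.predict_proba(X_test)\nprint(\"min:\", proba.min(), \"max:\", proba.max())\nprint(\"row sums:\", proba.sum(axis=1)[:5])" ] },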
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "lr = LogisticRegression(C=1.0)\nsvc = NaivelyCalibratedLinearSVC(max_iter=10_000)\nsvc_isotonic = CalibratedClassifierCV(svc, cv=2, method=\"isotonic\")\nsvc_sigmoid = CalibratedClassifierCV(svc, cv=2, method=\"sigmoid\")\n\nclf_list = [\n    (lr, \"Logistic\"),\n    (svc, \"SVC\"),\n    (svc_isotonic, \"SVC + Isotonic\"),\n    (svc_sigmoid, \"SVC + Sigmoid\"),\n]" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig = plt.figure(figsize=(10, 10))\ngs = GridSpec(4, 2)\n\nax_calibration_curve = fig.add_subplot(gs[:2, :2])\ncalibration_displays = {}\nfor i, (clf, name) in enumerate(clf_list):\n    clf.fit(X_train, y_train)\n    display = CalibrationDisplay.from_estimator(\n        clf,\n        X_test,\n        y_test,\n        n_bins=10,\n        name=name,\n        ax=ax_calibration_curve,\n        color=colors(i),\n    )\n    calibration_displays[name] = display\n\nax_calibration_curve.grid()\nax_calibration_curve.set_title(\"Calibration plots (SVC)\")\n\n# Add histogram\ngrid_positions = [(2, 0), (2, 1), (3, 0), (3, 1)]\nfor i, (_, name) in enumerate(clf_list):\n    row, col = grid_positions[i]\n    ax = fig.add_subplot(gs[row, col])\n\n    ax.hist(\n        calibration_displays[name].y_prob,\n        range=(0, 1),\n        bins=10,\n        label=name,\n        color=colors(i),\n    )\n    ax.set(title=name, xlabel=\"Mean predicted probability\", ylabel=\"Count\")\n\nplt.tight_layout()\nplt.show()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ ":class:`~sklearn.svm.LinearSVC` shows the opposite\nbehavior to :class:`~sklearn.naive_bayes.GaussianNB`; the calibration\ncurve has a sigmoid shape, which is typical for an under-confident\nclassifier. In the case of :class:`~sklearn.svm.LinearSVC`, this is caused\nby the margin property of the hinge loss, which focuses on samples that are\nclose to the decision boundary (the support vectors). Samples that are far\naway from the decision boundary do not impact the hinge loss, so it makes\nsense that :class:`~sklearn.svm.LinearSVC` does not try to separate samples\nin the high-confidence regions. This leads to flatter calibration\ncurves near 0 and 1, as shown empirically on a variety of datasets\nin Niculescu-Mizil & Caruana [1]_.\n\nBoth kinds of calibration (sigmoid and isotonic) can fix this issue and\nyield similar results.\n\nAs before, we show the :func:`~sklearn.metrics.brier_score_loss`,\n:func:`~sklearn.metrics.log_loss`, precision, recall, F1 score and\nROC AUC.\n\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "scores = defaultdict(list)\nfor i, (clf, name) in enumerate(clf_list):\n    clf.fit(X_train, y_train)\n    y_prob = clf.predict_proba(X_test)\n    y_pred = clf.predict(X_test)\n    scores[\"Classifier\"].append(name)\n\n    for metric in [brier_score_loss, log_loss, roc_auc_score]:\n        score_name = metric.__name__.replace(\"_\", \" \").replace(\"score\", \"\").capitalize()\n        scores[score_name].append(metric(y_test, y_prob[:, 1]))\n\n    for metric in [precision_score, recall_score, f1_score]:\n        score_name = metric.__name__.replace(\"_\", \" \").replace(\"score\", \"\").capitalize()\n        scores[score_name].append(metric(y_test, y_pred))\n\n# As above, build the summary table once the loop has finished.\nscore_df = pd.DataFrame(scores).set_index(\"Classifier\")\nscore_df.round(decimals=3)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "As with :class:`~sklearn.naive_bayes.GaussianNB` above, calibration improves\nboth :func:`~sklearn.metrics.brier_score_loss` and\n:func:`~sklearn.metrics.log_loss` but does not alter the prediction accuracy\nmeasures (precision, recall and F1 score) much.\n\n## Summary\n\nParametric sigmoid calibration can deal with situations where the calibration\ncurve of the base classifier is sigmoid (e.g., for\n:class:`~sklearn.svm.LinearSVC`) but not where it is transposed-sigmoid\n(e.g., :class:`~sklearn.naive_bayes.GaussianNB`). Non-parametric\nisotonic calibration can deal with both situations but may require more\ndata to produce good results.\n\n## References\n\n.. [1] [Predicting Good Probabilities with Supervised Learning](https://dl.acm.org/doi/pdf/10.1145/1102351.1102430),\n       A. Niculescu-Mizil & R. Caruana, ICML 2005\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.21" } }, "nbformat": 4, "nbformat_minor": 0 }