{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Probability calibration of classifiers\n\nWhen performing classification you often want to predict not only\nthe class label, but also the associated probability. This probability\ngives you some kind of confidence on the prediction. However, not all\nclassifiers provide well-calibrated probabilities, some being over-confident\nwhile others being under-confident. Thus, a separate calibration of predicted\nprobabilities is often desirable as a postprocessing. This example illustrates\ntwo different methods for this calibration and evaluates the quality of the\nreturned probabilities using Brier's score\n(see https://en.wikipedia.org/wiki/Brier_score).\n\nCompared are the estimated probability using a Gaussian naive Bayes classifier\nwithout calibration, with a sigmoid calibration, and with a non-parametric\nisotonic calibration. One can observe that only the non-parametric model is\nable to provide a probability calibration that returns probabilities close\nto the expected 0.5 for most of the samples belonging to the middle\ncluster with heterogeneous labels. This results in a significantly improved\nBrier score.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Generate synthetic dataset\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n\nfrom sklearn.datasets import make_blobs\nfrom sklearn.model_selection import train_test_split\n\nn_samples = 50000\nn_bins = 3 # use 3 bins for calibration_curve as we have 3 clusters here\n\n# Generate 3 blobs with 2 classes where the second blob contains\n# half positive samples and half negative samples. Probability in this\n# blob is therefore 0.5.\ncenters = [(-5, -5), (0, 0), (5, 5)]\nX, y = make_blobs(n_samples=n_samples, centers=centers, shuffle=False, random_state=42)\n\ny[: n_samples // 2] = 0\ny[n_samples // 2 :] = 1\nsample_weight = np.random.RandomState(42).rand(y.shape[0])\n\n# split train, test for calibration\nX_train, X_test, y_train, y_test, sw_train, sw_test = train_test_split(\n X, y, sample_weight, test_size=0.9, random_state=42\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Gaussian Naive-Bayes\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.calibration import CalibratedClassifierCV\nfrom sklearn.metrics import brier_score_loss\nfrom sklearn.naive_bayes import GaussianNB\n\n# With no calibration\nclf = GaussianNB()\nclf.fit(X_train, y_train) # GaussianNB itself does not support sample-weights\nprob_pos_clf = clf.predict_proba(X_test)[:, 1]\n\n# With isotonic calibration\nclf_isotonic = CalibratedClassifierCV(clf, cv=2, method=\"isotonic\")\nclf_isotonic.fit(X_train, y_train, sample_weight=sw_train)\nprob_pos_isotonic = clf_isotonic.predict_proba(X_test)[:, 1]\n\n# With sigmoid calibration\nclf_sigmoid = CalibratedClassifierCV(clf, cv=2, method=\"sigmoid\")\nclf_sigmoid.fit(X_train, y_train, sample_weight=sw_train)\nprob_pos_sigmoid = clf_sigmoid.predict_proba(X_test)[:, 1]\n\nprint(\"Brier score losses: (the smaller the better)\")\n\nclf_score = brier_score_loss(y_test, prob_pos_clf, sample_weight=sw_test)\nprint(\"No calibration: %1.3f\" % clf_score)\n\nclf_isotonic_score = brier_score_loss(y_test, prob_pos_isotonic, sample_weight=sw_test)\nprint(\"With isotonic calibration: %1.3f\" % clf_isotonic_score)\n\nclf_sigmoid_score = brier_score_loss(y_test, prob_pos_sigmoid, sample_weight=sw_test)\nprint(\"With sigmoid calibration: %1.3f\" % clf_sigmoid_score)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Plot data and the predicted probabilities\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\nfrom matplotlib import cm\n\nplt.figure()\ny_unique = np.unique(y)\ncolors = cm.rainbow(np.linspace(0.0, 1.0, y_unique.size))\nfor this_y, color in zip(y_unique, colors):\n this_X = X_train[y_train == this_y]\n this_sw = sw_train[y_train == this_y]\n plt.scatter(\n this_X[:, 0],\n this_X[:, 1],\n s=this_sw * 50,\n c=color[np.newaxis, :],\n alpha=0.5,\n edgecolor=\"k\",\n label=\"Class %s\" % this_y,\n )\nplt.legend(loc=\"best\")\nplt.title(\"Data\")\n\nplt.figure()\n\norder = np.lexsort((prob_pos_clf,))\nplt.plot(prob_pos_clf[order], \"r\", label=\"No calibration (%1.3f)\" % clf_score)\nplt.plot(\n prob_pos_isotonic[order],\n \"g\",\n linewidth=3,\n label=\"Isotonic calibration (%1.3f)\" % clf_isotonic_score,\n)\nplt.plot(\n prob_pos_sigmoid[order],\n \"b\",\n linewidth=3,\n label=\"Sigmoid calibration (%1.3f)\" % clf_sigmoid_score,\n)\nplt.plot(\n np.linspace(0, y_test.size, 51)[1::2],\n y_test[order].reshape(25, -1).mean(1),\n \"k\",\n linewidth=3,\n label=r\"Empirical\",\n)\nplt.ylim([-0.05, 1.05])\nplt.xlabel(\"Instances sorted according to predicted probability (uncalibrated GNB)\")\nplt.ylabel(\"P(y=1)\")\nplt.legend(loc=\"upper left\")\nplt.title(\"Gaussian naive Bayes probabilities\")\n\nplt.show()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.21" } }, "nbformat": 4, "nbformat_minor": 0 }