{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Effect of varying threshold for self-training\n\nThis example illustrates the effect of a varying threshold on self-training.\nThe `breast_cancer` dataset is loaded, and labels are deleted such that only 50\nout of 569 samples have labels. A `SelfTrainingClassifier` is fitted on this\ndataset, with varying thresholds.\n\nThe upper graph shows the number of labeled samples that the classifier has\navailable by the end of fitting, as well as the accuracy of the classifier. The\nlower graph shows the last iteration in which a sample was labeled. All values\nare cross-validated with 3 folds.\n\nAt low thresholds (in [0.4, 0.5]), the classifier learns from samples that were\nlabeled with low confidence. These low-confidence samples are likely to have\nincorrect predicted labels, and as a result, fitting on these incorrect labels\nyields poor accuracy. Note that the classifier labels almost all of the\nsamples, and takes only one iteration.\n\nFor very high thresholds (in [0.9, 1)), we observe that the classifier does not\naugment its dataset (the number of self-labeled samples is 0). As a result, the\naccuracy achieved with a threshold of 0.99999 is the same as a normal\nsupervised classifier would achieve.\n\nThe optimal accuracy lies between these two extremes, at a threshold of around\n0.7.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom sklearn import datasets\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.model_selection import StratifiedKFold\nfrom sklearn.semi_supervised import SelfTrainingClassifier\nfrom sklearn.svm import SVC\nfrom sklearn.utils import shuffle\n\nn_splits = 3\n\nX, y = datasets.load_breast_cancer(return_X_y=True)\nX, y = shuffle(X, y, random_state=42)\ny_true = y.copy()\ny[50:] = -1\ntotal_samples = y.shape[0]\n\nbase_classifier = SVC(probability=True, gamma=0.001, random_state=42)\n\nx_values = np.arange(0.4, 1.05, 0.05)\nx_values = np.append(x_values, 0.99999)\nscores = np.empty((x_values.shape[0], n_splits))\namount_labeled = np.empty((x_values.shape[0], n_splits))\namount_iterations = np.empty((x_values.shape[0], n_splits))\n\nfor i, threshold in enumerate(x_values):\n    self_training_clf = SelfTrainingClassifier(base_classifier, threshold=threshold)\n\n    # We need manual cross-validation so that we don't treat -1 as a separate\n    # class when computing accuracy\n    skfolds = StratifiedKFold(n_splits=n_splits)\n    for fold, (train_index, test_index) in enumerate(skfolds.split(X, y)):\n        X_train = X[train_index]\n        y_train = y[train_index]\n        X_test = X[test_index]\n        y_test = y[test_index]\n        y_test_true = y_true[test_index]\n\n        self_training_clf.fit(X_train, y_train)\n\n        # The number of labeled samples available at the end of fitting\n        amount_labeled[i, fold] = (\n            total_samples\n            - np.unique(self_training_clf.labeled_iter_, return_counts=True)[1][0]\n        )\n        # The last iteration in which the classifier labeled a sample\n        amount_iterations[i, fold] = np.max(self_training_clf.labeled_iter_)\n\n        y_pred = self_training_clf.predict(X_test)\n        scores[i, fold] = accuracy_score(y_test_true, y_pred)\n\n\nax1 = plt.subplot(211)\nax1.errorbar(\n    x_values, scores.mean(axis=1), yerr=scores.std(axis=1), capsize=2, color=\"b\"\n)\nax1.set_ylabel(\"Accuracy\", color=\"b\")\nax1.tick_params(\"y\", colors=\"b\")\n\nax2 = ax1.twinx()\nax2.errorbar(\n    x_values,\n    amount_labeled.mean(axis=1),\n    yerr=amount_labeled.std(axis=1),\n    capsize=2,\n    color=\"g\",\n)\nax2.set_ylim(bottom=0)\nax2.set_ylabel(\"Number of labeled samples\", color=\"g\")\nax2.tick_params(\"y\", colors=\"g\")\n\nax3 = plt.subplot(212, sharex=ax1)\nax3.errorbar(\n    x_values,\n    amount_iterations.mean(axis=1),\n    yerr=amount_iterations.std(axis=1),\n    capsize=2,\n    color=\"b\",\n)\nax3.set_ylim(bottom=0)\nax3.set_ylabel(\"Number of iterations\")\nax3.set_xlabel(\"Threshold\")\n\nplt.show()" ] }
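, { "cell_type": "markdown", "metadata": {}, "source": [ "As a sanity check of the last observation, the cell below gives a minimal\nsketch of the plain supervised baseline: an `SVC` with the same hyperparameters,\ncross-validated on the labeled samples only. Its mean accuracy should roughly\nmatch the self-training accuracy at the highest threshold, where no\nself-labeling takes place.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Supervised baseline: fit the same SVC on the labeled samples only, using the\n# same 3-fold stratified splits as above.\nbaseline_scores = np.empty(n_splits)\nfor fold, (train_index, test_index) in enumerate(\n    StratifiedKFold(n_splits=n_splits).split(X, y)\n):\n    labeled = y[train_index] != -1\n    baseline_clf = SVC(probability=True, gamma=0.001, random_state=42)\n    baseline_clf.fit(X[train_index][labeled], y[train_index][labeled])\n    baseline_scores[fold] = accuracy_score(\n        y_true[test_index], baseline_clf.predict(X[test_index])\n    )\nprint(f\"Supervised baseline accuracy: {baseline_scores.mean():.3f}\")" ] }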
], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.21" } }, "nbformat": 4, "nbformat_minor": 0 }