{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Post pruning decision trees with cost complexity pruning\n\n.. currentmodule:: sklearn.tree\n\nThe :class:`DecisionTreeClassifier` provides parameters such as\n``min_samples_leaf`` and ``max_depth`` to prevent a tree from overfiting. Cost\ncomplexity pruning provides another option to control the size of a tree. In\n:class:`DecisionTreeClassifier`, this pruning technique is parameterized by the\ncost complexity parameter, ``ccp_alpha``. Greater values of ``ccp_alpha``\nincrease the number of nodes pruned. Here we only show the effect of\n``ccp_alpha`` on regularizing the trees and how to choose a ``ccp_alpha``\nbased on validation scores.\n\nSee also `minimal_cost_complexity_pruning` for details on pruning.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause\n\nimport matplotlib.pyplot as plt\n\nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.tree import DecisionTreeClassifier" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Total impurity of leaves vs effective alphas of pruned tree\nMinimal cost complexity pruning recursively finds the node with the \"weakest\nlink\". The weakest link is characterized by an effective alpha, where the\nnodes with the smallest effective alpha are pruned first. To get an idea of\nwhat values of ``ccp_alpha`` could be appropriate, scikit-learn provides\n:func:`DecisionTreeClassifier.cost_complexity_pruning_path` that returns the\neffective alphas and the corresponding total leaf impurities at each step of\nthe pruning process. As alpha increases, more of the tree is pruned, which\nincreases the total impurity of its leaves.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "X, y = load_breast_cancer(return_X_y=True)\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)\n\nclf = DecisionTreeClassifier(random_state=0)\npath = clf.cost_complexity_pruning_path(X_train, y_train)\nccp_alphas, impurities = path.ccp_alphas, path.impurities" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the following plot, the maximum effective alpha value is removed, because\nit is the trivial tree with only one node.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig, ax = plt.subplots()\nax.plot(ccp_alphas[:-1], impurities[:-1], marker=\"o\", drawstyle=\"steps-post\")\nax.set_xlabel(\"effective alpha\")\nax.set_ylabel(\"total impurity of leaves\")\nax.set_title(\"Total Impurity vs effective alpha for training set\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we train a decision tree using the effective alphas. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Next, we train a decision tree using the effective alphas. The last value\nin ``ccp_alphas`` is the alpha value that prunes the whole tree,\nleaving the tree, ``clfs[-1]``, with one node.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "clfs = []\nfor ccp_alpha in ccp_alphas:\n    clf = DecisionTreeClassifier(random_state=0, ccp_alpha=ccp_alpha)\n    clf.fit(X_train, y_train)\n    clfs.append(clf)\nprint(\n    \"Number of nodes in the last tree is: {} with ccp_alpha: {}\".format(\n        clfs[-1].tree_.node_count, ccp_alphas[-1]\n    )\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the remainder of this example, we remove the last element in\n``clfs`` and ``ccp_alphas``, because it corresponds to the trivial tree with\nonly one node. Here we show that the number of nodes and the tree depth\ndecrease as alpha increases.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "clfs = clfs[:-1]\nccp_alphas = ccp_alphas[:-1]\n\nnode_counts = [clf.tree_.node_count for clf in clfs]\ndepth = [clf.tree_.max_depth for clf in clfs]\nfig, ax = plt.subplots(2, 1)\nax[0].plot(ccp_alphas, node_counts, marker=\"o\", drawstyle=\"steps-post\")\nax[0].set_xlabel(\"alpha\")\nax[0].set_ylabel(\"number of nodes\")\nax[0].set_title(\"Number of nodes vs alpha\")\nax[1].plot(ccp_alphas, depth, marker=\"o\", drawstyle=\"steps-post\")\nax[1].set_xlabel(\"alpha\")\nax[1].set_ylabel(\"depth of tree\")\nax[1].set_title(\"Depth vs alpha\")\nfig.tight_layout()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Accuracy vs alpha for training and testing sets\nWith ``ccp_alpha`` set to zero and the other parameters of\n:class:`DecisionTreeClassifier` kept at their defaults, the tree overfits,\nleading to 100% training accuracy and 88% testing accuracy. As alpha\nincreases, more of the tree is pruned, thus creating a decision tree that\ngeneralizes better. In this example, setting ``ccp_alpha=0.015`` maximizes\nthe testing accuracy.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "train_scores = [clf.score(X_train, y_train) for clf in clfs]\ntest_scores = [clf.score(X_test, y_test) for clf in clfs]\n\nfig, ax = plt.subplots()\nax.set_xlabel(\"alpha\")\nax.set_ylabel(\"accuracy\")\nax.set_title(\"Accuracy vs alpha for training and testing sets\")\nax.plot(ccp_alphas, train_scores, marker=\"o\", label=\"train\", drawstyle=\"steps-post\")\nax.plot(ccp_alphas, test_scores, marker=\"o\", label=\"test\", drawstyle=\"steps-post\")\nax.legend()\nplt.show()" ] }
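, { "cell_type": "markdown", "metadata": {}, "source": [ "As a complementary sketch, the candidate alphas from the pruning path can\nalso serve as a parameter grid for cross-validation, which avoids choosing\n``ccp_alpha`` by inspecting the test set. Here\n:class:`~sklearn.model_selection.GridSearchCV` refits a tree for each\ncandidate alpha and keeps the one with the best mean validation accuracy;\nthe 5-fold split is an arbitrary choice for this sketch.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.model_selection import GridSearchCV\n\n# A sketch of a cross-validated choice of ccp_alpha over the candidates\n# found earlier; cv=5 and the default accuracy scoring are arbitrary choices.\ngrid = GridSearchCV(\n    DecisionTreeClassifier(random_state=0),\n    param_grid={\"ccp_alpha\": ccp_alphas},\n    cv=5,\n)\ngrid.fit(X_train, y_train)\nprint(\"Best ccp_alpha:\", grid.best_params_[\"ccp_alpha\"])\nprint(\"Test accuracy:\", grid.best_estimator_.score(X_test, y_test))" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.21" } }, "nbformat": 4, "nbformat_minor": 0 }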