{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# In Depth - Decision Trees and Forests" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "%matplotlib inline\n", "import numpy as np\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we'll explore a class of algorithms based on decision trees.\n", "Decision trees at their root are extremely intuitive. They\n", "encode a series of \"if\" and \"else\" choices, similar to how a person might make a decision.\n", "However, which questions to ask, and how to proceed for each answer is entirely learned from the data.\n", "\n", "For example, if you wanted to create a guide to identifying an animal found in nature, you\n", "might ask the following series of questions:\n", "\n", "- Is the animal bigger or smaller than a meter long?\n", " + *bigger*: does the animal have horns?\n", " - *yes*: are the horns longer than ten centimeters?\n", " - *no*: is the animal wearing a collar\n", " + *smaller*: does the animal have two or four legs?\n", " - *two*: does the animal have wings?\n", " - *four*: does the animal have a bushy tail?\n", "\n", "and so on. This binary splitting of questions is the essence of a decision tree." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One of the main benefit of tree-based models is that they require little preprocessing of the data.\n", "They can work with variables of different types (continuous and discrete) and are invariant to scaling of the features.\n", "\n", "Another benefit is that tree-based models are what is called \"nonparametric\", which means they don't have a fix set of parameters to learn. Instead, a tree model can become more and more flexible, if given more data.\n", "In other words, the number of free parameters grows with the number of samples and is not fixed, as for example in linear models.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Decision Tree Regression" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A decision tree is a simple binary classification tree that is\n", "similar to nearest neighbor classification. It can be used as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from figures import make_dataset\n", "x, y = make_dataset()\n", "X = x.reshape(-1, 1)\n", "\n", "plt.figure()\n", "plt.xlabel('Feature X')\n", "plt.ylabel('Target y')\n", "plt.scatter(X, y);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.tree import DecisionTreeRegressor\n", "\n", "reg = DecisionTreeRegressor(max_depth=5)\n", "reg.fit(X, y)\n", "\n", "X_fit = np.linspace(-3, 3, 1000).reshape((-1, 1))\n", "y_fit_1 = reg.predict(X_fit)\n", "\n", "plt.figure()\n", "plt.plot(X_fit.ravel(), y_fit_1, color='tab:blue', label=\"prediction\")\n", "plt.plot(X.ravel(), y, 'C7.', label=\"training data\")\n", "plt.legend(loc=\"best\");" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A single decision tree allows us to estimate the signal in a non-parametric way,\n", "but clearly has some issues. In some regions, the model shows high bias and\n", "under-fits the data.\n", "(seen in the long flat lines which don't follow the contours of the data),\n", "while in other regions the model shows high variance and over-fits the data\n", "(reflected in the narrow spikes which are influenced by noise in single points)." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Decision Tree Classification\n", "==================\n", "Decision tree classification work very similarly, by assigning all points within a leaf the majority class in that leaf:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.datasets import make_blobs\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.tree import DecisionTreeClassifier\n", "from figures import plot_2d_separator\n", "from figures import cm2\n", "\n", "\n", "X, y = make_blobs(centers=[[0, 0], [1, 1]], random_state=61526, n_samples=100)\n", "X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)\n", "\n", "clf = DecisionTreeClassifier(max_depth=5)\n", "clf.fit(X_train, y_train)\n", "\n", "plt.figure()\n", "plot_2d_separator(clf, X, fill=True)\n", "plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm2, s=60, alpha=.7, edgecolor='k')\n", "plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm2, s=60, edgecolor='k');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are many parameter that control the complexity of a tree, but the one that might be easiest to understand is the maximum depth. This limits how finely the tree can partition the input space, or how many \"if-else\" questions can be asked before deciding which class a sample lies in.\n", "\n", "This parameter is important to tune for trees and tree-based models. The interactive plot below shows how underfit and overfit looks like for this model. Having a ``max_depth`` of 1 is clearly an underfit model, while a depth of 7 or 8 clearly overfits. The maximum depth a tree can be grown at for this dataset is 8, at which point each leave only contains samples from a single class. This is known as all leaves being \"pure.\"\n", "\n", "In the interactive plot below, the regions are assigned blue and red colors to indicate the predicted class for that region. The shade of the color indicates the predicted probability for that class (darker = higher probability), while yellow regions indicate an equal predicted probability for either class." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from figures import plot_tree\n", "max_depth = 3\n", "plot_tree(max_depth=max_depth)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Decision trees are fast to train, easy to understand, and often lead to interpretable models. However, single trees often tend to overfit the training data. Playing with the slider above you might notice that the model starts to overfit even before it has a good separation between the classes.\n", "\n", "Therefore, in practice it is more common to combine multiple trees to produce models that generalize better. The most common methods for combining trees are random forests and gradient boosted trees.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Random Forests" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Random forests are simply many trees, built on different random subsets (drawn with replacement) of the data, and using different random subsets (drawn without replacement) of the features for each split.\n", "This makes the trees different from each other, and makes them overfit to different aspects. 
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from figures import plot_forest\n", "max_depth = 3\n", "plot_forest(max_depth=max_depth)" ] },
 { "cell_type": "markdown", "metadata": {}, "source": [ "## Selecting the Optimal Estimator via Cross-Validation" ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import GridSearchCV\n", "from sklearn.datasets import load_digits\n", "from sklearn.ensemble import RandomForestClassifier\n", "\n", "digits = load_digits()\n", "X, y = digits.data, digits.target\n", "\n", "X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)\n", "\n", "rf = RandomForestClassifier(n_estimators=200)\n", "parameters = {'max_features': ['sqrt', 'log2', 10],\n", "              'max_depth': [5, 7, 9]}\n", "\n", "clf_grid = GridSearchCV(rf, parameters, n_jobs=-1)\n", "clf_grid.fit(X_train, y_train)" ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "clf_grid.score(X_train, y_train)" ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "clf_grid.score(X_test, y_test)" ] },
 { "cell_type": "markdown", "metadata": {}, "source": [ "## Another option: Gradient Boosting" ] },
 { "cell_type": "markdown", "metadata": {}, "source": [ "Another ensemble method that can be useful is *boosting*: here, rather than\n", "building 200 (say) estimators in parallel, we construct a chain of 200 estimators\n", "in which each one iteratively refines the results of the previous ones.\n", "The idea is that by sequentially applying many fast, simple models, we can obtain a\n", "total model error which is lower than that of any of the individual pieces." ] },
 { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from sklearn.ensemble import GradientBoostingClassifier\n", "\n", "clf = GradientBoostingClassifier(n_estimators=100, max_depth=5, learning_rate=.2)\n", "clf.fit(X_train, y_train)\n", "\n", "print(clf.score(X_train, y_train))\n", "print(clf.score(X_test, y_test))" ] },
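 { "cell_type": "markdown", "metadata": {}, "source": [ "To see this sequential refinement at work, we can evaluate the test accuracy after each boosting stage via ``staged_predict`` (a minimal sketch; ``clf``, ``X_test`` and ``y_test`` are the gradient boosting model and the digits test split from the cells above):" ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import accuracy_score\n", "\n", "# Test-set accuracy after each boosting stage\n", "staged_accuracy = [accuracy_score(y_test, y_pred)\n", "                   for y_pred in clf.staged_predict(X_test)]\n", "\n", "plt.figure()\n", "plt.plot(range(1, len(staged_accuracy) + 1), staged_accuracy)\n", "plt.xlabel('number of boosting stages')\n", "plt.ylabel('test accuracy');" ] },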
\n", " EXERCISE: Cross-validating Gradient Boosting:\n", " \n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from sklearn.datasets import load_digits\n", "from sklearn.ensemble import GradientBoostingClassifier\n", "\n", "digits = load_digits()\n", "X_digits, y_digits = digits.data, digits.target\n", "\n", "# split the dataset, apply grid-search" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# %load solutions/18_gbc_grid.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Feature importance\n", "\n", "Both RandomForest and GradientBoosting objects expose a `feature_importances_` attribute when fitted. This attribute is one of the most powerful feature of these models. They basically quantify how much each feature contributes to gain in performance in the nodes of the different trees." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X, y = X_digits[y_digits < 2], y_digits[y_digits < 2]\n", "\n", "rf = RandomForestClassifier(n_estimators=300, n_jobs=1)\n", "rf.fit(X, y)\n", "print(rf.feature_importances_) # one value per feature" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure()\n", "plt.imshow(rf.feature_importances_.reshape(8, 8), cmap=plt.cm.viridis, interpolation='nearest')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.4" } }, "nbformat": 4, "nbformat_minor": 2 }