{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 07 - Ensemble Methods\n", "\n", "by [Alejandro Correa Bahnsen](http://www.albahnsen.com/) & [Iván Torroledo](http://www.ivantorroledo.com/)\n", "\n", "version 1.3, June 2018\n", "\n", "## Part of the class [Applied Deep Learning](https://github.com/albahnsen/AppliedDeepLearningClass)\n", "\n", "\n", "This notebook is licensed under a [Creative Commons Attribution-ShareAlike 3.0 Unported License](http://creativecommons.org/licenses/by-sa/3.0/deed.en_US). Special thanks goes to [Kevin Markham](https://github.com/justmarkham)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Why are we learning about ensembling?\n", "\n", "- Very popular method for improving the predictive performance of machine learning models\n", "- Provides a foundation for understanding more sophisticated models" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Lesson objectives\n", "\n", "Students will be able to:\n", "\n", "- Define ensembling and its requirements\n", "- Identify the two basic methods of ensembling\n", "- Decide whether manual ensembling is a useful approach for a given problem\n", "- Explain bagging and how it can be applied to decision trees\n", "- Explain how out-of-bag error and feature importances are calculated from bagged trees\n", "- Explain the difference between bagged trees and Random Forests\n", "- Build and tune a Random Forest model in scikit-learn\n", "- Decide whether a decision tree or a Random Forest is a better model for a given problem" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Part 1: Introduction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ensemble learning is a widely studied topic in the machine learning community. 
The main idea behind \n", "the ensemble methodology is to combine several individual base classifiers in order to obtain a \n", "classifier that outperforms each of them.\n", "\n", "Nowadays, ensemble methods are among \n", "the most popular and well-studied machine learning techniques; notably, since 2009 every first-place and second-place winner of the [KDD Cup](https://www.sigkdd.org/kddcup/) has used ensemble methods. The core \n", "principle of ensemble learning is to induce random perturbations into the learning procedure in \n", "order to produce several different base classifiers from a single training set, and then to combine those \n", "base classifiers to make the final prediction. Several methods have been proposed to induce these random perturbations \n", "and thereby create the different base classifiers, in \n", "particular: \n", "* bagging\n", "* pasting\n", "* random forests \n", "* random patches \n", "\n", "Finally, after the base classifiers \n", "are trained, they are typically combined using either:\n", "* majority voting\n", "* weighted voting \n", "* stacking\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are three main reasons why ensemble \n", "methods perform better than single models: statistical, computational and representational. First, from a statistical point of view, when the learning set is too \n", "small, an algorithm can find several good models within the search space that achieve the same \n", "performance on the training set $\\mathcal{S}$. Nevertheless, without a validation set, there is \n", "a risk of choosing the wrong model. The second reason is computational; in general, algorithms \n", "rely on some local search optimization and may get stuck in local optima. An ensemble may \n", "mitigate this by combining base classifiers obtained from different starting points in the search space. The last \n", "reason is representational. 
In most cases, for a learning set of finite size, the true function \n", "$f$ cannot be represented by any of the candidate models. By combining several models in an \n", "ensemble, it may be possible to obtain a model with a larger coverage across the space of \n", "representable functions." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![s](images/ch9_fig1.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Example\n", "\n", "Let's pretend that instead of building a single model to solve a binary classification problem, you created **five independent models**, and each model was correct about 70% of the time. If you combined these models into an \"ensemble\" and used their majority vote as a prediction, how often would the ensemble be correct?" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[0 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 0 1 1]\n", "[1 1 1 1 1 1 1 0 1 0 0 0 1 1 1 0 1 0 0 0]\n", "[1 1 1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 0 1 1]\n", "[1 1 0 0 0 0 1 1 0 1 1 1 1 1 1 0 1 1 1 0]\n", "[0 0 1 0 0 0 1 0 1 0 0 0 1 1 1 1 1 1 1 1]\n" ] } ], "source": [ "import numpy as np\n", "\n", "# set a seed for reproducibility\n", "np.random.seed(1234)\n", "\n", "# generate 1000 random numbers (between 0 and 1) for each model, representing 1000 observations\n", "mod1 = np.random.rand(1000)\n", "mod2 = np.random.rand(1000)\n", "mod3 = np.random.rand(1000)\n", "mod4 = np.random.rand(1000)\n", "mod5 = np.random.rand(1000)\n", "\n", "# each model independently predicts 1 (the \"correct response\") if random number was at least 0.3\n", "preds1 = np.where(mod1 > 0.3, 1, 0)\n", "preds2 = np.where(mod2 > 0.3, 1, 0)\n", "preds3 = np.where(mod3 > 0.3, 1, 0)\n", "preds4 = np.where(mod4 > 0.3, 1, 0)\n", "preds5 = np.where(mod5 > 0.3, 1, 0)\n", "\n", "# print the first 20 predictions from each model\n", "print(preds1[:20])\n", "print(preds2[:20])\n", "print(preds3[:20])\n", 
"print(preds4[:20])\n", "print(preds5[:20])" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[1 1 1 1 0 0 1 0 1 1 1 1 1 1 1 1 1 0 1 1]\n" ] } ], "source": [ "# average the predictions and then round to 0 or 1\n", "ensemble_preds = np.round((preds1 + preds2 + preds3 + preds4 + preds5)/5.0).astype(int)\n", "\n", "# print the ensemble's first 20 predictions\n", "print(ensemble_preds[:20])" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.713\n", "0.665\n", "0.717\n", "0.712\n", "0.687\n" ] } ], "source": [ "# how accurate was each individual model?\n", "print(preds1.mean())\n", "print(preds2.mean())\n", "print(preds3.mean())\n", "print(preds4.mean())\n", "print(preds5.mean())" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.841\n" ] } ], "source": [ "# how accurate was the ensemble?\n", "print(ensemble_preds.mean())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note:** As you add more independent models to the voting process, each one more accurate than chance, the probability that the majority vote is wrong decreases; this result is known as [Condorcet's Jury Theorem](http://en.wikipedia.org/wiki/Condorcet%27s_jury_theorem)."
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## What is ensembling?\n", "\n", "**Ensemble learning (or \"ensembling\")** is the process of combining several predictive models in order to produce a combined model that is more accurate than any individual model.\n", "\n", "- **Regression:** take the average of the predictions\n", "- **Classification:** take a vote and use the most common prediction, or take the average of the predicted probabilities\n", "\n", "For ensembling to work well, the models must have the following characteristics:\n", "\n", "- **Accurate:** they outperform the null model\n", "- **Independent:** their predictions are generated using different processes\n", "\n", "**The big idea:** If you have a collection of individually imperfect (and independent) models, the \"one-off\" mistakes made by each model are probably not going to be made by the rest of the models, and thus the mistakes will be discarded when averaging the models.\n", "\n", "There are two basic **methods for ensembling:**\n", "\n", "- Manually ensemble your individual models\n", "- Use a model that ensembles for you" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Theoretical performance of an ensemble\n", " If we assume that each one of the $T$ base classifiers has a probability $\\rho$ of \n", " being correct, the probability of an ensemble making the correct decision, assuming independence, \n", " denoted by $P_c$, can be calculated using the binomial distribution\n", "\n", "$$P_c = \\sum_{j>T/2}^{T} {{T}\\choose{j}} \\rho^j(1-\\rho)^{T-j}.$$\n", "\n", " Furthermore, it can be shown that if $T\\ge3$ then:\n", "\n", "$$\n", " \\lim_{T \\to \\infty} P_c= \\begin{cases} \n", " 1 &\\mbox{if } \\rho>0.5 \\\\ \n", " 0 &\\mbox{if } \\rho<0.5 \\\\ \n", " 0.5 &\\mbox{if } \\rho=0.5 ,\n", " \\end{cases}\n", "$$\n", "\tleading to the conclusion that \n", "$$\n", " \\rho \\ge 0.5 \\quad \\text{and} \\quad T\\ge3 \\quad \\Rightarrow \\quad P_c\\ge \\rho.\n", "$$" ] }, { 
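The formula above can be checked numerically. The sketch below (plain Python, using `math.comb`) computes $P_c$ for the five-model, 70%-accurate scenario simulated earlier, and shows $P_c$ rising toward 1 as $T$ grows, as the limit predicts. The helper name `ensemble_accuracy` is ours, not from the lesson.

```python
from math import comb

def ensemble_accuracy(rho, T):
    """Probability that a majority vote of T independent classifiers,
    each correct with probability rho, is correct (T odd)."""
    return sum(comb(T, j) * rho**j * (1 - rho)**(T - j)
               for j in range(T // 2 + 1, T + 1))

# five independent models, each 70% accurate (the example above)
print(round(ensemble_accuracy(0.7, 5), 3))   # close to the simulated estimate of 0.841

# P_c keeps rising as T grows when rho > 0.5
for T in (5, 11, 21, 51):
    print(T, round(ensemble_accuracy(0.7, T), 3))
```

The exact value for $\rho=0.7$, $T=5$ is about 0.837; the earlier simulation's 0.841 is the Monte Carlo estimate of the same quantity.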
"cell_type": "markdown", "metadata": {}, "source": [ "# Part 2: Manual ensembling\n", "\n", "What makes a good manual ensemble?\n", "\n", "- Different types of **models**\n", "- Different combinations of **features**\n", "- Different **tuning parameters**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Machine learning flowchart](https://raw.githubusercontent.com/justmarkham/DAT8/master/notebooks/images/crowdflower_ensembling.jpg)\n", "\n", "*Machine learning flowchart created by the [winner](https://github.com/ChenglongChen/Kaggle_CrowdFlower) of Kaggle's [CrowdFlower competition](https://www.kaggle.com/c/crowdflower-search-relevance)*" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# read in and prepare the vehicle training and testing data\n", "import zipfile\n", "import pandas as pd\n", "\n", "with zipfile.ZipFile('../datasets/vehicles_train.csv.zip', 'r') as z:\n", "    f = z.open('vehicles_train.csv')\n", "    train = pd.read_csv(f, index_col=False)\n", "with zipfile.ZipFile('../datasets/vehicles_test.csv.zip', 'r') as z:\n", "    f = z.open('vehicles_test.csv')\n", "    test = pd.read_csv(f, index_col=False)\n", "\n", "# encode vtype as a binary feature (car=0, truck=1)\n", "train['vtype'] = train.vtype.map({'car':0, 'truck':1})\n", "test['vtype'] = test.vtype.map({'car':0, 'truck':1})" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
<div>\n", "<table border=\"1\" class=\"dataframe\">\n", "  <thead>\n", "    <tr style=\"text-align: right;\">\n", "      <th></th>\n", "      <th>price</th>\n", "      <th>year</th>\n", "      <th>miles</th>\n", "      <th>doors</th>\n", "      <th>vtype</th>\n", "    </tr>\n", "  </thead>\n", "  <tbody>\n", "    <tr><th>13</th><td>1300</td><td>1997</td><td>138000</td><td>4</td><td>0</td></tr>\n", "    <tr><th>2</th><td>13000</td><td>2010</td><td>73500</td><td>4</td><td>0</td></tr>\n", "    <tr><th>12</th><td>1800</td><td>1999</td><td>163000</td><td>2</td><td>1</td></tr>\n", "    <tr><th>2</th><td>13000</td><td>2010</td><td>73500</td><td>4</td><td>0</td></tr>\n", "    <tr><th>6</th><td>3000</td><td>2004</td><td>177000</td><td>4</td><td>0</td></tr>\n", "    <tr><th>1</th><td>14000</td><td>2010</td><td>30000</td><td>2</td><td>0</td></tr>\n", "    <tr><th>3</th><td>9500</td><td>2009</td><td>78000</td><td>4</td><td>0</td></tr>\n", "    <tr><th>10</th><td>2500</td><td>2003</td><td>190000</td><td>2</td><td>1</td></tr>\n", "    <tr><th>11</th><td>5000</td><td>2001</td><td>62000</td><td>4</td><td>0</td></tr>\n", "    <tr><th>9</th><td>1900</td><td>2003</td><td>160000</td><td>4</td><td>0</td></tr>\n", "    <tr><th>6</th><td>3000</td><td>2004</td><td>177000</td><td>4</td><td>0</td></tr>\n", "    <tr><th>1</th><td>14000</td><td>2010</td><td>30000</td><td>2</td><td>0</td></tr>\n", "    <tr><th>0</th><td>22000</td><td>2012</td><td>13000</td><td>2</td><td>0</td></tr>\n", "    <tr><th>1</th><td>14000</td><td>2010</td><td>30000</td><td>2</td><td>0</td></tr>\n", "  </tbody>\n", "</table>\n", "</div>
" ], "text/plain": [ " price year miles doors vtype\n", "13 1300 1997 138000 4 0\n", "2 13000 2010 73500 4 0\n", "12 1800 1999 163000 2 1\n", "2 13000 2010 73500 4 0\n", "6 3000 2004 177000 4 0\n", "1 14000 2010 30000 2 0\n", "3 9500 2009 78000 4 0\n", "10 2500 2003 190000 2 1\n", "11 5000 2001 62000 4 0\n", "9 1900 2003 160000 4 0\n", "6 3000 2004 177000 4 0\n", "1 14000 2010 30000 2 0\n", "0 22000 2012 13000 2 0\n", "1 14000 2010 30000 2 0" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# show the rows for the first decision tree\n", "train.iloc[samples[0], :]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Build one tree for each bootstrap sample" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from sklearn.tree import DecisionTreeRegressor\n", "\n", "# grow each tree deep\n", "treereg = DecisionTreeRegressor(max_depth=None, random_state=123)\n", "\n", "# DataFrame for storing predicted price from each tree\n", "y_pred = pd.DataFrame(index=test.index, columns=list(range(n_B)))\n", "\n", "# grow one tree for each bootstrap sample and make predictions on testing data\n", "for i, sample in enumerate(samples):\n", "    X_train = train.iloc[sample, 1:]\n", "    y_train = train.iloc[sample, 0]\n", "    treereg.fit(X_train, y_train)\n", "    y_pred[i] = treereg.predict(X_test)" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
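The manual bootstrap loop above can also be expressed with scikit-learn's `BaggingRegressor`, which draws the bootstrap samples and fits one tree per sample for you. This is a minimal sketch on synthetic data (the array shapes and signal are our assumptions, standing in for the vehicle dataset):

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

# synthetic stand-in for the vehicle data: 4 features, noisy linear target
rng = np.random.RandomState(123)
X = rng.rand(100, 4)
y = 10 * X[:, 0] + rng.randn(100)

# 10 deep trees, each fit on a bootstrap sample (bootstrap=True is the default)
bag = BaggingRegressor(DecisionTreeRegressor(max_depth=None, random_state=123),
                       n_estimators=10, bootstrap=True, oob_score=True,
                       random_state=123)
bag.fit(X, y)

# out-of-bag R^2: each observation is scored only by trees that did not see it
print(bag.oob_score_)
```

Setting `oob_score=True` gives a validation-free estimate of generalization error, the same idea as the out-of-bag error discussed in the lesson objectives.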
<div>\n", "<table border=\"1\" class=\"dataframe\">\n", "  <thead>\n", "    <tr style=\"text-align: right;\"><th></th><th>AtBat</th><th>Hits</th><th>HmRun</th><th>Runs</th><th>RBI</th><th>Walks</th><th>Years</th><th>CAtBat</th><th>CHits</th><th>CHmRun</th><th>CRuns</th><th>CRBI</th><th>CWalks</th><th>League</th><th>Division</th><th>PutOuts</th><th>Assists</th><th>Errors</th><th>Salary</th><th>NewLeague</th></tr>\n", "  </thead>\n", "  <tbody>\n", "    <tr><th>1</th><td>315</td><td>81</td><td>7</td><td>24</td><td>38</td><td>39</td><td>14</td><td>3449</td><td>835</td><td>69</td><td>321</td><td>414</td><td>375</td><td>N</td><td>W</td><td>632</td><td>43</td><td>10</td><td>475.0</td><td>N</td></tr>\n", "    <tr><th>2</th><td>479</td><td>130</td><td>18</td><td>66</td><td>72</td><td>76</td><td>3</td><td>1624</td><td>457</td><td>63</td><td>224</td><td>266</td><td>263</td><td>A</td><td>W</td><td>880</td><td>82</td><td>14</td><td>480.0</td><td>A</td></tr>\n", "    <tr><th>3</th><td>496</td><td>141</td><td>20</td><td>65</td><td>78</td><td>37</td><td>11</td><td>5628</td><td>1575</td><td>225</td><td>828</td><td>838</td><td>354</td><td>N</td><td>E</td><td>200</td><td>11</td><td>3</td><td>500.0</td><td>N</td></tr>\n", "    <tr><th>4</th><td>321</td><td>87</td><td>10</td><td>39</td><td>42</td><td>30</td><td>2</td><td>396</td><td>101</td><td>12</td><td>48</td><td>46</td><td>33</td><td>N</td><td>E</td><td>805</td><td>40</td><td>4</td><td>91.5</td><td>N</td></tr>\n", "    <tr><th>5</th><td>594</td><td>169</td><td>4</td><td>74</td><td>51</td><td>35</td><td>11</td><td>4408</td><td>1133</td><td>19</td><td>501</td><td>336</td><td>194</td><td>A</td><td>W</td><td>282</td><td>421</td><td>25</td><td>750.0</td><td>A</td></tr>\n", "  </tbody>\n", "</table>\n", "</div>
" ], "text/plain": [ " AtBat Hits HmRun Runs RBI Walks Years CAtBat CHits CHmRun CRuns \\\n", "1 315 81 7 24 38 39 14 3449 835 69 321 \n", "2 479 130 18 66 72 76 3 1624 457 63 224 \n", "3 496 141 20 65 78 37 11 5628 1575 225 828 \n", "4 321 87 10 39 42 30 2 396 101 12 48 \n", "5 594 169 4 74 51 35 11 4408 1133 19 501 \n", "\n", " CRBI CWalks League Division PutOuts Assists Errors Salary NewLeague \n", "1 414 375 N W 632 43 10 475.0 N \n", "2 266 263 A W 880 82 14 480.0 A \n", "3 838 354 N E 200 11 3 500.0 N \n", "4 46 33 N E 805 40 4 91.5 N \n", "5 336 194 A W 282 421 25 750.0 A " ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# read in the data\n", "with zipfile.ZipFile('../datasets/hitters.csv.zip', 'r') as z:\n", " f = z.open('hitters.csv')\n", " hitters = pd.read_csv(f, sep=',', index_col=False)\n", "\n", "# remove rows with missing values\n", "hitters.dropna(inplace=True)\n", "hitters.head()" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
<div>\n", "<table border=\"1\" class=\"dataframe\">\n", "  <thead>\n", "    <tr style=\"text-align: right;\"><th></th><th>AtBat</th><th>Hits</th><th>HmRun</th><th>Runs</th><th>RBI</th><th>Walks</th><th>Years</th><th>CAtBat</th><th>CHits</th><th>CHmRun</th><th>CRuns</th><th>CRBI</th><th>CWalks</th><th>League</th><th>Division</th><th>PutOuts</th><th>Assists</th><th>Errors</th><th>Salary</th><th>NewLeague</th></tr>\n", "  </thead>\n", "  <tbody>\n", "    <tr><th>1</th><td>315</td><td>81</td><td>7</td><td>24</td><td>38</td><td>39</td><td>14</td><td>3449</td><td>835</td><td>69</td><td>321</td><td>414</td><td>375</td><td>0</td><td>0</td><td>632</td><td>43</td><td>10</td><td>475.0</td><td>0</td></tr>\n", "    <tr><th>2</th><td>479</td><td>130</td><td>18</td><td>66</td><td>72</td><td>76</td><td>3</td><td>1624</td><td>457</td><td>63</td><td>224</td><td>266</td><td>263</td><td>1</td><td>0</td><td>880</td><td>82</td><td>14</td><td>480.0</td><td>1</td></tr>\n", "    <tr><th>3</th><td>496</td><td>141</td><td>20</td><td>65</td><td>78</td><td>37</td><td>11</td><td>5628</td><td>1575</td><td>225</td><td>828</td><td>838</td><td>354</td><td>0</td><td>1</td><td>200</td><td>11</td><td>3</td><td>500.0</td><td>0</td></tr>\n", "    <tr><th>4</th><td>321</td><td>87</td><td>10</td><td>39</td><td>42</td><td>30</td><td>2</td><td>396</td><td>101</td><td>12</td><td>48</td><td>46</td><td>33</td><td>0</td><td>1</td><td>805</td><td>40</td><td>4</td><td>91.5</td><td>0</td></tr>\n", "    <tr><th>5</th><td>594</td><td>169</td><td>4</td><td>74</td><td>51</td><td>35</td><td>11</td><td>4408</td><td>1133</td><td>19</td><td>501</td><td>336</td><td>194</td><td>1</td><td>0</td><td>282</td><td>421</td><td>25</td><td>750.0</td><td>1</td></tr>\n", "  </tbody>\n", "</table>\n", "</div>\n", "<div>\n", "<table border=\"1\" class=\"dataframe\">\n", "  <thead>\n", "    <tr style=\"text-align: right;\"><th></th><th>feature</th><th>importance</th></tr>\n", "  </thead>\n", "  <tbody>\n", "    <tr><th>0</th><td>AtBat</td><td>0.000000</td></tr>\n", "    <tr><th>2</th><td>HmRun</td><td>0.000000</td></tr>\n", "    <tr><th>3</th><td>Runs</td><td>0.000000</td></tr>\n", "    <tr><th>4</th><td>RBI</td><td>0.000000</td></tr>\n", "    <tr><th>5</th><td>Walks</td><td>0.000000</td></tr>\n", "    <tr><th>7</th><td>League</td><td>0.000000</td></tr>\n", "    <tr><th>8</th><td>Division</td><td>0.000000</td></tr>\n", "    <tr><th>9</th><td>PutOuts</td><td>0.000000</td></tr>\n", "    <tr><th>10</th><td>Assists</td><td>0.000000</td></tr>\n", "    <tr><th>11</th><td>Errors</td><td>0.000000</td></tr>\n", "    <tr><th>12</th><td>NewLeague</td><td>0.000000</td></tr>\n", "    <tr><th>6</th><td>Years</td><td>0.488391</td></tr>\n", "    <tr><th>1</th><td>Hits</td><td>0.511609</td></tr>\n", "  </tbody>\n", "</table>\n", "</div>\n", "<div>\n", "<table border=\"1\" class=\"dataframe\">\n", "  <thead>\n", "    <tr style=\"text-align: right;\"><th></th><th>feature</th><th>importance</th></tr>\n", "  </thead>\n", "  <tbody>\n", "    <tr><th>7</th><td>League</td><td>0.003603</td></tr>\n", "    <tr><th>12</th><td>NewLeague</td><td>0.004290</td></tr>\n", "    <tr><th>8</th><td>Division</td><td>0.005477</td></tr>\n", "    <tr><th>10</th><td>Assists</td><td>0.023842</td></tr>\n", "    <tr><th>11</th><td>Errors</td><td>0.028618</td></tr>\n", "    <tr><th>2</th><td>HmRun</td><td>0.044607</td></tr>\n", "    <tr><th>9</th><td>PutOuts</td><td>0.060063</td></tr>\n", "    <tr><th>3</th><td>Runs</td><td>0.071800</td></tr>\n", "    <tr><th>0</th><td>AtBat</td><td>0.094592</td></tr>\n", "    <tr><th>4</th><td>RBI</td><td>0.130965</td></tr>\n", "    <tr><th>5</th><td>Walks</td><td>0.139899</td></tr>\n", "    <tr><th>1</th><td>Hits</td><td>0.145264</td></tr>\n", "    <tr><th>6</th><td>Years</td><td>0.246980</td></tr>\n", "  </tbody>\n", "</table>\n", "</div>
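The two importance tables above illustrate the contrast this section builds toward: a single deep tree credits nearly everything to `Years` and `Hits`, while a Random Forest spreads importance across many features. Below is a minimal sketch of how such importances are obtained in scikit-learn; the synthetic data and column names are our assumptions standing in for the hitters dataset.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# synthetic stand-in: 'Hits' and 'Years' drive the target, 'Errors' is noise
rng = np.random.RandomState(1)
X = pd.DataFrame(rng.rand(300, 3), columns=['Hits', 'Years', 'Errors'])
y = 5 * X['Hits'] + 3 * X['Years'] + 0.1 * rng.randn(300)

rf = RandomForestRegressor(n_estimators=100, random_state=1)
rf.fit(X, y)

# impurity-based importances, normalized to sum to 1
importances = pd.DataFrame({'feature': X.columns,
                            'importance': rf.feature_importances_})
print(importances.sort_values('importance'))
```

Because each tree in the forest sees a bootstrap sample and a random subset of candidate features at each split, correlated or secondary predictors still receive some credit, which is why the forest's importances are less concentrated than a single tree's.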