{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 11 - Ensemble Methods - Bagging\n", "\n", "\n", "by [Alejandro Correa Bahnsen](albahnsen.com/) and [Jesus Solano](https://github.com/jesugome)\n", "\n", "version 1.5, February 2019\n", "\n", "## Part of the class [Practical Machine Learning](https://github.com/albahnsen/PracticalMachineLearningClass)\n", "\n", "\n", "\n", "This notebook is licensed under a [Creative Commons Attribution-ShareAlike 3.0 Unported License](http://creativecommons.org/licenses/by-sa/3.0/deed.en_US). Special thanks goes to [Kevin Markham](https://github.com/justmarkham)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Why are we learning about ensembling?\n", "\n", "- Very popular method for improving the predictive performance of machine learning models\n", "- Provides a foundation for understanding more sophisticated models" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Lesson objectives\n", "\n", "Students will be able to:\n", "\n", "- Define ensembling and its requirements\n", "- Identify the two basic methods of ensembling\n", "- Decide whether manual ensembling is a useful approach for a given problem\n", "- Explain bagging and how it can be applied to decision trees\n", "- Explain how out-of-bag error and feature importances are calculated from bagged trees\n", "- Explain the difference between bagged trees and Random Forests\n", "- Build and tune a Random Forest model in scikit-learn\n", "- Decide whether a decision tree or a Random Forest is a better model for a given problem" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Part 1: Introduction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ensemble learning is a widely studied topic in the machine learning community. The main idea behind \n", "the ensemble methodology is to combine several individual base classifiers in order to have a \n", "classifier that outperforms each of them.\n", "\n", "Nowadays, ensemble methods are one \n", "of the most popular and well studied machine learning techniques, and it can be \n", "noted that since 2009 all the first-place and second-place winners of the KDD-Cup https://www.sigkdd.org/kddcup/ used ensemble methods. The core \n", "principle in ensemble learning, is to induce random perturbations into the learning procedure in \n", "order to produce several different base classifiers from a single training set, then combining the \n", "base classifiers in order to make the final prediction. In order to induce the random permutations \n", "and therefore create the different base classifiers, several methods have been proposed, in \n", "particular: \n", "* bagging\n", "* pasting\n", "* random forests \n", "* random patches \n", "\n", "Finally, after the base classifiers \n", "are trained, they are typically combined using either:\n", "* majority voting\n", "* weighted voting \n", "* stacking\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are three main reasons regarding why ensemble \n", "methods perform better than single models: statistical, computational and representational . First, from a statistical point of view, when the learning set is too \n", "small, an algorithm can find several good models within the search space, that arise to the same \n", "performance on the training set $\\mathcal{S}$. Nevertheless, without a validation set, there is \n", "a risk of choosing the wrong model. The second reason is computational; in general, algorithms \n", "rely on some local search optimization and may get stuck in a local optima. Then, an ensemble may \n", "solve this by focusing different algorithms to different spaces across the training set. The last \n", "reason is representational. In most cases, for a learning set of finite size, the true function \n", "$f$ cannot be represented by any of the candidate models. By combining several models in an \n", "ensemble, it may be possible to obtain a model with a larger coverage across the space of \n", "representable functions." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![](ch9_fig1.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Example\n", "\n", "Let's pretend that instead of building a single model to solve a binary classification problem, you created **five independent models**, and each model was correct about 70% of the time. If you combined these models into an \"ensemble\" and used their majority vote as a prediction, how often would the ensemble be correct?" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[0 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 0 1 1]\n", "[1 1 1 1 1 1 1 0 1 0 0 0 1 1 1 0 1 0 0 0]\n", "[1 1 1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 0 1 1]\n", "[1 1 0 0 0 0 1 1 0 1 1 1 1 1 1 0 1 1 1 0]\n", "[0 0 1 0 0 0 1 0 1 0 0 0 1 1 1 1 1 1 1 1]\n" ] } ], "source": [ "import numpy as np\n", "\n", "# set a seed for reproducibility\n", "np.random.seed(1234)\n", "\n", "# generate 1000 random numbers (between 0 and 1) for each model, representing 1000 observations\n", "mod1 = np.random.rand(1000)\n", "mod2 = np.random.rand(1000)\n", "mod3 = np.random.rand(1000)\n", "mod4 = np.random.rand(1000)\n", "mod5 = np.random.rand(1000)\n", "\n", "# each model independently predicts 1 (the \"correct response\") if random number was at least 0.3\n", "preds1 = np.where(mod1 > 0.3, 1, 0)\n", "preds2 = np.where(mod2 > 0.3, 1, 0)\n", "preds3 = np.where(mod3 > 0.3, 1, 0)\n", "preds4 = np.where(mod4 > 0.3, 1, 0)\n", "preds5 = np.where(mod5 > 0.3, 1, 0)\n", "\n", "# print the first 20 predictions from each model\n", "print(preds1[:20])\n", "print(preds2[:20])\n", "print(preds3[:20])\n", "print(preds4[:20])\n", "print(preds5[:20])" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[1 1 1 1 0 0 1 0 1 1 1 1 1 1 1 1 1 0 1 1]\n" ] } ], "source": [ "# average the predictions and then round to 0 or 1\n", "ensemble_preds = np.round((preds1 + preds2 + preds3 + preds4 + preds5)/5.0).astype(int)\n", "\n", "# print the ensemble's first 20 predictions\n", "print(ensemble_preds[:20])" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.713\n", "0.665\n", "0.717\n", "0.712\n", "0.687\n" ] } ], "source": [ "# how accurate was each individual model?\n", "print(preds1.mean())\n", "print(preds2.mean())\n", "print(preds3.mean())\n", "print(preds4.mean())\n", "print(preds5.mean())" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.841\n" ] } ], "source": [ "# how accurate was the ensemble?\n", "print(ensemble_preds.mean())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note:** As you add more models to the voting process, the probability of error decreases, which is known as [Condorcet's Jury Theorem](http://en.wikipedia.org/wiki/Condorcet%27s_jury_theorem)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## What is ensembling?\n", "\n", "**Ensemble learning (or \"ensembling\")** is the process of combining several predictive models in order to produce a combined model that is more accurate than any individual model.\n", "\n", "- **Regression:** take the average of the predictions\n", "- **Classification:** take a vote and use the most common prediction, or take the average of the predicted probabilities\n", "\n", "For ensembling to work well, the models must have the following characteristics:\n", "\n", "- **Accurate:** they outperform the null model\n", "- **Independent:** their predictions are generated using different processes\n", "\n", "**The big idea:** If you have a collection of individually imperfect (and independent) models, the \"one-off\" mistakes made by each model are probably not going to be made by the rest of the models, and thus the mistakes will be discarded when averaging the models.\n", "\n", "There are two basic **methods for ensembling:**\n", "\n", "- Manually ensemble your individual models\n", "- Use a model that ensembles for you" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Theoretical performance of an ensemble\n", " If we assume that each one of the $T$ base classifiers has a probability $\\rho$ of \n", " being correct, the probability of an ensemble making the correct decision, assuming independence, \n", " denoted by $P_c$, can be calculated using the binomial distribution\n", "\n", "$$P_c = \\sum_{j>T/2}^{T} {{T}\\choose{j}} \\rho^j(1-\\rho)^{T-j}.$$\n", "\n", " Furthermore, as shown, if $T\\ge3$ then:\n", "\n", "$$\n", " \\lim_{T \\to \\infty} P_c= \\begin{cases} \n", " 1 &\\mbox{if } \\rho>0.5 \\\\ \n", " 0 &\\mbox{if } \\rho<0.5 \\\\ \n", " 0.5 &\\mbox{if } \\rho=0.5 ,\n", " \\end{cases}\n", "$$\n", "\tleading to the conclusion that \n", "$$\n", " \\rho \\ge 0.5 \\quad \\text{and} \\quad T\\ge3 \\quad \\Rightarrow \\quad P_c\\ge \\rho.\n", "$$" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "# Part 2: Manual ensembling\n", "\n", "What makes a good manual ensemble?\n", "\n", "- Different types of **models**\n", "- Different combinations of **features**\n", "- Different **tuning parameters**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Machine learning flowchart](https://raw.githubusercontent.com/justmarkham/DAT8/master/notebooks/images/crowdflower_ensembling.jpg)\n", "\n", "*Machine learning flowchart created by the [winner](https://github.com/ChenglongChen/Kaggle_CrowdFlower) of Kaggle's [CrowdFlower competition](https://www.kaggle.com/c/crowdflower-search-relevance)*" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "# read in and prepare the vehicle training data\n", "import pandas as pd\n", "url = 'https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/datasets/vehicles_train.csv'\n", "train = pd.read_csv(url)\n", "train['vtype'] = train.vtype.map({'car':0, 'truck':1})\n", "# read in and prepare the vehicle testing data\n", "url = 'https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/datasets/vehicles_test.csv'\n", "test = pd.read_csv(url)\n", "test['vtype'] = test.vtype.map({'car':0, 'truck':1})" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
priceyearmilesdoorsvtype
02200020121300020
11400020103000020
21300020107350040
3950020097800040
4900020074700040
\n", "
" ], "text/plain": [ " price year miles doors vtype\n", "0 22000 2012 13000 2 0\n", "1 14000 2010 30000 2 0\n", "2 13000 2010 73500 4 0\n", "3 9500 2009 78000 4 0\n", "4 9000 2007 47000 4 0" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Train different models" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "from sklearn.linear_model import LinearRegression\n", "from sklearn.tree import DecisionTreeRegressor\n", "from sklearn.naive_bayes import GaussianNB\n", "from sklearn.neighbors import KNeighborsRegressor\n", "\n", "models = {'lr': LinearRegression(),\n", " 'dt': DecisionTreeRegressor(),\n", " 'nb': GaussianNB(),\n", " 'kn': KNeighborsRegressor()}" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "# Train all the models\n", "X_train = train.iloc[:, 1:]\n", "X_test = test.iloc[:, 1:]\n", "y_train = train.price\n", "y_test = test.price\n", "\n", "for model in models.keys():\n", " models[model].fit(X_train, y_train)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "# predict test for each model\n", "y_pred = pd.DataFrame(index=test.index, columns=models.keys())\n", "for model in models.keys():\n", " y_pred[model] = models[model].predict(X_test)\n", " " ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "lr 2138.3579028745116\n", "dt 1414.213562373095\n", "nb 5477.2255750516615\n", "kn 1671.3268182295567\n" ] } ], "source": [ "# Evaluate each model\n", "from sklearn.metrics import mean_squared_error\n", "\n", "for model in models.keys():\n", " print(model,np.sqrt(mean_squared_error(y_pred[model], y_test)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Evaluate the error of the mean of the predictions" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "1193.164765760328" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "np.sqrt(mean_squared_error(y_pred.mean(axis=1), y_test))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Comparing manual ensembling with a single model approach\n", "\n", "**Advantages of manual ensembling:**\n", "\n", "- Increases predictive accuracy\n", "- Easy to get started\n", "\n", "**Disadvantages of manual ensembling:**\n", "\n", "- Decreases interpretability\n", "- Takes longer to train\n", "- Takes longer to predict\n", "- More complex to automate and maintain\n", "- Small gains in accuracy may not be worth the added complexity" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Part 3: Bagging\n", "\n", "The primary weakness of **decision trees** is that they don't tend to have the best predictive accuracy. This is partially due to **high variance**, meaning that different splits in the training data can lead to very different trees.\n", "\n", "**Bagging** is a general purpose procedure for reducing the variance of a machine learning method, but is particularly useful for decision trees. Bagging is short for **bootstrap aggregation**, meaning the aggregation of bootstrap samples.\n", "\n", "What is a **bootstrap sample**? A random sample with replacement:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20]\n", "[ 6 12 13 9 10 12 6 16 1 17 2 13 8 14 7 19 6 19 12 11]\n" ] } ], "source": [ "# set a seed for reproducibility\n", "np.random.seed(1)\n", "\n", "# create an array of 1 through 20\n", "nums = np.arange(1, 21)\n", "print(nums)\n", "\n", "# sample that array 20 times with replacement\n", "print(np.random.choice(a=nums, size=20, replace=True))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**How does bagging work (for decision trees)?**\n", "\n", "1. Grow B trees using B bootstrap samples from the training data.\n", "2. Train each tree on its bootstrap sample and make predictions.\n", "3. Combine the predictions:\n", " - Average the predictions for **regression trees**\n", " - Take a vote for **classification trees**\n", "\n", "Notes:\n", "\n", "- **Each bootstrap sample** should be the same size as the original training set.\n", "- **B** should be a large enough value that the error seems to have \"stabilized\".\n", "- The trees are **grown deep** so that they have low bias/high variance.\n", "\n", "Bagging increases predictive accuracy by **reducing the variance**, similar to how cross-validation reduces the variance associated with train/test split (for estimating out-of-sample error) by splitting many times an averaging the results.\n" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[array([13, 2, 12, 2, 6, 1, 3, 10, 11, 9, 6, 1, 0, 1]),\n", " array([ 9, 0, 0, 9, 3, 13, 4, 0, 0, 4, 1, 7, 3, 2]),\n", " array([ 4, 7, 2, 4, 8, 13, 0, 7, 9, 3, 12, 12, 4, 6]),\n", " array([ 1, 5, 6, 11, 2, 1, 12, 8, 3, 10, 5, 0, 11, 2]),\n", " array([10, 10, 6, 13, 2, 4, 11, 11, 13, 12, 4, 6, 13, 3]),\n", " array([10, 0, 6, 4, 7, 11, 6, 7, 1, 11, 10, 5, 7, 9]),\n", " array([ 2, 4, 8, 1, 12, 2, 1, 1, 3, 12, 5, 9, 0, 8]),\n", " array([11, 1, 6, 3, 3, 11, 5, 9, 7, 9, 2, 3, 11, 3]),\n", " array([ 3, 8, 6, 9, 7, 6, 3, 9, 6, 12, 6, 11, 6, 1]),\n", " array([13, 10, 3, 4, 3, 1, 13, 0, 5, 8, 13, 6, 11, 8])]" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# set a seed for reproducibility\n", "np.random.seed(123)\n", "\n", "n_samples = train.shape[0]\n", "n_B = 10\n", "\n", "# create ten bootstrap samples (will be used to select rows from the DataFrame)\n", "samples = [np.random.choice(a=n_samples, size=n_samples, replace=True) for _ in range(1, n_B +1 )]\n", "samples" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
priceyearmilesdoorsvtype
131300199713800040
21300020107350040
121800199916300021
21300020107350040
63000200417700040
11400020103000020
3950020097800040
102500200319000021
11500020016200040
91900200316000040
63000200417700040
11400020103000020
02200020121300020
11400020103000020
\n", "
" ], "text/plain": [ " price year miles doors vtype\n", "13 1300 1997 138000 4 0\n", "2 13000 2010 73500 4 0\n", "12 1800 1999 163000 2 1\n", "2 13000 2010 73500 4 0\n", "6 3000 2004 177000 4 0\n", "1 14000 2010 30000 2 0\n", "3 9500 2009 78000 4 0\n", "10 2500 2003 190000 2 1\n", "11 5000 2001 62000 4 0\n", "9 1900 2003 160000 4 0\n", "6 3000 2004 177000 4 0\n", "1 14000 2010 30000 2 0\n", "0 22000 2012 13000 2 0\n", "1 14000 2010 30000 2 0" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# show the rows for the first decision tree\n", "train.iloc[samples[0], :]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " Build one tree for each sample" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "from sklearn.tree import DecisionTreeRegressor\n", "\n", "# grow each tree deep\n", "treereg = DecisionTreeRegressor(max_depth=None, random_state=123)\n", "\n", "# DataFrame for storing predicted price from each tree\n", "y_pred = pd.DataFrame(index=test.index, columns=[list(range(n_B))])\n", "\n", "# grow one tree for each bootstrap sample and make predictions on testing data\n", "for i, sample in enumerate(samples):\n", " X_train = train.iloc[sample, 1:]\n", " y_train = train.iloc[sample, 0]\n", " treereg.fit(X_train, y_train)\n", " y_pred[i] = treereg.predict(X_test)" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
0123456789
01300.01300.03000.04000.01300.04000.04000.04000.03000.04000.0
15000.01300.03000.05000.05000.05000.04000.05000.05000.05000.0
214000.013000.013000.013000.013000.014000.013000.013000.09500.09000.0
\n", "
" ], "text/plain": [ " 0 1 2 3 4 5 6 7 \\\n", "0 1300.0 1300.0 3000.0 4000.0 1300.0 4000.0 4000.0 4000.0 \n", "1 5000.0 1300.0 3000.0 5000.0 5000.0 5000.0 4000.0 5000.0 \n", "2 14000.0 13000.0 13000.0 13000.0 13000.0 14000.0 13000.0 13000.0 \n", "\n", " 8 9 \n", "0 3000.0 4000.0 \n", "1 5000.0 5000.0 \n", "2 9500.0 9000.0 " ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y_pred" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Results of each tree" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 1621.7274740226856\n", "1 2942.7877939124323\n", "2 1825.7418583505537\n", "3 1000.0\n", "4 1276.7145334803704\n", "5 1414.213562373095\n", "6 1414.213562373095\n", "7 1000.0\n", "8 1554.5631755148024\n", "9 1914.854215512676\n" ] } ], "source": [ "for i in range(n_B):\n", " print(i, np.sqrt(mean_squared_error(y_pred[i], y_test)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Results of the ensemble" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0 2990.0\n", "1 4330.0\n", "2 12450.0\n", "dtype: float64" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y_pred.mean(axis=1)" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "998.5823284370031" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "np.sqrt(mean_squared_error(y_test, y_pred.mean(axis=1)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Bagged decision trees in scikit-learn (with B=500)" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "# define the training and testing sets\n", "X_train = train.iloc[:, 1:]\n", "y_train = train.iloc[:, 0]\n", "X_test = test.iloc[:, 1:]\n", "y_test = test.iloc[:, 0]" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "C:\\Users\\albah\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release.\n", " from numpy.core.umath_tests import inner1d\n" ] } ], "source": [ "# instruct BaggingRegressor to use DecisionTreeRegressor as the \"base estimator\"\n", "from sklearn.ensemble import BaggingRegressor\n", "bagreg = BaggingRegressor(DecisionTreeRegressor(), n_estimators=500, \n", " bootstrap=True, oob_score=True, random_state=1)" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ 3344.2, 5395. , 12902. ])" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# fit and predict\n", "bagreg.fit(X_train, y_train)\n", "y_pred = bagreg.predict(X_test)\n", "y_pred" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "657.8000304043775" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# calculate RMSE\n", "np.sqrt(mean_squared_error(y_test, y_pred))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Estimating out-of-sample error\n", "\n", "For bagged models, out-of-sample error can be estimated without using **train/test split** or **cross-validation**!\n", "\n", "On average, each bagged tree uses about **two-thirds** of the observations. For each tree, the **remaining observations** are called \"out-of-bag\" observations." ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([13, 2, 12, 2, 6, 1, 3, 10, 11, 9, 6, 1, 0, 1])" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# show the first bootstrap sample\n", "samples[0]" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{0, 1, 2, 3, 6, 9, 10, 11, 12, 13}\n", "{0, 1, 2, 3, 4, 7, 9, 13}\n", "{0, 2, 3, 4, 6, 7, 8, 9, 12, 13}\n", "{0, 1, 2, 3, 5, 6, 8, 10, 11, 12}\n", "{2, 3, 4, 6, 10, 11, 12, 13}\n", "{0, 1, 4, 5, 6, 7, 9, 10, 11}\n", "{0, 1, 2, 3, 4, 5, 8, 9, 12}\n", "{1, 2, 3, 5, 6, 7, 9, 11}\n", "{1, 3, 6, 7, 8, 9, 11, 12}\n", "{0, 1, 3, 4, 5, 6, 8, 10, 11, 13}\n" ] } ], "source": [ "# show the \"in-bag\" observations for each sample\n", "for sample in samples:\n", " print(set(sample))" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[4, 5, 7, 8]\n", "[5, 6, 8, 10, 11, 12]\n", "[1, 5, 10, 11]\n", "[4, 7, 9, 13]\n", "[0, 1, 5, 7, 8, 9]\n", "[2, 3, 8, 12, 13]\n", "[6, 7, 10, 11, 13]\n", "[0, 4, 8, 10, 12, 13]\n", "[0, 2, 4, 5, 10, 13]\n", "[2, 7, 9, 12]\n" ] } ], "source": [ "# show the \"out-of-bag\" observations for each sample\n", "for sample in samples:\n", " print(sorted(set(range(n_samples)) - set(sample)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "How to calculate **\"out-of-bag error\":**\n", "\n", "1. For every observation in the training data, predict its response value using **only** the trees in which that observation was out-of-bag. Average those predictions (for regression) or take a vote (for classification).\n", "2. Compare all predictions to the actual response values in order to compute the out-of-bag error.\n", "\n", "When B is sufficiently large, the **out-of-bag error** is an accurate estimate of **out-of-sample error**." ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.7986955133989982" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# compute the out-of-bag R-squared score (not MSE, unfortunately!) for B=500\n", "bagreg.oob_score_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Estimating feature importance\n", "\n", "Bagging increases **predictive accuracy**, but decreases **model interpretability** because it's no longer possible to visualize the tree to understand the importance of each feature.\n", "\n", "However, we can still obtain an overall summary of **feature importance** from bagged models:\n", "\n", "- **Bagged regression trees:** calculate the total amount that **MSE** is decreased due to splits over a given feature, averaged over all trees\n", "- **Bagged classification trees:** calculate the total amount that **Gini index** is decreased due to splits over a given feature, averaged over all trees" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Part 4: Combination of classifiers - Majority Voting" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " The most typical form of an ensemble is made by combining $T$ different base classifiers.\n", " Each base classifier $M(\\mathcal{S}_j)$ is trained by applying algorithm $M$ to a random subset \n", " $\\mathcal{S}_j$ of the training set $\\mathcal{S}$. \n", " For simplicity we define $M_j \\equiv M(\\mathcal{S}_j)$ for $j=1,\\dots,T$, and \n", " $\\mathcal{M}=\\{M_j\\}_{j=1}^{T}$ a set of base classifiers.\n", " Then, these models are combined using majority voting to create the ensemble $H$ as follows\n", " $$\n", " f_{mv}(\\mathcal{S},\\mathcal{M}) = max_{c \\in \\{0,1\\}} \\sum_{j=1}^T \n", " \\mathbf{1}_c(M_j(\\mathcal{S})).\n", " $$\n" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [], "source": [ "# read in and prepare the churn data\n", "# Download the dataset\n", "import pandas as pd\n", "import numpy as np\n", "\n", "url = 'https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/datasets/churn.csv'\n", "data = pd.read_csv(url)\n", "\n", "# Create X and y\n", "\n", "# Select only the numeric features\n", "X = data.iloc[:, [1,2,6,7,8,9,10]].astype(np.float)\n", "# Convert bools to floats\n", "X = X.join((data.iloc[:, [4,5]] == 'no').astype(np.float))\n", "\n", "y = (data.iloc[:, -1] == 'True.').astype(np.int)" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
Account LengthArea CodeVMail MessageDay MinsDay CallsDay ChargeEve MinsInt'l PlanVMail Plan
0128.0415.025.0265.1110.045.07197.41.00.0
1107.0415.026.0161.6123.027.47195.51.00.0
2137.0415.00.0243.4114.041.38121.21.01.0
384.0408.00.0299.471.050.9061.90.01.0
475.0415.00.0166.7113.028.34148.30.01.0
\n", "
" ], "text/plain": [ " Account Length Area Code VMail Message Day Mins Day Calls Day Charge \\\n", "0 128.0 415.0 25.0 265.1 110.0 45.07 \n", "1 107.0 415.0 26.0 161.6 123.0 27.47 \n", "2 137.0 415.0 0.0 243.4 114.0 41.38 \n", "3 84.0 408.0 0.0 299.4 71.0 50.90 \n", "4 75.0 415.0 0.0 166.7 113.0 28.34 \n", "\n", " Eve Mins Int'l Plan VMail Plan \n", "0 197.4 1.0 0.0 \n", "1 195.5 1.0 0.0 \n", "2 121.2 1.0 1.0 \n", "3 61.9 0.0 1.0 \n", "4 148.3 0.0 1.0 " ] }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ], "source": [ "X.head()" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
countpercentage
028500.855086
14830.144914
\n", "
" ], "text/plain": [ " count percentage\n", "0 2850 0.855086\n", "1 483 0.144914" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y.value_counts().to_frame('count').assign(percentage = lambda x: x/x.sum())" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import train_test_split\n", "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create 100 decision trees" ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [], "source": [ "n_estimators = 100\n", "# set a seed for reproducibility\n", "np.random.seed(123)\n", "\n", "n_samples = X_train.shape[0]\n", "\n", "# create bootstrap samples (will be used to select rows from the DataFrame)\n", "samples = [np.random.choice(a=n_samples, size=n_samples, replace=True) for _ in range(n_estimators)]" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [], "source": [ "from sklearn.tree import DecisionTreeClassifier\n", "\n", "np.random.seed(123) \n", "seeds = np.random.randint(1, 10000, size=n_estimators)\n", "\n", "trees = {}\n", "for i in range(n_estimators):\n", " trees[i] = DecisionTreeClassifier(max_features=\"sqrt\", max_depth=None, random_state=seeds[i])\n", " trees[i].fit(X_train.iloc[samples[i]], y_train.iloc[samples[i]])" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
0123456789...90919293949596979899
4380000000000...1000000000
26740000000000...0000000000
13450001000001...0001100110
19570000000001...1010000010
21480000000000...0000010010
\n", "

5 rows × 100 columns

\n", "
" ], "text/plain": [ " 0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 \\\n", "438 0 0 0 0 0 0 0 0 0 0 ... 1 0 0 0 0 0 0 \n", "2674 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 \n", "1345 0 0 0 1 0 0 0 0 0 1 ... 0 0 0 1 1 0 0 \n", "1957 0 0 0 0 0 0 0 0 0 1 ... 1 0 1 0 0 0 0 \n", "2148 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 1 0 \n", "\n", " 97 98 99 \n", "438 0 0 0 \n", "2674 0 0 0 \n", "1345 1 1 0 \n", "1957 0 1 0 \n", "2148 0 1 0 \n", "\n", "[5 rows x 100 columns]" ] }, "execution_count": 34, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Predict \n", "y_pred_df = pd.DataFrame(index=X_test.index, columns=list(range(n_estimators)))\n", "for i in range(n_estimators):\n", " y_pred_df.iloc[:, i] = trees[i].predict(X_test)\n", "\n", "y_pred_df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Predict using majority voting" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "438 2\n", "2674 5\n", "1345 35\n", "1957 17\n", "2148 3\n", "3106 4\n", "1786 22\n", "321 6\n", "3082 10\n", "2240 5\n", "dtype: int64" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y_pred_df.sum(axis=1)[:10]" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.5245901639344264" ] }, "execution_count": 36, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y_pred = (y_pred_df.sum(axis=1) >= (n_estimators / 2)).astype(np.int)\n", "\n", "from sklearn import metrics\n", "metrics.f1_score(y_pred, y_test)" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.8945454545454545" ] }, "execution_count": 37, "metadata": {}, "output_type": "execute_result" } ], "source": [ "metrics.accuracy_score(y_pred, y_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Using majority voting with sklearn" ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [], "source": [ "from sklearn.ensemble import BaggingClassifier\n", "clf = BaggingClassifier(base_estimator=DecisionTreeClassifier(), n_estimators=100, bootstrap=True,\n", " random_state=42, n_jobs=-1, oob_score=True)" ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(0.536, 0.8945454545454545)" ] }, "execution_count": 39, "metadata": {}, "output_type": "execute_result" } ], "source": [ "clf.fit(X_train, y_train)\n", "y_pred = clf.predict(X_test)\n", "metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Part 5: Combination of classifiers - Weighted Voting" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The majority voting approach gives the same weight to each classfier regardless of the performance of each one. Why not take into account the oob performance of each classifier" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, in the traditional approach, a \n", "similar comparison of the votes of the base classifiers is made, but giving a weight $\\alpha_j$ \n", "to each classifier $M_j$ during the voting phase\n", "$$\n", " f_{wv}(\\mathcal{S},\\mathcal{M}, \\alpha)\n", " =\\max_{c \\in \\{0,1\\}} \\sum_{j=1}^T \\alpha_j \\mathbf{1}_c(M_j(\\mathcal{S})),\n", "$$\n", "where $\\alpha=\\{\\alpha_j\\}_{j=1}^T$.\n", "The calculation of $\\alpha_j$ is related to the performance of each classifier $M_j$.\n", "It is usually defined as the normalized misclassification error $\\epsilon$ of the base \n", "classifier $M_j$ in the out of bag set $\\mathcal{S}_j^{oob}=\\mathcal{S}-\\mathcal{S}_j$\n", "\\begin{equation}\n", " \\alpha_j=\\frac{1-\\epsilon(M_j(\\mathcal{S}_j^{oob}))}{\\sum_{j_1=1}^T \n", " 1-\\epsilon(M_{j_1}(\\mathcal{S}_{j_1}^{oob}))}.\n", "\\end{equation}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Select each oob sample" ] }, { "cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [], "source": [ "samples_oob = []\n", "# show the \"out-of-bag\" observations for each sample\n", "for sample in samples:\n", " samples_oob.append(sorted(set(range(n_samples)) - set(sample)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Estimate the oob error of each classifier" ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [], "source": [ "errors = np.zeros(n_estimators)\n", "\n", "for i in range(n_estimators):\n", " y_pred_ = trees[i].predict(X_train.iloc[samples_oob[i]])\n", " errors[i] = 1 - metrics.accuracy_score(y_train.iloc[samples_oob[i]], y_pred_)" ] }, { "cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Text(0.5,1,'OOB error of each tree')" ] }, "execution_count": 42, "metadata": {}, "output_type": "execute_result" }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAbUAAAEeCAYAAAANcYvwAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMi4zLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvIxREBQAAIABJREFUeJzt3XtcVHX+P/DXCALmhREa0RS84o0iFBXUhIRNMytMlzQtW9YLmFs/zQuYbe4qrhcS0y5qIpu2tIWXQvPauiQ3xdoizEox80IKKpcUFbn+/vDL5DDDzJmZM8w5Z17Px8PHQ845cy7vOee8P7dzRlVeXl4PIiIiBWhh7x0gIiISC5MaEREpBpMaEREpBpMaEREpBpMaEREpBpMaEREpBpMakcwUFxdj1qxZePDBB+Hh4QG1Wo3z58/be7fMolar8dBDD9l7N0iBnO29A2Qb+fn52Lx5M7Kzs3H58mW0aNECXbp0QUhICGbNmoUePXoY/XxWVha2bt2KY8eO4cqVK3B1dYWPjw/+8Ic/ICYmBh07dtT7TGZmJp566im96W5ubujSpQvCwsIwZ84cPPDAA6IdpyN66aWXcPjwYYwZMwbPPfccWrRoAXd3d3vvlt089NBDuHjxIsrLy+29KyQBKj58rSz19fVYvnw51qxZgxYtWiA0NBR+fn6oq6vDN998g6NHj8LZ2RkrV67E9OnT9T5fVVWFuXPnIiUlBa6urggPD0fv3r1RWVmJo0eP4rvvvkPr1q2xceNGvQTWkNS8vb0xefJk7f6UlZUhKysLP/74Izp06IAvv/ySic1CVVVV6NixI3r27ImvvvrK3rtjMbVaDW9vb5w4ccLqdTGp0b1YU1OYNWvW4M0330SXLl3w0Ucfwd/fX2d+RkYGpk6divnz56Ndu3Z49tlndebPnz8fKSkp8PPzQ0pKCrp166Yzf+fOnXjppZcQFRWFzz77DI888ojePvj4+GDRokU60+rr6zFp0iQcPHgQW7du1ZtPwhQXF6Ourg4dOnSw964QSRL71BTkwoULWLVqFZydnfHvf/9bL6EBQEhICDZt2gQAiIuLQ0VFhXZebm4utm3bBnd3d+zYsUMvoQHAhAkTEB8fj5qaGrz66quoq6sTtG8qlQphYWEAgJKSErOOKyMjA5MmTULPnj2h0Wjw4IMPYt68eSguLtZbduzYsVCr1Th37hzefvttBAcHw8vLS1tzTElJgVqtxooVK5Cbm4vx48eja9euUKvVOiX9jIwMREZGonv37ujQoQMefvhhxMbG4urVq3rbnDVrFtRqNTIzM5GSkoLQ0FA88MADBhO+Id999x1efPFF+Pr6QqPRwM/PD7Nnz8a5c+d0lnvooYe0/VDZ2dlQq9VQq9WYNWuWoO2cPXsWL7/8Mh588EF06NABPXv2xJQpU5CXl6e37OXLl7Fy5UqMGjUKvXv3hkajQd++fTFt2jT8+OOPTW4jLy8P06dPh5+fHzp06ABfX1+MGTMGW7ZsMbj8rVu38Ne//lW7TwMGDMDatWtRX2+6Aen8+fNQq9W4ePEiAGjjoVarMXbsWO1yDz30ENRqNSorKxEfH48BAwZAo9EgLi5Ou0xdXR22bduG0aNHw8fHB15eXhg6dCgSExNRVVVldTyp+bCmpiD/+te/UF1djYiICKOd8KNHj0ZAQADy8vKQlpaGKVOmAAD++c9/AgCmTp2KTp06Nfn5qKgovPnmmzh9+jSys7MxYsQIQfuXnp4OABg4cKDQQ8Jbb72Fv/3tb2jfvj1GjRoFLy8vnDx5Elu2bMH+/fvxxRdfoHPnznqfW7hwIXJzczF69GiMGjUKbdq00Zl//PhxJCYmYtiwYZg6dSouX74MJycnAHfj8Oqrr6JVq1aIiIhAx44dkZubi02bNmHv3r3Yv38/vL299bb59ttvIyMjA2PGjMGjjz6KO3fumDy+AwcOYOrUqairq8NTTz2F7t274+TJk0hJScHnn3+O3bt34+GHHwZwN3leuHABGzdu1GniFTLg4siRI5gyZQoqKysxevRo9OzZE5cvX8aePXvwn//8Bx999BHCw8O1y+fk5GDdunUYMWIEnn76adx33334+eefkZaWhv379+PAgQN6haYPP/wQc+fOBQCMGjUKffr0QVlZGb7//nusW7cO06ZN01m+pqYG48ePR1FREf7whz/A2dkZe/fuxd///nfcvn0br732mtFjcnd3R2xsLDZs2IDr168jNjZWO8/Hx0dv+alTpyI/Px/h4eFo3769ttBWU1OD559/HgcOHECvXr0wYcIEuLq6Ijs7G0uXLsWRI0ewc+dOODv/frs0N57UfNinpiBPP/00MjIysG7dOrz44otGl126dCkSExPxwgsv4O233wYABAQE4Ny5c/j0008xcuRIo5+fPn06duzYgcWLF2PBggUADPepAdD2qZ0+fRqTJ0/G2rVrtQnEmOzsbDz55JMYNGgQtm/fDrVarZ338ccfIyYmBk8++ST+9a9/aaePHTsW2dnZ6NSpEw4cOICuXbvqrDMlJQWzZ88GcDdh/ulPf9KZf+HCBQwaNAgtW7bEf/7zH/Tr1087Lz4+Hm+++SZGjRqF1NRU7fRZs2bh3//+N+677z6DN/umVFRUwN/fH2VlZUhLS0NISIh23rZt2/DKK6+gX79+yMnJgUqlAnC3dvLwww9j+PDh2Lt3r6Dt/PbbbxgwYADq6+uxf/9+9O3bVzvv1KlTCA8PR5s2bfDdd9/B1dUVAHD16lW4ubmhbdu2OuvKy8vDE088gaFDh2Lnzp3a6T/99BMeeeQRuLm54fPPP0dAQIDO5woLC9GlSxft3w3f5ejRo7F161a4ublptxsYGAgA+Pnnn9GyZUuTx2eqT61hfv/+/bFnzx54enrqzE9ISMDy5csxY8YMrFy5Untu1tXVYe7cudi6dStWrlyJmJgYi+NJzYfNjwrS0BxnqObSWMMyRUVFon2+wcWLF7Fq1Srtv/fffx8//PADBg0ahD/+8Y+CEhoAbNy4EfX19Vi7dq1OQgOASZMmwd/fH/v378f169f1Pvvyyy/rJbR7Pfjgg3oJDQBSU1NRVVWFadOm6SQ0AFiwYAE6deqEQ4cO4dKlS3qfnTp1quCEBgD79u1DaWkpIiIidBJaw7oCAgLw448/4vjx44LXacjHH3+M0tJSxMbG6tyAAaBPnz6YOnUqioqK8OWXX2qnazQavYQG3C34jBgxAllZWaiurtZO37JlC2pqajBv3jy9hAZAJ6Hda9WqVdqE1rDdsWPH4vr16ygoKDD3UI167bXX9BJaXV0dNm7cCI1GgxUrVuicmy1atMDSpUuhUqnwySefaKdbEk9qPmx+VJCGfoiGUr0Qhpa19vONaxGlpaXIzc1FbGwsnnnmGXzwwQcGh/43lpubC2dnZ+zZswd79uzRm19VVYXa2lqcPXtW70Y6aNAgo+tuav53330HAHpJBgBcXV0RHByMTz/9FPn5+XojOE1t05xtAUBoaCjy8vLw3XffISgoyKx13ys3NxcAcPLkSaxYsUJv/pkzZwAAp0+fxujRo7XTDx48iOTkZOTl5aGkpAQ1NTU6nyspKdE+2vH1118DuNvsKJS7u7vBftuGApPYoxkNfT9nzpxBSUkJunfvjoSEBIOfa9WqlU6CtTSe1DyY1BTEy8sLp0+fRmFhocllf/31V+1nGnTo0AHnz59HYWEhfH19zf58Uzw8PDBmzBi0atUK48aNw5IlSwQltdLSUtTU1GDVqlVGl7t3sEsDU6MDm5rfUOtran7D8RqqHZo7ItGabZmjtLQUwN0+L2Nu3ryp/f/GjRsRFxcHtVqNkSNHwtvbG25ublCpVNi7dy++//57nT7D3377DYCwWn6Ddu3aGZzeUFuqra0VvC4hDJ2rDbH55ZdfTJ5njT9jTjyp+TCpKUhwcDAyMzORnp5usk+toWkkODhY5/Pnz59Henq60T61mpoaZGVl6X3elIa+krNnz6K8vFyvSbGxdu3aobq6Wju6zRymaptNzW+40V65csXg/IYmWkM3ZHNquNZuy5LtfPnllwabBhurqanBihUr4OXlhSNHjug9aG/o+biGh78vXbpk8nu1F0PfT0NsHn/8cXz88ceC1mNuPKl5sU9NQaZMmaIdQXby5Mkml/viiy/wzTffoH379oiIiNBOb0iE27ZtM9hX1mDr1q0oKipC7969MXz4cMH7Z25z0uDBg3Hjxg1RHtAVqmGkYWZmpt68O3fuaJueGpaz1baAu48VALD6xjl48GAAwNGjRwUtX1JSgt9++w1DhgzRS2gVFRXaZlND2zh06JBV+2oJa2p2vXv3hru7O/73v/81OXS/MXPjSc2LSU1BunXrhvnz56O6uhqTJk3C999/r7dMVlYWZs6cCeBuJ/29Q92HDRuGyZMno7y8HM8++ywuXLig9/m0tDQsXrwYzs7OSExMRIsWwk+hd999FwDg5+cnqDTfMEpxzpw52ubOezW85URMzz77LFxcXLBlyxacPn1aZ15iYiIuXbqEUaNGGX3kQaixY8fCw8MDaWlpyM7O1pmXkpKCb7/9Fv369dPeRC31/PPPQ61WIyEhweCgk/r6ehw9elR7U9doNLjvvvvw7bff6jTtVldXIy4uzuBzhtOmTUPLli2xZs0ag4UQQ9+fWBoGf1hSo3d2dkZMTAyuXr2K+fPn49atW3rLlJSUID8/X/u3ufGk5sXmR4WJjY1FZWUl3nrrLYSGhuLRRx/Vvibr22+/RXZ2NpydnZGQkKD3NhEAWLt2LWpra/HJJ59gyJAhOq/JOnbsGL799lu0bt0a77//fpMPF1+4cEGnA72srAzHjx9HXl4eWrVq1WSHfGMhISFYtmwZlixZgsDAQDz22GPo1q0bKisrcfHiReTk5MDHx0fbFCoGHx8frFq1Cq+++ipGjhyJcePGwcvLC7m5ucjOzkbnzp2xZs0aUbbVunVrvPfee5g6dSrGjRuHp59+Gt26dcP333+PQ4cOwd3dHRs2bDC7WbOx9u3bY9u2bXj++ecxatQohISEoG/fvmjZsiV+/fVXfP311ygsLMS5c+fg4uKCFi1aIDo6GmvXrsWwYcPwxBNPoLq6GpmZmSgrK8OIESP0apd9+vRBYmIi5syZg5EjR2L06NHo06cPfvvtN5w8eRKXLl3SSQxiGjlyJP73v//hhRdewKhRo+Dm5gZvb29MmjRJ0OcXLFiAH374Adu2bcOhQ4cQEhKCzp0749q1a/jll19w7NgxTJ8+XTuy1dx4UvNiUlMYlUqFv/3tbxg3bpz2hcYNzzl17twZM2bMQExMDHr27Gnw866urti0aRMmT56Mbdu2ITc3F1988QVcXFzQtWtX/L//9/8wa9Ysgy80btAwpL+Bi4sLOnXqhBdeeAGvvPKKyUEo93r55ZcRHByMjRs34ujRozhw4ADatGmDTp06ITIyEuPHjxceHIGioqLQo0cPvP3229i7dy9u3ryJTp06YebMmZg/f76or6h6/PHHcejQISQmJuLIkSNIS0uDRqPBc889h4ULFxocHWiJkJAQZGdn45133sHhw4dx/PhxODs7w8vLC4MHD8aSJUt0+u4WL14MT09PfPjhh/jggw/Qrl07PProo3j99dcNjvgDgBdeeAH9+/fH22+/jZycHBw6dAjt27eHr68vXn31VVGOw5B58+bh+vXr2LdvH9atW4eamhoMHz5ccFJzdnbGtm3bsHPnTqSkpOCLL75ARUUFPDw84O3tjblz5+qty9x4UvPhw9dERKQY7FMjIiLFYFIjIiLFYFIjIiLFYFIjIiLFYFIjIiLFYFIjIiLFYFIjIiLFYFJTILF/h0rJGCvhGCvhGCv7YVIjIiLFYFIjIiLFYFIjIiLFYFIjIiLFYFIjIiLFYFIjIiLFYFIjIiLFYFIjIiLFYFIjIiLFYFIjIiLFEJzUkpKS4O/vDy8vL4SGhiInJ6fJZXfv3o1nnnkGPXv2RJcuXRAeHo59+/bpLLN161aMGTMG3bp1g4+PD5588kkcPXrU8iMhIiKHJyip7dq1C3FxcZg3bx4yMjIwZMgQREZG4uLFiwaXz87ORkhICFJTU5GRkYHHHnsMzz//vE4izMrKwjPPPIO0tDQcPnwYvr6+mDBhAn7++WdxjoyIiByOqry8vN7UQuHh4fDz88P69eu10wYOHIiIiAgsWbJE0IbCwsIwdOhQLF++3OD8+vp69OnTB/PmzUN0dLTA3SdDCgoK4Ovra+/dkAXGSjjGSjjGyn5M1tSqqqqQl5eHsLAwnelhYWHIzc0VvKGKigqo1Wqj26msrDS6DBERkTHOphYoKSlBbW0tNBqNznSNRoMrV64I2sjmzZtx6dIlTJw4scll4uPj0aZNG4wZM8bouviTDsIwTsIxVsIxVsIxVsKIXaM1mdQaqFQqnb/r6+v1phmSlpaGN954A1u2bIGPj4/BZTZs2IAPPvgAn332Gdq1a2d0fazSm8amD+EYK+EYK+EYK/sxmdQ8PT3h5OSkVyu7du2aXu2tsbS0NMTExGDjxo144oknDC6zYcMGLF++HNu3b0dgYKAZu05ERKTLZJ+ai4sLAgICkJ6erjM9PT0dQUFBTX7u008/RXR0NN577z1EREQYXOadd95BfHw8PvnkEwwdOtTMXSciItIlqPlx9uzZiI6ORmBgIIKCgpCcnIyioiJERUUBgHa04qZNmwAAO3fuRHR0NJYtW4Zhw4ahuLgYwN0E2b59ewDA+vXrsWzZMrz//vvo1auXdhk3Nze4u7uLe5REROQQBCW18ePHo7S0FAkJCSguLka/fv2Qmpqq7SMrLCzUWT45ORk1NTVYtGgRFi1apJ0+fPhw7N27F8DdwSPV1dXaxNjgueeew4YNG6w6KCIickyCnlMjeWEntXCMlXCMlXCMlf3w3Y9ERKQYTGpERKQYTGpERKQYTGpERKQYTGpERKQYTGpERKQYTGpERKQYTGpERKQYTGpERKQYTGpERKQYTGpERKQYTGpERKQYTGpERKQYTGpERKQYgn5Pjezn/I1qxH9zA5dv1aLTfU54fWBbdG3b0t67RUQkSUxqEnb+RjXGHSzBLzdqtdO+vlqFz0Z7MrERERnA5kcJi//mhk5CA4BfbtQi/psbdtojIiJpY1KTsMu3ag1OL2piOhGRo2NSk7BO9zkZnN6xielERI5O1n1qSh9E8frAtvj6apVOE2T3tnePk4iI9Mk2qTnCIIqubVvis9GeiP/mBopu1aKjAhM3EZGYZJvUjA2i2BzqYae9El/Xti0VdTxERLYk2z41DqIgIqLGZJvUOIiCiIgak21Se31gW3Rvq5vAOIiCiMixybZPjYMoiIioMdkmNYCDKIikQOmP1pC8yDqpEZF9OcKjNSQvsu1TIyL74/tJSWqY1IjIYny0hqSGSY2ILMZHa0hqmNSIyGJ8tIakhgNFiMhifLRG+eQ2upVJjYiswkdrlEuOo1vZ/EhERAbJcXQrkxoRERkkx9GtgpNaUlIS/P394eXlhdDQUOTk5DS57O7du/HMM8+gZ8+e6NKlC8LDw7Fv3z695dLS0hAUFIQOHTogKCgIe/bssewoiIhIdHIc3Sooqe3atQtxcXGYN28eMjIyMGTIEERGRuLixYsGl8/OzkZISAhSU1ORkZGBxx57DM8//7xOIjx+/Dj+/Oc/IzIyEpmZmYiMjMSf/vQnfP311+IcGRERWUWOo1tV5eXl9aYWCg8Ph5+fH9avX6+dNnDgQERERGDJkiWCNhQWFoahQ4di+fLlAICoqCiUlZXhs88+0y4TERGB+++/H1u2bDH3OOgeBQUF8PX1tfduyIIYsZLb6DBL8bwSTkmxaji/5TK61eTox6qqKuTl5eHll1/WmR4WFobc3FzBG6qoqIBardb+/dVXX2HmzJk6y4SHh+P9998XvE6yHUe5UVtLjqPDyDyOfi3IbXSryaRWUlKC2tpaaDQanekajQZXrlwRtJHNmzfj0qVLmDhxonZacXGxRessKCgQtE1HZ02cfr2twl9OuqKw8vfW6aOXbuIdvzvo3MpkxV52rInVX0+1xC83dG9wv9yoReyRX7GsT7W1uyY55sbq19sqbLzgjKt3WkDjWocYnxpJnkNN7ac11wLvVcKIXaMV/JyaSqXS+bu+vl5vmiFpaWl44403sGXLFvj4+Fi9TqVU6W3J2qaP1UdKUVh5W2daYWULpJR5YLO/fEpsQlgbq4ozVwFU6U2/6dQavr4a/Q/ImLmxOn+jGnN1arFOOFXpJrlarLH9TPnmhkXXgpKaH+XG5EART09PODk56dWgrl27plfTaiwtLQ0xMTHYuHEjnnjiCZ15Xl5eFq2TbE+Ow3jtRY6jwww5f6MaM46U4sn9VzHjSCnO37C+limXZ5yM7SevBfkxmdRcXFwQEBCA9PR0nenp6ekICgpq8nOffvopoqOj8d577yEiIkJv/uDBg81eJzUPpdyom4McR4c11tAvuP3sbWQVVWH72dsYd7DE6sQml4RgbD95LciPoCH9s2fPxkcffYRt27bh1KlTiI2NRVFREaKiogAA0dHRiI6O1i6/c+dOzJgxA0uWLMGwYcNQXFyM4uJilJWVaZeJiYlBRkYGEhMTcfr0aSQmJiIzMxOzZs0S+RDJXEq4UTeXhncfRvZohREdXRDZo5XkmtdMsVWNSi4Jwdh+8lqQH0F9auPHj0dpaSkSEhJQXFyMfv36ITU1VdtHVlhYqLN8cnIyampqsGjRIixatEg7ffjw4di7dy8AICgoCMnJyYiPj8eKFSvQvXt3JCcnY9CgQWIdG1mIL6k1j9xGhzVmqxrV6wPb4uurVToJ014JwdgIRmP7yWtBfgQ9p0bywk5q4RgrYMaRUmw/e1tvemSPVjrJ2pJYSeEZJ0OPXXRv66RToxZ7P3le2Q/f0k/k4GxZo5JCLdZY82rDvklhP0kcTGpEDk7pTWxyGbBC4mBSIyKdmorS3qAhlwErJA4mtSYo7cImEsLYa7/kSkoDVsj2mNQM4Pv8SGmEFtKM9T8tfKC59lZcSm9eJV1MagYI6VgmkgtzCmlK7X/iQBDHwV++NkCpFzbZny1eR2WKOQ9XK6X/yR5xJmlgTc0ApVzYJC32atY2p5BmrP+pqqjEZvsoJnYfODZF19QsLa3J9dU4Dccbk+/K0qkE2esFv+YU0qT02i9Lr1+5vEhZTKyZ/k6xNTVrSmty7FjWPV4n/O/6bZZOJcZezdrmjv6TQv+TNdevo3UfsGaqS7E1NWtLaw0X9p4xGmwO9ZD8yeGIpVO5sVeztpRqX0JZcz47WveBlK99e9QgFVtTc7TSmqMdrxzZ83kpKdS+zGHN+exoz6VJ9dq3Vw1SsUlNqqU1Wz3ULdXjpd81Z7O23F8eYM35LMfuA2uIfe2Lde7Y69EoxSY1KZbWbFlykeLxkr7mqDEpoY/F2vNZbjVTa4h57Yt57tirBqnYpGbP0lpTJR1bllzuPd6zJRXo4dlG0aVTe5J6LUjOLw+4N7Z93Z3RT+2MG9X1iq9tWUPMe52Y5469Wo8Um9QA+5TWjJV0bF1yaTjegoIS+Pr6iLJO0iWHWpBU+1hMEfK7Z2SYWPc6Mc8de7UeKXb0o70YK+mw30v+pDzSrIFczzM5xFbpxDx37DXqVtE1NXswVtJ55xE1+73MIMVmPjnUguTavyqH2Cqd2OeOPVrLmNREZqyk42ijsqwh1WY+OdSC5HqeySG2SifXc+deTGoiM1XScaRRWdaQ6mAHudSC5HieySW2SifHc+deTGoiU0JJRwqk2hTliN9vQzPw2Wuu6HGp1GbH64ixJfEpKqlJpQ9GiiUdMWPTHHGW6gOlgOnvVyrnoRia+52ixmKrpLiS7SgmqUm1D0YKxIxNc8XZUFNUl/tUuFldhyf3XzXrpmZsn8WmtPNQKs3ASosr2Y5ihvRzOHDTxIxNc8W58XDgMV1cAZUK+y7eQVZRFbafvY1xB0sEvSC1Oc8NpZ2HUmkGVlpcyXYUU1Oz9uJTctOGmDcmW9/kmvoeZhwpReHNOp1lhdYYmvPGLJUkIBapjEhUWlzJdhST1Ky5+JTetCHmjcmWNzlbvY2lOW/MUkkCYpHKiESlxZVsRzHNj9b8WrXSmzbE/CVvW/4quNhvY2n4Laez16vRulHxraF/TuxfCZfrr6Y35d5m4ED3Wrv9FpvS4kq2o5iamjXDgZXetCHmUGlbDrsW820shmp9rZ1V6N/eGfe7tsCJsmrsu3gHYo/oU+KwdFu+U1Ros78S49qYkrtAmpNikhpg+VB6R2jaEPMxA1s9siDm21gM1fpu1tSjW9u7p7yl/XNCNNcjHXK/CZrb7K/kRymU3gXSnBSV1CwllX4Dc8n5IjZEzLexGKv11TfxGTnVzJVwExTzcQG5x0Mqj04oAZMa5Nm00ZzPXjUXMb8HS2rfcqqZK+EmKGazv9zjofQukObEpPZ/pPgWEGOMXcQLH7DTTolArO/BVK1PjjXze0n1JmhO64GYzf5SjYdQjtAF0lyY1GSKz+UZZ6rWJ/dfCZfiTdDcJkAxm/2lGA9zyLULRIqY1GSKz+WZZqzWJ4dfCTdW8JDiTdDcJkAxm5ulGA9zyLELRKocNqlZU1ORQi3H2EVcVVRidD/l3v/gCEwVPKR4E7Sk9UCs5mYx42Gv69vcWEjhPiRFDpnUrKmpSKWWY+wiLiiy3ds5yDyW3niEFDyk1g/c3E2AhmJrbTykcn2bIpf9tAfBbxRJSkqCv78/vLy8EBoaipycnCaXLSoqwvTp0zF48GB4eHhg1qxZBpfbsGEDBg8ejI4dO6J///6YP38+KioqzD8KM1nzBhEpvX2k4aa2Z4wGm0M9dE5msd/OIUUNbwx5cv9VUd8KIpaGG8/2s7fNfgmzHAseQt76IdZ3Zk1sjZHS9W2MXPazsea4ZgXV1Hbt2oW4uDisWbMGwcHBSEpKQmRkJI4dOwZvb2+95e/cuQMPDw/MmTMHW7duNbjO7du3Y8mSJVi/fj2GDh2Kc+fO4eWXX0ZlZSXeeeedJvdlxpFSq6vb1tww7PVCX3OJ+XYOKZJDSdWaZl5bFzxs0XRlqglQzO/MVk3x4LdLAAAXaklEQVTocilMmNrP5myaFLqtpr7/b//YUdT9EZTU3n33XUyePBkvvvgiACAhIQGHDx9GcnIylixZord8165dsXr1agDA7t27Da7z+PHjGDRoECZNmqT9zKRJk7Bnzx6j+7L97G3t/y29IKy5Ydjrhb5iHqMU+2PMJYd+QWtukLYc+GDLAoGxJlExvzNbJR+5tGIY28/mLPCZs62mvn+xmWx+rKqqQl5eHsLCwnSmh4WFITc31+INBwcH4/vvv8dXX30FALh48SL279+Pxx57TPA6LK1uW/NyVHu90NdcpvbTWNOlHMihRG3NDbLx78mJ+SJhezVdifmd2Sr5yOXFycb2U6q/H9jU9y82kzW1kpIS1NbWQqPR6EzXaDS4cuWKxRueMGECSktL8cQTT6C+vh41NTWYOHEi/v73v5u1HksuCGtqKpZ8Vmj1XMyLXgm1MWPkUKK2trZlq4Eg9ioQiPmd2aomK5frxth+SvX3A5v6/sUmePSjSqXS+bu+vl5vmjmysrKQkJCANWvWIDAwEGfPnsWiRYvwj3/8A4sXLxa8nta1N1FQUG7RPtz75o2qohIUFIn/2V9vq/CXk64orPy9Unz00k2843cHnVvpvoWwTW1LAPoXjyXHWFBQYNZ+ys2U9iocddONaxe3OkxpX4qCghKz1tUQK1tY21uFjReccbWqBTQudYjxuY2qoht2/R6sOc+siZWY3xlg29iKcd3Y8rxqYGg/xbyPmGLOtpr6/sVmMql5enrCyclJr1Z27do1vdqbOZYvX44JEyZg6tSpAAA/Pz/cunULr7zyCmJjY+HsbDrfdm/rhFWhHSRXirrX6iOlKKy8rTOtsLIFUso8sNlftxS+qmM1TjVqn7bkGAsKCuDr62vdjkucL4C93autLlHbOla+AB71t9nqLWLpeWZtrMT6zu5dn9Ri28Ce16BY9xGxt9XU9y82k5nDxcUFAQEBSE9Px7hx47TT09PT8fTTT1u84Vu3bsHJSbc66uTkhPr6pt6hftcT3q64UV0v2WaBxsypnsul6UMqpPacllxYe55ZM7KO35ntNed9xNxtNcf3L6j5cfbs2YiOjkZgYCCCgoKQnJyMoqIiREVFAQCio6MBAJs2bdJ+Jj8/HwBw/fp1qFQq5Ofnw8XFBX379gUAPP7443jvvfcwYMAABAYG4pdffsHy5csxevRoo7W0H8trJDVs2xRz+xHkeNEr/c0GSjw+S88zOTxKQc17H5HaPUtQUhs/fjxKS0uRkJCA4uJi9OvXD6mpqfDxufvOvMLCQr3PhISE6Px94MABeHt748SJEwCABQsWQKVSYfny5bh06RI8PT3x+OOP469//avRfZHasG1T5P5OOlOUfpOT0vFJIbnK4VEKcmyq8vJy4+19EqP+568Y0dEFe8ZY3p/X3BpuRs3VpNic7fkzjpTqPDvYILJHK1nc5EzFSirHZyi5dm/r1KzJtaCgAHPPqJFVVKU3T27XpK0JvQalUFBRGlm++1FKw7YB0yemseq53E9qOTwvZg2pHJ9UakhyeJRCLqTUCqAksktqUmu6U8LLka2h9JucVI5PKslV6c3pzcncgoqtCsBirlcKhXTZJTWp3fCtKUFLpfRtDaXf5KRyfFJJrhyhKx5zCiq2KgCLuV6pFNJll9SkdvFI+eXIzUGpN7l7S5x93Z3RT+1s10dJpJJcAemNdpMrcwoqtioAi7leqRTSZZfULGWrarFUX47cnMy5yUmhecIUKQzKaEyphQdHZk5BxVYFYDHXK5VCukMkNVtWi60pQUup9N0cpNI8YYpUSpyNsYakLOYUVGxVABZzvVIppDtEUrPlTaq5X45sK81Rg5JqsmhMKiVOUj6hBRVbFYDFXK9UCukOkdRsfZOypgQthdK32DWophKkXJKFVEqcRA1sVQAWc71SKaQ7RFJTyk3KVrUpMWtQxhKkXL4HqZQ4SX4artGz11zR41KpqDd1WxWAxVyvFArpDpHUlHCTsmV/lJg1KGMJUi7fg1RKnCQvuteoE/53/bYk+4yVziGSmhJuUrbsjxKzBmUsQcrpe5BCiZPkRS59xkrnEEkNkP9Nypb9UWLWoEwlSLl/D0RNkUufsdK1ML0ISYEt+6MaalCRPVphREcXRPZoZXGTyesD26J7W919kmITI5HY5NJnrHQOU1OTO1v3R4lVg5JTEyORGLSDQ65Xo7UzcLPm93lSLdDJ4SUIlmJSkwk5JQs2MVpOyTcbKRErzoYGcLV2VqGHWw36dmgjye9PLi9BsBSTmowwWSib0m82UiFmnA0NDrlZU4/Oreoke60qfUAL+9SIJMLYzYbEI2acmxoccrVKurdWpQ9oYU2N7Eppv+VkDaXfbKRCzDg3NThE41JncLoUzlGlD2hhUiO7UeJvOVlD6TcbqRAzzk0N4Irxua23rFTOUbm8BMFS0q0jk+KJ2QykhKY7Pg7RPMSMc1OPw3RuVa+3rFTOUTEf4ZEi1tTIbpT4W07WkNMIVzkTO86GBnAVFOkvJ6VzVMmDzpjUyG6U+FtO1lLyzUZK7BFnpZyjUsfmR7IbMZuB2HRHUsdztHmwpkZ2o8TfciJqCs/R5sGkRnaltN9yIjKG56jtMakREZFgUnjWzhgmNSIiM0j9pm5LUnnWzhgmNSIigeRwU7clObw3kqMfBTp/oxozjpTiyf1XMeNIKc7fqLb3LjUbRz52ontJ5QFqe5HSs3ZNYU1NAEcunTnysRM1Joebui3J4Vk71tQEcOTSmSMfO1Fjcrip25IcnrVjUhPAkUtnjnzsRI3J4aZuS3J4bySbHwVw5NKZIx87UWN8gFr6z9oxqQmg9J9qMMaRj53IEKnf1B0dk5oAjlw6c+RjJyL5YVITyJFLZ4587HLiyA8FEzVgUiNSAD56QXSX4NGPSUlJ8Pf3h5eXF0JDQ5GTk9PkskVFRZg+fToGDx4MDw8PzJo1y+By169fx8KFC9G3b1906NABAwYMwKeffmr+URA5OD56QXSXoJrarl27EBcXhzVr1iA4OBhJSUmIjIzEsWPH4O3trbf8nTt34OHhgTlz5mDr1q0G11ldXY3x48dDrVbjn//8Jx544AFcunQJrq6u1h0RkQPioxdEdwlKau+++y4mT56MF198EQCQkJCAw4cPIzk5GUuWLNFbvmvXrli9ejUAYPfu3QbXmZKSgqtXr2Lfvn1wcXHRfo6IzMdHL4juMtn8WFVVhby8PISFhelMDwsLQ25ursUb3rt3L4KCgrBw4UL07t0bQUFBWLFiBaqr+V5BJeF7I5uHoz8UTMY50nVosqZWUlKC2tpaaDQanekajQZXrlyxeMPnzp1DRkYG/vjHPyI1NRXnz5/HggULcPPmTcTHxzf5uYKCAou36UikEKdfb6vwl5OuKKz8vex09NJNvON3B51b1dtxz3RJIVZiWNtbhY0XnHG1qgU0LnWI8bmNqqIbKCgSbxtKiVVzkEqspH4d+vr6iro+waMfVSqVzt/19fV608xRV1cHjUaD9evXw8nJCQEBASgrK8Nrr72GZcuWNblusQOgRAUFBZKI0+ojpSisvK0zrbCyBVLKPLDZXxqPCEglVmLwBfCov+3Wr6RY2ZqUYiWH61BMJpOap6cnnJyc9Gpl165d06u9mcPLywstW7aEk9PvTSa9e/fGrVu3UFJSgvvvv9/idZM0cPACkf052nVosk/NxcUFAQEBSE9P15menp6OoKAgizccHByMs2fPoq6uTjvtzJkzuO++++Dp6Wnxekk6OHiByP4c7ToU9Jza7Nmz8dFHH2Hbtm04deoUYmNjUVRUhKioKABAdHQ0oqOjdT6Tn5+P/Px8XL9+HWVlZcjPz8dPP/2knf/nP/8Z5eXliI2NRUFBAQ4fPoyVK1di2rRpVjVrknRw8AKR/TnadSioT238+PEoLS1FQkICiouL0a9fP6SmpsLHxwcAUFhYqPeZkJAQnb8PHDgAb29vnDhxAgDQpUsX7Nq1C4sXL8aIESPQoUMHTJkyBQsWLLD2mEgi+N5IIvtztOtQVV5ebv/hLyQqKXVSS529YyWn9zXaO1ZywljZD9/9SGQnfF8jkfj4y9dEdsL3NRKJj0mNyE4cbag1UXNgUiOyE0cbak3UHJjUiOzE0YZaEzUHDhQhshNHG2pN5pHTyFgpYVIjsqOubVtic6jy3r9Hd1mamDgy1nJMakRENmBNYjI2MpaFIOPYp0ZEZAPWPLLBkbGWY1IjIrIBaxITR8ZajkmNiMgGrElMHBlrOSY1IiIbsCYxNYyMjezRCiM6uiCyRysOEhGIA0WIiGzA2kc2ODLWMkxqREQ2wsTU/Nj8SEREisGkRkREisGkRkREisGkRkREisGkRkREisGkRkREisGkRkREisGkRkREisGkRkREisE3ihCBvzJMpBRMauTw+CvDRMrB5kdyeNb8mCMRSQuTGjk8/sowkXIwqZHD468MEykHkxo5PP7KMJFycKAIOTxrf8yRiKSDSY0I/DFHIqVg8yMRESkGkxoRESkGkxoRESkGkxoRESkGkxoRESmG4KSWlJQEf39/eHl5ITQ0FDk5OU0uW1RUhOnTp2Pw4MHw8PDArFmzjK57x44dUKvVmDhxovA9JyIiakRQUtu1axfi4uIwb948ZGRkYMiQIYiMjMTFixcNLn/nzh14eHhgzpw5GDRokNF1nzt3Dm+88QaGDh1q/t4TERHdQ1BSe/fddzF58mS8+OKL6NOnDxISEuDl5YXk5GSDy3ft2hWrV6/GlClT0L59+ybXW11djWnTpuH1119Ht27dLDoAIiKiBiaTWlVVFfLy8hAWFqYzPSwsDLm5uVZtfNmyZfDx8cHkyZOtWg8REREg4I0iJSUlqK2thUaj0Zmu0Whw5coVizf83//+F7t27UJWVpZZnysoKLB4m46EcRKOsRKOsRKOsRLG19dX1PUJfk2WSqXS+bu+vl5vmlAlJSV46aWXsHnzZqjVarM+K3YAlKigoIBxEoixEo6xEo6xsh+TSc3T0xNOTk56tbJr167p1d6E+uGHH1BUVIRx48Zpp9XV1Wm3d+zYMZ4QRERkNpNJzcXFBQEBAUhPT9dJQunp6Xj66act2ujAgQP1HgmIj49HeXk53nzzTXTt2tWi9RIRkWMT1Pw4e/ZsREdHIzAwEEFBQUhOTkZRURGioqIAANHR0QCATZs2aT+Tn58PALh+/TpUKhXy8/Ph4uKCvn37onXr1ujfv7/ONtzd3VFbW6s3nYiISChBSW38+PEoLS1FQkICiouL0a9fP6SmpsLHxwcAUFhYqPeZkJAQnb8PHDgAb29vnDhxQoTdJiIi0qcqLy+vt/dOkLjYSS0cYyUcYyUcY2U/fPcjEREpBpMaEREpBpMaEREpBpMaEREpBpMaEREpBpMaEREpBpMaEREpBpMaEREpBpMaEREpBpMaEREpBpMaEREpBpMaEREpBpMaEREpBpMaEREpBpMaEREpBpMaEREpBpMaEREpBpMaEREpBpMaEREphqq8vLze3jtBREQkBtbUiIhIMZjUiIhIMZjUiIhIMZjUiIhIMZjUiIhIMSSf1JKSkuDv7w8vLy+EhoYiJyfH3rtkd4mJiRg5ciS8vb3Rs2dPTJw4ET/88IPOMvX19VixYgX69u2Ljh07YuzYsfjxxx/ttMfSsWbNGqjVaixYsEA7jbH6XVFREWJiYtCzZ094eXkhKCgIWVlZ2vmM1e9qa2sRHx+vvT/5+/sjPj4eNTU12mUcNV7Z2dmYNGkS+vXrB7VajZSUFJ35QuJSXl6OmTNnwsfHBz4+Ppg5cybKy8tNblvSSW3Xrl2Ii4vDvHnzkJGRgSFDhiAyMhIXL160967ZVVZWFqZNm4aDBw9i9+7dcHZ2xrhx41BWVqZdZt26dXj33XexatUq/Pe//4VGo8EzzzyDGzdu2HHP7eurr77C1q1b4efnpzOdsbqrvLwco0ePRn19PVJTU5Gbm4vVq1dDo9Fol2GsfvfWW28hKSkJq1atwvHjx7Fy5Ups3rwZiYmJ2mUcNV43b95E//79sXLlSrRq1UpvvpC4TJ8+Hfn5+di+fTt27NiB/Px8REdHm9y2pJ9TCw8Ph5+fH9avX6+dNnDgQERERGDJkiV23DNpqaiogI+PD1JSUjBmzBjU19ejb9++mDFjBubPnw8AuH37Nnx9fbFs2TJERUXZeY+b32+//YbQ0FCsW7cOq1evRv/+/ZGQkMBY3WPp0qXIzs7GwYMHDc5nrHRNnDgR7du3x8aNG7XTYmJiUFZWhk8++YTx+j+dO3fG6tWrMWXKFADCzqNTp04hKCgIBw4cQHBwMADg6NGjGDNmDL766iv4+vo2uT3J1tSqqqqQl5eHsLAwnelhYWHIzc21015JU0VFBerq6qBWqwEA58+fR3FxsU7sWrVqhWHDhjls7ObMmYOIiAiEhobqTGesfrd3714EBgYiKioKvXr1wiOPPIL3338f9fV3y72Mla7g4GBkZWXh9OnTAICffvoJmZmZeOyxxwAwXk0REpfjx4+jTZs2CAoK0i4THByM1q1bm4yds21223olJSWora3VafoAAI1GgytXrthpr6QpLi4ODz30EIYMGQIAKC4uBgCDsbt8+XKz75+9bd26FWfPnsWmTZv05jFWvzt37hy2bNmCl156CXPmzMGJEycQGxsLAJg5cyZj1cicOXNQUVGBoKAgODk5oaamBvPnz8f06dMB8NxqipC4XLlyBZ6enlCpVNr5KpUK999/v8n7v2STWoN7Dwq4W3VtPM2Rvfbaazh27BgOHDgAJycnnXmMHVBQUIClS5di//79cHFxaXI5xgqoq6vDgAEDtE37Dz/8MM6ePYukpCTMnDlTuxxjddeuXbvw8ccfIykpCX379sWJEycQFxcHHx8fTJ06Vbsc42WYqbgYipGQ2Em2+dHT0xNOTk56WfnatWt6Gd5RLVq0CDt37sTu3bvRrVs37XQvLy8AYOxwtxmjpKQEQ4cOhaenJzw9PZGdnY2kpCR4enrCw8MDAGMF3D1v+vTpozOtd+/eKCws1M4HGKsGb7zxBv7yl79gwoQJ8PPzw6RJkzB79mysXbsWAOPVFCFx6dChA65du6Zt+gbuJrSSkhKTsZNsUnNxcUFAQADS09N1pqenp+u0szqq2NhY7NixA7t370bv3r115nXt2hVeXl46sausrMTRo0cdLnZjx45FTk4OMjMztf8GDBiACRMmIDMzE7169WKs/k9wcDDOnDmjM+3MmTPw9vYGwPOqsVu3bum1jjg5OaGurg4A49UUIXEZMmQIKioqcPz4ce0yx48fx82bN03GzikuLu5vNtlzEbRt2xYrVqxAx44d4ebmhoSEBOTk5OCdd96Bu7u7vXfPbubPn4+PP/4YH3zwAbp06YKbN2/i5s2bAO4WBlQqFWpra7F27Vr06tULtbW1WLx4MYqLi/HWW2/B1dXVzkfQfNzc3KDRaHT+bd++HT4+PpgyZQpjdY8uXbpg1apVaNGiBTp27IgjR44gPj4ec+fORWBgIGPVyKlTp/DJJ5+gV69eaNmyJTIzM7Fs2TKMHz8e4eHhDh2viooK/PTTTyguLsaHH36I/v37o127dqiqqoK7u7vJuNx///34+uuvsWPHDvj7++PXX3/F3LlzMXDgQJPD+iU9pB+4+/D1unXrUFxcjH79+uEf//gHhg8fbu/dsquGUY6NxcbGYtGiRQDuVtVXrlyJDz74AOXl5QgMDMSbb76J/v37N+euStLYsWO1Q/oBxupeBw8exNKlS3HmzBl06dIFM2bMQHR0tLYfg7H63Y0bN7B8+XJ8/vnnuHbtGry8vDBhwgQsXLgQbm5uABw3XpmZmXjqqaf0pj/33HPYsGGDoLiUlZUhNjYW+/fvBwCMGTMGq1evbvL+10DySY2IiEgoyfapERERmYtJjYiIFINJjYiIFINJjYiIFINJjYiIFINJjYiIFINJjYiIFINJjYiIFINJjYiIFOP/A/pwIw6soK7CAAAAAElFTkSuQmCC\n", "text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "plt.style.use('fivethirtyeight')\n", "\n", "plt.scatter(range(n_estimators), errors)\n", "plt.xlim([0, n_estimators])\n", "plt.title('OOB error of each tree')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Estimate $\\alpha$" ] }, { "cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [], "source": [ "alpha = (1 - errors) / (1 - errors).sum()" ] }, { "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [], "source": [ "weighted_sum_1 = ((y_pred_df) * alpha).sum(axis=1)" ] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "438 0.019993\n", "2674 0.050009\n", "1345 0.350236\n", "1957 0.170230\n", "2148 0.030047\n", "3106 0.040100\n", "1786 0.219819\n", "321 0.059707\n", "3082 0.100178\n", "2240 0.050128\n", "1910 0.180194\n", "2124 0.190111\n", "2351 0.049877\n", "1736 0.950014\n", "879 0.039378\n", "785 0.219632\n", "2684 0.010104\n", "787 0.710568\n", "170 0.220390\n", "1720 0.020166\n", "dtype: float64" ] }, "execution_count": 45, "metadata": {}, "output_type": "execute_result" } ], "source": [ "weighted_sum_1.head(20)" ] }, { "cell_type": "code", "execution_count": 46, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(0.5267489711934156, 0.8954545454545455)" ] }, "execution_count": 46, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y_pred = (weighted_sum_1 >= 0.5).astype(np.int)\n", "\n", "metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Using Weighted voting with sklearn" ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(0.536, 0.8945454545454545)" ] }, "execution_count": 47, "metadata": {}, "output_type": "execute_result" } ], "source": [ "clf = BaggingClassifier(base_estimator=DecisionTreeClassifier(), n_estimators=100, bootstrap=True,\n", " random_state=42, n_jobs=-1, oob_score=True)\n", "clf.fit(X_train, y_train)\n", "y_pred = clf.predict(X_test)\n", "metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)" ] }, { "cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [], "source": [ "errors = np.zeros(clf.n_estimators)\n", "y_pred_all_ = np.zeros((X_test.shape[0], clf.n_estimators))\n", "\n", "for i in range(clf.n_estimators):\n", " oob_sample = ~clf.estimators_samples_[i]\n", " y_pred_ = clf.estimators_[i].predict(X_train.values[oob_sample])\n", " errors[i] = metrics.accuracy_score(y_pred_, y_train.values[oob_sample])\n", " y_pred_all_[:, i] = clf.estimators_[i].predict(X_test)\n", " \n", "alpha = (1 - errors) / (1 - errors).sum()\n", "y_pred = (np.sum(y_pred_all_ * alpha, axis=1) >= 0.5).astype(np.int)" ] }, { "cell_type": "code", "execution_count": 49, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(0.5533596837944664, 0.8972727272727272)" ] }, "execution_count": 49, "metadata": {}, "output_type": "execute_result" } ], "source": [ "metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Part 5: Combination of classifiers - Stacking" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The staking method consists in combining the different base classifiers by learning a \n", "second level algorithm on top of them. In this framework, once the base \n", "classifiers are constructed using the training set $\\mathcal{S}$, a new set is constructed \n", "where the output of the base classifiers are now considered as the features while keeping the \n", "class labels.\n", "\n", "Even though there is no restriction on which algorithm can be used as a second level learner, \n", "it is common to use a linear model, such as \n", "$$\n", " f_s(\\mathcal{S},\\mathcal{M},\\beta) =\n", " g \\left( \\sum_{j=1}^T \\beta_j M_j(\\mathcal{S}) \\right),\n", "$$\n", "where $\\beta=\\{\\beta_j\\}_{j=1}^T$, and $g(\\cdot)$ is the sign function \n", "$g(z)=sign(z)$ in the case of a linear regression or the sigmoid function, defined \n", "as $g(z)=1/(1+e^{-z})$, in the case of a logistic regression. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lets first get a new training set consisting of the output of every classifier" ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [], "source": [ "X_train_2 = pd.DataFrame(index=X_train.index, columns=list(range(n_estimators)))\n", "\n", "for i in range(n_estimators):\n", " X_train_2[i] = trees[i].predict(X_train)" ] }, { "cell_type": "code", "execution_count": 51, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
0123456789...90919293949596979899
23600000000000...0000000000
14120010000000...1000000000
14040000000000...0000100000
6261101111111...1111111111
3470000000000...0000000000
\n", "

5 rows × 100 columns

\n", "
" ], "text/plain": [ " 0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 \\\n", "2360 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 \n", "1412 0 0 1 0 0 0 0 0 0 0 ... 1 0 0 0 0 0 0 \n", "1404 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 1 0 0 \n", "626 1 1 0 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 \n", "347 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 \n", "\n", " 97 98 99 \n", "2360 0 0 0 \n", "1412 0 0 0 \n", "1404 0 0 0 \n", "626 1 1 1 \n", "347 0 0 0 \n", "\n", "[5 rows x 100 columns]" ] }, "execution_count": 51, "metadata": {}, "output_type": "execute_result" } ], "source": [ "X_train_2.head()" ] }, { "cell_type": "code", "execution_count": 52, "metadata": {}, "outputs": [], "source": [ "from sklearn.linear_model import LogisticRegressionCV" ] }, { "cell_type": "code", "execution_count": 53, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "LogisticRegressionCV(Cs=10, class_weight=None, cv=5, dual=False,\n", " fit_intercept=True, intercept_scaling=1.0, max_iter=100,\n", " multi_class='ovr', n_jobs=1, penalty='l2', random_state=None,\n", " refit=True, scoring=None, solver='lbfgs', tol=0.0001, verbose=0)" ] }, "execution_count": 53, "metadata": {}, "output_type": "execute_result" } ], "source": [ "lr = LogisticRegressionCV(cv = 5 )\n", "lr.fit(X_train_2, y_train)" ] }, { "cell_type": "code", "execution_count": 54, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[0.10093164, 0.10422116, 0.09431398, 0.09652653, 0.09708914,\n", " 0.09902868, 0.11099774, 0.09661942, 0.09339793, 0.09113735,\n", " 0.10012431, 0.09821555, 0.09383391, 0.09553896, 0.09147924,\n", " 0.09649782, 0.08966216, 0.09196528, 0.09684185, 0.09020504,\n", " 0.0983949 , 0.09514009, 0.10446051, 0.10029114, 0.09671778,\n", " 0.09725594, 0.10912314, 0.10590886, 0.10274701, 0.10275977,\n", " 0.10607442, 0.09803138, 0.1031967 , 0.09266065, 0.09702167,\n", " 0.095245 , 0.08884686, 0.0996088 , 0.09053837, 0.09010279,\n", " 0.09905727, 0.09880662, 0.10538906, 0.09584236, 0.09633239,\n", " 0.09001192, 0.09181503, 0.08995192, 0.10130381, 0.10827454,\n", " 0.10065035, 0.09770659, 0.08922769, 0.10078159, 0.10173676,\n", " 0.10522662, 0.0974279 , 0.09597549, 0.08932533, 0.1003361 ,\n", " 0.10345933, 0.1014522 , 0.09016942, 0.10348487, 0.09335792,\n", " 0.09796407, 0.10166743, 0.09307337, 0.09538791, 0.10997033,\n", " 0.09352554, 0.09860746, 0.10597265, 0.09583425, 0.0982285 ,\n", " 0.09994926, 0.10224051, 0.10065239, 0.10209171, 0.11258262,\n", " 0.09956141, 0.11516098, 0.09798579, 0.10092722, 0.10149644,\n", " 0.10275359, 0.09181294, 0.09903724, 0.10016702, 0.10146037,\n", " 0.09848365, 0.10322647, 0.09913428, 0.08925698, 0.0994986 ,\n", " 0.10277998, 0.09249995, 0.09541316, 0.10532089, 0.09850201]])" ] }, "execution_count": 54, "metadata": {}, "output_type": "execute_result" } ], "source": [ "lr.coef_" ] }, { "cell_type": "code", "execution_count": 55, "metadata": {}, "outputs": [], "source": [ "y_pred = lr.predict(y_pred_df)" ] }, { "cell_type": "code", "execution_count": 56, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(0.5365853658536585, 0.8963636363636364)" ] }, "execution_count": 56, "metadata": {}, "output_type": "execute_result" } ], "source": [ "metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Using sklearn" ] }, { "cell_type": "code", "execution_count": 57, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(0.5625000000000001, 0.8981818181818182)" ] }, "execution_count": 57, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y_pred_all_ = np.zeros((X_test.shape[0], clf.n_estimators))\n", "X_train_3 = np.zeros((X_train.shape[0], clf.n_estimators))\n", "\n", "for i in range(clf.n_estimators):\n", "\n", " X_train_3[:, i] = clf.estimators_[i].predict(X_train)\n", " y_pred_all_[:, i] = clf.estimators_[i].predict(X_test)\n", " \n", "lr = LogisticRegressionCV(cv=5)\n", "lr.fit(X_train_3, y_train)\n", "\n", "y_pred = lr.predict(y_pred_all_)\n", "metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "vs using only one dt" ] }, { "cell_type": "code", "execution_count": 58, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(0.44510385756676557, 0.83)" ] }, "execution_count": 58, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dt = DecisionTreeClassifier()\n", "dt.fit(X_train, y_train)\n", "y_pred = dt.predict(X_test)\n", "metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.0" } }, "nbformat": 4, "nbformat_minor": 1 }