{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 06 - Decision Trees\n", "\n", "by [Alejandro Correa Bahnsen](albahnsen.com/) and [Jesus Solano](https://github.com/jesugome)\n", "\n", "version 1.4, January 2019\n", "\n", "## Part of the class [Practical Machine Learning](https://github.com/albahnsen/PracticalMachineLearningClass)\n", "\n", "\n", "\n", "This notebook is licensed under a [Creative Commons Attribution-ShareAlike 3.0 Unported License](http://creativecommons.org/licenses/by-sa/3.0/deed.en_US). Special thanks goes to [Rick Muller](http://www.cs.sandia.gov/~rmuller/), Sandia National Laboratories" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Why are we learning about decision trees?\n", "\n", "- Can be applied to both regression and classification problems\n", "- Many useful properties\n", "- Very popular\n", "- Basis for more sophisticated models\n", "- Have a different way of \"thinking\" than the other models we have studied" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Lesson objectives\n", "\n", "Students will be able to:\n", "\n", "- Explain how a decision tree is created\n", "- Build a decision tree model in scikit-learn\n", "- Tune a decision tree model and explain how tuning impacts the model\n", "- Interpret a tree diagram\n", "- Describe the key differences between regression and classification trees\n", "- Decide whether a decision tree is an appropriate model for a given problem" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Part 1: Regression trees\n", "\n", "Major League Baseball player data from 1986-87:\n", "\n", "- **Years** (x-axis): number of years playing in the major leagues\n", "- **Hits** (y-axis): number of hits in the previous year\n", "- **Salary** (color): low salary is blue/green, high salary is red/yellow" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Salary data](https://github.com/justmarkham/DAT8/raw/226791169b1cc6df8e8845c12e34e748d5ffaa85/notebooks/images/salary_color.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Group exercise:\n", "\n", "- The data above is our **training data**.\n", "- We want to build a model that predicts the Salary of **future players** based on Years and Hits.\n", "- We are going to \"segment\" the feature space into regions, and then use the **mean Salary in each region** as the predicted Salary for future players.\n", "- Intuitively, you want to **maximize** the similarity (or \"homogeneity\") within a given region, and **minimize** the similarity between different regions.\n", "\n", "Rules for segmenting:\n", "\n", "- You can only use **straight lines**, drawn one at a time.\n", "- Your line must either be **vertical or horizontal**.\n", "- Your line **stops** when it hits an existing line." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Salary regions](https://github.com/justmarkham/DAT8/raw/226791169b1cc6df8e8845c12e34e748d5ffaa85/notebooks/images/salary_regions.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Above are the regions created by a computer:\n", "\n", "- $R_1$: players with **less than 5 years** of experience, mean Salary of **\\$166,000 **\n", "- $R_2$: players with **5 or more years** of experience and **less than 118 hits**, mean Salary of **\\$403,000 **\n", "- $R_3$: players with **5 or more years** of experience and **118 hits or more**, mean Salary of **\\$846,000 **\n", "\n", "**Note:** Years and Hits are both integers, but the convention is to use the **midpoint** between adjacent values to label a split.\n", "\n", "These regions are used to make predictions on **out-of-sample data**. Thus, there are only three possible predictions! (Is this different from how **linear regression** makes predictions?)\n", "\n", "Below is the equivalent regression tree:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Salary tree](https://github.com/justmarkham/DAT8/raw/226791169b1cc6df8e8845c12e34e748d5ffaa85/notebooks/images/salary_tree.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The first split is **Years < 4.5**, thus that split goes at the top of the tree. When a splitting rule is **True**, you follow the left branch. When a splitting rule is **False**, you follow the right branch.\n", "\n", "For players in the **left branch**, the mean Salary is \\$166,000, thus you label it with that value. (Salary has been divided by 1000 and log-transformed to 5.11.)\n", "\n", "For players in the **right branch**, there is a further split on **Hits < 117.5**, dividing players into two more Salary regions: \\$403,000 (transformed to 6.00), and \\$846,000 (transformed to 6.74)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Salary tree annotated](https://github.com/justmarkham/DAT8/raw/226791169b1cc6df8e8845c12e34e748d5ffaa85/notebooks/images/salary_tree_annotated.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**What does this tree tell you about your data?**\n", "\n", "- Years is the most important factor determining Salary, with a lower number of Years corresponding to a lower Salary.\n", "- For a player with a lower number of Years, Hits is not an important factor determining Salary.\n", "- For a player with a higher number of Years, Hits is an important factor determining Salary, with a greater number of Hits corresponding to a higher Salary.\n", "\n", "**Question:** What do you like and dislike about decision trees so far?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Building a regression tree by hand\n", "\n", "Your **training data** is a tiny dataset of [used vehicle sale prices](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/vehicles_train.csv). Your goal is to **predict price** for testing data.\n", "\n", "1. Read the data into a Pandas DataFrame.\n", "2. Explore the data by sorting, plotting, or split-apply-combine (aka `group_by`).\n", "3. Decide which feature is the most important predictor, and use that to create your first splitting rule.\n", " - Only binary splits are allowed.\n", "4. After making your first split, split your DataFrame into two parts, and then explore each part to figure out what other splits to make.\n", "5. 
Stop making splits once you are convinced that it strikes a good balance between underfitting and overfitting.\n", " - Your goal is to build a model that generalizes well.\n", " - You are allowed to split on the same variable multiple times!\n", "6. Draw your tree, labeling the leaves with the mean price for the observations in that region.\n", " - Make sure nothing is backwards: You follow the **left branch** if the rule is true, and the **right branch** if the rule is false." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## How does a computer build a regression tree?\n", "\n", "**Ideal approach:** Consider every possible partition of the feature space (computationally infeasible)\n", "\n", "**\"Good enough\" approach:** recursive binary splitting\n", "\n", "1. Begin at the top of the tree.\n", "2. For **every feature**, examine **every possible cutpoint**, and choose the feature and cutpoint such that the resulting tree has the lowest possible mean squared error (MSE). Make that split.\n", "3. Examine the two resulting regions, and again make a **single split** (in one of the regions) to minimize the MSE.\n", "4. Keep repeating step 3 until a **stopping criterion** is met:\n", " - maximum tree depth (maximum number of splits required to arrive at a leaf)\n", " - minimum number of observations in a leaf" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Demo: Choosing the ideal cutpoint for a given feature" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# vehicle data\n", "import pandas as pd\n", "url = 'https://github.com/albahnsen/PracticalMachineLearningClass/raw/master/datasets/vehicles_train.csv'\n", "train = pd.read_csv(url)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
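{ "cell_type": "markdown", "metadata": {}, "source": [ "Before walking through the demo cell by cell, here is a compact sketch of the recursive-binary-splitting idea described above. It is not the notebook's own helper code; the function name `best_split` and the use of RMSE as the splitting criterion are illustrative assumptions consistent with the demo that follows.\n", "\n", "```python\n", "import numpy as np\n", "import pandas as pd\n", "\n", "def best_split(df, features, target='price'):\n", "    \"\"\"Exhaustively try every feature/cutpoint pair and return the one\n", "    with the lowest RMSE, mimicking one step of recursive binary splitting.\"\"\"\n", "    best = (None, None, np.inf)\n", "    for feat in features:\n", "        for cut in df[feat].unique():\n", "            left, right = df[df[feat] < cut], df[df[feat] >= cut]\n", "            if left.empty or right.empty:\n", "                continue\n", "            pred = np.where(df[feat] < cut, left[target].mean(), right[target].mean())\n", "            rmse = np.sqrt(((df[target] - pred) ** 2).mean())\n", "            if rmse < best[2]:\n", "                best = (feat, cut, rmse)\n", "    return best\n", "```\n", "\n", "Calling `best_split(train, ['year', 'miles'])` on the vehicle data below reproduces the kind of search performed manually in the next cells." ] },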
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
priceyearmilesdoorsvtypeprediction
0220002012130002car6571.428571
1140002010300002car6571.428571
2130002010735004car6571.428571
395002009780004car6571.428571
490002007470004car6571.428571
5400020061240002car6571.428571
6300020041770004car6571.428571
7200020042090004truck6571.428571
8300020031380002car6571.428571
9190020031600004car6571.428571
10250020031900002truck6571.428571
1150002001620004car6571.428571
12180019991630002truck6571.428571
13130019971380004car6571.428571
\n", "
" ], "text/plain": [ " price year miles doors vtype prediction\n", "0 22000 2012 13000 2 car 6571.428571\n", "1 14000 2010 30000 2 car 6571.428571\n", "2 13000 2010 73500 4 car 6571.428571\n", "3 9500 2009 78000 4 car 6571.428571\n", "4 9000 2007 47000 4 car 6571.428571\n", "5 4000 2006 124000 2 car 6571.428571\n", "6 3000 2004 177000 4 car 6571.428571\n", "7 2000 2004 209000 4 truck 6571.428571\n", "8 3000 2003 138000 2 car 6571.428571\n", "9 1900 2003 160000 4 car 6571.428571\n", "10 2500 2003 190000 2 truck 6571.428571\n", "11 5000 2001 62000 4 car 6571.428571\n", "12 1800 1999 163000 2 truck 6571.428571\n", "13 1300 1997 138000 4 car 6571.428571" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# before splitting anything, just predict the mean of the entire dataset\n", "train['prediction'] = train.price.mean()\n", "train" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "3042.7402778200435" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "year = 2010\n", "train['pred'] = train.loc[train.year=year, 'pred'] = train.loc[train.year>=year, 'price'].mean()\n", "\n", "(((train['price'] - train['pred'])**2).mean()) ** 0.5" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "train_izq = train.loc[train.year<2010].copy()" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([2009, 2007, 2006, 2004, 2003, 2001, 1999, 1997])" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_izq.year.unique()" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "def error_año(train, year):\n", " train['pred'] = train.loc[train.year=year, 'pred'] = train.loc[train.year>=year, 'price'].mean()\n", " print ((((train['price'] - train['pred'])**2).mean()) ** 0.5)" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "def error_miles(train, miles):\n", " train['pred'] = train.loc[train.miles=miles, 'pred'] = train.loc[train.miles>=miles, 'price'].mean()\n", " print ((((train['price'] - train['pred'])**2).mean()) ** 0.5)" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Year 2009\n", "2057.469761182851\n", "Year 2007\n", "1009.9754972525349\n", "Year 2006\n", "1588.559953943422\n", "Year 2004\n", "2291.254783922662\n", "Year 2003\n", "2609.750075111687\n", "Year 2001\n", "2474.322680507279\n", "Year 1999\n", "2584.235424119236\n", "Year 1997\n", "2712.7492078079777\n" ] } ], "source": [ "for year in train_izq.year.unique():\n", " print('Year ',year)\n", " error_año(train_izq, year)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "count 11.000000\n", "mean 135090.909091\n", "std 53042.350147\n", "min 47000.000000\n", "25% 101000.000000\n", "50% 138000.000000\n", "75% 170000.000000\n", "max 209000.000000\n", "Name: miles, dtype: float64" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_izq.miles.describe()" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Miles 50000\n", "2183.408511312697\n", "Miles 90000\n", "1258.6217811077274\n", "Miles 95000\n", "1258.6217811077274\n", 
"Miles 100000\n", "1258.6217811077274\n", "Miles 105000\n", "1258.6217811077274\n", "Miles 110000\n", "1258.6217811077274\n", "Miles 125000\n", "1527.2099167665622\n", "Miles 140000\n", "2244.42744267988\n", "Miles 160000\n", "2244.42744267988\n", "Miles 180000\n", "2597.5610160924484\n" ] } ], "source": [ "for miles in [50000, 90000, 95000, 100000, 105000, 110000, 125000, 140000, 160000, 180000]:\n", " print('Miles ',miles)\n", " error_miles(train_izq, miles)" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Year 2012\n", "408.248290463863\n", "Year 2010\n", "4027.681991198191\n", "----------------------------------\n", "Miles 50000\n", "3265.986323710904\n", "Miles 90000\n", "4027.681991198191\n", "Miles 95000\n", "4027.681991198191\n", "Miles 100000\n", "4027.681991198191\n", "Miles 105000\n", "4027.681991198191\n", "Miles 110000\n", "4027.681991198191\n", "Miles 125000\n", "4027.681991198191\n", "Miles 140000\n", "4027.681991198191\n", "Miles 160000\n", "4027.681991198191\n", "Miles 180000\n", "4027.681991198191\n" ] } ], "source": [ "train_der = train.loc[train.year>=2010].copy()\n", "\n", "for year in train_der.year.unique():\n", " print('Year ',year)\n", " error_año(train_der, year)\n", "print('----------------------------------')\n", "for miles in [50000, 90000, 95000, 100000, 105000, 110000, 125000, 140000, 160000, 180000]:\n", " print('Miles ',miles)\n", " error_miles(train_der, miles)" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Year 2010\n", "500.0\n", "----------------------------------\n", "Miles 25000\n", "500.0\n", "Miles 50000\n", "0.0\n", "Miles 90000\n", "500.0\n", "Miles 95000\n", "500.0\n", "Miles 100000\n", "500.0\n", "Miles 105000\n", "500.0\n", "Miles 110000\n", "500.0\n", "Miles 125000\n", "500.0\n", "Miles 140000\n", "500.0\n", "Miles 160000\n", "500.0\n", "Miles 180000\n", "500.0\n" ] } ], "source": [ "train_der_izq = train_der.loc[train_der.year<2012].copy()\n", "\n", "for year in train_der_izq.year.unique():\n", " print('Year ',year)\n", " error_año(train_der_izq, year)\n", "print('----------------------------------')\n", "for miles in [25000, 50000, 90000, 95000, 100000, 105000, 110000, 125000, 140000, 160000, 180000]:\n", " print('Miles ',miles)\n", " error_miles(train_der_izq, miles)" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
priceyearmilesdoorsvtypepredictionpred
1140002010300002car6571.42857113500.0
2130002010735004car6571.42857113500.0
\n", "
" ], "text/plain": [ " price year miles doors vtype prediction pred\n", "1 14000 2010 30000 2 car 6571.428571 13500.0\n", "2 13000 2010 73500 4 car 6571.428571 13500.0" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_der_izq" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
priceyearmilesdoorsvtypepredictionpred
5400020061240002car6571.4285714277.777778
6300020041770004car6571.4285714277.777778
7200020042090004truck6571.4285712250.000000
8300020031380002car6571.4285714277.777778
9190020031600004car6571.4285714277.777778
10250020031900002truck6571.4285712250.000000
1150002001620004car6571.4285714277.777778
12180019991630002truck6571.4285714277.777778
13130019971380004car6571.4285714277.777778
\n", "
" ], "text/plain": [ " price year miles doors vtype prediction pred\n", "5 4000 2006 124000 2 car 6571.428571 4277.777778\n", "6 3000 2004 177000 4 car 6571.428571 4277.777778\n", "7 2000 2004 209000 4 truck 6571.428571 2250.000000\n", "8 3000 2003 138000 2 car 6571.428571 4277.777778\n", "9 1900 2003 160000 4 car 6571.428571 4277.777778\n", "10 2500 2003 190000 2 truck 6571.428571 2250.000000\n", "11 5000 2001 62000 4 car 6571.428571 4277.777778\n", "12 1800 1999 163000 2 truck 6571.428571 4277.777778\n", "13 1300 1997 138000 4 car 6571.428571 4277.777778" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_izq_izq = train_izq.loc[train_izq.year<2007].copy()\n", "train_izq_izq" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Year 2006\n", "1014.2731387550397\n", "Year 2004\n", "1092.8216960050065\n", "Year 2003\n", "1110.2218663819374\n", "Year 2001\n", "916.6450213894664\n", "Year 1999\n", "989.9494936611666\n", "Year 1997\n", "1110.3330609203888\n", "----------------------------------\n", "Miles 25000\n", "1110.3330609203888\n", "Miles 50000\n", "1110.3330609203888\n", "Miles 90000\n", "764.3988196979084\n", "Miles 95000\n", "764.3988196979084\n", "Miles 100000\n", "764.3988196979084\n", "Miles 105000\n", "764.3988196979084\n", "Miles 110000\n", "764.3988196979084\n", "Miles 125000\n", "574.3180911666199\n", "Miles 140000\n", "970.6527013647398\n", "Miles 160000\n", "970.6527013647398\n", "Miles 180000\n", "1081.2617556017526\n" ] } ], "source": [ "for year in train_izq_izq.year.unique():\n", " print('Year ',year)\n", " error_año(train_izq_izq, year)\n", "\n", "print('----------------------------------')\n", "for miles in [25000, 50000, 90000, 95000, 100000, 105000, 110000, 125000, 140000, 160000, 180000]:\n", " print('Miles ',miles)\n", " error_miles(train_izq_izq, miles)" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
priceyearmilesdoorsvtypepredictionpred
395002009780004car6571.4285714277.777778
490002007470004car6571.4285714277.777778
\n", "
" ], "text/plain": [ " price year miles doors vtype prediction pred\n", "3 9500 2009 78000 4 car 6571.428571 4277.777778\n", "4 9000 2007 47000 4 car 6571.428571 4277.777778" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_izq_der = train_izq.loc[train_izq.year>=2007].copy()\n", "train_izq_der" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
priceyearmilesdoorsvtypepredictionpred
0220002012130002car6571.42857116333.333333
\n", "
" ], "text/plain": [ " price year miles doors vtype prediction pred\n", "0 22000 2012 13000 2 car 6571.428571 16333.333333" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_der_der = train_der.loc[train_der.year>=2012].copy()\n", "train_der_der" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
priceyearmilesdoorsvtypepredictionpred
5400020061240002car6571.4285712857.142857
1150002001620004car6571.4285712857.142857
\n", "
" ], "text/plain": [ " price year miles doors vtype prediction pred\n", "5 4000 2006 124000 2 car 6571.428571 2857.142857\n", "11 5000 2001 62000 4 car 6571.428571 2857.142857" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_izq_izq_izq = train_izq_izq.loc[train_izq_izq.miles<125000].copy()\n", "train_izq_izq_izq" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
priceyearmilesdoorsvtypepredictionpred
6300020041770004car6571.4285712857.142857
7200020042090004truck6571.4285712250.000000
8300020031380002car6571.4285712857.142857
9190020031600004car6571.4285712857.142857
10250020031900002truck6571.4285712250.000000
12180019991630002truck6571.4285712857.142857
13130019971380004car6571.4285712857.142857
\n", "
" ], "text/plain": [ " price year miles doors vtype prediction pred\n", "6 3000 2004 177000 4 car 6571.428571 2857.142857\n", "7 2000 2004 209000 4 truck 6571.428571 2250.000000\n", "8 3000 2003 138000 2 car 6571.428571 2857.142857\n", "9 1900 2003 160000 4 car 6571.428571 2857.142857\n", "10 2500 2003 190000 2 truck 6571.428571 2250.000000\n", "12 1800 1999 163000 2 truck 6571.428571 2857.142857\n", "13 1300 1997 138000 4 car 6571.428571 2857.142857" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_izq_izq_der = train_izq_izq.loc[train_izq_izq.miles>=125000].copy()\n", "train_izq_izq_der" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Year 2004\n", "565.685424949238\n", "Year 2003\n", "419.69376590897457\n", "Year 1999\n", "461.8802153517006\n", "Year 1997\n", "593.8459911664722\n", "----------------------------------\n", "Miles 140000\n", "592.452529743945\n", "Miles 160000\n", "592.452529743945\n", "Miles 180000\n", "593.416259587532\n" ] } ], "source": [ "for year in train_izq_izq_der.year.unique():\n", " print('Year ',year)\n", " error_año(train_izq_izq_der, year)\n", "\n", " \n", "print('----------------------------------')\n", "for miles in [140000, 160000, 180000]:\n", " print('Miles ',miles)\n", " error_miles(train_izq_izq_der, miles)" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
priceyearmilesdoorsvtypepredictionpred
12180019991630002truck6571.4285712200.0
13130019971380004car6571.4285712200.0
\n", "
" ], "text/plain": [ " price year miles doors vtype prediction pred\n", "12 1800 1999 163000 2 truck 6571.428571 2200.0\n", "13 1300 1997 138000 4 car 6571.428571 2200.0" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_izq_izq_der_izq = train_izq_izq_der.loc[train_izq_izq_der.year<2003].copy()\n", "train_izq_izq_der_izq" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
priceyearmilesdoorsvtypepredictionpred
6300020041770004car6571.4285712200.0
7200020042090004truck6571.4285712250.0
8300020031380002car6571.4285712200.0
9190020031600004car6571.4285712200.0
10250020031900002truck6571.4285712250.0
\n", "
" ], "text/plain": [ " price year miles doors vtype prediction pred\n", "6 3000 2004 177000 4 car 6571.428571 2200.0\n", "7 2000 2004 209000 4 truck 6571.428571 2250.0\n", "8 3000 2003 138000 2 car 6571.428571 2200.0\n", "9 1900 2003 160000 4 car 6571.428571 2200.0\n", "10 2500 2003 190000 2 truck 6571.428571 2250.0" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_izq_izq_der_der = train_izq_izq_der.loc[train_izq_izq_der.year>=2003].copy()\n", "train_izq_izq_der_der" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Year 2004\n", "470.46076705006266\n", "Year 2003\n", "470.7440918375928\n", "----------------------------------\n", "Miles 140000\n", "392.42833740697165\n", "Miles 160000\n", "392.42833740697165\n", "Miles 180000\n", "431.66344915145794\n" ] } ], "source": [ "for year in train_izq_izq_der_der.year.unique():\n", " print('Year ',year)\n", " error_año(train_izq_izq_der_der, year)\n", "\n", "print('----------------------------------')\n", "for miles in [140000, 160000, 180000]:\n", " print('Miles ',miles)\n", " error_miles(train_izq_izq_der_der, miles)" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "5936.981985995983" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# calculate RMSE for those predictions\n", "from sklearn import metrics\n", "import numpy as np\n", "np.sqrt(metrics.mean_squared_error(train.price, train.prediction))" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "# define a function that calculates the RMSE for a given split of miles\n", "def mileage_split(miles):\n", " lower_mileage_price = train[train.miles < miles].price.mean()\n", " higher_mileage_price = train[train.miles >= miles].price.mean()\n", " train['prediction'] = np.where(train.miles < miles, lower_mileage_price, higher_mileage_price)\n", " return np.sqrt(metrics.mean_squared_error(train.price, train.prediction))" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "RMSE: 3984.0917425414564\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
priceyearmilesdoorsvtypepredictionpred
0220002012130002car15000.00000016333.333333
1140002010300002car15000.00000016333.333333
2130002010735004car4272.72727316333.333333
395002009780004car4272.7272733909.090909
490002007470004car15000.0000003909.090909
5400020061240002car4272.7272733909.090909
6300020041770004car4272.7272733909.090909
7200020042090004truck4272.7272733909.090909
8300020031380002car4272.7272733909.090909
9190020031600004car4272.7272733909.090909
10250020031900002truck4272.7272733909.090909
1150002001620004car4272.7272733909.090909
12180019991630002truck4272.7272733909.090909
13130019971380004car4272.7272733909.090909
\n", "
" ], "text/plain": [ " price year miles doors vtype prediction pred\n", "0 22000 2012 13000 2 car 15000.000000 16333.333333\n", "1 14000 2010 30000 2 car 15000.000000 16333.333333\n", "2 13000 2010 73500 4 car 4272.727273 16333.333333\n", "3 9500 2009 78000 4 car 4272.727273 3909.090909\n", "4 9000 2007 47000 4 car 15000.000000 3909.090909\n", "5 4000 2006 124000 2 car 4272.727273 3909.090909\n", "6 3000 2004 177000 4 car 4272.727273 3909.090909\n", "7 2000 2004 209000 4 truck 4272.727273 3909.090909\n", "8 3000 2003 138000 2 car 4272.727273 3909.090909\n", "9 1900 2003 160000 4 car 4272.727273 3909.090909\n", "10 2500 2003 190000 2 truck 4272.727273 3909.090909\n", "11 5000 2001 62000 4 car 4272.727273 3909.090909\n", "12 1800 1999 163000 2 truck 4272.727273 3909.090909\n", "13 1300 1997 138000 4 car 4272.727273 3909.090909" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# calculate RMSE for tree which splits on miles < 50000\n", "print('RMSE:', mileage_split(50000))\n", "train" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "RMSE: 3530.146530076269\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
priceyearmilesdoorsvtypepredictionpred
0220002012130002car12083.33333316333.333333
1140002010300002car12083.33333316333.333333
2130002010735004car12083.33333316333.333333
395002009780004car12083.3333333909.090909
490002007470004car12083.3333333909.090909
5400020061240002car2437.5000003909.090909
6300020041770004car2437.5000003909.090909
7200020042090004truck2437.5000003909.090909
8300020031380002car2437.5000003909.090909
9190020031600004car2437.5000003909.090909
10250020031900002truck2437.5000003909.090909
1150002001620004car12083.3333333909.090909
12180019991630002truck2437.5000003909.090909
13130019971380004car2437.5000003909.090909
\n", "
" ], "text/plain": [ " price year miles doors vtype prediction pred\n", "0 22000 2012 13000 2 car 12083.333333 16333.333333\n", "1 14000 2010 30000 2 car 12083.333333 16333.333333\n", "2 13000 2010 73500 4 car 12083.333333 16333.333333\n", "3 9500 2009 78000 4 car 12083.333333 3909.090909\n", "4 9000 2007 47000 4 car 12083.333333 3909.090909\n", "5 4000 2006 124000 2 car 2437.500000 3909.090909\n", "6 3000 2004 177000 4 car 2437.500000 3909.090909\n", "7 2000 2004 209000 4 truck 2437.500000 3909.090909\n", "8 3000 2003 138000 2 car 2437.500000 3909.090909\n", "9 1900 2003 160000 4 car 2437.500000 3909.090909\n", "10 2500 2003 190000 2 truck 2437.500000 3909.090909\n", "11 5000 2001 62000 4 car 12083.333333 3909.090909\n", "12 1800 1999 163000 2 truck 2437.500000 3909.090909\n", "13 1300 1997 138000 4 car 2437.500000 3909.090909" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# calculate RMSE for tree which splits on miles < 100000\n", "print('RMSE:', mileage_split(100000))\n", "train" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [], "source": [ "# check all possible mileage splits\n", "mileage_range = range(train.miles.min(), train.miles.max(), 1000)\n", "RMSE = [mileage_split(miles) for miles in mileage_range]" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [], "source": [ "# allow plots to appear in the notebook\n", "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "plt.rcParams['figure.figsize'] = (6, 4)\n", "plt.rcParams['font.size'] = 14" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Text(0, 0.5, 'RMSE (lower is better)')" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" }, { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAY4AAAEKCAYAAAAFJbKyAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAIABJREFUeJzt3XucXHV9//HXZ2Z3k80FEiCkMSEmCKh4AXG5KKgoioAIaAXBW8QLarHaX6uVqC1WxZ+2agvVIqgoKhYQa82PohhApVK5JILckQChJATIfUOyl7l8fn+c7+xOJjs7M7tz5pyZvJ+Pxz525jtnzvlkdjKf+d7N3REREalXJukARESkvShxiIhIQ5Q4RESkIUocIiLSECUOERFpiBKHiIg0RIlDREQaosQhIiINUeIQEZGGdCUdQBz22WcfX7RoUdJhiIi0lZUrV25w9zm1jos1cZjZLOA7wIsBB94HPARcBSwCVgNnuPtmMzPgQuAkYAfwXnf/QzjPEuCz4bRfdPfLx7vuokWLWLFiRdP/PSIinczMHq/nuLibqi4EfunuLwAOAR4AzgNudPcDgRvDfYATgQPDzznAxQBmthdwPnAkcARwvpnNjjluERGpIrbEYWZ7Aq8Gvgvg7sPuvgU4FSjVGC4HTgu3TwV+4JFbgVlmNg94I7Dc3Te5+2ZgOXBCXHGLiMj44qxxLAbWA98zszvN7DtmNh2Y6+7rwjFPAXPD7fnAE2XPXxPKqpWLiEgC4kwcXcBhwMXu/jJgO6PNUgB4tKZ7U9Z1N7NzzGyFma1Yv359M04pIiJjiDNxrAHWuPtt4f41RInk6dAERfj9THh8LbBf2fMXhLJq5Ttx90vdvc/d++bMqTkoQEREJii2xOHuTwFPmNnzQ9FxwP3AMmBJKFsC/DzcXga8xyJHAVtDk9b1wPFmNjt0ih8fykREJAFxz+P4S+AKM+sBHgXOJkpWV5vZ+4HHgTPCsdcRDcVdRTQc92wAd99kZl8A7gjHfd7dN8Uct4iIVGGduHVsX1+fT2Qex/ahPJf89hFe+4J9edlCjfgVkd2Lma10975ax2nJkTKDuQIX3bSKu9dsTToUEZHUUuIok80YAMUOrIWJiDSLEkeZaNUTKBSVOEREqlHiKKMah4hIbUocZbJWShwJByIikmJKHGVC3lBTlYjIOJQ4yow0VSlxiIhUpcRRRk1VIiK1KXGUGWmqUue4iEhVShxlzIyMqalKRGQ8Hbnn+GRkM6bhuCLSNO7OXU9sYWC40JLrzZjaxUsXzIr1GkocFcxMTVUi0jQrHt/M6d/6fcuud+h+s/jPc4+O9RpKHBWyZmqqEpGmefCpbQBc8u6XM6u3O/brTZ8S/8e6EkeFqKkq6ShEpFOs3rCdqd0Z3vDCuWTCkP92p87xCmaaACgizbN6w3YW7T29Y5IGKHHsQp3jItJMj22MEkcnUeKokDUlDhFpjnyhyBObdrBoHyWOjmZmFIpJRyEineDJLYPkCs7ifaYlHUpTKXFUyGY0AVBEmuOxjdsB1FTV6dRUJSLNsnpDlDgWq6mqs2kCoIg0y2MbtjO9J8ucmVOSDqWpNI+jQjajCYAiu7MnNu1g2R+fxJvwBfKWVRtYtM/0kW2pO4USRwVNABTZvV1y8yP86Nb/bdr53n/M4qadKy2UOCqYaVl1kd3ZI89s55D9ZnHNh1/RlPN1ZzuvR0CJo4LWqhLZvT264VmOPmCfjvzAbxa9MhU0c1xk97V9KM/T/UM8b86MpENJNSWOCpoAKLL7eiwMn92/w4bPNpsSR4VsBtU4RHZTj6x/FoD9VeMYlxJHBU0AFNl9Pbp+O2bw3L07a4mQZlPiqBA1VSlxiOyOHtuwnfmzepnanU06lFRT4qigznGR3dejG55VM1UdNBy3QjQcN+koRKQWd+fxjTvINXE0y2Prt9P33L2adr5OFWviMLPVwDagAOTdvc/MPgd8EFgfDvu0u18Xjl8KvD8c/zF3vz6UnwBcCGSB77j7l+OLWRMARdrBf92zjo/++M6mn/eguTObfs5O04oax2vdfUNF2T+7+1fLC8zsYOBM4EXAc4AbzOyg8PA3gTcAa4A7zGyZu98fR7DZjDGcV5VDJO0e37gDgAvPPJRsk7Zl7c5meM1Bc5pyrk6WpqaqU4Er3X0IeMzMVgFHhMdWufujAGZ2ZTg2tsShPg6R9NuyY5ip3RlOPXR+0qHsduLuHHfgV2a20szOKSv/qJndbWaXmdnsUDYfeKLsmDWhrFp5LKJl1eM6u4g0y6btOfaa1pN0GLuluBPHMe5+GHAicK6ZvRq4GHgecCiwDvhaMy5kZueY2QozW7F+/fraT6gia9oBUKQdbN4xzCwljkTEmjjcfW34/QzwM+AId3/a3QvuXgS+zWhz1Fpgv7KnLwhl1corr3Wpu/e5e9+cORNvo1RTlUh72LxjmL2mK3EkIbbEYWbTzWxm6TZwPHCvmc0rO+wtwL3h9jLgTDObYmaLgQOB24E7gAPNbLGZ9RB1oC+LMW5NABRpA5u3DzNrWnfSYeyW4uwcnwv8LOx81QX82N1/aWY/NLNDifo/VgMfAnD3+8zsaqJO7zxwrrsXAMzso8D1RMNxL3P3++IKWkuOiLSHzTtyqnEkJLbEEUZBHTJG+bvHec4FwAVjlF8HXNfUAKvQDoAi6ZcvFNk6kGO2+jgSoSVHKpg6x0VSb+tADoDZaqpKhBJHhWzGNHNcJOU27xgGYLaaqhKhxFEhq85xkdTbtL1U41DiSIISRwUzQxUOkXQr1TjUOZ6MhhJHGGLb0QvVZzOoxiGScpu3q6kqSeMmDjPLmNk7zOy/zOwZ4EFgnZndb2b/ZGYHtCbM1lEfh0j6bd6hzvEk1apx/JpoeZClwJ+5+37uvi9wDHAr8BUze1fMMbZUxgxX4hBJtc07hpnSlaFXO/UlotY8jte7e66y0N03AT8FfmpmHZXyM+ocF0m9TduHmT2thzDBWFps3BqHu+fMLGtmD453TPPDSk42o8QhknZbdgyrfyNBNTvHw7IfD5nZwhbEk7iMRlWJpF5U4+ioxo62Uu+SI7OB+8zsdmB7qdDdT4klqgRltHWsSOpt2ZHjhc/pTTqM3Va9iePvYo0iRdRUJZI+haLz8Svv5KmtgwA8sXkHRx+wT8JR7b7qmsfh7r8lWsm2O9y+A/hDjHElJpNRU5VI2jzdP8i1d69jy0COKd0Zjtp/b056ybzaT5RY1FXjMLMPAucAexENz50PfAs4Lr7QkqGmKpH06R+MxuD8n9cfxJteqoSRtHpnjp8LHA30A7j7w8C+cQWVJK1VJZI+/QN5APbojXMLIalXvYljyN2HS3fMrItoI6aOk8lE48I1CVAkPbaFGsfMqRpJlQb1Jo7fmtmngV4zewPwE+D/xRdWcjJhQpFqHSLpUWqq2mOqahxpUG/iOA9YD9xDtNXrde7+mdiiSlA21DjUzyGSHtsGS01VqnGkQb3p+y/d/ULg26UCM/t4KOsopRqH8oZIevQPlJqqVONIg3prHEvGKH
tvE+NIjVDhUFOVSIpsG8wzpSvDlC4tapgG46ZvMzsLeAew2MyWlT00E9gUZ2BJUVOVSPr0D+bUMZ4itep9/wOsA/YBvlZWvg24O66gkjTSVFVMOBARGdE/mNdQ3BQZ9y/h7o8Dj5vZzWHG+Agz+wrwqTiDS8JIU5VqHCKp0T+QYw/VOFKj3j6ON4xRdmIzA0mLkaYq9XGIpMa2wbw6xlOkVh/HR4C/AJ5nZuVNUzOBW+IMLCmaACiSPv2DOebP1mq4aVErhf8Y+AXwf4nmcpRsC7sAdpyRCYBKHCKp0T+Q1+S/FKm1A+BWd1/t7mcB+wGvC/0eGTNb3JIIWyyrmeMiqbNtUH0caVJXH4eZnU/UEb40FPUAP4orqCSNNlUlHIiIADCULzCUL2rWeIrU2zn+FuAUwu5/7v4kUT9Hx9EEQJF0KS03os7x9Kg3cQx71FvsAGY2Pb6QkqUJgCLpUlpuRE1V6VFv4rjazC4BZoVNnW6gbN2qTjK6VpUSh0gaqMaRPnX9Jdz9q2E59X7gIODv3X15rJElZHRZ9YQDERGgbEl19XGkRiMp/B6gl6i56p54wkleNtTB1Mchkg6qcaRPvaOqPgDcDrwVeBtwq5m9r47nrTaze8zsLjNbEcr2MrPlZvZw+D07lJuZXWRmq8zsbjM7rOw8S8LxD5vZWCv1Nk2pxlFUU5VIKqiPI33qTeGfBF7m7hsBzGxvogUQL6vjua919w1l988DbnT3L5vZeeH+p4iWMDkw/BwJXAwcaWZ7AecDfUS1nZVmtszdN9cZe0OUOETSRU1V6VNv5/hGohVxS7aFsok4Fbg83L4cOK2s/AceuZWoI34e8EZgubtvCsliOXDCBK9dk9aqEkmXbYN5MgbTe7QXR1rUWqvqr8PNVcBtZvZzom/9p1LfsuoO/MrMHLjE3S8F5rr7uvD4U8DccHs+8ETZc9eEsmrllbGeA5wDsHDhwjpCG1tpAqDyhkhk/bYh8sXkRos8tXWQmVO7sdAaIMmr1VRVmuT3SPgp+Xmd5z/G3dea2b7AcjN7sPxBd/eQVCYtJKVLAfr6+iZ8ztIEQDVVicAv7lnHR674Q9JhsHifjp061pZq7cfxD5M5ubuvDb+fMbOfAUcAT5vZPHdfF5qingmHryVaD6tkQShbCxxbUf6bycQ1Hq1VJTJq3dZBAM5/88H0difXVPSi5+yZ2LVlV7GNbwuzyzPuvi3cPh74PLCMaA/zL4ffpdrLMuCjZnYlUef41pBcrge+VBp9Fc6zlJiMNlUpcYjkwoSmM/r2Y/oUDYeVSJzvhLnAz0K7ZBfwY3f/pZndQTQT/f3A48AZ4fjrgJOI+lN2AGcDuPsmM/sCcEc47vNxLuk+MqpKEwBFyIead1dW/QsyKrbE4e6PAoeMUb4ROG6McgfOrXKuy6hv6O+kjUwAVI1DZKTG0Z2pdwCm7A7qnQD4j2a2h5l1m9mNZrbezN4Vd3BJ0DwOkVH5gpOx0SZcEah/Hsfx7t4PnAysBg4gmhTYcUabqpQ4RHLFIl1Z1TZkZ/W+I0pNWm8CfuLuW2OKJ3GaACgyKl9wulXbkAr19nFcG+ZgDAAfMbM5wGB8YSVntKkq4UBEUiBfKNLdpRqH7Kyud4S7nwe8Euhz9xzRToCnxhlYUkp9gOrjEIFc0elSx7hUqLXkyOvc/SYze2tZWfkh/xFXYEnRBECRUbl8kW4NxZUKtZqqXgPcBLx5jMecDkwcmgAoMipfdM3hkF3UWnLk/PD77NaEkzwNxxUZlSsUNYdDdqF3RIWsto4VGZEvqMYhu1LiqDDSOa4+DhHyxaI6x2UXNd8RZpYxs1e2Ipg0UFOVyKhcwdU5LruomTjcvQh8swWxpMLIBEAlDhHyxSLdmjkuFep9R9xoZn9uu8EWXFpyRGRUTn0cMoZ6E8eHgJ8Aw2bWb2bbzKw/xrgSk9XWsSIjcgXVOGRXdS054u4zax/VGUrL8mgCoEgYVaW1qqRCvcuqm5m9y8z+Ltzfz8yOiDe0ZGgCoMioXEGr48qu6n1H/BvwCuAd4f6zdGiHeVajqkRG5IsaVSW7qnd13CPd/TAzuxPA3TebWU+McSUmowmAIiPyBc3jkF3V+47ImVmWaH0qwrLqHfnRqtVxRUZpVJWMpd7EcRHwM2BfM7sA+B3wpdiiSlBWw3FFRuSLRXrUxyEV6h1VdYWZrQSOAww4zd0fiDWyhIw0VanGIaK1qmRMdSUOM/sCcDPwfXffHm9IyRoZVaUahwjD6uOQMdT7jngUOAtYYWa3m9nXzKwjdwCEaBKg8oZI2HNcNQ6pUO/Wsd9z9/cBrwV+BJwefnekjKmpSgTC6rjq45AK9TZVfQc4GHga+G/gbcAfYowrURkzNVXJbs/do9VxNXNcKtT7VWJvIAtsATYBG9w9H1tUCYuaqpQ4ZPdWWnZHNQ6pVO+oqrcAmNkLgTcCvzazrLsviDO4pGTMNAGwAYWic9KF/83/btox4XM8Z9ZUfvHxV9PTpQ+ptMiPJA7VOGRn9TZVnQy8Cng1MAu4iajJqiNlTBMAG/HsUJ6Hnt7GK/bfm5cs2LPh59//ZD+/W7WB/sEc+8yYEkOEMhG58O1J8zikUr1LjpxAlCgudPcnY4wnFdRU1ZihfAGAN710Hu866rkNP/+qO/6X363awHBe1bw0yRdCjUN9HFKh3qaqj5rZXOBwMzsMuN3dn4k3tORETVVKHPUaykUf+FMm2MxU2u8hp/bBVCn9PdTHIZXqXVb9dOB2omG4ZwC3mdnb4gwsSRnVOBoyFGoKU7qzE3p+qV9DNY50yYUvT5rHIZXqbar6LHB4qZYRFjm8AbgmrsCSlDWjqM+wupWaqiZa4yi1oQ8pcaRKvlTj0MxxqVDvOyJT0TS1sd7nmlnWzO40s2vD/e+b2WNmdlf4OTSUm5ldZGarzOzu0CRWOscSM3s4/CypM+YJ0wTAxpQ+8Cc6Iqq7S01VaZQraFSVjK3eGscvzex64N/D/bcD19X53I8DDwB7lJV90t0raysnAgeGnyOBi4EjzWwv4Hygj2hZ95VmtszdN9d5/YZlMpoA2IhSE9NEaxxTsmqqSqN8qHZrz3GpVO+SI58ELgVeGn4udfdP1XqemS0A3gR8p47LnAr8wCO3ArPMbB7RvJHl7r4pJIvlRKO8YqNRVY0Z6ePommQfh2ocqaJRVVJNvTUO3P2nwE8bPP+/AH8LzKwov8DM/h64ETjP3YeA+cATZcesCWXVyndiZucA5wAsXLiwwTB3ljGjoLxRt6HcJPs41DmeSqWmw25NypQK474jzGybmfWP8bPNzPprPPdk4Bl3X1nx0FLgBcDhwF5AzZpLPdz9Unfvc/e+OXPmTOpcGdOy6o0o1Timdms4bicpzRzvVue4VBi3xuHulTWFRhwNnGJmJwFTgT3M7Efu/q7w+JCZfQ/4RLi/Ftiv7PkLQtla4NiK8t9MIq6a1FTVmGY1VWlUVbrk8qV5HGqqkp3VqnHMqHWCase4+1J3X+Dui4AzgZvc/V2h3wIzM+A04N7wlGXAe8LoqqOAre6+DrgeO
N7MZpvZbOD4UBYbTQBsTLOG46qpKl00j0OqqdXH8XMzuwv4ObCytPufme1PtDfHGcC3aWw+xxVhHogBdwEfDuXXAScBq4AdwNkA7r4p7EB4Rzju8+6+qYHrNSxjqnE0ojRzfKLDcdU5nk6axyHV1GqqOi40NX0IODp8488DDwH/BSxx96dqXcTdf0NoXnL311U5xoFzqzx2GXBZres0i3YAbEzpA3/CTVWlPg7VOFJF8zikmpqjqtz9Ouqfs9ERMoaaqhqgGkdn0jwOqUbviDForarGDOULdGeN7ATH+2s4bjppHodUo8Qxhqz6OBoylC9OuJkKRj+YlDjSZWQeh2ocUkHviDFoVFVjhvKFCY+oAjAzeroyDGvWZaqMzONQ4pAKtYbjvq7s9uKKx94aV1BJy2TQ6rgNGMoVJ5U4IFqvSjWOdBndj0NNVbKzWv/bv1p2u3K5kc82OZbU0ATAxgzli5PeKzyqcRSaFJE0Q2lUlWaOS6Va7wircnus+x0jWqtKiaNeUVPVxPs4IGoOUY0jXfKqcUgVtRKHV7k91v2OkTEtq96I4XyRKRNcp6qkpysz8g1X0qHUx6HEIZVqzePY38yWEdUuSrcJ9xdXf1p70wTAxkSjqprQVKUaR6qMjKpSU5VUqJU4Ti27/dWKxyrvdwxNAGzMUL5I7wT3Gy/pzma0yGHK5AtOxqJ5TSLlai058tvy+2bWDbwYWFuxlWxH0VpVjRnKF5jV2z2pc0RNVUocaZIrFjUUV8ZUazjut8zsReH2nsAfgR8Ad5rZWS2ILxEaVdWYodzk+zg0HDd98gVX4pAx1XpXvMrd7wu3zwb+5O4vAV5OtLNfR9IEwMZMduY4lIbjKnGkSa5QVMe4jKlW4hguu/0G4D8B6lkRt51l1DnekKF8YWSF24nqzppqHCmTK7iWVJcx1XpXbDGzk83sZUQ7+v0SwMy6gN64g0tKVp3jDWnecFwljjTJF4raxEnGVGtU1YeAi4A/A/6qrKZxHNF+HB1JneONac5w3KxqHCmTL7qaqmRMtUZV/Qk4YYzy64l5+9YkZTKaANiIZvRxdGdNw3FTJlcoag6HjGncxGFmF433uLt/rLnhpENWS47ULV8oUij65Bc5VOd46uQLqnHI2Go1VX0YuBe4GniSDl6fqpw6x+tXqiVMuo8jqz6OtMlrHodUUStxzANOB95OtNf4VcA17r4l7sCSlDHUVFWnkcTRjOG4aqpKlVzB6VLikDGM+65w943u/i13fy3RPI5ZwP1m9u6WRJeQbEZNVfUaykdLoU92WXWtjps+UR/HbtHIIA2qVeMAwMwOA84imsvxC2BlnEElTavj1m94pMYx+eG4+aJTLLrWRkoJ9XFINbU6xz8PvAl4ALgSWOru+VYElqRoOG7SUbSHZjZVAQwXikzNTO5c0hy5YpEZ3XV9t5TdTK13xWeBx4BDws+XzAyiTnJ395fGG14yshlNAKzXUK5JNY5sWeKY5Eq70hz5gtOl2p+MoVbi6Ng9N8aT0SKHdSv1cTRj5jigfo4UidaqUue47KrWBMDHxyo3swxRn8eYj7e7rEWT0fq+eEOs13npgj257L2Hx3qNuDWtqSqrxJE2+aJryREZU60+jj2Ac4H5wDJgOfBR4G+Illi/Iu4Ak/CWl82nfzAXaz/H3Wu2cPOf1sd3gRYZqXE0oXMc0FyOFInWqlKNQ3ZVq6nqh8Bm4PfAB4BPE/VvnObud8UcW2IOnDuTL572kliv8Y2bHubetf0M54uTHsqapFIfRzOG44JqHGmi1XGlmpp7jof9NzCz7wDrgIXuPhh7ZB2utyd66QeGC+2dOJo4HLf8fJK8nFbHlSpq/W/PlW64ewFYo6TRHNN6oj6BHbn2Ht08Mo9jkiOh1FSVPlodV6qpVeM4xMz6w20DesP90nDcPWKNroONJI7hQsKRTE6z+jimqKkqdXKFopqqZEy1RlVpQH1MesM39IG2TxzNaarqLpsAKBOzafswg7nmvZ/UVCXVxD4t1MyywApgrbufbGaLiWah7020dMm73X3YzKYAPyDaz3wj8HZ3Xx3OsRR4P1AAPhb2A2lr00IfR/vXODQcNw3ue3Irb7rod00/b6kvTqRcK94VHydasqTUrPUV4J/d/Uoz+xZRQrg4/N7s7geY2ZnhuLeb2cHAmcCLgOcAN5jZQaHPpW31jjRVtXcfx1CugBmT/maqPo7JWbcl6nr82OsOYP7s5uzqbGa8/oVzm3Iu6SyxJg4zW0C01tUFwF9btF7J64B3hEMuBz5HlDhODbcBrgG+EY4/FbjS3YeAx8xsFXAE0RDhtlXq4+iEpqqebIawFM2ElYbjalTVxAyEJqpTDn0OB+w7M+FopNPFXeP4F+BvgdI7eW9gS9lCiWuIJhcSfj8B4O55M9sajp8P3Fp2zvLnjDCzc4BzABYuXNjcf0UMkugcf7p/kHyTZzVu3D486f4NGO0jUVPVxJQSh9b5klaILXGY2cnAM+6+0syOjes6Je5+KXApQF9fX+oXmhppqmpiZ+Z4frpyDX/zkz/Gcu75sybfNNKjzvFJGVLikBaKs8ZxNHCKmZ0ETCXq47gQmGVmXaHWsQBYG45fC+wHrDGzLmBPok7yUnlJ+XPa1rSRCYCt6eN4fON2zODLb30J1uQdgA/6s8k3jZQ6x3OqcUxIqcbRq8QhLRBb4nD3pcBSgFDj+IS7v9PMfgK8jWhk1RLg5+Epy8L934fHb3J3N7NlwI/N7OtEneMHArfHFXerlP6Dt6qpqn8wz8wpXbz98HQ242k47uQMDEevm2oc0gpJjLX7FHClmX0RuBP4bij/LvDD0Pm9iWgkFe5+n5ldDdxPtO/5ue0+ogqi7WmndGVa1jneP5hj5tTullxrIjQcd3IG8wV6shmy2j9DWqAlicPdfwP8Jtx+lGhUVOUxg8DpVZ5/AdHIrI7S25NtXY1jIM8evelNHKXhvEocEzMwXGDqJPdEEamX3mkJmtbdusSxbTDHzKnpncxlZvRkMwwXUj+uIZUGc4WRARcicVPiSFBvT7apS0SMp38wzx4pbqqCaGSVahwTM5grqH9DWkaJI0HTerpaNnN822COPXrTW+OAkDgKbd99lYiBXEEjqqRl0v1J0uFa28eRS3+NI5vh949s5NM/u2eXx/qeO5u3HrYggajaw0CuqBqHtIwSR4Km9WTZtH049usUi86zQ3n2SHEfB8ArD9ibm/+0gV/d9/RO5dsGc9z0wDNKHOMYVOe4tFC6P0k63LSeLGs2x1/j2D6cp+ikejguwNfPOHTM8i9cez9X3fFEi6NpL4P5AntP70k6DNlN6CtKgnq7u1oyj6N/MOpHSXsfRzUzpnTx7FCeYpPX2eokA8MaVSWto8SRoGk92ZZ0jm8bjHYATnuNo5oZU6KEt73Nl6CP00CuwNRJ7okiUi8ljgRNa1HneP9AqHG0a+IIfTPbhzTiqprBXJGpqnFIiyhxJKi3J8tQvkgh5iaYUo2jnZuqAJ4dyiUcSXoNajiutJASR4JGNnOKeRJgf7s3VYUax7ZBNVVVM5DT
qCppHb3TEtQ7su94vB+Io01V7VnjmDlS41DiGEuuENVaVeOQVlHiSNC07tZsH9v2neMh4T2rGseYtPuftJoSR4JatX1s/2Ceqd2ZkV322k2pj2ObahxjGhxW4pDWas9Pkg7R26LEsS3le3HUMtI5rhrHmAZz0cKQaqqSVlHiSNDo9rEx1zgG0r/cyHimq49jXCPbxmo4rrSIEkeCWjmqKs2bONXSnc0wtTujxFHFaB+H/jtLa+idlqDRpqqYR1UN5tu6qQpgxpRuDcetYkB9HNJiShwJGqlxxN3HMZBr66YqgJlTu1TjqGIwH5qqlDikRZQ4EjStuzSPI/5RVe1f4+hiuxLHmDSqSlqtvb+GtrlSU9UtqzbEep2tA8Ntu9xIyYwpXRpVVcVI57gSh7RIe3+atLmergzzZ/Vy44PPcOODz8R6refNmRHr+eM2Y2oXazYPJB1HosLAAAAL/ElEQVRGKo0Mx9WoKmkRJY6E/foTx8bex5HJtO+s8ZJoTw4tcjiWkVFVWlZdWkSJI2E9Xe07o7uV1FRV3WApcfTofSStoXeatIUZYVSVu3YBrDSYK5Ax6Mnqv7O0ht5p0hZmTOkiV3CG8sWkQ0mdgeECU7uzmFnSochuQolD2sLMqVp2pJoBbeIkLabEIW1BCx1WN5grag6HtJQSh7SFGVrosKpB7f4nLaZ3m7QFJY7qBnIFzeGQllLikLagXQCrG1Qfh7SYEoe0BdU4qhvIFdTHIS0V2wRAM5sK3AxMCde5xt3PN7PvA68BtoZD3+vud1k0lvBC4CRgRyj/QzjXEuCz4fgvuvvlccUt6VSqcXzpugf45q9XJRxNujy+aQevPnBO0mHIbiTOmeNDwOvc/Vkz6wZ+Z2a/CI990t2vqTj+RODA8HMkcDFwpJntBZwP9AEOrDSzZe6+OcbYJWXmzJjCB45ZzJNbtV5VpQPnzuD0l++XdBiyG4ktcXg0xffZcLc7/Iw37fdU4Afhebea2SwzmwccCyx3900AZrYcOAH497hil/QxMz578sFJhyEixNzHYWZZM7sLeIbow/+28NAFZna3mf2zmU0JZfOBJ8qeviaUVSuvvNY5ZrbCzFasX7++6f8WERGJxJo43L3g7ocCC4AjzOzFwFLgBcDhwF7Ap5p0rUvdvc/d++bMUXuviEhcWjKqyt23AL8GTnD3dR4ZAr4HHBEOWwuUN9QuCGXVykVEJAGxJQ4zm2Nms8LtXuANwIOh34Iwiuo04N7wlGXAeyxyFLDV3dcB1wPHm9lsM5sNHB/KREQkAXGOqpoHXG5mWaIEdbW7X2tmN5nZHMCAu4APh+OvIxqKu4poOO7ZAO6+ycy+ANwRjvt8qaNcRERazzpxf4O+vj5fsWJF0mGIiLQVM1vp7n21jtPMcRERaYgSh4iINKQjm6rMbD3weNJxAPsAG5IOogrFNjGKbeLSHJ9iizzX3WvOZ+jIxJEWZrainvbCJCi2iVFsE5fm+BRbY9RUJSIiDVHiEBGRhihxxOvSpAMYh2KbGMU2cWmOT7E1QH0cIiLSENU4RESkMe6un3F+iBZY/DVwP3Af8PFQ/jmixRbvCj8nlT1nKdHSKQ8BbywrPyGUrQLOKytfDNwWyq8CehqMcTVwT4hjRSjbC1gOPBx+zw7lBlwUrnU3cFjZeZaE4x8GlpSVvzycf1V4rtUZ1/PLXp+7gH7gr5J67YDLiJb4v7esLPbXqdo16ojtn4AHw/V/BswK5YuAgbLX71sTjWG8f2eN2GL/GxLtHnpVKL8NWFRnbFeVxbUauKvVrxvVPzdS8X6b9Odis0/YaT9Ea24dFm7PBP4EHBz+43xijOMPBv4Y3vSLgUeAbPh5BNgf6AnHHByeczVwZrj9LeAjDca4GtinouwfS/85gfOAr4TbJwG/CG/Uo4Dbyt5sj4bfs8Pt0pv69nCsheeeOIHXMQs8BTw3qdcOeDVwGDt/yMT+OlW7Rh2xHQ90hdtfKYttUflxFedpKIZq/846Yov9bwj8BeHDHTgTuKqe2Coe/xrw961+3aj+uZGK99tkfxL/YG63H+DnRCv9VvuPsxRYWnb/euAV4ef6yuPCH30Dox8QOx1XZ0yr2TVxPATMK3sTPxRuXwKcVXkccBZwSVn5JaFsHvBgWflOxzUQ4/HALeF2Yq8dFR8erXidql2jVmwVj70FuGK84yYSQ7V/Zx2vW+x/w9Jzw+2ucNwutd1xXg8j2gTuwKRet7LHS58bqXm/TeZHfRwNMLNFwMuIqs0AHw07GV4WlnyHxncy3BvY4u75ivJGOPArM1tpZueEsrkeLUsP0Tf9uROMb364XVneqDPZebvftLx2rXidql2jEe8j+lZZstjM7jSz35rZq8pibjSGunbYrCLuv+HIc8LjW8Px9XoV8LS7P1xW1vLXreJzo13eb+NS4qiTmc0Afgr8lbv3AxcDzwMOBdYRVYmTcoy7HwacCJxrZq8uf9Cjrx6eSGSAmfUApwA/CUVpeu1GtOJ1msg1zOwzQB64IhStAxa6+8uAvwZ+bGZ7xBnDGFL5N6xwFjt/WWn56zbG58akzteouK6hxFEHM+sm+uNf4e7/AeDuT3u0NW4R+DYT38lwIzDLzLoqyuvm7mvD72eIOlGPAJ4u2zRrHlEH4kTiWxtuV5Y34kTgD+7+dIgzNa8drXmdql2jJjN7L3Ay8M7wIYC7D7n7xnB7JVHfwUETjGFCO2y26G848pzw+J7h+JrC8W8l6igvxdzS122sz40JnK+l77d6KXHUEHYq/C7wgLt/vax8Xtlhb2HnnQzPNLMpZrYYOJCoE+sO4EAzWxy+gZ8JLAsfBr8G3haev4SoPbTe+Kab2czSbaK+hHtDHEvGOGdDOy2Gx/rN7KjwWrynkfiCnb75peW1K7tm3K9TtWuMy8xOAP4WOMXdd5SVzwkbpGFm+xO9To9OMIZq/85asbXib1ge89uAm0rJsw6vJ+oDGGnOaeXrVu1zYwLna9n7rSHN7jTptB/gGKKq3t2UDT0Efkg0FO7u8IeaV/aczxB9m3mIshFI4Xl/Co99pqx8f6L/XKuImnOmNBDf/kQjVP5INOzvM6F8b+BGoiF5NwB7hXIDvhliuAfoKzvX+0IMq4Czy8r7iD4YHgG+QZ3DccNzpxN9S9yzrCyR144oea0DckRtwu9vxetU7Rp1xLaKqH17p+GjwJ+Hv/VdwB+AN080hvH+nTVii/1vCEwN91eFx/evJ7ZQ/n3gwxXHtux1o/rnRireb5P90cxxERFpiJqqRESkIUocIiLSECUOERFpiBKHiIg0RIlDREQaosQhHcXM3Mx+VHa/y8zWm9m14f4pZnZeuP05M/tEUrE2wsw+Pcnnf8fMDq5xzGm1jhEBJQ7pPNuBF5tZb7j/Bspm9Lr7Mnf/ciKRTc6kEoe7f8Dd769x2GlEK7iKjEuJQzrRdcCbwu3KWevvNbNvVD7BzJ5nZr+0aKHI/zazF4TyN5vZbRYtjHeDmc0N5XPMbLmZ3Re+zT9uZvuEx95lZreb2V1mdklptnLF9Q43s/8xsz+GY2dWxmZ
m15rZsWb2ZaA3nO8KM1tkZg+G2w+Y2TVmNi0857gQ6z0WLT44JZT/xsz6wu1nzeyCcO1bzWyumb2SaD2xfwrXeV4z/hDSmZQ4pBNdSbTsxVTgpYyuZjyeS4G/dPeXA58A/i2U/w44yqOF8a4kWgIE4HyiJTBeBFwDLAQwsxcCbweOdvdDgQLwzvILhSU3riLa3OcQouUxBqoF5u7nAQPufqi7l871fODf3P2FRBtk/UX4934feLu7v4RoKfKPjHHK6cCt4do3Ax909/8hmgH+yXCdR2q+YrLb6qp9iEh7cfe7LVrK+iyi2se4LFrB9JXAT6Jlf4BoIyKIFo+7KqzN1AM8FsqPIVqjCXf/pZltDuXHEe3Mdkc4Vy+7LjL3fGCdu98Rnt8f4mjkn/mEu98Sbv8I+BjRbm+PufufQvnlwLnAv1Q8dxi4NtxeSdScJ1I3JQ7pVMuArwLHUnsPhwzRnhCHjvHYvwJfd/dlZnYs0QZG4zHgcndf2lC0kTw7twJMHefYyrWCGlk7KOejaw0V0OeANEhNVdKpLgP+wd3vqXVg+Mb/mJmdDtHKpmZ2SHh4T0Y715eUPe0W4Ixw/PFE23pCtLjc28xs3/DYXmb23IpLPgTMM7PDwzEzLVoGfDVwqJllzGw/RpcqB8hZtEx3yUIze0W4/Q6iJrWHgEVmdkAofzfw21r//jLbiLY5FRmXEod0JHdf4+4XNfCUdwLvN7PSKsOnhvLPETVhrSTaurTkH4iWu74XOJ1op7VtYeTSZ4l2ZLybqPmofAly3H2YqB/kX8P1lhPVLm4hagq7H7iIaAXXkkuBu82stJnTQ0Sbdj1AlLQudvdB4OwQ7z1AkWgP73pdCXwydK6rc1yq0uq4IhMQRisV3D0fvvlfXKWpK45rLwKudfcXt+J6IpXUtikyMQuBq80sQ9TZ/MGE4xFpGdU4RESkIerjEBGRhihxiIhIQ5Q4RESkIUocIiLSECUOERFpiBKHiIg05P8DWPYZsFY+1pMAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "# plot mileage cutpoint (x-axis) versus RMSE (y-axis)\n", "plt.plot(mileage_range, RMSE)\n", "plt.xlabel('Mileage cutpoint')\n", "plt.ylabel('RMSE (lower is better)')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Recap:** Before every split, this process is repeated for every feature, and the feature and cutpoint that produces the lowest MSE is chosen." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Building a regression tree in scikit-learn" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [], "source": [ "# encode car as 0 and truck as 1\n", "train['vtype'] = train.vtype.map({'car':0, 'truck':1})" ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [], "source": [ "# define X and y\n", "feature_cols = ['year', 'miles', 'doors', 'vtype']\n", "X = train[feature_cols]\n", "y = train.price" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "DecisionTreeRegressor(criterion='mse', max_depth=None, max_features=None,\n", " max_leaf_nodes=None, min_impurity_decrease=0.0,\n", " min_impurity_split=None, min_samples_leaf=1,\n", " min_samples_split=2, min_weight_fraction_leaf=0.0,\n", " presort=False, random_state=1, splitter='best')" ] }, "execution_count": 33, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# instantiate a DecisionTreeRegressor (with random_state=1)\n", "from sklearn.tree import DecisionTreeRegressor\n", "treereg = DecisionTreeRegressor(random_state=1)\n", "treereg" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "3107.1428571428573" ] }, "execution_count": 34, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# use leave-one-out cross-validation (LOOCV) to estimate the RMSE for this model\n", "import numpy as np\n", "from sklearn.model_selection import cross_val_score\n", "scores = cross_val_score(treereg, X, y, cv=14, scoring='neg_mean_squared_error')\n", "np.mean(np.sqrt(-scores))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## What happens when we grow a tree too deep?\n", "\n", "- Left: Regression tree for Salary **grown deeper**\n", "- Right: Comparison of the **training, testing, and cross-validation errors** for trees with different numbers of leaves" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Salary tree grown deep](https://github.com/justmarkham/DAT8/raw/226791169b1cc6df8e8845c12e34e748d5ffaa85/notebooks/images/salary_tree_deep.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The **training error** continues to go down as the tree size increases (due to overfitting), but the lowest **cross-validation error** occurs for a tree with 3 leaves." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Tuning a regression tree\n", "\n", "Let's try to reduce the RMSE by tuning the **max_depth** parameter:" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "4050.1443001443" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# try different values one-by-one\n", "treereg = DecisionTreeRegressor(max_depth=1, random_state=1)\n", "scores = cross_val_score(treereg, X, y, cv=14, scoring='neg_mean_squared_error')\n", "np.mean(np.sqrt(-scores))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or, we could write a loop to try a range of values:" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [], "source": [ "# list of values to try\n", "max_depth_range = range(1, 8)\n", "\n", "# list to store the average RMSE for each value of max_depth\n", "RMSE_scores = []\n", "\n", "# use LOOCV with each value of max_depth\n", "for depth in max_depth_range:\n", " treereg = DecisionTreeRegressor(max_depth=depth, random_state=1)\n", " MSE_scores = cross_val_score(treereg, X, y, cv=14, scoring='neg_mean_squared_error')\n", " RMSE_scores.append(np.mean(np.sqrt(-MSE_scores)))" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Text(0, 0.5, 'RMSE (lower is better)')" ] }, "execution_count": 37, "metadata": {}, "output_type": "execute_result" }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAY4AAAELCAYAAADOeWEXAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAIABJREFUeJzt3Xl8FfW9//HXJwthTUAIEJKwCYKIbDnFHStUxRUXQl2rVou3tdZWf7dX2956a2/Xa2urrWuttXUFrIpKtdYNd0xYZFcENYkga0LYQpbP748z2IiQcxJyMsnJ+/l4zCNzvmfmzPvog3wy8535fs3dERERiVdK2AFERKRtUeEQEZFGUeEQEZFGUeEQEZFGUeEQEZFGUeEQEZFGUeEQEZFGUeEQEZFGUeEQEZFGSQs7QCL06tXLBw4cGHYMEZE2pbi4eKO7Z8faLikLx8CBAykqKgo7hohIm2JmH8WznS5ViYhIo6hwiIhIo6hwiIhIo6hwiIhIo6hwiIhIo6hwiIhIo6hwiIhIo6hw1LN5+27umbuaip3VYUcREWm1VDjqKduyk5/NWc5Tiz4JO4qISKulwlHPyNxMhvftxsyikrCjiIi0Wioc9ZgZhZF8FpVWsHJdZdhxRERaJRWOvZw1ph/pqaazDhGR/VDh2EvPrhlMGt6HxxeUUV1bF3YcEZFWR4VjH6Z9KY9N23fz4or1YUcREWl1El44zCzVzBaY2dPB60Fm9raZrTKzR82sQ9CeEbxeFbw/sN5n3BC0rzSzkxOdecLQbHp3y9DlKhGRfWiJM45rgOX1Xv8KuMXdhwBbgMuD9suBLUH7LcF2mNkI4DzgMGAycLuZpSYycFpqCueMy+OllRtYX7krkYcSEWlzElo4zCwPOA34U/DagInArGCT+4GzgvUpwWuC9ycF208BHnH3KndfA6wCxicyN0BhJI/aOufx+WWJPpSISJuS6DOO3wHfB/b0MvcEyt29JnhdCuQG67lACUDwfkWw/Wft+9gnYQ7O7kpkQA9mFJXg7ok+nIhIm5GwwmFmpwPr3b04UcfY63jTzazIzIo2bNjQLJ9ZGMnjgw3bWVBS3iyfJyKSDBJ5xnEMcKaZfQg8QvQS1e+B7ma2Z67zPGDPtaAyIB8geD8L2FS/fR/7fMbd73b3iLtHsrNjzrUel9NG9aNTeqo6yUVE6klY4XD3G9w9z90HEu3cftHdLwReAqYGm10CPBmszw5eE7z/okevEc0GzgvuuhoEDAXmJSp3fV0z0jj18ByeWrSWHbtrYu8gItIOhPEcx38B15rZKqJ9GPcG7fcCPYP2a4HrAdx9KTADWAY8C1zl7rUtFXZaJI9tVTU8u2RdSx1SRKRVs2Ts+I1EIl5UVNQsn+XunHDzy/TN6sgj049qls8UEWmNzKzY3SOxttOT4zGYGVML8nhr9WY+3rQj7DgiIqFT4YjDuQV5mMGsYnWSi4iocMQhJ6sTE4ZmM6u4lNq65Lu0JyLSGCoccSqM5PFJxS5eX7Ux7CgiIqFS4YjTiSP60L1zOjOLS8OOIiISKhWOOGWkpXLWmFyeW7qO8h27w44jIhIaFY5GmFqQx+6aOmYv+iTsKCIioVHhaISRuVmMyMlkZpEuV4lI+6XC0UjTInksLqtg+dqtYUcREQmFCkcjTRmTS4fUFJ11iEi7pcLRSD26dODEEX14fEEpu2vqYu8gIpJkVDiaoDCSx5Yd1byw/NOwo4iItDgVjiY4bmg2fTM76pkOEWmXVDiaIDXFOLcgl5dXrufTrbvCjiMi0qJUOJqosCCfOofH5uusQ0TaFxWOJhrYqwvjBx7ErKJSknFOExGR/VHhOACFkTxWb9xO8Udbwo4iItJiVDgOwKmH59C5QyozijRPh4i0HyocB6BLRhqnj8rhmXfXsr2qJuw4IiItQoXjAE2L5LN9dy1zFq8NO4qISItQ4ThABQN6MLhXFw1BIiLthgrHATIzpkbymPfhZtZs3B52HBGRhFPhaAbnjs
[... remainder of base64-encoded PNG output omitted: line plot of RMSE (y-axis) versus max_depth (x-axis) ...]\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "# plot max_depth (x-axis) versus RMSE (y-axis)\n", "plt.plot(max_depth_range, RMSE_scores)\n", "plt.xlabel('max_depth')\n", "plt.ylabel('RMSE (lower is better)')" ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "DecisionTreeRegressor(criterion='mse', max_depth=3, max_features=None,\n", " max_leaf_nodes=None, min_impurity_decrease=0.0,\n", " min_impurity_split=None, min_samples_leaf=1,\n", " min_samples_split=2, min_weight_fraction_leaf=0.0,\n", " presort=False, random_state=1, splitter='best')" ] }, "execution_count": 38, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# max_depth=3 was best, so fit a tree using that parameter\n", "treereg = DecisionTreeRegressor(max_depth=3, random_state=1)\n", "treereg.fit(X, y)" ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
featureimportance
0year0.798744
1miles0.201256
2doors0.000000
3vtype0.000000
\n", "
" ], "text/plain": [ " feature importance\n", "0 year 0.798744\n", "1 miles 0.201256\n", "2 doors 0.000000\n", "3 vtype 0.000000" ] }, "execution_count": 39, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# \"Gini importance\" of each feature: the (normalized) total reduction of error brought by that feature\n", "pd.DataFrame({'feature':feature_cols, 'importance':treereg.feature_importances_})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Creating a tree diagram" ] }, { "cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [], "source": [ "# create a Graphviz file\n", "from sklearn.tree import export_graphviz\n", "export_graphviz(treereg, out_file='tree_vehicles.dot', feature_names=feature_cols)\n", "\n", "# At the command line, run this to convert to PNG:\n", "# dot -Tpng tree_vehicles.dot -o tree_vehicles.png" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Tree for vehicle data](https://github.com/justmarkham/DAT8/raw/226791169b1cc6df8e8845c12e34e748d5ffaa85/notebooks/images/tree_vehicles.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Reading the internal nodes:\n", "\n", "- **samples:** number of observations in that node before splitting\n", "- **mse:** MSE calculated by comparing the actual response values in that node against the mean response value in that node\n", "- **rule:** rule used to split that node (go left if true, go right if false)\n", "\n", "Reading the leaves:\n", "\n", "- **samples:** number of observations in that node\n", "- **value:** mean response value in that node\n", "- **mse:** MSE calculated by comparing the actual response values in that node against \"value\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Making predictions for the testing data" ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
priceyearmilesdoorsvtype
03000200313000041
1600020058250040
21200020106000020
\n", "
" ], "text/plain": [ " price year miles doors vtype\n", "0 3000 2003 130000 4 1\n", "1 6000 2005 82500 4 0\n", "2 12000 2010 60000 2 0" ] }, "execution_count": 41, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# read the testing data\n", "url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/vehicles_test.csv'\n", "test = pd.read_csv(url)\n", "test['vtype'] = test.vtype.map({'car':0, 'truck':1})\n", "test" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Question:** Using the tree diagram above, what predictions will the model make for each observation?" ] }, { "cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ 4000., 5000., 13500.])" ] }, "execution_count": 42, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# use fitted model to make predictions on testing data\n", "X_test = test[feature_cols]\n", "y_test = test.price\n", "y_pred = treereg.predict(X_test)\n", "y_pred" ] }, { "cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "1190.2380714238084" ] }, "execution_count": 43, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# calculate RMSE\n", "from sklearn.metrics import mean_squared_error\n", "np.sqrt(mean_squared_error(y_test, y_pred))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Part 2: Classification trees\n", "\n", "**Example:** Predict whether Barack Obama or Hillary Clinton will win the Democratic primary in a particular county in 2008:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Obama-Clinton decision tree](https://github.com/justmarkham/DAT8/raw/226791169b1cc6df8e8845c12e34e748d5ffaa85/notebooks/images/obama_clinton_tree.jpg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Questions:**\n", "\n", "- What are the observations? How many observations are there?\n", "- What is the response variable?\n", "- What are the features?\n", "- What is the most predictive feature?\n", "- Why does the tree split on high school graduation rate twice in a row?\n", "- What is the class prediction for the following county: 15% African-American, 90% high school graduation rate, located in the South, high poverty, high population density?\n", "- What is the predicted probability for that same county?" 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Comparing regression trees and classification trees\n", "\n", "|regression trees|classification trees|\n", "|---|---|\n", "|predict a continuous response|predict a categorical response|\n", "|predict using mean response of each leaf|predict using most commonly occuring class of each leaf|\n", "|splits are chosen to minimize MSE|splits are chosen to minimize Gini index (discussed below)|" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Splitting criteria for classification trees\n", "\n", "Common options for the splitting criteria:\n", "\n", "- **classification error rate:** fraction of training observations in a region that don't belong to the most common class\n", "- **Gini index:** measure of total variance across classes in a region" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example of classification error rate\n", "\n", "Pretend we are predicting whether someone buys an iPhone or an Android:\n", "\n", "- At a particular node, there are **25 observations** (phone buyers), of whom **10 bought iPhones and 15 bought Androids**.\n", "- Since the majority class is **Android**, that's our prediction for all 25 observations, and thus the classification error rate is **10/25 = 40%**.\n", "\n", "Our goal in making splits is to **reduce the classification error rate**. Let's try splitting on gender:\n", "\n", "- **Males:** 2 iPhones and 12 Androids, thus the predicted class is Android\n", "- **Females:** 8 iPhones and 3 Androids, thus the predicted class is iPhone\n", "- Classification error rate after this split would be **5/25 = 20%**\n", "\n", "Compare that with a split on age:\n", "\n", "- **30 or younger:** 4 iPhones and 8 Androids, thus the predicted class is Android\n", "- **31 or older:** 6 iPhones and 7 Androids, thus the predicted class is Android\n", "- Classification error rate after this split would be **10/25 = 40%**\n", "\n", "The decision tree algorithm will try **every possible split across all features**, and choose the split that **reduces the error rate the most.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example of Gini index\n", "\n", "Calculate the Gini index before making a split:\n", "\n", "$$1 - \\left(\\frac {iPhone} {Total}\\right)^2 - \\left(\\frac {Android} {Total}\\right)^2 = 1 - \\left(\\frac {10} {25}\\right)^2 - \\left(\\frac {15} {25}\\right)^2 = 0.48$$\n", "\n", "- The **maximum value** of the Gini index is 0.5, and occurs when the classes are perfectly balanced in a node.\n", "- The **minimum value** of the Gini index is 0, and occurs when there is only one class represented in a node.\n", "- A node with a lower Gini index is said to be more \"pure\".\n", "\n", "Evaluating the split on **gender** using Gini index:\n", "\n", "$$\\text{Males: } 1 - \\left(\\frac {2} {14}\\right)^2 - \\left(\\frac {12} {14}\\right)^2 = 0.24$$\n", "$$\\text{Females: } 1 - \\left(\\frac {8} {11}\\right)^2 - \\left(\\frac {3} {11}\\right)^2 = 0.40$$\n", "$$\\text{Weighted Average: } 0.24 \\left(\\frac {14} {25}\\right) + 0.40 \\left(\\frac {11} {25}\\right) = 0.31$$\n", "\n", "Evaluating the split on **age** using Gini index:\n", "\n", "$$\\text{30 or younger: } 1 - \\left(\\frac {4} {12}\\right)^2 - \\left(\\frac {8} {12}\\right)^2 = 0.44$$\n", "$$\\text{31 or older: } 1 - \\left(\\frac {6} {13}\\right)^2 - \\left(\\frac {7} {13}\\right)^2 = 0.50$$\n", "$$\\text{Weighted Average: } 0.44 \\left(\\frac {12} {25}\\right) + 0.50 \\left(\\frac {13} {25}\\right) 
= 0.47$$\n", "\n", "Again, the decision tree algorithm will try **every possible split**, and will choose the split that **reduces the Gini index (and thus increases the \"node purity\") the most.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Comparing classification error rate and Gini index\n", "\n", "- The Gini index is generally preferred because it will make splits that **increase node purity**, even if that split does not change the classification error rate.\n", "- Node purity is important because we're interested in the **class proportions** in each region, since that's how we calculate the **predicted probability** of each class.\n", "- scikit-learn's default splitting criterion for classification trees is the Gini index.\n", "\n", "Note: There is another common splitting criterion called **cross-entropy**. It's numerically similar to the Gini index, but slower to compute, so it's not as popular as the Gini index." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Building a classification tree in scikit-learn" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll build a classification tree using the Titanic data:" ] }, { "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
PassengerIdSurvivedPclassNameSexAgeSibSpParchTicketFareCabinEmbarkedEmbarked_QEmbarked_S
0103Braund, Mr. Owen Harris122.010A/5 211717.2500NaNS01
1211Cumings, Mrs. John Bradley (Florence Briggs Th...038.010PC 1759971.2833C85C00
2313Heikkinen, Miss. Laina026.000STON/O2. 31012827.9250NaNS01
3411Futrelle, Mrs. Jacques Heath (Lily May Peel)035.01011380353.1000C123S01
4503Allen, Mr. William Henry135.0003734508.0500NaNS01
\n", "
" ], "text/plain": [ " PassengerId Survived Pclass \\\n", "0 1 0 3 \n", "1 2 1 1 \n", "2 3 1 3 \n", "3 4 1 1 \n", "4 5 0 3 \n", "\n", " Name Sex Age SibSp Parch \\\n", "0 Braund, Mr. Owen Harris 1 22.0 1 0 \n", "1 Cumings, Mrs. John Bradley (Florence Briggs Th... 0 38.0 1 0 \n", "2 Heikkinen, Miss. Laina 0 26.0 0 0 \n", "3 Futrelle, Mrs. Jacques Heath (Lily May Peel) 0 35.0 1 0 \n", "4 Allen, Mr. William Henry 1 35.0 0 0 \n", "\n", " Ticket Fare Cabin Embarked Embarked_Q Embarked_S \n", "0 A/5 21171 7.2500 NaN S 0 1 \n", "1 PC 17599 71.2833 C85 C 0 0 \n", "2 STON/O2. 3101282 7.9250 NaN S 0 1 \n", "3 113803 53.1000 C123 S 0 1 \n", "4 373450 8.0500 NaN S 0 1 " ] }, "execution_count": 44, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# read in the data\n", "url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/titanic.csv'\n", "titanic = pd.read_csv(url)\n", "\n", "# encode female as 0 and male as 1\n", "titanic['Sex'] = titanic.Sex.map({'female':0, 'male':1})\n", "\n", "# fill in the missing values for age with the median age\n", "titanic.Age.fillna(titanic.Age.median(), inplace=True)\n", "\n", "# create a DataFrame of dummy variables for Embarked\n", "embarked_dummies = pd.get_dummies(titanic.Embarked, prefix='Embarked')\n", "embarked_dummies.drop(embarked_dummies.columns[0], axis=1, inplace=True)\n", "\n", "# concatenate the original DataFrame and the dummy DataFrame\n", "titanic = pd.concat([titanic, embarked_dummies], axis=1)\n", "\n", "# print the updated DataFrame\n", "titanic.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- **Survived:** 0=died, 1=survived (response variable)\n", "- **Pclass:** 1=first class, 2=second class, 3=third class\n", " - What will happen if the tree splits on this feature?\n", "- **Sex:** 0=female, 1=male\n", "- **Age:** numeric value\n", "- **Embarked:** C or Q or S" ] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [], "source": [ "# define X and y\n", "feature_cols = ['Pclass', 'Sex', 'Age', 'Embarked_Q', 'Embarked_S']\n", "X = titanic[feature_cols]\n", "y = titanic.Survived" ] }, { "cell_type": "code", "execution_count": 46, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=3,\n", " max_features=None, max_leaf_nodes=None,\n", " min_impurity_decrease=0.0, min_impurity_split=None,\n", " min_samples_leaf=1, min_samples_split=2,\n", " min_weight_fraction_leaf=0.0, presort=False, random_state=1,\n", " splitter='best')" ] }, "execution_count": 46, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# fit a classification tree with max_depth=3 on all data\n", "from sklearn.tree import DecisionTreeClassifier\n", "treeclf = DecisionTreeClassifier(max_depth=3, random_state=1)\n", "treeclf.fit(X, y)" ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [], "source": [ "# create a Graphviz file\n", "export_graphviz(treeclf, out_file='tree_titanic.dot', feature_names=feature_cols)\n", "\n", "# At the command line, run this to convert to PNG:\n", "# dot -Tpng tree_titanic.dot -o tree_titanic.png" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Tree for Titanic data](https://raw.githubusercontent.com/justmarkham/DAT8/226791169b1cc6df8e8845c12e34e748d5ffaa85/notebooks/images/tree_titanic.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice the split in the bottom right: the **same class** is predicted in both of its leaves. 
That split didn't affect the **classification error rate**, though it did increase the **node purity**, which is important because it increases the accuracy of our predicted probabilities." ] }, { "cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
featureimportance
0Pclass0.242664
1Sex0.655584
2Age0.064494
3Embarked_Q0.000000
4Embarked_S0.037258
\n", "
" ], "text/plain": [ " feature importance\n", "0 Pclass 0.242664\n", "1 Sex 0.655584\n", "2 Age 0.064494\n", "3 Embarked_Q 0.000000\n", "4 Embarked_S 0.037258" ] }, "execution_count": 48, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# compute the feature importances\n", "pd.DataFrame({'feature':feature_cols, 'importance':treeclf.feature_importances_})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Part 3: Comparing decision trees with other models\n", "\n", "**Advantages of decision trees:**\n", "\n", "- Can be used for regression or classification\n", "- Can be displayed graphically\n", "- Highly interpretable\n", "- Can be specified as a series of rules, and more closely approximate human decision-making than other models\n", "- Prediction is fast\n", "- Features don't need scaling\n", "- Automatically learns feature interactions\n", "- Tends to ignore irrelevant features\n", "- Non-parametric (will outperform linear models if relationship between features and response is highly non-linear)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Trees versus linear models](https://raw.githubusercontent.com/justmarkham/DAT8/226791169b1cc6df8e8845c12e34e748d5ffaa85/notebooks/images/tree_vs_linear.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Disadvantages of decision trees:**\n", "\n", "- Performance is (generally) not competitive with the best supervised learning methods\n", "- Can easily overfit the training data (tuning is required)\n", "- Small variations in the data can result in a completely different tree (high variance)\n", "- Recursive binary splitting makes \"locally optimal\" decisions that may not result in a globally optimal tree\n", "- Doesn't tend to work well if the classes are highly unbalanced\n", "- Doesn't tend to work well with very small datasets" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.7" } }, "nbformat": 4, "nbformat_minor": 1 }