{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "
\n", " \n", " Website\n", " \n", "
\n", "\n", "Ghani, Rayid, Frauke Kreuter, Julia Lane, Adrianne Bradford, Alex Engler, Nicolas Guetta Jeanrenaud, Graham Henke, Daniela Hochfellner, Clayton Hunter, Brian Kim, Avishek Kumar, and Jonathan Morgan." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Machine Learning\n", "-----\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Table of Contents\n", "- [Introduction](#Introduction)\n", " - [Glossary of Terms](#Glossary-of-Terms)\n", "- [Python Setup](#Python-Setup)\n", "- [The Machine Learning Process](#The-Machine-Learning-Process)\n", "- [Problem Formulation](#Problem-Formulation)\n", " - [Four Main Types of ML Tasks for Policy Problems](#Four-Main-Types-of-ML-Tasks-for-Policy-Problems)\n", " - [Our Machine Leaning Problem](#Our-Machine-Learning-Problem)\n", "- [Data Exploration and Preparation](#Data-Exploration-and-Preparation)\n", "- [Building a Model and Model Fitting](#Building-a-Model-and-Model-Fitting)\n", " - [Training and Test Sets](#Training-and-Test-Sets)\n", " - [Class Balancing](#Class-Balancing)\n", " - [Crosstabs](#Crosstabs)\n", " - [Splitting into Features and Labels](#Splitting-into-Features-and-Labels)\n", "- [Model Understanding and Evaluation](#Model-Understanding-and-Evaluation)\n", " - [Running a Machine Learning Model](#Running-a-Machine-Learning-Model)\n", " - [Model Understanding](#Model-Understanding)\n", " - [Model Evaluation](#Model-Evaluation)\n", " - [Confusion Matrix](#Confusion-Matrix)\n", " - [Precision and Recall at k%](#Precision-and-Recall-at-k%)\n", "- [Machine Learning Pipeline](#Machine-Learning-Pipeline)\n", "- [Survey of Algorithms](#Survey-of-Algorithms)\n", "- [Assessing Model Against Baselines](#Assessing-Model-Against-Baselines)\n", "- [Exercise](#Exercise)\n", "- [Additional Resources](#Additional-Resources)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction\n", "- Back to [Table of Contents](#Table-of-Contents)\n", "\n", "In this tutorial, we'll discuss how to formulate a research question in the machine learning framework; how to transform raw data into something that can be fed into a model; how to build, evaluate, compare, and select models; and how to reasonably and accurately interpret model results. You'll also get hands-on experience using the `scikit-learn` package in Python to model the data you're familiar with from previous tutorials. \n", "\n", "\n", "This tutorial is based on chapter 6 of [Big Data and Social Science](https://github.com/BigDataSocialScience/)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Glossary of Terms\n", "\n", "- Back to [Table of Contents](#Table-of-Contents)\n", "\n", "There are a number of terms specific to Machine Learning that you will find repeatedly in this notebook. \n", "\n", "- **Learning**: In Machine Learning, you'll hear about \"learning a model.\" This is what you probably know as \n", "*fitting* or *estimating* a function, or *training* or *building* a model. These terms are all synonyms and are \n", "used interchangeably in the machine learning literature.\n", "- **Examples**: These are what you probably know as *data points* or *observations* or *rows*. 
\n", "- **Features**: These are what you probably know as *independent variables*, *attributes*, *predictors*, \n", "or *explanatory variables.*\n", "- **Underfitting**: This happens when a model is too simple and does not capture the structure of the data well \n", "enough.\n", "- **Overfitting**: This happens when a model is too complex or too sensitive to the noise in the data; this can\n", "result in poor generalization performance, or applicability of the model to new data. \n", "- **Regularization**: This is a general method to avoid overfitting by applying additional constraints to the model. \n", "For example, you can limit the number of features present in the final model, or the weight coefficients applied\n", "to the (standardized) features are small.\n", "- **Supervised learning** involves problems with one target or outcome variable (continuous or discrete) that we want\n", "to predict, or classify data into. Classification, prediction, and regression fall into this category. We call the\n", "set of explanatory variables $X$ **features**, and the outcome variable of interest $Y$ the **label**.\n", "- **Unsupervised learning** involves problems that do not have a specific outcome variable of interest, but rather\n", "we are looking to understand \"natural\" patterns or groupings in the data - looking to uncover some structure that \n", "we do not know about a priori. Clustering is the most common example of unsupervised learning, another example is \n", "principal components analysis (PCA).\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Python Setup\n", "- Back to [Table of Contents](#Table-of-Contents)\n", "\n", "Before we begin, run the code cell below to initialize the libraries we'll be using in this assignment. We're already familiar with `numpy`, `pandas`, and `psycopg2` from previous tutorials. Here we'll also be using [`scikit-learn`](http://scikit-learn.org) to fit modeling." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%pylab inline\n", "# from __future__ import division \n", "import pandas as pd\n", "import psycopg2\n", "import sklearn\n", "import seaborn as sns\n", "from sklearn.metrics import precision_recall_curve,roc_curve, auc\n", "from sklearn.metrics import accuracy_score, precision_score, recall_score\n", "from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,\n", " GradientBoostingClassifier,\n", " AdaBoostClassifier)\n", "from sklearn.linear_model import LogisticRegression, SGDClassifier\n", "from sklearn.naive_bayes import GaussianNB\n", "from sklearn.tree import DecisionTreeClassifier\n", "from sqlalchemy import create_engine\n", "sns.set_style(\"white\")\n", "sns.set_context(\"poster\", font_scale=1.25, rc={\"lines.linewidth\":1.25, \"lines.markersize\":8})" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "db_name = \"appliedda\"\n", "hostname = \"10.10.2.10\"\n", "conn = psycopg2.connect(database=db_name, host = hostname) #database connection" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "myschema = 'ada_tanf'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Machine Learning Process\n", "- Back to [Table of Contents](#Table-of-Contents)\n", "\n", "The Machine Learning Process is as follows:\n", "\n", "- [**Understand the problem and goal.**](#problem-formulation) *This sounds obvious but is often nontrivial.* Problems typically start as vague \n", "descriptions of a goal - improving health outcomes, increasing graduation rates, understanding the effect of a \n", "variable *X* on an outcome *Y*, etc. It is really important to work with people who understand the domain being\n", "studied to dig deeper and define the problem more concretely. What is the analytical formulation of the metric \n", "that you are trying to optimize?\n", "- [**Formulate it as a machine learning problem.**](#problem-formulation) Is it a classification problem or a regression problem? Is the \n", "goal to build a model that generates a ranked list prioritized by risk, or is it to detect anomalies as new data \n", "come in? Knowing what kinds of tasks machine learning can solve will allow you to map the problem you are working on\n", "to one or more machine learning settings and give you access to a suite of methods.\n", "- **Data exploration and preparation.** Next, you need to carefully explore the data you have. What additional data\n", "do you need or have access to? What variable will you use to match records for integrating different data sources?\n", "What variables exist in the data set? Are they continuous or categorical? What about missing values? Can you use the \n", "variables in their original form, or do you need to alter them in some way?\n", "- [**Feature engineering.**](#feature-generation) In machine learning language, what you might know as independent variables or predictors \n", "or factors or covariates are called \"features.\" Creating good features is probably the most important step in the \n", "machine learning process. This involves doing transformations, creating interaction terms, or aggregating over data\n", "points or over time and space.\n", "- **Method selection.** Having formulated the problem and created your features, you now have a suite of methods to\n", "choose from. 
It would be great if there were a single method that always worked best for a specific type of problem. Typically, in machine learning, you take a variety of methods and try them, empirically validating which one is the best approach to your problem.\n", "- [**Evaluation.**](#evaluation) As you build a large number of possible models, you need a way to choose the best among them. We'll cover methodology to validate models on historical data and discuss a variety of evaluation metrics. The next step is to validate using a field trial or experiment.\n", "- [**Deployment.**](#deployment) Once you have selected the best model and validated it using historical data as well as a field\n", "trial, you are ready to put the model into practice. You still have to keep in mind that new data will be coming in,\n", "and the model might change over time.\n", "\n", "\n", "\n", "You're probably used to fitting models in physical or social science classes. In those cases, you probably had a hypothesis or theory about the underlying process that gave rise to your data, chose an appropriate model based on prior knowledge and fit it using least squares, and used the resulting parameter or coefficient estimates (or confidence intervals) for inference. This type of modeling is very useful for *interpretation*.\n", "\n", "In machine learning, our primary concern is *generalization*. This means that:\n", "- **We care less about the structure of the model and more about its performance.** This means that we'll try out a whole bunch of models at a time and choose the one that works best, rather than determining which model to use ahead of time. We can still choose a *suboptimal* model if we care about a specific model type. \n", "- **We don't (necessarily) want the model that best fits the data we've *already seen*,** but rather the model that will perform the best on *new data*. This means that we won't gauge our model's performance using the same data that we used to fit the model (e.g., sum of squared errors or $R^2$), and that \"best fit\" or accuracy will most often *not* determine the best model. \n", "- **We can include a lot of variables in the model.** This may sound like the complete opposite of what you've heard in the past, and it can be hard to swallow. But we will deal with many of those concerns in the model fitting process by using more automatic variable selection methods." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Problem Formulation\n", "- Back to [Table of Contents](#Table-of-Contents)\n", "\n", "The first step is turning a vague goal into a real objective function. What do you care about? Do you have data on that thing? What action can you take based on your findings? Do you risk introducing any bias based on the way you model something? 
\n", "\n", "### Four Main Types of ML Tasks for Policy Problems\n", "\n", "- **Description**: [How can we identify and respond to the most urgent online government petitions?](https://dssg.uchicago.edu/project/improving-government-response-to-citizen-requests-online/)\n", "- **Prediction**: [Which students will struggle academically by third grade?](https://dssg.uchicago.edu/project/predicting-students-that-will-struggle-academically-by-third-grade/)\n", "- **Detection**: [Which police officers are likely to have an adverse interaction with the public?](https://dssg.uchicago.edu/project/expanding-our-early-intervention-system-for-adverse-police-interactions/)\n", "- **Behavior Change**: [How can we prevent juveniles from interacting with the criminal justice system?](https://dssg.uchicago.edu/project/preventing-juvenile-interactions-with-the-criminal-justice-system/)\n", " \n", "### Our Machine Learning Problem\n", "> Of the TANF recipients who's spell ended in a three month time period and then were not on TANF for a full year, which will return within the next two years?\n", "\n", "This is an example of a *binary prediction classification problem*.\n", "\n", "Note the time windows are completely arbitrary. You could use an outcome window of 5, 3, 1 years or 1 day. The outcome window will depend on how often you receive new data, how accurate your predictions are for a given time period, or on what time-scale you can use the output of the data. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data Exploration and Preparation\n", "\n", "- Back to [Table of Contents](#Table-of-Contents)\n", "\n", "During the first classes, we have explored the data, linked different data sources, and created new variables. \n", "A table was put together using similar techniques and writen to the class schema. We will now implement our machine learning model on this dataset.\n", "\n", "A step-by-step description of how we created the table is provided in the notebook \"Data Preparation\" notebooks we went over on Friday.\n", "\n", "1. **Creating labels**: Labels are the dependent variables, or *Y* variables, that we are trying to predict. In the machine learning framework, labels are often *binary*: true or false, encoded as 1 or 0. This outcome variable is named `label`.\n", "> Refer to the [03_2_ML_data_preparation_creating_labels.ipynb](03_2_ML_data_preparation_creating_labels.ipynb) notebook for how the labels were created.\n", "\n", "1. **Decide on feature**: Our features are our independent variables or predictors. Good features make machine learning systems effective. The better the features the easier it is the capture the structure of the data. You generate features using domain knowledge. In general, it is better to have more complex features and a simpler model rather than vice versa. Keeping the model simple makes it faster to train and easier to understand rather then extensively searching for the \"right\" model and \"right\" set of parameters. Machine Learning Algorithms learn a solution to a problem from sample data. The set of features is the best representation of the sample data to learn a solution to a problem.\n", "> Refer to the [03_3_ML_data_preparation_creating_features.ipynb](03_3_ML_data_preparation_creating_features.ipynb) notebook for how the labels were created.\n", "\n", "1. 
**Feature engineering** \"is the process of transforming raw data into features that better represent the underlying problem/data/structure to the predictive models, resulting in improved model accuracy on unseen data\" (from [Discover Feature Engineering](http://machinelearningmastery.com/discover-feature-engineering-how-to-engineer-features-and-how-to-get-good-at-it/)). In text, for example, this might involve deriving traits of the text like word counts, verb counts, or topics to feed into a model rather than simply giving it the raw text. Examples of feature engineering are: \n", "    - **Transformations**, such as log, square, and square root.\n", "    - **Dummy (binary) variables**, sometimes known as *indicator variables*, often done by taking categorical variables (such as industry) which do not have a numeric value, and adding them to models as a binary value.\n", "    - **Discretization**. Several methods require features to be discrete instead of continuous. This is often done by binning, which you can do by equal width, deciles, Fisher-Jenks, etc. \n", "    - **Aggregation.** Aggregate features often constitute the majority of features for a given problem. These use different aggregation functions (*count, min, max, average, standard deviation, etc.*) which summarize several values into one feature, aggregating over varying windows of time and space. For example, we may want to calculate the *number* (and *min, max, mean, variance*, etc.) of crimes within an *m*-mile radius of an address in the past *t* months for varying values of *m* and *t*, and then use all of them as features.\n", "\n", "1. **Cleaning data**: To run the `scikit-learn` set of models we demonstrate in this notebook, your input dataset must have no missing values.\n", "\n", "1. **Imputing values to missing or irrelevant data**: Once the features are created, always check to make sure the values make sense. You might have some missing values, or impossible values for a given variable (negative values, major outliers). If you have missing values you should think hard about what makes the most sense for your problem; you may want to replace with `0`, the median or mean of your data, or some other value.\n", "\n", "1. **Scaling features**: Certain models will have an issue with features on different scales. For example, an individual's age is typically a number between 0 and 100 while earnings can be a number between 0 and 1,000,000 (or higher). In order to circumvent this problem, we can scale our features to the same range (e.g., [0, 1])." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Building a Model and Model Fitting\n", "- Back to [Table of Contents](#Table-of-Contents)\n", "\n", "We need to munge our dataset into our **features** (predictors, or independent variables, or $X$ variables) and **labels** (dependent variables, or $Y$ variables). For ease of reference, in subsequent examples, names of variables that pertain to predictors will start with \"`X_`\", and names of variables that pertain to outcome variables will start with \"`y_`\".\n", "\n", "But it's not enough to just build the model; we're going to need a way to know whether or not it generalizes to new data or into the future. Convincing others of the quality of results is often the *most challenging* part of an analysis. 
Making repeatable, well-documented work with clear success metrics makes all the difference.\n", "\n", "To convince ourselves - and others - that our modeling results will generalize, we need to hold some data back (not using it to train the model), then apply our model to that hold-out set and \"blindly\" predict, comparing the model's predictions to what we actually observed. This is called **cross-validation**, and it's the best way we have to estimate how a model will perform on *entirely* novel data. We call the data used to build the model the **training set**, and the rest the **test set**.\n", "\n", "In general, we'd like our training set to be as large as possible, to allow our model to be built with as much data as possible. However, you also want to be as confident as possible that your model will generalize to new data. In practice, you'll have to balance these two objectives in a reasonable way. \n", "\n", "There are also many ways to split up your data into training and testing sets. Since you're trying to evaluate how your model will perform *in practice*, it's best to emulate the true use case of your model as closely as possible when you decide how to evaluate it. A good [tutorial on cross-validation](http://scikit-learn.org/stable/modules/cross_validation.html) can be found on the `scikit-learn` site.\n", "\n", "One simple and commonly used method is ***k-fold* cross-validation**, which entails splitting up our dataset into *k* groups, holding out one group while training a model on the rest of the data, evaluating model performance on the held-out \"fold,\" and repeating this process *k* times (we'll get back to this in the text-analysis tutorial). Another method is **temporal validation**, which involves building a model using all the data up until a given point in time, and then testing the model on observations that happened after that point. \n", "\n", "Our current problem has a time dimension: we are trying to predict an event in the future. If you let information from the future leak into training, temporal effects will artificially inflate the accuracy of your predictions. Since we cannot use the future to predict the past in real life, it is important to use **temporal validation** and create our training and test sets accordingly. \n", "\n", "*Note: it is important to segregate your data based on time when creating features. Otherwise there can be \"leakage,\" where you accidentally use information that you would not have known at the time.* This happens often when calculating aggregation features; for instance, it is quite easy to calculate an average using values that go beyond our training set time-span and not realize it. \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Training and Test Sets\n", "- Back to [Table of Contents](#Table-of-Contents)\n", "\n", "In the [03_2_data_preparation_creating_labels](03_2_data_preparation_creating_labels.ipynb) notebook, we created one row for each individual whose TANF spell ended in the 3 months prior to 1 year before our prediction dates, and labeled them with a `1` if they returned to TANF in the 2 years after our prediction dates.\n", "\n", "In the [03_3_data_preparation_creating_features](03_3_data_preparation_creating_features.ipynb) notebook, we created features for each person in our cohort.\n", "\n", "For both the training and test sets, let's now combine the labels and features into analytical dataframes."
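, "\n", "As an aside, for problems without this temporal structure, `scikit-learn` also provides generic cross-validation helpers such as `KFold` (assuming a version with the `sklearn.model_selection` module). The next cell is a minimal sketch on synthetic data only; our own training/test split is temporal and comes from the dated 2008 and 2009 tables used below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Aside (illustration only): k-fold cross-validation on synthetic data.\n", "# Our actual split is temporal: 2008 tables for training, 2009 tables for testing.\n", "from sklearn.model_selection import KFold\n", "\n", "rng = np.random.RandomState(0)\n", "X_demo = rng.rand(200, 3)\n", "y_demo = (X_demo[:, 0] + 0.5*rng.rand(200) > 0.75).astype(int)\n", "\n", "kf = KFold(n_splits=5, shuffle=True, random_state=0)\n", "for fold, (train_idx, test_idx) in enumerate(kf.split(X_demo)):\n", "    clf = LogisticRegression()\n", "    clf.fit(X_demo[train_idx], y_demo[train_idx])\n", "    acc = accuracy_score(y_demo[test_idx], clf.predict(X_demo[test_idx]))\n", "    print('fold {}: accuracy = {:.2f}'.format(fold, acc))"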
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# For the Training Set:\n", "sql = '''\n", "SELECT a.*, \n", " b.recp_age_beg, \n", " b.recp_age_end, \n", " c.tot_earn_before, \n", " c.tot_earn_during, \n", " c.tot_earn_after,\n", " c.avg_earn_before, \n", " c.avg_earn_during,\n", " c.avg_earn_after,\n", " c.qtr_full_empl_before,\n", " c.qtr_full_empl_during,\n", " c.qtr_full_empl_after,\n", " d.ed_level,\n", " d.martl_status,\n", " e.avg_len_days,\n", " e.district,\n", " e.homeless\n", "FROM {schema}.labels_20080101 AS a\n", "LEFT JOIN {schema}.features_age_20080101 AS b\n", "ON a.recptno = b.recptno\n", "LEFT JOIN {schema}.features_employment_20080101 AS c\n", "ON a.recptno = c.recptno\n", "LEFT JOIN {schema}.features_member_info_20080101 d\n", "ON a.recptno = d.recptno\n", "LEFT JOIN {schema}.features_case_info_20080101 e\n", "ON a.recptno = e.recptno\n", "'''.format(schema=myschema)\n", "df_training = pd.read_sql(sql, conn)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# For the Testing Set:\n", "sql = '''\n", "SELECT a.*, \n", " b.recp_age_beg, \n", " b.recp_age_end, \n", " c.tot_earn_before, \n", " c.tot_earn_during, \n", " c.tot_earn_after,\n", " c.avg_earn_before, \n", " c.avg_earn_during,\n", " c.avg_earn_after,\n", " c.qtr_full_empl_before,\n", " c.qtr_full_empl_during,\n", " c.qtr_full_empl_after,\n", " d.ed_level,\n", " d.martl_status,\n", " e.avg_len_days,\n", " e.district,\n", " e.homeless\n", "FROM {schema}.labels_20090101 AS a\n", "LEFT JOIN {schema}.features_age_20090101 AS b\n", "ON a.recptno = b.recptno\n", "LEFT JOIN {schema}.features_employment_20090101 AS c\n", "ON a.recptno = c.recptno\n", "LEFT JOIN {schema}.features_member_info_20090101 d\n", "ON a.recptno = d.recptno\n", "LEFT JOIN {schema}.features_case_info_20090101 e\n", "ON a.recptno = e.recptno\n", "'''.format(schema=myschema)\n", "df_testing = pd.read_sql(sql, conn)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_training.describe(include='all', percentiles=[.5, .9, .99])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_testing.describe(include='all')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before running any machine learning algorithms, we have to ensure there are no `NULL` (or `NaN`) values in the data. As you have heard before, __never remove observations with missing values without considering the data you are dropping__. One easy way to check if there are any missing values with `Pandas` is to use the `.info()` method, which returns a count of non-null values for each column in your DataFrame." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_training.info()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_testing.info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Data Distribution\n", "- Back to [Table of Contents](#Table-of-Contents)\n", "\n", "Let's check how much data we have, and what the split is between positive (1) and negative (0) labels in our training dataset. It's good to know what the \"baseline\" is in our dataset, to be able to intelligently evaluate our performance." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "print('Number of rows: {}'.format(df_training.shape[0]))\n", "df_training['label'].value_counts(normalize=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Crosstabs\n", "- Back to [Table of Contents](#Table-of-Contents)\n", "\n", "Crosstabs can be useful to get a summary of and to find trends and patterns in our data. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "pd.crosstab(index=df_training['label'], columns=df_training['qtr_full_empl_before'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Selecting Predictors/Features and what we are predicting (Labels)\n", "- Back to [Table of Contents](#Table-of-Contents)\n", "\n", "We'll first list all the columns we have in our data frame and then decide which ones to use as predictors, which one as the label/outcome, and which ones to ignore." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "print(list(df_training))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(list(df_testing))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's remove the ID variables, the label, and the dates from our list of features.\n", "sel_features = list(df_training)\n", "sel_features.remove('recptno')\n", "sel_features.remove('start_date')\n", "sel_features.remove('end_date')\n", "sel_features.remove('label')\n", "sel_features.remove('district') # good to revisit, too many and it's unclear what they mean\n", "\n", "sel_label = 'label'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Creating dummy variables\n", "\n", "Categorical variables need to be converted to a series of binary (0,1) values for `scikit-learn` to use them - this is also what happens in the background of other packages you may be familiar with (eg telling a `statsmodels` regression function `C()` creates a series of binary variables)\n", "\n", "`pandas` has a very nice `get_dummies` function to make our lives easier." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# this is what pandas.DataFrame[column].dtype gives you for two different columns:\n", "df_training['avg_earn_before'].dtype, df_training['ed_level'].dtype" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# find our \"sel_features\" columns that are strings\n", "feat_to_dummy = [c for c in sel_features if df_training[c].dtype == 'O']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# remove these \"feat_to_dummy\" columns from our \"sel_features\" list\n", "for c in feat_to_dummy:\n", " sel_features.remove(c)\n", "# new \"sel_features\" list\n", "print(sel_features)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# get dummy values:\n", "df_train_dummies = pd.get_dummies(df_training[feat_to_dummy])\n", "df_test_dummies = pd.get_dummies(df_testing[feat_to_dummy])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# check if column list is the same for train and test dummy sets\n", "sorted(df_train_dummies.columns.tolist()) == sorted(df_test_dummies.columns.tolist())" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "print(df_train_dummies.columns.tolist())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(df_test_dummies.columns.tolist())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# handle problem when training or testing set has additional categories\n", "# in the categorical columns\n", "\n", "# if dummy columns not equal\n", "if not (sorted(df_train_dummies.columns.tolist()) == sorted(df_test_dummies.columns.tolist())):\n", " # if training column list longer than testing list\n", " # assume need to add columns to testing dummy dataframe\n", " if len(df_train_dummies.columns.tolist()) > len(df_test_dummies.columns.tolist()):\n", " print('train longer')\n", " # get columns to add\n", " add_cols = [c for c in df_train_dummies.columns.tolist() \n", " if c not in df_test_dummies.columns.tolist()]\n", " # add missing columns as zeros b/c binary 0 means does not exist\n", " for c in add_cols:\n", " df_test_dummies[c] = 0\n", " # same but when test has categories train doesn't\n", " elif len(df_train_dummies.columns.tolist()) < len(df_test_dummies.columns.tolist()):\n", " print('test longer')\n", " add_cols = [c for c in df_test_dummies.columns.tolist() \n", " if c not in df_train_dummies.columns.tolist()]\n", " # put additional categories into an \"other\" column since training set doesn't have them\n", "\n", " print('to be updated')\n", " # check if additional categories are in variable that has\n", " # an \"unknown\" option\n", " \n", " # in case same length but not equal\n", " else:\n", " print('case not handled. 
stop and check.')\n", "else:\n", "    print('train and test set dummies are equal')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_test_dummies.info()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_train_dummies.info()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sorted(df_train_dummies.columns.tolist()) == sorted(df_test_dummies.columns.tolist())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# add dummy columns to full training and testing dataframes\n", "df_training = df_training.merge(df_train_dummies, left_index=True, right_index=True)\n", "df_testing = df_testing.merge(df_test_dummies, left_index=True, right_index=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# update the \"sel_features\" with the dummy columns:\n", "for c in df_train_dummies.columns.tolist():\n", "    sel_features.append(c)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# now we can easily access just the columns we want for modeling:\n", "df_training[sel_features].info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Scaling values\n", "\n", "Certain models will have issues with values on different scales. In our analysis cohort, the number of quarters of full employment only varies over a handful of values while total earnings may range from zero to hundreds of thousands. Traditional regression methods, for example, tend to result in features (aka right-hand-side variables, Xs, etc.) with small values having larger coefficients than features with large values. On the other hand, some models - like decision trees - are not generally affected by having variables on different scales.\n", "\n", "To more easily use any model, we'll scale all of our continuous data to values between 0 and 1."
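, "\n", "For intuition, min-max scaling maps each value x to (x - min) / (max - min), so the smallest value becomes 0 and the largest becomes 1. The next cell is a toy check of that mapping (illustration only) before we apply the same idea to our real features." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Toy check of min-max scaling: each value maps to (x - min) / (max - min),\n", "# so the column [1, 5, 9] becomes [0, 0.5, 1].\n", "from sklearn.preprocessing import MinMaxScaler\n", "toy = np.array([[1.0], [5.0], [9.0]])\n", "print(MinMaxScaler().fit_transform(toy))"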
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.preprocessing import MinMaxScaler\n", "scaler = MinMaxScaler()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# With few features it is relatively easy to hand code which columns to scale\n", "# but we can also make our lives a bit easier by doing it programmatically\n", "\n", "# get a list columns with values <0 and/or >1:\n", "cols_to_scale = [c for c in list(df_training[sel_features]) if \n", " df_training[c].min()<0 or df_training[c].max()>1]\n", "print(cols_to_scale)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# add a '*_scl' version of the column for each of our \"columns to scale\"\n", "# and replace our \"sel_features\" list\n", "\n", "for c in cols_to_scale:\n", " # create a new column name by adding '_scl' to the end of each column to scale\n", " new_column_name = c+'_scl'\n", " \n", " # fit MinMaxScaler to training set column\n", " # reshape because scaler built for 2D arrays\n", " scaler.fit(df_training[c].values.reshape(-1, 1))\n", " \n", " # update training and testing datasets with new data\n", " df_training[new_column_name] = scaler.transform(df_training[c].values.reshape(-1, 1))\n", " df_testing[new_column_name] = scaler.transform(df_testing[c].values.reshape(-1, 1))\n", " \n", " # add new column to our \"selection features\"\n", " sel_features.append(new_column_name)\n", " \n", " # and remove the unscaled column\n", " sel_features.remove(c)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# now our selection features are all scaled between 0-1\n", "df_training[sel_features].describe().T" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# and see distribution of training set\n", "df_training[sel_features].describe().T" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# get the underlying numpy.array data for use in scikit-learn\n", "\n", "X_train = df_training[sel_features].values\n", "y_train = df_training[sel_label].values\n", "X_test = df_testing[sel_features].values\n", "y_test = df_testing[sel_label].values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model Understanding and Evaluation\n", "\n", "- Back to [Table of Contents](#Table-of-Contents)\n", "\n", "In this phase, we will run the Machine Learning model on our training set. The training set's features will be used to predict the labels. Once our model is created using the test set, we will assess its quality by applying it to the test set and by comparing the *predicted values* to the *actual values* for each record in your testing data set. \n", "\n", "- **Performance Estimation**: How well will our model do once it is deployed and applied to new data?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Running a Machine Learning Model\n", "- Back to [Table of Contents](#Table-of-Contents)\n", "\n", "Python's [`scikit-learn`](http://scikit-learn.org/stable/) is a commonly used, well documented Python library for machine learning. 
This library can help you split your data into training and test sets, fit models and use them to predict results on new data, and evaluate your results.\n", "\n", "We will start with the simplest [`LogisticRegression`](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) model and see how well that does.\n", "\n", "You can use any number of metrics to judge your models (see [model evaluation](#model-evaluation)), but we'll use [`accuracy_score()`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html) (ratio of correct predictions to total number of predictions) as our measure." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's fit a model\n", "from sklearn import linear_model\n", "model = linear_model.LogisticRegression(penalty='l1', C=1)\n", "model.fit( X_train, y_train )\n", "print(model)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When we print the model results, we see different parameters we can adjust as we refine the model based on running it against test data (values such as `intercept_scaling`, `max_iter`, `penalty`, and `solver`). Example output (here from a `LogisticRegression` with default parameters):\n", "\n", "    LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,\n", "              intercept_scaling=1, max_iter=100, multi_class='ovr',\n", "              penalty='l2', random_state=None, solver='liblinear', tol=0.0001,\n", "              verbose=0)\n", "\n", "To adjust these parameters, one would alter the call that creates the `LogisticRegression()` model instance, passing it one or more of these parameters with a value other than the default. So, to re-fit the model with `max_iter` of 1000, `intercept_scaling` of 2, and `solver` of \"lbfgs\" (pulled from thin air as an example), you'd create your model as follows:\n", "\n", "    model = LogisticRegression( max_iter = 1000, intercept_scaling = 2, solver = \"lbfgs\" )\n", "\n", "The basic way to choose values for, or \"tune,\" these parameters is the same as the way you choose a model: fit the model to your training data with a variety of parameters, and see which perform the best on the test set. An obvious drawback is that you can also *overfit* to your test set; in this case, you can alter your method of cross-validation.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Model Evaluation \n", "- Back to [Table of Contents](#Table-of-Contents)\n", "\n", "Machine learning models usually do not produce a prediction (0 or 1) directly. Rather, models produce a score between 0 and 1 (that can sometimes be interpreted as a probability), which is basically the model ranking all of the observations from *most likely* to *least likely* to have a label of 1. The 0-1 score is then turned into a 0 or 1 based on a threshold. \n", "\n", "If you use the sklearn method `.predict()` then the model will select a threshold for you (generally 0.5) - it is almost **never a good idea to let the model choose the threshold for you**. Instead, you should get the actual score and test different threshold values."
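, "\n", "As a quick sanity check of that claim, the next cell (a small sketch) verifies that for this fitted model `.predict()` matches thresholding `.predict_proba()` at 0.5." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sanity check (sketch): .predict() is equivalent to thresholding the\n", "# predicted probability of class 1 at 0.5 for this binary classifier.\n", "manual_pred = (model.predict_proba(X_test)[:, 1] > 0.5).astype(int)\n", "print((manual_pred == model.predict(X_test)).all())"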
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# get the prediction scores\n", "y_scores = model.predict_proba(X_test)[:,1]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Look at the distribution of scores:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "sns.distplot(y_scores, kde=False, rug=False)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_testing['y_score'] = y_scores" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# see our selected features and prediction score\n", "df_testing[sel_features + ['y_score']].head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Tools like `sklearn` often have a default threshold of 0.5, but a good threshold is selected based on the data, model and the specific problem you are solving. As a trial run, let's set a threshold to the value of 0.05. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# you can make a simple function in one line using \"lambda\":\n", "calc_threshold = lambda x,y: 0 if x < y else 1 \n", "\n", "# given the distribution of the scores, what threshold would you set?\n", "selected_threshold = 0.2\n", "\n", "# create a list of our predicted outocmes\n", "predicted = np.array( [calc_threshold(score, selected_threshold) for score in y_scores] )\n", "\n", "# and our actual, or expected, outcomes\n", "expected = y_test" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Confusion Matrix\n", "- Back to [Table of Contents](#Table-of-Contents)\n", "\n", "Once we have tuned our scores to 0 or 1 for classification, we create a *confusion matrix*, which has four cells: true negatives, true positives, false negatives, and false positives. Each data point belongs in one of these cells, because it has both a ground truth and a predicted label. If an example was predicted to be negative and is negative, it's a true negative. If an example was predicted to be positive and is positive, it's a true positive. If an example was predicted to be negative and is positive, it's a false negative. If an example was predicted to be positive and is negative, it's a false negative." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import confusion_matrix\n", "conf_matrix = confusion_matrix(expected,predicted)\n", "print(conf_matrix)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The count of true negatives is `conf_matrix[0,0]`, false negatives `conf_matrix[1,0]`, true positives `conf_matrix[1,1]`, and false_positives `conf_matrix[0,1]`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Accuracy is the ratio of the correct predictions (both positive and negative) to all predictions. 
\n", "$$ Accuracy = \\frac{TP+TN}{TP+TN+FP+FN} $$" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# generate an accuracy score by comparing expected to predicted.\n", "from sklearn.metrics import accuracy_score\n", "accuracy = accuracy_score(expected, predicted)\n", "print( \"Accuracy = \" + str( accuracy ) )" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_training['label'].value_counts(normalize=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Evaluation metrics\n", "\n", "what do we think about this accuracy? good? bad?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Two metrics that are often more relevant than overall accuracy are **precision** and **recall**. \n", "\n", "Precision measures the accuracy of the classifier when it predicts an example to be positive. It is the ratio of correctly predicted positive examples to examples predicted to be positive. \n", "\n", "$$ Precision = \\frac{TP}{TP+FP}$$\n", "\n", "Recall measures the accuracy of the classifier to find positive examples in the data. \n", "\n", "$$ Recall = \\frac{TP}{TP+FN} $$\n", "\n", "By selecting different thresholds we can vary and tune the precision and recall of a given classifier. A conservative classifier (threshold 0.99) will classify a case as 1 only when it is *very sure*, leading to high precision. On the other end of the spectrum, a low threshold (e.g. 0.01) will lead to higher recall. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import precision_score, recall_score\n", "precision = precision_score(expected, predicted)\n", "recall = recall_score(expected, predicted)\n", "print( \"Precision = \" + str( precision ) )\n", "print( \"Recall= \" + str(recall))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we care about our whole precision-recall space, we can optimize for a metric known as the **area under the curve (AUC-PR)**, which is the area under the precision-recall curve. The maximum AUC-PR is 1. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def plot_precision_recall(y_true,y_score):\n", " \"\"\"\n", " Plot a precision recall curve\n", " \n", " Parameters\n", " ----------\n", " y_true: ls\n", " ground truth labels\n", " y_score: ls\n", " score output from model\n", " \"\"\"\n", " precision_curve, recall_curve, pr_thresholds = precision_recall_curve(y_true,y_score)\n", " plt.plot(recall_curve, precision_curve)\n", " plt.xlabel('Recall')\n", " plt.ylabel('Precision')\n", " auc_val = auc(recall_curve,precision_curve)\n", " print('AUC-PR: {0:1f}'.format(auc_val))\n", " plt.show()\n", " plt.clf()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_precision_recall(expected, y_scores)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Precision and Recall at k%\n", "- Back to [Table of Contents](#Table-of-Contents)\n", "\n", "If we only care about a specific part of the precision-recall curve we can focus on more fine-grained metrics. For instance, say there is a special program for those most likely to need assistance within the next year, but that it can only cover *1% of our test set*. 
In that case, we would want to prioritize the 1% who are *most likely* to need assistance within the next year, and it wouldn't matter too much how accurate we were on the overall data.\n", "\n", "Let's say that, out of the approximately 300,000 observations, we can intervene on 1% of them, or the \"top\" 3000 in a year (where \"top\" means highest likelihood of needing intervention in the next year). We can then focus on optimizing our **precision at 1%**." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def plot_precision_recall_n(y_true, y_prob, model_name):\n", " \"\"\"\n", " y_true: ls \n", " ls of ground truth labels\n", " y_prob: ls\n", " ls of predic proba from model\n", " model_name: str\n", " str of model name (e.g, LR_123)\n", " \"\"\"\n", " from sklearn.metrics import precision_recall_curve\n", " y_score = y_prob\n", " precision_curve, recall_curve, pr_thresholds = precision_recall_curve(y_true, y_score)\n", " precision_curve = precision_curve[:-1]\n", " recall_curve = recall_curve[:-1]\n", " pct_above_per_thresh = []\n", " number_scored = len(y_score)\n", " for value in pr_thresholds:\n", " num_above_thresh = len(y_score[y_score>=value])\n", " pct_above_thresh = num_above_thresh / float(number_scored)\n", " pct_above_per_thresh.append(pct_above_thresh)\n", " pct_above_per_thresh = np.array(pct_above_per_thresh)\n", " plt.clf()\n", " fig, ax1 = plt.subplots()\n", " ax1.plot(pct_above_per_thresh, precision_curve, 'b')\n", " ax1.set_xlabel('percent of population')\n", " ax1.set_ylabel('precision', color='b')\n", " ax1.set_ylim(0,1.05)\n", " ax2 = ax1.twinx()\n", " ax2.plot(pct_above_per_thresh, recall_curve, 'r')\n", " ax2.set_ylabel('recall', color='r')\n", " ax2.set_ylim(0,1.05)\n", " \n", " name = model_name\n", " plt.title(name)\n", " plt.show()\n", " plt.clf()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "plot_precision_recall_n(expected,y_scores, 'LR')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def precision_at_k(y_true, y_scores,k):\n", " \n", " threshold = np.sort(y_scores)[::-1][int(k*len(y_scores))]\n", " y_pred = np.asarray([1 if i > threshold else 0 for i in y_scores ])\n", " return precision_score(y_true, y_pred)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p_at_10 = precision_at_k(expected,y_scores, 0.1)\n", "print('Precision at 10%: {:.3f}'.format(p_at_10))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Feature Understanding\n", "\n", "- Back to [Table of Contents](#Table-of-Contents)\n", "\n", "Now that we have evaluated our model overall, let's look at the coefficients for each feature, along with their standard deviation." 
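, "\n", "Below, the raw coefficients are listed and then also multiplied by each feature's standard deviation, which puts features measured on different scales onto a roughly comparable footing. The next cell is an optional sketch that shows that scaled version as a sorted list, which can be easier to scan." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional sketch: the standard-deviation-scaled coefficients as a sorted\n", "# pandas Series (model and sel_features are defined above).\n", "pd.Series(np.std(X_train, 0) * model.coef_[0], index=sel_features).sort_values()"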
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"The coefficients for each of the features are \")\n", "list(zip(sel_features, model.coef_[0]))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "std_coef = np.std(X_train,0)*model.coef_\n", "list(zip(sel_features, std_coef[0]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Decision tree model" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.tree import DecisionTreeClassifier\n", "\n", "# packages to display a tree in Jupyter notebooks\n", "from sklearn.externals.six import StringIO\n", "from IPython.display import Image\n", "from sklearn.tree import export_graphviz\n", "import graphviz as gv\n", "import pydotplus" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = DecisionTreeClassifier(max_depth=3, min_samples_split=100)\n", "model.fit( X_train, y_train )\n", "print(model)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# visualize the tree\n", "\n", "# object to hold the graphviz data\n", "dot_data = StringIO()\n", "\n", "# create the visualization\n", "export_graphviz(model, out_file=dot_data, filled=True,\n", " rounded=True, special_characters=True,\n", " feature_names=df_training[sel_features].columns.values)\n", "\n", "# convert to a graph from the data\n", "graph = pydotplus.graph_from_dot_data(dot_data.getvalue())\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# print out the graph to zoom in \n", "# graph.write_pdf('./output/model_eval_tree1.pdf')\n", "\n", "# or view it directly in notebook\n", "Image(graph.create_png())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# get the prediction scores\n", "y_scores = model.predict_proba(X_test)[:,1]\n", "\n", "fig, ax = subplots(figsize=(10,5))\n", "sns.distplot(y_scores, kde=False, rug=False, ax=ax)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# given the distribution of the scores, what threshold would you set?\n", "selected_threshold = 0.15\n", "\n", "# create a list of our predicted outocmes\n", "predicted = np.array( [calc_threshold(score, selected_threshold) for score in y_scores] )\n", "\n", "# and our actual, or expected, outcomes\n", "expected = y_test" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "conf_matrix = confusion_matrix(expected,predicted)\n", "print(conf_matrix)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "accuracy = accuracy_score(expected, predicted)\n", "print( \"Accuracy = \" + str( accuracy ) )" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model_precision = precision_score(expected, predicted)\n", "recall = recall_score(expected, predicted)\n", "print( \"Precision = \" + str( model_precision ) )\n", "print( \"Recall= \" + str(recall))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_precision_recall(expected, y_scores)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_precision_recall_n(expected,y_scores, 'DTC')" ] }, { "cell_type": "code", "execution_count": null, 
"metadata": {}, "outputs": [], "source": [ "# Decistion Trees have a `.feature_importances_` rather than `.coef_` attribute:\n", "list(zip(sel_features, model.feature_importances_))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Assessing Model Against Baselines\n", "\n", "- Back to [Table of Contents](#Table-of-Contents)\n", "\n", "It is important to check our model against a reasonable **baseline** to know how well our model is doing. \n", "\n", "> Without any context, over 85% accuracy can sound great... But it's not so great when you remember that you could as well or better by declaring that all of the firms will survive in the next year, which would be a stupid (not to mention useless) model. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A good place to start is checking against a *random* baseline, assigning every example a label (positive or negative) completely at random. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "random_score = [random.uniform(0,1) for i in enumerate(y_test)] \n", "random_p_at_selected = precision_at_k(expected,random_score, selected_threshold)\n", "print(random_p_at_selected)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another good practice is checking against an \"expert\" or rule of thumb baseline. \n", "> Here, ..." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# avg_wage_pct_10 = np.percentile(df_testing['avg_wage'], 10)\n", "# expert_predicted = np.array([1 if (avg_wage Here, let's compare to a prediction where all employers survive." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "none_predicted = np.array([1 for i in range(df_testing.shape[0])])\n", "none_precision = precision_score(expected, none_predicted)\n", "print(none_precision)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_style(\"white\")\n", "sns.set_context(\"poster\", font_scale=2.25, rc={\"lines.linewidth\":2.25, \"lines.markersize\":8})\n", "fig, ax = plt.subplots(1, figsize=(22,12))\n", "sns.barplot(['Random','None Employed', 'Our Model'],\n", "# [random_p_at_1, none_precision, expert_precision, max_p_at_k],\n", " [random_p_at_selected, none_precision, model_precision],\n", "# palette=['#6F777D','#6F777D','#6F777D','#800000'])\n", " palette=['#6F777D','#6F777D','#800000'])\n", "sns.despine()\n", "plt.ylim(0,1)\n", "plt.ylabel('precision at {}%'.format(selected_threshold*100));" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Machine Learning Pipeline\n", "- Back to the [Table of Contents](#Table-of-Contents)\n", "\n", "When working on machine learning projects, it is a good idea to structure your code as a modular **pipeline**, which contains all of the steps of your analysis, from the original data source to the results that you report, along with documentation. This has many advantages:\n", "- **Reproducibility**. It's important that your work be reproducible. This means that someone else should be able\n", "to see what you did, follow the exact same process, and come up with the exact same results. It also means that\n", "someone else can follow the steps you took and see what decisions you made, whether that person is a collaborator, \n", "a reviewer for a journal, or the agency you are working with. 
\n", "- **Ease of model evaluation and comparison**.\n", "- **Ability to make changes.** If you receive new data and want to go through the process again, or if there are \n", "updates to the data you used, you can easily substitute new data and reproduce the process without starting from scratch." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Survey of Algorithms\n", "\n", "- Back to the [Table of Contents](#Table-of-Contents)\n", "\n", "We have only scratched the surface of what we can do with our model. We've only tried one classifier (Logistic Regression), and there are plenty more classification algorithms in `sklearn`. Let's try them! " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "clfs = {'RF': RandomForestClassifier(n_estimators=1000, n_jobs=-1),\n", " 'ET': ExtraTreesClassifier(n_estimators=1000, n_jobs=-1),\n", " 'LR': LogisticRegression(penalty='l1', C=1e5),\n", " 'SGD':SGDClassifier(loss='log'),\n", " 'GB': GradientBoostingClassifier(learning_rate=0.05, subsample=0.5, max_depth=6, random_state=17\n", " , n_estimators=10),\n", " 'NB': GaussianNB(),\n", " 'DT': DecisionTreeClassifier(max_depth=10, min_samples_split=10)\n", " }" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sel_clfs = ['RF', 'ET', 'LR', 'SGD', 'GB', 'NB', 'DT']" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "max_p_at_k = 0\n", "df_results = pd.DataFrame()\n", "for clfNM in sel_clfs:\n", " clf = clfs[clfNM]\n", " clf.fit( X_train, y_train )\n", " print(clf)\n", " y_score = clf.predict_proba(X_test)[:,1]\n", " predicted = np.array(y_score)\n", " expected = np.array(y_test)\n", " plot_precision_recall_n(expected,predicted, clfNM)\n", " p_at_1 = precision_at_k(expected,y_score, 0.01)\n", " p_at_5 = precision_at_k(expected,y_score,0.05)\n", " p_at_10 = precision_at_k(expected,y_score,0.10)\n", " fpr, tpr, thresholds = roc_curve(expected,y_score)\n", " auc_val = auc(fpr,tpr)\n", " df_results = df_results.append([{\n", " 'clfNM':clfNM,\n", " 'p_at_1':p_at_1,\n", " 'p_at_5':p_at_5,\n", " 'p_at_10':p_at_10,\n", " 'auc':auc_val,\n", " 'clf': clf\n", " }])\n", " \n", " #feature importances\n", " if hasattr(clf, 'coef_'):\n", " feature_import = dict(\n", " zip(sel_features, clf.coef_.ravel()))\n", " elif hasattr(clf, 'feature_importances_'):\n", " feature_import = dict(\n", " zip(sel_features, clf.feature_importances_))\n", " print(\"FEATURE IMPORTANCES\")\n", " print(feature_import)\n", " \n", " if max_p_at_k < p_at_1:\n", " max_p_at_k = p_at_1\n", " print('Precision at 1%: {:.2f}'.format(p_at_1))\n", "# df_results.to_csv('output/modelrun.csv')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's compare all models at 1%" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "random_score = [random.uniform(0,1) for i in enumerate(y_test)] \n", "random_p_at_1 = precision_at_k(expected,random_score, selected_threshold)\n", "print(random_p_at_1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_style(\"white\")\n", "sns.set_context(\"poster\", font_scale=2.25, rc={\"lines.linewidth\":2.25, \"lines.markersize\":8})\n", "fig, ax = plt.subplots(1, figsize=(22,12))\n", "sns.barplot(['Random','None Employed', 'Best Model'],\n", "# [random_p_at_1, none_precision, expert_precision, max_p_at_k],\n", " [random_p_at_1, none_precision, 
max_p_at_k],\n", "# palette=['#6F777D','#6F777D','#6F777D','#800000'])\n", " palette=['#6F777D','#6F777D','#800000'])\n", "sns.despine()\n", "plt.ylim(0,1)\n", "plt.ylabel('precision at 1%')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise\n", "- Back to [Table of Contents](#Table-of-Contents)\n", "\n", "Our model has just scratched the surface. Try the following: \n", " \n", "- Create more features\n", "- Try more models\n", "- Try different parameters for your model\n", "\n", "The notebook used to create features and labels is the \"Data Preparation\" notebook. Take the time to look at it and understand how every metric was created and added to the data table." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Additional Resources\n", "\n", "- Hastie et al.'s [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/) is a classic and is available online for free.\n", "- James et al.'s [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/), also available online, includes less mathematics and is more approachable.\n", "- Wu et al.'s [Top 10 Algorithms in Data Mining](http://www.cs.uvm.edu/~icdm/algorithms/10Algorithms-08.pdf)." ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "py3-ada", "language": "python", "name": "py3-ada" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" }, "toc": { "nav_menu": {}, "number_sections": false, "sideBar": true, "skip_h1_title": false, "toc_cell": false, "toc_position": { "height": "605px", "left": "0px", "right": "1492px", "top": "110px", "width": "270px" }, "toc_section_display": true, "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 1 }