{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Machine Learning -- Model Training and Evaluation\n", "-----\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Introduction\n", "\n", "In this tutorial, we'll discuss how to formulate a policy problem or a social science question in the machine learning framework; how to transform raw data into something that can be fed into a model; how to build, evaluate, compare, and select models; and how to reasonably and accurately interpret model results. You'll also get hands-on experience using the `scikit-learn` package in Python. \n", "\n", "This tutorial is based on chapter \"Machine Learning\" of [Big Data and Social Science](https://coleridge-initiative.github.io/big-data-and-social-science/)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np\n", "import sqlite3\n", "import sklearn\n", "import seaborn as sns\n", "import matplotlib.pyplot as plt\n", "from dateutil.parser import parse\n", "from sklearn.metrics import precision_recall_curve, roc_curve, auc, confusion_matrix, accuracy_score, precision_score, recall_score\n", "from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, GradientBoostingClassifier\n", "from sklearn.linear_model import LogisticRegression, SGDClassifier\n", "from sklearn.tree import DecisionTreeClassifier\n", "sns.set_style(\"white\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "DB = 'ncdoc.db'\n", "conn = sqlite3.connect(DB)\n", "cur = conn.cursor()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Problem Formulation\n", "---\n", " \n", "Our Machine Learning Problem\n", ">Of all prisoners released, we would like to predict who is likely to reenter jail within *5* years of the day we make our prediction. For instance, say it is Jan 1, 2009 and we want to identify which \n", ">prisoners are likely to re-enter jail between now and end of 2013. We can run our predictive model and identify who is most likely at risk. The is an example of a *binary classification* problem. \n", "\n", "Note the outcome window of 5 years is completely arbitrary. You could use a window of 5, 3, 1 years or 1 day. \n", "\n", "In order to predict recidivism, we will be using data from the `inmate` and `sentences` table to create labels (predictors, or independent variables, or $X$ variables) and features (dependent variables, or $Y$ variables). \n", "\n", "We need to munge our data into **labels** (1_Machine_Learning_Labels.ipynb) and **features** (2_Machine_Learning_Features.ipynb) before we can train and evaluate **machine learning models** (3_Machine_Learning_Models.ipynb).\n", "\n", "This notebook assumes that you have already worked through the `1_Machine_Learning_Labels` and `2_Machine_Learning_Features` notebooks. If that is not the case, the following functions allow you to bring in the functions developed in those notebooks from `.py` scripts." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# We are bringing in the create_labels and create_features functions covered in previous notebooks\n", "# Note that these are brought in from scripts rather than from external packages\n", "from create_labels import create_labels\n", "from create_features import create_features" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# These functions make sure that the tables have been created in the database.\n", "create_labels('2008-12-31', '2009-01-01', '2013-12-31', conn)\n", "create_labels('2013-12-31', '2014-01-01', '2018-12-31', conn)\n", "\n", "create_features('2008-12-31', '2009-01-01', '2013-12-31', conn)\n", "create_features('2013-12-31', '2014-01-01', '2018-12-31', conn)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Create Training and Test Sets\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Our Training Set\n", "\n", "We create a training set that takes people at the beginning of 2009 and defines the outcome based on data from 2009-2013 (`recidivism_labels_2009_2013`). The features for each person are based on data up to the end of 2008 (`features_2000_2008`).\n", "\n", "*Note:* It is important to segregate your data based on time when creating features. Otherwise there can be \"leakage\", where you accidentally use information that you would not have known at the time." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sql_string = \"drop table if exists train_matrix;\"\n", "cur.execute(sql_string)\n", "\n", "sql_string = \"create table train_matrix as \"\n", "sql_string += \"select l.inmate_doc_number, l.recidivism, f.num_admits, f.length_longest_sentence, f.age_first_admit, f.age \"\n", "sql_string += \"from recidivism_labels_2009_2013 l \"\n", "sql_string += \"left join features_2000_2008 f on f.inmate_doc_number = l.inmate_doc_number \"\n", "sql_string += \";\"\n", "cur.execute(sql_string)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We then load the training data into `df_training`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sql_string = \"SELECT *\"\n", "sql_string += \"FROM train_matrix \"\n", "sql_string += \";\"\n", "\n", "df_training = pd.read_sql(sql_string, con = conn)\n", "df_training.head(5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Our Test (Validation) Set\n", "\n", "In the machine learning process, we want to build models on the training set and evaluate them on the test set. Our test set will use labels from 2014-2018 (`recidivism_labels_2014_2018`), and our features will be based on data up to the end of 2013 (`features_2000_2013`). " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sql_string = \"drop table if exists test_matrix;\"\n", "cur.execute(sql_string)\n", "\n", "sql_string = \"create table test_matrix as \"\n", "sql_string += \"select l.inmate_doc_number, l.recidivism, f.num_admits, f.length_longest_sentence, f.age_first_admit, f.age \"\n", "sql_string += \"from recidivism_labels_2014_2018 l \"\n", "sql_string += \"left join features_2000_2013 f on f.inmate_doc_number = l.inmate_doc_number \"\n", "sql_string += \";\"\n", "cur.execute(sql_string)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We load the test data into `df_test`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sql_string = \"SELECT *\"\n", "sql_string += \"FROM test_matrix \"\n", "sql_string += \";\"\n", "\n", "df_test = pd.read_sql(sql_string, con = conn)\n", "df_test.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Data Cleaning\n", "\n", "Before we proceed to model training, we need to clean our training data. First, we check the percentage of missing values." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "isnan_training_rows = df_training.isnull().any(axis=1)\n", "nrows_training = df_training.shape[0]\n", "nrows_training_isnan = df_training[isnan_training_rows].shape[0]\n", "print('%of frows with NaNs {} '.format(float(nrows_training_isnan)/nrows_training))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see that about 1% of the rows in our data have missing values. In the following, we will drop rows with missing values. Note, however, that better ways for dealing with missings exist." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_training = df_training[~isnan_training_rows]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's check if the values of the ages at first admit are reasonable." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.unique( df_training['age_first_admit'] )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Looks like this needs some cleaning. We will drop any rows that have age < 14 and > 99. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "keep = (df_training['age_first_admit'] >= 14) & (df_training['age_first_admit'] <= 99)\n", "df_training = df_training[keep]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's check how much data we still have and how many examples of recidivism are in our training dataset. When it comes to model evaluation, it is good to know what the \"baseline\" is in our dataset." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Number of rows: {}'.format(df_training.shape[0]))\n", "df_training['recidivism'].value_counts(normalize=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have about 155,000 examples, and about 25% of those are *positive* examples (recidivist), which is what we're trying to identify. About 75% of the examples are *negative* examples (non-recidivist).\n", "\n", "Next, let's take a look at the test set." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "isnan_test_rows = df_test.isnull().any(axis=1)\n", "nrows_test = df_test.shape[0]\n", "nrows_test_isnan = df_test[isnan_test_rows].shape[0]\n", "print('%of rows with NaNs {} '.format(float(nrows_test_isnan)/nrows_test))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see that about 1% of the rows in our test set have missing values. This matches what we'd expect based on what we saw in the training set." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_test = df_test[~isnan_test_rows]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " As before, we drop cases with age < 14 and > 99." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "keep = (df_test['age_first_admit'] >= 14) & (df_test['age_first_admit'] <= 99)\n", "df_test = df_test[keep]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also check the number of observations and the outcome distribution for our test data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Number of rows: {}'.format(df_test.shape[0]))\n", "df_test['recidivism'].value_counts(normalize=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Split into features and labels\n", "\n", "Here we select our features and outcome variable." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sel_features = ['num_admits', 'length_longest_sentence', 'age_first_admit', 'age']\n", "sel_label = 'recidivism'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now create an X- and y-training and X- and y-test object to train and evaluate prediction models with `scikit-learn`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train = df_training[sel_features].values\n", "y_train = df_training[sel_label].values\n", "X_test = df_test[sel_features].values\n", "y_test = df_test[sel_label].values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Model Training\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On this basis, we can now build a prediction model that learns the relationship between our predictors (`X_train`) and recidivism (`y_train`) in the training data. We start with using logistic regression as our first model." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = LogisticRegression(penalty = 'none')\n", "model.fit( X_train, y_train )\n", "print(model)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When we print the model object, we see different model settings that can be adjusted. To adjust these parameters, one would alter the call that creates the `LogisticRegression()` model instance, passing it one or more of these parameters with a value other than the default. So, to re-fit the model with `penalty` of \"elasticnet\", `C` of 0.01, and `intercept_scaling` of 2 (as an example), you'd create your model as follows:\n", "\n", " model = LogisticRegression(penalty = 'elasticnet', C = 0.01, intercept_scaling = 2)\n", "\n", "The basic way to choose values for, or \"tune,\" these parameters is the same as the way you choose a model: fit the model to your training data with a variety of parameters, and see which perform the best on the test set. However, an obvious drawback is that you can also *overfit* to your test set. In this case, you can (and should) alter the validation method (e.g., split the data into a training, validation and test set or run cross-validation in the training set).\n", "\n", "Let's look at what the model learned, i.e. what the coefficients are." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "model.coef_[0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.intercept_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Model Evaluation \n", "---\n", "\n", "Machine learning models usually do not produce a prediction (0 or 1) directly. 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at what the model learned, i.e., what the coefficients are." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "model.coef_[0]" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.intercept_" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "# Model Evaluation \n", "---\n", "\n", "Machine learning models usually do not produce a prediction (0 or 1) directly. Rather, models produce a score (that can sometimes be interpreted as a probability) between 0 and 1, which lets you more finely rank all of the examples from *most likely* to *least likely* to have label 1 (positive). This score is then turned into a 0 or 1 based on a user-specified threshold. For example, you might label all examples that have a score greater than 0.5 as positive (1), but there's no reason that has to be the cutoff. " ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y_scores = model.predict_proba(X_test)[:,1]" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Let's take a look at the distribution of scores and see if it makes sense to us. " ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.distplot(y_scores, kde=False, rug=False)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Our distribution of scores is skewed, with the majority of scores on the lower end of the scale. We expect this because 75% of the training data is made up of non-recidivists, so we'd guess that a higher proportion of the examples in the test set will be negative (meaning they should have lower scores). " ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_test['y_score'] = y_scores" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Tools like `scikit-learn` often have a default threshold of 0.5, but a good threshold is selected based on the data, the model and the specific problem you are solving. As a trial run, let's set a threshold of 0.5. " ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "calc_threshold = lambda x,y: 0 if x < y else 1 \n", "predicted = np.array( [calc_threshold(score,0.5) for score in y_scores] )\n", "expected = y_test" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Confusion Matrix\n", "\n", "Once we have converted our scores to 0 or 1 for classification, we create a *confusion matrix*, which has four cells: true negatives, true positives, false negatives, and false positives. If an example was predicted to be negative and is negative, it's a true negative. If an example was predicted to be positive and is positive, it's a true positive. If an example was predicted to be negative and is positive, it's a false negative. If an example was predicted to be positive and is negative, it's a false positive." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "conf_matrix = confusion_matrix(expected,predicted)\n", "print(conf_matrix)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The count of true negatives is `conf_matrix[0,0]`, false negatives `conf_matrix[1,0]`, true positives `conf_matrix[1,1]`, and false positives `conf_matrix[0,1]`." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "accuracy = accuracy_score(expected, predicted)\n", "print( \"Accuracy = \" + str( accuracy ) )" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We get an accuracy score of 84%. As the value counts above showed, our test set has about 84.5% non-recidivists and 15.5% recidivists. If we had just labeled all the examples as negative and guessed non-recidivist every time, we would have had an accuracy of 84.5%, so our basic model is not doing better than a \"dumb classifier\". We therefore want to explore other prediction methods in later sections. For now, let's look at the precision and recall scores of our model, still using the default classification threshold."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "precision = precision_score(expected, predicted)\n", "recall = recall_score(expected, predicted)\n", "print( \"Precision = \" + str( precision ) )\n", "print( \"Recall = \" + str( recall ) )" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## AUC-PR and AUC-ROC\n", "\n", "If we care about the whole precision-recall space, we can consider a metric known as the area under the precision-recall curve (AUC-PR). The maximum AUC-PR is 1. " ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "precision_curve, recall_curve, pr_thresholds = precision_recall_curve(expected, y_scores)\n", "auc_val = auc(recall_curve,precision_curve)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Here we plot the PR curve and print the corresponding AUC-PR score." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.plot(recall_curve, precision_curve)\n", "plt.xlabel('Recall')\n", "plt.ylabel('Precision')\n", "print('AUC-PR: {0:.2f}'.format(auc_val))\n", "plt.show()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "A related performance metric is the area under the receiver operating characteristic curve (AUC-ROC). It also has a maximum of 1, with 0.5 representing a non-informative model." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fpr, tpr, thresholds = roc_curve(expected, y_scores)\n", "roc_auc = auc(fpr, tpr)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Here we plot the ROC curve, with the corresponding AUC-ROC score shown in the legend." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)\n", "plt.legend(loc = 'lower right')\n", "plt.plot([0, 1], [0, 1],'r--')\n", "plt.xlim([0, 1])\n", "plt.ylim([0, 1])\n", "plt.ylabel('True Positive Rate')\n", "plt.xlabel('False Positive Rate')\n", "plt.show()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Precision and Recall at k%\n", "\n", "If we only care about a specific part of the precision-recall curve, we can focus on more fine-grained metrics. For instance, say there is a special program for people likely to be recidivists, but only 5% can be admitted. In that case, we would want to prioritize the 5% who are *most likely* to end up back in jail, and it wouldn't matter too much how accurate we were on the rest, who are less likely to end up back in jail. \n", "\n", "Let's say that, out of the approximately 200,000 prisoners, we can intervene on 5% of them, or the \"top\" 10,000 prisoners (where \"top\" means highest predicted risk of recidivism). We can then focus on optimizing our precision at the top 5%. To build intuition, we first look at a single cutoff before defining a function (`plot_precision_recall_n`) that computes and plots precision and recall for any percentage cutoff k."
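] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a minimal sketch, using the `y_scores` and `expected` arrays from above, here is what precision and recall look like if we flag only the highest-scoring 5% of the test set." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Flag (roughly) the 5% of the test set with the highest predicted scores\n", "cutoff_top5 = np.percentile(y_scores, 95)\n", "pred_top5 = (y_scores >= cutoff_top5).astype(int)\n", "print('Precision in top 5%: {:.2f}'.format(precision_score(expected, pred_top5)))\n", "print('Recall in top 5%: {:.2f}'.format(recall_score(expected, pred_top5)))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The function below generalizes this idea and plots precision and recall across the full range of cutoffs."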
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def plot_precision_recall_n(y_true, y_prob, model_name):\n", " \"\"\"\n", " y_true: ls\n", " y_prob: ls\n", " model_name: str\n", " \"\"\"\n", " y_score = y_prob\n", " precision_curve, recall_curve, pr_thresholds = precision_recall_curve(y_true, y_score)\n", " precision_curve = precision_curve[:-1]\n", " recall_curve = recall_curve[:-1]\n", " pct_above_per_thresh = []\n", " number_scored = len(y_score)\n", " for value in pr_thresholds:\n", " num_above_thresh = len(y_score[y_score>=value])\n", " pct_above_thresh = num_above_thresh / float(number_scored)\n", " pct_above_per_thresh.append(pct_above_thresh)\n", " pct_above_per_thresh = np.array(pct_above_per_thresh)\n", " plt.clf()\n", " fig, ax1 = plt.subplots()\n", " ax1.plot(pct_above_per_thresh, precision_curve, 'b')\n", " ax1.set_xlabel('percent of population')\n", " ax1.set_ylabel('precision', color='b')\n", " ax1.set_ylim(0,1.05)\n", " ax2 = ax1.twinx()\n", " ax2.plot(pct_above_per_thresh, recall_curve, 'r')\n", " ax2.set_ylabel('recall', color='r')\n", " ax2.set_ylim(0,1.05)\n", " \n", " name = model_name\n", " plt.title(name)\n", " plt.show()\n", " plt.clf()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now plot the precision and recall scores for the full range of k values (this might take some time to run)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_precision_recall_n(expected,y_scores, 'LR')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we define another function, `precision_at_k`, which returns the precision score for specific values of k." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def precision_at_k(y_true, y_scores,k):\n", " \n", " threshold = np.sort(y_scores)[::-1][int(k*len(y_scores))]\n", " y_pred = np.asarray([1 if i >= threshold else 0 for i in y_scores ])\n", " return precision_score(y_true, y_pred)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now compute, e.g., precision at top 1% and precision at top 5%." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p_at_1 = precision_at_k(expected,y_scores, 0.01)\n", "print('Precision at 1%: {:.2f}'.format(p_at_1))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p_at_5 = precision_at_k(expected,y_scores, 0.05)\n", "print('Precision at 5%: {:.2f}'.format(p_at_5))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Baseline \n", "\n", "Finally, it is important to check our model against a reasonable baseline to know how well our model is doing. Without any context, 84% accuracy can sound really great... but it's not so great when you remember that you could do better by declaring everyone a non-recidivist, which would be a useless model. This baseline would be called the *no information rate*.\n", "\n", "In addition to the no information rate, we can check against a *random* baseline by assigning every example a label (positive or negative) completely at random. We can then compute the precision at top 5% for the random model." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "random_score = [np.random.uniform(0,1) for i in enumerate(y_test)] \n", "random_predicted = np.array( [calc_threshold(score,0.5) for score in random_score] )\n", "random_p_at_5 = precision_at_k(expected,random_predicted, 0.05)\n", "random_p_at_5" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# More models\n", "---\n", "\n", "We have only scratched the surface of what we can do with `scikit-learn`. We've only tried one method (logistic regression), and there are plenty more classification algorithms. In the following, we consider decision trees (`DT`), random forests (`RF`), extremely randomized trees (`ET`) and gradient boosting (`GB`) as additional prediction methods.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "clfs = {'DT': DecisionTreeClassifier(max_depth=3),\n", " 'RF': RandomForestClassifier(n_estimators=500, n_jobs=-1),\n", " 'ET': ExtraTreesClassifier(n_estimators=250, n_jobs=-1, criterion='entropy'),\n", " 'GB': GradientBoostingClassifier(learning_rate=0.05, subsample=0.7, max_depth=3, n_estimators=250)}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sel_clfs = ['DT', 'RF', 'ET', 'GB']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will use these methods in a loop that trains one model for each method with the training data and plots the corresponding precision and recall at top k figures with the test data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "max_p_at_k = 0\n", "for clfNM in sel_clfs:\n", " clf = clfs[clfNM]\n", " clf.fit( X_train, y_train )\n", " print(clf)\n", " y_score = clf.predict_proba(X_test)[:,1]\n", " predicted = np.array(y_score)\n", " expected = np.array(y_test)\n", " plot_precision_recall_n(expected,predicted, clfNM)\n", " p_at_5 = precision_at_k(expected,y_score, 0.05)\n", " if max_p_at_k < p_at_5:\n", " max_p_at_k = p_at_5\n", " print('Precision at 5%: {:.2f}'.format(p_at_5))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's explore the models we just built. We can, e.g., extract the decision tree result from the list of fitted models." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "clf = clfs[sel_clfs[0]]\n", "print(clf)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can then print and plot the feature importances for this model, which are stored as the attribute `feature_importances_`. Note that you can explore other models (e.g. the random forest) by extracting the corresponding result from `clfs` as above." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "importances = clf.feature_importances_\n", "indices = np.argsort(importances)[::-1]\n", "\n", "print (\"Feature ranking\")\n", "for f in range(X_test.shape[1]):\n", " print (\"%d. %s (%f)\" % (f + 1, sel_features[f], importances[indices[f]]))\n", "\n", "plt.figure\n", "plt.title (\"Feature Importances\")\n", "plt.bar(range(X_test.shape[1]), importances[indices], color='r', align = \"center\")\n", "plt.xticks(range(X_test.shape[1]), sel_features, rotation=90)\n", "plt.xlim([-1, X_test.shape[1]])\n", "plt.show" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our ML modeling pipeline can be extended in various ways. 
Further steps may include: \n", " \n", "- Creating more features\n", "- Trying more models\n", "- Trying different parameters for our models" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cur.close()\n", "conn.close()" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.4" } }, "nbformat": 4, "nbformat_minor": 4 }