{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "


\n", "Lecture 6: Logistic Regression and Decision Trees\n", "

" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This week we're discussing more classifiers and their applications. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
## Logistic Regression
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Logistic regression, like linear regression, is a generalized linear model. However, the final output of a logistic regression model is not continuous; it's binary (0 or 1). The following sections will explain how this works." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
### What is Conditional Probability?
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Conditional probability is the probability that an event (A) will occur given that some condition (B) is true. For example, say you want to find the probability that a student will take the bus as opposed to walking to class today (A) given that it's snowing heavily outside (B). The probability that the student will take the bus when it's snowing is likely higher than the probability that s/he would take the bus on some other day. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
### An Overview
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The goal of logistic regression is to take a set of datapoints and classify them. This means that we expect to have discrete outputs representing a set of classes. In simple logistic regression, this must be a binary set: our classes must be one of only two possible values. Here are some things that are sometimes modeled as binary classes:\n", "\n", "
- Male or Female\n", "- Rainy or Dry\n", "- Democrat or Republican\n", "\n", "The objective is to find an equation that is able to take input data and classify it into one of the two classes. Luckily, the logistic equation is suited to just such a task. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
### The Logistic Equation
    " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The logistic equation is the basis of the logistic regression model. It looks like this:\n", "![image](https://wikimedia.org/api/rest_v1/media/math/render/svg/5e648e1dd38ef843d57777cd34c67465bbca694f)\n", "\n", "The t in the equation is some linear combination of n variables, or a linear function in an n-dimensional feature space. The formulation of t therefore has the form ax+b. In fitting a logistic regression model, the goal is therefore to minimize error in the logistic equation with the chosen t (of the form ax+b) by tuning a and b. \n", "\n", "\n", "The logistic equation (also known as the sigmoid function) works as follows:\n", "1. Takes an input of n variables\n", "2. Takes a linear combination of the variables as parameter t (this is another way of saying t has the form ax+b)\n", "3. Outputs a value given the input and parameter t\n", "\n", "The output of the logistic equation is always between 0 and 1. \n", "\n", "A visualization of the outputs of the logistic equation is as below (note that this is but one possible output of a logit regression model):\n", "![image](https://upload.wikimedia.org/wikipedia/commons/8/88/Logistic-curve.svg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
### Threshold Value
    " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The final output of a logistic regression model should be a binary set of numbers - that is, 0 or 1. However, you'll notice that the output of the logistic equation is a continuous set of numbers between 0 and 1- the function output itself is not 0 or 1. \n", "\n", "We do change the output to a 0 or 1 by picking a threshold value. This is a value between 0 and 1 such that if f(x) > threshold, we give it the value 1, and otherwise it is 0. \n", "![image](https://wikimedia.org/api/rest_v1/media/math/render/svg/aab892e7cf0d00aa6da3aa051335900ff52d12a0)\n", "\n", "The threshold value is the epsilon value in the equation. Usually, the threshold value is set to 0.5: in binary classification, a probability greater than 0.5 for 1 class guarantees that that is the highest probability- if one probability is greater than 1, then the other must be less than 1 as the sum of the two probabilities must be 1. \n", "\n", "The threshold value epsilon determines two key characteristics of a logistic regression classifier: \n", "\n", "1. Sensitivity\n", "2. Specificity" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
### Sensitivity and Specificity
    " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The Confusion Matrix\n", "![image](http://rasbt.github.io/mlxtend/user_guide/evaluate/confusion_matrix_files/confusion_matrix_1.png)\n", "\n", "Sensitivity, also known as the true positive rate, is the proportion of true positives out of all \"actual positives\" - that is, it is the proportion of positives that are correctly identified as positives.\n", " \n", " Sensitivity = True Positive / (True Positive + False Negatives)\n", "\n", "Specificity, also called the true negative rate, is the proportion of true negatives out of all \"actual negatives\" - that is, it is the proportion of negatives that are correctly identified as negatives.\n", "\n", " Specificity = True Negative / (True Negative + False Positives)\n", "\n", "There is always a trade-off between the two characteristics. Both depend on the threshold value we choose; the higher the threshold, the lower the sensitivity and the higher the specificity. If we have an arbitrarily high threshold value (i.e. 1), all points will be classified as negative; sensitivity = 0 and specificity = 1. The opposite will be true if we set the threshold to be arbitrarily low (i.e. 0). " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
### The ROC Curve
    " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The ROC curve represents how well a model performs in terms of sensitivity and specificity over all possible thresholds. Sensitivity (on the y-axis) is plotted against 1-specificity, or equivalently the false positive rate (on the x-axis) as the threshold value varies from 0 to 1. An example:\n", "![image](https://www.statsdirect.com/help/resources/images/ebx_1266835018.gif)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
### Example 1: Predicting Income from Census Data
    " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll use logistic regression to predict whether annual income is greater than $50k based on census data. You can read more about the dataset here." ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [], "source": [ "# import necessary packages\n", "import pandas as pd\n", "from sklearn.metrics import accuracy_score\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.preprocessing import LabelEncoder\n", "from sklearn.linear_model import LogisticRegression" ] }, { "cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [], "source": [ "inc_data = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data', header = None, names = ['age', 'workclass', 'fnlwgt', 'education', 'education.num', 'marital.status', 'occupation', 'relationship', 'race', 'sex', 'capital.gain', 'capital.loss', 'hours.per.week', 'native.country', 'income'])\n", "\n", "# drop null values\n", "inc_data = inc_data.dropna()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following uses LabelEncoder() in scikit-learn to encode all features to categorical integer values. Many features in this particular dataset, such as race and sex, are represented as strings with a limited number of possible values. LabelEncoder() re-labels these values as integers between 0 and number of classes-1." ] }, { "cell_type": "code", "execution_count": 49, "metadata": {}, "outputs": [], "source": [ "# the column is present in both categorical and numeric form\n", "del inc_data['education']\n", "\n", "# convert all features to categorical integer values\n", "enc = LabelEncoder()\n", "for i in inc_data.columns:\n", " inc_data[i] = enc.fit_transform(inc_data[i])" ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [], "source": [ "# target is stored in y\n", "y = inc_data['income']\n", "\n", "# X contains all other features, which we will use to predict target\n", "X = inc_data.drop('income', axis=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we split the data into train and test sets, where the test set is 30% of the initial dataset." 
] }, { "cell_type": "code", "execution_count": 51, "metadata": {}, "outputs": [], "source": [ "# train/test split\n", "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)" ] }, { "cell_type": "code", "execution_count": 52, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,\n", " intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,\n", " penalty='l2', random_state=None, solver='liblinear', tol=0.0001,\n", " verbose=0, warm_start=False)" ] }, "execution_count": 52, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# build model and fit on train set\n", "logit = LogisticRegression()\n", "logit.fit(X_train, y_train)" ] }, { "cell_type": "code", "execution_count": 53, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([0, 0, 0, ..., 0, 1, 1])" ] }, "execution_count": 53, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# make predictions on test set\n", "pred_logit = logit.predict(X_test)\n", "pred_logit" ] }, { "cell_type": "code", "execution_count": 54, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.8110349063363701" ] }, "execution_count": 54, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# measure accuracy\n", "accuracy_score(y_true = y_test, y_pred = pred_logit)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
### Example 2: Predict Iris Species
    " ] }, { "cell_type": "code", "execution_count": 55, "metadata": {}, "outputs": [], "source": [ "# import necessary packages\n", "import pandas as pd\n", "from sklearn.metrics import accuracy_score\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.linear_model import LogisticRegression" ] }, { "cell_type": "code", "execution_count": 68, "metadata": {}, "outputs": [], "source": [ "from sklearn import datasets\n", "iris = datasets.load_iris()\n", "\n", "#Here we load the built-in iris dataset\n", "X = iris.data[:, :2]\n", "Y = iris.target\n", "\n", "isSetosa = Y == 0\n", "isNot = Y > 0\n", "isSetosa.any()\n", "Y[isSetosa] = 1\n", "Y[isNot] = 0" ] }, { "cell_type": "code", "execution_count": 69, "metadata": { "collapsed": true }, "outputs": [], "source": [ "#Here we create the train/test split\n", "X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3)" ] }, { "cell_type": "code", "execution_count": 70, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,\n", " intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,\n", " penalty='l2', random_state=None, solver='liblinear', tol=0.0001,\n", " verbose=0, warm_start=False)" ] }, "execution_count": 70, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#Building the model\n", "logreg = LogisticRegression()\n", "logreg.fit(X_train, Y_train)" ] }, { "cell_type": "code", "execution_count": 71, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.97777777777777775" ] }, "execution_count": 71, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#Predictions\n", "pred = logreg.predict(X_test)\n", "\n", "#Accuracy\n", "accuracy_score(y_true = Y_test, y_pred = pred)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
## Multinomial Logistic Regression
    " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We won't discuss this in detail here, but it's worth mentioning briefly. Multinomial logistic regression is another classification algorithm. The difference is that the output isn't binary; there can be multiple possible categories for the target, as implied by the name. For example, we can use multinomial regression to predict which movie genre people will like based on their other characteristics. If you're interested in learning how this model works in more detail, there are a lot of good resources on the internet and we encourage you to explore." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
## Decision Trees
    " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The decision tree algorithm can be used to do both classification as well as regression and has the advantage of not assuming a linear model. Decisions trees are usually easy to represent visually which makes it easy to understand how the model actually works. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another frequently used classifier is CART, or classification and regression trees.\n", "![image](https://upload.wikimedia.org/wikipedia/commons/f/f3/CART_tree_titanic_survivors.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
### Geometric Interpretation
    " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![image](https://docs.microsoft.com/en-us/azure/machine-learning/studio/media/algorithm-choice/image5.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
### Mathematical Formulation
    " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The **hard** part is really to construct this tree from the data set. The heart of the CART algorithm lies in deciding how/where to split the data (choosing the right feature). The idea is to associate a **quantitative** measure the quality of a split because then we simply choose the best feature to split.\n", "\n", "A very common measure is the Shannon entropy:\n", "Given a discrete probablity distribution $(p_1, p_2,...p_n)$. The shannon entropy $E(p_1, p_2,...p_n)$ is:\n", "$$-\\sum_{i = 1}^n p_ilog_2(p_i)$$\n", "\n", "The goal of the algorithm is to take the necessary steps to minimize this entropy, by choosing the right features at every stage to accomplish this." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
### Example 1: Are Mushrooms Poisonous or Not?
    " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll use the decision tree classifier to predict whether mushroooms are poisonous. You can read about the dataset we're using here. The data shortens the categorical variables to just letters, so the data overview is especially helpful." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# import necessary packages\n", "import pandas as pd\n", "from sklearn.metrics import accuracy_score\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.preprocessing import LabelEncoder\n", "from sklearn.tree import DecisionTreeClassifier\n", "from sklearn.tree import export_graphviz" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "m_data = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/mushroom/agaricus-lepiota.data', header = None, names = ['class','cap-shape','cap-surface','cap-color','bruises','odor','gill-attachment','gill-spacing','gill-size','gill-color','stalk-shape','stalk-root','stalk-surface-above-ring','stalk-surface-below-ring','stalk-color-above-ring','stalk-color-below-ring','veil-type','veil-color','ring-number','ring-type','spore-print-color','population','habitat'])\n", "\n", "# drop null values\n", "m_data = m_data.dropna()\n", "\n", "# convert all features to categorical integer values\n", "enc = LabelEncoder()\n", "for i in m_data.columns:\n", " m_data[i] = enc.fit_transform(m_data[i])" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "# target is stored in y\n", "y = m_data['class']\n", "\n", "# X contains all other features, which we will use to predict target\n", "X = m_data.drop('class', axis=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following note may seem self-evident, but just to be extra clear:\n", "\n", "In the cell above, we create X so that it contains all features except for the target variable, and we'll make predictions using X. This doesn't have to be the case, and in fact is usually not the best practice; we can pick features that we think are significant rather than using the entire dataset, and doing so often results in more accurate predictions. For simplicity's sake, however, we omit this in this example. " ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "# train/test split\n", "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,\n", " max_features=None, max_leaf_nodes=15, min_impurity_split=1e-07,\n", " min_samples_leaf=1, min_samples_split=2,\n", " min_weight_fraction_leaf=0.0, presort=False, random_state=None,\n", " splitter='best')" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# build model and fit on train set\n", "tree_classifier = DecisionTreeClassifier(max_leaf_nodes=15)\n", "tree_classifier.fit(X_train, y_train)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Python does not have a good method to visually display a decision tree. If you want to see it, run the code below which writes the tree to a file. Then, go here where you will have to copy the file's contents into the webpage and run it." 
] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# creates a file with the decision tree plotted\n", "with open(\"decisiontree.txt\", 'w') as f:\n", " export_graphviz(tree_classifier, out_file=f, feature_names=list(X))" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([0, 0, 1, ..., 0, 1, 1])" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# make predictions on test set\n", "tree_pred = tree_classifier.predict(X_test)\n", "tree_pred" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.99138638228055787" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# measure accuracy\n", "accuracy_score(y_true = y_test, y_pred = tree_pred)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
### Example 2: Predict Higgs Boson Signal
    " ] }, { "cell_type": "code", "execution_count": 72, "metadata": {}, "outputs": [], "source": [ "# import necessary packages\n", "import pandas as pd\n", "from sklearn.metrics import accuracy_score\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.tree import DecisionTreeClassifier" ] }, { "cell_type": "code", "execution_count": 73, "metadata": {}, "outputs": [], "source": [ "# this code is only necessary because of the data is in an arff file\n", "from scipy.io.arff import loadarff\n", "import urllib2\n", "from StringIO import StringIO\n", "data = urllib2.urlopen('https://www.openml.org/data/download/2063675/phpZLgL9q').read(5005537)\n", "dataset = loadarff(StringIO(data))\n", "higgs = pd.DataFrame(dataset[0], columns=dataset[1].names())\n", "\n", "# target is stored in y\n", "Y = higgs['class']\n", "\n", "# X contains all other features, which we will use to predict target\n", "X = higgs.drop('class', axis=1)" ] }, { "cell_type": "code", "execution_count": 74, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,\n", " max_features=None, max_leaf_nodes=15, min_impurity_split=1e-07,\n", " min_samples_leaf=1, min_samples_split=2,\n", " min_weight_fraction_leaf=0.0, presort=False, random_state=None,\n", " splitter='best')" ] }, "execution_count": 74, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# train/test split\n", "X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3)\n", "\n", "# build model and fit on train set\n", "dTree = DecisionTreeClassifier(max_leaf_nodes=15)\n", "dTree.fit(X_train, Y_train)" ] }, { "cell_type": "code", "execution_count": 75, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array(['1', '1', '1', ..., '1', '0', '0'], dtype=object)" ] }, "execution_count": 75, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# make predictions on test set\n", "dTree_pred = dTree.predict(X_test)\n", "dTree_pred" ] }, { "cell_type": "code", "execution_count": 76, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.6762345679012346" ] }, "execution_count": 76, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# measure accuracy\n", "accuracy_score(y_true = Y_test, y_pred = dTree_pred)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.1" } }, "nbformat": 4, "nbformat_minor": 1 }