{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "[Table of Contents](00.00-Learning-ML.ipynb#Table-of-Contents) • [← *Chapter 2 - Classification*](02.00-Classification.ipynb) • [*Chapter 2.02 - Naive Bayes* →](02.02-Naive-Bayes.ipynb)\n", "\n", "---\n", "\n", "# Chapter 2.01 - Dummy Classifiers\n", "\n", "To really understand how classifiers work, we're going to start with two very basic models called dummy classifiers. \n", "\n", "Technically these aren't machine learning models, as they use simple rules defined by the user. As you will see, they provide a good baseline for performance and demonstrate the importance of using various performance measures to evaluate models. If you've ever witnessed someone *'wow'* an audience by describing a predictive model with an impressively high *accuracy* (such as 95%), you will see why this may not be as impressive as it sounds. (Accuracy has a special definition when we are talking about classification.)\n", "\n", "## Mode\n", "\n", "*Mode* is a statistical term for the most frequent value in a set of data. For example, in the set `a, b, b, b, c, d`, the value `b` occurs most frequently, so it is the mode. You can calculate the mode of a given dataset in Python with the `statistics.mode` function:" ] }, { "cell_type": "code", "execution_count": 35, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "'b'" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from statistics import mode\n", "\n", "# our simple data set\n", "data = ['a','b','b','b','c','d']\n", "\n", "mode(data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the above example there are four unique values. In classification, when there are just two unique values in the labels, this is called *binary classification*. For example, consider the set `True, False, False, False, False`. The mode of this set is `False`, and in binary classification this is also known as the *majority class*.\n", "\n", "We can use the mode to create a very simple model for predicting a value (and without requiring any input). In binary classification, this is can be called a *majority class classifier*. Using the example above, a majority class classifier would always predict `False`, and given the example data it would achieve an **accuracy of 80%**! A great achievement for such a simple model.\n", "\n", "This terminology is potentially problematic. When talking about classification, accuracy is a measure of the proportion of predictions that are predicted correctly. The colloquial meaning of accuracy could mislead others about the performance of your model. Consider a set of 100 True and False labels that flag whether a loan has defaulted or not. Perhaps in this set, only 5 of the loans defaulted. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "This terminology is potentially problematic. When talking about classification, accuracy is the proportion of predictions that are correct. The colloquial meaning of accuracy could mislead others about the performance of your model. Consider a set of 100 True and False labels that flag whether a loan has defaulted or not. Perhaps in this set, only 5 of the loans defaulted. With a basic model such as this, we could trivially achieve 95% accuracy by always predicting False.\n", "\n", "#### Additional performance measures\n", "\n", "Using the table below (aptly called a *confusion matrix*), we can define some additional useful measures of performance:\n", "\n", "| | Predicted = True | Predicted = False |\n", "|---|---|---|\n", "| Actual = True | True Positive (TP) | False Negative (FN) |\n", "| Actual = False | False Positive (FP) | True Negative (TN) |\n", "\n", "*Accuracy* measures how often the model is correct, calculated as:\n", "\n", "$$Accuracy=\frac{TP + TN}{TP + TN + FP + FN}$$\n", "\n", "*True Positive Rate - TPR* (also called Recall or Sensitivity) measures how often the model is correct when the actual value is true, calculated as:\n", "\n", "$$True\ Positive\ Rate\ (TPR)=\frac{TP}{FN + TP}$$\n", "\n", "*False Positive Rate - FPR* (also called Fallout) measures how often the model is incorrect when the actual value is false, calculated as:\n", "\n", "$$False\ Positive\ Rate\ (FPR)=\frac{FP}{TN + FP}$$\n", "\n", "*Positive Predictive Value - PPV* (also called Precision) measures the proportion of predictions that are correct when the predicted value is true, calculated as:\n", "\n", "$$Positive\ Predictive\ Value\ (PPV)=\frac{TP}{FP + TP}$$\n", "\n", "There are many other metrics that can be derived from a confusion matrix, but for now we will focus on just these.\n", "\n", "Generally, the goal is to maximise accuracy, TPR and PPV, and minimise FPR.\n", "\n", "Continuing with our loan defaults example above, let's calculate these additional metrics (remembering our model always predicts False):\n", "\n", "| | Predicted = True | Predicted = False |\n", "|---|---|---|\n", "| Actual = True | 0 (TP) | 5 (FN) |\n", "| Actual = False | 0 (FP) | 95 (TN) |\n", "\n", "* We already know accuracy is 95%\n", "* $TPR = \frac{TP}{FN + TP} = \frac{0}{5 + 0} = 0\%$\n", "* $FPR = \frac{FP}{TN + FP} = \frac{0}{95 + 0} = 0\%$\n", "* $PPV = \frac{TP}{FP + TP} = \frac{0}{0 + 0}$, which is undefined because the model never predicts True\n", "\n", "Considering these additional metrics, we can now see that while the model's accuracy is high, it is useless for actually predicting loan defaults. Where this model is useful, however, is as a baseline. Hopefully your real model will achieve higher accuracy (or TPR, or PPV, depending on what is most important)." ] },
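{ "cell_type": "markdown", "metadata": {}, "source": [ "These numbers are simple enough to check with a few lines of arithmetic. The following is a minimal sketch working directly from the confusion matrix counts above (just a sanity check, not how you would normally compute these in practice):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# confusion matrix counts for the always-False loan default model\n", "TP, FN = 0, 5\n", "FP, TN = 0, 95\n", "\n", "accuracy = (TP + TN) / (TP + TN + FP + FN)\n", "tpr = TP / (FN + TP)\n", "fpr = FP / (TN + FP)\n", "# PPV is undefined (0 / 0) when the model never predicts True\n", "ppv = TP / (FP + TP) if (FP + TP) > 0 else float('nan')\n", "\n", "accuracy, tpr, fpr, ppv" ] },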
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Class probability\n", "\n", "Before we continue to *real* models, let's consider one more dummy classifier - the stratified classifier. In statistics, stratified sampling takes samples from each group (class) in proportion to its share of the population. The stratified classifier works the same way, assigning predictions at random according to the probability distribution of the classes in the labels. This classifier is slightly more complex than the mode variant, and can also potentially achieve a higher accuracy.\n", "\n", "Continuing with the example of loan defaults, in our set of 100, 5 default (True labels) and 95 do not (False labels). Notationally, this gives class probabilities of $p(default) = 0.05$ and $p(no\ default) = 0.95$. In other words, for every 100 predictions this model makes, it will randomly select 5 to be True and the rest False.\n", "\n", "Best case, our stratified classifier makes 5 True predictions that align with the 5 actual True values by chance.\n", "\n", "| | Predicted = True | Predicted = False |\n", "|---|---|---|\n", "| Actual = True | 5 (TP) | 0 (FN) |\n", "| Actual = False | 0 (FP) | 95 (TN) |\n", "\n", "* $Accuracy = \frac{TP + TN}{TP + TN + FP + FN} = \frac{5 + 95}{5 + 95 + 0 + 0} = 100\%$\n", "* $TPR = \frac{TP}{FN + TP} = \frac{5}{0 + 5} = 100\%$\n", "* $FPR = \frac{FP}{TN + FP} = \frac{0}{95 + 0} = 0\%$\n", "* $PPV = \frac{TP}{FP + TP} = \frac{5}{5 + 0} = 100\%$\n", "\n", "Worst case, none of the 5 True predictions lands on an actual True value.\n", "\n", "| | Predicted = True | Predicted = False |\n", "|---|---|---|\n", "| Actual = True | 0 (TP) | 5 (FN) |\n", "| Actual = False | 5 (FP) | 90 (TN) |\n", "\n", "* $Accuracy = \frac{TP + TN}{TP + TN + FP + FN} = \frac{0 + 90}{0 + 90 + 5 + 5} = 90\%$\n", "* $TPR = \frac{TP}{FN + TP} = \frac{0}{5 + 0} = 0\%$\n", "* $FPR = \frac{FP}{TN + FP} = \frac{5}{90 + 5} = 5.26\%$\n", "* $PPV = \frac{TP}{FP + TP} = \frac{0}{5 + 0} = 0\%$\n", "\n", "In this example, accuracy can range from 90% (all five True predictions wrong) to 100% (all five right). The intermediate scenarios (1, 2, 3 or 4 true positives) are not equally likely, though: with only 5 actual Trues among 100 labels, most randomly placed True predictions will miss. Averaged over many runs, this classifier achieves an expected accuracy of roughly 90.5%, with TPR, FPR and PPV each around 5%. In other words, it trades a little accuracy against the mode classifier in exchange for non-zero TPR and PPV, so on our additional performance measures it is (on average) the better baseline.\n", "\n", "## Predicting probabilities\n", "\n", "As discussed in Chapter 2's introduction to classification, classifiers are typically able to output a probability (or likelihood) of the positive class. This means that instead of predicting True or False (or 1 or 0), predictions are output as values such as 0.23, 0.67 or 0.94, each representing the probability of a True (1) value occurring.\n", "\n", "Notice that in each confusion matrix above, the model predictions are discrete: either True or False, with nothing in between.\n", "\n", "Before we can begin measuring accuracy (or any of the other metrics) of a predicted probability, we need to define a threshold (called a discrimination threshold) below which we assume probabilities to be False or 0, and above which they become True or 1. For example, if this threshold is 0.5, our values 0.23, 0.67 and 0.94 would become 0, 1 and 1 respectively. If it were 0.75, they would become 0, 0 and 1 respectively.\n", "\n", "But how do we know where to set this threshold? One common method is to maximise what is called the *F1 score*. The F1 score is an alternative accuracy measure which is the harmonic mean of precision (PPV) and recall (TPR), and is calculated as:\n", "\n", "$$\n", "F_1\ score = 2\frac{PPV \times TPR}{PPV + TPR}\n", "$$\n", "\n", "The F1 score is calculated at each candidate threshold (say, from 0.01 to 0.99), and the threshold with the maximal score is the one that best balances $\frac{TP}{FP + TP}$ *and* $\frac{TP}{FN + TP}$ simultaneously." ] },
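{ "cell_type": "markdown", "metadata": {}, "source": [ "As an illustration of this threshold sweep, here is a minimal sketch using made-up labels and predicted probabilities (the numbers are purely hypothetical). It computes the F1 score at each candidate threshold and keeps the threshold with the highest score:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n", "\n", "# made-up labels and predicted probabilities, purely for illustration\n", "y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])\n", "y_prob = np.array([0.10, 0.30, 0.65, 0.40, 0.80, 0.20, 0.55, 0.70, 0.05, 0.35])\n", "\n", "def f1(y, p):\n", "    # precision (PPV) and recall (TPR) from discrete predictions\n", "    tp = np.sum((p == 1) & (y == 1))\n", "    fp = np.sum((p == 1) & (y == 0))\n", "    fn = np.sum((p == 0) & (y == 1))\n", "    if tp == 0:\n", "        return 0.0\n", "    ppv = tp / (tp + fp)\n", "    tpr = tp / (tp + fn)\n", "    return 2 * ppv * tpr / (ppv + tpr)\n", "\n", "# sweep the candidate thresholds and keep the one with the highest F1 score\n", "thresholds = np.arange(0.01, 1.0, 0.01)\n", "scores = [f1(y_true, (y_prob >= t).astype(int)) for t in thresholds]\n", "best_threshold = thresholds[int(np.argmax(scores))]\n", "best_threshold, max(scores)" ] },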
{ "cell_type": "markdown", "metadata": {}, "source": [ "There will also be times where your problem places a different cost on false positives than on false negatives, which would shift the threshold you choose. Wouldn't it be nice if there were a performance measure that is useful for all binary classification problems (that is, without the need to explicitly define a threshold)?\n", "\n", "We can achieve this by calculating the sensitivity (true positive rate) and fallout (false positive rate) of our model for every threshold increment, and comparing this to that of random predictions.\n", "\n", "A random binary prediction carries no information about the actual class, so its TPR and FPR track each other. Observe the TPR and FPR as we change the number of True predictions in a total set of 100 random predictions:\n", "\n", "* 0% True predictions: TPR = 0% and FPR = 0%\n", "* 10% True predictions: TPR = 10% and FPR = 10%\n", "* 20% True predictions: TPR = 20% and FPR = 20%\n", "* ...\n", "* 100% True predictions: TPR = 100% and FPR = 100%\n", "\n", "Plotting the TPR against the FPR of our random predictions forms a straight line from (0,0) to (1,1). This diagonal line is our worst useful baseline. Such a plot is called a *receiver operating characteristic (ROC) curve*, though for random predictions it isn't much of a curve.\n", "\n", "As the discrimination threshold is varied and the resulting TPR and FPR are plotted, a 'good' model will produce a curve reaching up towards the top left corner. The closer the ROC curve is to the top left, the better the model. A perfect model will push this curve all the way to the top left corner, effectively making a right angle.\n", "\n", "If the curve dips below the random baseline, something is likely wrong with the model. If the curve sits entirely below the random baseline, simply inverting the model (replacing all True predictions with False predictions and vice versa) is a trivial improvement.\n", "\n", "The area below the curve is a performance metric called AUC (area under the curve). Because it summarises the trade-off between true positives and false positives across every value of the discrimination threshold, it provides an effective measure of how well a binary classifier can distinguish Trues from Falses, and it lets us measure model performance without explicitly defining a threshold." ] },
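{ "cell_type": "markdown", "metadata": {}, "source": [ "To see how the curve gets traced out, here is a small sketch (reusing the made-up probabilities from the F1 example above) that computes one (FPR, TPR) point for a handful of thresholds. As the threshold is lowered, the point moves from (0, 0) towards (1, 1):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n", "\n", "# made-up labels and predicted probabilities, purely for illustration\n", "y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])\n", "y_prob = np.array([0.10, 0.30, 0.65, 0.40, 0.80, 0.20, 0.55, 0.70, 0.05, 0.35])\n", "\n", "# each threshold gives one (FPR, TPR) point on the ROC curve\n", "for threshold in [0.8, 0.6, 0.4, 0.2]:\n", "    y_pred = (y_prob >= threshold).astype(int)\n", "    tp = np.sum((y_pred == 1) & (y_true == 1))\n", "    fp = np.sum((y_pred == 1) & (y_true == 0))\n", "    tpr = tp / np.sum(y_true == 1)   # sensitivity\n", "    fpr = fp / np.sum(y_true == 0)   # fallout\n", "    print(threshold, round(float(fpr), 2), round(float(tpr), 2))" ] },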
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Implementing dummy classifiers\n", "\n", "Having a baseline as described above is genuinely useful. Luckily, these models are straightforward to implement in Python using the Scikit-learn machine learning package. Conveniently, the `DummyClassifier` class conforms to the same API (i.e. interface) as the rest of the algorithms, so what we learn from applying these baseline models to a dataset is largely transferable to real machine learning!\n", "\n", "First, let's quickly generate some sample data to work with:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.datasets import make_classification\n", "\n", "# create a sample data set, where X are the features and y are the labels\n", "X, y = make_classification(n_samples=100, n_classes=2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can build our first dummy classifier using the mode method described above, and compute the AUC metric:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.5" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from sklearn.dummy import DummyClassifier\n", "from sklearn import metrics\n", "\n", "# define and fit our dummy model\n", "model = DummyClassifier(strategy='most_frequent')\n", "model.fit(X, y)\n", "\n", "# generate predictions, then calculate fallout (FPR) and sensitivity (TPR)\n", "# note: roc_curve returns the false positive rate first\n", "predictions = model.predict(X)\n", "FPR, TPR, thresholds = metrics.roc_curve(y, predictions)\n", "\n", "metrics.auc(FPR, TPR)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And the second dummy classifier, using the probability distribution (stratified) method:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.53041216486594633" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# define and fit our dummy model\n", "dummy_strat = DummyClassifier(strategy='stratified')\n", "dummy_strat.fit(X, y)\n", "\n", "# generate predictions, then calculate fallout (FPR) and sensitivity (TPR)\n", "predictions = dummy_strat.predict(X)\n", "FPR, TPR, thresholds = metrics.roc_curve(y, predictions)\n", "\n", "metrics.auc(FPR, TPR)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that the AUC will increase and decrease if you re-run the stratified dummy classifier. This is because the predictions are assigned randomly according to the class probability distribution, giving a range of possible sensitivity and fallout measures as described above.\n", "\n", "\n", "## Where to now?\n", "\n", "Given the two dummy classifiers as a baseline, our goal with machine learning is to create a model capable of outperforming them. The rest of Chapter 2 explains and demonstrates various approaches (i.e. types of models) to help you achieve this for your given classification problem. The first one we look at takes the same idea of modelling class probabilities that underlies the stratified dummy classifier, and introduces inputs." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "[Table of Contents](00.00-Learning-ML.ipynb#Table-of-Contents) • [← *Chapter 2 - Classification*](02.00-Classification.ipynb) • [*Chapter 2.02 - Naive Bayes* →](02.02-Naive-Bayes.ipynb)\n", "\n", "" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python [default]", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" } }, "nbformat": 4, "nbformat_minor": 1 }