{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "HD5HdShZiqj6" }, "source": [ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/fonnesbeck/Bios8366/blob/master/notebooks/Section6_4-Support-Vector-Machines.ipynb)\n", "\n", "# Supervised Learning: Support Vector Machines" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NXdZmftjiqj9" }, "outputs": [], "source": [ "%matplotlib inline\n", "import numpy as np\n", "import pandas as pd\n", "import matplotlib.pyplot as plt\n", "import seaborn as sns\n", "import warnings\n", "warnings.simplefilter('ignore')\n", "\n", "DATA_URL = 'https://raw.githubusercontent.com/fonnesbeck/Bios8366/master/data/'" ] }, { "cell_type": "markdown", "metadata": { "id": "dvPqoM_Tiqj-" }, "source": [ "The support vector machine (SVM) is a classification method that attempts to find a hyperplane that separates classes of observations in **feature space**.\n", "\n", "In contrast to some other classifications methods we have seen (*e.g.* Bayesian), the SVM does not invoke a probability model for classification; instead, we aim for the direct caclulation of a **separating hyperplane**.\n", "\n", "Consider the *logistic regression model*, which transforms a linear combination of predictors with the logistic function.\n", "\n", "$$g_{\\theta}(x) = \\frac{1}{1+\\exp(-\\theta^{\\prime} x)}$$\n", "\n", "Notice that when our response is $y=1$, we want the product $\\theta^{\\prime} x$ to be a very large, positive value so that $g_{\\theta}(x) \\rightarrow 1$, and when $y=0$, we want this product to be a very large, negative value, so that $g_{\\theta}(x) \\rightarrow 0$." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "fnjheVO1iqj-" }, "outputs": [], "source": [ "invlogit = lambda x, theta: 1. / (1. + np.exp(-x.dot(theta)))\n", "\n", "theta = [2, -0.5]\n", "x = np.c_[np.ones(100), np.linspace(-10,20,100)]\n", "\n", "y = invlogit(x, theta)\n", "\n", "plt.plot(x.dot(theta), y);" ] }, { "cell_type": "markdown", "metadata": { "id": "pPkXNze6iqj_" }, "source": [ "The negative log-likelihood (or **cost function**) for the logistic regression model is as follows:\n", "\n", "$$l(y_i|\\theta,x) = -[y_i \\log g_{\\theta}(x) + (1-y_i)\\log(1-g_{\\theta}(x))]$$\n", "\n", "Consider the case where $y_i=1$. 
This implies that the cost function is:\n", "\n", "$$l(y_i=1|\\theta,x) = - \\log \\left[ \\frac{1}{1+\\exp(-\\theta^{\\prime} x)} \\right]$$" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "U_qP87l_iqj_" }, "outputs": [], "source": [ "plt.plot(x.dot(theta), -np.log(y));" ] }, { "cell_type": "markdown", "metadata": { "id": "OFU37Bz1iqj_" }, "source": [ "and when $y_i=0$:\n", "\n", "$$l(y_i=0|\\theta,x) = - \\log \\left[ 1 - \\frac{1}{1+\\exp(-\\theta^{\\prime} x)} \\right]$$" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "4X6BcD7ciqkA" }, "outputs": [], "source": [ "plt.plot(x.dot(theta), -np.log(1.-y));" ] }, { "cell_type": "markdown", "metadata": { "id": "vaJWhU20iqkA" }, "source": [ "One way to develop a support vector machine is to modify the logistic regression model by substituting a different cost function, which is just a **piecewise linear function**.\n", "\n", "For $y_i=1$:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Tt-Dc93TiqkB" }, "outputs": [], "source": [ "plt.plot(x.dot(theta), -np.log(y))\n", "\n", "hinge_cost = lambda x, theta: np.maximum(0, 1 - x.dot(theta))\n", "\n", "plt.plot(x.dot(theta), hinge_cost(x, theta), 'r-');" ] }, { "cell_type": "markdown", "metadata": { "id": "vZGVAdv1iqkB" }, "source": [ "For $y_i=0$:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "AZgMYfVciqkC" }, "outputs": [], "source": [ "plt.plot(x.dot(theta), -np.log(1-y))\n", "\n", "hinge_cost = lambda x, theta: np.maximum(0, 1 + x.dot(theta))\n", "\n", "plt.plot(x.dot(theta), hinge_cost(x, theta), 'r-');" ] }, { "cell_type": "markdown", "metadata": { "id": "_TkAvRkeiqkC" }, "source": [ "Now consider the estimation of the parameters of a *regularized* logistic regression model. This is typically done by minimizing:\n", "\n", "$$\\min_{\\theta} \\frac{1}{n} \\left[ \\sum_{i=1}^n y_i (-\\log g_{\\theta}(x_i)) + (1-y_i)(-\\log(1-g_{\\theta}(x_i))) \\right] + \\frac{\\lambda}{2n} \\sum_{j=1}^k \\theta^2_j$$\n", "\n", "For the support vector machine, we instead substitute our cost function (which we will call $k$) in place of the logistic regression likelihood:\n", "\n", "$$\\min_{\\theta} \\left[ C \\sum_{i=1}^n y_i k_1(\\theta^{\\prime} x_i) + (1-y_i) k_0(\\theta^{\\prime} x_i) \\right] + \\frac{1}{2}\\sum_{j=1}^k \\theta^2_j$$\n", "\n", "where the parameter $C$ plays a role equivalent to $1/\\lambda$." ] }, { "cell_type": "markdown", "metadata": { "id": "cr_g4mAaiqkC" }, "source": [ "Notice that to make these cost functions $k$ small, we want $\\theta^{\\prime} x \\ge 1$ or $\\theta^{\\prime} x \\le -1$ rather than just being greater than or less than zero, for $y=1$ or $y=0$, respectively. If we set the parameter $C$ very large, we would want the summation term to be equal to or close to zero in order to minimize the overall optimization objective.\n", "\n", "This objective then essentially becomes:\n", "\n", "$$\\min_{\\theta} \\frac{1}{2} \\sum_{j=1}^k \\theta^2_j$$\n", "\n", "subject to: \n", "\n", "$$\\begin{aligned}\\theta^{\\prime} x_i \\ge 1 &\\text{ if } y_i=1 \\\\\n", "\\theta^{\\prime} x_i \\le -1 &\\text{ if } y_i=0\n", "\\end{aligned}$$" ] }, { "cell_type": "markdown", "metadata": { "id": "x38moWZwiqkC" }, "source": [ "Consider a dataset with two linearly separable groups." 
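] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before plotting such a dataset, here is a minimal sketch (assuming `scipy` is available) of solving the constrained objective above with a general-purpose optimizer on a tiny hand-made dataset. scikit-learn uses specialized solvers, so this is purely illustrative; the toy data and starting values are arbitrary." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Purely illustrative sketch: minimize 0.5*||theta||^2 subject to the margin\n", "# constraints above, on a tiny hand-made dataset. scikit-learn uses\n", "# specialized solvers; here theta = (intercept, theta_1, theta_2) and the\n", "# intercept is left out of the penalty.\n", "from scipy.optimize import minimize\n", "\n", "X_toy = np.c_[np.ones(4), [-1, -1, 1, 2], [-1, -2, 1, 1]]\n", "y_toy = np.array([0, 0, 1, 1])\n", "signs = np.where(y_toy == 1, 1, -1)\n", "\n", "objective = lambda theta: 0.5 * np.sum(theta[1:] ** 2)\n", "constraints = {'type': 'ineq', 'fun': lambda theta: signs * X_toy.dot(theta) - 1}\n", "\n", "minimize(objective, x0=np.zeros(3), constraints=constraints).x.round(2)"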
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "1D5mXouRiqkC" }, "outputs": [], "source": [ "g1 = np.random.multivariate_normal((-1,-1), np.eye(2)*0.2, 10)\n", "g2 = np.random.multivariate_normal((1,1), np.eye(2)*0.2, 10)\n", "\n", "plt.figure()\n", "plt.scatter(*g1.T, color='r')\n", "plt.scatter(*g2.T, color='b')\n", "\n", "plt.xlim(-2,2); plt.ylim(-2,2);" ] }, { "cell_type": "markdown", "metadata": { "id": "M_CcuG_viqkD" }, "source": [ "One possible separation is a line that passes very close to points in both groups." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "EeUjGxdRiqkD" }, "outputs": [], "source": [ "x,y = np.transpose([g1[np.where(g1.T[1]==g1.max(0)[1])[0][0]], \n", " g2[np.where(g2.T[1]==g2.min(0)[1])[0][0]]])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "-lOBpPdTiqkD" }, "outputs": [], "source": [ "plt.scatter(*g1.T, color='r')\n", "plt.scatter(*g2.T, color='b')\n", "\n", "x,y = np.transpose([g1[np.where(g1.T[1]==g1.max(0)[1])[0][0]], \n", " g2[np.where(g2.T[1]==g2.min(0)[1])[0][0]]])\n", "b0,b1 = np.linalg.lstsq(np.c_[[1,1],x], y, rcond=None)[0]\n", "xspace = np.linspace(-3,3,100)\n", "plt.plot(xspace, b0 + (b1-.1)*xspace, 'k--')\n", "plt.xlim(-2,2); plt.ylim(-2,2);" ] }, { "cell_type": "markdown", "metadata": { "id": "1Syof6HGiqkD" }, "source": [ "This seems like a poor choice of decision boundary, even though it separates the groups, because it may not be a *robust* solution. SVM avoids this by establishing a **margin** between the decision boundary and the nearest points in each group. This margin is maximized under SVM, and is partly the result of using 1 and -1 as the thresholds for the cost function, rather than zero." ] }, { "cell_type": "markdown", "metadata": { "id": "kQZoXbm_iqkD" }, "source": [ "## Large-margin Classification\n", "\n", "To understand how SVM incorporates a margin into its decision boundary, it helps to re-write our objective function in terms of the norm (length) of the parameter vector:\n", "\n", "$$\\min_{\\theta} \\frac{1}{2} \\sum_{j=1}^k \\theta^2_j = \\min_{\\theta} \\frac{1}{2} ||\\theta||^2$$\n", "\n", "Recall that when we take the inner product of two vectors, we are essentially projecting the values of one vector onto the other, in order to add them. In the case of our inner product $\\theta^{\\prime} x_i$, we are projecting the ith component of $x$ onto the parameter vector $\\theta$. We can therefore re-write this inner product in terms of multiplying vector lengths:\n", "\n", "$$\\theta^{\\prime} x_i = p_i ||\\theta||$$\n", "\n", "where $p_i$ is the **projection** of $x_i$ onto $\\theta$. The objective function now becomes:\n", "\n", "$$\\min_{\\theta} \\frac{1}{2} ||\\theta||^2$$\n", "$$\\begin{aligned}\n", "\\text{subject to }p_i ||\\theta|| \\ge 1 &\\text{ if } y_i=1 \\\\\n", "p_i ||\\theta|| \\le -1 &\\text{ if } y_i=0\n", "\\end{aligned}$$" ] }, { "cell_type": "markdown", "metadata": { "id": "bdcu_ArtiqkD" }, "source": [ "In order to satisfy this criterion for a given parameter vector $\\theta$, we want the $p_i$ to be *as large as possible*. 
However, when the decision boundary is close to points in the dataset, the corresponding $p_i$ values will be very small, since they are being projected onto the $\\theta$ vector, which is perpendicular to the decision boundary.\n", "\n", "Here is a simple graphical illustration of the difference between two boundary choices, in terms of $p_i$ values.\n", "\n", "First, a boundary choice that passes closely to the points of each class:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Ev-QsSNiiqkE" }, "outputs": [], "source": [ "frame = plt.gca()\n", "frame.axes.yaxis.set_ticklabels([])\n", "frame.axes.xaxis.set_ticklabels([])\n", "\n", "x1 = -1, 0\n", "x2 = 1, 1\n", "\n", "plt.scatter(*x1, s=300, marker='o')\n", "plt.scatter(*x2, s=300, marker='o', color='r')\n", "plt.plot([-1.5, 1.5],[0,1], 'k-');" ] }, { "cell_type": "markdown", "metadata": { "id": "LYmcSkItiqkE" }, "source": [ "The vector of parameters $\\theta$ of the hyperplane is the **normal vector**, and it is *orthogonal* to the hyperplane surface that we are using as a decision boundary:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "An43AulyiqkE" }, "outputs": [], "source": [ "plt.figure()\n", "plt.scatter(*x1, s=300, marker='o')\n", "plt.scatter(*x2, s=300, marker='o', color='r')\n", "plt.plot([-1.5, 1.5],[0,1], 'k-')\n", "plt.plot([-.5, .75], [1, 0], 'k--')\n", "plt.annotate(r\"$\\theta$\", xy=(-0.4, 1), fontsize=20);" ] }, { "cell_type": "markdown", "metadata": { "id": "JLtlJfz5iqkE" }, "source": [ "In order to see what the $p_i$ values will be, we drop perpendicular lines down to the parameter vector $\\theta$. Notice that for this decision boundary, the resulting $p_i$ are quite small (either positive or negative). In order to satisfy our constraint, this will force $||\\theta||$ to be large, which is not desirable given our objective." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ZzCqkZuxiqkE" }, "outputs": [], "source": [ "plt.figure()\n", "plt.scatter(*x1, s=300, marker='o')\n", "plt.scatter(*x2, s=300, marker='o', color='r')\n", "plt.plot([-1.5, 1.5],[0,1], 'k-')\n", "plt.plot([-.5, .75], [1, 0], 'k--')\n", "plt.annotate(r\"$\\theta$\", xy=(-0.4, 1), fontsize=20)\n", "\n", "plt.arrow(-1, 0, 3*(.35), .35, fc=\"b\", ec=\"b\", head_width=0.07, head_length=0.2)\n", "plt.arrow(1, 1, -3*(.28), -.28, fc=\"r\", ec=\"r\", head_width=0.07, head_length=0.2);" ] }, { "cell_type": "markdown", "metadata": { "id": "gHzlXcrziqkE" }, "source": [ "Now consider another decision boundary, which intuitively appears to be a better choice." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2poSo6jdiqkE" }, "outputs": [], "source": [ "plt.figure()\n", "plt.scatter(*x1, s=300, marker='o')\n", "plt.scatter(*x2, s=300, marker='o', color='r')\n", "plt.plot([-.5,.5], [1.5,-.5], 'k-');" ] }, { "cell_type": "markdown", "metadata": { "id": "NLsDf-LwiqkE" }, "source": [ "We can confirm this in terms of our objective function, by showing the corresponding projections $p_i$ to be large, which allows our parameter vector norm to be smaller." 
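] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next cell shows this geometrically. As a quick numeric companion (a sketch only: the two normal directions below are eyeballed from the plots, not fitted), we can compute $p_i = \\theta^{\\prime} x_i / ||\\theta||$ for the two example points under each candidate boundary and see that the better boundary yields larger projections." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch: signed projections of the two example points onto a normal vector\n", "# theta for each candidate boundary. The theta directions are eyeballed from\n", "# the plots (assumed values), so the numbers are only illustrative.\n", "points = np.array([x1, x2])\n", "theta_close = np.array([1., -3.])   # roughly normal to the first boundary\n", "theta_wide = np.array([2., 1.])     # roughly normal to the second boundary\n", "\n", "for label, th in [('close boundary', theta_close), ('wide margin', theta_wide)]:\n", "    p = points.dot(th) / np.linalg.norm(th)\n", "    print(label, np.round(p, 2))"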
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ogHm9NjGiqkE" }, "outputs": [], "source": [ "plt.figure()\n", "plt.scatter(*x1, s=300, marker='o')\n", "plt.scatter(*x2, s=300, marker='o', color='r')\n", "plt.plot([-.5,.5], [1.5,-.5], 'k-')\n", "plt.plot([-1, 1], [-.5, 1.7], 'k--')\n", "plt.annotate(r\"$\\theta$\", xy=(0.6, 1.5), fontsize=20)\n", "\n", "plt.arrow(-1, 0, .1, -.2, fc=\"b\", ec=\"b\", head_width=0.07, head_length=0.1)\n", "plt.arrow(1, 1, -.2, .37, fc=\"r\", ec=\"r\", head_width=0.07, head_length=0.1);" ] }, { "cell_type": "markdown", "metadata": { "id": "NKRULXR6iqkF" }, "source": [ "Thus, the values of $\\{p_i\\}$ define a *margin* that we are attempting to maximize to aid robust classificaction." ] }, { "cell_type": "markdown", "metadata": { "id": "6216CwkciqkF" }, "source": [ "## Feature Expansion\n", "\n", "In general, when the number of sample points is smaller than the dimension, you can always find a perfect separating hyperplane. On the other hand, when the number of points is large relative to the number of dimensions it is usually impossible.\n", "\n", "One way, then, of potentially improving a classifying hyperplane is to increase the dimension of the variable space to create a feature space. One easy way of expanding features is to include transformations of existing variables, such as polynomial expansion.\n", "\n", "Let's consider the simplest possible example of two linearly-inseparable classes, using just a single dimensions. Here we have red and blue points distributed along a line." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "TmksuxgLiqkF" }, "outputs": [], "source": [ "reds = np.array([1, 2, 2.5])\n", "blues = np.array([-2, -1.4, 0.2, 3.1, 5])\n", "\n", "plt.figure()\n", "plt.scatter(reds, [0]*len(reds), color='r')\n", "plt.scatter(blues, [0]*len(blues), color='b');" ] }, { "cell_type": "markdown", "metadata": { "id": "zyS1X7goiqkF" }, "source": [ "Clearly, it is impossible to draw a straight line anywhere that will separate the two classes. However, if we create a feature that is just a quadratic function of the original data, the classes become linearly separable." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "8QWh0tsviqkF" }, "outputs": [], "source": [ "func = lambda x: (x - 2) **2\n", "\n", "red_features = [reds, func(reds)]\n", "blue_features = [blues, func(blues)]\n", "\n", "plt.figure()\n", "plt.scatter(*red_features, color='r')\n", "plt.scatter(*blue_features, color='b')\n", "\n", "xvals = np.linspace(-3, 6)\n", "plt.plot(xvals, 2 - 0.5*xvals, 'k--');" ] }, { "cell_type": "markdown", "metadata": { "id": "_M5SBl2siqkF" }, "source": [ "More generally, we can create a higher-order polynomial function to use as a decision boundary. For example,\n", "\n", "$$y = \\left\\{ \\begin{aligned} 1 &\\text{if } \\theta_0 + \\theta_1 x_1 + \\theta_2 x_2 + \\theta_3 x_1 x_2 + \\theta_4 x_1^2 + \\ldots \\ge 0 \\\\\n", "0 &\\text{ otherwise}\\end{aligned}\\right.$$\n", "\n", "But, this mapping can substantially *increase the number of features* to consider, and calculating all the polynomial terms can be expensive.\n", "\n", "## Kernels\n", "\n", "An alternative is to employ a function that measures the similarity between two points in the feature space. 
Generically, such functions are called **kernels**, and they are characterized by being positive and symmetric, in the sense that for kernel $k$, $k(x,x^{\\prime}) = k(x^{\\prime}, x)$ (see [Mercer's Theorem](http://www.wikiwand.com/en/Mercer's_theorem)).\n", "\n", "You can think of kernels as dot products where we can \"cheat\" and calculate the value of the dot product between two points without having to explicitly calculate all their feature values. This shortcut is generally referred to as the **kernel trick**\n" ] }, { "cell_type": "markdown", "metadata": { "id": "Jsu6iJB0iqkF" }, "source": [ "To motivate the need for kernels, let's generate some simulated data which is not linearly separable:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "XchlM1BwiqkF" }, "outputs": [], "source": [ "from sklearn.datasets import make_circles\n", "X, y = make_circles(100, factor=.1, noise=.1)\n", "\n", "plt.figure()\n", "plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring');" ] }, { "cell_type": "markdown", "metadata": { "id": "m2jNF180iqkF" }, "source": [ "Clearly, no linear discrimination will ever separate these data. One way we can adjust this is to apply a functional transformation of the input data.\n", "\n", "One common kernel is the Gaussian (radial basis function):\n", "\n", "$$k(x, x^{\\prime}) = \\exp\\left[-\\frac{||x-x^{\\prime}||^2}{2 \\sigma^2}\\right]$$" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "r3nvR3AdiqkF" }, "outputs": [], "source": [ "r = np.exp(-(X[:, 0] ** 2 + X[:, 1] ** 2))" ] }, { "cell_type": "markdown", "metadata": { "id": "z8VPWLq-iqkF" }, "source": [ "If we plot our data after being transformed by `r`, we can see that the data becomes linearly separable." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "0IJ94M1wiqkF" }, "outputs": [], "source": [ "from ipywidgets import interact\n", "from mpl_toolkits.mplot3d import Axes3D\n", "\n", "@interact(elev=(-90, 90), azim=(-180, 180))\n", "def plot_3D(elev=30, azim=30):\n", " fig = plt.figure(figsize=(10,8))\n", " ax = fig.add_subplot(111, projection='3d')\n", " ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50, cmap='spring')\n", " ax.view_init(elev=elev, azim=azim)\n", " ax.set_xlabel('x')\n", " ax.set_ylabel('y')\n", " ax.set_zlabel('r')" ] }, { "cell_type": "markdown", "metadata": { "id": "7TxaducUiqkF" }, "source": [ "Notice that when $x$ and $x^{\\prime}$ are close to one another, the numerator approaches zero and $k(x,x^{\\prime}) \\approx 1$, while when they are far apart the numerator becomes large and $k(x,x^{\\prime}) \\approx 0$. The parameter $\\sigma$ controls how quickly an increased distance causes the value of the kernel to fall toward zero.\n", "\n", "If we associate a kernel with *each point* for a particular group that we are using as training examples, our classification function becomes:\n", "\n", "$$y = \\left\\{ \\begin{aligned} 1 &\\text{if } \\theta_0 + \\theta_1 k(x,x_1) + \\theta_2 k(x,x_2) + \\ldots \\ge 0\\\\\n", "0 &\\text{ otherwise}\\end{aligned}\\right.$$\n", "\n", "Consider particular values for the parameters, such as $\\theta_0=-0.5$ and $\\theta_i=1, \\, i=1,2,\\ldots$. 
This would result in the function evaluating to approximately 0.5 for a location that is close to any of the points in the set, and to -0.5 for locations that are reasonably far from all the points (as determined by the value of $\\sigma$).\n", "\n", "For each observation $x_i$ in our dataset, we can calculate its similarity to each training point via the selected kernel:\n", "\n", "$$f_i = \\left[\\begin{array}{c}\n", "k(x_i, x_0) \\\\\n", "k(x_i, x_1) \\\\\n", "k(x_i, x_2) \\\\\n", "\\vdots \\\\\n", "k(x_i, x_n)\n", "\\end{array}\\right]$$\n", "\n", "Notice that, under the Gaussian kernel at least, there will be one element $k(x_i, x_i)$ that evaluates to 1.\n", "\n", "To use the SVM, we use this $f \\in \\mathbb{R}^{n+1}$ to calculate the inner product $\\theta^{\\prime} f$ and predict $y_i=1$ if $\\theta^{\\prime} f_i \\ge 0$. We obtain the parameters for $\\theta$ by minimizing:\n", "\n", "$$\\min_{\\theta} \\left[ C \\sum_{i=1}^n y_i k_1(\\theta^{\\prime} f_i) + (1-y_i) k_0(\\theta^{\\prime} f_i) \\right] + \\frac{1}{2}\\sum_{j=1}^k \\theta^2_j$$\n" ] }, { "cell_type": "markdown", "metadata": { "id": "T0eCOQ5TiqkG" }, "source": [ "### Regularization and soft margins\n", "\n", "There remains a choice to be made for the values of the SVM parameters. Recall $C$, which corresponds to the inverse of the regularization parameter in a penalized regression model such as the lasso. This choice of $C$ involves a **bias-variance tradeoff**:\n", "\n", "* large C = low bias, high variance\n", "* small C = high bias, low variance\n", "\n", "In a support vector machine, regularization results in a **soft margin** that allows some points to cross the optimal decision boundary (resulting in misclassification for those points). As $C$ gets smaller, the margin becomes more stable, since more points are allowed to influence its position.\n", "\n", "We can think of the amount of regularization ($1/C$) as a \"budget\" for permitting points to exceed the margin, and we can tune $C$ to determine the optimal hyperplane.\n", "\n", "Similarly, if we are using the Gaussian kernel, we must choose a value for $\\sigma^2$. When $\\sigma^2$ is large, then features are considered similar over greater distances, resulting in a smoother decision boundary, while for smaller $\\sigma^2$, similarity falls off quickly with distance.\n", "\n", "* large $\\sigma^2$ = high bias, low variance\n", "* small $\\sigma^2$ = low bias, high variance" ] }, { "cell_type": "markdown", "metadata": { "id": "ekBgqlxpiqkG" }, "source": [ "### Linear kernel\n", "\n", "The simplest choice of kernel is to use no kernel at all, but rather to simply use the **linear combination** of the features themselves as the kernel. Hence,\n", "\n", "$$y = \\left\\{ \\begin{aligned} 1 &\\text{if } \\theta^{\\prime} x \\ge 0\\\\\n", "0 &\\text{ otherwise}\\end{aligned}\\right.$$\n", "\n", "This approach is useful when there are a *large number of features*, but the *size of the dataset is small*. In this case, a simple linear decision boundary may be appropriate given that there is relatively little data. If the reverse is true, where there are a small number of features and plenty of data, a Gaussian kernel may be more appropriate, as it allows for a more complex decision boundary." ] }, { "cell_type": "markdown", "metadata": { "id": "4f6rOUkXiqkG" }, "source": [ "## Multi-class Classification\n", "\n", "In the exposition above, we have addressed binary classification problems. The SVM can be generalized to multi-class classification. 
This involves training $K$ binary classifiers, where each of $k=1,\\ldots,K$ classes is trained against the remaining classes pooled into a single group (\"one-versus-rest\"). Then for each point, we select the class for which the inner product $\\theta_k^{\\prime} x$ is largest." ] }, { "cell_type": "markdown", "metadata": { "id": "yAeq2os2iqkG" }, "source": [ "## Data Preprocessing\n", "\n", "It is important with many kernels to **scale** the features prior to using them in an SVM. This is because features which are numerically large relative to the others will tend to dominate the norm. So that each feature is able to contribute equally to the selection of the decision boundary, we want them all to have approximately the same range.\n", "\n", "In general, standardization of datasets is a common practice for statistical learning algorithms. We often ignore the shape of the data distribution and simply center each feature on its mean, then scale it by dividing by its standard deviation (unless the feature is constant). This is important because the objective function in several learning algorithms (*e.g.* the RBF kernel of Support Vector Machines or the L1 and L2 regularizers of linear models) assumes that all features are centered around zero and have variances of the same order. If a feature has a variance that is orders of magnitude larger than the others, it might dominate the objective function and make the estimator unable to learn from other features.\n", "\n", "Scikit-learn's `preprocessing` module provides a `scale` function to perform this operation on a single array-like dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "JYUvAHUuiqkG" }, "outputs": [], "source": [ "from sklearn import preprocessing\n", "\n", "X = np.array([[ 1., -1., 2.],\n", " [ 2., 0., 0.],\n", " [ 0., 1., -1.]])\n", "X_scaled = preprocessing.scale(X)\n", "X_scaled " ] }, { "cell_type": "markdown", "metadata": { "id": "DHp2__fuiqkG" }, "source": [ "Scaled data has zero mean and unit variance:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Ma6WipzKiqkG" }, "outputs": [], "source": [ "X_scaled.mean(0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "mycbEwwCiqkG" }, "outputs": [], "source": [ "X_scaled.std(0)" ] }, { "cell_type": "markdown", "metadata": { "id": "YFNRKjgtiqkG" }, "source": [ "The `preprocessing` module also provides a utility class called `StandardScaler` that allows for the computation of the mean and standard deviation on a training set. This allows one to later *reapply* the same transformation on validation and test sets." 
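] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The cells below walk through basic usage. As a preview of why this matters in practice, here is a minimal sketch (on made-up data) of wrapping `StandardScaler` together with a classifier in a `Pipeline`, so that the scaling parameters are learned from the training split only and then reapplied to the held-out split." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative sketch on synthetic data: a Pipeline fits the scaler on the\n", "# training split only, then reapplies the same transformation to test data.\n", "from sklearn.pipeline import make_pipeline\n", "from sklearn.svm import SVC\n", "from sklearn.model_selection import train_test_split\n", "\n", "rng = np.random.RandomState(42)\n", "X_demo = rng.normal(size=(100, 3)) * [1, 10, 100]   # features on very different scales\n", "y_demo = (X_demo[:, 0] + 0.01 * X_demo[:, 2] > 0).astype(int)\n", "\n", "X_tr, X_te, y_tr, y_te = train_test_split(X_demo, y_demo, random_state=0)\n", "pipe = make_pipeline(preprocessing.StandardScaler(), SVC(kernel='rbf'))\n", "pipe.fit(X_tr, y_tr)\n", "pipe.score(X_te, y_te)"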
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "lryQ9fA_iqkG" }, "outputs": [], "source": [ "scaler = preprocessing.StandardScaler().fit(X)\n", "scaler" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "flU_teD_iqkG" }, "outputs": [], "source": [ "scaler.mean_ " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "A3N7NgjXiqkG" }, "outputs": [], "source": [ "scaler.scale_ " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "gbN54qz4iqkG" }, "outputs": [], "source": [ "scaler.transform(X) " ] }, { "cell_type": "markdown", "metadata": { "id": "ybymMz2YiqkH" }, "source": [ "So then, for new data, we can simply apply the `scaler` object's `transform` method:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "IiqDK5seiqkH" }, "outputs": [], "source": [ "scaler.transform([[-1., 1., 0.]]) " ] }, { "cell_type": "markdown", "metadata": { "id": "-XCfl8HEiqkH" }, "source": [ "Optionally, one can disable either centering or scaling by passing `with_mean=False` or `with_std=False`, respectively." ] }, { "cell_type": "markdown", "metadata": { "id": "ctDkwauCiqkH" }, "source": [ "### Range scaling\n", "\n", "An alternative standardization is scaling features to lie between a given minimum and maximum value (typically between zero and one). This is often the case where we want robustness to very small standard deviations of features or we want to preserve zero entries in sparse data.\n", "\n", "The `MinMaxScaler` provides this scaling." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "GMiN9txtiqkH" }, "outputs": [], "source": [ "min_max_scaler = preprocessing.MinMaxScaler()\n", "\n", "min_max_scaler.fit_transform(X)" ] }, { "cell_type": "markdown", "metadata": { "id": "YsdU1KLkiqkH" }, "source": [ "The same instance of the transformer can then be applied to some new test data, which results in the same scaling and shifting operations:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "RLG3LihgiqkH" }, "outputs": [], "source": [ "X_test = np.array([[ -3., -1., 4.]])\n", "min_max_scaler.transform(X_test)" ] }, { "cell_type": "markdown", "metadata": { "id": "08BZMtDyiqkH" }, "source": [ "### Normalization\n", "\n", "Normalization is the process of scaling individual samples to have unit norm. This is useful if you plan to use a quadratic function such as the dot-product or any other kernel to quantify the similarity of any pair of samples.\n", "\n", "The function `normalize` performs this operation on a single array-like dataset, either using the l1 or l2 norms:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Q4Awnz_GiqkH" }, "outputs": [], "source": [ "preprocessing.normalize(X, norm='l2')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "8ZQVNUraiqkH" }, "outputs": [], "source": [ "preprocessing.normalize(X, norm='l1')" ] }, { "cell_type": "markdown", "metadata": { "id": "7EeSaxWdiqkH" }, "source": [ "As with scaling, there is also a `Normalizer` class that can be used to establish normalization with respect to a training set." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Pv_RRoVgiqkH" }, "outputs": [], "source": [ "normalizer = preprocessing.Normalizer().fit(X)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "WHLh6YCNiqkH" }, "outputs": [], "source": [ "normalizer.transform(X_test)" ] }, { "cell_type": "markdown", "metadata": { "id": "_9cCrnVBiqkH" }, "source": [ "### Categorical feature encoding\n", "\n", "Often features are not given as continuous values, but rather as categorical classes. For example, variables may be defined as `[\"male\", \"female\"]`, `[\"Europe\", \"US\", \"Asia\"]`, `[\"Disease A\", \"Disease B\", \"Disease C\"]`. Such features can be efficiently coded as integers, for instance `[\"male\", \"US\", \"Disease B\"]` could be expressed as `[0, 1, 1]`.\n", "\n", "Unfortunately, an integer representation can not be used directly with estimators in scikit-learn, because these expect *continuous* input, and would therefore interpret the categories as being ordered, which for the above examples, would be inappropriate.\n", "\n", "One approach is to use a \"one-of-K\" or \"one-hot\" encoding, which is implemented in `OneHotEncoder`. This estimator transforms a categorical feature with `m` possible values into `m` binary features, with only one active." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "sls6jq3YiqkH" }, "outputs": [], "source": [ "enc = preprocessing.OneHotEncoder()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "zFr5xTTMiqkH" }, "outputs": [], "source": [ "enc.fit([[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]]) " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "mrxXmmAkiqkH" }, "outputs": [], "source": [ "enc.transform([[0, 1, 3]]).toarray()" ] }, { "cell_type": "markdown", "metadata": { "id": "hIAHa36ziqkH" }, "source": [ "By default, the cardinality of each feature is inferred automatically from the dataset; this can be manually overriden using the `n_values` argument." ] }, { "cell_type": "markdown", "metadata": { "id": "_p6OMIUFiqkH" }, "source": [ "`LabelBinarizer` is a utility class to help create a label indicator matrix from a list of multi-class labels:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "f3D6N0t2iqkI" }, "outputs": [], "source": [ "lb = preprocessing.LabelBinarizer()\n", "lb.fit([1, 2, 6, 4, 2])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "m1Dg3pesiqkI" }, "outputs": [], "source": [ "lb.classes_" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "DMqI6z8EiqkI" }, "outputs": [], "source": [ "lb.transform((1,4))" ] }, { "cell_type": "markdown", "metadata": { "id": "-z-TXwpgiqkI" }, "source": [ "For multiple labels per instance, use MultiLabelBinarizer:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "FNaikNEGiqkI" }, "outputs": [], "source": [ "lb = preprocessing.MultiLabelBinarizer()\n", "lb.fit_transform([(1, 2), (3,)])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "9Gu88xBIiqkI" }, "outputs": [], "source": [ "lb.classes_" ] }, { "cell_type": "markdown", "metadata": { "id": "XOBuTVrmiqkI" }, "source": [ "`LabelEncoder` is a utility class to help normalize labels such that they contain only consecutive values between 0 and `n_classes-1`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NXGq5gfziqkI" }, "outputs": [], "source": [ "le = preprocessing.LabelEncoder()\n", "le.fit([1,2,2,6])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "RhSb9UphiqkI" }, "outputs": [], "source": [ "le.classes_" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "CNhccMoTiqkI" }, "outputs": [], "source": [ "le.transform([1, 1, 2, 6])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "g2YV50PpiqkI" }, "outputs": [], "source": [ "le.inverse_transform([0, 0, 1, 2])" ] }, { "cell_type": "markdown", "metadata": { "id": "p4UtLVrHiqkI" }, "source": [ "## Missing Data Imputation\n", "\n", "Missing data is a common problem in most real-world scientific datasets. While the best way for dealing with missing data will always be preventing their occurrence in the first place, it usually can't be helped, particularly when data are collected passively or voluntarily, or when data collection and recording is distributed among several people. There are a variety of ways for dealing with missing data, from the very naïve to the very sophisticated, and unfortunately the more common approaches tend to be *ad hoc* and will usually do more harm than good. \n", "\n", "It turns out that more robust methods for imputation are not as difficult to implement as they first appear to be. Two of the best ones are Bayesian imputation and multiple imputation. In this section, we will use **multiple imputation** to account for missing data in a regression analysis. " ] }, { "cell_type": "markdown", "metadata": { "id": "F-yXGEajiqkI" }, "source": [ "As a motivating example, we will use a dataset of educational outcomes for children with hearing impairment. Here, we are interested in determining factors that are associated with better or poorer learning outcomes. \n", "\n", "![hearing aid](https://github.com/fonnesbeck/Bios8366/blob/master/notebooks/images/hearing_aid.jpg?raw=1)\n", "\n", "There is a suite of available predictors, including: \n", "\n", "* gender (`male`)\n", "* number of siblings in the household (`siblings`)\n", "* index of family involvement (`family_inv`)\n", "* whether the primary household language is not English (`non_english`)\n", "* presence of a previous disability (`prev_disab`)\n", "* non-white race (`non_white`)\n", "* age at the time of testing (in months, `age_test`)\n", "* whether hearing loss is not severe (`non_severe_hl`)\n", "* whether the subject's mother obtained a high school diploma or better (`mother_hs`)\n", "* whether the hearing impairment was identified by 3 months of age (`early_ident`)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "J0-yqtiBiqkI" }, "outputs": [], "source": [ "try:\n", " test_scores = pd.read_csv('../data/test_scores.csv', index_col=0)\n", "except FileNotFoundError:\n", " test_scores = pd.read_csv(DATA_URL + 'test_scores.csv', index_col=0)\n", "test_scores.head()" ] }, { "cell_type": "markdown", "metadata": { "id": "2jdI8A9eiqkI" }, "source": [ "For three variables in the dataset, there are incomplete records." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "pwxueur5iqkI" }, "outputs": [], "source": [ "test_scores.isnull().sum(0)" ] }, { "cell_type": "markdown", "metadata": { "id": "sQX5IeN_iqkI" }, "source": [ "### Strategies for dealing with missing data\n", "\n", "The easiest (and worst) way to deal with missing data is to **ignore it**. 
That is, simply run the analysis, missing values and all, hoping for the best. If your software is any good, this approach will simply not work; the algorithm will try to operate on data that includes missing values, and propagate them, resulting in statistics and estimates that cannot be calculated, which will typically raise errors. If your software is poor, it will make some assumption or decision about the missing values, and proceed to generate results conditional on the assumption, which creates problems that may never be detected because no indication was given of any potential problem. \n", "\n", "The next easiest (worst) approach to analyzing data with missing values is to conduct list-wise deletion, by deleting the records that have missing values. This is called **complete case analysis**, because only records that are complete get retained for the analysis. The degree to which complete case analysis is undesirable depends on the mechanism by which data have become missing." ] }, { "cell_type": "markdown", "metadata": { "id": "A0h1VWhLiqkI" }, "source": [ "### Types of Missingness\n", "\n", "- **Missing completely at random (MCAR)**: When data are MCAR, missing cases are, on average, identical to non-missing cases, with respect to the model. Ignoring the missingness will reduce the power of the analysis, but will not bias inference.\n", "- **Missing at random (MAR)**: Missing data depends (usually probabilistically) on measured values, and hence can be modeled by variables observed in the data set. Accounting for the values which “cause” the missing data will produce unbiased results in an analysis.\n", "- **Missing not at random (MNAR)**: Missing data depend on unmeasured or unknown variables. There is no information available to account for the missingness.\n", "\n", "The very best-case scenario for using complete case analysis, which corresponds to MCAR missingness, results in a **loss of power** due to the reduction in sample size. The analysis will lose the information contained in the non-missing elements of a partially-missing record. When data are not missing completely at random, inferences from complete case analysis may be **biased** due to systematic differences between missing and non-missing records that affect the estimates of key parameters.\n", "\n", "One alternative to complete case analysis is to simply fill (*impute*) the missing values with a reasonable guess at the true value, such as the mean, median or modal value of the fully-observed records. This imputation, while not recovering any information regarding the missing value itself for use in the analysis, does provide a mechanism for including the non-missing values in the analysis, thereby making use of all available information." ] }, { "cell_type": "markdown", "metadata": { "id": "YfA-Sm3RiqkJ" }, "source": [ "The `SimpleImputer` class in scikit-learn provides methods for imputing missing values, either using the mean, the median or the most frequent value of the column in which the missing values are located. 
This class also allows for different missing value encodings.\n", "\n", "For example, we can replace missing entries encoded as `np.nan` using the mean value of the columns (axis 0) that contain the missing values:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "rpRaFp3tiqkJ" }, "outputs": [], "source": [ "from sklearn.impute import SimpleImputer\n", "\n", "imp = SimpleImputer(strategy='mean')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "d1Y7DCe4iqkJ" }, "outputs": [], "source": [ "imp.fit([[1, 2], [np.nan, 3], [7, 6]])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "PYSPx1ZliqkJ" }, "outputs": [], "source": [ "X = [[np.nan, 1], [6, np.nan], [3, 6]]\n", "imp.transform(X)" ] }, { "cell_type": "markdown", "metadata": { "id": "IkVuksDkiqkJ" }, "source": [ "In our educational outcomes dataset, we are probably better served using mode imputation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "W516HR8XiqkJ" }, "outputs": [], "source": [ "mode_imp = SimpleImputer(strategy='most_frequent')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "4rGqo5rbiqkJ" }, "outputs": [], "source": [ "mode_imp.fit(test_scores)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7M12-01ViqkJ" }, "outputs": [], "source": [ "mode_imp.transform(test_scores)[:3]" ] }, { "cell_type": "markdown", "metadata": { "id": "nD-T4x--iqkJ" }, "source": [ "Of course, in Python it is often easier to impute data using Pandas `DataFrame` method `fillna`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "_BHLSALBiqkJ" }, "outputs": [], "source": [ "test_scores.siblings.mean()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "TCNRz31riqkJ" }, "outputs": [], "source": [ "siblings_imputed = test_scores.siblings.fillna(test_scores.siblings.mean())" ] }, { "cell_type": "markdown", "metadata": { "id": "-MRRfuiaiqkJ" }, "source": [ "This approach may be reasonable under the MCAR assumption, but may induce bias under a MAR scenario, whereby missing values may **differ systematically** relative to non-missing values, making the particular summary statistic used for imputation *biased* as a mean/median/modal value for the missing values.\n", "\n", "Beyond this, the use of a single imputed value to stand in place of the actual missing value glosses over the **uncertainty** associated with this guess at the true value. Any subsequent analysis procedure (*e.g.* regression analysis) will behave as if the imputed value were observed, despite the fact that we are actually unsure of the actual value for the missing variable. The practical consequence of this is that the variance of any estimates resulting from the imputed dataset will be **artificially reduced**." ] }, { "cell_type": "markdown", "metadata": { "id": "YHYMdAzOiqkJ" }, "source": [ "## Multiple Imputation\n", "\n", "One robust alternative to addressing missing data is **multiple imputation** (Schaffer 1999, van Buuren 2012). It produces unbiased parameter estimates, while simultaneously accounting for the uncertainty associated with imputing missing values. It is conceptually and mechanistically straightforward, and produces complete datasets that may be analyzed using any statistical methodology or software one chooses, as if the data had no missing values to begin with.\n", "\n", "Multiple imputation generates imputed values based on a **regression model**. 
This regression model will help us generate reasonable values, particularly if data are MAR, since it uses information in the dataset that may be informative in predicting what the true value may be. Ideally, we want predictor variables that are **correlated** with the missing variable, and with the mechanism of missingness, if any. For example, one might be able to use test scores from one subject to predict missing test scores from another; or, the probability of income being missing may vary systematically according to the education level of the individual." ] }, { "cell_type": "markdown", "metadata": { "id": "-EPdFgmuiqkJ" }, "source": [ "To see if there is any potential information among the variables in our dataset to use for imputation, it is helpful to calculate the pairwise correlation between all the variables. Since we have discrete variables in our data, the [Spearman rank correlation coefficient](http://www.wikiwand.com/en/Spearman%27s_rank_correlation_coefficient) is appropriate." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "VXlKq-aWiqkJ" }, "outputs": [], "source": [ "test_scores.dropna().corr(method='spearman').round(2)" ] }, { "cell_type": "markdown", "metadata": { "id": "isfrSn2KiqkJ" }, "source": [ "We will try to impute missing values for the mother's high school education indicator variable, which takes values of 0 for no high school diploma, or 1 for high school diploma or greater. The appropriate model to predict binary variables is a **logistic regression**. We will use the scikit-learn implementation, `LogisticRegression`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "LoW1D4U3iqkJ" }, "outputs": [], "source": [ "from sklearn.linear_model import LogisticRegression" ] }, { "cell_type": "markdown", "metadata": { "id": "8Kkd8jBBiqkJ" }, "source": [ "To keep things simple, we will only use variables that are themselves complete to build the predictive model, hence our subset of predictors will exclude family involvement score (`family_inv`) and previous disability (`prev_disab`)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "FTfyOGhgiqkK" }, "outputs": [], "source": [ "impute_subset = test_scores.drop(labels=['family_inv','prev_disab','score'], axis=1)" ] }, { "cell_type": "markdown", "metadata": { "id": "iO23FXLXiqkK" }, "source": [ "Next, we standardize the predictor variables, to improve the performance of the regression model." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "YAclbMYiiqkK" }, "outputs": [], "source": [ "y = impute_subset.pop('mother_hs').values\n", "X = preprocessing.StandardScaler().fit_transform(impute_subset.astype(float))" ] }, { "cell_type": "markdown", "metadata": { "id": "c6TVfW-SiqkK" }, "source": [ "Next, we create a `LogisticRegression` model, and fit it using the non-missing observations." 
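] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check before using such a model for imputation (a sketch, not required for the analysis): how well does a logistic regression predict the observed values of `mother_hs` under cross-validation?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative sanity check: cross-validated accuracy of the imputation model\n", "# on the rows where mother_hs is actually observed.\n", "from sklearn.model_selection import cross_val_score\n", "\n", "observed = ~np.isnan(y)\n", "cross_val_score(LogisticRegression(), X[observed], y[observed], cv=5).mean().round(2)"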
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "lbi-X7V_iqkK" }, "outputs": [], "source": [ "missing = np.isnan(y)\n", "\n", "mod = LogisticRegression()\n", "mod.fit(X[~missing], y[~missing])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "rl3tK2mtiqkK" }, "outputs": [], "source": [ "mother_hs_pred = mod.predict(X[missing])\n", "mother_hs_pred" ] }, { "cell_type": "markdown", "metadata": { "id": "GiTeRuM6iqkK" }, "source": [ "These values can then be inserted in place of the missing values, and an analysis can be performed on the entire dataset.\n", "\n", "However, this is still just a single imputation for each missing value, and hence glosses over the uncertainty associated with the derivation of the imputes. Multiple imputation proceeds by **imputing several values**, to generate several complete datasets and performing the same analysis on all of them. With a set of estimates in hand, an *average* estimate of model parameters can be obtained that more adequately accounts for the uncertainty, hopefully providing more robust inference than from a single impute.\n", "\n", "There are a variety of ways to generate multiple imputations. Here, we will exploit **regularization** in order to do this. The `LogisticRegression` class from scikit-learn provides facilities for regularization using either L2 (resulting in ridge regression) or L1 (resulting in LASSO regression) penalties. The degree of regularization in either case is controlled by the `C` parameter, whereby large values of `C` give more freedom to the model, while smaller values of `C` constrain the model more. We can use a selection of `C` values to obtain a range of predictions from variants of the same model. For example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NUAw-zksiqkK" }, "outputs": [], "source": [ "mod2 = LogisticRegression(C=1, penalty='l2')\n", "mod2.fit(X[~missing], y[~missing])\n", "mod2.predict(X[missing])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "dr5ab5PHiqkK" }, "outputs": [], "source": [ "mod3 = LogisticRegression(C=0.4, penalty='l2')\n", "mod3.fit(X[~missing], y[~missing])\n", "mod3.predict(X[missing])" ] }, { "cell_type": "markdown", "metadata": { "id": "dObokpociqkK" }, "source": [ "Surprisingly few imputations are required to acheive reasonable estimates, with 3-10 usually sufficient. We will use 3." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Eg55pJvriqkK" }, "outputs": [], "source": [ "mother_hs_imp = []\n", "\n", "for C in 0.1, 0.4, 2:\n", " \n", " mod = LogisticRegression(C=C, penalty='l2')\n", " mod.fit(X[~missing], y[~missing])\n", " imputed = mod.predict(X[missing])\n", " mother_hs_imp.append(imputed)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "lfH0BDV0iqkK" }, "outputs": [], "source": [ "mother_hs_imp" ] }, { "cell_type": "markdown", "metadata": { "id": "LPMcvpXOiqkK" }, "source": [ "## SVM using `scikit-learn`\n", "\n", "The scikit-learn machine learning package for Python includes a nice implementation of support vector machines." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "s96-l2IsiqkK" }, "outputs": [], "source": [ "from sklearn import svm" ] }, { "cell_type": "markdown", "metadata": { "id": "T7nej7JfiqkK" }, "source": [ "Let's revisit the wine example. 
Recall that the data are the result of chemical analyses of wines grown in the same region in Italy but derived from three different cultivars. The analysis determined the quantities of 13 constituents found in each of the three types of wines. (The response variable is incorrectly labeled `region`; it should be the grape from which the wine was derived). We might be able to correctly classify a given wine based on its chemical profile.\n", "\n", "To illustrate the characteristics of the SVM, we will select two attributes, which will make things easy to visualize." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "jqctCVoMiqkK" }, "outputs": [], "source": [ "try:\n", " wine = pd.read_table(\"../data/wine.dat\", sep='\\s+')\n", "except FileNotFoundError:\n", " wine = pd.read_table(DATA_URL + \"wine.dat\", sep='\\s+')\n", "\n", "attributes = ['Alcohol',\n", " 'Malic acid',\n", " 'Ash',\n", " 'Alcalinity of ash',\n", " 'Magnesium',\n", " 'Total phenols',\n", " 'Flavanoids',\n", " 'Nonflavanoid phenols',\n", " 'Proanthocyanins',\n", " 'Color intensity',\n", " 'Hue',\n", " 'OD280/OD315 of diluted wines',\n", " 'Proline']\n", "\n", "grape = wine.pop('region')\n", "y = grape.values\n", "wine.columns = attributes\n", "X = wine[['Alcohol', 'Proline']].values\n", "\n", "svc = svm.SVC(kernel='linear')\n", "svc.fit(X, y)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "1ChyNAgYiqkK" }, "outputs": [], "source": [ "wine.head()" ] }, { "cell_type": "markdown", "metadata": { "id": "k0g-vNAPiqkL" }, "source": [ "It is easiest to display the model fit graphically, by evaluating the model over a grid of points." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "DRhiavSmiqkL" }, "outputs": [], "source": [ "from matplotlib.colors import ListedColormap\n", "\n", "# Create color maps for 3-class classification problem, as with iris\n", "cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])\n", "cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])\n", "\n", "def plot_estimator(estimator, X, y, ax=None):\n", " \n", " try:\n", " X, y = X.values, y.values\n", " except AttributeError:\n", " pass\n", " \n", " if ax is None:\n", " _, ax = plt.subplots()\n", " \n", " estimator.fit(X, y)\n", " x_min, x_max = X[:, 0].min() - .1, X[:, 0].max() + .1\n", " y_min, y_max = X[:, 1].min() - .1, X[:, 1].max() + .1\n", " xx, yy = np.meshgrid(np.linspace(x_min, x_max, 100),\n", " np.linspace(y_min, y_max, 100))\n", " Z = estimator.predict(np.c_[xx.ravel(), yy.ravel()])\n", "\n", " # Put the result into a color plot\n", " Z = Z.reshape(xx.shape)\n", " ax.pcolormesh(xx, yy, Z, cmap=cmap_light)\n", "\n", " # Plot also the training points\n", " ax.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)\n", " ax.axis('tight')\n", " ax.axis('off')\n", " plt.tight_layout()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Y3SBAww3iqkL" }, "outputs": [], "source": [ "plot_estimator(svc, X, y)" ] }, { "cell_type": "markdown", "metadata": { "id": "uerjsfasiqkL" }, "source": [ "The SVM gets its name from the samples in the dataset from each class that lie closest to the other class. These training samples are called **support vectors** because changing their position in *p*-dimensional space would change the location of the decision boundary. \n", "\n", "In scikit-learn, the indices of the support vectors for each class can be found in the `support_vectors_` attribute of the `SVC` object. 
Here is a 2 class problem using only classes 1 and 2 in the wine dataset.\n", "\n", "The support vectors are circled." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "wpQEHoSkiqkL" }, "outputs": [], "source": [ "# Extract classes 1 and 2\n", "X, y = X[np.in1d(y, [1, 2])], y[np.in1d(y, [1, 2])]\n", "\n", "plt.figure()\n", "plot_estimator(svc, X, y)\n", "plt.scatter(svc.support_vectors_[:, 0], \n", " svc.support_vectors_[:, 1], \n", " s=120, \n", " facecolors='none', \n", " edgecolors='w',\n", " linewidths=2,\n", " zorder=10);" ] }, { "cell_type": "markdown", "metadata": { "id": "AlNpp8EliqkL" }, "source": [ "Clearly, these classes are not linearly separable.\n", "\n", "As we learned, regularization is tuned via the $C$ parameter. In practice, a large $C$ value means that the number of support vectors is small (less regularization), while a small $C$ implies many support vectors (more regularization). scikit-learn sets a default value of $C=1$." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "gakb_V0oiqkL", "scrolled": false }, "outputs": [], "source": [ "def plot_regularized(power, ax):\n", " svc = svm.SVC(kernel='linear', C=10**power)\n", " plot_estimator(svc, X, y, ax=ax)\n", " ax.scatter(svc.support_vectors_[:, 0], svc.support_vectors_[:, 1], s=80, \n", " facecolors='none', edgecolors='w', linewidths=2, zorder=10)\n", " ax.set_title('Power={}'.format(power))\n", " \n", "fig, axes = plt.subplots(2, 3, figsize=(12,10))\n", "for power, ax in zip(range(-2, 4), axes.ravel()):\n", " plot_regularized(power, ax)" ] }, { "cell_type": "markdown", "metadata": { "id": "zJ0-BWpaiqkL" }, "source": [ "We can choose from a suite of available kernels (`linear`, `poly`, `rbf`, `sigmoid`, `precomputed`) or a custom kernel can be passed as a function. Note that the radial basis function (`rbf`) kernel is just a Gaussian kernel, but with parameter $\\gamma=1/\\sigma^2$." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "zQuN30ZLiqkL" }, "outputs": [], "source": [ "def plot_poly_svc(degree=3, ax=None):\n", " svc_poly = svm.SVC(kernel='poly', degree=degree)\n", " plot_estimator(svc_poly, X, y, ax=ax)\n", " ax.scatter(svc_poly.support_vectors_[:, 0], svc_poly.support_vectors_[:, 1], \n", " s=80, facecolors='none', linewidths=2, zorder=10)\n", " ax.set_title('Polynomial degree {}'.format(degree))\n", " \n", "fig, axes = plt.subplots(2, 3, figsize=(12,10))\n", "for deg, ax in zip(range(1, 7), axes.ravel()):\n", " plot_poly_svc(deg, ax)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "TZDkqV7YiqkL", "scrolled": false }, "outputs": [], "source": [ "def plot_rbf_svc(power=1, ax=None):\n", " \n", " svc_rbf = svm.SVC(kernel='rbf', gamma=10**power)\n", " plot_estimator(svc_rbf, X, y, ax=ax)\n", " ax.scatter(svc_rbf.support_vectors_[:, 0], svc_rbf.support_vectors_[:, 1], \n", " s=80, facecolors='none', linewidths=2, zorder=10)\n", " ax.set_title('$\\gamma=10^{%i}$' % power)\n", " \n", "fig, axes = plt.subplots(2, 3, figsize=(12,10))\n", "for pow, ax in zip(range(-3, 3), axes.ravel()):\n", " plot_rbf_svc(pow, ax)" ] }, { "cell_type": "markdown", "metadata": { "id": "hUgpq4zWiqkL" }, "source": [ "Of course, the radial basis function (RBF) kernel is very flexible and performs best for this dataset. However, it is easy to get carried away tuning to a training dataset--we don't really believe the resulting decision boundary, do we?" 
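] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A quick way to see this (a sketch; the split and $\\gamma$ value are arbitrary) is to compare the training accuracy of a deliberately over-flexible RBF fit with its accuracy on a held-out split:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative check: an over-flexible RBF fit scores much better on the data\n", "# it was tuned to than on held-out data (arbitrary gamma and split).\n", "from sklearn.model_selection import train_test_split\n", "\n", "X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)\n", "svc_flex = svm.SVC(kernel='rbf', gamma=100).fit(X_tr, y_tr)\n", "svc_flex.score(X_tr, y_tr), svc_flex.score(X_te, y_te)"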
] }, { "cell_type": "markdown", "metadata": { "id": "Ezx46aVmiqkL" }, "source": [ "## Cross-validation\n", "\n", "In order to make objective choices for either kernels or hyperparameter values, we can apply the cross-validation methods outlined in last week's lecture. Every estimator class in `scikit-learn` exposes a `score` method that can judge the quality of the fit (or the prediction) on new data.\n", "\n", "The `score(x,y)` method for the `SVC` class returns the *mean accuracy* of the predictions from `x` with respect to `y`, for the fitted SVM." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "p77LNz4EiqkL" }, "outputs": [], "source": [ "svc_lin = svm.SVC(kernel='linear', C=2)\n", "svc_lin.fit(X, y)\n", "svc_lin.score(X, y)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "30fw5CFFiqkL" }, "outputs": [], "source": [ "svc_poly = svm.SVC(kernel='poly', degree=3)\n", "svc_poly.fit(X, y)\n", "svc_poly.score(X, y)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "98LvXg6ciqkL" }, "outputs": [], "source": [ "svc_rbf = svm.SVC(kernel='rbf', gamma=1e-2)\n", "svc_rbf.fit(X, y)\n", "svc_rbf.score(X, y)" ] }, { "cell_type": "markdown", "metadata": { "id": "sOXTUmf1iqkL" }, "source": [ "Each estimator in `scikit-learn` has a default estimator score method, which is an evaluation criterion for the problem they are designed to solve. For the `SVC` class, this is the **mean accuracy**, as shown above.\n", "\n", "Alternately, if we use cross-validation, you can specify one of a set of built-in scoring metrics. For classifiers such as support vector machines, these include:\n", "\n", "- **accuracy**\n", ":\t`sklearn.metrics.accuracy_score`\n", "\n", "- **average_precision**\n", ":\t`sklearn.metrics.average_precision_score`\n", "\n", "- **f1**\n", ":\t`sklearn.metrics.f1_score`\n", "\n", "- **precision**\n", ":\t`sklearn.metrics.precision_score`\n", "\n", "- **recall**\n", ":\t`sklearn.metrics.recall_score`\n", "\n", "- **roc_auc**\n", ":\t`sklearn.metrics.roc_auc_score`\n", "\n", "Regression models can use appropriate metrics, like `mean_squared_error` or `r2`.\n", "\n", "Finally, one can specify arbitrary loss functions to be used for assessment. The `metrics` module implements functions assessing prediction errors for specific purposes. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "YIJfYz6YiqkM" }, "outputs": [], "source": [ "def custom_loss(observed, predicted):\n", " diff = np.abs(observed - predicted).max()\n", " return np.log(1 + diff)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "HSlm3ln4iqkM" }, "outputs": [], "source": [ "from sklearn.metrics import make_scorer\n", "custom_scorer = make_scorer(custom_loss, greater_is_better=False)" ] }, { "cell_type": "markdown", "metadata": { "id": "U_mVWM29iqkM" }, "source": [ "Implementing cross-validation on our wine SVC is straightforward:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NWHhxYDqiqkM" }, "outputs": [], "source": [ "from sklearn import model_selection\n", "\n", "X_train, X_test, y_train, y_test = model_selection.train_test_split(\n", " wine.values, grape.values, test_size=0.4, random_state=0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "1GjHR0pliqkM" }, "outputs": [], "source": [ "X_train.shape, y_train.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "bl3II9MUiqkM" }, "outputs": [], "source": [ "X_test.shape, y_test.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "tuvcd_gMiqkM" }, "outputs": [], "source": [ "f = svm.SVC(kernel='linear', C=1)\n", "f.fit(X_train, y_train)\n", "f.score(X_test, y_test)" ] }, { "cell_type": "markdown", "metadata": { "id": "y6vCaeXjiqkM" }, "source": [ "The following example demonstrates how to estimate the accuracy of a linear kernel support vector machine on the wine dataset by splitting the data, fitting a model and computing the score 5 consecutive times (with different splits each time):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2T1FGe4liqkM" }, "outputs": [], "source": [ "scores = model_selection.cross_val_score(f, wine.values, grape.values, cv=5)\n", "scores" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "LOQbaafiiqkM" }, "outputs": [], "source": [ "print(\"Accuracy: %0.2f (+/- %0.2f)\" % (scores.mean(), scores.std() * 2))" ] }, { "cell_type": "markdown", "metadata": { "id": "xg-FUYzqiqkM" }, "source": [ "Furthermore, we can customize the scoring method by specifying the `scoring` parameter:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "eoY9Cqq2iqkM" }, "outputs": [], "source": [ "model_selection.cross_val_score(f, wine.values, grape.values, cv=5,\n", " scoring='f1_weighted')" ] }, { "cell_type": "markdown", "metadata": { "id": "rhN-rNe-iqkM" }, "source": [ "The module `sklearn.metric` also exposes a set of simple functions measuring prediction error given observations and prediction, such as the confusion matrix:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "QvuavRvCiqkM" }, "outputs": [], "source": [ "from sklearn.metrics import confusion_matrix\n", "\n", "svc_poly = svm.SVC(kernel='poly', degree=3).fit(X_train, y_train)\n", "confusion_matrix(y_test, svc_poly.predict(X_test))" ] }, { "cell_type": "markdown", "metadata": { "id": "2wh_QHPTiqkM" }, "source": [ "## Exercise: Titanic survival\n", "\n", "Try to estimate a reasonable support vector classfier for the fate of passengers on the Titanic (`../data/titanic.xls`). 
Specifically, see if you can correctly classify the survivors based on the covariates available in the dataset.\n", "\n", "As an extension, use multiple imputation to allow for the inclusion of age into the analysis, and see if it makes a difference in the results." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "SsLZMPobiqkM" }, "outputs": [], "source": [ "try:\n", " titanic = pd.read_excel(\"../data/titanic.xls\", \"titanic\")\n", "except FileNotFoundError:\n", " titanic = pd.read_excel(DATA_URL + 'titanic.xls', \"titanic\")\n", "titanic.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "OhupXsQmiqkN" }, "outputs": [], "source": [ "# Write answer here" ] }, { "cell_type": "markdown", "metadata": { "id": "B0xwwJaxiqkN" }, "source": [ "---\n", "## References\n", "\n", "- [Coursera's Machine Learning course](https://www.coursera.org/course/ml) by Stanford's Andrew Ng\n", "- [`scikit-learn` User's Guide](http://scikit-learn.org/stable/modules/svm.html) SVM section\n", "- [Scikit-learn tutorials for the Scipy 2013 conference](https://github.com/jakevdp/sklearn_scipy2013) by Jake Vanderplas" ] } ], "metadata": { "colab": { "name": "Section6_4-Support-Vector-Machines.ipynb", "provenance": [] }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.9" }, "latex_envs": { "bibliofile": "biblio.bib", "cite_by": "apalike", "current_citInitial": 1, "eqLabelWithNumbers": true, "eqNumInitial": 0 } }, "nbformat": 4, "nbformat_minor": 0 }