{ "cells": [ { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "f77bd760d23bfc42ef05336fd15bd8a9", "grade": false, "grade_id": "cell-68255202c4a7898d", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "# Part 3: Softmax Regression" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "d3e6cf62296cdce727f7797215c0501b", "grade": false, "grade_id": "cell-08a6bfb906402a9f", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "# Execute this code block to install dependencies when running on colab\n", "try:\n", " import torch\n", "except:\n", " from os.path import exists\n", " from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag\n", " platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())\n", " cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\\.\\([0-9]*\\)\\.\\([0-9]*\\)$/cu\\1\\2/'\n", " accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'\n", "\n", " !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-1.0.0-{platform}-linux_x86_64.whl torchvision" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "f41084d52322800e355e8e013e685196", "grade": false, "grade_id": "cell-ba7598ff6855cef8", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "In the second part of the lab we saw how to make a linear binary classifier using logisitic regression. In this part of the lab we'll turn our attention to multi-class classification.\n", "\n", "Softmax regression (or multinomial logistic regression) is a generalisation of logistic regression to the case where we want to handle multiple classes. In logistic regression we assumed that the labels were binary: $y_i\\in \\{0,1\\}$. We used such a classifier to distinguish between two kinds of hand-written digits. Softmax regression allows us to handle $y_i \\in \\{1,\\dots,K\\}$ where $K$ is the number of classes.\n", "\n", "Recall that in logistic regression, we had a training set $\\{(\\mathbf{x}_1,y_1),\\dots,(\\mathbf{x}_m,y_m)\\}$ of $m$ labeled examples, where the input features are $\\mathbf{x}_i \\in \\mathbb{R}^n$. In logistic regression, our hypothesis took the form:\n", "\n", "\\begin{align}\n", "h_\\theta(\\mathbf{x}) &= \\frac{1}{1 + \\exp(-\\mathbf{x}^\\top\\theta)} \\equiv \\sigma(\\mathbf{x}^\\top\\theta)\n", "\\end{align}\n", "\n", "and the model parameters $\\theta$ were trained to minimise the cost function\n", "\n", "\\begin{align}\n", "J(\\theta) & = \\sum_i y_i \\log(\\sigma(\\mathbf{x}_i^\\top\\theta)) + (1-y_i) \\log(1-\\sigma(\\mathbf{x}_i^\\top\\theta))\n", "\\end{align}\n", "\n", "In the softmax regression setting, we are interested in multi-class classification, and so the label $y$\n", " can take on $K$ different values, rather than only two. Thus, in our training set $\\{(\\mathbf{x}_1,y_1),\\dots,(\\mathbf{x}_m,y_m)\\}$, we now have that $y_i \\in \\{1,\\dots,K\\}$.\n", "\n", "Given a test input $\\mathbf{x}$, we want our hypothesis to estimate the probability that $P(y=k|\\mathbf{x})$ for each value of $k=1,\\dots,K$. That is to say, we want to estimate the probability of the class label taking on each of the $K$ different possible values. Thus, our hypothesis will output a $K$-dimensional vector (whose elements sum to 1) giving us our $K$ estimated probabilities. 
Concretely, our hypothesis $h_\\theta(\\mathbf{x})$ takes the form:\n", "\n", "\\begin{align}\n", "h_\\theta(\\mathbf{x}) =\n", "\\begin{bmatrix}\n", "P(y = 1 | \\mathbf{x}; \\theta) \\\\\n", "P(y = 2 | \\mathbf{x}; \\theta) \\\\\n", "\\vdots \\\\\n", "P(y = K | \\mathbf{x}; \\theta)\n", "\\end{bmatrix}\n", "=\n", "\\frac{1}{ \\sum_{j=1}^{K}{\\exp(\\theta^{(j)\\top} \\mathbf{x}) }}\n", "\\begin{bmatrix}\n", "\\exp(\\theta^{(1)\\top} \\mathbf{x} ) \\\\\n", "\\exp(\\theta^{(2)\\top} \\mathbf{x} ) \\\\\n", "\\vdots \\\\\n", "\\exp(\\theta^{(K)\\top} \\mathbf{x} ) \\\\\n", "\\end{bmatrix}\n", "\\end{align}\n", "\n", "Here $\\theta^{(1)},\\theta^{(2)},\\dots,\\theta^{(K)} \\in \\mathbb{R}^n$ are the parameters of our model. Notice that the term $\\frac{1}{\\sum_{j=1}^K \\exp(\\theta^{(j)\\top} \\mathbf{x})}$ normalises the distribution, so that it sums to one.\n", "\n", "For convenience, we will also write $\\theta$ to denote all the parameters of our model. When you implement softmax regression, it is usually convenient to represent $\\theta$ as an $n$-by-$K$ matrix obtained by concatenating $\\theta^{(1)},\\theta^{(2)},\\dots,\\theta^{(K)}$ into columns, so that\n", "\n", "\\begin{align}\n", "\\theta = \\left[\\begin{array}{cccc}| & | & | & | \\\\\n", "\\theta^{(1)} & \\theta^{(2)} & \\cdots & \\theta^{(K)} \\\\\n", "| & | & | & |\n", "\\end{array}\\right].\n", "\\end{align}\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Cost Function\n", "\n", "We now describe the cost function that we'll use for softmax regression. In the equation below, $1\\{\\cdot\\}$\n", " is an \"indicator function\", such that $1\\{\\text{a true statement}\\}=1$, and $1\\{\\text{a false statement}\\}=0$. For example, $1\\{2+2=4\\}$ evaluates to $1$; whereas $1\\{1+1=5\\}$ evaluates to $0$. Our cost function will be:\n", "\n", "\\begin{align}\n", "J(\\theta) = - \\left[ \\sum_{i=1}^{m} \\sum_{k=1}^{K} 1\\left\\{y_{i} = k\\right\\} \\log \\frac{\\exp(\\theta^{(k)\\top} \\mathbf{x}_i)}{\\sum_{j=1}^K \\exp(\\theta^{(j)\\top} \\mathbf{x}_i)}\\right]\n", "\\end{align}\n", "\n", "Notice that this generalises the logistic regression cost function, which could also have been written:\n", "\n", "\\begin{align}\n", "J(\\theta) &= - \\left[ \\sum_{i=1}^m (1-y_i) \\log (1-h_\\theta(\\mathbf{x}_i)) + y_i \\log h_\\theta(\\mathbf{x}_i) \\right] \\\\\n", "&= - \\left[ \\sum_{i=1}^{m} \\sum_{k=0}^{1} 1\\left\\{y_i = k\\right\\} \\log P(y_i = k | \\mathbf{x}_i ; \\theta) \\right]\n", "\\end{align}\n", "\n", "The softmax cost function is similar, except that we now sum over the $K$ different possible values of the class label. Note also that in softmax regression, we have that\n", "\n", "\\begin{equation}\n", "P(y_i = k | \\mathbf{x}_i ; \\theta) = \\frac{\\exp(\\theta^{(k)\\top} \\mathbf{x}_i)}{\\sum_{j=1}^K \\exp(\\theta^{(j)\\top} \\mathbf{x}_i) }\n", "\\end{equation}\n", "\n", "We cannot solve for the minimum of $J(\\theta)$ analytically, and thus we'll resort to using gradient descent as before. Taking derivatives, one can show that the gradient is:\n", "\n", "\\begin{align}\n", "\\nabla_{\\theta^{(k)}} J(\\theta) = - \\sum_{i=1}^{m}{ \\left[ \\mathbf{x}_i \\left( 1\\{ y_i = k\\} - P(y_i = k | \\mathbf{x}_i; \\theta) \\right) \\right] }\n", "\\end{align}\n", "\n", "Armed with this formula for the derivative, one can then use it directly with a gradient descent solver (or any other first-order gradient-based optimiser)."
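, "\n", "\n", "The probabilities $P(y_i = k | \\mathbf{x}_i; \\theta)$ that appear in the loss and its gradient can be computed for a whole batch of inputs at once. As an illustrative aside (a minimal sketch only, not part of the graded exercise, using arbitrary example shapes), one way to compute them in PyTorch is shown below; subtracting the row-wise maximum of the logits before exponentiating is a common trick for numerical stability and does not change the resulting probabilities:\n", "\n", "```python\n", "import torch\n", "\n", "X = torch.randn(5, 3)      # 5 example inputs with n=3 features (arbitrary sizes)\n", "Theta = torch.randn(3, 4)  # parameters for K=4 classes, one column per class\n", "\n", "logits = X @ Theta                                    # row i holds theta^(k).T x_i for k=1..K\n", "logits = logits - logits.max(dim=1, keepdim=True)[0]  # stabilise; leaves the softmax unchanged\n", "probs = torch.exp(logits)\n", "probs = probs / probs.sum(dim=1, keepdim=True)        # row i is h_theta(x_i) and sums to 1\n", "\n", "print(probs.sum(dim=1))  # every entry should be (approximately) 1\n", "```"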
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Use the code box below to complete the implementation of the functions that return the gradients of the softmax loss function, $\\nabla_{\\theta^{(k)}} J(\\theta) \\,\\, \\forall k$ and the loss function itself, $J(\\theta)$:__" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "checksum": "1a77ae7bf8c9343e3adac168380f17d4", "grade": false, "grade_id": "cell-1cd9ef4ca5aac607", "locked": false, "schema_version": 1, "solution": true } }, "outputs": [], "source": [ "import torch\n", "\n", "# we wouldn't normally do this, but for this lab we want to work in double precision\n", "# as we'll need the numerical accuracy later on for doing checks on our gradients:\n", "torch.set_default_dtype(torch.float64) \n", "\n", "def softmax_regression_loss_grad(Theta, X, y):\n", " '''Implementation of the gradient of the softmax loss function.\n", " \n", " Theta is the matrix of parameters, with the parameters of the k-th class in the k-th column\n", " X contains the data vectors (one vector per row)\n", " y is a column vector of the targets\n", " '''\n", " # YOUR CODE HERE\n", " raise NotImplementedError()\n", "\n", " return grad\n", "\n", "def softmax_regression_loss(Theta, X, y):\n", " '''Implementation of the softmax loss function.\n", " \n", " Theta is the matrix of parameters, with the parameters of the k-th class in the k-th column\n", " X contains the data vectors (one vector per row)\n", " y is a column vector of the targets\n", " '''\n", " # YOUR CODE HERE\n", " raise NotImplementedError()\n", "\n", " return loss" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "2c09bc14e82ce918db4d6a26c59c6088", "grade": false, "grade_id": "cell-b620296f46b98eff", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "__Use the following code block to confirm that your implementation is correct using gradient checking. 
If there are problems with your gradient or loss, go back and fix them!:__" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "fd0526a21e73b804996ba405055f01d9", "grade": false, "grade_id": "cell-a50c3fc6270a29dd", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "from random import randrange\n", "\n", "def grad_check(f, x, analytic_grad, num_checks=10, h=1e-3):\n", " sum_error = 0\n", " for i in range(num_checks):\n", " ix = tuple([randrange(m) for m in x.shape]) # randomly sample a value to change\n", "\n", " oldval = x[ix].item()\n", " x[ix] = oldval + h # increment by h\n", " fxph = f(x) # evaluate f(x + h)\n", " x[ix] = oldval - h # decrement by h\n", " fxmh = f(x) # evaluate f(x - h)\n", " x[ix] = oldval # reset\n", "\n", " grad_numerical = (fxph - fxmh) / (2 * h)\n", " grad_analytic = analytic_grad[ix]\n", " rel_error = abs(grad_numerical - grad_analytic) / (abs(grad_numerical) + abs(grad_analytic) + 1e-8)\n", " sum_error += rel_error\n", " print('numerical: %f\\tanalytic: %f\\trelative error: %e' % (grad_numerical, grad_analytic, rel_error))\n", " return sum_error / num_checks\n", "\n", "# Create some test data:\n", "num_classes = 10\n", "features_dim = 20\n", "num_items = 100\n", "Theta = torch.randn((features_dim, num_classes))\n", "X = torch.randn((num_items, features_dim))\n", "y = torch.randint(0, num_classes, (num_items, 1))\n", "\n", "# compute the analytic gradient\n", "grad = softmax_regression_loss_grad(Theta, X, y)\n", "\n", "# run the gradient checker\n", "grad_check(lambda th: softmax_regression_loss(th, X, y), Theta, grad)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "4be6d0b57056f8782ca66f6017c15df6", "grade": true, "grade_id": "cell-40b27ac9b1ad944c", "locked": true, "points": 8, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "Theta = torch.Tensor([[1, 0], [0, 1]])\n", "X = torch.Tensor([[1, 0], [0, 1]])\n", "y = torch.LongTensor([[0], [1]])\n", "assert torch.abs(softmax_regression_loss(Theta, X, y) - 0.6265) < 0.0001\n", "grad = softmax_regression_loss_grad(Theta, X, y)\n", "assert torch.allclose(torch.abs(grad/0.2689), torch.ones_like(grad), atol=0.001)\n" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "36922dc7c975bba6da5244fa6d6f7f82", "grade": false, "grade_id": "cell-9043b78f44ece205", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "## Training softmax regression with gradient descent on real data\n", "\n", "We'll now try gradient descent with our softmax regression using the digits dataset. As we did before for logistic regression, we load the data and create training and test sets. 
Note that this time we'll use all the classes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "51544a328b5b7e1aafc5b9de8c4c1a5c", "grade": false, "grade_id": "cell-c804282e48b71082", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "from sklearn.datasets import load_digits\n", "\n", "X, y = (torch.Tensor(z) for z in load_digits(10, True)) # convert to PyTorch Tensors\n", "X = torch.cat((X, torch.ones((X.shape[0], 1))), 1) # append a column of 1's to the X's\n", "X /= 255\n", "y = y.reshape(-1, 1) # reshape y into a column vector\n", "y = y.type(torch.LongTensor)\n", "\n", "# We're also going to break the data into a training set for computing the regression parameters\n", "# and a test set to evaluate the predictive ability of those parameters\n", "perm = torch.randperm(y.shape[0])\n", "X_train = X[perm[0:260], :]\n", "y_train = y[perm[0:260]]\n", "X_test = X[perm[260:], :]\n", "y_test = y[perm[260:]]" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "8189c5465f5ab649d77d766cce1c9053", "grade": false, "grade_id": "cell-20ec9853fcef3dea", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "We now define a simple gradient descent loop to train the model:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "cd52e96e7fc18b2f9e035af97319276d", "grade": false, "grade_id": "cell-09cf6c794033cd87", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "alpha = 0.1\n", "theta_gd = torch.rand((X_train.shape[1], 10))\n", "\n", "for e in range(0, 1000):\n", " gr = softmax_regression_loss_grad(theta_gd, X_train, y_train)\n", " theta_gd -= alpha * gr\n", " if e % 100 == 0:\n", " print(\"Training Loss: \", softmax_regression_loss(theta_gd, X_train, y_train))\n", "\n", "# Compute the accuracy on the test set\n", "proba = torch.softmax(X_test @ theta_gd, 1)\n", "print(float((proba.argmax(1) == y_test[:, 0]).sum()) / float(proba.shape[0]))\n", "print()\n" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "4cfd1d1c3fa55080d1c48496b5611846", "grade": false, "grade_id": "cell-e747fc8aef8b9e66", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "Running the above, you should observe that the training loss decreases over time. The final accuracy on the test set is also printed and should be around 90% (it will depend on the particular training/test splits you generated as well as the initial parameters for the softmax)." ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "fe1357c9d246c369615a068259befbd0", "grade": false, "grade_id": "cell-ec6511037dc97cca", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "# Overparameterisation in softmax regression\n", "\n", "Softmax regression has an unusual property: its parameters are \"redundant\". To explain what this means, suppose we take each of our parameter vectors $\\theta^{(j)}$ and subtract some fixed vector $\\psi$ from it, so that every $\\theta^{(j)}$ (for $j=1,\\dots,K$) is replaced with $\\theta^{(j)}-\\psi$. 
Our hypothesis now estimates the class label probabilities as\n", "\n", "\\begin{align}\n", "P(y_i = k | \\mathbf{x}_i ; \\theta)\n", "&= \\frac{\\exp((\\theta^{(k)}-\\psi)^\\top \\mathbf{x}_i)}{\\sum_{j=1}^K \\exp( (\\theta^{(j)}-\\psi)^\\top \\mathbf{x}_i)} \\\\\n", "&= \\frac{\\exp(\\theta^{(k)\\top} \\mathbf{x}_i) \\exp(-\\psi^\\top \\mathbf{x}_i)}{\\sum_{j=1}^K \\exp(\\theta^{(j)\\top} \\mathbf{x}_i) \\exp(-\\psi^\\top \\mathbf{x}_i)} \\\\\n", "&= \\frac{\\exp(\\theta^{(k)\\top} \\mathbf{x}_i)}{\\sum_{j=1}^K \\exp(\\theta^{(j)\\top} \\mathbf{x}_i)}.\n", "\\end{align}\n", "\n", "__In other words, subtracting $\\psi$ from every $\\theta^{(j)}$ does not affect our hypothesis' predictions at all!__ This shows that softmax regression's parameters are \"redundant\". More formally, we say that our softmax model is \"overparameterised\", meaning that for any hypothesis we might fit to the data, there are multiple parameter settings that give rise to exactly the same hypothesis function $h_\\theta$ mapping inputs $\\mathbf{x}$ to predictions.\n", "\n", "Further, if the cost function $J(\\theta)$ is minimised by some setting of the parameters $(\\theta^{(1)},\\theta^{(2)},\\dots,\\theta^{(K)})$, then it is also minimised by $(\\theta^{(1)}-\\psi,\\theta^{(2)}-\\psi,\\dots,\\theta^{(K)}-\\psi)$ for any value of $\\psi$. Thus, the minimiser of $J(\\theta)$ is not unique.\n", "\n", "(Interestingly, $J(\\theta)$ is still convex, and thus gradient descent will not run into local optima problems. The Hessian is, however, singular/non-invertible, which causes a straightforward implementation of Newton's method (a second-order optimiser) to run into numerical problems.)\n", "\n", "Notice also that by setting $\\psi=\\theta^{(K)}$, one can always replace $\\theta^{(K)}$ with $\\theta^{(K)}-\\psi=\\mathbf{0}$ (the vector of all $0$'s) without affecting the hypothesis. Thus, one could \"eliminate\" the vector of parameters $\\theta^{(K)}$ (or any other $\\theta^{(k)}$, for any single value of $k$) without harming the representational power of our hypothesis. Indeed, rather than optimising over the $K \\cdot n$ parameters $(\\theta^{(1)},\\theta^{(2)},\\dots,\\theta^{(K)})$ (where $\\theta^{(k)} \\in \\mathbb{R}^n$), one can instead set $\\theta^{(K)}=\\mathbf{0}$ and optimise only with respect to the $(K-1) \\cdot n$ remaining parameters."
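, "\n", "\n", "The following quick numerical check (illustrative only; the shapes and the vector $\\psi$ are arbitrary) confirms that subtracting the same $\\psi$ from every column of $\\theta$ leaves the predicted probabilities unchanged:\n", "\n", "```python\n", "import torch\n", "\n", "torch.manual_seed(0)\n", "X = torch.randn(4, 3)      # 4 arbitrary inputs with n=3 features\n", "Theta = torch.randn(3, 5)  # parameters for K=5 classes, one column per class\n", "psi = torch.randn(3, 1)    # an arbitrary vector to subtract from every column\n", "\n", "p_original = torch.softmax(X @ Theta, dim=1)\n", "p_shifted = torch.softmax(X @ (Theta - psi), dim=1)  # psi broadcasts across the K columns\n", "\n", "print(torch.allclose(p_original, p_shifted, atol=1e-6))  # expect True\n", "```"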
] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "5a2fe88b85094401af28006c8d3c4e37", "grade": false, "grade_id": "cell-136a282d648894c1", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "__Use the following block to implement the softmax gradients for the case where the final column of the parameters theta is fixed to be zero:__" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "checksum": "752b313a062a6fc7934cd9a822b49c05", "grade": false, "grade_id": "cell-5f82a71197a01bdd", "locked": false, "schema_version": 1, "solution": true } }, "outputs": [], "source": [ "import torch\n", "\n", "def softmax_regression_loss_grad_0(Theta, X, y):\n", " '''Implementation of the gradient of the softmax loss function, with the parameters of the\n", " last class fixed to be zero.\n", " \n", " Theta is the matrix of parameters, with the parameters of the k-th class in the k-th column; \n", " K-1 classes are included, and the parameters of the last class are implicitly zero.\n", " X contains the data vectors (one vector per row)\n", " y is a column vector of the targets\n", " '''\n", " \n", " # add the missing column of zeros:\n", " Theta = torch.cat((Theta, torch.zeros(Theta.shape[0],1)), 1)\n", "\n", " # YOUR CODE HERE\n", " raise NotImplementedError()\n", " \n", " # remove the last column from the gradients\n", " grad = grad[0:grad.shape[0], 0:grad.shape[1]-1]\n", " return grad" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "188e450a2df3dfafae8884f39dfbbed5", "grade": true, "grade_id": "cell-64aef23ec9b37670", "locked": true, "points": 2, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "Theta = torch.Tensor([[1, 0], [0, 0]])\n", "X = torch.Tensor([[1, 0], [0, 1]])\n", "y = torch.LongTensor([[0], [1]])\n", "\n", "grad = softmax_regression_loss_grad(Theta, X, y)\n", "grad0 = softmax_regression_loss_grad_0(Theta[:,0:grad.shape[1]-1], X, y)\n", "assert torch.torch.allclose(grad[:,0:grad.shape[1]-1], grad0)\n" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "bcaff3402994537f507747a2f8d74f5e", "grade": false, "grade_id": "cell-58a854c27f8d089c", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "Finally, we can run gradient descent with our reduced paramter gradient function, and confirm that the results are similar to before:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "ca01d5452181b98755d19e124c7e8191", "grade": false, "grade_id": "cell-e8598e9ec2e830a6", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "alpha = 0.1\n", "theta_gd = torch.rand((X_train.shape[1], 9))\n", "\n", "for e in range(0, 1000):\n", " gr = softmax_regression_loss_grad_0(theta_gd, X_train, y_train)\n", " theta_gd -= alpha * gr\n", "\n", "theta_gd = torch.cat((theta_gd, torch.zeros(theta_gd.shape[0], 1)), 1)\n", "proba = torch.softmax(X_test @ theta_gd, 1)\n", "print(float((proba.argmax(1)-y_test[:,0]==0).sum()) / float(proba.shape[0]))\n", "print()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", 
"name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.7" } }, "nbformat": 4, "nbformat_minor": 2 }