{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "

Introduction to Machine Learning: Assignment 2

\n", "\n", "__Given date:__ March 3\n", "\n", "__Due date:__ March 20\n", "\n", "__Total: 25pts__\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Question 1 Least Absolute Shrinkage and Selection Operator \n", "\n", "__(13pts)__\n", "\n", "Learning a model through the OLS loss can be done very efficiently through either gradient descent or even through the Normal equations. The same is true for ridge regression. For the Lasso formulation however, the non differentiability of the absolute value at $0$ makes the learning more tricky.\n", "\n", "\n", "\n", "One approach, known as _ISTA_ (see Amir Beck and Marc Teboulle, _A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems_) consists in combining traditional gradient descent steps with a projection onto the $\\ell_1$ norm ball. Concretely, for the LASSO objective \n", "\n", "\\begin{align}\n", "\\ell(\\boldsymbol \\beta) = \\|\\boldsymbol X\\boldsymbol \\beta - \\boldsymbol t\\|^2_2 + \\lambda \\|\\boldsymbol \\beta\\|_1\n", "\\end{align}\n", "\n", "where $\\boldsymbol \\beta = (\\beta_1, \\beta_2,\\ldots, \\beta_D)$ (note that we don't include the bias) and the feature vectors $\\left\\{\\boldsymbol x_i\\right\\}_{i=1}^N$ (corresponding to the rows of the matrix $\\boldsymbol X$) as well as the targets $t_i$ are assumed to be centered, i.e.\n", "\\begin{align}\n", "\\boldsymbol x_{ij} \\leftarrow \\boldsymbol x_{ij}- \\frac{1}{N}\\sum_{i=1}^{N} x_{ij}\\\\\n", "t_i \\leftarrow t_i - \\frac{1}{N}\\sum_{i=1}^N t_i\n", "\\end{align}\n", "\n", "(Note that this is equivalent to taking $\\beta_0 = \\frac{1}{N}\\sum_{i=1}^N t_i$)\n", "The ISTA update takes the form \n", "\n", "\\begin{align}\n", "\\boldsymbol \\beta^{k+1} \\leftarrow \\mathcal{T}_{\\lambda \\eta} (\\boldsymbol \\beta^{k} - 2\\eta \\mathbf{X}^T(\\mathbf{X}\\mathbf{\\beta} - \\mathbf{t}))\n", "\\end{align}\n", "\n", "where $\\mathcal{T}_{\\lambda \\eta}(\\mathbf{x})_i$ is the thresholding operator defined component-wise as\n", "\n", "\\begin{align}\n", "\\mathcal{T}_{\\lambda t}(\\mathbf{\\beta})_i = (|\\beta_i| - \\lambda t)_+ \\text{sign}(\\beta_i)\n", "\\end{align}\n", "\n", "In the equations above, $\\eta$ is an appropriate step size and $(x)_+ = \\max(x, 0)$ \n", "\n", "##### Question 1.1. (5pts)\n", "\n", "Complete the function 'ISTA' which must return a final estimate for the regression vector $\\mathbf{\\beta}$ given a feature matrix $\\mathbf{X}$, a target vector $\\mathbf{t}$ (the function should include the centering steps for $\\mathbf{x}_i$ and $t_i$) regularization weight $\\lambda$, and the choice for the learning rate $\\eta$. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def ISTA(beta_init, X, t, lbda, eta):\n", " \n", " '''The function takes as input an initial guess for beta, a set \n", " of feature vectors stored in X and their corresponding \n", " targets stored in t, a regularization weight lbda, \n", " step size parameter eta and must return the \n", " regression weights following from the minimization of \n", " the LASSO objective'''\n", " \n", " beta_LASSO = np.zeros((np.shape(X)[0], 1))\n", " \n", " # add your code here\n", " \n", " \n", " return beta_LASSO\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Question 1.2. (3pts)\n", "\n", "Apply your algorithm to the data (in red) given below for polynomial features up to degree 8-10 and for various values of $\\lambda$. Display the result on top of the true model (in blue)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "\n", "\n", "x = np.linspace(0,1,20)\n", "xtrue = np.linspace(0,1,100)\n", "t_true = 0.1 + 1.3*xtrue\n", "\n", "t = 0.1 + 1.3*x\n", "\n", "tnoisy = t+np.random.normal(0,.1,len(x))\n", "\n", "plt.scatter(x, tnoisy, c='r')\n", "plt.plot(xtrue, t_true)\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Question 1.3 FISTA (3pts)\n", "\n", "It is possible to improve the ISTA updates by combining them with Nesterov accelerated gradient descent. The resulting update, known as FISTA can read, for a constant step size, by \n", "letting $\\mathbf{y}^{(1)} = {\\boldsymbol \\beta}^{(0)}$, $\\eta^1 = 1$ and then using \n", "\n", "\\begin{align}\n", "\\left\\{\n", "\\begin{array}{l}\n", "&\\boldsymbol{\\beta}^{k} = \\text{ISTA}(\\mathbf{y}^{k})\\\\\n", "&\\eta^{(k+1)} = \\frac{1+\\sqrt{1+4(\\eta^{(k)})^2}}{2}\\\\\n", "&\\mathbf{y}^{(k+1)} = \\mathbf{\\beta}^{(k)} + \\left(\\frac{\\eta^{(k)} - 1}{\\eta^{(k+1)}}\\right)\\left({\\boldsymbol\\beta}^{(k)} - {\\boldsymbol\\beta}^{(k-1)}\\right)\\end{array}\\right.\n", "\\end{align}\n", "\n", "Here $\\text{ISTA}$ denotes a __single__ ISTA update.\n", "\n", "Complete the function below so that it performs the FISTA iterations. Then apply it to the data given in question 1.2." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def FISTA(X, t, eta0, beta0, lbda):\n", " \n", " '''function should return the solution to the minimization of the\n", " the LASSO objective ||X*beta - t||_2^2 + lambda*||beta||_1\n", " by means of FISTA updates'''\n", " \n", " \n", " \n", " return beta_LASSO " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Question 1.4. (2pts)\n", "\n", "Compare the ISTA and FISTA updates by plotting the evolution of the loss $\\ell(\\mathbf{\\beta})$ as a function of the iterations for both approaches. Take a sufficient number of iterations (1000 - 10,000)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "\n", "FISTA_loss = np.zeros((max_iter, 1))\n", "ISTA_loss = np.zeros((max_iter, 1))\n", "\n", "plt.plot(FISTA_loss)\n", "plt.plot(ISTA_loss)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Question 2. Logistic Regression\n", "\n", "__(12pts)__\n", "\n", "##### Question 2.1 Logistic regression (5pts)\n", "\n", "As we saw during the lectures, one approach at learning a (binary) linear discriminant is to combine the sigmoid activation function with the linear discriminant $\\beta_0 + \\mathbf{\\beta}^T \\mathbf{x}$. We then assume that the probability of having a particular target ($0$ vs $1$) follows a Bernoulli with parameter $\\sigma(\\tilde{\\mathbf{\\beta}}^T\\tilde{\\mathbf{x}})$. i.e. we have \n", "\n", "$$\\left\\{\\begin{array}{l}\n", "P(t = 1|x) = \\sigma(\\mathbf{\\beta}^T\\mathbf{x})\\\\\n", "P(t = 0|x) = 1-\\sigma(\\mathbf{\\beta}^T\\mathbf{x})\\end{array}\\right.$$\n", "\n", "The total density can read from the product of each of the independent densities as \n", "\n", "$$P(\\left\\{t_i\\right\\}_{i=1}^N) = \\prod_{i=1}^N \\sigma(\\mathbf{\\beta}^T\\mathbf{x})^{t^{(i)}}(1-\\sigma(\\mathbf{\\beta}^T\\mathbf{x}))^{1-t^{(i)}}$$\n", "\n", "we can then take the log and compute the derivatives of the resulting expression with respect to each weight $\\beta_j$. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "##### Question 1.4. (2pts)\n", "\n", "Compare the ISTA and FISTA updates by plotting the evolution of the loss $\\ell(\\boldsymbol{\\beta})$ as a function of the iteration number for both approaches. Take a sufficient number of iterations (1,000 to 10,000)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "max_iter = 1000  # take a sufficient number of iterations\n", "\n", "FISTA_loss = np.zeros((max_iter, 1))\n", "ISTA_loss = np.zeros((max_iter, 1))\n", "\n", "# add your code here\n", "\n", "plt.plot(FISTA_loss, label='FISTA')\n", "plt.plot(ISTA_loss, label='ISTA')\n", "plt.legend()\n", "plt.show()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Question 2. Logistic Regression\n", "\n", "__(12pts)__\n", "\n", "##### Question 2.1. Logistic regression (5pts)\n", "\n", "As we saw during the lectures, one approach to learning a (binary) linear discriminant is to combine the sigmoid activation function with the linear discriminant $\\beta_0 + \\boldsymbol{\\beta}^T \\mathbf{x}$. We then assume that the probability of observing a particular target ($0$ vs $1$) follows a Bernoulli distribution with parameter $\\sigma(\\tilde{\\boldsymbol{\\beta}}^T\\tilde{\\mathbf{x}})$, where $\\tilde{\\mathbf{x}} = (1, \\mathbf{x})$ and $\\tilde{\\boldsymbol{\\beta}} = (\\beta_0, \\boldsymbol{\\beta})$, i.e. we have\n", "\n", "$$\\left\\{\\begin{array}{l}\n", "P(t = 1|\\mathbf{x}) = \\sigma(\\tilde{\\boldsymbol{\\beta}}^T\\tilde{\\mathbf{x}})\\\\\n", "P(t = 0|\\mathbf{x}) = 1-\\sigma(\\tilde{\\boldsymbol{\\beta}}^T\\tilde{\\mathbf{x}})\\end{array}\\right.$$\n", "\n", "Since the samples are independent, the total likelihood reads as the product of the individual densities,\n", "\n", "$$P(\\left\\{t^{(i)}\\right\\}_{i=1}^N) = \\prod_{i=1}^N \\sigma(\\tilde{\\boldsymbol{\\beta}}^T\\tilde{\\mathbf{x}}^{(i)})^{t^{(i)}}(1-\\sigma(\\tilde{\\boldsymbol{\\beta}}^T\\tilde{\\mathbf{x}}^{(i)}))^{1-t^{(i)}}$$\n", "\n", "We can then take the log and compute the derivatives of the resulting expression with respect to each weight $\\beta_j$. Implement this approach below. Recall that the derivative of the sigmoid has a _simple expression_ in terms of $\\sigma$ itself. The first function below might not be needed in your implementation of 'solve_logisticRegression'." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Step 1: define the sigmoid activation and its derivative\n", "\n", "\n", "def sigmoid(x):\n", "\n", "    '''The function should return the sigmoid and its derivative at all the\n", "    points encoded in the vector x'''\n", "\n", "    # add your code here\n", "\n", "    return sig, deriv_sig\n", "\n", "\n", "def solve_logisticRegression(xi, ti, beta0, maxIter, eta):\n", "\n", "    '''The function should return the vector of weights in logistic regression\n", "    following from gradient descent iterations applied to the negative log likelihood'''\n", "\n", "    # add your code here\n", "\n", "    return beta\n", "\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "##### Question 2.2. Logistic regression and Fisher scoring (5pts)\n", "\n", "An interesting aspect of the MLE estimator in logistic regression (as opposed to many other objectives) is that the Hessian of the negative log likelihood is positive semi-definite, so the problem is convex. We can thus improve on the iterations by using a second order method (such as Newton's method), where the simpler gradient iterations $\\boldsymbol{\\beta}^{k+1}\\leftarrow \\boldsymbol{\\beta}^k - \\eta\\nabla \\ell(\\boldsymbol{\\beta}^k)$ are replaced by\n", "\n", "$$\\boldsymbol{\\beta}^{k+1}\\leftarrow \\boldsymbol{\\beta}^k - \\eta H^{-1}(\\boldsymbol{\\beta}^k)\\nabla \\ell(\\boldsymbol{\\beta}^k)$$\n", "\n", "Start by completing the function below, which should return the Hessian of the negative log likelihood. Note that since we move in the direction $-\\nabla_{\\boldsymbol{\\beta}}\\ell$, we are minimizing; you should therefore work with the negative log likelihood." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def HessianMLE(beta):\n", "\n", "    '''The function should return the Hessian (see https://en.wikipedia.org/wiki/Hessian_matrix)\n", "    of the negative log likelihood at a particular value of the weights beta'''\n", "\n", "    # add your code here\n", "\n", "    return HessianMatrix\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def Fisher_scoring(beta0, maxIter, eta):\n", "\n", "    '''The function should compute the logistic regression classifier by relying on Fisher scoring;\n", "    the iterates should start at beta0 and use a learning rate eta'''\n", "\n", "    while numIter