{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# NYU Introduction to Machine Learning\n", "## Assignment 2\n", "\n", "__Given date: February 20__\n", "\n", "__Due Date: March 8__\n", "\n", "__Total: 25pts__\n", "\n", "In this assignment you will implement the main regularization approaches as well as cross validation. You will study how the OLS criterion that we used in regression can be extended to classification. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Question 1. Understanding Cross validation (5pts)\n", "\n", "Using the lines below load the dataset 'Assignment2_Ex1_xi' and 'Assignment2_Ex1_ti'. Each of the points in the training set is represented by 5 features $x_{i,1}$, $x_{i,2}, \\ldots x_{i,5}$. Among those features we want to find those which are the most meaningful to the description of the targets $t_i$. You can think of the targets as expressing for example the probability to develop a particular trait or disease and the features as encoding the expressivity of particular genes. In such a framework the objective would thus mean finding the genes that most influence the particular trait. For this we will implement a Best Subset Selection approach with cross validation. Complete the cell below by implementing the following steps\n", "\n", "__1.__ For each number of weights (d=1 to 5) compute all the subsets (beta_i, beta_j, ...) of size d of weights. \n", "__2.__ Split the training set in K=5 bins, for each bin k=1,...5, learn the weights by using the linear_regression function of scikit learn (do not reimplement gradient descent except if you really have too much time). Learn the weights on the remaining K-1 bins then comute the MSE on bin k. \n", "__3.__ Find the optimal subset of coefficients by comparing the MSE and plot the MSE as a function of the number k of weights by averaging the errors over the size k subsets. I.e MSE(1) = (1/5)(MSE(beta0) + MSE(beta1) + ...MSE(beta4))\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "xi = np.load('Assignment2_Ex1_xi.npy')\n", "ti = np.load('Assignment2_Ex1_ti.npy')\n", "\n", "D = 5 # number of coefficients \n", "\n", "K = 5 # number of bins used for cross validation\n", "\n", "# Note that K does not have to be equal to D \n", "# (this is a choice we make here but we could have taken any value for K)\n", "\n", "# Step 1: Finding the optimal d\n", "\n", "MSE = np.zeros((D,1))\n", "\n", "for d in np.arange(0,D-1):\n", " \n", " # 1) select each of the (D choose d) subset of coefficient and learn a \n", " # model and compute the MSE\n", " \n", " \n", " \n", " \n", " # 2) once you have computed the MSE for each of the \n", " \n", " \n", " \n", " \n", "# Step 2 plotting the evolution of the average prediction error as a function of the number of coefficient\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Question 2. Predicting graduate admissions (5pts)\n", "\n", "In this second question, we want to predict admission to graduate school based on a collection of features [provided by Kaggle](https://www.kaggle.com/mohansacharya/graduate-admissions) \n", "including: \n", "\n", "- GRE and TOEFL Scores\n", "- University Rating \n", "- Letter of Recommendation Strength \n", "- Undergraduate GPA \n", "- ...\n", " \n", "We want to learn a ridge regression model (use the scikit learn model with the fit and predict functions). \n", "\n", "- Start by splitting the dataset into a training (about 90%) an a test (remaining 10%) parts using a call to the train_test_split function from the model_selection module. Put the test aside for the rest of the exercise. \n", "\n", "- Now that you are perfectly comfortable with the idea of cross validation, we will also try to evaluate the optimal lambda in the Ridge regression model. For this, you can use an extension of scikit learn Ridge regression model: sklearn.linear_model.RidgeCV. This extension lets you specify an array of $\\lambda$ values ($\\alpha$ in scikit learn) to try. The best value is then returned through a call to the 'alpha_' attribute of the model (read the [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html) for more details). Train the model (both lambda and beta) on the training subset of item 1.\n", "\n", "- Finally evaluate the prediction of your model on the 10% test set you kept on the side at the beginning. \n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "df = pd.read_csv(\"Admission_Predict.csv\")\n", "\n", "# put your code here \n", "\n", "\n", "from sklearn.linear_model import RidgeCV" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Question 3. Iterative Hard and Soft Thresholding (10pts)\n", "\n", "\n", "An alternative to the simple OLS criterion or to the Ridge regression model, LASSO regression minimizes a combination of a data fidelity term and a penalty on the sum of the absolute values of the regression coefficients, i.e.\n", "\n", "\\begin{align}\n", "\\ell(\\boldsymbol \\beta ) = \\frac{1}{N}\\sum_{i=1}^N (t^{(i)} - (\\boldsymbol{\\beta}^T \\boldsymbol x^{(i)}))^2 + \\lambda \\sum_{j=0}^D |\\beta_j|, \\quad (\\text{LASSO})\n", "\\end{align}\n", "\n", "One of the main difficulty with the LASSO lies in the non differentiability of the absolute value which appears in the regularization term. Because of the use of the absolute value, the gradient cannot be computed at 0. Instead of relying on gradient updates, we can instead turn to the constrained formulation\n", "\n", "\n", "\\begin{align}\n", "\\min & \\quad \\ell(\\boldsymbol \\beta ) = \\frac{1}{N}\\sum_{i=1}^N (t^{(i)} - (\\boldsymbol{\\beta}^T \\boldsymbol x^{(i)}))^2\\\\\n", "\\text{subject to}& \\quad \\sum_{j=0}^D |\\beta_j|\\leq t\n", "\\end{align}\n", "\n", "The drawback with such a formulation is that we now have to solve a constrained problem. A common approach relies on the use of thresholding algorithms and in particular to the class of so-called _iterative shrinkage-thresholding algorithms (ISTA)_. If we write the OLS objective in matrix form as $\\ell(\\boldsymbol \\beta) = \\frac{1}{2N}\\|\\tilde{\\mathbf{X}}\\mathbf{\\beta} - \\mathbf{t}\\|_2^2$, Iterative shrinkage-thresholding algorithms are based on the following update :\n", "\n", "\n", "\\begin{align}\n", "\\mathbf{\\beta} \\leftarrow \\mathcal{T}_{\\lambda \\eta}\\left\\{\\mathbf{\\beta} - 2\\eta \\tilde{\\mathbf{X}}^T\\left(\\tilde{\\mathbf{X}}\\mathbf{\\beta} - \\mathbf{t}\\right) \\right\\}\n", "\\end{align}\n", "\n", "where $\\mathcal{T}$ is the thresholding operator \n", "\\begin{align}\n", "\\mathcal{T}_{\\alpha}(\\mathbf{\\beta})_i = \\left(|\\beta_i| - \\alpha\\right)_+\\text{sign}(\\beta_i)\n", "\\end{align}\n", "\n", "Here $\\left(|\\beta_i| - \\alpha\\right)_+ = \\max\\left\\{|\\beta_i| - \\alpha, 0\\right\\}$ and $\\text{sign}(\\beta_i)$ denotes the sign of the coefficient $\\beta_i$. From the definition above, you can also see that $\\lambda \\eta$ acts as a threshold on the $\\beta_i$. The larger $\\lambda$, the more $\\beta_i$ will be set to $0$. \n", "\n", "\n", "#### Question 3.1 (6pts) Complete the function ISTA below which should return a vector of weights $\\mathbf{\\beta}$, starting from some initial guess $\\beta_{\\text{init}}$ and for a training set stored in the matrix $X$ and vector of targets $t$." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\n", "def ISTA(beta_init, lamb, eta, X, t):\n", " \n", " # function should apply the Iterative Shrinkage \n", " # Thresholding updates, starting from Beta_init and \n", " # for a set of feature vectors stored in matrix X \n", " # with associated targets stored in t.\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " return beta_ISTA \n", " \n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Question 3.2 (4pts) Test your algorithm on the dataset provided by the two files 'Assignment2_Ex32_Xi.npy' and 'Assignment2_Ex32_ti.npy' below. as above, two of the weights are irrelevant. Try to tune the parameters $\\eta$ and $\\lambda$ and study when you can recover those. " ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "\n", "X = np.load('Assignment2_Ex32_Xi.npy')\n", "t = np.load('Assignment2_Ex32_ti.npy')\n", "\n", "\n", "# put your code here \n", "\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Question 4 (5pts). From regression to binary classification: Predicting deaths on the Titanic\n", "\n", "We have seen how the OLS objective can be used to learn a regression model. This objective remains in fact absolutely valid in the classification framework. In binary classification, the targets associated to the feature vectors take one of two values (let us say $1$ and $0$ or $+1$ and $-1$). If we want to learn a model that classifies some feature vectors $\\mathbf{x}^{(i)}$ as belonging to class $\\mathcal{C}_0$ vs $\\mathcal{C}_1$ and we are given a training set $C_{0, \\text{tr}} = \\left\\{\\mathbf{x}^{(i)}\\right\\}_{i=1}^{N_0}$ and $C_{1, \\text{tr}} = \\left\\{\\mathbf{x}^{(j)}\\right\\}_{j=1}^{N_1}$, we can try to learn a separating plane $\\beta_0 +\\beta_1 x_1 + \\ldots \\beta_D x_D$ such that $\\beta_0 +\\beta_1 x^{(i)}_1 + \\ldots \\beta_D x^{(i)}_D =+1 $ for all $x^{(i)}\\in C_0$ and $\\beta_0 +\\beta_1 x^{(j)}_1 + \\ldots \\beta_D x^{(j)}_D =-1$ for all $x^{(j)}$ in $\\mathcal{C}_1$. \n", "\n", "For any new point $\\mathbf{x}$ of unknown class, we can then compute $\\beta_0 +\\beta_1x_1 + \\ldots +\\beta_D x_D$ and classify our point as belonging to $C_0$ if $\\beta_0 +\\beta_1x_1 + \\ldots +\\beta_D x_D>0$.\n", "\n", "Combine this idea with the linear regression model from scikit learn to learn a linear binary classifier for the ['Titanic'](https://www.kaggle.com/c/titanic/data?select=test.csv) dataset from Kaggle. Start by loading the training and test data from this dataset and then complete the cell below.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "from sklearn.linear_model import LinearRegression\n", "\n", "training_data = pd.read_csv('train.csv')\n", "\n", "# Step 1. \n", "# =========================================================================\n", "# Use the linearRegression model from scikit learn with binary \n", "# targets to predict the passengers that will survive and di in the \n", "# case of the sinking of a ship. Start by turning the class targets to \n", "# binary or +1/-1 values. Then turn possible non numeric features to numbers. Finally \n", "# learn the separating plane.\n", "\n", "\n", "\n", "\n", "\n", "# Step 2. \n", "# =========================================================================\n", "# Validate your model on the test set and compute the fraction of correctly \n", "# classified samples using the function accuracy_score from the sklearn.metrics module\n", "\n", "from sklearn.metrics import accuracy_score\n", "\n", "training_data = pd.read_csv('test.csv')\n", "\n", "\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 }