{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Regression Week 4: Ridge Regression (gradient descent)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this notebook, you will implement ridge regression via gradient descent. You will:\n", "* Convert an SFrame into a Numpy array\n", "* Write a Numpy function to compute the derivative of the regression weights with respect to a single feature\n", "* Write gradient descent function to compute the regression weights given an initial weight vector, step size, tolerance, and L2 penalty" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Fire up graphlab create" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Make sure you have the latest version of GraphLab Create (>= 1.7)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "import graphlab as gl" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Load in house sales data\n", "\n", "Dataset is from house sales in King County, the region where the city of Seattle, WA is located." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "ename": "AttributeError", "evalue": "'module' object has no attribute 'SFrame'", "output_type": "error", "traceback": [ "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[1;31mAttributeError\u001b[0m Traceback (most recent call last)", "\u001b[1;32m\u001b[0m in \u001b[0;36m\u001b[1;34m()\u001b[0m\n\u001b[1;32m----> 1\u001b[1;33m \u001b[0msales\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mgl\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mSFrame\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m'data/kc_house_data.gl/'\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m", "\u001b[1;31mAttributeError\u001b[0m: 'module' object has no attribute 'SFrame'" ] } ], "source": [ "sales = gl.SFrame('data/kc_house_data.gl/')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we want to do any \"feature engineering\" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def calcRSS(model, features, output):\n", " predict = model.predict(features)\n", " error = output - predict\n", " rss = np.sum(np.square(error))\n", " return rss" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Import useful functions from previous notebook" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste `get_numpy_data()` from the second notebook of Week 2." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import numpy as np # note this allows us to refer to numpy as np instead " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def get_numpy_data(data_sframe, features, output):\n", " data_sframe['constant'] = 1 # add a constant column to an SFrame\n", " # prepend variable 'constant' to the features list\n", " features = ['constant'] + features\n", " # select the columns of data_SFrame given by the ‘features’ list into the SFrame ‘features_sframe’\n", " features_sframe = data_sframe[features]\n", " # this will convert the features_sframe into a numpy matrix with GraphLab Create >= 1.7!!\n", " features_matrix = features_sframe.to_numpy()\n", " # assign the column of data_sframe associated with the target to the variable ‘output_sarray’\n", " output_sarray['target']=output\n", " \n", "\n", " # this will convert the SArray into a numpy array:\n", " output_array = output_sarray.to_numpy() # GraphLab Create>= 1.7!!\n", " return(features_matrix, output_array)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Also, copy and paste the `predict_output()` function to compute the predictions for an entire matrix of features given the matrix and the weights:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Computing the Derivative" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output, plus the L2 penalty term.\n", "```\n", "Cost(w)\n", "= SUM[ (prediction - output)^2 ]\n", "+ l2_penalty*(w[0]^2 + w[1]^2 + ... + w[k]^2).\n", "```\n", "\n", "Since the derivative of a sum is the sum of the derivatives, we can take the derivative of the first part (the RSS) as we did in the notebook for the unregularized case in Week 2 and add the derivative of the regularization part. As we saw, the derivative of the RSS with respect to `w[i]` can be written as: \n", "```\n", "2*SUM[ error*[feature_i] ].\n", "```\n", "The derivative of the regularization term with respect to `w[i]` is:\n", "```\n", "2*l2_penalty*w[i].\n", "```\n", "Summing both, we get\n", "```\n", "2*SUM[ error*[feature_i] ] + 2*l2_penalty*w[i].\n", "```\n", "That is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself, plus `2*l2_penalty*w[i]`. \n", "\n", "**We will not regularize the constant.** Thus, in the case of the constant, the derivative is just twice the sum of the errors (without the `2*l2_penalty*w[0]` term).\n", "\n", "Recall that twice the sum of the product of two vectors is just twice the dot product of the two vectors. Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors, plus `2*l2_penalty*w[i]`.\n", "\n", "With this in mind complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points). To decide when to we are dealing with the constant (so we don't regularize it) we added the extra parameter to the call `feature_is_constant` which you should set to `True` when computing the derivative of the constant and `False` otherwise." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def feature_derivative_ridge(errors, feature, weight, l2_penalty, feature_is_constant):\n", " # If feature_is_constant is True, derivative is twice the dot product of errors and feature\n", " \n", " # Otherwise, derivative is twice the dot product plus 2*l2_penalty*weight\n", " \n", " return derivative" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To test your feature derivartive run the following:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') \n", "my_weights = np.array([1., 10.])\n", "test_predictions = predict_output(example_features, my_weights) \n", "errors = test_predictions - example_output # prediction errors\n", "\n", "# next two lines should print the same values\n", "print feature_derivative_ridge(errors, example_features[:,1], my_weights[1], 1, False)\n", "print np.sum(errors*example_features[:,1])*2+20.\n", "print ''\n", "\n", "# next two lines should print the same values\n", "print feature_derivative_ridge(errors, example_features[:,0], my_weights[0], 1, True)\n", "print np.sum(errors)*2." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Gradient Descent" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of *increase* and therefore the negative gradient is the direction of *decrease* and we're trying to *minimize* a cost function. \n", "\n", "The amount by which we move in the negative gradient *direction* is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. Unlike in Week 2, this time we will set a **maximum number of iterations** and take gradient steps until we reach this maximum number. If no maximum number is supplied, the maximum should be set 100 by default. (Use default parameter values in Python.)\n", "\n", "With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent, we update the weight for each feature before computing our stopping criteria." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations=100):\n", " print 'Starting gradient descent with l2_penalty = ' + str(l2_penalty)\n", " \n", " weights = np.array(initial_weights) # make sure it's a numpy array\n", " iteration = 0 # iteration counter\n", " print_frequency = 1 # for adjusting frequency of debugging output\n", " \n", " #while not reached maximum number of iterations:\n", " iteration += 1 # increment iteration counter\n", " ### === code section for adjusting frequency of debugging output. ===\n", " if iteration == 10:\n", " print_frequency = 10\n", " if iteration == 100:\n", " print_frequency = 100\n", " if iteration%print_frequency==0:\n", " print('Iteration = ' + str(iteration))\n", " ### === end code section ===\n", " \n", " # compute the predictions based on feature_matrix and weights using your predict_output() function\n", "\n", " # compute the errors as predictions - output\n", "\n", " # from time to time, print the value of the cost function\n", " if iteration%print_frequency==0:\n", " print 'Cost function = ', str(np.dot(errors,errors) + l2_penalty*(np.dot(weights,weights) - weights[0]**2))\n", " \n", " for i in xrange(len(weights)): # loop over each weight\n", " # Recall that feature_matrix[:,i] is the feature column associated with weights[i]\n", " # compute the derivative for weight[i].\n", " #(Remember: when i=0, you are computing the derivative of the constant!)\n", "\n", " # subtract the step size times the derivative from the current weight\n", " \n", " print 'Done with gradient descent at iteration ', iteration\n", " print 'Learned weights = ', str(weights)\n", " return weights" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Visualizing effect of L2 penalty" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The L2 penalty gets its name because it causes weights to have small L2 norms than otherwise. Let's see how large weights get penalized. Let us consider a simple model with 1 feature:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "simple_features = ['sqft_living']\n", "my_output = 'price'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us split the dataset into training set and test set. Make sure to use `seed=0`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "train_data,test_data = sales.random_split(.8,seed=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this part, we will only use `'sqft_living'` to predict `'price'`. Use the `get_numpy_data` function to get a Numpy versions of your data with only this feature, for both the `train_data` and the `test_data`. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)\n", "(simple_test_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's set the parameters for our optimization:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "initial_weights = np.array([0., 0.])\n", "step_size = 1e-12\n", "max_iterations=1000" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let's consider no regularization. Set the `l2_penalty` to `0.0` and run your ridge regression algorithm to learn the weights of your model. Call your weights:\n", "\n", "`simple_weights_0_penalty`\n", "\n", "we'll use them later." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, let's consider high regularization. Set the `l2_penalty` to `1e11` and run your ridge regression algorithm to learn the weights of your model. Call your weights:\n", "\n", "`simple_weights_high_penalty`\n", "\n", "we'll use them later." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This code will plot the two learned models. (The blue line is for the model with no regularization and the red line is for the one with high regularization.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "plt.plot(simple_feature_matrix,output,'k.',\n", " simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_0_penalty),'b-',\n", " simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_high_penalty),'r-')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Compute the RSS on the TEST data for the following three sets of weights:\n", "1. The initial weights (all zeros)\n", "2. The weights learned with no regularization\n", "3. The weights learned with high regularization\n", "\n", "Which weights perform best?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***QUIZ QUESTIONS***\n", "1. What is the value of the coefficient for `sqft_living` that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization?\n", "2. Comparing the lines you fit with the with no regularization versus high regularization, which one is steeper?\n", "3. What are the RSS on the test data for each of the set of weights above (initial, no regularization, high regularization)? \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Running a multiple regression with L2 penalty" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us now consider a model with 2 features: `['sqft_living', 'sqft_living15']`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, create Numpy versions of your training and test data with these two features. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors. \n", "my_output = 'price'\n", "(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)\n", "(test_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We need to re-inialize the weights, since we have one extra parameter. Let us also set the step size and maximum number of iterations." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "initial_weights = np.array([0.0,0.0,0.0])\n", "step_size = 1e-12\n", "max_iterations = 1000" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let's consider no regularization. Set the `l2_penalty` to `0.0` and run your ridge regression algorithm to learn the weights of your model. Call your weights:\n", "\n", "`multiple_weights_0_penalty`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, let's consider high regularization. Set the `l2_penalty` to `1e11` and run your ridge regression algorithm to learn the weights of your model. Call your weights:\n", "\n", "`multiple_weights_high_penalty`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Compute the RSS on the TEST data for the following three sets of weights:\n", "1. The initial weights (all zeros)\n", "2. The weights learned with no regularization\n", "3. The weights learned with high regularization\n", "\n", "Which weights perform best?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Predict the house price for the 1st house in the test set using the no regularization and high regularization models. (Remember that python starts indexing from 0.) How far is the prediction from the actual price? Which weights perform best for the 1st house?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "***QUIZ QUESTIONS***\n", "1. What is the value of the coefficient for `sqft_living` that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization?\n", "2. What are the RSS on the test data for each of the set of weights above (initial, no regularization, high regularization)? \n", "3. We make prediction for the first house in the test set using two sets of weights (no regularization vs high regularization). Which weights make better prediction for that particular house?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python [conda env:gl-env]", "language": "python", "name": "conda-env-gl-env-py" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.13" } }, "nbformat": 4, "nbformat_minor": 1 }