{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Multi-Variable Linear Regession" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction\n", "\n", "In this example, we will implement linear regression with multiple variables to predict the prices of houses.\n", "\n", "Suppose you are selling your house and you want to know what a good market price would be. One way to do this is to first collect information on recent houses sold and make a model of housing\n", "prices.\n", "\n", "The file ex1data2.txt contains a training set of housing prices in Portland, Oregon. The first column is the size of the house (in square feet), the second column is the number of bedrooms, and the third column is the price of the house.\n", "\n", " NOTE: The example and sample data is being taken from the \"Machine Learning course by Andrew Ng\" in Coursera." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# initial imports\n", "import numpy as np\n", "from matplotlib import pyplot as plt\n", "%matplotlib notebook\n", "import seaborn as sns\n", "\n", "# setting graph properties\n", "plt.rcParams['figure.dpi'] = 300 # setting figure dpi for better quality graphs\n", "plt.rcParams['figure.figsize'] = [10,8]\n", "sns.set(context=\"notebook\", style=\"white\") # graph styling using seaborn\n", "%config InlineBackend.figure_format = 'svg'" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "# imports from my models designed for these examples\n", "from models.linear_regression import compute_cost, gradient_descent, compute_gradient, normal_equation\n", "from models.data_preprocessing import add_bias_unit, feature_normalize" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loading and Displaying Data" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Loading data ...\n", "['House size' 'No. of bedrooms']\n", "[[2104.00 3.00]\n", " [1600.00 3.00]\n", " [2400.00 3.00]\n", " [1416.00 2.00]\n", " [3000.00 4.00]]\n" ] } ], "source": [ "print('Loading data ...')\n", "\n", "data = np.loadtxt('data/ex1data2.txt', delimiter=',')\n", "X = data[:, :-1] # 47x2\n", "y = data[:, -1, None] # 47x1\n", "m = y.size # 47\n", "\n", "# printing first 5 elements\n", "with np.printoptions(precision=2, floatmode='fixed', suppress=True, linewidth=150):\n", " print(np.array(['House size', 'No. of bedrooms']))\n", " print(X[:5, :])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Feature Normalization\n", "\n", "By looking at the values, note that house sizes are about\n", "1000 times the number of bedrooms. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Feature Normalization\n", "\n", "By looking at the values, note that house sizes are about 1000 times the number of bedrooms. When features differ by orders of magnitude, performing feature scaling first can make gradient descent converge much more quickly.\n", "\n", "The `feature_normalize` function implements:\n", "- Subtracting the mean value of each feature from the dataset.\n", "- Scaling (dividing) the mean-subtracted feature values by their respective standard deviations.\n", "\n", "The standard deviation is a way of measuring how much variation there is in the range of values of a particular feature (most data points will lie within ±2 standard deviations of the mean); this is an alternative to taking the range of values (max - min).\n", "\n", "For example, inside `feature_normalize`, the quantity `X[:, 0]` contains all the values of $x_{1}$ (house sizes) in the training set, so `np.std(X[:, 0])` computes the standard deviation of the house sizes.\n", "\n", "At the time the `feature_normalize` function is called, the extra column of 1's corresponding to $x_{0} = 1$ has not yet been added to X.\n", "\n", "**Implementation Note: When normalizing the features, it is important to store the values used for normalization - the mean value and the standard deviation used for the computations. After learning the parameters from the model, we often want to predict the prices of houses we have not seen before. Given a new x value (living room area and number of bedrooms), we must first normalize x using the mean and standard deviation that we had previously computed from the training set.**" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Normalizing the features\n", "['Bias' 'House size' 'No. of bedrooms']\n", "[[ 1.00 0.13 -0.23]\n", " [ 1.00 -0.51 -0.23]\n", " [ 1.00 0.51 -0.23]\n", " [ 1.00 -0.74 -1.55]\n", " [ 1.00 1.27 1.10]]\n" ] } ], "source": [ "# ---------------- Feature Normalization ----------------\n", "print(\"Normalizing the features\")\n", "\n", "X, mu, sigma = feature_normalize(X)\n", "# mu is the mean value of each feature\n", "# sigma is the standard deviation of each feature\n", "\n", "# adding intercept term to X\n", "X = add_bias_unit(X)\n", "\n", "# Printing first 5 normalized data points\n", "with np.printoptions(precision=2, floatmode='fixed', suppress=True, linewidth=150):\n", "    print(np.array(['Bias', 'House size', 'No. of bedrooms']))\n", "    print(X[:5, :])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note: the intercept term $x_{0} = 1$ is not normalized.**" ] },
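{ "cell_type": "markdown", "metadata": {}, "source": [ "The normalization helpers are imported from models.data_preprocessing, whose source is not shown in this notebook. Below is a minimal sketch of what they might look like (hypothetical names with a `_sketch` suffix; the real implementations may differ). Note how accepting previously computed mu and sigma lets us normalize new, unseen data points with the training-set statistics." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Hypothetical sketches of the preprocessing helpers; the actual\n", "# implementations in models.data_preprocessing may differ.\n", "def feature_normalize_sketch(X, mu=None, sigma=None):\n", "    # use training-set statistics when mu/sigma are not supplied\n", "    if mu is None:\n", "        mu = X.mean(axis=0)\n", "    if sigma is None:\n", "        sigma = X.std(axis=0)\n", "    return (X - mu) / sigma, mu, sigma\n", "\n", "def add_bias_unit_sketch(X):\n", "    # prepend a column of ones (the x0 = 1 intercept term)\n", "    return np.hstack([np.ones((X.shape[0], 1)), X])" ] },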
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Gradient Descent\n", "\n", "Previously, we implemented gradient descent on a univariate regression problem. The only difference between the univariate and multivariate cases is that the data matrix X now contains more than one feature column. The hypothesis function and the batch gradient descent update rule remain unchanged.\n", "\n", "### Update equation\n", "\n", "The objective of linear regression is to minimize the cost function:\n", "\n", "$$ J(\\theta) = \\frac{1}{2m}\\sum_{i=1}^{m} \\left( h_{\\theta}\\left( x^{(i)} \\right) - y^{(i)} \\right) ^{2} $$\n", "\n", "where the hypothesis $h_{\\theta}(x)$ is given by the linear model:\n", "$$\n", "h_{\\theta}(x) = \\theta^{T}x = \\theta_{0} + \\theta_{1}x_{1} + \\dots + \\theta_{n}x_{n}\n", "$$\n", "\n", "The parameters of our model are the $\\theta_{j}$ values. These are the values we will adjust to minimize the cost J(θ). One way to do this is to use the batch gradient descent algorithm. In batch gradient descent, each iteration performs the update\n", "\n", "$\\theta_{j} := \\theta_{j} - \\alpha \\frac{1}{m} \\sum_{i=1}^{m} \\left(h_{\\theta} \\left(x^{(i)}\\right) -y^{(i)}\\right)x^{(i)}_{j}$ (simultaneously update $\\theta_{j}$ for all j)\n", "\n", "With each step of gradient descent, our parameters $\\theta_{j}$ come closer to the optimal values that will achieve the lowest cost J(θ)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Implementation Note\n", "\n", "In the multivariate case, the cost function can also be written in the following vectorized form:\n", "\n", "$$\n", "J(\\theta) = \\frac{1}{2m} \\left( X\\theta - \\vec{y} \\right)^{T} \\left( X\\theta - \\vec{y} \\right)\n", "$$\n", "\n", "where\n", "$$\n", "X = \\begin{bmatrix} \\dots & \\left( x^{(1)} \\right)^{T} & \\dots \\\\\n", "\\dots & \\left( x^{(2)} \\right)^{T} & \\dots \\\\\n", "& \\vdots & \\\\\n", "\\dots & \\left( x^{(m)} \\right)^{T} & \\dots \\end{bmatrix} \\space\\space\\space\\space\\space \\vec{y} = \\begin{bmatrix} y^{(1)} \\\\ y^{(2)} \\\\ \\vdots \\\\ y^{(m)} \\end{bmatrix}\n", "$$\n", "\n", "The vectorized version is efficient when you're working with numerical computing tools like Octave/MATLAB/NumPy. If you are an expert with matrix operations, you can prove to yourself that the two forms are equivalent." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Running gradient descent ...\n", "Obtained theta is: [340410.91897274 109162.68848142 -6293.24735132]\n" ] }, { "data": { "text/plain": [ "Text(0.5, 1.0, 'Convergence Graph with appropriate learning rate')" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" }, { "data": { "text/plain": [ "<Figure: convergence of cost J over 400 iterations (alpha = 0.03)>" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "print('Running gradient descent ...')\n", "\n", "# Choose some alpha value\n", "alpha = 0.03\n", "num_iters = 400\n", "\n", "# Init Theta and Run Gradient Descent\n", "initial_theta = np.zeros((X.shape[1], 1))\n", "theta, J_history = gradient_descent(X, y, initial_theta, alpha, num_iters)\n", "\n", "print(\"Obtained theta is: {}\".format(theta.ravel()))\n", "\n", "# Convergence graph with an appropriate learning rate\n", "plt.plot(range(J_history.size), J_history, lw=2)\n", "plt.xlabel(\"Number of Iterations\")\n", "plt.ylabel(\"Cost J\")\n", "plt.title(\"Convergence Graph with appropriate learning rate\")" ] },
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "print('Running gradient descent ...')\n", "\n", "# Choose some alpha value\n", "alpha = 0.03\n", "num_iters = 400\n", "\n", "# Init Theta and Run Gradient Descent\n", "initial_theta = np.zeros((X.shape[1], 1))\n", "theta, J_history = gradient_descent(X, y, initial_theta, alpha, num_iters)\n", "\n", "print(\"Obtained theta is: {}\".format(theta.ravel()))\n", "\n", "# Covariance Graph with appropriate learning rate\n", "plt.plot(range(J_history.size), J_history, lw=2)\n", "plt.xlabel(\"Number of Iterations\")\n", "plt.ylabel(\"Cost J\")\n", "plt.title(\"Covariance Graph with appropriate learning rate\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Selecting Learning Rates\n", "\n", "In this part of the exercise, you will get to try out different learning rates for\n", "the dataset and find a learning rate that converges quickly. \n", "\n", "The next phase we are going to call our gradient_descent function and run gradient descent for about 50 iterations at the chosen learning rate. The function should also return the history of J(θ) values in a vector J_history. After the last iteration, we are going to plot the J values against the number of the iterations.\n", "\n", "Note: If your graph looks very different, especially if your value of J(θ)\n", "increases or even blows up, adjust your learning rate and try again. We recommend trying values of the learning rate α on a log-scale, at multiplicative\n", "steps of about 3 times the previous value (i.e., 0.3, 0.1, 0.03, 0.01 and so on).\n", "You may also want to adjust the number of iterations you are running if that\n", "will help you see the overall trend in the curve." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Text(0, 0.5, 'Cost J')" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" }, { "data": { "image/svg+xml": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", 
" \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "\n" ], "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "# runnning gradient descend with different alphas\n", "theta1, J_history1 = gradient_descent(X, y, initial_theta, 0.01, num_iters)\n", "theta2, J_history2 = gradient_descent(X, y, initial_theta, 0.1, num_iters)\n", "theta3, J_history3 = gradient_descent(X, y, initial_theta, 0.3, num_iters)\n", "theta4, J_history4 = gradient_descent(X, y, initial_theta, 0.003, num_iters)\n", "\n", "# Plot the convergence graph\n", "fig = plt.figure(\"Covariance Graph\")\n", "ax = fig.subplots()\n", "ax.set_title(\"Convergence Graph with multiple alphas\")\n", "ax.plot(range(J_history4.size), J_history4, lw=2,c='c', label=\"alpha = 0.003\")\n", "ax.plot(range(J_history.size), J_history, lw=2, c='r', label=\"alpha = 0.03\")\n", "ax.plot(range(J_history1.size), J_history1, lw=2, c='g' , label=\"alpha = 0.01\")\n", "ax.plot(range(J_history2.size), J_history2, lw=2, c='b', label=\"alpha = 0.1\")\n", "ax.plot(range(J_history3.size), J_history3, lw=2, c='y', label=\"alpha = 0.3\")\n", "\n", "ax.legend()\n", "ax.set_xlabel('Number of iterations')\n", "ax.set_ylabel('Cost J')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice the changes in the convergence curves as the learning rate changes.\n", "With a small learning rate, you should find that gradient descent takes a very\n", "long time to converge to the optimal value. Conversely, with a large learning\n", "rate, gradient descent might not converge or might even diverge!" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Theta computed from gradient descent: \n", "[340410.91897274 109162.68848142 -6293.24735132]\n", "\n", "Predicted price of a 1650 sq-ft, 3 br house (using gradient descent): 293142.43\n" ] } ], "source": [ "# Display gradient descent's result\n", "print('Theta computed from gradient descent: ')\n", "print(theta.ravel())\n", "\n", "# Estimate the price of a 1650 sq-ft, 3 br house\n", "\n", "test_data = np.array([1650,3]).reshape(1, 2)\n", "test_data, _, __ = feature_normalize(test_data, mu, sigma)\n", "test_data = add_bias_unit(test_data)\n", "price = test_data.dot(theta)\n", "# ============================================================\n", "\n", "print('\\nPredicted price of a 1650 sq-ft, 3 br house (using gradient descent): {:.2f}'.format(price[0,0]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Extra (Normal Equation):\n", "\n", "In **Normal Equation** method, we will minimize J by explicitly taking its derivatives with respect to $\\theta_{j}$'s amd setting them to zero. This allows us to find the optimum theta withut iterations.\n", "\n", "In case of small datasets it is preferable to use normal equation to find the minimum theta values.\n", "\n", "Key feature of Normal Equation are:\n", "- No need to choose learning rate alpha.\n", "- No need of iterations.\n", "- better option when the number of features is small.\n", "- No need of feature scaling or normalization.\n", "\n", "The vectorized formula for normal equation is:\n", "\n", "$$ \\theta = \\left( X^{T}X \\right)^{-1}X^{T}\\vec{y}$$\n", "\n", "where:\n", "$X^{T}$ is the transpose of matrix X.\n", "\n", "Note: the X matrix should contain the $x_{0} = 1$ bias unit in each data point." 
{ "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Loading data ...\n", "Solving with normal equations...\n", "Theta computed from the normal equations: \n", "[89597.9095428 139.21067402 -8738.01911233]\n", "Predicted price of a 1650 sq-ft, 3 br house (using normal equations): 293081.46\n" ] } ], "source": [ "# ================ Normal Equations ================\n", "\n", "print('Loading data ...')\n", "\n", "data = np.loadtxt('data/ex1data2.txt', delimiter=',')\n", "X = data[:, :-1] # 47x2\n", "y = data[:, -1, None] # 47x1\n", "m = y.size # 47\n", "\n", "# adding bias unit to data (no normalization needed for the normal equation)\n", "X = add_bias_unit(X)\n", "\n", "print('Solving with normal equations...')\n", "\n", "# Calculate the parameters from the normal equation\n", "theta_norm = normal_equation(X, y)\n", "\n", "# Display normal equation's result\n", "print('Theta computed from the normal equations: ')\n", "print(theta_norm.ravel())\n", "\n", "# Estimate the price of a 1650 sq-ft, 3 br house\n", "test_data1 = np.array([1650, 3]).reshape(1, 2)\n", "test_data1 = add_bias_unit(test_data1)\n", "price_norm = test_data1.dot(theta_norm)\n", "\n", "print('Predicted price of a 1650 sq-ft, 3 br house (using normal equations): {:.2f}'.format(price_norm[0,0]))" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.8" } }, "nbformat": 4, "nbformat_minor": 4 }