{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Lecture 13: Linear regression" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Today we will learn the regression, which is the simpliest model in machine learning.\n", "\n", "In previous lab practice, we learned how to import data from [UCI machine learning dataset repository](https://archive.ics.uci.edu/ml/datasets.html). \n", "\n", "* Download `winequality-red.csv` from [UCI machine learning repo on Kaggle](https://www.kaggle.com/uciml/red-wine-quality-cortez-et-al-2009/), unzip it and put it in the same directory with this notebook.\n", "* Check the csv file using Excel on the lab computer. Import the data using the following command." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import pandas as pd\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Check the [csv preview on Kaggle](https://www.kaggle.com/uciml/red-wine-quality-cortez-et-al-2009), we will use the first three columns: `fixed acidity`, `volatile acidity`, and `citric acid`, we are interested how these every two quantities are related." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wine_data = pd.read_csv('winequality-red.csv')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.scatter(wine_data['citric acid'], wine_data['fixed acidity'], alpha=0.2) \n", "# alpha makes the dots a little transparent\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Question 1: fixed acidity vs citric acid concentration\n", "\n", "We would like to fit a line to this data. i.e., based on this information, what is the most likely linear relationship between the fixed acid and citric acid concentration of wine?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since we are doing a linear model: if $x$ is the citric acid concentration, then the fixed acidity $y$ should be \n", "\n", "$$ y = w x + b.$$\n", "\n", "So we are looking for $w$ (weight) and $b$ (bias) that will fit the line as well as possible to the data. What does that mean though? It means that we want to minimize the error that our linear model $y = wx + b$ will have on predicting the weight from the height on our existing data. On a fixed acidity-citric acid concentration pair $(x_i, y_i)$ from our data-set, the model is guessing $y = w x_i + b$, and the actual answer is $y_i$. The total squared error (called *Loss*) is:\n", "\n", "$$L(w,b) = \\sum_{i=1}^{N} \\Big((w x_i + b) - y_i\\Big)^2$$\n", "\n", "where $\\{(x_i, y_i)\\}$ are our fixed acidity-citric acid concentration pairs. \n", "\n", "* \"Why squared error and not sum of absolute values?\" we might ask, both are good options. There are good reasons to go for squared error as the first choice.\n", "\n", "We want to minimize this squared error function above. We can:\n", "\n", "* Solve $\\nabla_{w,b} L(w,b) = 0$. The gradient will be zero at a local minimum.\n", "* Use gradient descent.\n", "\n", "**Remark**: the loss function is a function in $w$ and $b$, not $x$ and $y$!!!!." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Reading: solving $\\nabla_{w,b} L(w,b) = 0$: \n", "\n", "Solving the gradient = 0. You write down the partial derivatives and solve the linear equations for $w$ and $b$. 
\n", "$$\\frac{\\partial}{\\partial b} L(w,b) = 2 \\sum\\limits_{i=1}^N \\big(w x_i + b-y_i\\big) = 0$$\n", "\n", "$$\\frac{\\partial}{\\partial w} L(w,b) = 2 \\sum\\limits_{i=1}^N (w x_i + b-y_i) \\cdot x_i) = 0$$\n", "\n", "Simplifying the first equation is straightforward:\n", "$$wX + Nb − Y = 0, \\quad \\text{where} \\quad X = \\sum\\limits_{i=1}^N x_i, \\quad Y = \\sum\\limits_{i=1}^N y_i .$$\n", "Simplifying the second equaton:\n", "$$\n", "w\\sum\\limits_{i=1}^N x_i^2 - b\\sum\\limits_{i=1}^N x_i - \\sum\\limits_{i=1}^N x_i y_i\n", "= wA + bX - C = 0, \n", "\\quad \\text{where} \\quad A = \\sum\\limits_{i=1}^N x_i^2, \\quad C = \\sum\\limits_{i=1}^N x_i y_i.\n", "$$\n", "Solving this linear system yields:\n", "$$\n", "w = \\frac{XY - nC}{X^2 - NA}, \\quad \\text{ and } \\quad b = \\frac{AY - CX}{NA - X^2}.\n", "$$\n", "If we digging deeper to simplify:\n", "$$\n", "w =\\frac{\\sum_{i=1}^{N}(x_{i}-\\bar{x})(y_{i}-\\bar{y})}{\\sum_{i=1}^{N}(x_{i}-\\bar{x})^{2}}\n", "\\quad \\text{ and } b = \\bar{y} - w\\bar{x}.\n", "$$\n", "\n", "## Least square\n", "\n", "This is called a *closed-form solution* because we are getting straight to the answer. (no gradient descent or approximation)\n", "\n", "This is solving the normal equation for the least square problem." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x = wine_data['citric acid']\n", "y = wine_data['fixed acidity']\n", "N = len(x)\n", "X = np.sum(x)\n", "A = np.sum(x * x) # sum of the squares\n", "C = np.sum(x * y) # sum of x_i * y_i\n", "Y = np.sum(y)\n", "w = (X*Y - N*C) / (X**2- N*A)\n", "b = (A*Y - C*X) / (N*A - X**2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# more statistical representation\n", "x_bar = np.mean(x)\n", "y_bar = np.mean(y)\n", "w = np.sum( (x-x_bar) * (y-y_bar) )/ np.sum( (x-x_bar)**2 )\n", "b = y_bar - w*x_bar" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Cross-validation/Testing" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "XX = np.linspace(0,1,200)\n", "YY = w * XX + b\n", "plt.scatter(wine_data['citric acid'], wine_data['fixed acidity'], alpha=0.1)\n", "plt.plot(XX,YY,color='red',linewidth = 4, alpha=0.4)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Exploratory Data Analysis using seaborn\n", "\n", "For data analysis, Exploratory Data Analysis (EDA) can be our first step. EDA helps us to:\n", "\n", "* To give insight into a data set.\n", "* Understand the underlying structure.\n", "* Extract important parameters and relationships that hold between them.\n", "* Test underlying models/hypothesis." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import seaborn as sns\n", "sns.set()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.distplot(wine_data['citric acid'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(12,8))\n", "sns.heatmap(wine_data.corr(),annot=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Linear regression fitting using the built-in library:\n", "\n", "No need to be a hero every time. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Linear regression fitting using the built-in library:\n", "\n", "No need to be a hero every time. We can use scikit-learn's [`LinearRegression()` class](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html).\n", "\n", "We train the parameters using the `fit` method, and we evaluate the learned function using the `predict` method. " ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn import linear_model\n", "\n", "# model\n", "acid_regression = linear_model.LinearRegression()\n", "\n", "# training data: scikit-learn expects a 2D array of shape (n_samples, n_features),\n", "# so we reshape the single feature column\n", "X_train = wine_data['citric acid'].values.reshape(-1, 1)\n", "y_train = wine_data['fixed acidity']\n", "\n", "# train/fit\n", "acid_regression.fit(X_train, y_train)\n", "\n", "# testing/CV\n", "X_test = np.linspace(0, 1, 200).reshape(-1, 1)\n", "y_pred = acid_regression.predict(X_test)\n", "\n", "# visualize\n", "plt.scatter(X_train, y_train, alpha=0.1)\n", "plt.plot(X_test, y_pred, color='red', linewidth=4, alpha=0.4)\n", "plt.show()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## In-class exercise: \n", "* Repeat the procedure above for `volatile acidity` and `fixed acidity`, using both the explicit formula and `scikit-learn`'s `LinearRegression()` class.\n", "* Implement the mean squared error measure over the training samples:\n", "$$\n", "\\text{MSE} = \\frac{1}{N} \\sum_{i=1}^{N} \n", "\\Big(y^{\\text{Pred}}_i - y^{\\text{Actual}}_i\\Big)^2.\n", "$$" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## In-class exercise:\n", "* Read the examples in the [`LinearRegression()` class documentation](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html), and apply scikit-learn's linear regression model to total sulfur dioxide vs pH in `winequality-red.csv`. We can first make a scatter plot to get a visual cue of whether these two quantities are related." ] }
], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.2" }, "latex_envs": { "LaTeX_envs_menu_present": true, "autoclose": true, "autocomplete": true, "bibliofile": "biblio.bib", "cite_by": "apalike", "current_citInitial": 1, "eqLabelWithNumbers": true, "eqNumInitial": 1, "hotkeys": { "equation": "Ctrl-E", "itemize": "Ctrl-I" }, "labels_anchors": false, "latex_user_defs": false, "report_style_numbering": false, "user_envs_cfg": false }, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "window_display": false } }, "nbformat": 4, "nbformat_minor": 2 }