{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Linear regression\n", "\n", "The goal of this exercise is to implement the least mean squares algorithm (LMS) for linear regression seen in the course. \n", "\n", "We start by importing numpy and matplotlib." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Least mean squares\n", "\n", "To generate the data for the exercise, we will use the `scikit-learn` library . It provides a huge selection of already implemented machine learning algorithms for classification, regression or clustering.\n", "\n", "If you use Colab, `scikit-learn` should already be installed. Otherwise, install it with either `conda` or `pip` (you may need to restart this notebook afterwards):\n", "\n", "```\n", "conda install scikit-learn\n", "# or:\n", "pip install scikit-learn\n", "```\n", "\n", "We will use the method `sklearn.datasets.make_regression` to generate the data. The documentation of this method is available at .\n", "\n", "The following cell imports the method:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.datasets import make_regression" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now generate the data. We start with the simplest case where the inputs have only one dimension. We will generate 100 samples $(x_i, t_i)$ linked by a linear relationship and some noise.\n", "\n", "The following code generates the data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "N = 100\n", "X, t = make_regression(n_samples=N, n_features=1, noise=15.0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`n_samples` is the number of samples generates, `n_features` is the number of input variables and `noise` quantifies how the points deviate from the linear relationship. \n", "\n", "**Q:** Print the shape of the arrays `X` and `t` to better understand what is generated. Visualize the dataset using matplotlib (`plt.scatter`). Vary the value of the `noise` argument in the previous cell and visualize the data again. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now is the time to implement the LMS algorithm with numpy.\n", "\n", "Remember the LMS algorithm from the course:\n", "\n", "* $w=0 \\quad;\\quad b=0$\n", "\n", "* **for** M epochs:\n", "\n", " * $dw=0 \\quad;\\quad db=0$\n", "\n", " * **for** each sample $(x_i, t_i)$:\n", "\n", " * $y_i = w \\, x_i + b$\n", "\n", " * $dw = dw + (t_i - y_i) \\, x_i$\n", "\n", " * $db = db + (t_i - y_i)$\n", "\n", " * $\\Delta w = \\eta \\, \\frac{1}{N} dw$\n", "\n", " * $\\Delta b = \\eta \\, \\frac{1}{N} db$\n", " \n", "Our linear model $y = w \\, x + b$ predicts outputs for an input $x$. The error $t-y$ between the prediction and the data is used to adapt the weight $w$ and the bias $b$ at the end of each epoch.\n", "\n", "**Q:** Implement the LMS algorithm and apply it to the generated data. The Python code that you will write is almost a line-by-line translation of the pseudo-code above. You will use a learning rate `eta = 0.1` at first, but you will vary this value later. Start by running a single epoch, as it will be easier to debug it, and then increase the number of epochs to 100. Print the value of the weight and bias at the end." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Q:** Visualize the quality of the fit by superposing the learned model to the data with matplotlib. \n", "\n", "*Tip*: you can get the extreme values of the xaxis with `X.min()` and `X.max()`. To visualize the model, you just need to plot a line between the points `(X.min(), w*X.min()+b)` and `(X.max(), w*X.max()+b)`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another option is to predict a value for all inputs and plot this vector $y$ against the desired values $t$.\n", "\n", "**Q:** Make a scatter plot where $t$ is the x-axis and $y = w\\, x + b$ is the y-axis. How should the points be arranged in the ideal case? Also plot what this ideal relationship should be." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A much better method to analyse the result of the learning algorithm is to track the **mean squared error** (mse) after each epoch, i.e. the loss function which we actually want to minimize. The MSE is defined as:\n", "\n", "$$\\text{mse} = \\frac{1}{N} \\, \\sum_{i=1}^N (t_i - y_i)^2$$\n", "\n", "**Q:** Modify your LMS algorithm (either directly or copy it in the next cell) to track the mse after each epoch. After each epoch, append the mse on the training set to a list (initially empty) and plot it at the end. How does the mse evolve? Which value does it get in the end? Why? How many epochs do you actually need?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's now study the influence of the learning rate `eta=0.1` seemed to work, but is it the best value?\n", "\n", "**Q:** Iterate over multiple values of `eta` using a logarithmic scale and plot the final mse after 100 epochs as a function of the learning rate. Conclude.\n", "\n", "*Hint:* the logarithmic scale means that you will try values such as $10^{-5}$, $10^{-4}$, $10^{-3}$, etc. until 1.0. In Python, you can either write explictly 0.0001 or use the notation `1e-4`. For the plot, use `np.log10(eta)` to only display the exponent on the X-axis." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Scikit-learn\n", "\n", "The code that you have written is functional, but extremely slow, as you use for loops in Python. For so little data samples, it does not make a difference, but if you had millions of samples, this would start to be a problem.\n", "\n", "The solution is to use optimized implementations of the algorithms, running in C++ or FORTRAN under the hood. We will use here the LMS algorithm provided by `scikit-learn` as you have already installed it and it is very simple to use. Note that one could use tensorflow too, but that would be killing a fly with a sledgehammer.\n", "\n", "`scikit-learn` provides a `LinearRegression` object that implements LMS. The documentation is at: .\n", "\n", "You simply import it with:\n", "\n", "```python\n", "from sklearn.linear_model import LinearRegression\n", "```\n", "\n", "You create the object with:\n", "\n", "```python\n", "reg = LinearRegression()\n", "```\n", "\n", "`reg` is now an object with different methods (`fit()`, `predict()`) that accept any kind of data and performs linear regression. \n", "\n", "To train the model on the data $(X, t)$, simply use:\n", "\n", "```python\n", "reg.fit(X, t)\n", "```\n", "\n", "The parameters of the model are obtained with `reg.coef_` for $w$ and `reg.intercept_` for $b$. \n", "\n", "You can predict outputs for new inputs using:\n", "\n", "```python\n", "y = reg.predict(X)\n", "```\n", "\n", "**Q:** Apply linear regression on the data using `scikit-learn`. Check the model parameters after learning and compare them to what you obtained previously. Print the mse and make a plot comparing the predictions with the data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Delta learning rule\n", "\n", "Let's now implement the online version of LMS, the **delta learning rule**. The only difference is that the parameter updates are applied immediately after each example is evaluated, not at the end of training. \n", "\n", "* $w=0 \\quad;\\quad b=0$\n", "\n", "* **for** M epochs:\n", "\n", " * **for** each sample $(x_i, t_i)$:\n", "\n", " * $y_i = w \\, x_i + b$\n", "\n", " * $\\Delta w = \\eta \\, (t_i - y_i ) \\, x_i$\n", "\n", " * $\\Delta b = \\eta \\, (t_i - y_i)$\n", " \n", "**Q:** Implement the delta learning rule for the regression problem with `eta = 0.1`. Plot the evolution of the mse and compare it to LMS. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Q:** Vary the learning rate logarithmically as for LMS and conclude." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3.9.13 ('base')", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" }, "vscode": { "interpreter": { "hash": "3d24234067c217f49dc985cbc60012ce72928059d528f330ba9cb23ce737906d" } } }, "nbformat": 4, "nbformat_minor": 4 }