{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Exercise 4.2\n", "## Linear regression\n", "In this task we will design and train a linear model using [Keras](https://keras.io/).\n", "\n", "### Tasks\n", "1. Complete the implemetation of the `LinearLayer`\n", "2. Define a meaningful objective\n", "3. Implement gradient descent and train the linear model for 80 epochs." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "import tensorflow as tf\n", "from tensorflow import keras\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "layers = keras.layers" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Simulation of data\n", "Let's first simulate some noisy data" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "x.shape: (100, 1)\n", "y.shape: (100,)\n" ] } ], "source": [ "np.random.seed(1904)\n", "x = np.float32(np.linspace(-1, 1, 100)[:,np.newaxis])\n", "y = np.float32(2 * x[:,0] + 0.3 * np.random.randn(100))\n", "print(\"x.shape:\", x.shape)\n", "print(\"y.shape:\", y.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Implement linear model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we have to design a linear layer that maps from the input $x$ to the output $y$ using a single adaptive weight $w$:\n", " \n", "$$y = w \\cdot x$$\n", "\n", "### Task 1\n", "Complete the implementation of the `LinearLayer` by adding the linear transformation in the `call` function." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class LinearLayer(layers.Layer):\n", "\n", " def __init__(self, units=1, input_dim=1): # when intializing the layer the weights have to be initialized\n", " super(LinearLayer, self).__init__()\n", " w_init = tf.random_normal_initializer()\n", " self.w = tf.Variable(initial_value=w_init(shape=(input_dim, units), dtype=\"float32\"),\n", " trainable=True)\n", "\n", " def call(self, inputs): # when calling the layer the linear transformation has to be performed\n", " return ..." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Build a model using the implemented layer." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = keras.models.Sequential()\n", "model.add(LinearLayer(units=1, input_dim=1))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.build((None, 1))\n", "print(model.summary())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Performance before the training\n", "Plot data and model before the training" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y_pred = model(x)\n", "\n", "fig, ax = plt.subplots(1)\n", "ax.plot(x, y, 'bo', label='data')\n", "ax.plot(x, y_pred, 'r-', label='model')\n", "ax.set(xlabel='$x$', ylabel='$y$')\n", "ax.grid()\n", "ax.legend(loc='lower right')\n", "plt.tight_layout()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Task 2: Define the objective function\n", "Define a meaningful objective here (regression task). \n", "Note that you can use `tf.reduce_mean()` to average your loss estimate over the full data set (100 points)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def loss(x, y):\n", " return ...." 
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Task 3: Train the model using gradient descent\n",
    "Train the linear model for 80 epochs (here, one epoch is one gradient step on the full data set) with a meaningful learning rate, implementing the gradient-descent update yourself.\n",
    "\n",
    "Hint: you can access the adaptive parameters using `model.trainable_weights` and perform the update $w \\leftarrow w - z$ using `w.assign_sub(z)`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "epochs = ...  # number of epochs\n",
    "lr = ...  # learning rate\n",
    "\n",
    "for epoch in range(epochs):\n",
    "\n",
    "    with tf.GradientTape() as tape:\n",
    "        output = model(x, training=True)\n",
    "        # compute the loss value\n",
    "        loss_value = loss(tf.convert_to_tensor(y), output)\n",
    "\n",
    "    # gradients of the loss with respect to the trainable weights\n",
    "    grads = tape.gradient(...)\n",
    "\n",
    "    # gradient-descent update for each weight\n",
    "    for weight, grad in zip(model.trainable_weights, grads):\n",
    "        weight.assign_sub(...)\n",
    "\n",
    "    print(\"Current loss at epoch %d: %.4f\" % (epoch, float(loss_value)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Performance of the fitted model\n",
    "Plot the data and the model prediction after training."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fig, ax = plt.subplots(1)\n",
    "\n",
    "y_pred = model(x)\n",
    "\n",
    "ax.plot(x, y, 'bo', label='data')\n",
    "ax.plot(x, y_pred, 'r-', label='model')\n",
    "ax.set(xlabel='$x$', ylabel='$y$')\n",
    "ax.grid()\n",
    "ax.legend(loc='lower right')\n",
    "plt.tight_layout()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}