{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Multi-layer Perceptron" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The goal of this exercise is to implement a shallow multi-layer perceptron to perform non-linear classification. Let's start with usual imports, including the logistic function." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "import matplotlib.pyplot as plt\n", "from IPython.display import clear_output\n", "%matplotlib inline\n", "\n", "from sklearn.model_selection import train_test_split\n", "\n", "rng = np.random.default_rng()\n", "\n", "def logistic(x):\n", " return 1.0/(1.0 + np.exp(-x))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Structure of the MLP \n", "\n", "In this exercise, we will consider a MLP for non-linear binary classification composed of 2 input neurons $\\mathbf{x}$, one output neuron $y$ and $K$ hidden neurons in a single hidden layer ($\\mathbf{h}$). \n", "\n", "The output neuron is a vector $\\mathbf{y}$ with one element that sums its inputs with $K$ weights $W^2$ and a bias $\\mathbf{b}^2$.\n", "\n", "$$\\mathbf{y} = \\sigma( W^2 \\times \\mathbf{h} + \\mathbf{b}^2)$$\n", "\n", "It uses the logistic transfer function: \n", "\n", "$$\\sigma(x) = \\dfrac{1}{1 + \\exp -x}$$\n", "\n", "As in logistic regression for linear classification, we will interpret $y$ as the probability that the input $\\mathbf{x}$ belongs to the positive class. \n", "\n", "$W^2$ is a $1 \\times K$ matrix (we could interpret it as a vector, but this will make the computations easier) and $\\mathbf{b}^2$ is a vector with one element. \n", "\n", "Each of the $K$ hidden neurons receives 2 weights from the input layer, what gives a $K \\times 2$ weight matrix $W^1$, and $K$ biases in the vector $\\mathbf{b}^1$. They will also use the logistic activation function at first:\n", "\n", "$$\\mathbf{h} = \\sigma(W^1 \\times \\mathbf{x} + \\mathbf{b}^1)$$\n", "\n", "The goal is to implement the backpropagation algorithm by comparing the desired output $t$ with the prediction $y$:\n", "\n", "* The output error is a vector with one element:\n", " \n", "$$\\delta = (\\mathbf{t} - \\mathbf{y})$$\n", "\n", "* The backpropagated error is a vector with $K$ elements:\n", "\n", "$$\\delta_\\text{hidden} = \\sigma'(W^1 \\times \\mathbf{x} + \\mathbf{b}^1) \\, W_2^T \\times \\delta$$\n", "\n", "($W^2$ is a $1 \\times K$ matrix, so $W_2^T \\times \\delta$ is a $K \\times 1$ vector. The vector $\\sigma'(W^1 \\times \\mathbf{x} + \\mathbf{b}^1)$ is multiplied element-wise.)\n", "\n", "* Parameter updates follow the delta learning rule:\n", "\n", "$$\\Delta W^1 = \\eta \\, \\delta_\\text{hidden} \\times \\mathbf{x}^T$$\n", "\n", "$$\\Delta \\mathbf{b}^1 = \\eta \\, \\delta_\\text{hidden} $$\n", "\n", "$$\\Delta W^2 = \\eta \\, \\delta \\, \\mathbf{h}^T$$\n", "\n", "$$\\Delta \\mathbf{b}^2 = \\eta \\, \\delta$$\n", "\n", "Notice the transpose operators to obtain the correct shapes. You will remember that the derivative of the logistic function is given by:\n", "\n", "$$\\sigma'(x)= \\sigma(x) \\, (1- \\sigma(x))$$\n", "\n", "**Q:** Why do not we use the derivative of the transfer function of the output neuron when computing the output error $\\delta$? " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The MLP will be trained on a non-linear dataset with samples of each class forming a circle. 
Each sample has two input dimensions. In the cell below, blue points represent the positive class (t=1), orange ones the negative class (t=0)." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAlsAAAFlCAYAAADcXS0xAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8vihELAAAACXBIWXMAAAsTAAALEwEAmpwYAAAjCElEQVR4nO3dX2xc53nn8d+zCgMMioLTwkpikXItLLxcuJYSZQk3gW6SMChtF4JUoSHsXjQoAgju1jAQYImVUIA1BBQSoAsDbLPxeougyUXr5YXC2rC32kTahYvsBghVppTcRFjBiSuOjNhNQ3aLEqjsPr04MyY5miFneP697znfD2AM53DCeTOjM+eZ533e5zV3FwAAAPLxb8oeAAAAQJURbAEAAOSIYAsAACBHBFsAAAA5ItgCAADIEcEWAABAjj5U9gB2ct999/mDDz5Y9jAAAAB2de3atb9z9/3dx4MOth588EEtLS2VPQwAAIBdmdlbvY4zjQgAAJAjgi0AAIAcEWwBAADkiGALAAAgRwRbAAAAOSLYAgAAyBHBFgAAQI4ItgAAAHJEsAUAAJAjgi0AYVpZkJ5/RHqumdyuLJQ9IgDYk6C36wFQUysL0ivPSnc3kvvrt5P7knRkprxxAcAekNkCEJ4r5zYDrY67G8lx9Ec2EAhSJsGWmX3NzN4xsxt9fm9mNm9mt8xsxcw+mcXzAqio9dXhjmMzG7h+W5JvZgMJuIDSZZXZ+hNJj+3w+8clPdT+77Skr2b0vACqaHR8uOMgGwgELJNgy91fl/T3OzzkhKRveOK7kppmdn8Wzw2ggqbmpJHG9mMjjeR4Hqow/UY2EAhWUTVbY5Jub7m/2j52DzM7bWZLZrb07rvvFjI4IHhVCAaGcWRGOj4vjR6UZMnt8fl8iuOrMv1GNhAIVlGrEa3HMe/1QHd/UdKLkjQ5OdnzMUCt1HVl3pGZYv7/7TT9FtPrOzW3/d+JlE82cGUheW3WV5NAbmourtcJKEFRma1VSQe33B+XdKeg5wbiFkItTpUza1WZfisiG1iVLCBQsKIyWy9LesbMXpL0K5LW3f3tgp4biFvZwUDVM2uj4+3gocfx2OSdDaxKFhAoWFatH/5M0v+VNGFmq2b2JTN72syebj/kNUlvSrol6b9J+o9ZPC9QC2XX4oSQWctT0cX4MSs78AcilUlmy92f2uX3Lul3s3guoHaKqsXpp+oX2E5Ghjqk3VUpCwgUiO16gNCVHQzU4QJbVDF+7MoO/IFIEWwBMSgzGOACi46yA38gUgRbAHbGBRZbkQUEhkawBeStCn2JuMACwJ4RbAF5qnrbBADAropqagrUU9XbJgAAdkWwBeSp6m0TgLSqvDsB0EawBeSp7IakQMjY/gc1QbAF5Inu5EB/TLOjJgi2gDwVsTkwEKsyp9mZvkSBWI0I5I22CUBvZe1OwCphFIzMFgCgHGVNszN9iYIRbAEAylHWNDurhFEwphEBAOUpY5q9DpurIyhktgAA9cIqYRSMYAsAUC+sEkbBmEYEANQPq4RRIDJbqCZ66AAAAkFmC9VDDx0AQEDIbKF66KEDAAgImS1UDz10Bra43NLFyzd1Z21DB5oNzU5P6OTRsbKHVRu8/kA9EGyheuihM5DF5ZbOXrqujbvvS5Jaaxs6e+m6JHHBLwCvP1AfTCOieuihM5CLl29+cKHv2Lj7vi5evlnSiOqF1x+oD4ItVA89dAZyZ21jqOPIFq8/UB9MI6Ka6KGzqwPNhlo9LuwHmo0ej0bWeP2B+iCzBdTU7PSEGiP7th1rjOzT7PRESSPqb3G5pWMXrurQmVd17MJVLS63yh5SajG9/gDSIbMF1FSnCDv01XAhFZJnuXowltcfQHrm7mWPoa/JyUlfWloqexgASnTswtWe021jzYa+c+ZzhY2jO+iTkkzU+VOHCZDqbmUh6eO3vpqsep6ao4yhpszsmrtPdh8ns4Vy8SGFXYRSSL7T6sHQgy36eeWIHSswAGq2UJ7Oh9T6bUm++SHFPobYol/BeNGF5KEEfcPqZORaaxtybU7DVqHuLQjsWIEBEGyhPHxIVV4Whe2hFJKHEvQNi35eOWPHCgyAYAvl4UOq0rLKqJw8Oqbzpw5rrNmQKanVKqNOKpSgb1ixZuSi0W9nCnaswBbUbKE8bKtTaVnWOJ08OlZ6jVGsqwfp55WzqbntNVsSO1bgHgRbKA8fUpVWxYxKCEHfsGanJ3quogw9IxeNThE8C32wA4ItDCaPVYN8SEmq7koxMiphSJORq+q/zcyxYwV2QZ8t7K57abOUZKDYbzC1KvduqvL/tzrg/QOG16/PFgXy2B2rBnNT5ZVioRS2Y2+q/G8TKBrTiNgdqwZzU8W6pq1irHFCour/NiuLRtFBItjC7lg1mIle9S/UNSFU/NuMEN3sg8U0InY3NZfUaG3FqsGh9Os59dl/vz/K3k2ovlj7itUaJR/BItjC7o7MJMXwowclWXJLcfxQ+tW//K8fvktdE4JEzV2EKPkIFtOIGAxLm1Mtg9+p/oW6JoSKf5uRoeQjWGS2gAGk3Xom1n31AESEko9gEWwBA0i7DJ76FwC5o+QjWEwjIjghdq1Ouww+1n31AESGko8gEWwhKN1dqzvTdZJKDUyyWAZP/QsA1BPTiAhKqF2rmQYEyrW43NKxC1d16MyrOnbh6sD1kkAIyGwhKKF2rWYaEChPqBlvYFAEWwhKyF2rmQYEyrFTxptzEjFgGhFBYboOQLdQM97AoMhsIShM1wHoNmzGO8QVzcFhw+pCEWwhOEzXAdhqdnpiW82W1D/jTX3XANiwunBMIwIAgjbMPo2hrmgOChtWF47MFgAgeINmvKnvGgAbVheOzBa2W1mQnn9Eeq6Z3K4slD0iABgY+5AOoN/G1GxYnRuCLWzqzOOv35bkm/P4BFwAIsGK5gGwYXXhCLawiXl8AJEbpr6rttiwunDUbGET8/gAKoAVzQNgw+pCkdnCJubxAQDIHMEWNjGPDwCZYONsbMU0IjZ1Usp0FQaAPaOxKroRbGE75vEBIBU2zkY3phEBAMgQjVWHUJPejpkEW2b2mJndNLNbZnamx+8/Y2brZvb99n8UAQEAKonGqgOqUW/H1MGWme2T9BVJj0t6WNJTZvZwj4f+pbt/ov0fjZsAAJVEY9UB1ai3YxY1W49KuuXub0qSmb0k6YSkv8ngb6MGFpdbunj5pu6sbehAs6HZ6QnqGgBEq/P5xefaLmrU2zGLYGtM0u0t91cl/UqPx33azP5a0h1J/8nd3+j1x8zstKTTkvTAAw9kMDyEjFU7AK
ooz8aqlfmCOjrenkLscbxisqjZsh7HvOv+X0n6JXf/uKQ/lLTY74+5+4vuPunuk/v3789geBGrQeHgTqt2AADbdb6gttY25Nr8ghplH68a9XbMIthalXRwy/1xJdmrD7j7P7j7P7Z/fk3SiJndl8FzV1dNCgdZtQMAg6vUF9Qa7dGYxTTi9yQ9ZGaHJLUkPSnpN7c+wMw+Jukn7u5m9qiSIO+nGTx3de1UOFihf4gHmg21egRWrNoBELO8pvoq9wW1Jr0dU2e23P09Sc9IuizpB5IW3P0NM3vazJ5uP+w3JN1o12zNS3rS3bunGrFVTQoHWbUDoGrynOqjrUScMumz5e6vufu/c/d/6+5/0D72gru/0P75j9z9l9394+7+KXf/P1k8b6XVZFPok0fHdP7UYY01GzJJY82Gzp86HGexJwAo36k+vqDGie16QjU1l9RobZ1KrGjhYJ6rdgCgaHlO9dFWIk4EW6FiU2gAiFLetah8QY0PwVbIalI4CABVMjs9sa1/oMRUX90RbAEAkCGm+tCNYAsAgIwx1YetMlmNCAAAgN4ItgAAAHJEsAUAAJAjgi0AAIAcUSCPUuS1bxgAoEJWFirRb5JgC4Xr7BvW6UHT2TdMEgEXACCxsrB9J5X128l9KbqAi2lEFC7PfcMAABVx5dz2Leuk5P6Vc+WMJwWCLRQuz33DAAAVsb463PGAEWyhcP32B8tq3zAAQAWMjg93PGAEWyjc7PSEGiP7th1j3zAAwDZTc9JI15fwkUZyPDIUyKNw7BsGAPmLftV3pwi+AqsRzd3LHkNfk5OTvrS0VPYwAACISveqbymZQTh/6nBcAVdkzOyau092H2casQwrC9Lzj0jPNZPblYWyRwQAqBBWfYeFacSiVahvCAAgTKz6DguZraJVqG8IACBMrPoOC8FW0SrUNwQAECZWfYeFacSijY4nU4e9jgMAkAFWfYeFYKtoU3Pba7akaPuGAADCdfLoGMFVIJhGLNqRGen4vDR6UJIlt8fnKY4HAKCiyGyV4chMUMFV9I3vAAAIGMFWzXU3vmutbejspeuSRMAFAEAGmEasORrfAQCQL4KtmqPxHQAA+SLYqjka3wEAkC+CrUFVdD9DGt8BAJAvCuQHUeH9DGl8BwCorJWFZDu89dWkefjUXCnXbXP3wp90UJOTk760tFT2MJJMVs+u7welL98ofjwAAGBn3YkSKWkinmNvSzO75u6T3ceZRhwE+xkCABCXK+e2B1pScv/KucKHQrA1iH77FrKfIQAAYQooUUKwNYipuST1uBX7GQIAEK6AEiUEW4NgP0MAAOISUKKE1YiDCmw/QwAAsIPONTuA1YgEWwAAVNzicqueLX4CSZQQbAEAUGGLyy2dvXT9g31wW2sbOnvpuiTVI+AKAMEWMlPbb04AELCLl29+EGh1bNx9Xxcv3+QzuiAEW8gE35wAIEx31jaGOo7ssRoRmdjpmxMAoDwHmo2hjiN7BFvIBN+cACBMs9MTaozs23asMbJPs9MTJY2ofgi2kAm+OQFAmE4eHdP5U4c11mzIJI01Gzp/6nD4JR4rC8nexM81k9uVhbJHtGf1rdkKZCfwqpidnthWsyXxzQkAQnHy6Fj4wdVW3ZtIr99O7ktRXqvrGWxV7E3sp8jVgZ2/y2pEAEBqO20iHeF1up7BVsXexF7KWB0Y3TcnAECYAtpEOgv1rNmq2JvYC6sDAQDRCmgT6SzUM9iq2JvYC6sDAQC9LC63dOzCVR0686qOXbiqxeVW2UO6V0CbSGehnsFWxd7EXlgdCADo1ikxaa1tyLVZYhJcwHVkRjo+L40elGTJ7fH5aEt96lmzFdBO4HlhdSAAoFtUW/cEsol0FuoZbEmVehN7YXUgAKAbJSblqG+wVQOsDgQAbHWg2VCrR2BFiUm+6lmzBQBADbF1TznIbAEAUBOUmJSDYAsAgBqhxKR4TCMCAADkiGALAAAgRwRbAAAAOSLYysLKgvT8I9JzzeR2ZaHsEQEAgEBQIJ/WyoL0yrPS3XbfkvXbyX2p0k1TAQDAYMhspXXl3Gag1XF3IzkOAABqL5Ngy8weM7ObZnbLzM70+L2Z2Xz79ytm9sksnjcI66vDHQcAIECLyy0du3BVh868qmMXroa3OXXEUgdbZrZP0lckPS7pYUlPmdnDXQ97XNJD7f9OS/pq2ucNxuj4cMcBAAjM4nJLZy9dV2ttQy6ptbahs5euE3BlJIvM1qOSbrn7m+7+z5JeknSi6zEnJH3DE9+V1DSz+zN47vJNzUkjXXtKjTSS4xnh2wYAIE8XL9/Uxt33tx3buPu+Ll6+WdKIqiWLYGtM0u0t91fbx4Z9TJyOzEjH56XRg5IsuT0+n1lxPN82AAB5u9Njc+qdjmM4WaxGtB7HfA+PSR5odlrJVKMeeOCBdCMrypGZ3FYe7vRtg+0WAABZONBsqNUjsDrQbPR4dIlWFpIFaOurSbnO1FwUK/+zyGytSjq45f64pDt7eIwkyd1fdPdJd5/cv39/BsOLG982AAB5m52eUGNk37ZjjZF9mp2eKGlEPXRaLa3fluSbrZYi6G2ZRbD1PUkPmdkhM/uwpCclvdz1mJcl/VZ7VeKnJK27+9sZPHfl9ftWEdy3DQBAtE4eHdP5U4c11mzIJI01Gzp/6nBYMygRt1pKPY3o7u+Z2TOSLkvaJ+lr7v6GmT3d/v0Lkl6T9ISkW5L+SdJvp33eupidntDZS9e3TSUG920DABC9k0fHwgquukXcaimTDvLu/pqSgGrrsRe2/OySfjeL56qbzj/8i5dv6s7ahg40G5qdngj7hAAAIGuj4+0pxB7HA8d2PREI/tsGAAB5m5rbvj2elHmrpbwQbAEAUFGLy63qzIx0Vh1GuBqRYKuGKnXyAQB66vRp7NT8dvo0Sor3Mz/HVkt5YiPqmqFJKgDUA13hw0GwVTOcfABQD/RpDAfBVs1w8gFAPdCnMRwEWzXDyQcA9RBFV/iaINiqGU4+AKiHKLrC1wSrEWuGJqkAUB/0aQwDwVYNcfIBAFAcphEBAAByRLAFAACQI4ItAACAHBFsAQAA5IhgCwAAIEcEW1lYWZCef0R6rpncriyUPSIAAKovkusvrR/SWlmQXnlWutve7mb9dnJfinJncgAAohDR9ZfMVlpXzm2+0R13N5LjAAAgHxFdfwm20lpfHe44AABIL6LrL8FWWqPjwx0HAADpRXT9JdhKa2pOGmlsPzbSSI4DAIB8RHT9JdhK68iMdHxeGj0oyZLb4/PBFecBAFApEV1/zd3LHkNfk5OTvrS0VPYwKmNxuaWLl2/qztqGDjQbmp2eYENqAKg4PvuLY2bX3H2y+zitH2picbmls5eua+Pu+5Kk1tqGzl66LkmcdABQUXz2h4FpxJq4ePnmBydbx8bd93Xx8s2SRgQAyBuf/WEg2KqJO2sbQx0HAMSPz/4wMI1YEweaDbV6nFwHmo0ej94dNQAAEL6sP/uxN2S2amJ2ekKNkX3bjjVG9ml2emLov9WpAWitbci1WQOwuNzKaLQAgCxk+dkfjEj2Q9yKY
GsYEb7BHSePjun8qcMaazZkksaaDZ0/dXhP2ShqAAAgDll+9gehsx/i+m1JvrkfYuDXY1o/DKp7w0spaZ4WaE+PPB0686p6/asxST+68GtFDwcAUBfPP9IOtLqMHpS+fKP48XTp1/qBzNagItrwMm/95vqpAQAA5Cqi/RC3ItgaVKRvcB4qWQMAAAhfRPshbkWwNahI3+A8VK4GAAAQh4j2Q9yK1g+DmprrXbMV+Bucl5NHxwiuAADF6tRIXzmXzCyNjifX4cBrpwm2BhXpGwwAwF4F2VPxyEx0116CrWFE+AYDALAX7KuYHWq2ihZxry4AQH3QUzE7ZLaK1N2rq9OMTSJjBgAICvsqZofMVpHo1QUAiAQ9FbNDsFUkenUBACJBT8XsMI1YpNHxPtsM1K9XFwAgbJ0i+OBWI0aIYKtI9OoCAESEnorZYBqxSEdmko2rRw9KsuS2hhtZAwBQJ2S2ikavLgAAaoXMFgAAQI4ItgAAQLFq1uCbaURkIsj9swAA4alhg28yW0its39Wa21Drs39sxaXW2UPDQAQmho2+CbYQmrsnwUAGFgNG3wTbCE19s8CAAysXyPvCjf4JthCauyfBQAY2NRc0tB7q4o3+CbYQmrsnwUAGFgNG3yzGhGpsX8WAGAoNWvwTbCFTLB/FgDkixY78SLYAgAgcJ0WO52V350WO5IIuCJAzRYAAIGjxU7cyGwBAFCy3aYIabETNzJbAACUaJBdOGixEzeCLQAASjTIFCEtduLGNCKCwCobAHU1yBQhLXbiRrCF0rHKBkCdHWg21OoRcHVPEdJiJ15MI6J0rLIBUGdMEVYfmS2UjlU2AOqMKcLqSxVsmdkvSvrvkh6U9GNJM+7+sx6P+7Gk/y/pfUnvuftkmudFtQyaQgeAqmKKsNrSTiOekXTF3R+SdKV9v5/PuvsnCLTQjRQ6AARkZUF6/hHpuWZyu7JQ9oiil3Ya8YSkz7R//rqk/y3pP6f8m6gZUugAEIiVBemVZ6W77dmG9dvJfalWG0dnzdx97/9jszV3b265/zN3/4Uej/uRpJ9Jckn/1d1f3OFvnpZ0WpIeeOCB//DWW2/teXwAAGAIzz+SBFjdRg9KX75R/HgiY2bXes3g7ZrZMrNvS/pYj1/93hDPf8zd75jZRyR9y8x+6O6v93pgOxB7UZImJyf3HglW1cqCdOWctL4qjY5LU3N82wAAZGN9dbjjGMiuwZa7f77f78zsJ2Z2v7u/bWb3S3qnz9+40759x8y+KelRST2DLeyA9C4AIE+j430yW+PFj6VC0hbIvyzpi+2fvyjpz7sfYGY/Z2Y/3/lZ0q9KIhe5F1fObQZaHXc3kuMAAKQ1NSeNdK0EH2kkx7FnaQvkL0haMLMvSfpbSV+QJDM7IOmP3f0JSR+V9E0z6zzfn7r7X6R83noivTs0tgECgCF0ZkkoV8lUqmDL3X8qaarH8TuSnmj//Kakj6d5HrSR3h0K2wABwB4cmSG4yhjb9cSE9O5Q2AYIABACgq2YHJmRjs8nS3Blye3xeb6B9ME2QACAELA3YmxI7w6MbYAAACEgs4XKYhsgAHlbXG7p2IWrOnTmVR27cFWLy62yh4QAkdmqg5o2QmUbIAB5inYRTk2vCWUi2Kq6mjdCPXl0LOwPPQDR2mkRTrCfOzW/JpSFacSqoxFqT6T+AaQV5SIcrgmlINiqOhqh3qOT+m+tbci1mfon4AIwjH6LbYJehMM1oRQEW1XXr+FpjRuh0n8LQBaiXITDNaEUBFtVRyPUe0SZ+gcQnJNHx3T+1GGNNRsySWPNhs6fOhxuvZbENaEkFMhXHftc3YP+WwCyEt0iHK4JpSDYqgMaoW4zOz2xbbm2FEHqHwCywjWhcARbqB36bwEAikSwhVqKLvUPAIgWBfIAAAA5ItgCAADIEdOIQIQWl1vUnAEF4FxDFgi2gMhEu/ktEBnONWSFaUQgMnTAB4rBuYaskNlCNlYWaJJXEDrgI0tMk/XHuYasEGwhvZUF6ZVnN3eSX7+d3JcIuJT9xYwO+MhKqNNkoQSAnGvICtOISO/Kuc1Aq+PuRnK85joXs9bahlybF7PF5dae/2aUm98iSCFOk+VxzuwV5xqyQrCF9NZXhzteI3lczKLc/BZB6jcd1lrbKCW4kcIKADnXkBWmEZHe6HgyddjreM3lVfNBB3xkod80maTSphNDq5OqzLlGXW2pyGwhvak5aaSrhmGkkRyvuX61HdR8IAS9psk6ysomcc7koFNXu35bkm/W1a4slD2y2iDYQnpHZqTj89LoQUmW3B6f51uTqPlA2DrTZP2UkU3inMkBdbWlYxoR2TgyQ3DVQ2f6IYSVVUAvJ4+O6eLlm8GsuuOcyQF1taUj2AJyVpmaD1TW7PTEthYQUrnZpGjOmVjqoKirLR3TiABQc6y624OY6qCoqy2duXvZY+hrcnLSl5aWyh4GAADbPf9In2zRQenLN4ofz25iycJFzsyuuftk93GmEQHkKpRu4ECmYquDoq62VEwjAshNSN3AgUz1q3eiDgo9EGwhDisLSdr+uWZyG2JdBO6RVzfwxeWWjl24qkNnXtWxC1eDDt5iGiuGQB0UhsA0IsLHRtfRyqMbeKibJ/cS01gxpM5nD3VQGACZLYSPhnzRyqMbeEh75+0mprFiD47MJMXwz60ltwRa6IPMFsIXWyFqBe21yD2P/k2h7J03yGsSylgBlIvMFsJHIWqp0hS559G/KYS98wZ9TUIYK4DyEWwhfBSilirtVNjJo2P6zpnP6UcXfk3fOfO51LVKIeydN+hrEsJYAZSPaUSEj0LUUoU2FRbC3nmDviYhjBVA+Qi2EIdhGvLRKTlTB5qNYDYp7ih777xhXpOyxwqgfEwjolpi2q8sEkyF3YvXBMAwCLZQLbSJyBybFN+L1wTAMJhGRLXQJiIXTIXdi9cEwKDIbKFaaBMBoBe2/EKJCLZQLbSJANCNWk6UjGAL1XJkRjo+L40elGTJ7fF5ViMCdUYtJ0pGzRaqZ5g2EQCqj1pOlIzMFlAUakaAclDLiZIRbAFFoGYEKA+1nCgZwRZQBGpGgPJQy4mSUbMFFIGaEaBc1HKiRGS2gCJQMwKkQ80jIkawBRSBmhFg76h5ROQItoAiUDOCvFU580PNIyJHzRZQFGpGkJdO5qcTkHQyP1I1/s1R84jIkdkCqqDKWQ3sruqZH2oeETmCLSB21LOg6pkfah4ROYItIHZVz2pgd1XP/FDziMhRswXErupZDexuam57zZa098zPykISqK+vJsHa1FwYQQ01j4gYmS0gdlXPamB3WWV+mJIGckFmC4hdllkNxCuLzM9OU9JklYA9I7MFxK7sehZWQiaq8DowJQ3kgswWUAVl1bNUvb/ToKryOoyOt6cQexwHsGepMltm9gUze8PM/sXMJnd43GNmdtPMbpnZmTTPCSAgrIRMVOV1oMUCkIu004g3JJ2S9Hq/B5jZPklfkfS4pIclPWVmD6d8XgAhYNopUZXXoewpaaCiUk0juvsPJMnMdnrYo5Juufub7ce+JOmEpL9J89wA
AsC0U6JKrwMtFoDMFVEgPyZp66fQavtYT2Z22syWzGzp3XffzX1wAFJg2inB6wBgB7sGW2b2bTO70eO/EwM+R6+0l/d7sLu/6O6T7j65f//+AZ8CQCmYdkrwOgDYwa7TiO7++ZTPsSrp4Jb745LupPybAELBtFOC1wFAH0VMI35P0kNmdsjMPizpSUkvF/C8AAAApUvb+uHXzWxV0qclvWpml9vHD5jZa5Lk7u9JekbSZUk/kLTg7m+kGzaA2qhCs1AAtWbufcunSjc5OelLS0tlDwNAWbqbhUpJ4Tn1UAACZGbX3P2evqNs1wMgXFVpFgqg1gi2AISrKs1CAdQawRaAcPVrChpjs1AAtUWwBSBcNAsFUAEEWwDCRbNQABWQam9EAMgdzUIBRI7MFgAAQI4ItgAAAHJEsAUAAJAjgi0AAIAcEWwBAADkiGALAAAgRwRbAAAAOSLYAgAAyBHBFgAAQI4ItgAAAHJk7l72GPoys3clvVX2OLa4T9LflT0I5I73uT54r+uB97k+yn6vf8nd93cfDDrYCo2ZLbn7ZNnjQL54n+uD97oeeJ/rI9T3mmlEAACAHBFsAQAA5Ihgazgvlj0AFIL3uT54r+uB97k+gnyvqdkCAADIEZktAACAHBFs7cDMvmBmb5jZv5hZ39UNZvaYmd00s1tmdqbIMSI9M/tFM/uWmf2/9u0v9Hncj83supl938yWih4n9ma389MS8+3fr5jZJ8sYJ9Ib4L3+jJmtt8/h75vZXBnjRDpm9jUze8fMbvT5fXDnNMHWzm5IOiXp9X4PMLN9kr4i6XFJD0t6ysweLmZ4yMgZSVfc/SFJV9r3+/msu38ixKXFuNeA5+fjkh5q/3da0lcLHSQyMcRn8V+2z+FPuPu5QgeJrPyJpMd2+H1w5zTB1g7c/QfufnOXhz0q6Za7v+nu/yzpJUkn8h8dMnRC0tfbP39d0snyhoKMDXJ+npD0DU98V1LTzO4veqBIjc/imnD31yX9/Q4PCe6cJthKb0zS7S33V9vHEI+PuvvbktS+/Uifx7mk/2lm18zsdGGjQxqDnJ+cw9Uw6Pv4aTP7azP7H2b2y8UMDQUL7pz+UJlPHgIz+7akj/X41e+5+58P8id6HGOJZ2B2ep+H+DPH3P2OmX1E0rfM7Iftb1gI1yDnJ+dwNQzyPv6Vku1U/tHMnpC0qGSqCdUS3Dld+2DL3T+f8k+sSjq45f64pDsp/yYyttP7bGY/MbP73f3tdqr5nT5/40779h0z+6aSaQuCrbANcn5yDlfDru+ju//Dlp9fM7P/Ymb3uTv7JlZLcOc004jpfU/SQ2Z2yMw+LOlJSS+XPCYM52VJX2z//EVJ92Q0zeznzOznOz9L+lUlCygQtkHOz5cl/VZ7BdOnJK13ppURlV3fazP7mJlZ++dHlVwDf1r4SJG34M7p2me2dmJmvy7pDyXtl/SqmX3f3afN7ICkP3b3J9z9PTN7RtJlSfskfc3d3yhx2BjeBUkLZvYlSX8r6QuStPV9lvRRSd9sf05/SNKfuvtflDReDKjf+WlmT7d//4Kk1yQ9IemWpH+S9NtljRd7N+B7/RuSfsfM3pO0IelJp7N3dMzszyR9RtJ9ZrYq6fcljUjhntN0kAcAAMgR04gAAAA5ItgCAADIEcEWAABAjgi2AAAAckSwBQAAkCOCLQAAgBwRbAEAAOSIYAsAACBH/wrhLXmX9kIOCwAAAABJRU5ErkJggg==", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "from sklearn.datasets import make_circles\n", "\n", "N = 100\n", "d = 2\n", "X, t = make_circles(n_samples=N, noise = 0.03, random_state=42)\n", "\n", "plt.figure(figsize=(10, 6))\n", "plt.scatter(X[t==1, 0], X[t==1, 1])\n", "plt.scatter(X[t==0, 0], X[t==0, 1])\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Q:** Split the data into a training and test set (80/20). Make sure to call them `X_train, X_test, t_train, t_test`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Class definition" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The neural network is entirely defined by its parameters, i.e. the weight matrices and bias vectors, as well the transfer function of the hidden neurons. In order to make your code more reusable, the MLP will be implemented as a Python class. The following cell defines the class, but we will explain it step by step afterwards." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class MLP:\n", " \n", " def __init__(self, d, K, activation_function, max_val, eta):\n", " \n", " self.d = d\n", " self.K = K\n", " self.activation_function = activation_function\n", " self.eta = eta\n", " \n", " self.W1 = rng.uniform(-max_val, max_val, (K, d)) \n", " self.b1 = rng.uniform(-max_val, max_val, (K, 1))\n", " \n", " self.W2 = rng.uniform(-max_val, max_val, (1, K)) \n", " self.b2 = rng.uniform(-max_val, max_val, (1, 1))\n", " \n", " def feedforward(self, x):\n", " \n", " # Make sure x has 2 rows\n", " x = np.array(x).reshape((self.d, -1))\n", "\n", " # Hidden layer\n", " self.h = self.activation_function(np.dot(self.W1, x) + self.b1)\n", "\n", " # Output layer\n", " self.y = logistic(np.dot(self.W2, self.h) + self.b2) \n", " \n", " \n", " def train(self, X_train, t_train, nb_epochs, visualize=True):\n", " errors = []\n", " \n", " for epoch in range(nb_epochs):\n", " \n", " nb_errors = 0\n", "\n", " # Epoch\n", " for i in range(X_train.shape[0]):\n", "\n", " # Feedforward pass: sets self.h and self.y\n", " self.feedforward(X_train[i, :])\n", " \n", " # Backpropagation\n", " self.backprop(X_train[i, :], t_train[i])\n", " \n", " # Predict the class: \n", " if self.y[0, 0] > 0.5:\n", " c = 1\n", " else:\n", " c = 0\n", "\n", " # Count the number of misclassifications\n", " if t_train[i] != c: \n", " nb_errors += 1\n", " \n", " # Compute the error rate\n", " errors.append(nb_errors/X_train.shape[0])\n", " \n", " # Plot the decision function every 10 epochs\n", " if epoch % 10 == 0 and visualize:\n", " self.plot_classification() \n", "\n", " # Stop when the error rate is 0\n", " if nb_errors == 0:\n", " if visualize:\n", " self.plot_classification() \n", " break\n", " \n", " return errors, epoch+1\n", "\n", " def backprop(self, x, t):\n", " \n", " # Make sure x has 2 rows\n", " x = np.array(x).reshape((self.d, -1))\n", "\n", " # TODO: implement backpropagation\n", " \n", " def test(self, X_test, t_test):\n", " \n", " nb_errors = 0\n", " for i in range(X_test.shape[0]):\n", "\n", " # Feedforward pass\n", " self.feedforward(X_test[i, :]) \n", "\n", " # Predict the class: \n", " if self.y[0, 0] > 0.5:\n", " c = 1\n", " else:\n", " c = 0\n", "\n", " # Count the number of misclassifications\n", " if t_test[i] != c: \n", " nb_errors += 1\n", "\n", " return nb_errors/X_test.shape[0]\n", " 
\n", " def plot_classification(self):\n", "\n", " # Allow redrawing \n", " clear_output(wait=True)\n", "\n", " x_min, x_max = X_train[:, 0].min(), X_train[:, 0].max()\n", " y_min, y_max = X_train[:, 1].min(), X_train[:, 1].max()\n", " xx, yy = np.meshgrid(np.arange(x_min, x_max, .02), np.arange(y_min, y_max, .02))\n", "\n", " x1 = xx.ravel()\n", " x2 = yy.ravel() \n", " x = np.array([[x1[i], x2[i]] for i in range(x1.shape[0])])\n", "\n", " self.feedforward(x.T)\n", " Z = self.y.copy()\n", " Z[Z>0.5] = 1\n", " Z[Z<=0.5] = 0\n", "\n", " from matplotlib.colors import ListedColormap\n", " cm_bright = ListedColormap(['#FF0000', '#0000FF'])\n", "\n", " fig = plt.figure(figsize=(10, 6))\n", " plt.contourf(xx, yy, Z.reshape(xx.shape), cmap=cm_bright, alpha=.4)\n", " plt.scatter(X_train[:, 0], X_train[:, 1], c=t_train, cmap=cm_bright, edgecolors='k')\n", " plt.scatter(X_test[:, 0], X_test[:, 1], c=t_test, cmap=cm_bright, alpha=0.4, edgecolors='k')\n", " plt.xlim(xx.min(), xx.max())\n", " plt.ylim(yy.min(), yy.max())\n", " plt.show() " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The constructor `__init__` of the class accepts several arguments:\n", "\n", "* `d` is the number inputs, here 2.\n", "* `K` is the number of hidden neurons.\n", "* `activation_function` is the function to use for the hidden neurons, for example the `logistic` function defined at the beginning of the notebook. Note that the name of the method can be stored as a variable.\n", "* `max_val` is the maximum value used to initialize the weight matrices.\n", "* `eta` is the learning rate.\n", "\n", "The constructor starts by saving these arguments as attributes, so that they can be used in other method as `self.K`:\n", "\n", "```python\n", "def __init__(self, d, K, activation_function, max_val, eta):\n", "\n", " self.d = d\n", " self.K = K\n", " self.activation_function = activation_function\n", " self.eta = eta\n", "```\n", "\n", "The constructor then initializes randomly the weight matrices and bias vectors, uniformly between `-max_val` and `max_val`.\n", "\n", "```python\n", "self.W1 = rng.uniform(-max_val, max_val, (K, d)) \n", "self.b1 = rng.uniform(-max_val, max_val, (K, 1))\n", "\n", "self.W2 = rng.uniform(-max_val, max_val, (1, K)) \n", "self.b2 = rng.uniform(-max_val, max_val, (1, 1))\n", "```\n", "\n", "You can then already create the `MLP` object and observe how the parameters are initialized:\n", "\n", "```python\n", "mlp = MLP(d=2, K=15, activation_function=logistic, max_val=1.0, eta=0.05)\n", "```\n", "\n", "**Q:** Create the object and print the weight matrices and bias vectors." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `feedforward` method takes a vector `x` as input, reshapes it to make sure it has two rows, and computes the hidden activation $\mathbf{h}$ and the prediction $\mathbf{y}$:\n", "\n", "```python\n", "def feedforward(self, x):\n", "\n", "    # Make sure x has 2 rows\n", "    x = np.array(x).reshape((self.d, -1))\n", "\n", "    # Hidden layer\n", "    self.h = self.activation_function(np.dot(self.W1, x) + self.b1)\n", "\n", "    # Output layer\n", "    self.y = logistic(np.dot(self.W2, self.h) + self.b2)\n", "```\n", "\n", "Notice the use of `self.` to access attributes, as well as the use of `np.dot()` to multiply vectors and matrices.\n", "\n", "**Q:** Using the randomly initialized weights, apply the `feedforward()` method to an input vector (for example $[0.5, 0.5]$) and print `h` and `y`. What is the predicted class of the example?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The class also provides a visualization method. It is not important to understand its code for the exercise, so you can safely skip it. It displays the training data as plain points, the test data as semi-transparent points, and the decision function as a background color (all points in the blue region will be classified as positive examples).\n", "\n", "**Q:** Plot the initial classification on the dataset with random weights. Is there a need for learning? Reinitialize the weights and biases multiple times. What do you observe?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Backpropagation\n", "\n", "The `train()` method implements the training loop you have already implemented several times: several epochs over the training set, making a prediction for each input and modifying the parameters according to the prediction error:\n", "\n", "```python\n", "def train(self, X_train, t_train, nb_epochs, visualize=True):\n", "    errors = []\n", "\n", "    for epoch in range(nb_epochs):\n", "\n", "        nb_errors = 0\n", "\n", "        # Epoch\n", "        for i in range(X_train.shape[0]):\n", "\n", "            # Feedforward pass: sets self.h and self.y\n", "            self.feedforward(X_train[i, :])\n", "\n", "            # Backpropagation\n", "            self.backprop(X_train[i, :], t_train[i])\n", "\n", "            # Predict the class\n", "            if self.y[0, 0] > 0.5:\n", "                c = 1\n", "            else:\n", "                c = 0\n", "\n", "            # Count the number of misclassifications\n", "            if t_train[i] != c:\n", "                nb_errors += 1\n", "\n", "        # Compute the error rate\n", "        errors.append(nb_errors/X_train.shape[0])\n", "\n", "        # Plot the decision function every 10 epochs\n", "        if epoch % 10 == 0 and visualize:\n", "            self.plot_classification()\n", "\n", "        # Stop when the error rate is 0\n", "        if nb_errors == 0:\n", "            if visualize:\n", "                self.plot_classification()\n", "            break\n", "\n", "    return errors, epoch+1\n", "```\n", "\n", "The training method stops after `nb_epochs` epochs, or earlier when no error is made during an epoch. The decision function is visualized every 10 epochs to better understand what is happening. 
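For example, once backpropagation is implemented, a call such as `errors, nb_epochs_needed = mlp.train(X_train, t_train, nb_epochs=1000)` runs the full loop (the variable names here are only an illustration). 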
The method returns a list containing the error rate after each epoch, as well as the number of epochs needed to reach an error rate of 0.\n", "\n", "The only thing missing is the `backprop(x, t)` method, which currently does nothing:\n", "\n", "```python\n", "def backprop(self, x, t):\n", "\n", "    # Make sure x has 2 rows\n", "    x = np.array(x).reshape((self.d, -1))\n", "\n", "    # TODO: implement backpropagation\n", "```\n", "\n", "**Q:** Implement the *online* backpropagation algorithm.\n", "\n", "All you have to do is backpropagate the output error and adapt the parameters using the delta learning rule:\n", "\n", "1. compute the output error `delta`.\n", "2. compute the backpropagated error `delta_hidden`.\n", "3. increment the parameters `self.W1, self.b1, self.W2, self.b2` accordingly.\n", "\n", "The only difficulty is to take care of the shape of each matrix (before multiplying two matrices or vectors, check what their shapes are).\n", "\n", "*Note:* you can either edit the cell containing the definition of the class directly, or create a new class `TrainableMLP` inheriting from `MLP` and simply redefine the `backprop()` method. The solution uses the second option for readability, but either way works." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Q:** Train the MLP for 1000 epochs on the data, using a learning rate of 0.05, 15 hidden neurons and weights initialized between -1 and 1. Plot the evolution of the training error." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Q:** Use the `test()` method to compute the error on the test set. What is the test accuracy of your network after training? Compare it to the training accuracy." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Experiments\n", "\n", "### Influence of the number of hidden neurons\n", "\n", "**Q:** Try different values for the number of hidden neurons $K$ (e.g. 2, 5, 10, 15, 20, 25, 50...) and observe how the accuracy and the speed of convergence evolve." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Influence of the learning rate\n", "\n", "**Q:** Vary the learning rate between extreme values. How does the performance evolve?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Influence of weight initialization\n", "\n", "**Q:** The weights are initialized randomly between -1 and 1. Try to initialize them to 0 instead. Does it work? Why?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Q:** For a fixed number of hidden neurons (e.g. $K=15$) and a correct value of `eta`, train the network 10 times with different initial weights and superimpose the evolution of the training error on the same plot (see the sketch below). Conclude."
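, "\n", "\n", "A minimal sketch of such an experiment, assuming the `MLP` class and the data splits defined above (the exact hyperparameter values are up to you):\n", "\n", "```python\n", "plt.figure(figsize=(10, 6))\n", "for run in range(10):\n", "    # A new object means a new random initialization of W1, b1, W2, b2\n", "    mlp = MLP(d=2, K=15, activation_function=logistic, max_val=1.0, eta=0.05)\n", "    errors, nb_epochs_needed = mlp.train(X_train, t_train, nb_epochs=1000, visualize=False)\n", "    plt.plot(errors, label=f'run {run}')\n", "plt.xlabel('Epoch')\n", "plt.ylabel('Training error rate')\n", "plt.legend()\n", "plt.show()\n", "```"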
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Influence of the transfer function\n", "\n", "**Q:** Modify the `backprop()` method so that it applies backpropagation correctly for any of the four activation functions:\n", "\n", "* linear\n", "* logistic\n", "* tanh\n", "* relu" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Linear transfer function\n", "def linear(x):\n", "    return x\n", "\n", "# tanh transfer function\n", "def tanh(x):\n", "    return np.tanh(x)\n", "\n", "# ReLU transfer function\n", "def relu(x):\n", "    x = x.copy()\n", "    x[x < 0.] = 0.\n", "    return x" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Remember that the derivatives of these activation functions are easy to compute:\n", "\n", "* linear: $f'(x) = 1$\n", "* logistic: $f'(x) = f(x) \, (1 - f(x))$\n", "* tanh: $f'(x) = 1 - f^2(x)$\n", "* relu: $f'(x) = \begin{cases}1 \; \text{if} \; x>0\\ 0 \; \text{if} \; x \leq 0\\ \end{cases}$\n", "\n", "*Hint:* `activation_function` is a variable like any other, even though it stores a function. You can compare it to the functions defined above:\n", "\n", "```python\n", "if self.activation_function == linear:\n", "    diff = something\n", "elif self.activation_function == logistic:\n", "    diff = something\n", "elif self.activation_function == tanh:\n", "    diff = something\n", "elif self.activation_function == relu:\n", "    diff = something\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Q:** Use a linear transfer function for the hidden neurons. How does the performance evolve? Is the non-linearity of the transfer function important for learning?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Q:** This time, use the hyperbolic tangent function as a transfer function for the hidden neurons. Does it improve learning?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Q:** Use the Rectified Linear Unit (ReLU) transfer function. What does it change? Conclude on the importance of the transfer function for the hidden neurons, and select the best one from now on." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Influence of data normalization\n", "\n", "The input data returned by `make_circles` is nicely centered around 0, with values between -1 and 1. What happens if this is not the case with your data?\n", "\n", "**Q:** Shift the input data `X` using the formula:\n", "\n", "$$X_\text{shifted} = 8 \, X + 2$$\n", "\n", "regenerate the training and test sets, and train the MLP on them. What do you observe?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Q:** Normalize the shifted data so that it has a mean of 0 and a variance of 1 in each dimension, using the formula:\n", "\n", "$$X_\text{normalized} = \dfrac{X_\text{shifted} - \text{mean}(X_\text{shifted})}{\text{std}(X_\text{shifted})}$$\n", "\n", "and retrain the network (a sketch is given below). Conclude."
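, "\n", "\n", "A minimal sketch of the shifting and normalization steps, assuming the statistics are computed per dimension (axis 0) and the splits are regenerated with `train_test_split`:\n", "\n", "```python\n", "X_shifted = 8. * X + 2.\n", "\n", "# Per-dimension mean and standard deviation\n", "X_normalized = (X_shifted - X_shifted.mean(axis=0)) / X_shifted.std(axis=0)\n", "\n", "X_train, X_test, t_train, t_test = train_test_split(X_normalized, t, test_size=0.2)\n", "```"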
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Influence of randomization\n", "\n", "The training loop we used until now iterated over the training samples in the exact same order at every epoch. The samples are therefore not i.i.d. (independent and identically distributed), as they always follow the same sequence.\n", "\n", "**Q:** Modify the `train()` method so that the order of the training samples is shuffled at the beginning of each epoch. Check the documentation of `rng.permutation()` for help." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Influence of weight initialization - part 2\n", "\n", "According to the empirical analysis by Glorot and Bengio in “Understanding the difficulty of training deep feedforward neural networks”, the optimal initial values for the weights between two layers of an MLP are uniformly taken in the range:\n", "\n", "$$\n", "    \mathcal{U}( - \sqrt{\frac{6}{N_{\text{in}}+N_{\text{out}}}} , \sqrt{\frac{6}{N_{\text{in}}+N_{\text{out}}}} )\n", "$$\n", "\n", "where $N_{\text{in}}$ is the number of neurons in the first layer and $N_{\text{out}}$ the number of neurons in the second layer.\n", "\n", "**Q:** Modify the constructor of your class to initialize both the hidden and output weights with this new range. The biases should be initialized to 0. What is the effect?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Summary\n", "\n", "**Q:** Now that we have optimized the MLP, it is time to cross-validate the number of hidden neurons and the learning rate again. As the networks now always reach a training error rate of 0 and the test set is not very discriminative, the main criterion will be the number of epochs needed on average to converge. Find the best MLP for the dataset (there is not a single solution), for example by iterating over multiple values of `K` and `eta`. What do you think of the change in performance between the first naive implementation and the final one? What were the most critical changes?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.7" } }, "nbformat": 4, "nbformat_minor": 4 }