{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Chapter 1: Using neural nets to recognize handwritten digits" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Perceptrons" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Sigmoid neurons" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 1 ([link](http://neuralnetworksanddeeplearning.com/chap1.html#exercises_191892)): sigmoid neurons simulating perceptrons, part I" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's show that the behavior of a single perceptron doesn't change if we multiply its weight vector and its bias by a constant $c > 0$. The output of this perceptron is:\n", "\n", "\\begin{eqnarray}\n", " \\mbox{output} = \\left\\{ \n", " \\begin{array}{ll} \n", " 0 & \\mbox{if } cw\\cdot x + cb \\leq 0 \\\\\n", " 1 & \\mbox{if } cw\\cdot x + cb > 0\n", " \\end{array}\n", " \\right.\n", "\\end{eqnarray}\n", "\n", "Which is the same as the output without the multiplication by $c$:\n", "\n", "\\begin{eqnarray}\n", " \\mbox{output} = \\left\\{ \n", " \\begin{array}{ll} \n", " 0 & \\mbox{if } w\\cdot x + b \\leq 0 \\\\\n", " 1 & \\mbox{if } w\\cdot x + b > 0\n", " \\end{array}\n", " \\right.\n", "\\end{eqnarray}\n", "\n", "Since this is true for every perceptron, the behavior of our neural network as a whole doesn't change either." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercice 2: sigmoid neurons simulating perceptrons, part II" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For a given sigmoid neuron with weight and bias multiplied by $c$, the activation function is $\\sigma(cw.x+cb) = \\frac{1}{1 + e^{-(cw.x+cb)}}$.\n", "\n", "For each perceptron, it is supposed that $w.x+b \\neq 0$.\n", "* If $w.x+b > 0$, then as $c \\rightarrow \\infty$, $\\frac{1}{1 + e^{-(cw.x+cb)}} \\rightarrow \\frac{1}{1 + 0} = 1$.\n", "* If $w.x+b < 0$, then as $c \\rightarrow \\infty$, $\\frac{1}{1 + e^{-(cw.x+cb)}} \\rightarrow \\frac{1}{1 + \\infty} = 0$.\n", "\n", "Therefore our sigmoid neuron gives the same output as a perceptron of parameters $(w,b)$ in the limit as $c \\rightarrow \\infty$.\n", "\n", "Since this is true for every neuron, our network as a whole has the same behavior as a network of perceptrons as $c \\rightarrow \\infty$.\n", "\n", "If however we have $w.x+b = 0$ for at least one sigmoid neuron, then its output is going to be $\\sigma(0) = \\frac{1}{1+e^0} = \\frac{1}{2}$ whatever the value of $c$. So we will never have the behavior of a perceptron, which only ever outputs 0 or 1." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The architecture of neural networks" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## A simple network to classify handwritten digits" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 3 ([link](http://neuralnetworksanddeeplearning.com/chap1.html#exercise_513527)): converting the output layer into a bitwise representation with an extra layer" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's what our extra 4-neuron layer should do:\n", "* (digit = 0) (1, 0, 0, 0, 0, 0, 0, 0, 0, 0) -> (0, 0, 0, 0)\n", "* (digit = 1) (0, 1, 0, 0, 0, 0, 0, 0, 0, 0) -> (0, 0, 0, 1)\n", "* (digit = 2) (0, 0, 1, 0, 0, 0, 0, 0, 0, 0) -> (0, 0, 1, 0)\n", "* (digit = 3) (0, 0, 0, 1, 0, 0, 0, 0, 0, 0) -> (0, 0, 1, 1)\n", "* (digit = 4) (0, 0, 0, 0, 1, 0, 0, 0, 0, 0) -> (0, 1, 0, 0)\n", "* (digit = 5) (0, 0, 0, 0, 0, 1, 0, 0, 0, 0) -> (0, 1, 0, 1)\n", "* (digit = 6) (0, 0, 0, 0, 0, 0, 1, 0, 0, 0) -> (0, 1, 1, 0)\n", "* (digit = 7) (0, 0, 0, 0, 0, 0, 0, 1, 0, 0) -> (0, 1, 1, 1)\n", "* (digit = 8) (0, 0, 0, 0, 0, 0, 0, 0, 1, 0) -> (1, 0, 0, 0)\n", "* (digit = 9) (0, 0, 0, 0, 0, 0, 0, 0, 0, 1) -> (1, 0, 0, 1)\n", "\n", "Our first neuron must output roughly 1 for 8 and 9, and roughly 0 otherwise. The following weight vector will make 8 and 9 the only contributors to its weighted input: (0, 0, 0, 0, 0, 0, 0, 0, 1, 1). With this vector, the weighted input $w.x$ is roughly 1 for digits 8 and 9, and roughly 0 for other digits. More precisely, since the correct output in the old output layer is at least 0.99 and incorrect outputs are smaller than 0.01, $0.99 \\leq w.x \\leq 1.01$ for 8 and 9, and $0 \\leq w.x \\leq 0.02$ for other digits.\n", "\n", "Now which bias would be appropriate? Let's say we want the same precision as in the old output layer: $0.99 \\leq \\sigma(w.x+b)$ for 8 and 9, and $\\sigma(w.x+b) \\leq 0.01$ for other digits.\n", "\n", "Or to speak roughly, we look for $b$ satisfying $\\sigma(b) \\approx 0$ and $\\sigma(1+b) \\approx 1$.\n", "\n", "If we set $b = -0.5$, we're almost there since $\\sigma(b) < 0.5$ and $\\sigma(1+b) > 0.5$. We only have to make the slope much steeper, by multiplying all this by a large constant (meaning a larger weight vector).\n", "\n", "How large is enough? Remember our desired precision. 
"We are looking for a scaling factor $\\tilde{w}$ such that $0.99 \\leq \\sigma(\\tilde{w}(0.99+b))$ and $\\sigma(\\tilde{w}(0.02+b)) \\leq 0.01$ (the new weights will be the old ones multiplied by $\\tilde{w}$, and the new bias will be $\\tilde{w}b$).\n", "\n", "At this point, let's write $\\alpha$ for our desired precision: here, $\\alpha = 0.01$.\n", "\n", "So we want:\n", "\n", "\\begin{eqnarray}\n", "  \\left\\{ \n", "    \\begin{array}{ll} \n", "      1-\\alpha \\leq \\sigma(\\tilde{w}(1-\\alpha+b)) \\\\\n", "      \\sigma(\\tilde{w}(2\\alpha+b)) \\leq \\alpha\n", "    \\end{array}\n", "  \\right.\n", "\\end{eqnarray}\n", "\n", "Solving for $\\tilde{w}$ (note that $1-\\alpha+b > 0$ while $2\\alpha+b < 0$, which flips the second inequality), this gives:\n", "\n", "\\begin{eqnarray}\n", "  \\left\\{ \n", "    \\begin{array}{ll} \n", "      \\tilde{w} \\geq \\frac{\\ln(\\frac{1-\\alpha}{\\alpha})}{1-\\alpha+b} \\\\\n", "      \\tilde{w} \\geq -\\frac{\\ln(\\frac{1-\\alpha}{\\alpha})}{2\\alpha+b}\n", "    \\end{array}\n", "  \\right.\n", "\\end{eqnarray}\n", "\n", "\\begin{eqnarray}\n", "  \\left\\{ \n", "    \\begin{array}{ll} \n", "      \\tilde{w} \\geq 9.38 \\\\\n", "      \\tilde{w} \\geq 9.58\n", "    \\end{array}\n", "  \\right.\n", "\\end{eqnarray}\n", "\n", "Before settling on a value for this neuron, let's generalize to the other ones, because for simplicity I'd like to use the same scaling factor and bias for every neuron of the new layer.\n", "\n", "So suppose a neuron should take the outputs of $n \\leq 10$ neurons of the old output layer into account (by using a weight vector made only of zeros and ones). Then $1-\\alpha \\leq w \\cdot x \\leq 1 + (n-1) \\alpha$ when the correct digit is one of those $n$, and $0 \\leq w \\cdot x \\leq n\\alpha$ for the other digits.\n", "\n", "So our conditions become:\n", "\n", "\\begin{eqnarray}\n", "  \\left\\{ \n", "    \\begin{array}{ll} \n", "      1-\\alpha \\leq \\sigma(\\tilde{w}(1-\\alpha+b)) \\\\\n", "      \\sigma(\\tilde{w}(n\\alpha+b)) \\leq \\alpha\n", "    \\end{array}\n", "  \\right.\n", "\\end{eqnarray}\n", "\n", "Since $n\\alpha < 0.5$ (and so $n\\alpha+b < 0$), the solution is:\n", "\n", "\\begin{eqnarray}\n", "  \\left\\{ \n", "    \\begin{array}{ll} \n", "      \\tilde{w} \\geq \\frac{\\ln(\\frac{1-\\alpha}{\\alpha})}{1-\\alpha+b} \\\\\n", "      \\tilde{w} \\geq -\\frac{\\ln(\\frac{1-\\alpha}{\\alpha})}{n\\alpha+b}\n", "    \\end{array}\n", "  \\right.\n", "\\end{eqnarray}\n", "\n", "Now $n \\leq 10$ for each neuron, so taking the worst case $n = 10$, the following is sufficient everywhere:\n", "\n", "\\begin{eqnarray}\n", "  \\left\\{ \n", "    \\begin{array}{ll} \n", "      \\tilde{w} \\geq 9.38 \\\\\n", "      \\tilde{w} \\geq 11.49\n", "    \\end{array}\n", "  \\right.\n", "\\end{eqnarray}\n", "\n", "So let's take $\\tilde{w} = 12$.\n", "\n", "The final weight vectors will therefore be made of zeros and twelves, and the final bias will be $-6$ everywhere. (Why $-6$? Because we began the argument with weights of 1 and a bias of $-0.5$, and then multiplied everything by 12 to make the sigmoid slope steeper, turning the bias into $-0.5 \\times 12 = -6$; if we kept $b = -0.5$, the sigmoid would be centered around a different value.)\n",
"\n", "Here's our final solution:\n", "\n", "* Neuron 1: $w = 12(0, 0, 0, 0, 0, 0, 0, 0, 1, 1)$ ; $b = -6$\n", "* Neuron 2: $w = 12(0, 0, 0, 0, 1, 1, 1, 1, 0, 0)$ ; $b = -6$\n", "* Neuron 3: $w = 12(0, 0, 1, 1, 0, 0, 1, 1, 0, 0)$ ; $b = -6$\n", "* Neuron 4: $w = 12(0, 1, 0, 1, 0, 1, 0, 1, 0, 1)$ ; $b = -6$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Learning with gradient descent" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 4 ([link](http://neuralnetworksanddeeplearning.com/chap1.html#exercises_647181)): prove the local optimality of gradient descent\n", "\n", "The Cauchy-Schwarz inequality states that for two nonzero vectors $u$ and $v$, we have:\n", "* $-\\|u\\| \\|v\\| \\leq u \\cdot v$, and\n", "* $-\\|u\\| \\|v\\| = u \\cdot v$ if and only if $u$ and $v$ are negatively proportional to each other, i.e. $u = -\\lambda v$ for some $\\lambda > 0$.\n", "\n", "Here, assuming $\\nabla C \\neq 0$, every $\\Delta v$ of length $\\epsilon$ necessarily satisfies:\n", "\n", "$-\\epsilon \\| \\nabla C \\| \\leq \\Delta v \\cdot \\nabla C$\n", "\n", "And there is only one way to achieve equality: taking $\\Delta v$ negatively proportional to $\\nabla C$, with the size constraint $\\| \\Delta v \\| = \\epsilon$ fixing the proportionality factor:\n", "\n", "$\\Delta v = -\\frac{\\epsilon}{\\| \\nabla C \\|} \\nabla C$\n", "\n", "which indeed gives us:\n", "\n", "\\begin{equation*}\n", "  \\begin{aligned}\n", "    \\Delta v \\cdot \\nabla C &= -\\frac{\\epsilon}{\\| \\nabla C \\|} \\nabla C \\cdot \\nabla C \\\\\n", "    &= -\\frac{\\epsilon}{\\| \\nabla C \\|} \\| \\nabla C \\|^2 \\\\\n", "    &= -\\epsilon \\| \\nabla C \\|\n", "  \\end{aligned}\n", "\\end{equation*}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 5: one-dimensional gradient descent\n", "\n", "In the one-dimensional case, the gradient of $C$ is just its derivative, $\\nabla C (x) = C'(x)$, so $\\Delta v(x) = - \\epsilon \\frac{C'(x)}{|C'(x)|} = - \\epsilon \\, \\mathrm{sign}(C'(x))$.\n", "\n", "So at every step we simply move to the new point $x+\\epsilon$ if $C'(x) < 0$, and to $x-\\epsilon$ if $C'(x) > 0$.\n", "\n", "This is very natural if you keep a picture in mind: imagine standing on the parabola $y = x^2$; every step then takes us toward the global minimum.\n", "\n", "The one-dimensional case is also a good opportunity to show that we're only heading toward a *local* minimum, not necessarily a *global* one:\n", "\n", "![local_vs_global_arrows.png](img/local_vs_global_arrows.png)\n", "\n", "If we start in the valley on the right and $\\epsilon$ is small enough, we will never reach the global minimum." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 6 ([link](http://neuralnetworksanddeeplearning.com/chap1.html#exercise_263792)): online learning\n", "\n", "Online learning uses a mini-batch size of only 1. We compare it with a mini-batch size of 20 below.\n", "\n", "**Advantage**: the weights and biases are updated after every single training example, and each update is cheap to compute, so learning can progress faster.\n", "\n", "**Disadvantage**: a gradient estimated from a single example can be quite inaccurate, so the direction of the descent step may be imprecise, or even completely wrong for some extreme examples.\n", "\n", "The same analysis applies when comparing a mini-batch size of 20 to the whole training set."
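, "\n", "To make the trade-off concrete, here is a minimal sketch (a toy one-dimensional quadratic cost with made-up data, not the book's `network` module) comparing how far the mini-batch estimate of the gradient typically strays from the true gradient for batch sizes 1 and 20:\n", "\n",
"```\n", "import numpy as np\n", "\n", "np.random.seed(0)\n", "\n",
"# Toy problem: cost C(w) = mean_i (w - x_i)^2 / 2, so dC/dw = mean_i (w - x_i).\n", "data = np.random.normal(loc=3.0, scale=1.0, size=50000)\n", "w = 0.0\n", "true_grad = np.mean(w - data)\n", "\n",
"def minibatch_grad(batch_size):\n", "    batch = np.random.choice(data, size=batch_size, replace=False)\n", "    return np.mean(w - batch)\n", "\n",
"for batch_size in (1, 20):\n", "    errors = [abs(minibatch_grad(batch_size) - true_grad) for _ in range(1000)]\n", "    print(batch_size, np.mean(errors))\n", "\n",
"# The size-1 estimates are roughly sqrt(20) times noisier on average,\n", "# but each one is 20 times cheaper to compute, so updates can happen\n", "# far more often.\n", "```"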
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Implementing our network to classify digits" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 7 ([link](http://neuralnetworksanddeeplearning.com/chap1.html#exercise_270628)): write out $a' = \\sigma(wa+b)$ in component form\n", "\n", "Let's number the neurons in the $(l-1)^{th}$ layer $1 \\leq k \\leq K$, and the neurons in the $l^{th}$ layer $1 \\leq j \\leq J$.\n", "\n", "$a' = \\sigma(wa+b)$ implies for neuron $j$, by the rules of matrix multiplication:\n", "\n", "\\begin{aligned}\n", "a'_j &= \\sigma\\left(\\sum\\limits_{k=1}^{K} w_{jk} a_k + b_j\\right) \\\\\n", "a'_j &= \\frac{1}{1 + exp(- \\sum\\limits_{k=1}^{K} w_{jk} a_k - b_j)}\n", "\\end{aligned}\n", "\n", "Which is the same as equation [4](http://neuralnetworksanddeeplearning.com/chap1.html#eqtn4) in the book (with slightly different notations)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 8 ([link](http://neuralnetworksanddeeplearning.com/chap1.html#exercise_420023)): a network with just 2 layers" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For clarity, I've copied the files necessary to solve this exercise in the ```chap1ex8``` directory.\n", "\n", "Just to make sure we get the same results as Nielsen, let's execute ```chap1ex8/exec_normal.py```. Here's the output:\n", "\n", "```\n", "Epoch 0: 9077 / 10000\n", "Epoch 1: 9225 / 10000\n", "Epoch 2: 9317 / 10000\n", "...\n", "Epoch 24: 9490 / 10000\n", "Epoch 25: 9464 / 10000\n", "Epoch 26: 9446 / 10000\n", "Epoch 27: 9479 / 10000\n", "Epoch 28: 9462 / 10000\n", "Epoch 29: 9483 / 10000\n", "```\n", "Our best classification rate occurs at epoch 24, with a rate of 94.9 percent − close to Nielsen's 95.42 percent, which is obtained as the best in 3 runs in his case." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now to the network with only 2 layers. In ```chap1ex8/exec_chap1ex8.py```, we simply replace:\n", "\n", "```net = network.Network([784, 30, 10])```\n", "\n", "with:\n", "\n", "```net = network.Network([784, 10])```\n", "\n", "What do we get now?\n", "\n", "```\n", "Epoch 0: 4180 / 10000\n", "Epoch 1: 5650 / 10000\n", "Epoch 2: 5748 / 10000\n", "...\n", "Epoch 24: 7503 / 10000\n", "Epoch 25: 7478 / 10000\n", "Epoch 26: 7480 / 10000\n", "Epoch 27: 7496 / 10000\n", "Epoch 28: 7516 / 10000\n", "Epoch 29: 7485 / 10000\n", "```\n", "\n", "The execution is significantly faster (since we have removed one hidden layer), but the classification accuracy is considerably worse (at best, 75.03 percent at epoch 24).\n", "\n", "So our hidden layer was actually useful." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" } }, "nbformat": 4, "nbformat_minor": 2 }