{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Chapter 2: How the backpropagation algorithm works" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Warm up: a fast matrix-based approach to computing the output from a neural network" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The two assumptions we need about the cost function" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Hadamard product" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The four fundamental equations behind backpropagation" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Problem 1 ([link](http://neuralnetworksanddeeplearning.com/chap2.html#problem_563815)): alternate presentation of the equations of backpropagation\n", "\n", "* Transforming **BP1**: $\\delta^L = \\sigma'(z^L) \\odot \\nabla_a C$ can be rewritten as $\\delta^L = \\Sigma'(z^L) \\nabla_a C$\n", "\n", "This is actually a particular case of the following identity, which stems from the definition of matrix multiplication:\n", "\n", "If $X = \n", "\\begin{pmatrix}x_1 \\\\ x_2 \\\\ \\vdots \\\\ x_n \\end{pmatrix}$\n", "and $Y = \n", "\\begin{pmatrix}y_1 \\\\ y_2 \\\\ \\vdots \\\\ y_n \\end{pmatrix}$\n", "then $X \\odot Y = Y \\odot X = \n", "\\begin{pmatrix}x_1 y_1 \\\\ x_2 y_2 \\\\ \\vdots \\\\ x_n y_n \\end{pmatrix} = \n", "\\begin{pmatrix}x_1 & 0 & & \\ldots\\\\0 & x_2 & & \\\\ & & \\ddots & \\\\ \\ldots & & & x_n\\end{pmatrix}\n", "\\begin{pmatrix}y_1 \\\\ y_2 \\\\ \\vdots \\\\ y_n \\end{pmatrix}$\n", "\n", "Applying this with $X = \\sigma'(z^L)$ and $Y = \\nabla_a C$ gives **BP1** in the form $\\delta^L = \\Sigma'(z^L) \\nabla_a C$, where $\\Sigma'(z^L)$ is the diagonal matrix whose diagonal entries are the $\\sigma'(z^L_j)$.\n", "\n", "* Transforming **BP2**: the exact same reasoning applies.\n", "\n", "* Now to prove the last equation, we only have to repeatedly replace $\\delta^{l+1}$ in $\\delta^l = \\Sigma'(z^l) (w^{l+1})^T \\delta^{l+1}$ with $\\Sigma'(z^{l+1}) (w^{l+2})^T \\delta^{l+2}$ until we reach $l+1 = L$, and then use $\\delta^L = \\Sigma'(z^L) \\nabla_a C$ to finally get:\n", "\n", "\\begin{eqnarray}\n", " \\delta^l = \\Sigma'(z^l) (w^{l+1})^T \\ldots \\Sigma'(z^{L-1}) (w^L)^T \n", " \\Sigma'(z^L) \\nabla_a C\n", "\\end{eqnarray}" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 1 ([link](http://neuralnetworksanddeeplearning.com/chap2.html#exercise_392831)): prove equations BP3 and BP4\n", "\n", "* **BP3**: $\\frac{\\partial C}{\\partial b^l_j} = \\delta^l_j$\n", "\n", "By definition, $\\delta_j^l = \\frac{\\partial C}{\\partial z_j^l}$.\n", "\n", "Applying the chain rule, we can write this in terms of partial derivatives with respect to the biases of every neuron in layer $l$:\n", "\n", "$$\\delta_j^l = \\sum_k \\frac{\\partial C}{\\partial b_k^l} \\frac{\\partial b_k^l}{\\partial z_j^l}$$\n", "\n", "Since $b_k^l$ depends only on $z_k^l$, all the terms of this sum vanish, except the one where $k = j$:\n", "\n", "$$\\delta_j^l = \\frac{\\partial C}{\\partial b_j^l} \\frac{\\partial b_j^l}{\\partial z_j^l}$$\n", "\n", "Now since $z^l_j = \\sum_k w^l_{jk} a^{l-1}_k+b^l_j$, and therefore $b^l_j = z^l_j - \\sum_k w^l_{jk} a^{l-1}_k$, we have $\\frac{\\partial b_j^l}{\\partial z_j^l} = 1$.\n", "\n", "This gives us **BP3**: \n", "\n", "$$\\frac{\\partial C}{\\partial b^l_j} = \\delta^l_j$$\n", "\n", "* **BP4**: $\\frac{\\partial C}{\\partial w^l_{jk}} = a^{l-1}_k \\delta^l_j$\n", "\n", "As previously, $w_{jk}^l$ affects only $z_j^l$, so we can drop the sum right away and write the chain rule as:\n", "\n", "$$\\frac{\\partial C}{\\partial w_{jk}^l} = \\frac{\\partial C}{\\partial z_j^l} \\frac{\\partial z_j^l}{\\partial w_{jk}^l}$$\n", "\n", "By definition, $\\frac{\\partial C}{\\partial z_j^l}$ is $\\delta_j^l$, and since $z^l_j = \\sum_k w^l_{jk} a^{l-1}_k+b^l_j$, we have $\\frac{\\partial z_j^l}{\\partial w_{jk}^l} = a_k^{l-1}$.\n", "\n", "And so we have **BP4**:\n", "\n", "$$\\frac{\\partial C}{\\partial w^l_{jk}} = a^{l-1}_k \\delta^l_j$$" ] },
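{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick numerical sanity check of the four fundamental equations, here is a small experiment (a sketch written for these notes, not part of the book's code): build a tiny random $[2, 3, 2]$ network with the quadratic cost, compute the errors with **BP1** and **BP2**, and compare the derivatives predicted by **BP3** and **BP4** with finite differences. The layer sizes, the seed and the perturbed indices below are arbitrary choices.\n", "\n", "```\n", "import numpy as np\n", "\n", "# A tiny [2, 3, 2] network with random weights, biases and one training pair (x, y)\n", "np.random.seed(0)\n", "sizes = [2, 3, 2]\n", "biases = [np.random.randn(y, 1) for y in sizes[1:]]\n", "weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]\n", "x = np.random.randn(2, 1)\n", "y = np.random.randn(2, 1)\n", "\n", "sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))\n", "sigmoid_prime = lambda z: sigmoid(z) * (1 - sigmoid(z))\n", "\n", "def cost(biases, weights):\n", "    # Quadratic cost for the single training pair (x, y)\n", "    a = x\n", "    for b, w in zip(biases, weights):\n", "        a = sigmoid(w @ a + b)\n", "    return 0.5 * np.sum((a - y) ** 2)\n", "\n", "# Feedforward, keeping the z's and the activations of every layer\n", "activations, zs, a = [x], [], x\n", "for b, w in zip(biases, weights):\n", "    z = w @ a + b\n", "    zs.append(z)\n", "    a = sigmoid(z)\n", "    activations.append(a)\n", "\n", "# Backward pass: BP1 for the output layer, then BP2 for the earlier layers\n", "deltas = [(activations[-1] - y) * sigmoid_prime(zs[-1])]\n", "for l in range(2, len(sizes)):\n", "    deltas.insert(0, (weights[-l + 1].T @ deltas[0]) * sigmoid_prime(zs[-l]))\n", "\n", "# BP3: dC/db^l_j = delta^l_j, checked against a finite difference\n", "eps = 1e-6\n", "b_pert = [b.copy() for b in biases]\n", "b_pert[0][1, 0] += eps\n", "print((cost(b_pert, weights) - cost(biases, weights)) / eps, deltas[0][1, 0])\n", "\n", "# BP4: dC/dw^l_{jk} = a^{l-1}_k delta^l_j, checked the same way\n", "w_pert = [w.copy() for w in weights]\n", "w_pert[1][0, 2] += eps\n", "print((cost(biases, w_pert) - cost(biases, weights)) / eps,\n", "      activations[1][2, 0] * deltas[1][0, 0])\n", "```\n", "\n", "Each ```print``` should display two nearly identical numbers: the finite-difference estimate of the partial derivative, and the value given by BP3 or BP4." ] },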
{ "cell_type": "markdown", "metadata": {}, "source": [ "## The backpropagation algorithm" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 2 ([link](http://neuralnetworksanddeeplearning.com/chap2.html#exercises_675621)): Backpropagation with a single modified neuron\n", "\n", "For this we have to recognize which parts of the backpropagation equations actually depend on the precise form of the activation function, and which parts don't.\n", "\n", "Let's say our modified neuron is the $i^{th}$ neuron in the $l^{th}$ layer.\n", "\n", "In step 2 (feedforward), we compute everything as before, except that for the modified neuron $a_i^l = f(z_i^l)$.\n", "\n", "Now:\n", "\n", "* if the modified neuron is in the output layer ($l = L$), we have to separate the error calculations in step 3, and compute $\\delta_i^L = \\frac{\\partial C}{\\partial a_i^L} f'(z_i^L)$ for our modified neuron, and $\\delta_j^L = \\frac{\\partial C}{\\partial a_j^L} \\sigma'(z_j^L)$ for the other neurons. The other steps are unchanged.\n", "\n", "* if the modified neuron is inside the network ($l < L$), steps 3 and 5 are unchanged, but in step 4 we have to compute things differently for our neuron. It helps here to consider the component form of **BP2**, which is the equation used in this step (shown below in its unmodified form):\n", "\n", "$$\\delta^l_j = \\sum_k w^{l+1}_{kj} \\delta^{l+1}_k \\sigma'(z^l_j)$$\n", "\n", "This time, we have for our modified neuron:\n", "\n", "$$\\delta^l_i = \\sum_k w^{l+1}_{ki} \\delta^{l+1}_k f'(z^l_i)$$\n", "\n", "And for the other neurons:\n", "\n", "$$\\delta^l_j = \\sum_k w^{l+1}_{kj} \\delta^{l+1}_k \\sigma'(z^l_j)$$" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 3 ([link](http://neuralnetworksanddeeplearning.com/chap2.html#exercises_675621)): Backpropagation with linear neurons\n", "\n", "With the activation function $\\sigma(z) = z$, we have $\\sigma'(z) = 1$ and $a_j^l = z_j^l$, which simplifies the backpropagation algorithm:\n", "\n", "* In step 3, the output error vector is simply:\n", "\n", "$$\\delta^{L} = \\nabla_a C$$\n", "\n", "* In step 4, the error vector is simply:\n", "\n", "$$\\delta^{l} = (w^{l+1})^T \\delta^{l+1}$$" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## The code for backpropagation" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Problem 2 ([link](http://neuralnetworksanddeeplearning.com/chap2.html#problem_269962)): a fully matrix-based approach to backpropagation over a mini-batch\n", "\n", "The code is in the ```chap2p2``` directory. Executing ```exec_normal.py``` makes use of ```network.py```, which is the original Python 3 file, except that I've added a timer. Here's the output:\n", "\n", "```\n", "Epoch 0: 9078 / 10000, elapsed time: 8.82s\n", "Epoch 1: 9258 / 10000, elapsed time: 18.31s\n", "Epoch 2: 9319 / 10000, elapsed time: 27.38s\n", "...\n", "Epoch 27: 9434 / 10000, elapsed time: 234.64s\n", "Epoch 28: 9457 / 10000, elapsed time: 242.92s\n", "Epoch 29: 9444 / 10000, elapsed time: 251.13s\n", "```\n", "\n", "```network_matrix.py``` implements the matrix-based approach. Let's execute ```exec_matrix.py```:\n", "\n", "```\n", "Epoch 0: 8216 / 10000, elapsed time: 2.59s\n", "Epoch 1: 8365 / 10000, elapsed time: 5.05s\n", "Epoch 2: 8375 / 10000, elapsed time: 7.49s\n", "...\n", "Epoch 27: 9482 / 10000, elapsed time: 67.95s\n", "Epoch 28: 9483 / 10000, elapsed time: 70.37s\n", "Epoch 29: 9511 / 10000, elapsed time: 72.78s\n", "```\n", "\n", "On my computer, the matrix-based approach is about 3.5 times faster (roughly 251s versus 73s for 30 epochs).\n", "\n", "Note that the learning method itself remains exactly the same, so the differences in accuracy between the two runs are due only to the random initialization and shuffling, and shouldn't be read into." ] },
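{ "cell_type": "markdown", "metadata": {}, "source": [ "For reference, here is roughly what the matrix-based backprop can look like (a sketch written for these notes, not a copy of ```network_matrix.py```; the name ```backprop_matrix``` and the column convention are my own choices). The columns of ```X``` and ```Y``` hold the inputs and desired outputs of a whole mini-batch, so one feedforward and one backward pass handle all the examples at once; ```self.biases```, ```self.weights```, ```self.num_layers```, ```sigmoid``` and ```sigmoid_prime``` are the same objects as in the book's ```network.py```.\n", "\n", "```\n", "def backprop_matrix(self, X, Y):\n", "    # Feedforward: each bias vector is broadcast over the mini-batch columns\n", "    activation = X\n", "    activations = [X]\n", "    zs = []\n", "    for b, w in zip(self.biases, self.weights):\n", "        z = np.dot(w, activation) + b\n", "        zs.append(z)\n", "        activation = sigmoid(z)\n", "        activations.append(activation)\n", "    # Backward pass: one column of delta per training example\n", "    # (quadratic-cost derivative, then BP1 and BP2)\n", "    delta = (activations[-1] - Y) * sigmoid_prime(zs[-1])\n", "    nabla_b = [None] * len(self.biases)\n", "    nabla_w = [None] * len(self.weights)\n", "    nabla_b[-1] = delta.sum(axis=1, keepdims=True)            # BP3, summed over the batch\n", "    nabla_w[-1] = np.dot(delta, activations[-2].transpose())  # BP4, summed over the batch\n", "    for l in range(2, self.num_layers):\n", "        delta = np.dot(self.weights[-l + 1].transpose(), delta) * sigmoid_prime(zs[-l])\n", "        nabla_b[-l] = delta.sum(axis=1, keepdims=True)\n", "        nabla_w[-l] = np.dot(delta, activations[-l - 1].transpose())\n", "    return (nabla_b, nabla_w)\n", "```\n", "\n", "```update_mini_batch``` can then call this once per mini-batch and apply the summed gradients directly (dividing by the mini-batch size), instead of looping over the training examples and accumulating per-example gradients." ] },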
{ "cell_type": "markdown", "metadata": {}, "source": [ "## In what sense is backpropagation a fast algorithm?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Backpropagation: the big picture" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" } }, "nbformat": 4, "nbformat_minor": 2 }