{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "**Appendix D – Autodiff**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "_This notebook contains toy implementations of various autodiff techniques, to explain how they work._\n", "\n", "Run in Google Colab" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Warning**: this is the code for the 1st edition of the book. Please visit https://github.com/ageron/handson-ml2 for the 2nd edition code, with up-to-date notebooks using the latest library versions. In particular, the 1st edition is based on TensorFlow 1, while the 2nd edition uses TensorFlow 2, which is much simpler to use." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Setup" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let's make sure this notebook works well in both python 2 and 3:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# To support both python 2 and python 3\n", "from __future__ import absolute_import, division, print_function, unicode_literals" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Introduction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Suppose we want to compute the gradients of the function $f(x,y)=x^2y + y + 2$ with regards to the parameters x and y:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "def f(x,y):\n", "    return x*x*y + y + 2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One approach is to solve this analytically:\n", "\n", "$\\dfrac{\\partial f}{\\partial x} = 2xy$\n", "\n", "$\\dfrac{\\partial f}{\\partial y} = x^2 + 1$" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "def df(x,y):\n", "    return 2*x*y, x*x + 1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So for example $\\dfrac{\\partial f}{\\partial x}(3,4) = 24$ and $\\dfrac{\\partial f}{\\partial y}(3,4) = 10$." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(24, 10)" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df(3, 4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Perfect! We can also find the equations for the second order partial derivatives (which together form the Hessian matrix):\n", "\n", "$\\dfrac{\\partial^2 f}{\\partial x \\partial x} = \\dfrac{\\partial (2xy)}{\\partial x} = 2y$\n", "\n", "$\\dfrac{\\partial^2 f}{\\partial x \\partial y} = \\dfrac{\\partial (2xy)}{\\partial y} = 2x$\n", "\n", "$\\dfrac{\\partial^2 f}{\\partial y \\partial x} = \\dfrac{\\partial (x^2 + 1)}{\\partial x} = 2x$\n", "\n", "$\\dfrac{\\partial^2 f}{\\partial y \\partial y} = \\dfrac{\\partial (x^2 + 1)}{\\partial y} = 0$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "At x=3 and y=4, these second order derivatives are respectively 8, 6, 6, 0. Let's use the equations above to compute them:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "def d2f(x, y):\n", "    return [2*y, 2*x], [2*x, 0]" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "([8, 6], [6, 0])" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "d2f(3, 4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Perfect, but this requires some mathematical work. It is not too hard in this case, but for a deep neural network, it is practically impossible to compute the derivatives this way. So let's look at various ways to automate this!" ] }
, { "cell_type": "markdown", "metadata": {}, "source": [ "# Numeric differentiation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, we compute an approximation of the gradients using the equation: $\\dfrac{\\partial f}{\\partial x} = \\displaystyle{\\lim_{\\epsilon \\to 0}}\\dfrac{f(x+\\epsilon, y) - f(x, y)}{\\epsilon}$ (and there is a similar definition for $\\dfrac{\\partial f}{\\partial y}$)." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "def gradients(func, vars_list, eps=0.0001):\n", "    partial_derivatives = []\n", "    base_func_eval = func(*vars_list)\n", "    for idx in range(len(vars_list)):\n", "        tweaked_vars = vars_list[:]\n", "        tweaked_vars[idx] += eps\n", "        tweaked_func_eval = func(*tweaked_vars)\n", "        derivative = (tweaked_func_eval - base_func_eval) / eps\n", "        partial_derivatives.append(derivative)\n", "    return partial_derivatives" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "def df(x, y):\n", "    return gradients(f, [x, y])" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[24.000400000048216, 10.000000000047748]" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df(3, 4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It works well!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The good news is that it is pretty easy to compute the second order derivatives (the Hessian) the same way. First let's create functions that compute the first order partial derivatives (which form the Jacobian):" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(24.000400000048216, 10.000000000047748)" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def dfdx(x, y):\n", "    return gradients(f, [x,y])[0]\n", "\n", "def dfdy(x, y):\n", "    return gradients(f, [x,y])[1]\n", "\n", "dfdx(3., 4.), dfdy(3., 4.)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can simply apply the `gradients()` function to these functions:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "def d2f(x, y):\n", "    return [gradients(dfdx, [x, y]), gradients(dfdy, [x, y])]" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[[7.999999951380232, 6.000099261882497],\n", " [6.000099261882497, -1.4210854715202004e-06]]" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "d2f(3, 4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So everything works well, but the result is approximate, and computing the gradients of a function with regards to $n$ variables requires calling that function at least $n$ times. In deep neural nets, there are often thousands of parameters to tweak using gradient descent (which requires computing the gradients of the loss function with regards to each of these parameters), so this approach would be much too slow." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Implementing a Toy Computation Graph" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Rather than this numerical approach, let's implement some symbolic autodiff techniques. For this, we will need to define classes to represent constants, variables and operations." ] }
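, { "cell_type": "markdown", "metadata": {}, "source": [ "One quick aside before defining those classes (this aside is not part of the original notebook): the `gradients()` function above uses a one-sided (forward) difference, whose error shrinks roughly linearly with `eps`. A central difference is usually more accurate, at the cost of one extra function call per variable. Here is a minimal sketch; the name `gradients_central` is just illustrative:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def gradients_central(func, vars_list, eps=0.0001):\n", "    # Central differences: (f(x + eps) - f(x - eps)) / (2 * eps)\n", "    partial_derivatives = []\n", "    for idx in range(len(vars_list)):\n", "        vars_plus = vars_list[:]\n", "        vars_minus = vars_list[:]\n", "        vars_plus[idx] += eps\n", "        vars_minus[idx] -= eps\n", "        derivative = (func(*vars_plus) - func(*vars_minus)) / (2 * eps)\n", "        partial_derivatives.append(derivative)\n", "    return partial_derivatives\n", "\n", "gradients_central(f, [3., 4.])  # should be very close to [24.0, 10.0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's define those classes:" ] }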
, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "class Const(object):\n", "    def __init__(self, value):\n", "        self.value = value\n", "    def evaluate(self):\n", "        return self.value\n", "    def __str__(self):\n", "        return str(self.value)\n", "\n", "class Var(object):\n", "    def __init__(self, name, init_value=0):\n", "        self.value = init_value\n", "        self.name = name\n", "    def evaluate(self):\n", "        return self.value\n", "    def __str__(self):\n", "        return self.name\n", "\n", "class BinaryOperator(object):\n", "    def __init__(self, a, b):\n", "        self.a = a\n", "        self.b = b\n", "\n", "class Add(BinaryOperator):\n", "    def evaluate(self):\n", "        return self.a.evaluate() + self.b.evaluate()\n", "    def __str__(self):\n", "        return \"{} + {}\".format(self.a, self.b)\n", "\n", "class Mul(BinaryOperator):\n", "    def evaluate(self):\n", "        return self.a.evaluate() * self.b.evaluate()\n", "    def __str__(self):\n", "        return \"({}) * ({})\".format(self.a, self.b)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Good, now we can build a computation graph to represent the function $f$:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "x = Var(\"x\")\n", "y = Var(\"y\")\n", "f = Add(Mul(Mul(x, x), y), Add(y, Const(2))) # f(x,y) = x²y + y + 2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And we can run this graph to compute $f$ at any point, for example $f(3, 4)$." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "42" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x.value = 3\n", "y.value = 4\n", "f.evaluate()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Perfect, it found the ultimate answer." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Computing gradients" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The autodiff methods we will present below are all based on the *chain rule*." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Suppose we have two functions $u$ and $v$, and we apply them sequentially to some input $x$, and we get the result $z$. So we have $z = v(u(x))$, which we can rewrite as $z = v(s)$ and $s = u(x)$. Now we can apply the chain rule to get the partial derivative of the output $z$ with regards to the input $x$:\n", "\n", "$ \\dfrac{\\partial z}{\\partial x} = \\dfrac{\\partial s}{\\partial x} \\cdot \\dfrac{\\partial z}{\\partial s}$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now if $z$ is the output of a sequence of functions which have intermediate outputs $s_1, s_2, ..., s_n$, the chain rule still applies:\n", "\n", "$ \\dfrac{\\partial z}{\\partial x} = \\dfrac{\\partial s_1}{\\partial x} \\cdot \\dfrac{\\partial s_2}{\\partial s_1} \\cdot \\dfrac{\\partial s_3}{\\partial s_2} \\cdot \\dots \\cdot \\dfrac{\\partial s_{n-1}}{\\partial s_{n-2}} \\cdot \\dfrac{\\partial s_n}{\\partial s_{n-1}} \\cdot \\dfrac{\\partial z}{\\partial s_n}$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In forward mode autodiff, the algorithm computes these terms \"forward\" (i.e., in the same order as the computations required to compute the output $z$), that is from left to right: first $\\dfrac{\\partial s_1}{\\partial x}$, then $\\dfrac{\\partial s_2}{\\partial s_1}$, and so on.
\n", "In reverse mode autodiff, the algorithm computes these terms \"backwards\", from right to left: first $\\dfrac{\\partial z}{\\partial s_n}$, then $\\dfrac{\\partial s_n}{\\partial s_{n-1}}$, and so on.\n", "\n", "For example, suppose you want to compute the derivative of the function $z(x)=\\sin(x^2)$ at x=3, using forward mode autodiff. The algorithm would first compute the partial derivative $\\dfrac{\\partial s_1}{\\partial x}=\\dfrac{\\partial x^2}{\\partial x}=2x=6$. Next, it would compute $\\dfrac{\\partial z}{\\partial x}=\\dfrac{\\partial s_1}{\\partial x}\\cdot\\dfrac{\\partial z}{\\partial s_1}= 6 \\cdot \\dfrac{\\partial \\sin(s_1)}{\\partial s_1}=6 \\cdot \\cos(s_1)=6 \\cdot \\cos(3^2)\\approx-5.46$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's verify this result using the `gradients()` function defined earlier:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[-5.46761419430053]" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from math import sin\n", "\n", "def z(x):\n", "    return sin(x**2)\n", "\n", "gradients(z, [3])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Looks good. Now let's do the same thing using reverse mode autodiff. This time the algorithm would start from the right hand side so it would compute $\\dfrac{\\partial z}{\\partial s_1} = \\dfrac{\\partial \\sin(s_1)}{\\partial s_1}=\\cos(s_1)=\\cos(3^2)\\approx -0.91$. Next it would compute $\\dfrac{\\partial z}{\\partial x}=\\dfrac{\\partial s_1}{\\partial x}\\cdot\\dfrac{\\partial z}{\\partial s_1} \\approx \\dfrac{\\partial s_1}{\\partial x} \\cdot -0.91 = \\dfrac{\\partial x^2}{\\partial x} \\cdot -0.91=2x \\cdot -0.91 = 6\\cdot-0.91=-5.46$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Of course both approaches give the same result (except for rounding errors), and with a single input and output they involve the same number of computations. But when there are several inputs or outputs, they can have very different performance. Indeed, if there are many inputs, the right-most terms will be needed to compute the partial derivatives with regards to each input, so it is a good idea to compute these right-most terms first. That means using reverse-mode autodiff. This way, the right-most terms can be computed just once and used to compute all the partial derivatives. Conversely, if there are many outputs, forward-mode is generally preferable because the left-most terms can be computed just once to compute the partial derivatives of the different outputs. In Deep Learning, there are typically thousands of model parameters, meaning there are lots of inputs, but few outputs. In fact, there is generally just one output during training: the loss. This is why reverse mode autodiff is used in TensorFlow and all major Deep Learning libraries." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There's one additional complexity in reverse mode autodiff: the value of $s_i$ is generally required when computing $\\dfrac{\\partial s_{i+1}}{\\partial s_i}$, and computing $s_i$ requires first computing $s_{i-1}$, which requires computing $s_{i-2}$, and so on. So basically, a first pass forward through the network is required to compute $s_1$, $s_2$, $s_3$, $\\dots$, $s_{n-1}$ and $s_n$, and then the algorithm can compute the partial derivatives from right to left.
\n", "Storing all the intermediate values $s_i$ in RAM is sometimes a problem, especially when handling images, and when using GPUs which often have limited RAM: to limit this problem, one can reduce the number of layers in the neural network, or configure TensorFlow to make it swap these values from GPU RAM to CPU RAM. Another approach is to only cache every other intermediate value, $s_1$, $s_3$, $s_5$, $\\dots$, $s_{n-4}$, $s_{n-2}$ and $s_n$. This means that when the algorithm computes the partial derivatives, if an intermediate value $s_i$ is missing, it will need to recompute it based on the previous intermediate value $s_{i-1}$. This trades off CPU for RAM (if you are interested, check out [this paper](https://pdfs.semanticscholar.org/f61e/9fd5a4878e1493f7a6b03774a61c17b7e9a4.pdf))." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Forward mode autodiff" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "Const.gradient = lambda self, var: Const(0)\n", "Var.gradient = lambda self, var: Const(1) if self is var else Const(0)\n", "Add.gradient = lambda self, var: Add(self.a.gradient(var), self.b.gradient(var))\n", "Mul.gradient = lambda self, var: Add(Mul(self.a, self.b.gradient(var)), Mul(self.a.gradient(var), self.b))\n", "\n", "x = Var(name=\"x\", init_value=3.)\n", "y = Var(name=\"y\", init_value=4.)\n", "f = Add(Mul(Mul(x, x), y), Add(y, Const(2))) # f(x,y) = x²y + y + 2\n", "\n", "dfdx = f.gradient(x) # 2xy\n", "dfdy = f.gradient(y) # x² + 1" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(24.0, 10.0)" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dfdx.evaluate(), dfdy.evaluate()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since the output of the `gradient()` method is fully symbolic, we are not limited to the first order derivatives; we can also compute second order derivatives, and so on:" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "d2fdxdx = dfdx.gradient(x) # 2y\n", "d2fdxdy = dfdx.gradient(y) # 2x\n", "d2fdydx = dfdy.gradient(x) # 2x\n", "d2fdydy = dfdy.gradient(y) # 0" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[[8.0, 6.0], [6.0, 0.0]]" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "[[d2fdxdx.evaluate(), d2fdxdy.evaluate()],\n", " [d2fdydx.evaluate(), d2fdydy.evaluate()]]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the result is now exact, not an approximation (up to the limit of the machine's float precision, of course)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Forward mode autodiff using dual numbers" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A nice way to apply forward mode autodiff is to use [dual numbers](https://en.wikipedia.org/wiki/Dual_number). In short, a dual number $z$ has the form $z = a + b\\epsilon$, where $a$ and $b$ are real numbers, and $\\epsilon$ is an infinitesimal number, greater than 0 but smaller than any positive real number, and such that $\\epsilon^2=0$.\n", "It can be shown that $f(x + \\epsilon) = f(x) + \\dfrac{\\partial f}{\\partial x}\\epsilon$, so simply by computing $f(x + \\epsilon)$ we get both the value of $f(x)$ and the partial derivative of $f$ with regards to $x$.
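" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For instance (a quick worked example, not in the original notebook), take $f(x) = x^2$:\n", "\n", "$f(x + \\epsilon) = (x + \\epsilon)^2 = x^2 + 2x\\epsilon + \\epsilon^2 = x^2 + 2x\\epsilon$\n", "\n", "so the coefficient of $\\epsilon$ is indeed the derivative $\\dfrac{\\partial f}{\\partial x} = 2x$.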
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Dual numbers have their own arithmetic rules, which are generally quite natural. For example:\n", "\n", "**Addition**\n", "\n", "$(a_1 + b_1\\epsilon) + (a_2 + b_2\\epsilon) = (a_1 + a_2) + (b_1 + b_2)\\epsilon$\n", "\n", "**Subtraction**\n", "\n", "$(a_1 + b_1\\epsilon) - (a_2 + b_2\\epsilon) = (a_1 - a_2) + (b_1 - b_2)\\epsilon$\n", "\n", "**Multiplication**\n", "\n", "$(a_1 + b_1\\epsilon) \\times (a_2 + b_2\\epsilon) = (a_1 a_2) + (a_1 b_2 + a_2 b_1)\\epsilon + b_1 b_2\\epsilon^2 = (a_1 a_2) + (a_1b_2 + a_2b_1)\\epsilon$\n", "\n", "**Division**\n", "\n", "$\\dfrac{a_1 + b_1\\epsilon}{a_2 + b_2\\epsilon} = \\dfrac{a_1 + b_1\\epsilon}{a_2 + b_2\\epsilon} \\cdot \\dfrac{a_2 - b_2\\epsilon}{a_2 - b_2\\epsilon} = \\dfrac{a_1 a_2 + (b_1 a_2 - a_1 b_2)\\epsilon - b_1 b_2\\epsilon^2}{{a_2}^2 + (a_2 b_2 - a_2 b_2)\\epsilon - {b_2}^2\\epsilon} = \\dfrac{a_1}{a_2} + \\dfrac{a_1 b_2 - b_1 a_2}{{a_2}^2}\\epsilon$\n", "\n", "**Power**\n", "\n", "$(a + b\\epsilon)^n = a^n + (n a^{n-1}b)\\epsilon$\n", "\n", "etc." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create a class to represent dual numbers, and implement a few operations (addition and multiplication). You can try adding some more if you want." ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [], "source": [ "class DualNumber(object):\n", " def __init__(self, value=0.0, eps=0.0):\n", " self.value = value\n", " self.eps = eps\n", " def __add__(self, b):\n", " return DualNumber(self.value + self.to_dual(b).value,\n", " self.eps + self.to_dual(b).eps)\n", " def __radd__(self, a):\n", " return self.to_dual(a).__add__(self)\n", " def __mul__(self, b):\n", " return DualNumber(self.value * self.to_dual(b).value,\n", " self.eps * self.to_dual(b).value + self.value * self.to_dual(b).eps)\n", " def __rmul__(self, a):\n", " return self.to_dual(a).__mul__(self)\n", " def __str__(self):\n", " if self.eps:\n", " return \"{:.1f} + {:.1f}ε\".format(self.value, self.eps)\n", " else:\n", " return \"{:.1f}\".format(self.value)\n", " def __repr__(self):\n", " return str(self)\n", " @classmethod\n", " def to_dual(cls, n):\n", " if hasattr(n, \"value\"):\n", " return n\n", " else:\n", " return cls(n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "$3 + (3 + 4 \\epsilon) = 6 + 4\\epsilon$" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "6.0 + 4.0ε" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "3 + DualNumber(3, 4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "$(3 + 4ε)\\times(5 + 7ε)$ = $3 \\times 5 + 3 \\times 7ε + 4ε \\times 5 + 4ε \\times 7ε$ = $15 + 21ε + 20ε + 28ε^2$ = $15 + 41ε + 28 \\times 0$ = $15 + 41ε$" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "15.0 + 41.0ε" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "DualNumber(3, 4) * DualNumber(5, 7)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's see if the dual numbers work with our toy computation framework:" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "42.0" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x.value = DualNumber(3.0)\n", "y.value = DualNumber(4.0)\n", "\n", "f.evaluate()" ] }, { "cell_type": "markdown", 
"metadata": {}, "source": [ "Yep, sure works. Now let's use this to compute the partial derivatives of $f$ with regards to $x$ and $y$ at x=3 and y=4:" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "x.value = DualNumber(3.0, 1.0) # 3 + ε\n", "y.value = DualNumber(4.0) # 4\n", "\n", "dfdx = f.evaluate().eps\n", "\n", "x.value = DualNumber(3.0) # 3\n", "y.value = DualNumber(4.0, 1.0) # 4 + ε\n", "\n", "dfdy = f.evaluate().eps" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "24.0" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dfdx" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "10.0" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dfdy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Great! However, in this implementation we are limited to first order derivatives.\n", "Now let's look at reverse mode." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Reverse mode autodiff" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's rewrite our toy framework to add reverse mode autodiff:" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [], "source": [ "class Const(object):\n", " def __init__(self, value):\n", " self.value = value\n", " def evaluate(self):\n", " return self.value\n", " def backpropagate(self, gradient):\n", " pass\n", " def __str__(self):\n", " return str(self.value)\n", "\n", "class Var(object):\n", " def __init__(self, name, init_value=0):\n", " self.value = init_value\n", " self.name = name\n", " self.gradient = 0\n", " def evaluate(self):\n", " return self.value\n", " def backpropagate(self, gradient):\n", " self.gradient += gradient\n", " def __str__(self):\n", " return self.name\n", "\n", "class BinaryOperator(object):\n", " def __init__(self, a, b):\n", " self.a = a\n", " self.b = b\n", "\n", "class Add(BinaryOperator):\n", " def evaluate(self):\n", " self.value = self.a.evaluate() + self.b.evaluate()\n", " return self.value\n", " def backpropagate(self, gradient):\n", " self.a.backpropagate(gradient)\n", " self.b.backpropagate(gradient)\n", " def __str__(self):\n", " return \"{} + {}\".format(self.a, self.b)\n", "\n", "class Mul(BinaryOperator):\n", " def evaluate(self):\n", " self.value = self.a.evaluate() * self.b.evaluate()\n", " return self.value\n", " def backpropagate(self, gradient):\n", " self.a.backpropagate(gradient * self.b.value)\n", " self.b.backpropagate(gradient * self.a.value)\n", " def __str__(self):\n", " return \"({}) * ({})\".format(self.a, self.b)" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [], "source": [ "x = Var(\"x\", init_value=3)\n", "y = Var(\"y\", init_value=4)\n", "f = Add(Mul(Mul(x, x), y), Add(y, Const(2))) # f(x,y) = x²y + y + 2\n", "\n", "result = f.evaluate()\n", "f.backpropagate(1.0)" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "((x) * (x)) * (y) + y + 2\n" ] } ], "source": [ "print(f)" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "42" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" } ], "source": [ "result" ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [ { "data": { 
"text/plain": [ "24.0" ] }, "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x.gradient" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "10.0" ] }, "execution_count": 33, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y.gradient" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Again, in this implementation the outputs are just numbers, not symbolic expressions, so we are limited to first order derivatives. However, we could have made the `backpropagate()` methods return symbolic expressions rather than values (e.g., return `Add(2,3)` rather than 5). This would make it possible to compute second order gradients (and beyond). This is what TensorFlow does, as do all the major libraries that implement autodiff." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Reverse mode autodiff using TensorFlow" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [], "source": [ "try:\n", " # %tensorflow_version only exists in Colab.\n", " %tensorflow_version 1.x\n", "except Exception:\n", " pass\n", "\n", "import tensorflow as tf" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(42.0, [24.0, 10.0])" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tf.reset_default_graph()\n", "\n", "x = tf.Variable(3., name=\"x\")\n", "y = tf.Variable(4., name=\"y\")\n", "f = x*x*y + y + 2\n", "\n", "jacobians = tf.gradients(f, [x, y])\n", "\n", "init = tf.global_variables_initializer()\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", " f_val, jacobians_val = sess.run([f, jacobians])\n", "\n", "f_val, jacobians_val" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since everything is symbolic, we can compute second order derivatives, and beyond. However, when we compute the derivative of a tensor with regards to a variable that it does not depend on, instead of returning 0.0, the `gradients()` function returns None, which cannot be evaluated by `sess.run()`. So beware of `None` values. Here we just replace them with zero tensors." ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "([8.0, 6.0], [6.0, 0.0])" ] }, "execution_count": 36, "metadata": {}, "output_type": "execute_result" } ], "source": [ "hessians_x = tf.gradients(jacobians[0], [x, y])\n", "hessians_y = tf.gradients(jacobians[1], [x, y])\n", "\n", "def replace_none_with_zero(tensors):\n", " return [tensor if tensor is not None else tf.constant(0.)\n", " for tensor in tensors]\n", "\n", "hessians_x = replace_none_with_zero(hessians_x)\n", "hessians_y = replace_none_with_zero(hessians_y)\n", "\n", "init = tf.global_variables_initializer()\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", " hessians_x_val, hessians_y_val = sess.run([hessians_x, hessians_y])\n", "\n", "hessians_x_val, hessians_y_val" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And that's all folks! Hope you enjoyed this notebook." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" }, "nav_menu": { "height": "603px", "width": "616px" }, "toc": { "navigate_menu": true, "number_sections": true, "sideBar": true, "threshold": 6, "toc_cell": false, "toc_section_display": "block", "toc_window_display": true } }, "nbformat": 4, "nbformat_minor": 1 }