{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<div>\n",
    "<img src=\"https://discuss.pytorch.org/uploads/default/original/2X/3/35226d9fbc661ced1c5d17e374638389178c3176.png\" width=\"400\" style=\"margin: 50px auto; display: block; position: relative; left: -30px;\" />\n",
    "</div>\n",
    "\n",
    "<!--NAVIGATION-->\n",
    "# < [Data](2-Data.ipynb) | Autograd | [Optimization](4-Optimization.ipynb) >"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Automatic Differentiation\n",
    "\n",
    "Automatic differentation (autodiff) is a key feature of PyTorch.\n",
    "PyTorch can differentiate the outcome of any computation with respect to its inputs. You don't need to compute the gradients yourself. This allows you to express and optimize complex models without worrying about correctly differentiating the model.\n",
    "\n",
    "We will start by discussing a little bit of the math behind autodiff. We then cover PyTorch's `.backward()` method that does everything automatically for you. Finally, we have a quick look under the hood to see how PyTorch does its magic.\n",
    "\n",
    "### Table of Contents\n",
    "\n",
    "#### 1. [Understanding gradient computation](#Understanding-gradient-computation)\n",
    "#### 2. [A linear regression example](#A-linear-regression-example)\n",
    "#### 3. [Useful features](#Useful-features)\n",
    "#### 4. [Advanced topics](#Advanced-topics)\n",
    "---\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "torch.__version__"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Understanding gradient computation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Requires grad attribute"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `requires_grad` property on a Tensor tells PyTorch to track computations based on this tensor. \n",
    "After you compute a quantity `y` (forward pass), you can compute the gradient of `y` with respect to all tensors that have `requires_grad==True`. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = torch.Tensor([2])\n",
    "print(x)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(x.requires_grad)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x.requires_grad_(True)  # note the underscore"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(x.requires_grad)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Checking how Autograd tracks operations"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y = x * x\n",
    "print(y)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(z.requires_grad)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(z.grad_fn)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "z = y + 4\n",
    "print(z)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Computing gradients with `.backward()`"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The gradient computation (the backward pass) is triggered with `z.backward()`. You will find the computed gradients in `x.grad`.\n",
    "\n",
    "This computes the gradient of z with respect to x."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(x.grad)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "z.backward()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(x.grad)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here, we have $z(x) = x^2 + 4$, therefore $\\frac{dz}{dx}(x) = 2  x$.  We can indeed check that `x.grad = 2*x`"
   ]
  },
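  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity check: the stored gradient should match the analytic derivative 2*x\n",
    "print(x.grad)\n",
    "print(x.grad == 2 * x)"
   ]
  },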
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This simple polynomial expression is easy enough to differentiate by hand. When expressions become tensor-valued and more complex, however, computing gradients becomes tedious and error-prone. The power of PyTorch is that it can compute gradients of any tensor with respect to its 'inputs' automatically. This greatly simplifies the optimization of complex, creative ML models."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Remember:\n",
    "- `tensor.requires_grad`\n",
    "- `tensor.grad`\n",
    "- `tensor.backward()`"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# A linear regression example"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We have the simple linear regression $loss = (x \\cdot W + b - y)^{2}$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's create our sample data point `x` and `y`, and our regression parameters `W` and `b`.\n",
    "\n",
    "Since we want to update `W` and `b`, we need gradient with respect to them, so we set their `requires_grad` attribute to `True`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = torch.Tensor([1,2,3])\n",
    "y = torch.Tensor([1])\n",
    "\n",
    "W = torch.rand((3,1), requires_grad=True)\n",
    "b = torch.rand(1, requires_grad=True)\n",
    "print(W, \"\\n\\n\", b)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "loss = (x @ W + b - y) ** 2\n",
    "print(loss)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>\n",
    "<img src=\"figures/grad.png\" width=\"400\" style=\"margin-left: auto;margin-right: auto;display: block;\" />\n",
    "</p>\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before calling backward, all gradients are `None`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(W.grad, b.grad)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "loss.backward()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>\n",
    "<img src=\"figures/backward.png\" width=\"400\" style=\"margin-left: auto;margin-right: auto;display: block;\" />\n",
    "</p>\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After calling backward, gradients of all parameters have been computed !"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(W.grad, \"\\n\\n\", b.grad)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note: No gradient of the `loss` is computed with respect to `x` and `y` since they do not require gradient."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(x.grad, y.grad)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Gradients accumulate !"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "loss = (x @ W + b - y) ** 2\n",
    "loss.backward()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(W.grad, \"\\n\\n\", b.grad)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You see that the second time, the gradient computed is twice as big. This is because `.backward()` __accumulates__ the gradients.\n",
    "\n",
    "If you want fresh gradient values, you need to set the `.grad` attributes of the parameters to zero before you call `.backward()`."
   ]
  },
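  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Below is a minimal sketch of resetting the gradients by hand. (In practice, an optimizer's `zero_grad()` method does this for you.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: zero the accumulated gradients in-place, then recompute them\n",
    "W.grad.zero_()\n",
    "b.grad.zero_()\n",
    "\n",
    "loss = (x @ W + b - y) ** 2\n",
    "loss.backward()\n",
    "print(W.grad, \"\\n\\n\", b.grad)  # same values as after the first backward call"
   ]
  },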
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Useful features"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Skipping history tracking with `torch.no_grad()`\n",
    "\n",
    "After you trained a model, you just want to use it without computing gradients.\n",
    "Building a computation graph for every operation would be wasteful if you don't need it.\n",
    "Therefore, you can skip these operations by wrapping your code with the `with torch.no_grad():` context."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = torch.randn(3, requires_grad=True)\n",
    "print(\"x.requires_grad : \", x.requires_grad)\n",
    "\n",
    "y = (x ** 2)\n",
    "print(\"y.requires_grad : \", y.requires_grad)\n",
    "\n",
    "with torch.no_grad():\n",
    "    y = (x ** 2)\n",
    "    print(\"y.requires_grad : \", y.requires_grad)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Any variable created within the `no_grad` context will have `requires_grad==False`."
   ]
  },
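  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a side note, `torch.no_grad` also works as a decorator, disabling gradient tracking for everything inside the decorated function. A small sketch (the helper name `predict` is just for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: torch.no_grad used as a decorator\n",
    "@torch.no_grad()\n",
    "def predict(inp):\n",
    "    return inp ** 2\n",
    "\n",
    "print(predict(x).requires_grad)  # False: no history is tracked inside predict"
   ]
  },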
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Dropping history with `.detach()`\n",
    "\n",
    "Some tensors are computed from others, but you may want to consider them constants without computation history (called leaf variables). For that, you can use the `.detach()` method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "A = torch.rand(1,2, requires_grad=True)\n",
    "B = A.mean()\n",
    "\n",
    "print(\"B : \", B)\n",
    "print(\"B.requires_grad :\", B.requires_grad)\n",
    "print(\"B.grad_fn :\", B.grad_fn)\n",
    "\n",
    "C = B.detach()\n",
    "print(\"\\n-- C = B.detach() -- \\n\")\n",
    "\n",
    "print(\"C : \", C)\n",
    "print(\"C.requires_grad :\", C.requires_grad)\n",
    "print(\"C.grad_fn :\", C.grad_fn)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "heading_collapsed": "true"
   },
   "source": [
    "# Advanced topics"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Leaves vs Nodes\n",
    "\n",
    "*Advanced*"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "heading_collapsed": "true"
   },
   "source": [
    "PyTorch's autograd mechanism differentiates between two types of tensors:\n",
    "- __node variables__ are the result of a pytorch operation\n",
    "- __leaf variables__ are directly created by a user\n",
    "\n",
    "We can use the `.is_leaf` property to differentiate between the two types."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "A = torch.tensor([[1., 2.], [3., 4.]], requires_grad=True)\n",
    "B = torch.tensor([[1., 2.], [3., 4.]], requires_grad=True) + 2  # B is the result of an operation (+)\n",
    "C = 5 * A  # C is the result of an operation (*)\n",
    "print(\"A.is_leaf :\", A.is_leaf)\n",
    "print(\"B.is_leaf :\", B.is_leaf)\n",
    "print(\"C.is_leaf :\", C.is_leaf)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When `.backward()` is called, only the **leaf variables** have their gradients stored in their `.grad` attribute."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "heading_collapsed": "true"
   },
   "source": [
    "### Differentiating w.r.t. intermediate values: `.retain_grad()`\n",
    "\n",
    "*Advanced*"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When doing the backward pass, Autograd computes the gradient of the output with respect to every intermediate variables in the computation graph. However, by default, only gradients of variables that were **created by the user** (leaf) and **have the `requires_grad` property to True** are saved.\n",
    "\n",
    "Indeed, most of the time when training a model you only need the gradient of a loss w.r.t. to your model parameters (which are leaf variables). "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "A = torch.Tensor([[1, 2], [3, 4]])\n",
    "A.requires_grad_()\n",
    "\n",
    "B = 5 * (A + 3)\n",
    "C = B.mean()\n",
    "\n",
    "print(\"A.grad :\", A.grad)\n",
    "print(\"B.grad :\", B.grad)\n",
    "C.backward()\n",
    "print(\"\\n-- Backward --\\n\")\n",
    "print(\"A.grad :\", A.grad)\n",
    "print(\"B.grad :\", B.grad)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "A = torch.Tensor([[1, 2], [3, 4]])\n",
    "A.requires_grad_()\n",
    "\n",
    "B = 5 * (A + 3)\n",
    "B.retain_grad()  # <----- This line let us have access to gradient wrt. B after the backward pass\n",
    "C = B.mean()\n",
    "\n",
    "\n",
    "print(\"A.grad :\", A.grad)\n",
    "print(\"B.grad :\", B.grad)\n",
    "C.backward()\n",
    "print(\"\\n-- Backward --\\n\")\n",
    "print(\"A.grad :\", A.grad)\n",
    "print(\"B.grad :\", B.grad)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "heading_collapsed": "true"
   },
   "source": [
    "### Inspecting PyTorch's computation graph\n",
    "\n",
    "*Advanced*"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can explore how PyTorch keeps track of history by inspecting the `tensor.grad_fn` argument:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(y.grad_fn)\n",
    "print(y.grad_fn.next_functions[0][0])\n",
    "print(y.grad_fn.next_functions[0][0].next_functions[0][0])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each value has a `grad_fn` corresponding to the operation that produced the value. \n",
    "Each operation's `grad_fn` points to its inputs through `next_functions`.\n",
    "For each input, `next_functions` contains a tuple of the input's `grad_fn` and, if the operation had multiple outputs, an index of the relevant output."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# In our example, the final `add` operation has two inputs:\n",
    "# - The first is the output of `multiplication`.\n",
    "# - The second is a constant `4` for which we don't require a gradient.\n",
    "y.grad_fn.next_functions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "___"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<!--NAVIGATION-->\n",
    "# < [Data](2-Data.ipynb) | Autograd | [Optimization](4-Optimization.ipynb) >"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}