{ "cells": [ { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "AyYaroWfELFo" }, "source": [ "# Building your Recurrent Neural Network - Step by Step\n", "\n", "Welcome to Course 5's first assignment, where you'll be implementing key components of a Recurrent Neural Network, or RNN, in NumPy! \n", "\n", "By the end of this assignment, you'll be able to:\n", "\n", "* Define notation for building sequence models\n", "* Describe the architecture of a basic RNN\n", "* Identify the main components of an LSTM\n", "* Implement backpropagation through time for a basic RNN and an LSTM\n", "* Give examples of several types of RNN \n", "\n", "Recurrent Neural Networks (RNN) are very effective for Natural Language Processing and other sequence tasks because they have \"memory.\" They can read inputs $x^{\\langle t \\rangle}$ (such as words) one at a time, and remember some contextual information through the hidden layer activations that get passed from one time step to the next. This allows a unidirectional (one-way) RNN to take information from the past to process later inputs. A bidirectional (two-way) RNN can take context from both the past and the future, much like Marty McFly. \n", "\n", "**Notation**:\n", "- Superscript $[l]$ denotes an object associated with the $l^{th}$ layer. \n", "\n", "- Superscript $(i)$ denotes an object associated with the $i^{th}$ example. \n", "\n", "- Superscript $\\langle t \\rangle$ denotes an object at the $t^{th}$ time \n", "step. \n", " \n", "- Subscript $i$ denotes the $i^{th}$ entry of a vector.\n", "\n", "**Example**: \n", "- $a^{(2)[3]<4>}_5$ denotes the activation of the 2nd training example (2), 3rd layer [3], 4th time step <4>, and 5th entry in the vector.\n", "\n", "#### Pre-requisites\n", "* You should already be familiar with `numpy`\n", "* To refresh your knowledge of numpy, you can review course 1 of the specialization \"Neural Networks and Deep Learning\":\n", " * Specifically, review the week 2's practice assignment [\"Python Basics with Numpy (optional assignment)\"](https://www.coursera.org/learn/neural-networks-deep-learning/programming/isoAV/python-basics-with-numpy)\n", " \n", " \n", "#### Be careful when modifying the starter code!\n", "* When working on graded functions, please remember to only modify the code that is between:\n", "```Python\n", "#### START CODE HERE\n", "```\n", "and:\n", "```Python\n", "#### END CODE HERE\n", "```\n", "* In particular, avoid modifying the first line of graded routines. These start with:\n", "```Python\n", "# GRADED FUNCTION: routine_name\n", "```\n", "The automatic grader (autograder) needs these to locate the function - so even a change in spacing will cause issues with the autograder, returning 'failed' if any of these are modified or missing. Now, let's get started! 
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Table of Content\n", "\n", "- [Packages](#0)\n", "- [1 - Forward Propagation for the Basic Recurrent Neural Network](#1)\n", " - [1.1 - RNN Cell](#1-1)\n", " - [Exercise 1 - rnn_cell_forward](#ex-1)\n", " - [1.2 - RNN Forward Pass](#1-2)\n", " - [Exercise 2 - rnn_forward](#ex-2)\n", "- [2 - Long Short-Term Memory (LSTM) Network](#2)\n", " - [2.1 - LSTM Cell](#2-1)\n", " - [Exercise 3 - lstm_cell_forward](#ex-3)\n", " - [2.2 - Forward Pass for LSTM](#2-2)\n", " - [Exercise 4 - lstm_forward](#ex-4)\n", "- [3 - Backpropagation in Recurrent Neural Networks (OPTIONAL / UNGRADED)](#3)\n", " - [3.1 - Basic RNN Backward Pass](#3-1)\n", " - [Exercise 5 - rnn_cell_backward](#ex-5)\n", " - [Exercise 6 - rnn_backward](#ex-6)\n", " - [3.2 - LSTM Backward Pass](#3-2)\n", " - [Exercise 7 - lstm_cell_backward](#ex-7)\n", " - [3.3 Backward Pass through the LSTM RNN](#3-3)\n", " - [Exercise 8 - lstm_backward](#ex-8)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Lyejzf2TELFr" }, "source": [ "\n", "## Packages" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "fXWmy3GdELFr" }, "outputs": [], "source": [ "import numpy as np\n", "from rnn_utils import *\n", "from public_tests import *" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "03kGDM_sELFv" }, "source": [ "\n", "## 1 - Forward Propagation for the Basic Recurrent Neural Network\n", "\n", "Later this week, you'll get a chance to generate music using an RNN! The basic RNN that you'll implement has the following structure: \n", "\n", "In this example, $T_x = T_y$. " ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "ZBxzkRzsELFv" }, "source": [ "\n", "
Figure 1: Basic RNN model
" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "jFNRtAP_ELFw" }, "source": [ "### Dimensions of input $x$\n", "\n", "#### Input with $n_x$ number of units\n", "* For a single time step of a single input example, $x^{(i) \\langle t \\rangle }$ is a one-dimensional input vector\n", "* Using language as an example, a language with a 5000-word vocabulary could be one-hot encoded into a vector that has 5000 units. So $x^{(i)\\langle t \\rangle}$ would have the shape (5000,) \n", "* The notation $n_x$ is used here to denote the number of units in a single time step of a single training example" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "EnYGy4L-ELFx" }, "source": [ "#### Time steps of size $T_{x}$\n", "* A recurrent neural network has multiple time steps, which you'll index with $t$.\n", "* In the lessons, you saw a single training example $x^{(i)}$ consisting of multiple time steps $T_x$. In this notebook, $T_{x}$ will denote the number of timesteps in the longest sequence." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Azzhk7jCELFx" }, "source": [ "#### Batches of size $m$\n", "* Let's say we have mini-batches, each with 20 training examples \n", "* To benefit from vectorization, you'll stack 20 columns of $x^{(i)}$ examples\n", "* For example, this tensor has the shape (5000,20,10) \n", "* You'll use $m$ to denote the number of training examples \n", "* So, the shape of a mini-batch is $(n_x,m,T_x)$" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "qNR7VozOELFy" }, "source": [ "#### 3D Tensor of shape $(n_{x},m,T_{x})$\n", "* The 3-dimensional tensor $x$ of shape $(n_x,m,T_x)$ represents the input $x$ that is fed into the RNN\n", "\n", "#### Taking a 2D slice for each time step: $x^{\\langle t \\rangle}$\n", "* At each time step, you'll use a mini-batch of training examples (not just a single example)\n", "* So, for each time step $t$, you'll use a 2D slice of shape $(n_x,m)$\n", "* This 2D slice is referred to as $x^{\\langle t \\rangle}$. The variable name in the code is `xt`." 
] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "VOzhhTj4ELFy" }, "source": [ "### Definition of hidden state $a$\n", "\n", "* The activation $a^{\\langle t \\rangle}$ that is passed to the RNN from one time step to another is called a \"hidden state.\"\n", "\n", "### Dimensions of hidden state $a$\n", "\n", "* Similar to the input tensor $x$, the hidden state for a single training example is a vector of length $n_{a}$\n", "* If you include a mini-batch of $m$ training examples, the shape of a mini-batch is $(n_{a},m)$\n", "* When you include the time step dimension, the shape of the hidden state is $(n_{a}, m, T_x)$\n", "* You'll loop through the time steps with index $t$, and work with a 2D slice of the 3D tensor \n", "* This 2D slice is referred to as $a^{\\langle t \\rangle}$\n", "* In the code, the variable names used are either `a_prev` or `a_next`, depending on the function being implemented\n", "* The shape of this 2D slice is $(n_{a}, m)$" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "67vYjIRTELFz" }, "source": [ "### Dimensions of prediction $\\hat{y}$\n", "* Similar to the inputs and hidden states, $\\hat{y}$ is a 3D tensor of shape $(n_{y}, m, T_{y})$\n", " * $n_{y}$: number of units in the vector representing the prediction\n", " * $m$: number of examples in a mini-batch\n", " * $T_{y}$: number of time steps in the prediction\n", "* For a single time step $t$, a 2D slice $\\hat{y}^{\\langle t \\rangle}$ has shape $(n_{y}, m)$\n", "* In the code, the variable names are:\n", " - `y_pred`: $\\hat{y}$ \n", " - `yt_pred`: $\\hat{y}^{\\langle t \\rangle}$" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "9ZrlQ4X8ELFz" }, "source": [ "Here's how you can implement an RNN: \n", "\n", "### Steps:\n", "1. Implement the calculations needed for one time step of the RNN.\n", "2. Implement a loop over $T_x$ time steps in order to process all the inputs, one at a time. " ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "6oXWAKeTELF0" }, "source": [ "\n", "### 1.1 - RNN Cell\n", "\n", "You can think of the recurrent neural network as the repeated use of a single cell. First, you'll implement the computations for a single time step. The following figure describes the operations for a single time step of an RNN cell: \n", "\n", "\n", "
Figure 2: Basic RNN cell. Takes as input $x^{\\langle t \\rangle}$ (current input) and $a^{\\langle t - 1\\rangle}$ (previous hidden state containing information from the past), and outputs $a^{\\langle t \\rangle}$ which is given to the next RNN cell and also used to predict $\\hat{y}^{\\langle t \\rangle}$ \n", "
\n", "\n", "**`RNN cell` versus `RNN_cell_forward`**:\n", "* Note that an RNN cell outputs the hidden state $a^{\\langle t \\rangle}$. \n", " * `RNN cell` is shown in the figure as the inner box with solid lines \n", "* The function that you'll implement, `rnn_cell_forward`, also calculates the prediction $\\hat{y}^{\\langle t \\rangle}$\n", " * `RNN_cell_forward` is shown in the figure as the outer box with dashed lines" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "HhFLBKwbELF0" }, "source": [ "\n", "### Exercise 1 - rnn_cell_forward\n", "\n", "Implement the RNN cell described in Figure 2.\n", "\n", "**Instructions**:\n", "1. Compute the hidden state with tanh activation: $a^{\\langle t \\rangle} = \\tanh(W_{aa} a^{\\langle t-1 \\rangle} + W_{ax} x^{\\langle t \\rangle} + b_a)$\n", "2. Using your new hidden state $a^{\\langle t \\rangle}$, compute the prediction $\\hat{y}^{\\langle t \\rangle} = softmax(W_{ya} a^{\\langle t \\rangle} + b_y)$. (The function `softmax` is provided)\n", "3. Store $(a^{\\langle t \\rangle}, a^{\\langle t-1 \\rangle}, x^{\\langle t \\rangle}, parameters)$ in a `cache`\n", "4. Return $a^{\\langle t \\rangle}$ , $\\hat{y}^{\\langle t \\rangle}$ and `cache`\n", "\n", "#### Additional Hints\n", "* A little more information on [numpy.tanh](https://numpy.org/devdocs/reference/generated/numpy.tanh.html)\n", "* In this assignment, there's an existing `softmax` function for you to use. It's located in the file 'rnn_utils.py' and has already been imported.\n", "* For matrix multiplication, use [numpy.dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "fxI-F0HWELF1" }, "outputs": [], "source": [ "# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)\n", "# GRADED FUNCTION: rnn_cell_forward\n", "\n", "def rnn_cell_forward(xt, a_prev, parameters):\n", " \"\"\"\n", " Implements a single forward step of the RNN-cell as described in Figure (2)\n", "\n", " Arguments:\n", " xt -- your input data at timestep \"t\", numpy array of shape (n_x, m).\n", " a_prev -- Hidden state at timestep \"t-1\", numpy array of shape (n_a, m)\n", " parameters -- python dictionary containing:\n", " Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)\n", " Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)\n", " Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)\n", " ba -- Bias, numpy array of shape (n_a, 1)\n", " by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)\n", " Returns:\n", " a_next -- next hidden state, of shape (n_a, m)\n", " yt_pred -- prediction at timestep \"t\", numpy array of shape (n_y, m)\n", " cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)\n", " \"\"\"\n", " \n", " # Retrieve parameters from \"parameters\"\n", " Wax = parameters[\"Wax\"]\n", " Waa = parameters[\"Waa\"]\n", " Wya = parameters[\"Wya\"]\n", " ba = parameters[\"ba\"]\n", " by = parameters[\"by\"]\n", " \n", " ### START CODE HERE ### (≈2 lines)\n", " # compute next activation state using the formula given above\n", " a_next = np.tanh(np.dot(Waa,a_prev) + np.dot(Wax,xt) + ba)\n", " # compute output of the current cell using the formula given above\n", " yt_pred = softmax( np.dot(Wya,a_next) + by)\n", " ### END CODE HERE ###\n", " \n", " # store values you need for backward propagation in 
cache\n", " cache = (a_next, a_prev, xt, parameters)\n", " \n", " return a_next, yt_pred, cache" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "V03ZGazVELF4" }, "outputs": [], "source": [ "np.random.seed(1)\n", "xt_tmp = np.random.randn(3, 10)\n", "a_prev_tmp = np.random.randn(5, 10)\n", "parameters_tmp = {}\n", "parameters_tmp['Waa'] = np.random.randn(5, 5)\n", "parameters_tmp['Wax'] = np.random.randn(5, 3)\n", "parameters_tmp['Wya'] = np.random.randn(2, 5)\n", "parameters_tmp['ba'] = np.random.randn(5, 1)\n", "parameters_tmp['by'] = np.random.randn(2, 1)\n", "\n", "a_next_tmp, yt_pred_tmp, cache_tmp = rnn_cell_forward(xt_tmp, a_prev_tmp, parameters_tmp)\n", "print(\"a_next[4] = \\n\", a_next_tmp[4])\n", "print(\"a_next.shape = \\n\", a_next_tmp.shape)\n", "print(\"yt_pred[1] =\\n\", yt_pred_tmp[1])\n", "print(\"yt_pred.shape = \\n\", yt_pred_tmp.shape)\n", "\n", "# UNIT TESTS\n", "rnn_cell_forward_tests(rnn_cell_forward)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "c_6KJp0lELF7" }, "source": [ "**Expected Output**: \n", "```Python\n", "a_next[4] = \n", " [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978\n", " -0.18887155 0.99815551 0.6531151 0.82872037]\n", "a_next.shape = \n", " (5, 10)\n", "yt_pred[1] =\n", " [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212\n", " 0.36920224 0.9966312 0.9982559 0.17746526]\n", "yt_pred.shape = \n", " (2, 10)\n", "\n", "```" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "rjz378-tELF7" }, "source": [ "\n", "### 1.2 - RNN Forward Pass \n", "\n", "- A recurrent neural network (RNN) is a repetition of the RNN cell that you've just built. \n", " - If your input sequence of data is 10 time steps long, then you will re-use the RNN cell 10 times \n", "- Each cell takes two inputs at each time step:\n", " - $a^{\\langle t-1 \\rangle}$: The hidden state from the previous cell\n", " - $x^{\\langle t \\rangle}$: The current time step's input data\n", "- It has two outputs at each time step:\n", " - A hidden state ($a^{\\langle t \\rangle}$)\n", " - A prediction ($y^{\\langle t \\rangle}$)\n", "- The weights and biases $(W_{aa}, b_{a}, W_{ax}, b_{x})$ are re-used each time step \n", " - They are maintained between calls to `rnn_cell_forward` in the 'parameters' dictionary\n", "\n", "\n", "\n", "
Figure 3: Basic RNN. The input sequence $x = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is processed over $T_x$ time steps. The network outputs $y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$.
\n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "r104TTL-ELF8" }, "source": [ "\n", "### Exercise 2 - rnn_forward\n", "\n", "Implement the forward propagation of the RNN described in Figure 3.\n", "\n", "**Instructions**:\n", "* Create a 3D array of zeros, $a$ of shape $(n_{a}, m, T_{x})$ that will store all the hidden states computed by the RNN\n", "* Create a 3D array of zeros, $\\hat{y}$, of shape $(n_{y}, m, T_{x})$ that will store the predictions \n", " - Note that in this case, $T_{y} = T_{x}$ (the prediction and input have the same number of time steps)\n", "* Initialize the 2D hidden state `a_next` by setting it equal to the initial hidden state, $a_{0}$\n", "* At each time step $t$:\n", " - Get $x^{\\langle t \\rangle}$, which is a 2D slice of $x$ for a single time step $t$\n", " - $x^{\\langle t \\rangle}$ has shape $(n_{x}, m)$\n", " - $x$ has shape $(n_{x}, m, T_{x})$\n", " - Update the 2D hidden state $a^{\\langle t \\rangle}$ (variable name `a_next`), the prediction $\\hat{y}^{\\langle t \\rangle}$ and the cache by running `rnn_cell_forward`\n", " - $a^{\\langle t \\rangle}$ has shape $(n_{a}, m)$\n", " - Store the 2D hidden state in the 3D tensor $a$, at the $t^{th}$ position\n", " - $a$ has shape $(n_{a}, m, T_{x})$\n", " - Store the 2D $\\hat{y}^{\\langle t \\rangle}$ prediction (variable name `yt_pred`) in the 3D tensor $\\hat{y}_{pred}$ at the $t^{th}$ position\n", " - $\\hat{y}^{\\langle t \\rangle}$ has shape $(n_{y}, m)$\n", " - $\\hat{y}$ has shape $(n_{y}, m, T_x)$\n", " - Append the cache to the list of caches\n", "* Return the 3D tensor $a$ and $\\hat{y}$, as well as the list of caches\n", "\n", "#### Additional Hints\n", "- Some helpful documentation on [np.zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html)\n", "- If you have a 3 dimensional numpy array and are indexing by its third dimension, you can use array slicing like this: `var_name[:,:,i]`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "VmeprGJpELF9" }, "outputs": [], "source": [ "# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)\n", "# GRADED FUNCTION: rnn_forward\n", "\n", "def rnn_forward(x, a0, parameters):\n", " \"\"\"\n", " Implement the forward propagation of the recurrent neural network described in Figure (3).\n", "\n", " Arguments:\n", " x -- Input data for every time-step, of shape (n_x, m, T_x).\n", " a0 -- Initial hidden state, of shape (n_a, m)\n", " parameters -- python dictionary containing:\n", " Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)\n", " Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)\n", " Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)\n", " ba -- Bias numpy array of shape (n_a, 1)\n", " by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)\n", "\n", " Returns:\n", " a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)\n", " y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)\n", " caches -- tuple of values needed for the backward pass, contains (list of caches, x)\n", " \"\"\"\n", " \n", " # Initialize \"caches\" which will contain the list of all caches\n", " caches = []\n", " \n", " # Retrieve dimensions from shapes of x and parameters[\"Wya\"]\n", " n_x, m, T_x = x.shape\n", " n_y, n_a = parameters[\"Wya\"].shape\n", " \n", " ### START CODE HERE ###\n", " \n", " # initialize 
\"a\" and \"y_pred\" with zeros (≈2 lines)\n", " a = np.zeros((n_a, m, T_x))\n", " y_pred = np.zeros((n_y, m, T_x))\n", " \n", " # Initialize a_next (≈1 line)\n", " a_next = a0\n", " \n", " # loop over all time-steps\n", " for t in range(T_x):\n", " # Update next hidden state, compute the prediction, get the cache (≈1 line)\n", " a_next, yt_pred, cache = rnn_cell_forward(x[:,:,t], a_next, parameters)\n", " # Save the value of the new \"next\" hidden state in a (≈1 line)\n", " a[:,:,t] = a_next\n", " # Save the value of the prediction in y (≈1 line)\n", " y_pred[:,:,t] = yt_pred\n", " # Append \"cache\" to \"caches\" (≈1 line)\n", " caches.append(cache)\n", " ### END CODE HERE ###\n", " \n", " # store values needed for backward propagation in cache\n", " caches = (caches, x)\n", " \n", " return a, y_pred, caches" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "jEPrd77rELF_" }, "outputs": [], "source": [ "np.random.seed(1)\n", "x_tmp = np.random.randn(3, 10, 4)\n", "a0_tmp = np.random.randn(5, 10)\n", "parameters_tmp = {}\n", "parameters_tmp['Waa'] = np.random.randn(5, 5)\n", "parameters_tmp['Wax'] = np.random.randn(5, 3)\n", "parameters_tmp['Wya'] = np.random.randn(2, 5)\n", "parameters_tmp['ba'] = np.random.randn(5, 1)\n", "parameters_tmp['by'] = np.random.randn(2, 1)\n", "\n", "a_tmp, y_pred_tmp, caches_tmp = rnn_forward(x_tmp, a0_tmp, parameters_tmp)\n", "print(\"a[4][1] = \\n\", a_tmp[4][1])\n", "print(\"a.shape = \\n\", a_tmp.shape)\n", "print(\"y_pred[1][3] =\\n\", y_pred_tmp[1][3])\n", "print(\"y_pred.shape = \\n\", y_pred_tmp.shape)\n", "print(\"caches[1][1][3] =\\n\", caches_tmp[1][1][3])\n", "print(\"len(caches) = \\n\", len(caches_tmp))\n", "\n", "#UNIT TEST \n", "rnn_forward_test(rnn_forward)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "R135qjynELGC" }, "source": [ "**Expected Output**:\n", "\n", "```Python\n", "a[4][1] = \n", " [-0.99999375 0.77911235 -0.99861469 -0.99833267]\n", "a.shape = \n", " (5, 10, 4)\n", "y_pred[1][3] =\n", " [ 0.79560373 0.86224861 0.11118257 0.81515947]\n", "y_pred.shape = \n", " (2, 10, 4)\n", "caches[1][1][3] =\n", " [-1.1425182 -0.34934272 -0.20889423 0.58662319]\n", "len(caches) = \n", " 2\n", "```" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "GANptfNiELGC" }, "source": [ "### Congratulations! \n", "\n", "You've successfully built the forward propagation of a recurrent neural network from scratch. Nice work! \n", "\n", "#### Situations when this RNN will perform better:\n", "- This will work well enough for some applications, but it suffers from vanishing gradients. \n", "- The RNN works best when each output $\\hat{y}^{\\langle t \\rangle}$ can be estimated using \"local\" context. 
\n", "- \"Local\" context refers to information that is close to the prediction's time step $t$.\n", "- More formally, local context refers to inputs $x^{\\langle t' \\rangle}$ and predictions $\\hat{y}^{\\langle t \\rangle}$ where $t'$ is close to $t$.\n", "\n", "What you should remember:\n", "* The recurrent neural network, or RNN, is essentially the repeated use of a single cell.\n", "* A basic RNN reads inputs one at a time, and remembers information through the hidden layer activations (hidden states) that are passed from one time step to the next.\n", " * The time step dimension determines how many times to re-use the RNN cell\n", "* Each cell takes two inputs at each time step:\n", " * The hidden state from the previous cell\n", " * The current time step's input data\n", "* Each cell has two outputs at each time step:\n", " * A hidden state \n", " * A prediction\n", "\n", "\n", "In the next section, you'll build a more complex model, the LSTM, which is better at addressing vanishing gradients. The LSTM is better able to remember a piece of information and save it for many time steps. " ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "x2QbsWFzELGD" }, "source": [ "\n", "## 2 - Long Short-Term Memory (LSTM) Network\n", "\n", "The following figure shows the operations of an LSTM cell:\n", "\n", "\n", "
Figure 4: LSTM cell. This tracks and updates a \"cell state,\" or memory variable $c^{\langle t \rangle}$, at every time step, which can be different from $a^{\langle t \rangle}$. \n", "Note: the $softmax$ block in the figure includes a dense layer followed by a softmax.
\n", "\n", "Similar to the RNN example above, you'll begin by implementing the LSTM cell for a single time step. Then, you'll iteratively call it from inside a \"for loop\" to have it process an input with $T_x$ time steps. " ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "fyUcxGCJELGD" }, "source": [ "### Overview of gates and states\n", "\n", "#### Forget gate $\\mathbf{\\Gamma}_{f}$\n", "\n", "* Let's assume you are reading words in a piece of text, and plan to use an LSTM to keep track of grammatical structures, such as whether the subject is singular (\"puppy\") or plural (\"puppies\"). \n", "* If the subject changes its state (from a singular word to a plural word), the memory of the previous state becomes outdated, so you'll \"forget\" that outdated state.\n", "* The \"forget gate\" is a tensor containing values between 0 and 1.\n", " * If a unit in the forget gate has a value close to 0, the LSTM will \"forget\" the stored state in the corresponding unit of the previous cell state.\n", " * If a unit in the forget gate has a value close to 1, the LSTM will mostly remember the corresponding value in the stored state.\n", "\n", "##### Equation\n", "\n", "$$\\mathbf{\\Gamma}_f^{\\langle t \\rangle} = \\sigma(\\mathbf{W}_f[\\mathbf{a}^{\\langle t-1 \\rangle}, \\mathbf{x}^{\\langle t \\rangle}] + \\mathbf{b}_f)\\tag{1} $$\n", "\n", "##### Explanation of the equation:\n", "\n", "* $\\mathbf{W_{f}}$ contains weights that govern the forget gate's behavior. \n", "* The previous time step's hidden state $[a^{\\langle t-1 \\rangle}$ and current time step's input $x^{\\langle t \\rangle}]$ are concatenated together and multiplied by $\\mathbf{W_{f}}$. \n", "* A sigmoid function is used to make each of the gate tensor's values $\\mathbf{\\Gamma}_f^{\\langle t \\rangle}$ range from 0 to 1.\n", "* The forget gate $\\mathbf{\\Gamma}_f^{\\langle t \\rangle}$ has the same dimensions as the previous cell state $c^{\\langle t-1 \\rangle}$. \n", "* This means that the two can be multiplied together, element-wise.\n", "* Multiplying the tensors $\\mathbf{\\Gamma}_f^{\\langle t \\rangle} * \\mathbf{c}^{\\langle t-1 \\rangle}$ is like applying a mask over the previous cell state.\n", "* If a single value in $\\mathbf{\\Gamma}_f^{\\langle t \\rangle}$ is 0 or close to 0, then the product is close to 0.\n", " * This keeps the information stored in the corresponding unit in $\\mathbf{c}^{\\langle t-1 \\rangle}$ from being remembered for the next time step.\n", "* Similarly, if one value is close to 1, the product is close to the original value in the previous cell state.\n", " * The LSTM will keep the information from the corresponding unit of $\\mathbf{c}^{\\langle t-1 \\rangle}$, to be used in the next time step.\n", " \n", "##### Variable names in the code\n", "The variable names in the code are similar to the equations, with slight differences. 
\n", "* `Wf`: forget gate weight $\\mathbf{W}_{f}$\n", "* `bf`: forget gate bias $\\mathbf{b}_{f}$\n", "* `ft`: forget gate $\\Gamma_f^{\\langle t \\rangle}$" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "HHeiRiqKELGE" }, "source": [ "#### Candidate value $\\tilde{\\mathbf{c}}^{\\langle t \\rangle}$\n", "* The candidate value is a tensor containing information from the current time step that **may** be stored in the current cell state $\\mathbf{c}^{\\langle t \\rangle}$.\n", "* The parts of the candidate value that get passed on depend on the update gate.\n", "* The candidate value is a tensor containing values that range from -1 to 1.\n", "* The tilde \"~\" is used to differentiate the candidate $\\tilde{\\mathbf{c}}^{\\langle t \\rangle}$ from the cell state $\\mathbf{c}^{\\langle t \\rangle}$.\n", "\n", "##### Equation\n", "$$\\mathbf{\\tilde{c}}^{\\langle t \\rangle} = \\tanh\\left( \\mathbf{W}_{c} [\\mathbf{a}^{\\langle t - 1 \\rangle}, \\mathbf{x}^{\\langle t \\rangle}] + \\mathbf{b}_{c} \\right) \\tag{3}$$\n", "\n", "##### Explanation of the equation\n", "* The *tanh* function produces values between -1 and 1.\n", "\n", "\n", "##### Variable names in the code\n", "* `cct`: candidate value $\\mathbf{\\tilde{c}}^{\\langle t \\rangle}$" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "xewEj_FuELGF" }, "source": [ "#### Update gate $\\mathbf{\\Gamma}_{i}$\n", "\n", "* You use the update gate to decide what aspects of the candidate $\\tilde{\\mathbf{c}}^{\\langle t \\rangle}$ to add to the cell state $c^{\\langle t \\rangle}$.\n", "* The update gate decides what parts of a \"candidate\" tensor $\\tilde{\\mathbf{c}}^{\\langle t \\rangle}$ are passed onto the cell state $\\mathbf{c}^{\\langle t \\rangle}$.\n", "* The update gate is a tensor containing values between 0 and 1.\n", " * When a unit in the update gate is close to 1, it allows the value of the candidate $\\tilde{\\mathbf{c}}^{\\langle t \\rangle}$ to be passed onto the hidden state $\\mathbf{c}^{\\langle t \\rangle}$\n", " * When a unit in the update gate is close to 0, it prevents the corresponding value in the candidate from being passed onto the hidden state.\n", "* Notice that the subscript \"i\" is used and not \"u\", to follow the convention used in the literature.\n", "\n", "##### Equation\n", "\n", "$$\\mathbf{\\Gamma}_i^{\\langle t \\rangle} = \\sigma(\\mathbf{W}_i[a^{\\langle t-1 \\rangle}, \\mathbf{x}^{\\langle t \\rangle}] + \\mathbf{b}_i)\\tag{2} $$ \n", "\n", "##### Explanation of the equation\n", "\n", "* Similar to the forget gate, here $\\mathbf{\\Gamma}_i^{\\langle t \\rangle}$, the sigmoid produces values between 0 and 1.\n", "* The update gate is multiplied element-wise with the candidate, and this product ($\\mathbf{\\Gamma}_{i}^{\\langle t \\rangle} * \\tilde{c}^{\\langle t \\rangle}$) is used in determining the cell state $\\mathbf{c}^{\\langle t \\rangle}$.\n", "\n", "##### Variable names in code (Please note that they're different than the equations)\n", "In the code, you'll use the variable names found in the academic literature. 
These variables don't use \"u\" to denote \"update\".\n", "* `Wi` is the update gate weight $\\mathbf{W}_i$ (not \"Wu\") \n", "* `bi` is the update gate bias $\\mathbf{b}_i$ (not \"bu\")\n", "* `it` is the update gate $\\mathbf{\\Gamma}_i^{\\langle t \\rangle}$ (not \"ut\")" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "yvxVv83-ELGF" }, "source": [ "#### Cell state $\\mathbf{c}^{\\langle t \\rangle}$\n", "\n", "* The cell state is the \"memory\" that gets passed onto future time steps.\n", "* The new cell state $\\mathbf{c}^{\\langle t \\rangle}$ is a combination of the previous cell state and the candidate value.\n", "\n", "##### Equation\n", "\n", "$$ \\mathbf{c}^{\\langle t \\rangle} = \\mathbf{\\Gamma}_f^{\\langle t \\rangle}* \\mathbf{c}^{\\langle t-1 \\rangle} + \\mathbf{\\Gamma}_{i}^{\\langle t \\rangle} *\\mathbf{\\tilde{c}}^{\\langle t \\rangle} \\tag{4} $$\n", "\n", "##### Explanation of equation\n", "* The previous cell state $\\mathbf{c}^{\\langle t-1 \\rangle}$ is adjusted (weighted) by the forget gate $\\mathbf{\\Gamma}_{f}^{\\langle t \\rangle}$\n", "* and the candidate value $\\tilde{\\mathbf{c}}^{\\langle t \\rangle}$, adjusted (weighted) by the update gate $\\mathbf{\\Gamma}_{i}^{\\langle t \\rangle}$\n", "\n", "##### Variable names and shapes in the code\n", "* `c`: cell state, including all time steps, $\\mathbf{c}$ shape $(n_{a}, m, T_x)$\n", "* `c_next`: new (next) cell state, $\\mathbf{c}^{\\langle t \\rangle}$ shape $(n_{a}, m)$\n", "* `c_prev`: previous cell state, $\\mathbf{c}^{\\langle t-1 \\rangle}$, shape $(n_{a}, m)$" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "XHVgaJxiELGG" }, "source": [ "#### Output gate $\\mathbf{\\Gamma}_{o}$\n", "\n", "* The output gate decides what gets sent as the prediction (output) of the time step.\n", "* The output gate is like the other gates, in that it contains values that range from 0 to 1.\n", "\n", "##### Equation\n", "\n", "$$ \\mathbf{\\Gamma}_o^{\\langle t \\rangle}= \\sigma(\\mathbf{W}_o[\\mathbf{a}^{\\langle t-1 \\rangle}, \\mathbf{x}^{\\langle t \\rangle}] + \\mathbf{b}_{o})\\tag{5}$$ \n", "\n", "##### Explanation of the equation\n", "* The output gate is determined by the previous hidden state $\\mathbf{a}^{\\langle t-1 \\rangle}$ and the current input $\\mathbf{x}^{\\langle t \\rangle}$\n", "* The sigmoid makes the gate range from 0 to 1.\n", "\n", "\n", "##### Variable names in the code\n", "* `Wo`: output gate weight, $\\mathbf{W_o}$\n", "* `bo`: output gate bias, $\\mathbf{b_o}$\n", "* `ot`: output gate, $\\mathbf{\\Gamma}_{o}^{\\langle t \\rangle}$" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "fWkiZ0M-ELGG" }, "source": [ "#### Hidden state $\\mathbf{a}^{\\langle t \\rangle}$\n", "\n", "* The hidden state gets passed to the LSTM cell's next time step.\n", "* It is used to determine the three gates ($\\mathbf{\\Gamma}_{f}, \\mathbf{\\Gamma}_{u}, \\mathbf{\\Gamma}_{o}$) of the next time step.\n", "* The hidden state is also used for the prediction $y^{\\langle t \\rangle}$.\n", "\n", "##### Equation\n", "\n", "$$ \\mathbf{a}^{\\langle t \\rangle} = \\mathbf{\\Gamma}_o^{\\langle t \\rangle} * \\tanh(\\mathbf{c}^{\\langle t \\rangle})\\tag{6} $$\n", "\n", "##### Explanation of equation\n", "* The hidden state $\\mathbf{a}^{\\langle t \\rangle}$ is determined by the cell state $\\mathbf{c}^{\\langle t \\rangle}$ in combination with the output gate $\\mathbf{\\Gamma}_{o}$.\n", "* The cell state state is passed through the `tanh` 
function to rescale values between -1 and 1.\n", "* The output gate acts like a \"mask\" that either preserves the values of $\\tanh(\\mathbf{c}^{\\langle t \\rangle})$ or keeps those values from being included in the hidden state $\\mathbf{a}^{\\langle t \\rangle}$\n", "\n", "##### Variable names and shapes in the code\n", "* `a`: hidden state, including time steps. $\\mathbf{a}$ has shape $(n_{a}, m, T_{x})$\n", "* `a_prev`: hidden state from previous time step. $\\mathbf{a}^{\\langle t-1 \\rangle}$ has shape $(n_{a}, m)$\n", "* `a_next`: hidden state for next time step. $\\mathbf{a}^{\\langle t \\rangle}$ has shape $(n_{a}, m)$ " ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "7OYaNPNPELGH" }, "source": [ "#### Prediction $\\mathbf{y}^{\\langle t \\rangle}_{pred}$\n", "* The prediction in this use case is a classification, so you'll use a softmax.\n", "\n", "The equation is:\n", "$$\\mathbf{y}^{\\langle t \\rangle}_{pred} = \\textrm{softmax}(\\mathbf{W}_{y} \\mathbf{a}^{\\langle t \\rangle} + \\mathbf{b}_{y})$$\n", "\n", "##### Variable names and shapes in the code\n", "* `y_pred`: prediction, including all time steps. $\\mathbf{y}_{pred}$ has shape $(n_{y}, m, T_{x})$. Note that $(T_{y} = T_{x})$ for this example.\n", "* `yt_pred`: prediction for the current time step $t$. $\\mathbf{y}^{\\langle t \\rangle}_{pred}$ has shape $(n_{y}, m)$" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "G49sqmnoELGI" }, "source": [ "\n", "### 2.1 - LSTM Cell\n", "\n", "\n", "### Exercise 3 - lstm_cell_forward\n", "\n", "Implement the LSTM cell described in Figure 4.\n", "\n", "**Instructions**:\n", "1. Concatenate the hidden state $a^{\\langle t-1 \\rangle}$ and input $x^{\\langle t \\rangle}$ into a single matrix: \n", "\n", "$$concat = \\begin{bmatrix} a^{\\langle t-1 \\rangle} \\\\ x^{\\langle t \\rangle} \\end{bmatrix}$$ \n", "\n", "2. Compute all formulas (1 through 6) for the gates, hidden state, and cell state.\n", "3. Compute the prediction $y^{\\langle t \\rangle}$.\n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "RZ1Uq6pmELGI" }, "source": [ "#### Additional Hints\n", "* You can use [numpy.concatenate](https://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html). Check which value to use for the `axis` parameter.\n", "* The functions `sigmoid()` and `softmax` are imported from `rnn_utils.py`.\n", "* Some docs for [numpy.tanh](https://docs.scipy.org/doc/numpy/reference/generated/numpy.tanh.html)\n", "* Use [numpy.dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) for matrix multiplication.\n", "* Notice that the variable names `Wi`, `bi` refer to the weights and biases of the **update** gate. There are no variables named \"Wu\" or \"bu\" in this function." 
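The `lstm_cell_forward` function below puts all of this together, but the gating idea itself can be seen in isolation. Here is a minimal NumPy sketch (toy values, not part of the graded function; the names `ft`, `it`, `cct`, `c_prev` follow the notation above) showing how equation (4) uses the forget and update gates as element-wise masks:

```Python
import numpy as np

# Toy values for n_a = 3 units and m = 1 example
c_prev = np.array([[0.8], [-0.5], [0.3]])   # previous cell state c^<t-1>
cct    = np.array([[0.9], [ 0.2], [-0.7]])  # candidate value c~^<t>
ft     = np.array([[0.0], [ 1.0], [0.5]])   # forget gate Gamma_f, values in [0, 1]
it     = np.array([[1.0], [ 0.0], [0.5]])   # update gate Gamma_i, values in [0, 1]

# Equation (4): c^<t> = Gamma_f * c^<t-1> + Gamma_i * c~^<t>   (element-wise)
c_next = ft * c_prev + it * cct
print(c_next)
# Unit 0: old value fully forgotten, candidate fully written ->  0.9
# Unit 1: old value fully kept, candidate ignored            -> -0.5
# Unit 2: a 50/50 blend of old value and candidate           -> -0.2
```

A gate value near 0 erases the corresponding entry, a value near 1 passes it through, and intermediate values blend the two sources.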
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "JU3tUxvmELGJ" }, "outputs": [], "source": [ "# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)\n", "# GRADED FUNCTION: lstm_cell_forward\n", "\n", "def lstm_cell_forward(xt, a_prev, c_prev, parameters):\n", " \"\"\"\n", " Implement a single forward step of the LSTM-cell as described in Figure (4)\n", "\n", " Arguments:\n", " xt -- your input data at timestep \"t\", numpy array of shape (n_x, m).\n", " a_prev -- Hidden state at timestep \"t-1\", numpy array of shape (n_a, m)\n", " c_prev -- Memory state at timestep \"t-1\", numpy array of shape (n_a, m)\n", " parameters -- python dictionary containing:\n", " Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)\n", " bf -- Bias of the forget gate, numpy array of shape (n_a, 1)\n", " Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)\n", " bi -- Bias of the update gate, numpy array of shape (n_a, 1)\n", " Wc -- Weight matrix of the first \"tanh\", numpy array of shape (n_a, n_a + n_x)\n", " bc -- Bias of the first \"tanh\", numpy array of shape (n_a, 1)\n", " Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)\n", " bo -- Bias of the output gate, numpy array of shape (n_a, 1)\n", " Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)\n", " by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)\n", " \n", " Returns:\n", " a_next -- next hidden state, of shape (n_a, m)\n", " c_next -- next memory state, of shape (n_a, m)\n", " yt_pred -- prediction at timestep \"t\", numpy array of shape (n_y, m)\n", " cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, xt, parameters)\n", " \n", " Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilde),\n", " c stands for the cell state (memory)\n", " \"\"\"\n", "\n", " # Retrieve parameters from \"parameters\"\n", " Wf = parameters[\"Wf\"] # forget gate weight\n", " bf = parameters[\"bf\"]\n", " Wi = parameters[\"Wi\"] # update gate weight (notice the variable name)\n", " bi = parameters[\"bi\"] # (notice the variable name)\n", " Wc = parameters[\"Wc\"] # candidate value weight\n", " bc = parameters[\"bc\"]\n", " Wo = parameters[\"Wo\"] # output gate weight\n", " bo = parameters[\"bo\"]\n", " Wy = parameters[\"Wy\"] # prediction weight\n", " by = parameters[\"by\"]\n", " \n", " # Retrieve dimensions from shapes of xt and Wy\n", " n_x, m = xt.shape\n", " n_y, n_a = Wy.shape\n", "\n", " ### START CODE HERE ###\n", " # Concatenate a_prev and xt (≈1 line)\n", " concat = np.concatenate([a_prev,xt])\n", "\n", " # Compute values for ft, it, cct, c_next, ot, a_next using the formulas given figure (4) (≈6 lines)\n", " ft = sigmoid(np.dot(Wf,concat) + bf) # Forget Gate\n", " it = sigmoid(np.dot(Wi,concat) + bi) # Update Gate\n", " cct = np.tanh(np.dot(Wc,concat) + bc) # Candidate Value\n", " c_next = c_prev*ft + cct*it # C_t\n", " ot = sigmoid(np.dot(Wo,concat) + bo) # output gate\n", " a_next = ot*(np.tanh(c_next)) #a_t\n", " \n", " # Compute prediction of the LSTM cell (≈1 line)\n", " yt_pred = softmax(np.dot(Wy,a_next) + by)\n", " ### END CODE HERE ###\n", "\n", " # store values needed for backward propagation in cache\n", " cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)\n", "\n", " return a_next, c_next, yt_pred, cache" ] }, { "cell_type": "code", 
"execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "h9ssBEoxELGN", "scrolled": true }, "outputs": [], "source": [ "np.random.seed(1)\n", "xt_tmp = np.random.randn(3, 10)\n", "a_prev_tmp = np.random.randn(5, 10)\n", "c_prev_tmp = np.random.randn(5, 10)\n", "parameters_tmp = {}\n", "parameters_tmp['Wf'] = np.random.randn(5, 5 + 3)\n", "parameters_tmp['bf'] = np.random.randn(5, 1)\n", "parameters_tmp['Wi'] = np.random.randn(5, 5 + 3)\n", "parameters_tmp['bi'] = np.random.randn(5, 1)\n", "parameters_tmp['Wo'] = np.random.randn(5, 5 + 3)\n", "parameters_tmp['bo'] = np.random.randn(5, 1)\n", "parameters_tmp['Wc'] = np.random.randn(5, 5 + 3)\n", "parameters_tmp['bc'] = np.random.randn(5, 1)\n", "parameters_tmp['Wy'] = np.random.randn(2, 5)\n", "parameters_tmp['by'] = np.random.randn(2, 1)\n", "\n", "a_next_tmp, c_next_tmp, yt_tmp, cache_tmp = lstm_cell_forward(xt_tmp, a_prev_tmp, c_prev_tmp, parameters_tmp)\n", "\n", "print(\"a_next[4] = \\n\", a_next_tmp[4])\n", "print(\"a_next.shape = \", a_next_tmp.shape)\n", "print(\"c_next[2] = \\n\", c_next_tmp[2])\n", "print(\"c_next.shape = \", c_next_tmp.shape)\n", "print(\"yt[1] =\", yt_tmp[1])\n", "print(\"yt.shape = \", yt_tmp.shape)\n", "print(\"cache[1][3] =\\n\", cache_tmp[1][3])\n", "print(\"len(cache) = \", len(cache_tmp))\n", "\n", "# UNIT TEST\n", "lstm_cell_forward_test(lstm_cell_forward)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "mjSDYoQoELGP" }, "source": [ "**Expected Output**:\n", "\n", "```Python\n", "a_next[4] = \n", " [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482\n", " 0.76566531 0.34631421 -0.00215674 0.43827275]\n", "a_next.shape = (5, 10)\n", "c_next[2] = \n", " [ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942\n", " 0.76449811 -0.0981561 -0.74348425 -0.26810932]\n", "c_next.shape = (5, 10)\n", "yt[1] = [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381\n", " 0.00943007 0.12666353 0.39380172 0.07828381]\n", "yt.shape = (2, 10)\n", "cache[1][3] =\n", " [-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874\n", " 0.07651101 -1.03752894 1.41219977 -0.37647422]\n", "len(cache) = 10\n", "```" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "tb-4WWn4ELGQ" }, "source": [ "\n", "### 2.2 - Forward Pass for LSTM\n", "\n", "Now that you have implemented one step of an LSTM, you can iterate this over it using a for loop to process a sequence of $T_x$ inputs. \n", "\n", "\n", "
Figure 5: LSTM over multiple time steps.
\n", "\n", " \n", "### Exercise 4 - lstm_forward\n", " \n", "Implement `lstm_forward()` to run an LSTM over $T_x$ time steps. \n", "\n", "**Instructions**\n", "* Get the dimensions $n_x, n_a, n_y, m, T_x$ from the shape of the variables: `x` and `parameters`\n", "* Initialize the 3D tensors $a$, $c$ and $y$\n", " - $a$: hidden state, shape $(n_{a}, m, T_{x})$\n", " - $c$: cell state, shape $(n_{a}, m, T_{x})$\n", " - $y$: prediction, shape $(n_{y}, m, T_{x})$ (Note that $T_{y} = T_{x}$ in this example)\n", " - **Note** Setting one variable equal to the other is a \"copy by reference\". In other words, don't do `c = a', otherwise both these variables point to the same underlying variable.\n", "* Initialize the 2D tensor $a^{\\langle t \\rangle}$ \n", " - $a^{\\langle t \\rangle}$ stores the hidden state for time step $t$. The variable name is `a_next`.\n", " - $a^{\\langle 0 \\rangle}$, the initial hidden state at time step 0, is passed in when calling the function. The variable name is `a0`.\n", " - $a^{\\langle t \\rangle}$ and $a^{\\langle 0 \\rangle}$ represent a single time step, so they both have the shape $(n_{a}, m)$ \n", " - Initialize $a^{\\langle t \\rangle}$ by setting it to the initial hidden state ($a^{\\langle 0 \\rangle}$) that is passed into the function.\n", "* Initialize $c^{\\langle t \\rangle}$ with zeros. \n", " - The variable name is `c_next`\n", " - $c^{\\langle t \\rangle}$ represents a single time step, so its shape is $(n_{a}, m)$\n", " - **Note**: create `c_next` as its own variable with its own location in memory. Do not initialize it as a slice of the 3D tensor $c$. In other words, **don't** do `c_next = c[:,:,0]`.\n", "* For each time step, do the following:\n", " - From the 3D tensor $x$, get a 2D slice $x^{\\langle t \\rangle}$ at time step $t$\n", " - Call the `lstm_cell_forward` function that you defined previously, to get the hidden state, cell state, prediction, and cache\n", " - Store the hidden state, cell state and prediction (the 2D tensors) inside the 3D tensors\n", " - Append the cache to the list of caches" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "XMmJrPSdELGQ" }, "outputs": [], "source": [ "# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)\n", "# GRADED FUNCTION: lstm_forward\n", "\n", "def lstm_forward(x, a0, parameters):\n", " \"\"\"\n", " Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (4).\n", "\n", " Arguments:\n", " x -- Input data for every time-step, of shape (n_x, m, T_x).\n", " a0 -- Initial hidden state, of shape (n_a, m)\n", " parameters -- python dictionary containing:\n", " Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)\n", " bf -- Bias of the forget gate, numpy array of shape (n_a, 1)\n", " Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)\n", " bi -- Bias of the update gate, numpy array of shape (n_a, 1)\n", " Wc -- Weight matrix of the first \"tanh\", numpy array of shape (n_a, n_a + n_x)\n", " bc -- Bias of the first \"tanh\", numpy array of shape (n_a, 1)\n", " Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)\n", " bo -- Bias of the output gate, numpy array of shape (n_a, 1)\n", " Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)\n", " by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)\n", " \n", " Returns:\n", " a -- Hidden states for every 
time-step, numpy array of shape (n_a, m, T_x)\n", " y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)\n", " c -- The value of the cell state, numpy array of shape (n_a, m, T_x)\n", " caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)\n", " \"\"\"\n", "\n", " # Initialize \"caches\", which will track the list of all the caches\n", " caches = []\n", " \n", " ### START CODE HERE ###\n", " #Wy = parameters['Wy'] # Save parameters in local variables in case you want to use Wy instead of parameters['Wy']\n", " # Retrieve dimensions from shapes of x and parameters['Wy'] (≈2 lines)\n", " n_x, m, T_x = x.shape\n", " n_y, n_a = parameters['Wy'].shape\n", " \n", " # initialize \"a\", \"c\" and \"y\" with zeros (≈3 lines)\n", " a = np.zeros((n_a,m,T_x))\n", " c = np.zeros((n_a,m,T_x))\n", " y = np.zeros((n_y,m,T_x))\n", " \n", " # Initialize a_next and c_next (≈2 lines)\n", " a_next = a0\n", " c_next = np.zeros((n_a,m))\n", " \n", " # loop over all time-steps\n", " for t in range(T_x):\n", " # Get the 2D slice 'xt' from the 3D input 'x' at time step 't'\n", " xt = x[:,:,t]\n", " # Update next hidden state, next memory state, compute the prediction, get the cache (≈1 line)\n", " a_next, c_next, yt, cache = lstm_cell_forward(xt, a_next, c_next, parameters)\n", " # Save the value of the new \"next\" hidden state in a (≈1 line)\n", " a[:,:,t] = a_next\n", " # Save the value of the next cell state (≈1 line)\n", " c[:,:,t] = c_next\n", " # Save the value of the prediction in y (≈1 line)\n", " y[:,:,t] = yt\n", " # Append the cache into caches (≈1 line)\n", " caches.append(cache)\n", " \n", " ### END CODE HERE ###\n", " \n", " # store values needed for backward propagation in cache\n", " caches = (caches, x)\n", "\n", " return a, y, c, caches" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "JehC5gwdELGS" }, "outputs": [], "source": [ "np.random.seed(1)\n", "x_tmp = np.random.randn(3, 10, 7)\n", "a0_tmp = np.random.randn(5, 10)\n", "parameters_tmp = {}\n", "parameters_tmp['Wf'] = np.random.randn(5, 5 + 3)\n", "parameters_tmp['bf'] = np.random.randn(5, 1)\n", "parameters_tmp['Wi'] = np.random.randn(5, 5 + 3)\n", "parameters_tmp['bi']= np.random.randn(5, 1)\n", "parameters_tmp['Wo'] = np.random.randn(5, 5 + 3)\n", "parameters_tmp['bo'] = np.random.randn(5, 1)\n", "parameters_tmp['Wc'] = np.random.randn(5, 5 + 3)\n", "parameters_tmp['bc'] = np.random.randn(5, 1)\n", "parameters_tmp['Wy'] = np.random.randn(2, 5)\n", "parameters_tmp['by'] = np.random.randn(2, 1)\n", "\n", "a_tmp, y_tmp, c_tmp, caches_tmp = lstm_forward(x_tmp, a0_tmp, parameters_tmp)\n", "print(\"a[4][3][6] = \", a_tmp[4][3][6])\n", "print(\"a.shape = \", a_tmp.shape)\n", "print(\"y[1][4][3] =\", y_tmp[1][4][3])\n", "print(\"y.shape = \", y_tmp.shape)\n", "print(\"caches[1][1][1] =\\n\", caches_tmp[1][1][1])\n", "print(\"c[1][2][1]\", c_tmp[1][2][1])\n", "print(\"len(caches) = \", len(caches_tmp))\n", "\n", "# UNIT TEST \n", "lstm_forward_test(lstm_forward)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "TAETfQVFELGV" }, "source": [ "**Expected Output**:\n", "\n", "```Python\n", "a[4][3][6] = 0.172117767533\n", "a.shape = (5, 10, 7)\n", "y[1][4][3] = 0.95087346185\n", "y.shape = (2, 10, 7)\n", "caches[1][1][1] =\n", " [ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139\n", " 0.41005165]\n", "c[1][2][1] -0.855544916718\n", "len(caches) = 2\n", "```" ] }, { "cell_type": "markdown", 
"metadata": { "colab_type": "text", "id": "CLgW871YELGW" }, "source": [ "### Congratulations! \n", "\n", "You have now implemented the forward passes for both the basic RNN and the LSTM. When using a deep learning framework, implementing the forward pass is sufficient to build systems that achieve great performance. The framework will take care of the rest. \n", "\n", "What you should remember:\n", " \n", "* An LSTM is similar to an RNN in that they both use hidden states to pass along information, but an LSTM also uses a cell state, which is like a long-term memory, to help deal with the issue of vanishing gradients\n", "* An LSTM cell consists of a cell state, or long-term memory, a hidden state, or short-term memory, along with 3 gates that constantly update the relevancy of its inputs:\n", " * A forget gate, which decides which input units should be remembered and passed along. It's a tensor with values between 0 and 1. \n", " * If a unit has a value close to 0, the LSTM will \"forget\" the stored state in the previous cell state.\n", " * If it has a value close to 1, the LSTM will mostly remember the corresponding value.\n", " * An update gate, again a tensor containing values between 0 and 1. It decides on what information to throw away, and what new information to add.\n", " * When a unit in the update gate is close to 1, the value of its candidate is passed on to the hidden state.\n", " * When a unit in the update gate is close to 0, it's prevented from being passed onto the hidden state.\n", " * And an output gate, which decides what gets sent as the output of the time step\n", " \n", "\n", "Let's recap all you've accomplished so far. You have: \n", "\n", "* Used notation for building sequence models\n", "* Become familiar with the architecture of a basic RNN and an LSTM, and can describe their components\n", "\n", "The rest of this notebook is optional, and will not be graded, but as always, you are encouraged to push your own understanding! Good luck and have fun. " ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "hRg4ba60ELGW" }, "source": [ " \n", "## 3 - Backpropagation in Recurrent Neural Networks (OPTIONAL / UNGRADED)\n", "\n", "In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers do not need to bother with the details of the backward pass. If, however, you are an expert in calculus (or are just curious) and want to see the details of backprop in RNNs, you can work through this optional portion of the notebook. \n", "\n", "When in an earlier [course](https://www.coursera.org/learn/neural-networks-deep-learning/lecture/0VSHe/derivatives-with-a-computation-graph) you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in recurrent neural networks you can calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are quite complicated, and so were not derived in lecture. However, they're briefly presented for your viewing pleasure below. " ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "7TnuUyMPELGX" }, "source": [ "Note: This notebook does not implement the backward path from the Loss 'J' backwards to 'a'. This would have included the dense layer and softmax, which are a part of the forward path. 
This is assumed to be calculated elsewhere and the result passed to `rnn_backward` in 'da'. It is further assumed that loss has been adjusted for batch size (m) and division by the number of examples is NOT required here." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "CYPUrRdXELGX" }, "source": [ "This section is optional and ungraded, because it's more difficult and has fewer details regarding its implementation. Note that this section only implements key elements of the full path! \n", "\n", "Onward, brave one: " ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "eN4Hc8P4ELGX" }, "source": [ " \n", "### 3.1 - Basic RNN Backward Pass\n", "\n", "Begin by computing the backward pass for the basic RNN cell. Then, in the following sections, iterate through the cells.\n", "\n", "
\n", "
Figure 6: The RNN cell's backward pass. Just like in a fully-connected neural network, the derivative of the cost function $J$ backpropagates through the time steps of the RNN by following the chain rule from calculus. Internal to the cell, the chain rule is also used to calculate $(\frac{\partial J}{\partial W_{ax}},\frac{\partial J}{\partial W_{aa}},\frac{\partial J}{\partial b_{a}})$ to update the parameters $(W_{ax}, W_{aa}, b_a)$. The operation can utilize the cached results from the forward path.
" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "nZb7R-V8ELGY" }, "source": [ "Recall from lecture that the shorthand for the partial derivative of cost relative to a variable is `dVariable`. For example, $\\frac{\\partial J}{\\partial W_{ax}}$ is $dW_{ax}$. This will be used throughout the remaining sections." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "cyh9i4bFELGZ" }, "source": [ "\n", "
\n", "
Figure 7: This implementation of `rnn_cell_backward` does **not** include the output dense layer and softmax, which are included in `rnn_cell_forward`. \n", "\n", "$da_{next}$ is $\frac{\partial{J}}{\partial a^{\langle t \rangle}}$; it includes the gradient flowing back from later time steps as well as the gradient from the current time step's output (dense layer and softmax), which is assumed to be computed elsewhere. The addition shown in green will be part of your implementation of `rnn_backward`.
" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "PeVJfrjkELGa" }, "source": [ "##### Equations\n", "To compute `rnn_cell_backward`, you can use the following equations. It's a good exercise to derive them by hand. Here, $*$ denotes element-wise multiplication while the absence of a symbol indicates matrix multiplication.\n", "\n", "\\begin{align}\n", "\\displaystyle a^{\\langle t \\rangle} &= \\tanh(W_{ax} x^{\\langle t \\rangle} + W_{aa} a^{\\langle t-1 \\rangle} + b_{a})\\tag{-} \\\\[8pt]\n", "\\displaystyle \\frac{\\partial \\tanh(x)} {\\partial x} &= 1 - \\tanh^2(x) \\tag{-} \\\\[8pt]\n", "\\displaystyle {dtanh} &= da_{next} * ( 1 - \\tanh^2(W_{ax}x^{\\langle t \\rangle}+W_{aa} a^{\\langle t-1 \\rangle} + b_{a})) \\tag{0} \\\\[8pt]\n", "\\displaystyle {dW_{ax}} &= dtanh \\cdot x^{\\langle t \\rangle T}\\tag{1} \\\\[8pt]\n", "\\displaystyle dW_{aa} &= dtanh \\cdot a^{\\langle t-1 \\rangle T}\\tag{2} \\\\[8pt]\n", "\\displaystyle db_a& = \\sum_{batch}dtanh\\tag{3} \\\\[8pt]\n", "\\displaystyle dx^{\\langle t \\rangle} &= { W_{ax}}^T \\cdot dtanh\\tag{4} \\\\[8pt]\n", "\\displaystyle da_{prev} &= { W_{aa}}^T \\cdot dtanh\\tag{5}\n", "\\end{align}\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "FYskaCprELGa" }, "source": [ "\n", "### Exercise 5 - rnn_cell_backward\n", "\n", "Implementing `rnn_cell_backward`.\n", "\n", "The results can be computed directly by implementing the equations above. However, you have an option to simplify them by computing 'dz' and using the chain rule. \n", "This can be further simplified by noting that $\\tanh(W_{ax}x^{\\langle t \\rangle}+W_{aa} a^{\\langle t-1 \\rangle} + b_{a})$ was computed and saved as `a_next` in the forward pass. \n", "\n", "To calculate `dba`, the 'batch' above is a sum across all 'm' examples (axis= 1). Note that you should use the `keepdims = True` option.\n", "\n", "It may be worthwhile to review Course 1 [Derivatives with a Computation Graph](https://www.coursera.org/learn/neural-networks-deep-learning/lecture/0VSHe/derivatives-with-a-computation-graph) through [Backpropagation Intuition](https://www.coursera.org/learn/neural-networks-deep-learning/lecture/6dDj7/backpropagation-intuition-optional), which decompose the calculation into steps using the chain rule. \n", "Matrix vector derivatives are described [here](http://cs231n.stanford.edu/vecDerivs.pdf), though the equations above incorporate the required transformations.\n", "\n", "**Note**: `rnn_cell_backward` does **not** include the calculation of loss from $y \\langle t \\rangle$. This is incorporated into the incoming `da_next`. This is a slight mismatch with `rnn_cell_forward`, which includes a dense layer and softmax. 
\n", "\n", "**Note on the code**: \n", " \n", "$\\displaystyle dx^{\\langle t \\rangle}$ is represented by dxt, \n", "$\\displaystyle d W_{ax}$ is represented by dWax, \n", "$\\displaystyle da_{prev}$ is represented by da_prev, \n", "$\\displaystyle dW_{aa}$ is represented by dWaa, \n", "$\\displaystyle db_{a}$ is represented by dba, \n", "`dz` is not derived above but can optionally be derived by students to simplify the repeated calculations.\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "BmbWomYOELGb" }, "outputs": [], "source": [ "# UNGRADED FUNCTION: rnn_cell_backward\n", "\n", "def rnn_cell_backward(da_next, cache):\n", " \"\"\"\n", " Implements the backward pass for the RNN-cell (single time-step).\n", "\n", " Arguments:\n", " da_next -- Gradient of loss with respect to next hidden state\n", " cache -- python dictionary containing useful values (output of rnn_cell_forward())\n", "\n", " Returns:\n", " gradients -- python dictionary containing:\n", " dx -- Gradients of input data, of shape (n_x, m)\n", " da_prev -- Gradients of previous hidden state, of shape (n_a, m)\n", " dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)\n", " dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)\n", " dba -- Gradients of bias vector, of shape (n_a, 1)\n", " \"\"\"\n", " \n", " # Retrieve values from cache\n", " (a_next, a_prev, xt, parameters) = cache\n", " \n", " # Retrieve values from parameters\n", " Wax = parameters[\"Wax\"]\n", " Waa = parameters[\"Waa\"]\n", " Wya = parameters[\"Wya\"]\n", " ba = parameters[\"ba\"]\n", " by = parameters[\"by\"]\n", "\n", " ### START CODE HERE ###\n", " # compute the gradient of dtanh term using a_next and da_next (≈1 line)\n", " dtanh = None\n", "\n", " # compute the gradient of the loss with respect to Wax (≈2 lines)\n", " dxt = None\n", " dWax = None\n", "\n", " # compute the gradient with respect to Waa (≈2 lines)\n", " da_prev = None\n", " dWaa = None\n", "\n", " # compute the gradient with respect to b (≈1 line)\n", " dba = None\n", "\n", " ### END CODE HERE ###\n", " \n", " # Store the gradients in a python dictionary\n", " gradients = {\"dxt\": dxt, \"da_prev\": da_prev, \"dWax\": dWax, \"dWaa\": dWaa, \"dba\": dba}\n", " \n", " return gradients" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "VVbG5aPtELGd", "scrolled": true }, "outputs": [], "source": [ "np.random.seed(1)\n", "xt_tmp = np.random.randn(3,10)\n", "a_prev_tmp = np.random.randn(5,10)\n", "parameters_tmp = {}\n", "parameters_tmp['Wax'] = np.random.randn(5,3)\n", "parameters_tmp['Waa'] = np.random.randn(5,5)\n", "parameters_tmp['Wya'] = np.random.randn(2,5)\n", "parameters_tmp['ba'] = np.random.randn(5,1)\n", "parameters_tmp['by'] = np.random.randn(2,1)\n", "\n", "a_next_tmp, yt_tmp, cache_tmp = rnn_cell_forward(xt_tmp, a_prev_tmp, parameters_tmp)\n", "\n", "da_next_tmp = np.random.randn(5,10)\n", "gradients_tmp = rnn_cell_backward(da_next_tmp, cache_tmp)\n", "print(\"gradients[\\\"dxt\\\"][1][2] =\", gradients_tmp[\"dxt\"][1][2])\n", "print(\"gradients[\\\"dxt\\\"].shape =\", gradients_tmp[\"dxt\"].shape)\n", "print(\"gradients[\\\"da_prev\\\"][2][3] =\", gradients_tmp[\"da_prev\"][2][3])\n", "print(\"gradients[\\\"da_prev\\\"].shape =\", gradients_tmp[\"da_prev\"].shape)\n", "print(\"gradients[\\\"dWax\\\"][3][1] =\", gradients_tmp[\"dWax\"][3][1])\n", "print(\"gradients[\\\"dWax\\\"].shape =\", gradients_tmp[\"dWax\"].shape)\n", 
"print(\"gradients[\\\"dWaa\\\"][1][2] =\", gradients_tmp[\"dWaa\"][1][2])\n", "print(\"gradients[\\\"dWaa\\\"].shape =\", gradients_tmp[\"dWaa\"].shape)\n", "print(\"gradients[\\\"dba\\\"][4] =\", gradients_tmp[\"dba\"][4])\n", "print(\"gradients[\\\"dba\\\"].shape =\", gradients_tmp[\"dba\"].shape)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "8rDmdFwaELGf" }, "source": [ "**Expected Output**:\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
\n", " gradients[\"dxt\"][1][2] =\n", " \n", " -1.3872130506\n", "
\n", " gradients[\"dxt\"].shape =\n", " \n", " (3, 10)\n", "
\n", " gradients[\"da_prev\"][2][3] =\n", " \n", " -0.152399493774\n", "
\n", " gradients[\"da_prev\"].shape =\n", " \n", " (5, 10)\n", "
\n", " gradients[\"dWax\"][3][1] =\n", " \n", " 0.410772824935\n", "
\n", " gradients[\"dWax\"].shape =\n", " \n", " (5, 3)\n", "
\n", " gradients[\"dWaa\"][1][2] = \n", " \n", " 1.15034506685\n", "
\n", " gradients[\"dWaa\"].shape =\n", " \n", " (5, 5)\n", "
\n", " gradients[\"dba\"][4] = \n", " \n", " [ 0.20023491]\n", "
\n", " gradients[\"dba\"].shape = \n", " \n", " (5, 1)\n", "
" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "-iKRDCSwELGg" }, "source": [ "\n", "### Exercise 6 - rnn_backward\n", "\n", "Computing the gradients of the cost with respect to $a^{\\langle t \\rangle}$ at every time step $t$ is useful because it is what helps the gradient backpropagate to the previous RNN cell. To do so, you need to iterate through all the time steps starting at the end, and at each step, you increment the overall $db_a$, $dW_{aa}$, $dW_{ax}$ and you store $dx$.\n", "\n", "**Instructions**:\n", "\n", "Implement the `rnn_backward` function. Initialize the return variables with zeros first, and then loop through all the time steps while calling the `rnn_cell_backward` at each time time step, updating the other variables accordingly." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "23d3urTKELGg" }, "source": [ "* Note that this notebook does not implement the backward path from the Loss 'J' backwards to 'a'. \n", " * This would have included the dense layer and softmax which are a part of the forward path. \n", " * This is assumed to be calculated elsewhere and the result passed to `rnn_backward` in 'da'. \n", " * You must combine this with the loss from the previous stages when calling `rnn_cell_backward` (see figure 7 above).\n", "* It is further assumed that loss has been adjusted for batch size (m).\n", " * Therefore, division by the number of examples is not required here." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "G38ecvUDELGg" }, "outputs": [], "source": [ "# UNGRADED FUNCTION: rnn_backward\n", "\n", "def rnn_backward(da, caches):\n", " \"\"\"\n", " Implement the backward pass for a RNN over an entire sequence of input data.\n", "\n", " Arguments:\n", " da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)\n", " caches -- tuple containing information from the forward pass (rnn_forward)\n", " \n", " Returns:\n", " gradients -- python dictionary containing:\n", " dx -- Gradient w.r.t. the input data, numpy-array of shape (n_x, m, T_x)\n", " da0 -- Gradient w.r.t the initial hidden state, numpy-array of shape (n_a, m)\n", " dWax -- Gradient w.r.t the input's weight matrix, numpy-array of shape (n_a, n_x)\n", " dWaa -- Gradient w.r.t the hidden state's weight matrix, numpy-arrayof shape (n_a, n_a)\n", " dba -- Gradient w.r.t the bias, of shape (n_a, 1)\n", " \"\"\"\n", " \n", " ### START CODE HERE ###\n", " \n", " # Retrieve values from the first cache (t=1) of caches (≈2 lines)\n", " (caches, x) = caches\n", " (a1, a0, x1, parameters) = caches[0]\n", " \n", " # Retrieve dimensions from da's and x1's shapes (≈2 lines)\n", " n_a, m, T_x = da.shape\n", " n_x, m = x1.shape \n", " \n", " # initialize the gradients with the right sizes (≈6 lines)\n", " dx = None\n", " dWax = None\n", " dWaa = None\n", " dba = None\n", " da0 = None\n", " da_prevt = None\n", " \n", " # Loop through all the time steps\n", " for t in reversed(range(T_x)):\n", " # Compute gradients at time step t. Choose wisely the \"da_next\" and the \"cache\" to use in the backward propagation step. 
(≈1 line)\n", " gradients = None\n", " # Retrieve derivatives from gradients (≈ 1 line)\n", " dxt, da_prevt, dWaxt, dWaat, dbat = gradients[\"dxt\"], gradients[\"da_prev\"], gradients[\"dWax\"], gradients[\"dWaa\"], gradients[\"dba\"]\n", " # Increment global derivatives w.r.t parameters by adding their derivative at time-step t (≈4 lines)\n", " dx[:, :, t] = None \n", " dWax += None \n", " dWaa += None \n", " dba += None \n", " \n", " # Set da0 to the gradient of a which has been backpropagated through all time-steps (≈1 line) \n", " da0 = None\n", " ### END CODE HERE ###\n", "\n", " # Store the gradients in a python dictionary\n", " gradients = {\"dx\": dx, \"da0\": da0, \"dWax\": dWax, \"dWaa\": dWaa,\"dba\": dba}\n", " \n", " return gradients" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "oZXl_w5pELGj" }, "outputs": [], "source": [ "np.random.seed(1)\n", "x_tmp = np.random.randn(3,10,4)\n", "a0_tmp = np.random.randn(5,10)\n", "parameters_tmp = {}\n", "parameters_tmp['Wax'] = np.random.randn(5,3)\n", "parameters_tmp['Waa'] = np.random.randn(5,5)\n", "parameters_tmp['Wya'] = np.random.randn(2,5)\n", "parameters_tmp['ba'] = np.random.randn(5,1)\n", "parameters_tmp['by'] = np.random.randn(2,1)\n", "\n", "a_tmp, y_tmp, caches_tmp = rnn_forward(x_tmp, a0_tmp, parameters_tmp)\n", "da_tmp = np.random.randn(5, 10, 4)\n", "gradients_tmp = rnn_backward(da_tmp, caches_tmp)\n", "\n", "print(\"gradients[\\\"dx\\\"][1][2] =\", gradients_tmp[\"dx\"][1][2])\n", "print(\"gradients[\\\"dx\\\"].shape =\", gradients_tmp[\"dx\"].shape)\n", "print(\"gradients[\\\"da0\\\"][2][3] =\", gradients_tmp[\"da0\"][2][3])\n", "print(\"gradients[\\\"da0\\\"].shape =\", gradients_tmp[\"da0\"].shape)\n", "print(\"gradients[\\\"dWax\\\"][3][1] =\", gradients_tmp[\"dWax\"][3][1])\n", "print(\"gradients[\\\"dWax\\\"].shape =\", gradients_tmp[\"dWax\"].shape)\n", "print(\"gradients[\\\"dWaa\\\"][1][2] =\", gradients_tmp[\"dWaa\"][1][2])\n", "print(\"gradients[\\\"dWaa\\\"].shape =\", gradients_tmp[\"dWaa\"].shape)\n", "print(\"gradients[\\\"dba\\\"][4] =\", gradients_tmp[\"dba\"][4])\n", "print(\"gradients[\\\"dba\\\"].shape =\", gradients_tmp[\"dba\"].shape)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "8JFS03nAELGl" }, "source": [ "**Expected Output**:\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
\n", " gradients[\"dx\"][1][2] =\n", " \n", " [-2.07101689 -0.59255627 0.02466855 0.01483317]\n", "
\n", " gradients[\"dx\"].shape =\n", " \n", " (3, 10, 4)\n", "
\n", " gradients[\"da0\"][2][3] =\n", " \n", " -0.314942375127\n", "
\n", " gradients[\"da0\"].shape =\n", " \n", " (5, 10)\n", "
\n", " gradients[\"dWax\"][3][1] =\n", " \n", " 11.2641044965\n", "
\n", " gradients[\"dWax\"].shape =\n", " \n", " (5, 3)\n", "
\n", " gradients[\"dWaa\"][1][2] = \n", " \n", " 2.30333312658\n", "
\n", " gradients[\"dWaa\"].shape =\n", " \n", " (5, 5)\n", "
\n", " gradients[\"dba\"][4] = \n", " \n", " [-0.74747722]\n", "
\n", " gradients[\"dba\"].shape = \n", " \n", " (5, 1)\n", "
" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "L8N3p4gTELGm" }, "source": [ "\n", "### 3.2 - LSTM Backward Pass" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "MNNR4NQRELGm" }, "source": [ "#### 1. One Step Backward\n", "The LSTM backward pass is slightly more complicated than the forward pass.\n", "\n", "
\n", "
**Figure 8**: LSTM Cell Backward. Note that the output functions, while part of `lstm_cell_forward`, are not included in `lstm_cell_backward`.
\n", "\n", "The equations for the LSTM backward pass are provided below. (If you enjoy calculus exercises feel free to try deriving these from scratch yourself.)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "QYZwnAEGELGo" }, "source": [ "#### 2. Gate Derivatives\n", "Note the location of the gate derivatives ($\\gamma$..) between the dense layer and the activation function (see graphic above). This is convenient for computing parameter derivatives in the next step. \n", "\\begin{align}\n", "d\\gamma_o^{\\langle t \\rangle} &= da_{next}*\\tanh(c_{next}) * \\Gamma_o^{\\langle t \\rangle}*\\left(1-\\Gamma_o^{\\langle t \\rangle}\\right)\\tag{7} \\\\[8pt]\n", "dp\\widetilde{c}^{\\langle t \\rangle} &= \\left(dc_{next}*\\Gamma_u^{\\langle t \\rangle}+ \\Gamma_o^{\\langle t \\rangle}* (1-\\tanh^2(c_{next})) * \\Gamma_u^{\\langle t \\rangle} * da_{next} \\right) * \\left(1-\\left(\\widetilde c^{\\langle t \\rangle}\\right)^2\\right) \\tag{8} \\\\[8pt]\n", "d\\gamma_u^{\\langle t \\rangle} &= \\left(dc_{next}*\\widetilde{c}^{\\langle t \\rangle} + \\Gamma_o^{\\langle t \\rangle}* (1-\\tanh^2(c_{next})) * \\widetilde{c}^{\\langle t \\rangle} * da_{next}\\right)*\\Gamma_u^{\\langle t \\rangle}*\\left(1-\\Gamma_u^{\\langle t \\rangle}\\right)\\tag{9} \\\\[8pt]\n", "d\\gamma_f^{\\langle t \\rangle} &= \\left(dc_{next}* c_{prev} + \\Gamma_o^{\\langle t \\rangle} * (1-\\tanh^2(c_{next})) * c_{prev} * da_{next}\\right)*\\Gamma_f^{\\langle t \\rangle}*\\left(1-\\Gamma_f^{\\langle t \\rangle}\\right)\\tag{10}\n", "\\end{align}\n", "\n", "#### 3. Parameter Derivatives \n", "\n", "$ dW_f = d\\gamma_f^{\\langle t \\rangle} \\begin{bmatrix} a_{prev} \\\\ x_t\\end{bmatrix}^T \\tag{11} $\n", "$ dW_u = d\\gamma_u^{\\langle t \\rangle} \\begin{bmatrix} a_{prev} \\\\ x_t\\end{bmatrix}^T \\tag{12} $\n", "$ dW_c = dp\\widetilde c^{\\langle t \\rangle} \\begin{bmatrix} a_{prev} \\\\ x_t\\end{bmatrix}^T \\tag{13} $\n", "$ dW_o = d\\gamma_o^{\\langle t \\rangle} \\begin{bmatrix} a_{prev} \\\\ x_t\\end{bmatrix}^T \\tag{14}$\n", "\n", "To calculate $db_f, db_u, db_c, db_o$ you just need to sum across all 'm' examples (axis= 1) on $d\\gamma_f^{\\langle t \\rangle}, d\\gamma_u^{\\langle t \\rangle}, dp\\widetilde c^{\\langle t \\rangle}, d\\gamma_o^{\\langle t \\rangle}$ respectively. Note that you should have the `keepdims = True` option.\n", "\n", "$\\displaystyle db_f = \\sum_{batch}d\\gamma_f^{\\langle t \\rangle}\\tag{15}$\n", "$\\displaystyle db_u = \\sum_{batch}d\\gamma_u^{\\langle t \\rangle}\\tag{16}$\n", "$\\displaystyle db_c = \\sum_{batch}d\\gamma_c^{\\langle t \\rangle}\\tag{17}$\n", "$\\displaystyle db_o = \\sum_{batch}d\\gamma_o^{\\langle t \\rangle}\\tag{18}$\n", "\n", "Finally, you will compute the derivative with respect to the previous hidden state, previous memory state, and input.\n", "\n", "$ da_{prev} = W_f^T d\\gamma_f^{\\langle t \\rangle} + W_u^T d\\gamma_u^{\\langle t \\rangle}+ W_c^T dp\\widetilde c^{\\langle t \\rangle} + W_o^T d\\gamma_o^{\\langle t \\rangle} \\tag{19}$\n", "\n", "Here, to account for concatenation, the weights for equations 19 are the first n_a, (i.e. 
$W_f = W_f[:,:n_a]$ etc...)\n", "\n", "$ dc_{prev} = dc_{next}*\\Gamma_f^{\\langle t \\rangle} + \\Gamma_o^{\\langle t \\rangle} * (1- \\tanh^2(c_{next}))*\\Gamma_f^{\\langle t \\rangle}*da_{next} \\tag{20}$\n", "\n", "$ dx^{\\langle t \\rangle} = W_f^T d\\gamma_f^{\\langle t \\rangle} + W_u^T d\\gamma_u^{\\langle t \\rangle}+ W_c^T dp\\widetilde c^{\\langle t \\rangle} + W_o^T d\\gamma_o^{\\langle t \\rangle}\\tag{21} $\n", "\n", "where the weights for equation 21 are from n_a to the end, (i.e. $W_f = W_f[:,n_a:]$ etc...)\n", "\n", "\n", "### Exercise 7 - lstm_cell_backward\n", "\n", "Implement `lstm_cell_backward` by implementing equations $7-21$ below. \n", " \n", " \n", "**Note**: \n", "\n", "In the code:\n", "\n", "$d\\gamma_o^{\\langle t \\rangle}$ is represented by `dot`, \n", "$dp\\widetilde{c}^{\\langle t \\rangle}$ is represented by `dcct`, \n", "$d\\gamma_u^{\\langle t \\rangle}$ is represented by `dit`, \n", "$d\\gamma_f^{\\langle t \\rangle}$ is represented by `dft`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "2C_nFC_qELGo" }, "outputs": [], "source": [ "# UNGRADED FUNCTION: lstm_cell_backward\n", "\n", "def lstm_cell_backward(da_next, dc_next, cache):\n", " \"\"\"\n", " Implement the backward pass for the LSTM-cell (single time-step).\n", "\n", " Arguments:\n", " da_next -- Gradients of next hidden state, of shape (n_a, m)\n", " dc_next -- Gradients of next cell state, of shape (n_a, m)\n", " cache -- cache storing information from the forward pass\n", "\n", " Returns:\n", " gradients -- python dictionary containing:\n", " dxt -- Gradient of input data at time-step t, of shape (n_x, m)\n", " da_prev -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)\n", " dc_prev -- Gradient w.r.t. the previous memory state, of shape (n_a, m, T_x)\n", " dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)\n", " dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)\n", " dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)\n", " dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)\n", " dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)\n", " dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)\n", " dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)\n", " dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)\n", " \"\"\"\n", "\n", " # Retrieve information from \"cache\"\n", " (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters) = cache\n", " \n", " ### START CODE HERE ###\n", " # Retrieve dimensions from xt's and a_next's shape (≈2 lines)\n", " n_x, m = None\n", " n_a, m = None\n", " \n", " # Compute gates related derivatives. Their values can be found by looking carefully at equations (7) to (10) (≈4 lines)\n", " dot = None\n", " dcct = None\n", " dit = None\n", " dft = None\n", " \n", " # Compute parameters related derivatives. Use equations (11)-(18) (≈8 lines)\n", " dWf = None\n", " dWi = None\n", " dWc = None\n", " dWo = None\n", " dbf = None\n", " dbi = None\n", " dbc = None\n", " dbo = None\n", "\n", " # Compute derivatives w.r.t previous hidden state, previous memory state and input. Use equations (19)-(21). 
(≈3 lines)\n", " da_prev = None\n", " dc_prev = None\n", " dxt = None\n", " ### END CODE HERE ###\n", " \n", " \n", " \n", " # Save gradients in dictionary\n", " gradients = {\"dxt\": dxt, \"da_prev\": da_prev, \"dc_prev\": dc_prev, \"dWf\": dWf,\"dbf\": dbf, \"dWi\": dWi,\"dbi\": dbi,\n", " \"dWc\": dWc,\"dbc\": dbc, \"dWo\": dWo,\"dbo\": dbo}\n", "\n", " return gradients" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "pggjdThtELGs", "scrolled": false }, "outputs": [], "source": [ "np.random.seed(1)\n", "xt_tmp = np.random.randn(3,10)\n", "a_prev_tmp = np.random.randn(5,10)\n", "c_prev_tmp = np.random.randn(5,10)\n", "parameters_tmp = {}\n", "parameters_tmp['Wf'] = np.random.randn(5, 5+3)\n", "parameters_tmp['bf'] = np.random.randn(5,1)\n", "parameters_tmp['Wi'] = np.random.randn(5, 5+3)\n", "parameters_tmp['bi'] = np.random.randn(5,1)\n", "parameters_tmp['Wo'] = np.random.randn(5, 5+3)\n", "parameters_tmp['bo'] = np.random.randn(5,1)\n", "parameters_tmp['Wc'] = np.random.randn(5, 5+3)\n", "parameters_tmp['bc'] = np.random.randn(5,1)\n", "parameters_tmp['Wy'] = np.random.randn(2,5)\n", "parameters_tmp['by'] = np.random.randn(2,1)\n", "\n", "a_next_tmp, c_next_tmp, yt_tmp, cache_tmp = lstm_cell_forward(xt_tmp, a_prev_tmp, c_prev_tmp, parameters_tmp)\n", "\n", "da_next_tmp = np.random.randn(5,10)\n", "dc_next_tmp = np.random.randn(5,10)\n", "gradients_tmp = lstm_cell_backward(da_next_tmp, dc_next_tmp, cache_tmp)\n", "print(\"gradients[\\\"dxt\\\"][1][2] =\", gradients_tmp[\"dxt\"][1][2])\n", "print(\"gradients[\\\"dxt\\\"].shape =\", gradients_tmp[\"dxt\"].shape)\n", "print(\"gradients[\\\"da_prev\\\"][2][3] =\", gradients_tmp[\"da_prev\"][2][3])\n", "print(\"gradients[\\\"da_prev\\\"].shape =\", gradients_tmp[\"da_prev\"].shape)\n", "print(\"gradients[\\\"dc_prev\\\"][2][3] =\", gradients_tmp[\"dc_prev\"][2][3])\n", "print(\"gradients[\\\"dc_prev\\\"].shape =\", gradients_tmp[\"dc_prev\"].shape)\n", "print(\"gradients[\\\"dWf\\\"][3][1] =\", gradients_tmp[\"dWf\"][3][1])\n", "print(\"gradients[\\\"dWf\\\"].shape =\", gradients_tmp[\"dWf\"].shape)\n", "print(\"gradients[\\\"dWi\\\"][1][2] =\", gradients_tmp[\"dWi\"][1][2])\n", "print(\"gradients[\\\"dWi\\\"].shape =\", gradients_tmp[\"dWi\"].shape)\n", "print(\"gradients[\\\"dWc\\\"][3][1] =\", gradients_tmp[\"dWc\"][3][1])\n", "print(\"gradients[\\\"dWc\\\"].shape =\", gradients_tmp[\"dWc\"].shape)\n", "print(\"gradients[\\\"dWo\\\"][1][2] =\", gradients_tmp[\"dWo\"][1][2])\n", "print(\"gradients[\\\"dWo\\\"].shape =\", gradients_tmp[\"dWo\"].shape)\n", "print(\"gradients[\\\"dbf\\\"][4] =\", gradients_tmp[\"dbf\"][4])\n", "print(\"gradients[\\\"dbf\\\"].shape =\", gradients_tmp[\"dbf\"].shape)\n", "print(\"gradients[\\\"dbi\\\"][4] =\", gradients_tmp[\"dbi\"][4])\n", "print(\"gradients[\\\"dbi\\\"].shape =\", gradients_tmp[\"dbi\"].shape)\n", "print(\"gradients[\\\"dbc\\\"][4] =\", gradients_tmp[\"dbc\"][4])\n", "print(\"gradients[\\\"dbc\\\"].shape =\", gradients_tmp[\"dbc\"].shape)\n", "print(\"gradients[\\\"dbo\\\"][4] =\", gradients_tmp[\"dbo\"][4])\n", "print(\"gradients[\\\"dbo\\\"].shape =\", gradients_tmp[\"dbo\"].shape)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "V579PRbyELGv" }, "source": [ "**Expected Output**:\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " 
\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
\n", " gradients[\"dxt\"][1][2] =\n", " \n", " 3.23055911511\n", "
\n", " gradients[\"dxt\"].shape =\n", " \n", " (3, 10)\n", "
\n", " gradients[\"da_prev\"][2][3] =\n", " \n", " -0.0639621419711\n", "
\n", " gradients[\"da_prev\"].shape =\n", " \n", " (5, 10)\n", "
\n", " gradients[\"dc_prev\"][2][3] =\n", " \n", " 0.797522038797\n", "
\n", " gradients[\"dc_prev\"].shape =\n", " \n", " (5, 10)\n", "
\n", " gradients[\"dWf\"][3][1] = \n", " \n", " -0.147954838164\n", "
\n", " gradients[\"dWf\"].shape =\n", " \n", " (5, 8)\n", "
\n", " gradients[\"dWi\"][1][2] = \n", " \n", " 1.05749805523\n", "
\n", " gradients[\"dWi\"].shape = \n", " \n", " (5, 8)\n", "
\n", " gradients[\"dWc\"][3][1] = \n", " \n", " 2.30456216369\n", "
\n", " gradients[\"dWc\"].shape = \n", " \n", " (5, 8)\n", "
\n", " gradients[\"dWo\"][1][2] = \n", " \n", " 0.331311595289\n", "
\n", " gradients[\"dWo\"].shape = \n", " \n", " (5, 8)\n", "
\n", " gradients[\"dbf\"][4] = \n", " \n", " [ 0.18864637]\n", "
\n", " gradients[\"dbf\"].shape = \n", " \n", " (5, 1)\n", "
\n", " gradients[\"dbi\"][4] = \n", " \n", " [-0.40142491]\n", "
\n", " gradients[\"dbi\"].shape = \n", " \n", " (5, 1)\n", "
\n", " gradients[\"dbc\"][4] = \n", " \n", " [ 0.25587763]\n", "
\n", " gradients[\"dbc\"].shape = \n", " \n", " (5, 1)\n", "
\n", " gradients[\"dbo\"][4] = \n", " \n", " [ 0.13893342]\n", "
\n", " gradients[\"dbo\"].shape = \n", " \n", " (5, 1)\n", "
" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "PceNMxTDELGv" }, "source": [ "\n", "### 3.3 Backward Pass through the LSTM RNN\n", "\n", "This part is very similar to the `rnn_backward` function you implemented above. You will first create variables of the same dimension as your return variables. You will then iterate over all the time steps starting from the end and call the one step function you implemented for LSTM at each iteration. You will then update the parameters by summing them individually. Finally return a dictionary with the new gradients. \n", "\n", "\n", "### Exercise 8 - lstm_backward\n", "\n", "Implement the `lstm_backward` function.\n", "\n", "**Instructions**: Create a for loop starting from $T_x$ and going backward. For each step, call `lstm_cell_backward` and update your old gradients by adding the new gradients to them. Note that `dxt` is not updated, but is stored." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "nK0kSO1yELGw" }, "outputs": [], "source": [ "# UNGRADED FUNCTION: lstm_backward\n", "\n", "def lstm_backward(da, caches):\n", " \n", " \"\"\"\n", " Implement the backward pass for the RNN with LSTM-cell (over a whole sequence).\n", "\n", " Arguments:\n", " da -- Gradients w.r.t the hidden states, numpy-array of shape (n_a, m, T_x)\n", " caches -- cache storing information from the forward pass (lstm_forward)\n", "\n", " Returns:\n", " gradients -- python dictionary containing:\n", " dx -- Gradient of inputs, of shape (n_x, m, T_x)\n", " da0 -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)\n", " dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)\n", " dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)\n", " dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)\n", " dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)\n", " dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)\n", " dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)\n", " dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)\n", " dbo -- Gradient w.r.t. 
biases of the output gate, of shape (n_a, 1)\n", " \"\"\"\n", "\n", " # Retrieve values from the first cache (t=1) of caches.\n", " (caches, x) = caches\n", " (a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches[0]\n", " \n", " ### START CODE HERE ###\n", " # Retrieve dimensions from da's and x1's shapes (≈2 lines)\n", " n_a, m, T_x = None\n", " n_x, m = None\n", " \n", " # initialize the gradients with the right sizes (≈12 lines)\n", " dx = None\n", " da0 = None\n", " da_prevt = None\n", " dc_prevt = None\n", " dWf = None\n", " dWi = None\n", " dWc = None\n", " dWo = None\n", " dbf = None\n", " dbi = None\n", " dbc = None\n", " dbo = None\n", " \n", " # loop back over the whole sequence\n", " for t in reversed(range(None)):\n", " # Compute all gradients using lstm_cell_backward\n", " gradients = None\n", " # Store or add the gradient to the parameters' previous step's gradient\n", " da_prevt = None\n", " dc_prevt = None\n", " dx[:,:,t] = None\n", " dWf += None\n", " dWi += None\n", " dWc += None\n", " dWo += None\n", " dbf += None\n", " dbi += None\n", " dbc += None\n", " dbo += None\n", " # Set the first activation's gradient to the backpropagated gradient da_prev.\n", " da0 = None\n", " \n", " ### END CODE HERE ###\n", "\n", " # Store the gradients in a python dictionary\n", " gradients = {\"dx\": dx, \"da0\": da0, \"dWf\": dWf,\"dbf\": dbf, \"dWi\": dWi,\"dbi\": dbi,\n", " \"dWc\": dWc,\"dbc\": dbc, \"dWo\": dWo,\"dbo\": dbo}\n", " \n", " return gradients" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "sBipd8MoELGy" }, "outputs": [], "source": [ "np.random.seed(1)\n", "x_tmp = np.random.randn(3,10,7)\n", "a0_tmp = np.random.randn(5,10)\n", "\n", "parameters_tmp = {}\n", "parameters_tmp['Wf'] = np.random.randn(5, 5+3)\n", "parameters_tmp['bf'] = np.random.randn(5,1)\n", "parameters_tmp['Wi'] = np.random.randn(5, 5+3)\n", "parameters_tmp['bi'] = np.random.randn(5,1)\n", "parameters_tmp['Wo'] = np.random.randn(5, 5+3)\n", "parameters_tmp['bo'] = np.random.randn(5,1)\n", "parameters_tmp['Wc'] = np.random.randn(5, 5+3)\n", "parameters_tmp['bc'] = np.random.randn(5,1)\n", "parameters_tmp['Wy'] = np.zeros((2,5)) # unused, but needed for lstm_forward\n", "parameters_tmp['by'] = np.zeros((2,1)) # unused, but needed for lstm_forward\n", "\n", "a_tmp, y_tmp, c_tmp, caches_tmp = lstm_forward(x_tmp, a0_tmp, parameters_tmp)\n", "\n", "da_tmp = np.random.randn(5, 10, 4)\n", "gradients_tmp = lstm_backward(da_tmp, caches_tmp)\n", "\n", "print(\"gradients[\\\"dx\\\"][1][2] =\", gradients_tmp[\"dx\"][1][2])\n", "print(\"gradients[\\\"dx\\\"].shape =\", gradients_tmp[\"dx\"].shape)\n", "print(\"gradients[\\\"da0\\\"][2][3] =\", gradients_tmp[\"da0\"][2][3])\n", "print(\"gradients[\\\"da0\\\"].shape =\", gradients_tmp[\"da0\"].shape)\n", "print(\"gradients[\\\"dWf\\\"][3][1] =\", gradients_tmp[\"dWf\"][3][1])\n", "print(\"gradients[\\\"dWf\\\"].shape =\", gradients_tmp[\"dWf\"].shape)\n", "print(\"gradients[\\\"dWi\\\"][1][2] =\", gradients_tmp[\"dWi\"][1][2])\n", "print(\"gradients[\\\"dWi\\\"].shape =\", gradients_tmp[\"dWi\"].shape)\n", "print(\"gradients[\\\"dWc\\\"][3][1] =\", gradients_tmp[\"dWc\"][3][1])\n", "print(\"gradients[\\\"dWc\\\"].shape =\", gradients_tmp[\"dWc\"].shape)\n", "print(\"gradients[\\\"dWo\\\"][1][2] =\", gradients_tmp[\"dWo\"][1][2])\n", "print(\"gradients[\\\"dWo\\\"].shape =\", gradients_tmp[\"dWo\"].shape)\n", "print(\"gradients[\\\"dbf\\\"][4] =\", gradients_tmp[\"dbf\"][4])\n", 
"print(\"gradients[\\\"dbf\\\"].shape =\", gradients_tmp[\"dbf\"].shape)\n", "print(\"gradients[\\\"dbi\\\"][4] =\", gradients_tmp[\"dbi\"][4])\n", "print(\"gradients[\\\"dbi\\\"].shape =\", gradients_tmp[\"dbi\"].shape)\n", "print(\"gradients[\\\"dbc\\\"][4] =\", gradients_tmp[\"dbc\"][4])\n", "print(\"gradients[\\\"dbc\\\"].shape =\", gradients_tmp[\"dbc\"].shape)\n", "print(\"gradients[\\\"dbo\\\"][4] =\", gradients_tmp[\"dbo\"][4])\n", "print(\"gradients[\\\"dbo\\\"].shape =\", gradients_tmp[\"dbo\"].shape)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "mM9UT18cELG0" }, "source": [ "**Expected Output**:\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
\n", " gradients[\"dx\"][1][2] =\n", " \n", " [0.00218254 0.28205375 -0.48292508 -0.43281115]\n", "
\n", " gradients[\"dx\"].shape =\n", " \n", " (3, 10, 4)\n", "
\n", " gradients[\"da0\"][2][3] =\n", " \n", " 0.312770310257\n", "
\n", " gradients[\"da0\"].shape =\n", " \n", " (5, 10)\n", "
\n", " gradients[\"dWf\"][3][1] = \n", " \n", " -0.0809802310938\n", "
\n", " gradients[\"dWf\"].shape =\n", " \n", " (5, 8)\n", "
\n", " gradients[\"dWi\"][1][2] = \n", " \n", " 0.40512433093\n", "
\n", " gradients[\"dWi\"].shape = \n", " \n", " (5, 8)\n", "
\n", " gradients[\"dWc\"][3][1] = \n", " \n", " -0.0793746735512\n", "
\n", " gradients[\"dWc\"].shape = \n", " \n", " (5, 8)\n", "
\n", " gradients[\"dWo\"][1][2] = \n", " \n", " 0.038948775763\n", "
\n", " gradients[\"dWo\"].shape = \n", " \n", " (5, 8)\n", "
\n", " gradients[\"dbf\"][4] = \n", " \n", " [-0.15745657]\n", "
\n", " gradients[\"dbf\"].shape = \n", " \n", " (5, 1)\n", "
\n", " gradients[\"dbi\"][4] = \n", " \n", " [-0.50848333]\n", "
\n", " gradients[\"dbi\"].shape = \n", " \n", " (5, 1)\n", "
\n", " gradients[\"dbc\"][4] = \n", " \n", " [-0.42510818]\n", "
\n", " gradients[\"dbc\"].shape = \n", " \n", " (5, 1)\n", "
\n", " gradients[\"dbo\"][4] = \n", " \n", " [ -0.17958196]\n", "
\n", " gradients[\"dbo\"].shape = \n", " \n", " (5, 1)\n", "
" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "0vKIm6eCELG0" }, "source": [ "### Congratulations on completing this assignment! \n", "\n", "You now understand how recurrent neural networks work! In the next exercise, you'll use an RNN to build a character-level language model. See you there! " ] } ], "metadata": { "coursera": { "schema_names": [ "DLSC5W1-A1" ] }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" } }, "nbformat": 4, "nbformat_minor": 1 }