{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# TensorFlow Scan Examples\n", "#### By [Rob DiPietro](http://rdipietro.github.io) – Version 0.32 – April 28, 2016." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Post or Jupyter Notebook?\n", "\n", "This work is available both as a [post](http://rdipietro.github.io/tensorflow-scan-examples/) and as a [Jupyter notebook](https://github.com/rdipietro/jupyter-notebooks/tree/master/tensorflow-scan-examples). If you see any mistakes or have any questions, please open a [GitHub issue](https://github.com/rdipietro/jupyter-notebooks/issues)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Contents\n", "\n", "- [Overview](#overview)\n", "- [Preliminaries](#preliminaries)\n", "- [Hard Coding the Cumulative Sum](#hard-coding-the-cumulative-sum)\n", "- [Learning the Cumulative Sum](#learning-the-cumulative-sum)\n", " - [Generating Inputs and Targets](#generating-inputs-and-targets)\n", " - [Defining the RNN Model from Scratch](#defining-the-rnn-model-from-scratch)\n", " - [Defining an Optimizer](#defining-an-optimizer)\n", " - [Training](#training)\n", " - [Testing Qualitatively](#testing-qualitatively)\n", " - [Ideas for Playing with the Code](#ideas-for-playing-with-the-code)\n", "- [Some Final Thoughts](#some-final-thoughts)\n", "- [Acknowledgements](#acknowledgements)\n", "- [About Me](#about-me)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## Overview\n", "\n", "`scan` was [recently made available](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/functional_ops.py) in TensorFlow.\n", "\n", "`scan` lets us write loops inside a computation graph, allowing backpropagation and all.\n", "We could explicitly unroll the loops ourselves, creating new graph nodes for each loop\n", "iteration, but then the number of iterations is fixed instead of dynamic, and graph\n", "creation can be [extremely slow](https://github.com/tensorflow/tensorflow/issues/511).\n", "\n", "Let's go over two examples. First, we'll create a simple cumulative-sum operation using\n", "`scan`. For example, `[1, 2, 2, 2]` as input will produce `[1, 3, 5, 7]` as output. Second,\n", "we'll build a toy RNN from scratch, and we'll have it *learn* the cumulative-sum operation\n", "from example input, target sequences. For example, the RNN will learn to map `[1, 2, 2, 2]`\n", "to `[1, 3, 5, 7]` (approximately)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## Hard Coding the Cumulative Sum\n", "\n", "Let's just start with code:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ 1. 3. 5. 7.]\n" ] } ], "source": [ "from __future__ import division, print_function\n", "import tensorflow as tf\n", "\n", "def fn(previous_output, current_input):\n", " return previous_output + current_input\n", "\n", "elems = tf.Variable([1.0, 2.0, 2.0, 2.0])\n", "elems = tf.identity(elems)\n", "initializer = tf.constant(0.0)\n", "out = tf.scan(fn, elems, initializer=initializer)\n", "\n", "with tf.Session() as sess:\n", " sess.run(tf.initialize_all_variables())\n", " print(sess.run(out))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This example shows how `scan` is used: it loops over the first dimension of `elems`, at each step applying `fn`, which takes in the previous step's output and the current step's input. 
The very first step's previous output is given by `initializer`:\n", "- Iteration 0: `fn(0.0, 1.0) == 1.0`\n", "- Iteration 1: `fn(1.0, 2.0) == 3.0`\n", "- Iteration 2: `fn(3.0, 2.0) == 5.0`\n", "- Iteration 3: `fn(5.0, 2.0) == 7.0`\n", "\n", "And why the `elems = tf.identity(elems)` line? `scan` is new, and this is just a temporary workaround for [a bug](https://github.com/tensorflow/tensorflow/issues/1725)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## Learning the Cumulative Sum\n", "\n", "Now a more complex example: we'll build a recurrent neural network and *learn* the cumulative-sum function from data." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": true }, "outputs": [], "source": [ "%reset -f" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false, "scrolled": true }, "outputs": [], "source": [ "from __future__ import division, print_function\n", "\n", "%matplotlib inline\n", "from IPython.display import set_matplotlib_formats\n", "set_matplotlib_formats('svg')\n", "\n", "import matplotlib.pyplot as plt\n", "\n", "import os\n", "import shutil\n", "import numpy as np\n", "import tensorflow as tf\n", "from tensorflow.python.ops import functional_ops" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### Generating Inputs and Targets\n", "\n", "First let's write a function for generating input, target sequences, one pair at a time. We'll limit ourselves to inputs with independent time steps, drawn from a standard normal distribution.\n", "\n", "Each sequence is 2-D with time on the first axis and inputs or targets on the second. (This way it'd be easy to generalize to the case of multiple inputs/targets per time step.)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def input_target_generator(min_duration=5, max_duration=50):\n", " \"\"\" Generate toy input, target sequences.\n", " \n", " Each input sequence has values that are drawn from the standard normal\n", " distribution, and each target sequence is the corresponding cumulative sum.\n", " Sequence durations are chosen at random using a discrete uniform\n", " distribution over `[min_duration, max_duration]`.\n", " \n", " Args:\n", " min_duration: A positive integer. The minimum sequence duration.\n", " max_duration: A positive integer. The maximum sequence duration.\n", "\n", " Yields:\n", " A tuple,\n", " inputs: A 2-D float32 NumPy array with shape `[duration, 1]`.\n", " targets: A 2-D float32 NumPy array with shape `[duration, 1]`.\n", " \"\"\"\n", " \n", " while True:\n", " # + 1 so durations cover the inclusive range [min_duration, max_duration].\n", " duration = np.random.randint(min_duration, max_duration + 1)\n", " inputs = np.random.randn(duration).astype(np.float32)\n", " targets = np.cumsum(inputs).astype(np.float32)\n", " yield inputs.reshape(-1, 1), targets.reshape(-1, 1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### Defining the RNN Model from Scratch\n", "\n", "Next let's define the RNN model. The code is a bit verbose because it's meant to be self-explanatory, but the pieces are simple:\n", "- The update for the vanilla RNN is $h_t = \tanh( W_h h_{t-1} + W_x x_t + b )$.\n", "- `_vanilla_rnn_step` is the core of the vanilla RNN: it applies this update by taking in a previous hidden state along with a current input and producing a new hidden state. 
(The only difference below is that both sides of the equation are transposed, and each variable is replaced with its transpose.)\n", "- `_compute_predictions` applies `_vanilla_rnn_step` to all time steps using `scan`, resulting in hidden states for each time step, and then applies a final linear layer to each state to yield final predictions.\n", "- `_compute_loss` just computes the mean squared Euclidean distance between the ground-truth targets and our predictions." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [], "source": [ "class Model(object):\n", " \n", " def __init__(self, hidden_layer_size, input_size, target_size, init_scale=0.1):\n", " \"\"\" Create a vanilla RNN.\n", " \n", " Args:\n", " hidden_layer_size: An integer. The number of hidden units.\n", " input_size: An integer. The number of inputs per time step.\n", " target_size: An integer. The number of targets per time step.\n", " init_scale: A float. All weight matrices will be initialized using\n", " a uniform distribution over [-init_scale, init_scale].\n", " \"\"\"\n", " \n", " self.hidden_layer_size = hidden_layer_size\n", " self.input_size = input_size\n", " self.target_size = target_size\n", " self.init_scale = init_scale\n", " \n", " self._inputs = tf.placeholder(tf.float32, shape=[None, input_size],\n", " name='inputs')\n", " self._targets = tf.placeholder(tf.float32, shape=[None, target_size],\n", " name='targets')\n", " \n", " initializer = tf.random_uniform_initializer(-init_scale, init_scale)\n", " with tf.variable_scope('model', initializer=initializer):\n", " self._states, self._predictions = self._compute_predictions()\n", " self._loss = self._compute_loss()\n", " \n", " def _vanilla_rnn_step(self, h_prev, x):\n", " \"\"\" Vanilla RNN step.\n", "\n", " Args:\n", " h_prev: A 1-D float32 Tensor with shape `[hidden_layer_size]`.\n", " x: A 1-D float32 Tensor with shape `[input_size]`.\n", "\n", " Returns:\n", " The updated state `h`, with the same shape as `h_prev`.\n", " \"\"\"\n", "\n", " h_prev = tf.reshape(h_prev, [1, self.hidden_layer_size])\n", " x = tf.reshape(x, [1, self.input_size])\n", "\n", " with tf.variable_scope('rnn_block'):\n", " W_h = tf.get_variable(\n", " 'W_h', shape=[self.hidden_layer_size, self.hidden_layer_size])\n", " W_x = tf.get_variable(\n", " 'W_x', shape=[self.input_size, self.hidden_layer_size])\n", " b = tf.get_variable('b', shape=[self.hidden_layer_size],\n", " initializer=tf.constant_initializer(0.0))\n", " h = tf.tanh( tf.matmul(h_prev, W_h) + tf.matmul(x, W_x) + b )\n", " h = tf.reshape(h, [self.hidden_layer_size], name='h')\n", " \n", " return h\n", "\n", " def _compute_predictions(self):\n", " \"\"\" Compute vanilla-RNN states and predictions. \"\"\"\n", "\n", " with tf.variable_scope('states'):\n", " initial_state = tf.zeros([self.hidden_layer_size],\n", " name='initial_state')\n", " states = tf.scan(self._vanilla_rnn_step, self.inputs,\n", " initializer=initial_state, name='states')\n", "\n", " with tf.variable_scope('predictions'):\n", " W_pred = tf.get_variable(\n", " 'W_pred', shape=[self.hidden_layer_size, self.target_size])\n", " b_pred = tf.get_variable('b_pred', shape=[self.target_size],\n", " initializer=tf.constant_initializer(0.0))\n", " predictions = tf.add(tf.matmul(states, W_pred), b_pred, name='predictions')\n", " \n", " return states, predictions\n", "\n", " def _compute_loss(self):\n", " \"\"\" Compute l2 loss between targets and predictions. 
\"\"\"\n", "\n", " with tf.variable_scope('loss'):\n", " loss = tf.reduce_mean((self.targets - self.predictions)**2, name='loss')\n", " return loss\n", " \n", " @property\n", " def inputs(self):\n", " \"\"\" A 2-D float32 placeholder with shape `[dynamic_duration, input_size]`. \"\"\"\n", " return self._inputs\n", " \n", " @property\n", " def targets(self):\n", " \"\"\" A 2-D float32 placeholder with shape `[dynamic_duration, target_size]`. \"\"\"\n", " return self._targets\n", " \n", " @property\n", " def states(self):\n", " \"\"\" A 2-D float32 Tensor with shape `[dynamic_duration, hidden_layer_size]`. \"\"\"\n", " return self._states\n", " \n", " @property\n", " def predictions(self):\n", " \"\"\" A 2-D float32 Tensor with shape `[dynamic_duration, target_size]`. \"\"\"\n", " return self._predictions\n", " \n", " @property\n", " def loss(self):\n", " \"\"\" A 0-D float32 Tensor. \"\"\"\n", " return self._loss" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### Defining an Optimizer\n", "\n", "Next let's write an optimizer class. We'll use vanilla gradient descent after gradient \"clipping,\" according to the method described by [Pascanu, Mikolov, and Bengio](http://arxiv.org/abs/1211.5063).\n", "\n", "The gradient-clipping method is simple and could instead be called gradient scaling: if the global norm is smaller than `max_global_norm`, do nothing. Otherwise, rescale all gradients so that the global norm becomes `max_global_norm`.\n", "\n", "What is the global norm? It's just the norm over *all* gradients, as if they were concatenated together to form one global vector." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [], "source": [ "class Optimizer(object):\n", " \n", " def __init__(self, loss, initial_learning_rate, num_steps_per_decay,\n", " decay_rate, max_global_norm=1.0):\n", " \"\"\" Create a simple optimizer.\n", " \n", " This optimizer clips gradients and uses vanilla stochastic gradient\n", " descent with a learning rate that decays exponentially.\n", " \n", " Args:\n", " loss: A 0-D float32 Tensor.\n", " initial_learning_rate: A float.\n", " num_steps_per_decay: An integer.\n", " decay_rate: A float. The factor applied to the learning rate\n", " every `num_steps_per_decay` steps.\n", " max_global_norm: A float. If the global gradient norm is less than\n", " this, do nothing. Otherwise, rescale all gradients so that\n", " the global norm becomes `max_global_norm`.\n", " \"\"\"\n", " \n", " trainables = tf.trainable_variables()\n", " grads = tf.gradients(loss, trainables)\n", " grads, _ = tf.clip_by_global_norm(grads, clip_norm=max_global_norm)\n", " grad_var_pairs = zip(grads, trainables)\n", " \n", " global_step = tf.Variable(0, trainable=False, dtype=tf.int32)\n", " learning_rate = tf.train.exponential_decay(\n", " initial_learning_rate, global_step, num_steps_per_decay,\n", " decay_rate, staircase=True)\n", " optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n", " self._optimize_op = optimizer.apply_gradients(grad_var_pairs,\n", " global_step=global_step)\n", " \n", " @property\n", " def optimize_op(self):\n", " \"\"\" An Operation that takes one optimization step. \"\"\"\n", " return self._optimize_op" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### Training\n", "\n", "Next let's define and run our training function. This is where we'll run the main optimization loop and export TensorBoard summaries."
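, "\n", "\n", "But first, a quick aside: the gradient scaling described in the previous section boils down to just a few lines. The sketch below illustrates that rule in plain NumPy. It's only an illustration — the `Optimizer` class already relies on `tf.clip_by_global_norm` for this, and `scale_by_global_norm` is a hypothetical helper that isn't used anywhere else in this notebook.\n", "\n", "```python\n", "import numpy as np\n", "\n", "def scale_by_global_norm(grads, max_global_norm=1.0):\n", "    # The global norm is the norm of all gradients concatenated into one vector.\n", "    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))\n", "    if global_norm <= max_global_norm:\n", "        return grads  # Small enough: leave the gradients untouched.\n", "    # Otherwise rescale so that the global norm becomes max_global_norm.\n", "    scale = max_global_norm / global_norm\n", "    return [g * scale for g in grads]\n", "```"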
] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def train(sess, model, optimizer, generator, num_optimization_steps,\n", " logdir='./logdir'):\n", " \"\"\" Train.\n", " \n", " Args:\n", " sess: A Session.\n", " model: A Model.\n", " optimizer: An Optimizer.\n", " generator: A generator that yields `(inputs, targets)` tuples, with\n", " `inputs` and `targets` both having shape `[dynamic_duration, 1]`.\n", " num_optimization_steps: An integer.\n", " logdir: A string. The log directory.\n", " \"\"\"\n", " \n", " if os.path.exists(logdir):\n", " shutil.rmtree(logdir)\n", " \n", " tf.scalar_summary('loss', model.loss)\n", " \n", " ema = tf.train.ExponentialMovingAverage(decay=0.99)\n", " update_loss_ema = ema.apply([model.loss])\n", " loss_ema = ema.average(model.loss)\n", " tf.scalar_summary('loss_ema', loss_ema)\n", " \n", " summary_op = tf.merge_all_summaries()\n", " summary_writer = tf.train.SummaryWriter(logdir=logdir, graph=sess.graph)\n", " \n", " sess.run(tf.initialize_all_variables())\n", " for step in xrange(num_optimization_steps):\n", " inputs, targets = generator.next()\n", " loss_ema_, summary, _, _ = sess.run(\n", " [loss_ema, summary_op, optimizer.optimize_op, update_loss_ema],\n", " {model.inputs: inputs, model.targets: targets})\n", " summary_writer.add_summary(summary, global_step=step)\n", " print('\\rStep %d. Loss EMA: %.6f.' % (step+1, loss_ema_), end='')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can train our model:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Step 45000. Loss EMA: 0.024024." ] } ], "source": [ "generator = input_target_generator()\n", "model = Model(hidden_layer_size=256, input_size=1, target_size=1, init_scale=0.1)\n", "optimizer = Optimizer(model.loss, initial_learning_rate=1e-2, num_steps_per_decay=15000,\n", " decay_rate=0.1, max_global_norm=1.0)\n", "\n", "sess = tf.Session()\n", "train(sess, model, optimizer, generator, num_optimization_steps=45000)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After running `tensorboard --logdir ./logdir` and navigating to [http://localhost:6006](http://localhost:6006), we can view our loss summaries. Here the exponential moving average is especially helpful because our raw losses correspond to individual sequences (and are therefore very noisy estimates).\n", "\n", "![Loss (Raw)](images/loss_raw.png)\n", "\n", "![Loss (EMA)](images/loss_ema.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### Testing Qualitatively\n", "\n", "Finally let's write a function to test the trained RNN qualitatively: we'll plot the original inputs (random real numbers), the ground-truth target (the cumulative sum), and our trained RNN's predictions (hopefully matching the cumulative sum)." ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def test_qualitatively(sess, model, generator, num_examples=5, figsize=(10, 3)):\n", " \"\"\" Test qualitatively.\n", " \n", " Args:\n", " sess: A Session.\n", " model: A Model.\n", " generator: A generator that yields `(inputs, targets)` tuples, with\n", " `inputs` and `targets` both having shape `[dynamic_duration, 1]`.\n", " num_examples: An integer. 
The number of examples to plot.\n", " figsize: A tuple `(width, height)`, the size of each example's figure.\n", " \"\"\"\n", " \n", " for i in xrange(num_examples):\n", " \n", " inputs, targets = generator.next()\n", " predictions = sess.run(model.predictions, {model.inputs: inputs})\n", " \n", " fig, ax = plt.subplots(nrows=2, sharex=True, figsize=figsize)\n", " ax[0].plot(inputs.flatten(), label='inputs')\n", " ax[0].legend()\n", " ax[1].plot(targets.flatten(), label='targets')\n", " ax[1].plot(predictions.flatten(), 'o', label='predictions')\n", " ax[1].legend()" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false, "scrolled": false }, "outputs": [ { "data": { "text/plain": [ "[Five example figures: for each generated test sequence, the top panel plots the inputs and the bottom panel plots the targets (cumulative sums) together with the RNN's predictions.]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "test_qualitatively(sess, model, generator, figsize=(8, 2))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### Ideas for Playing with the Code\n", "\n", "If you'd like to, you can download this notebook and experiment with the code yourself. Here are a few questions to help spark some ideas:\n", "\n", "- **How many hidden units are needed?** The cumulative sum can be computed using a *linear* transformation at each time step with just one hidden unit: we just initialize it to $0.0$ and then add $x_t$ at each time step. This is *not* the solution that the RNN learned (we used 256 hidden units, and each is squashed between -1.0 and 1.0 at each time step). Try reducing the hidden layer size by factors of 2, and see if/when performance takes a hit.\n", "- **How sensitive is training to the initial learning rate?** Try varying the initial learning rate above by one order of magnitude at a time. Would you say that performance depends heavily on this parameter?\n", "- **How sensitive is training to the initial learning rate without gradient clipping?** In the implementation above, we clipped our gradients (or in reality we scaled them globally). Try training without scaled gradients and see if you can find an initial learning rate that works. Is convergence more sensitive or less sensitive to the initial learning rate?\n", "- **Does the RNN generalize to other inputs?** Above, we trained and tested the RNN using inputs with time steps that were drawn from a standard normal distribution. See what happens if you test this trained model using other inputs, for example a sine wave or a function that doesn't have an average value of 0.0." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## Some Final Thoughts\n", "\n", "Aside from the simple `scan` example, we built and trained a vanilla RNN from scratch. The code is written with simplicity in mind and is largely a toy example. Here are some reasons why:\n", "\n", "- **We're using a batch size of 1.** This is inefficient: it's analogous to performing 20 different vector-vector multiplies to multiply one vector by twenty others, instead of performing one matrix-vector multiply.\n", "- **We're using one hidden layer.** We could easily extend the above code to handle multiple layers: we'd run `scan` to get the first layer, then run it again to get the second layer, and so on. (The states of the previous layer become the inputs to the next layer.)\n", "- **The vanilla RNN isn't often used because it has trouble learning.** In practice we'd replace the vanilla-RNN block with a long short-term memory (LSTM) block or a gated recurrent unit (GRU) block, both of which were designed to avoid the vanishing gradient problem. Alternatively, one could use the recent [identity-initialization-plus-ReLU modification of the vanilla RNN](http://arxiv.org/abs/1504.00941).\n", "- **We're using full backpropagation through time, performing a complete forward-backward pass for *each* gradient update.** This means that the number of updates per unit time is low, leading to longer training times. 
An alternative is to use truncated backpropagation through time, updating after shorter forward-backward passes in time, but still capturing long-term dependencies by carrying the hidden states over from pass to pass." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## Acknowledgements\n", "\n", "I'd like to thank my advisor, [Gregory D. Hager](http://www.cs.jhu.edu/~hager/), for being supportive of this work." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## About Me\n", "\n", "![Rob](images/rob_small.jpg)\n", "\n", "I'm [Rob DiPietro](http://rdipietro.github.com), a PhD student in the [Department of Computer Science at Johns Hopkins](https://www.cs.jhu.edu/), where I'm advised by [Gregory D. Hager](http://www.cs.jhu.edu/~hager/). I'm part of the [Computational Interaction and Robotics Laboratory](http://cirl.lcsr.jhu.edu/) and the [Malone Center for Engineering in Healthcare](http://malonecenter.jhu.edu/). Previously, I was an associate research-staff member at [MIT Lincoln Laboratory](http://www.ll.mit.edu/) and a BS/MS student at [Northeastern University](http://www.northeastern.edu/).\n", "\n", "You can find my other tutorials [here](http://rdipietro.github.io/#tutorials)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.10" } }, "nbformat": 4, "nbformat_minor": 0 }