{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Learning to Write like Shakespeare: Long short-term memory\n", "\n", "In this chapter, we will:\n", "\n", "- Implement character language modeling\n", "- Learn about truncated Backpropagation\n", "- Explore the Vanishing & Exploding Gradients problem\n", "- Play around with a toy example of RNN backpropagation\n", "- Implement Long Short-term memory (LSTM) cells\n", "\n", "> [William Shakespeare] \"Lord, what fools these mortals be\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Requirements\n", "\n", "First, let's import the `autograd` framework we built in the previous chapter:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "class Tensor(object):\n", " def __init__(self, data, autograd=False, parents=None, creation_op=None, id=None):\n", " self.data = np.array(data)\n", " self.autograd = autograd\n", " self.parents = parents\n", " self.children = {}\n", " self.creation_op = creation_op\n", " self.grad = None\n", " if (id is None): id = np.random.randint(100)\n", " self.id = id\n", " \n", " if (parents is not None):\n", " for parent in parents:\n", " if (self.id not in parent.children):\n", " parent.children[self] = 1\n", " else:\n", " parent.children[self] += 1\n", " \n", " def all_grads_propagated(self):\n", " for _, grads_count in self.children.items():\n", " if (grads_count != 0): return False\n", " return True\n", " \n", " def __add__(self, other):\n", " if (self.autograd and other.autograd):\n", " return Tensor(self.data + other.data, \n", " autograd=True,\n", " parents=[self, other], \n", " creation_op=\"+\")\n", " return Tensor(self.data + other.data)\n", " \n", " def __sub__(self, other):\n", " if (self.autograd and other.autograd):\n", " return Tensor(self.data-other.data,\n", " autograd=True,\n", " parents=[self, other],\n", " creation_op=\"-\")\n", " return Tensor(self.data-other.data)\n", " \n", " def __mul__(self, other):\n", " if (self.autograd and other.autograd):\n", " return Tensor(self.data * other.data,\n", " autograd=True,\n", " parents=[self, other],\n", " creation_op=\"*\")\n", " return Tensor(self.data * other.data)\n", " \n", " def sum(self, dim):\n", " if (self.autograd):\n", " return Tensor(self.data.sum(dim),\n", " autograd=True,\n", " parents=[self],\n", " creation_op=\"sum_\" + str(dim))\n", " return Tensor(self.data.sum(dim))\n", "\n", " def __neg__(self):\n", " if (self.autograd):\n", " return Tensor(self.data * -1,\n", " autograd=True,\n", " parents=[self],\n", " creation_op=\"neg\")\n", " return Tensor(self.data * -1)\n", "\n", " def __repr__(self):\n", " return str('Tensor(' + self.id.__repr__() + ')')\n", "\n", " def __str__(self):\n", " return str(self.data.__str__())\n", " \n", " def expand(self, dim, copies):\n", " trans_cmd = list(range(0, len(self.data.shape)))\n", " trans_cmd.insert(dim, len(self.data.shape))\n", " new_shape = list(self.data.shape) + [copies]\n", " new_data = self.data.repeat(copies).reshape(new_shape)\n", " new_data = new_data.transpose(trans_cmd)\n", " \n", " if (self.autograd):\n", " return Tensor(new_data, \n", " autograd=True,\n", " parents=[self],\n", " creation_op=\"expand_\"+str(dim))\n", " return Tensor(new_data)\n", " \n", " def transpose(self):\n", " if (self.autograd):\n", " return Tensor(self.data.transpose(),\n", " autograd=True,\n", " parents=[self],\n", " creation_op=\"T\")\n", " return Tensor(self.data.transpose())\n", " \n", " def mm(self, x):\n", " 
if (self.autograd):\n", " return Tensor(self.data.dot(x.data),\n", " autograd=True,\n", " parents=[self, x],\n", " creation_op=\"mm\")\n", " return Tensor(self.data.dot(x.data))\n", " \n", " def sigmoid(self):\n", " if (self.autograd):\n", " return Tensor(1/(1+np.exp(-self.data)),\n", " autograd=True,\n", " parents=[self],\n", " creation_op=\"sigmoid\")\n", " return Tensor(1/(1+np.exp(-self.data)))\n", " \n", " def tanh(self):\n", " if (self.autograd):\n", " return Tensor(np.tanh(self.data),\n", " autograd=True,\n", " parents=[self],\n", " creation_op=\"tanh\")\n", " return Tensor(np.tanh(self.data))\n", " \n", " def index_select(self, indices):\n", " if (self.autograd):\n", " new = Tensor(self.data[indices.data],\n", " autograd=True,\n", " parents=[self],\n", " creation_op=\"index_select\")\n", " new.index_select_indices = indices\n", " return new\n", " return Tensor(self.data[indices.data])\n", " \n", " def softmax(self):\n", " temp = np.exp(self.data)\n", " softmax_output = temp / np.sum(temp,\n", " axis=len(self.data.shape)-1,\n", " keepdims=True)\n", " return softmax_output\n", " \n", " def cross_entropy(self, target_indices):\n", " temp = np.exp(self.data)\n", " softmax_output = temp / np.sum(temp, axis=len(self.data.shape)-1, keepdims=True)\n", " t = target_indices.data.flatten()\n", " p = softmax_output.reshape(len(t), -1)\n", " target_dist = np.eye(p.shape[1])[t]\n", " loss = - (np.log(p) * (target_dist)).sum(1).mean()\n", "\n", " if (self.autograd):\n", " out = Tensor(loss,\n", " autograd=True,\n", " parents=[self],\n", " creation_op=\"cross_entropy\")\n", " out.softmax_output = softmax_output\n", " out.target_dist = target_dist\n", " return out\n", " return Tensor(loss)\n", " \n", " def backward(self, grad=None, grad_origin=None):\n", " if (self.autograd):\n", " if (grad == None):\n", " grad = Tensor(np.ones_like(self.data))\n", " if (grad_origin is not None):\n", " if (self.children[grad_origin] == 0):\n", " raise Exception(\"cannot backprop more than once\")\n", " else:\n", " self.children[grad_origin] -= 1\n", " if (self.grad is None):\n", " self.grad = grad\n", " else:\n", " self.grad += grad\n", " if ((self.parents is not None) and (self.all_grads_propagated() or grad_origin is None)):\n", " if (self.creation_op == \"+\"):\n", " self.parents[0].backward(self.grad, grad_origin=self)\n", " self.parents[1].backward(self.grad, grad_origin=self)\n", " if (self.creation_op == \"neg\"):\n", " self.parents[0].backward(self.grad.__neg__())\n", " if (self.creation_op == '-'):\n", " self.parents[0].backward(self.grad, grad_origin=self)\n", " self.parents[1].backward(self.grad.__neg__(), grad_origin=self)\n", " if (self.creation_op == '*'):\n", " self.parents[0].backward(self.grad*self.parents[1], grad_origin=self)\n", " self.parents[1].backward(self.grad*self.parents[0], grad_origin=self)\n", " if (self.creation_op == 'mm'):\n", " activation = self.parents[0] # usually an activation function\n", " weights = self.parents[1] # usually a weights matrix\n", " activation.backward(self.grad.mm(weights.transpose()))\n", " weights.backward(self.grad.transpose().mm(activation).transpose())\n", " if (self.creation_op == 'T'):\n", " self.parents[0].backward(self.grad.transpose())\n", " if (\"sum\" in self.creation_op):\n", " dim = int(self.creation_op.split(\"_\")[1])\n", " ds = self.parents[0].data.shape[dim]\n", " self.parents[0].backward(self.grad.expand(dim, ds))\n", " if (\"expand\" in self.creation_op):\n", " dim = int(self.creation_op.split(\"_\")[1])\n", " 
self.parents[0].backward(self.grad.sum(dim))\n", " if (self.creation_op == 'sigmoid'):\n", " ones = Tensor(np.ones_like(self.grad.data))\n", " self.parents[0].backward(self.grad * (self * (ones - self)))\n", " if (self.creation_op == 'tanh'):\n", " ones = Tensor(np.ones_like(self.grad.data))\n", " self.parents[0].backward(self.grad * (ones - (self * self)))\n", " if (self.creation_op == 'index_select'):\n", " new_grad = np.zeros_like(self.parents[0].data)\n", " indices_ = self.index_select_indices.data.flatten()\n", " grad_ = grad.data.reshape(len(indices_), -1)\n", " for i in range(len(indices_)):\n", " new_grad[indices_[i]] += grad_[i]\n", " self.parents[0].backward(Tensor(new_grad))\n", " if (self.creation_op == 'cross_entropy'):\n", " dx = self.softmax_output - self.target_dist\n", " self.parents[0].backward(Tensor(dx))" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "class Layer(object):\n", " def __init__(self):\n", " self.parameters = list()\n", " \n", " def get_parameters(self):\n", " return self.parameters" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "class Tanh(Layer):\n", " def __init__(self):\n", " super().__init__()\n", " \n", " def forward(self, input):\n", " return input.tanh()" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "class Sigmoid(Layer):\n", " def __init__(self):\n", " super().__init__()\n", " \n", " def forward(self, input):\n", " return input.sigmoid()" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "class Linear(Layer):\n", " def __init__(self, n_inputs, n_outputs, bias=True):\n", " super().__init__()\n", " \n", " self.use_bias = bias\n", " W = np.random.randn(n_inputs, n_outputs)*np.sqrt(2.0/n_inputs)\n", " self.weight = Tensor(W, autograd=True)\n", " if self.use_bias:\n", " self.bias = Tensor(np.zeros(n_outputs), autograd=True)\n", " self.parameters.append(self.weight)\n", " if self.use_bias:\n", " self.parameters.append(self.bias)\n", " \n", " def forward(self, input):\n", " # expand for broadcasting\n", " if self.use_bias:\n", " return input.mm(self.weight)+self.bias.expand(0,len(input.data))\n", " return input.mm(self.weight)" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "class Embedding(Layer):\n", " def __init__(self, vocab_size, dim):\n", " super().__init__()\n", " self.vocab_size = vocab_size\n", " self.dim = dim\n", " # this initialization style is a convention from word2vec\n", " self.weight = Tensor((np.random.rand(vocab_size, dim) - 0.5) / dim, autograd=True)\n", " self.parameters.append(self.weight)\n", " \n", " def forward(self, input):\n", " return self.weight.index_select(input)" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "# Cross Entropy Layer\n", "class CrossEntropyLoss(object):\n", " def __init__(self):\n", " super().__init__()\n", " \n", " def forward(self, input, target):\n", " return input.cross_entropy(target)" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "class SGD(object):\n", " def __init__(self, parameters, alpha):\n", " self.parameters = parameters\n", " self.alpha = alpha\n", " \n", " def zero(self):\n", " for p in self.parameters:\n", " p.grad.data *= 0\n", " \n", " def step(self, zero=True):\n", " for p in self.parameters:\n", " p.data -= p.grad.data * self.alpha\n", " if (zero):\n", " p.grad.data *= 0" 
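] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check (this cell isn't part of the original chapter, just a minimal sketch with throwaway variables), we can build a tiny computation with the `Tensor` class and confirm that `backward()` produces the gradients we expect:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# tiny autograd check: d(sum(a*b))/da = b and d(sum(a*b))/db = a\n", "a = Tensor([1.0, 2.0, 3.0], autograd=True)\n", "b = Tensor([4.0, 5.0, 6.0], autograd=True)\n", "c = (a * b).sum(0)\n", "c.backward()\n", "print(a.grad)   # expected: [4. 5. 6.]\n", "print(b.grad)   # expected: [1. 2. 3.]"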
] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "class RNNCell(Layer):\n", " def __init__(self, n_inputs, n_hidden, n_output, activation='sigmoid'):\n", " super().__init__()\n", " self.n_inputs = n_inputs\n", " self.n_hidden = n_hidden\n", " self.n_output = n_output\n", " \n", " if (activation == 'sigmoid'):\n", " self.activation = Sigmoid()\n", " elif (activation == 'tanh'):\n", " self.activation = Tanh()\n", " else:\n", " raise Exception(\"Non-Linearity not found\")\n", " \n", " self.w_ih = Linear(n_inputs, n_hidden)\n", " self.w_hh = Linear(n_hidden, n_hidden)\n", " self.w_ho = Linear(n_hidden, n_output)\n", " \n", " self.parameters += self.w_ih.get_parameters()\n", " self.parameters += self.w_hh.get_parameters()\n", " self.parameters += self.w_ho.get_parameters()\n", " \n", " def forward(self, input, hidden):\n", " from_prev_hidden = self.w_hh.forward(hidden)\n", " combined = self.w_ih.forward(input) + from_prev_hidden\n", " new_hidden = self.activation.forward(combined)\n", " output = self.w_ho.forward(new_hidden)\n", " return output, new_hidden\n", "\n", " def init_hidden(self, batch_size=1):\n", " return Tensor(np.zeros((batch_size,self.n_hidden)),autograd=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Character Language Modeling\n", "### Let's tackle a more challenging task with RNN\n", "\n", "In this chapter, we'll attempt language modeling over a much more challenging dataset: **The Works of Shakespeare**.\n", "\n", "Instead of learning to predict next words based on the previous sequence of words, now you'll learn how to predict next characters. So we are building a Character-based Language Model using the Works of Shakespeare." ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "import sys, random, math\n", "from collections import Counter\n", "import numpy as np\n", "\n", "np.random.seed(0)" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "f = open('static/data/Shakespeare/shakespear.txt', 'r')\n", "raw = f.read()\n", "f.close()" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "vocab = list(set(raw))\n", "word2index = {}\n", "for i, word in enumerate(vocab):\n", " word2index[word] = i" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "raw_indices = np.array(list(map(lambda x: word2index[x], raw)))" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "embed = Embedding(vocab_size=len(vocab), dim=8)\n", "model = RNNCell(n_inputs=8, n_hidden=512, n_output=len(vocab))" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "criterion = CrossEntropyLoss()\n", "optimizer = SGD(parameters=model.get_parameters() + embed.get_parameters(), alpha=0.05)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We initialized the embeddings to be of dimensionality 8 and the hidden vector to be of size 512. The output weights are initialized to be of weight of `0`s.\n", "\n", "Finally, we initialize the cross entropy loss function and the stochastic gradient optimizer." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Need for truncated backpropagation\n", "### Backpropagating through 100,000 character is intractable\n", "\n", "One of the more challenging aspects of reading RNN code is the mini-batching logic when feeding in data.\n", "\n", "The previous RNN Setup had an inner loop of 5 words fed to the network to predict the sixth, it turns out that the previous dataset didn't have any example longer than 6 words.\n", "\n", "Even more important is the backpropagation step, in the case of MNIST, the gradients always backpropagated all the way through the network. The same logic applied to a vanilla RNN architecture, with a short loop, we can backpropagate all the way to the first input word.\n", "\n", "We could do this because we aren't feeding that many data points at a time, but the shakespeare dataset has 100K characters, this is too many to backpropagate through, so what should we do?\n", "\n", "**We don't!**, we backpropagate for a fixed number of steps into the past and then stop. This is called **truncated backpropagation**, and it's the industry standard. The length we backpropagate becomes another tunable hyperparameter of the network, like `batch size` and `alpha` (the learning rate)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Truncated Backpropagation\n", "### Technically, It weakens the theoritical Maximum of the neural network \n", "\n", "The downside of using truncated backpropagation is that it limits the memory of the neural network, meaning it shortens the distance a neural network can take to remember things. Cutting off gradients after, let's say five timesteps, means that the neural network can't learn to remember events that are longer than five timesteps in the past.\n", "\n", "For language modeling, the truncation variable is called `bptt`, and it's usually set between `16` and `64`:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "batch_size = 32\n", "bptt = 16 # how far the model can look in the past — input sequence size?\n", "n_batches = int(raw_indices.shape[0]/batch_size)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The other downside of truncated backpropagation is that it makes the internal mini-batching loop a bit more complex.\n", "\n", "We pretend that instead of having one big dataset, we have a bunch of small datasets of `bptt` size.\n", "\n", "Next, we need to group the datasets accordingly:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "trimmed_indices = raw_indices[:n_batches*batch_size]\n", "batched_indices = trimmed_indices.reshape(batch_size, n_batches)\n", "batched_indices = batched_indices.transpose()" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "input_batched_indices = batched_indices[0:-1]\n", "output_batched_indices = batched_indices[1:]" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "n_bptt = int((n_batches - 1) / bptt)" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "input_batches = input_batched_indices[:n_bptt*bptt]\n", "input_batches = input_batches.reshape(n_bptt, bptt, batch_size)\n", "output_batches = output_batched_indices[:n_bptt*bptt]\n", "output_batches = output_batches.reshape(n_bptt, bptt, batch_size)" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [], "source": [ 
"min_loss = 1000" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The top Line makes the dataset an even multiple between `batch_size` and `n_batches`. The decond & third lines reshape the dataset so that each column is a section of the initial `indices` array.\n", "\n", "If `batch_size` was set to 8:" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "That,\n", "[14 6 24 52 40]\n" ] } ], "source": [ "print(raw[0:5])\n", "print(raw_indices[0:5])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Those are the Five Basic Characters in the Shakespeare Dataset.\n", "\n", "Following are the first 5 rows of the output of the transformation combined within `batched_indices`:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[14 19 43 39 58 6 14 35 35 24 24 33 58 35 40 31 35 54 6 43 33 24 35 35\n", " 35 7 35 7 7 35 35 47]\n", " [ 6 31 31 39 32 33 24 55 54 47 8 33 8 52 35 31 3 58 58 31 8 11 3 9\n", " 6 47 27 5 47 5 50 35]\n", " [24 31 10 33 33 24 8 36 58 35 8 43 9 6 25 38 33 55 8 31 9 33 28 33\n", " 33 29 58 40 35 58 58 3]\n", " [52 51 14 28 35 28 3 35 55 47 35 35 19 33 33 41 33 40 54 0 36 35 58 33\n", " 28 8 55 31 53 28 5 58]\n", " [40 41 7 10 25 35 58 28 35 58 25 48 31 35 35 23 47 35 35 58 19 54 25 32\n", " 40 24 8 41 55 33 33 54]]\n" ] } ], "source": [ "print(batched_indices[0:5])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "See how the indices for the phrase \"That,\" are in the first column on the left?\n", " \n", "The reason there are `N` columns is because the batch size is `N`. This tensor is then used to construct a list of smaller data sets, each with length `bptt`.\n", "\n", "We should notice that the target indices are the input indices often by one row (so that the network predicts the next character):" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[14 19 43 39 58 6 14 35 35 24 24 33 58 35 40 31 35 54 6 43 33 24 35 35\n", " 35 7 35 7 7 35 35 47]\n", " [ 6 31 31 39 32 33 24 55 54 47 8 33 8 52 35 31 3 58 58 31 8 11 3 9\n", " 6 47 27 5 47 5 50 35]\n", " [24 31 10 33 33 24 8 36 58 35 8 43 9 6 25 38 33 55 8 31 9 33 28 33\n", " 33 29 58 40 35 58 58 3]\n", " [52 51 14 28 35 28 3 35 55 47 35 35 19 33 33 41 33 40 54 0 36 35 58 33\n", " 28 8 55 31 53 28 5 58]\n", " [40 41 7 10 25 35 58 28 35 58 25 48 31 35 35 23 47 35 35 58 19 54 25 32\n", " 40 24 8 41 55 33 33 54]]\n" ] } ], "source": [ "print(input_batches[0][0:5])" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[ 6 31 31 39 32 33 24 55 54 47 8 33 8 52 35 31 3 58 58 31 8 11 3 9\n", " 6 47 27 5 47 5 50 35]\n", " [24 31 10 33 33 24 8 36 58 35 8 43 9 6 25 38 33 55 8 31 9 33 28 33\n", " 33 29 58 40 35 58 58 3]\n", " [52 51 14 28 35 28 3 35 55 47 35 35 19 33 33 41 33 40 54 0 36 35 58 33\n", " 28 8 55 31 53 28 5 58]\n", " [40 41 7 10 25 35 58 28 35 58 25 48 31 35 35 23 47 35 35 58 19 54 25 32\n", " 40 24 8 41 55 33 33 54]\n", " [35 44 36 9 7 33 52 33 25 35 33 6 31 9 25 42 35 5 3 36 31 58 35 35\n", " 35 25 9 47 39 35 40 36]]\n" ] } ], "source": [ "print(output_batches[0][0:5])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This type of preprocessing doesn't have much to do with deep learning theory. 
It's just a particularly complex part of setting up RNNs that you'll run into from time to time." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Let's see how to iterate using truncated backpropagation\n", "\n", "The following code shows truncated backpropagation in practice. The only difference from the earlier training logic is that we generate a `batch_loss` at each step; after every `bptt` steps, we backpropagate the accumulated loss and perform a weight update. Then we keep reading through the dataset as if nothing happened.\n", "\n", "The hidden state is carried over from the previous window; it's only reset at the start of each epoch." ] },
{ "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "def generate_sample(n=30, init_char=' '):\n", "    s = \"\"\n", "    hidden = model.init_hidden(batch_size=1)\n", "    input = Tensor(np.array([word2index[init_char]]))\n", "    for i in range(n):\n", "        rnn_input = embed.forward(input)\n", "        output, hidden = model.forward(rnn_input, hidden)\n", "        # temperature scaling: multiplying the scores makes sampling greedier\n", "        output.data *= 10\n", "        temp_dist = output.softmax()\n", "        temp_dist /= temp_dist.sum()\n", "        \n", "        # sample the next character from the prediction\n", "        m = (temp_dist > np.random.rand()).argmax()\n", "        c = vocab[m]\n", "        input = Tensor(np.array([m]))\n", "        s += c\n", "    return s" ] },
{ "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "def train(iterations=7, min_loss=1000):\n", "    for iter in range(iterations):\n", "        total_loss = 0\n", "        n_loss = 0\n", "        \n", "        hidden = model.init_hidden(batch_size=batch_size)\n", "        for batch_i in range(len(input_batches)):\n", "            \n", "            hidden = Tensor(hidden.data, autograd=True)\n", "            loss = None\n", "            losses = list()\n", "            for t in range(bptt):\n", "                input = Tensor(input_batches[batch_i][t], autograd=True)\n", "                rnn_input = embed.forward(input)\n", "                output, hidden = model.forward(input=rnn_input, hidden=hidden)\n", "                target = Tensor(output_batches[batch_i][t], autograd=True)\n", "                batch_loss = criterion.forward(output, target)\n", "                losses.append(batch_loss)\n", "                if (t == 0):\n", "                    loss = batch_loss\n", "                else:\n", "                    loss = loss + batch_loss\n", "            # backpropagate the loss summed over the bptt window\n", "            loss.backward()\n", "            optimizer.step()\n", "            total_loss += loss.data/bptt\n", "            \n", "            epoch_loss = np.exp(total_loss/(batch_i + 1))\n", "            if (epoch_loss < min_loss):\n", "                min_loss = epoch_loss\n", "            log = \"\\r Iter:\" + str(iter)\n", "            log += \" - Alpha:\" + str(optimizer.alpha)[0:5]\n", "            log += \" - Batch \"+str(batch_i+1)+\"/\"+str(len(input_batches))\n", "            log += \" - Min Loss:\" + str(min_loss)[0:5]\n", "            log += \" - Loss:\" + str(epoch_loss)\n", "            if(batch_i == 0):\n", "                log += \" - \" + generate_sample(n=70, init_char='T').replace(\"\\n\",\" \")\n", "            if(batch_i % 10 == 0):\n", "                sys.stdout.write(log)\n", "        optimizer.alpha *= 0.99\n", "        print()" ] },
{ "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Iter:0 - Alpha:0.05 - Batch 191/195 - Min Loss:1.272 - Loss:1.2723613677367251 \n" ] } ], "source": [ "train(2)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## A Sample of the Output\n", "### By sampling from the predictions of the model, you can write Shakespeare!\n", "\n", "The following code uses a subset of the training logic to make predictions with the model. We build the predictions up in a string and return that string as the output of the function. 
The sample that's generated looks quite Shakespearean and even includes characters talking:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(generate_sample(n=1000, init_char='\\n'))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Vanishing & Exploding Gradients\n", "### Vanilla RNNs suffer from vanishing & exploding gradients\n", "\n", "The whole idea was to be able to combine the word embeddings in a way that preserves order. We did this by learning a matrix that transforms the vector representation of all the previous embeddings into the representation for the next timestep.\n", "\n", "Forward propagation then became a 3-step process:\n", "1. Start with the first word embedding.\n", "2. Multiply it by the shared weight matrix.\n", "3. Add the next embedding.\n", "\n", "We loop over this process, repeating until we've read the entire series of words.\n", "\n", "An additional non-linearity was then added to the hidden-state generation process, so forward propagation becomes a 4-step process: applying the activation function is the extra step. This non-linearity plays an important role in stabilizing the network: no matter how long the sequence of words is, the hidden states are forced to stay within the output range of the non-linearity.\n", "\n", "However, backpropagation occurs in a slightly different way than forward propagation and doesn't have this nice property. Backpropagation tends to lead to either extremely large or extremely small values.\n", "- Large values can cause divergence (NaNs).\n", "- Extremely small values keep the network from learning (the updates become negligible).\n", "\n", "Let's take a closer look at RNN backpropagation:" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## A Toy Example of RNN Backpropagation\n", "### To see vanishing/exploding gradients firsthand, let's synthesize an example\n", "\n", "The following example runs a recurrent backpropagation loop for `sigmoid` and `relu` activations.\n", "During backprop:\n", "- `ReLU`: gradients become large as a result of the repeated matrix multiplication.\n", "- `Sigmoid`: gradients become small because the sigmoid's slope is tiny over much of its domain (flat tails).\n", "\n", "The short aside that follows makes both effects concrete."
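] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This next cell isn't part of the original walkthrough; it's just a sanity check on the numbers, using illustrative names `W` and `a`. The recurrent weight matrix used in the toy example below, `[[1, 4], [4, 1]]`, has eigenvalues 5 and -3, and the all-ones gradient vector lies along the eigenvector for 5. That's why the ReLU-path gradients grow roughly fivefold per step, while the sigmoid path multiplies by the same matrix but also by the tiny local slope `a * (1 - a)`, which shrinks the gradient each step." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "# same recurrent weight matrix as in the toy example below\n", "W = np.array([[1, 4], [4, 1]])\n", "print(np.linalg.eigvals(W))   # eigenvalues 5 and -3 -> gradients can grow ~5x per step\n", "\n", "# per-step factor on the sigmoid path: ~5 from the matrix times the slope a*(1-a)\n", "a = 0.993                     # the saturated activation value reached in the example\n", "print(5 * a * (1 - a))        # ~0.035, matching the shrinking gradients printed below"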
] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "import numpy as np" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "(sigmoid, relu) = (lambda x: 1/(1+np.exp(-x)), lambda x: (x>0).astype(float)*x) " ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "weights = np.array([[1,4], [4,1]])" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "activation = sigmoid(np.array([1, 0.01]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Sigmoid Activations" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "activations = list()" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[0.93940638 0.96852968]\n", "[0.9919462 0.99121735]\n", "[0.99301385 0.99302901]\n", "[0.9930713 0.99307098]\n", "[0.99307285 0.99307285]\n", "[0.99307291 0.99307291]\n", "[0.99307291 0.99307291]\n", "[0.99307291 0.99307291]\n", "[0.99307291 0.99307291]\n", "[0.99307291 0.99307291]\n" ] } ], "source": [ "for iter in range(10):\n", " activation = sigmoid(activation.dot(weights))\n", " activations.append(activation)\n", " print(activation)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Sigmoid Gradients" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "gradient = np.ones_like(activations)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[0.03439552 0.03439552]\n", " [0.03439552 0.03439552]\n", " [0.03439552 0.03439552]\n", " [0.03439552 0.03439552]\n", " [0.03439552 0.03439552]\n", " [0.03439552 0.03439552]\n", " [0.03439552 0.03439552]\n", " [0.03439552 0.03439552]\n", " [0.03439552 0.03439552]\n", " [0.03439552 0.03439552]]\n", "[[0.00118305 0.00118305]\n", " [0.00118305 0.00118305]\n", " [0.00118305 0.00118305]\n", " [0.00118305 0.00118305]\n", " [0.00118305 0.00118305]\n", " [0.00118305 0.00118305]\n", " [0.00118305 0.00118305]\n", " [0.00118305 0.00118305]\n", " [0.00118305 0.00118305]\n", " [0.00118305 0.00118305]]\n", "[[4.06916726e-05 4.06916726e-05]\n", " [4.06916726e-05 4.06916726e-05]\n", " [4.06916726e-05 4.06916726e-05]\n", " [4.06916726e-05 4.06916726e-05]\n", " [4.06916726e-05 4.06916726e-05]\n", " [4.06916726e-05 4.06916726e-05]\n", " [4.06916726e-05 4.06916726e-05]\n", " [4.06916726e-05 4.06916726e-05]\n", " [4.06916726e-05 4.06916726e-05]\n", " [4.06916726e-05 4.06916726e-05]]\n", "[[1.39961115e-06 1.39961115e-06]\n", " [1.39961115e-06 1.39961115e-06]\n", " [1.39961115e-06 1.39961115e-06]\n", " [1.39961115e-06 1.39961115e-06]\n", " [1.39961115e-06 1.39961115e-06]\n", " [1.39961115e-06 1.39961115e-06]\n", " [1.39961115e-06 1.39961115e-06]\n", " [1.39961115e-06 1.39961115e-06]\n", " [1.39961115e-06 1.39961115e-06]\n", " [1.39961115e-06 1.39961115e-06]]\n", "[[4.81403643e-08 4.81403637e-08]\n", " [4.81403643e-08 4.81403637e-08]\n", " [4.81403643e-08 4.81403637e-08]\n", " [4.81403643e-08 4.81403637e-08]\n", " [4.81403643e-08 4.81403637e-08]\n", " [4.81403643e-08 4.81403637e-08]\n", " [4.81403643e-08 4.81403637e-08]\n", " [4.81403643e-08 4.81403637e-08]\n", " [4.81403643e-08 4.81403637e-08]\n", " [4.81403643e-08 4.81403637e-08]]\n", "[[1.65582672e-09 1.65582765e-09]\n", " [1.65582672e-09 1.65582765e-09]\n", " [1.65582672e-09 1.65582765e-09]\n", " 
[1.65582672e-09 1.65582765e-09]\n", " [1.65582672e-09 1.65582765e-09]\n", " [1.65582672e-09 1.65582765e-09]\n", " [1.65582672e-09 1.65582765e-09]\n", " [1.65582672e-09 1.65582765e-09]\n", " [1.65582672e-09 1.65582765e-09]\n", " [1.65582672e-09 1.65582765e-09]]\n", "[[5.69682675e-11 5.69667160e-11]\n", " [5.69682675e-11 5.69667160e-11]\n", " [5.69682675e-11 5.69667160e-11]\n", " [5.69682675e-11 5.69667160e-11]\n", " [5.69682675e-11 5.69667160e-11]\n", " [5.69682675e-11 5.69667160e-11]\n", " [5.69682675e-11 5.69667160e-11]\n", " [5.69682675e-11 5.69667160e-11]\n", " [5.69682675e-11 5.69667160e-11]\n", " [5.69682675e-11 5.69667160e-11]]\n", "[[1.97259346e-12 1.97517920e-12]\n", " [1.97259346e-12 1.97517920e-12]\n", " [1.97259346e-12 1.97517920e-12]\n", " [1.97259346e-12 1.97517920e-12]\n", " [1.97259346e-12 1.97517920e-12]\n", " [1.97259346e-12 1.97517920e-12]\n", " [1.97259346e-12 1.97517920e-12]\n", " [1.97259346e-12 1.97517920e-12]\n", " [1.97259346e-12 1.97517920e-12]\n", " [1.97259346e-12 1.97517920e-12]]\n", "[[8.45387597e-14 8.02306381e-14]\n", " [8.45387597e-14 8.02306381e-14]\n", " [8.45387597e-14 8.02306381e-14]\n", " [8.45387597e-14 8.02306381e-14]\n", " [8.45387597e-14 8.02306381e-14]\n", " [8.45387597e-14 8.02306381e-14]\n", " [8.45387597e-14 8.02306381e-14]\n", " [8.45387597e-14 8.02306381e-14]\n", " [8.45387597e-14 8.02306381e-14]\n", " [8.45387597e-14 8.02306381e-14]]\n", "[[1.45938177e-14 2.16938983e-14]\n", " [1.45938177e-14 2.16938983e-14]\n", " [1.45938177e-14 2.16938983e-14]\n", " [1.45938177e-14 2.16938983e-14]\n", " [1.45938177e-14 2.16938983e-14]\n", " [1.45938177e-14 2.16938983e-14]\n", " [1.45938177e-14 2.16938983e-14]\n", " [1.45938177e-14 2.16938983e-14]\n", " [1.45938177e-14 2.16938983e-14]\n", " [1.45938177e-14 2.16938983e-14]]\n" ] } ], "source": [ "for activation in reversed(activations):\n", " gradient = activation * (1 - activation) * gradient\n", " gradient = gradient.dot(weights.transpose())\n", " print(gradient)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### ReLU Activations" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "activations = list()" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[4.8135251 4.72615519]\n", "[23.71814585 23.98025559]\n", "[119.63916823 118.852839 ]\n", "[595.05052421 597.40951192]\n", "[2984.68857188 2977.61160877]\n", "[14895.13500696 14916.36589628]\n", "[74560.59859209 74496.90592414]\n", "[372548.22228863 372739.30029248]\n", "[1863505.42345854 1862932.18944699]\n", "[9315234.18124649 9316953.88328115]\n" ] } ], "source": [ "for iter in range(10):\n", " activation = relu(activation.dot(weights))\n", " activations.append(activation)\n", " print(activation)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### ReLU Gradients" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "gradient = np.ones_like(activation)" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[5. 5.]\n", "[25. 25.]\n", "[125. 125.]\n", "[625. 625.]\n", "[3125. 3125.]\n", "[15625. 15625.]\n", "[78125. 78125.]\n", "[390625. 390625.]\n", "[1953125. 1953125.]\n", "[9765625. 
9765625.]\n" ] } ], "source": [ "for activation in reversed(activations):\n", " gradient = ((activation > 0) * gradient).dot(weights.transpose())\n", " print(gradient)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Long short-term Memory (LSTM) Cells\n", "### LSTMs are the industry standard model to counter vanishing/exploding gradients\n", "\n", "The problem with vanishing (sigmoid) / exploding (matrix multiplication) gradients is the combination of matrix multiplication and non-linearity being used to form the next hidden state. The Solution that LSTMs provide is quite simple:\n", "- The Gated Copy Trick\n", " - LSTMs create the next hidden state by copying the previous hidden state and then adding or removing information as necessary.\n", " - The mechanisms the LSTM uses for adding & removing information are called gates.\n", "- The LSTM has 2 hidden state vectors:\n", " - **h**: for hidden.\n", " - **cell**.\n", "\n", "The one we care about is `cell`, Each new cell is the previous cell + `u` weighted by `i` and `f`:\n", "- `f` is the forget gate: if it takes a value of **0** the next cell will erase what it saw previously.\n", "- If `i` is `1`, it will fully add in the value of `u` to create the new cell.\n", "- `o` is an output gate that controls how much of the cells state the output prediction is allowed to see." ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "def forward(self, input, hidden):\n", " prev_hidden, prev_cell = (hidden[0], hidden[1])\n", " f = (self.xf.forward(input) + self.hf.forward(prev_hidden)).sigmoid() # forget gate\n", " i = (self.xi.forward(input) + self.hi.forward(prev_hidden)).sigmoid()\n", " o = (self.xo.forward(input) + self.ho.forward(prev_hidden)).sigmoid() # output gate\n", " u = (self.xc.forward(input) + self.hc.forward(prev_hidden)).tanh()\n", " \n", " cell = (f * prev_cell) + (i * u)\n", " h = o * cell.tanh()\n", " output = self.w_ho.forward(h)\n", " return output, (h, cell)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Some Intuition about LSTM Gates\n", "### LSTM gates are semantically similar to reading/writing from Memory\n", "\n", "
\n", " \n", "
\n", "\n", "An LSTM cell has $3$ gates, `f`, `i`, `o` & a cell update vector `u`. We think of these gates as **forget**, **Input**, **Output**, and **Update**. They work together to ensure that any information to be stored in `c` can be saved without requiring each update of `c` to have matrix multiplications or non-linearities applied to it.\n", "\n", "This is what allows the LSTM to store Information across a time series without worrying about vanishing or exploding gradients. Each step is a copy plus an update. The hidden value $h$ is then a masked version of the cell that's used for prediction.\n", "\n", "Each gate has its own weight matrices.\n", "\n", "One last possible critique is about `h`. Clearly it's still prone to vanishing & exploding gradients. Exploding gradients aren't really a problem since we can always clip them, the only serious problem are vanishing gradients. But this ends up being Okay because `h` is conditioned on `c`, which can carry long range information.\n", "\n", "All long range information is transported using `c`. `h` is only a localized interpretaion of `c`\n", "\n", "In short, `c` can learn to transport information over long distances, so it doesn't matter if `h` can't." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Long-Short term Memory Layer\n", "### You can use the Autograd system to implement an LSTM" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class LSTMCell(Layer):\n", " def __init__(self, n_inputs, n_hidden, n_output):\n", " super().__init__()\n", " \n", " self.n_inputs = n_inputs\n", " self.n_hidden = n_hidden\n", " self.n_outputs = n_output\n", " \n", " self.xf = Linear(n_inputs, n_hidden)\n", " self.xi = Linear(n_inputs, n_hidden)\n", " self.xo = Linear(n_inputs, n_hidden)\n", " self.xc = Linear(n_inputs, n_hidden)\n", " \n", " self.hf = Linear(n_hidden, n_hidden, bias=False)\n", " self.hi = Linear(n_hidden, n_hidden, bias=False)\n", " self.ho = Linear(n_hidden, n_hidden, bias=False)\n", " self.hc = Linear(n_hidden, n_hidden, bias=False)\n", " \n", " self.w_ho = Linear(n_hidden, n_output, bias=False)\n", " \n", " self.parameters += self.xf.get_parameters()\n", " self.parameters += self.xi.get_parameters()\n", " self.parameters += self.xo.get_parameters()\n", " self.parameters += self.xc.get_parameters()\n", " self.parameters += self.hf.get_parameters()\n", " self.parameters += self.hi.get_parameters()\n", " self.parameters += self.ho.get_parameters()\n", " self.parameters += self.hc.get_parameters()\n", " self.parameters += self.w_ho.get_parameters()\n", " \n", " \n", " def forward(self, input, hidden):\n", " prev_hidden = hidden[0]\n", " prev_cell = hidden[1]\n", " \n", " f=(self.xf.forward(input)+self.hf.forward(prev_hidden)).sigmoid()\n", " i=(self.xi.forward(input)+self.hi.forward(prev_hidden)).sigmoid()\n", " o=(self.xo.forward(input)+self.ho.forward(prev_hidden)).sigmoid()\n", " g = (self.xc.forward(input) +self.hc.forward(prev_hidden)).tanh()\n", " c = (f * prev_cell) + (i * g) \n", " h = o * c.tanh()\n", " \n", " output = self.w_ho.forward(h)\n", " return output, (h, c)\n", " \n", " def init_hidden(self, batch_size=1):\n", " h = Tensor(np.zeros((batch_size, self.n_hidden)), autograd=True)\n", " c = Tensor(np.zeros((batch_size, self.n_hidden)), autograd=True)\n", " h.data[:, 0] += 1\n", " c.data[:, 0] += 1\n", " return (h, c)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Upgrading the Character Language Model\n", "### Let's Swap out the Vanilla 
RNN with the new LSTMCell\n", "\n", "Let's train an LSTM-based Model to predict Shakespeare." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import sys, random, math\n", "from collections import Counter\n", "import numpy as np\n", "\n", "np.random.seed(0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f = open('static/data/Shakespeare/shakespear.txt', 'r')\n", "raw = f.read()\n", "f.close()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vocab = list(set(raw))\n", "word2index = {}\n", "for i, word in enumerate(vocab):\n", " word2index[word] = i" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "raw_indices = np.array(list(map(lambda x: word2index[x], raw)))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "embed = Embedding(vocab_size=len(vocab), dim=8)\n", "model = LSTMCell(n_inputs=8, n_hidden=256, n_output=len(vocab))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "criterion = CrossEntropyLoss()\n", "optimizer = SGD(parameters=model.get_parameters() + embed.get_parameters(), alpha=0.05)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "batch_size = 16\n", "bptt = 25" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "n_batches = int((raw_indices.shape[0] / (batch_size)))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "trimmed_indices = raw_indices[:n_batches*batch_size]\n", "batched_indices = trimmed_indices.reshape(batch_size, n_batches)\n", "batched_indices = batched_indices.transpose()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "input_batched_indices = batched_indices[0:-1]\n", "output_batched_indices = batched_indices[1:]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "n_bptt = int((n_batches - 1) / bptt)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "input_batches = input_batched_indices[:n_bptt*bptt]\n", "input_batches = input_batches.reshape(n_bptt, bptt, batch_size)\n", "output_batches = output_batched_indices[:n_bptt*bptt]\n", "output_batches = output_batches.reshape(n_bptt, bptt, batch_size)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "min_loss = 1000" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Training the LSTM character Language Model\n", "### The training Logic also hasn't changed much\n", "\n", "The only real change we have to make from the vanilla RNN logic is the truncated backpropagation logic because there are two hidden vectors per timestep instead of one:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def train(iterations=7, min_loss=1000):\n", " for iter in range(iterations):\n", " total_loss = 0\n", " n_loss = 0\n", " \n", " hidden = model.init_hidden(batch_size=batch_size)\n", " for batch_i in range(len(input_batches)):\n", " \n", " hidden = (Tensor(hidden[0].data, autograd=True),\n", " Tensor(hidden[1].data, autograd=True))\n", " \n", " loss = None\n", " losses = list()\n", " for t in range(bptt):\n", " input = Tensor(input_batches[batch_i][t], 
autograd=True)\n", " rnn_input = embed.forward(input)\n", " output, hidden = model.forward(input=rnn_input, hidden=hidden)\n", " target = Tensor(output_batches[batch_i][t], autograd=True)\n", " batch_loss = criterion.forward(output, target)\n", " losses.append(batch_loss)\n", " if (t == 0):\n", " loss = batch_loss\n", " else:\n", " loss = loss + batch_loss\n", " loss = losses[-1]\n", " loss.backward()\n", " optimizer.step()\n", " total_loss += loss.data/bptt\n", " \n", " epoch_loss = np.exp(total_loss/(batch_i + 1))\n", " if (epoch_loss < min_loss):\n", " min_loss = epoch_loss\n", " log = \"\\r Iter:\" + str(iter)\n", " log += \" - Alpha:\" + str(optimizer.alpha)[0:5]\n", " log += \" - Batch \"+str(batch_i+1)+\"/\"+str(len(input_batches))\n", " log += \" - Min Loss:\" + str(min_loss)[0:5]\n", " log += \" - Loss:\" + str(epoch_loss)\n", " if(batch_i == 0):\n", " log += \" - \" + generate_sample(n=70, init_char='T').replace(\"\\n\",\" \")\n", " if(batch_i % 10 == 0):\n", " sys.stdout.write(log)\n", " optimizer.alpha *= 0.99 \n", " print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Tuning the LSTM character language model\n", "\n", "We should note that this model takes a long time to train (lots of parameters).\n", "- I also had to train it many times to find a good tuning, (learning rate, batch size, & so on)\n", "- In General, the Longer you train, the better your results will be:\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summary\n", "### LSTMs are incredibly powerful Models.\n", "\n", "Language is an incredibly complex statistical distribution to learn, and the fact that LSTMs can do so well, still baffles me. Small variants of this model either are or have recently been the state of the art in a wide variety of tasks.\n", "\n", "---" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.6" } }, "nbformat": 4, "nbformat_minor": 4 }