{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\nDeep Learning with PyTorch\n**************************\n\nDeep Learning Building Blocks: Affine maps, non-linearities and objectives\n==========================================================================\n\nDeep learning consists of composing linearities with non-linearities in\nclever ways. The introduction of non-linearities allows for powerful\nmodels. In this section, we will play with these core components, make\nup an objective function, and see how the model is trained.\n\n\nAffine Maps\n~~~~~~~~~~~\n\nOne of the core workhorses of deep learning is the affine map, which is\na function $f(x)$ where\n\n\\begin{align}f(x) = Ax + b\\end{align}\n\nfor a matrix $A$ and vectors $x, b$. The parameters to be\nlearned here are $A$ and $b$. Often, $b$ is refered to\nas the *bias* term.\n\n\nPytorch and most other deep learning frameworks do things a little\ndifferently than traditional linear algebra. It maps the rows of the\ninput instead of the columns. That is, the $i$'th row of the\noutput below is the mapping of the $i$'th row of the input under\n$A$, plus the bias term. Look at the example below.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Author: Robert Guthrie\n\nimport torch\nimport torch.autograd as autograd\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\ntorch.manual_seed(1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "lin = nn.Linear(5, 3) # maps from R^5 to R^3, parameters A, b\n# data is 2x5. A maps from 5 to 3... can we map \"data\" under A?\ndata = autograd.Variable(torch.randn(2, 5))\nprint(lin(data)) # yes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Non-Linearities\n~~~~~~~~~~~~~~~\n\nFirst, note the following fact, which will explain why we need\nnon-linearities in the first place. Suppose we have two affine maps\n$f(x) = Ax + b$ and $g(x) = Cx + d$. What is\n$f(g(x))$?\n\n\\begin{align}f(g(x)) = A(Cx + d) + b = ACx + (Ad + b)\\end{align}\n\n$AC$ is a matrix and $Ad + b$ is a vector, so we see that\ncomposing affine maps gives you an affine map.\n\nFrom this, you can see that if you wanted your neural network to be long\nchains of affine compositions, that this adds no new power to your model\nthan just doing a single affine map.\n\nIf we introduce non-linearities in between the affine layers, this is no\nlonger the case, and we can build much more powerful models.\n\nThere are a few core non-linearities.\n$\\tanh(x), \\sigma(x), \\text{ReLU}(x)$ are the most common. You are\nprobably wondering: \"why these functions? I can think of plenty of other\nnon-linearities.\" The reason for this is that they have gradients that\nare easy to compute, and computing gradients is essential for learning.\nFor example\n\n\\begin{align}\\frac{d\\sigma}{dx} = \\sigma(x)(1 - \\sigma(x))\\end{align}\n\nA quick note: although you may have learned some neural networks in your\nintro to AI class where $\\sigma(x)$ was the default non-linearity,\ntypically people shy away from it in practice. This is because the\ngradient *vanishes* very quickly as the absolute value of the argument\ngrows. Small gradients means it is hard to learn. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Non-Linearities\n~~~~~~~~~~~~~~~\n\nFirst, note the following fact, which will explain why we need\nnon-linearities in the first place. Suppose we have two affine maps\n$f(x) = Ax + b$ and $g(x) = Cx + d$. What is\n$f(g(x))$?\n\n\\begin{align}f(g(x)) = A(Cx + d) + b = ACx + (Ad + b)\\end{align}\n\n$AC$ is a matrix and $Ad + b$ is a vector, so we see that\ncomposing affine maps gives you an affine map.\n\nFrom this, you can see that making your neural network a long chain of\naffine compositions adds no more power to your model than a single\naffine map would.\n\nIf we introduce non-linearities in between the affine layers, this is no\nlonger the case, and we can build much more powerful models.\n\nThere are a few core non-linearities.\n$\\tanh(x), \\sigma(x), \\text{ReLU}(x)$ are the most common. You are\nprobably wondering: \"why these functions? I can think of plenty of other\nnon-linearities.\" The reason for this is that they have gradients that\nare easy to compute, and computing gradients is essential for learning.\nFor example,\n\n\\begin{align}\\frac{d\\sigma}{dx} = \\sigma(x)(1 - \\sigma(x))\\end{align}\n\nA quick note: although you may have learned some neural networks in your\nintro to AI class where $\\sigma(x)$ was the default non-linearity,\ntypically people shy away from it in practice. This is because the\ngradient *vanishes* very quickly as the absolute value of the argument\ngrows. Small gradients mean it is hard to learn. Most people default to\ntanh or ReLU.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# In pytorch, most non-linearities are in torch.nn.functional (we have it imported as F)\n# Note that non-linearities typically don't have parameters like affine maps do.\n# That is, they don't have weights that are updated during training.\ndata = autograd.Variable(torch.randn(2, 2))\nprint(data)\nprint(F.relu(data))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Softmax and Probabilities\n~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe function $\\text{Softmax}(x)$ is also just a non-linearity, but\nit is special in that it usually is the last operation done in a\nnetwork. This is because it takes in a vector of real numbers and\nreturns a probability distribution. Its definition is as follows. Let\n$x$ be a vector of real numbers (positive, negative, whatever,\nthere are no constraints). Then the $i$'th component of\n$\\text{Softmax}(x)$ is\n\n\\begin{align}\\frac{\\exp(x_i)}{\\sum_j \\exp(x_j)}\\end{align}\n\nIt should be clear that the output is a probability distribution: each\nelement is non-negative and the sum over all components is 1.\n\nYou could also think of it as just applying an element-wise\nexponentiation operator to the input to make everything non-negative and\nthen dividing by the normalization constant.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Softmax is also in torch.nn.functional\ndata = autograd.Variable(torch.randn(5))\nprint(data)\nprint(F.softmax(data))\nprint(F.softmax(data).sum()) # Sums to 1 because it is a distribution!\nprint(F.log_softmax(data)) # there's also log_softmax" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Objective Functions\n~~~~~~~~~~~~~~~~~~~\n\nThe objective function is the function that your network is being\ntrained to minimize (in which case it is often called a *loss function*\nor *cost function*). Training proceeds by first choosing a training\ninstance, running it through your neural network, and then computing the\nloss of the output. The parameters of the model are then updated by\ntaking the derivative of the loss function with respect to them.\nIntuitively, if your model is completely confident in its answer, and\nits answer is wrong, your loss will be high. If it is very confident in\nits answer, and its answer is correct, the loss will be low.\n\nThe idea behind minimizing the loss function on your training examples\nis that your network will hopefully generalize well and have small loss\non unseen examples in your dev set, test set, or in production. An\nexample loss function is the *negative log likelihood loss*, which is a\nvery common objective for multi-class classification. For supervised\nmulti-class classification, this means training the network to minimize\nthe negative log probability of the correct output (or equivalently,\nmaximize the log probability of the correct output).\n\n\n" ] },
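{ "cell_type": "markdown", "metadata": {}, "source": [ "To make the negative log likelihood loss concrete before we use it in a real\nmodel, here is a tiny made-up example (the numbers are arbitrary):\nnn.NLLLoss takes a row of log probabilities together with the index of the\ncorrect class, and returns the negative log probability assigned to that\nclass.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# A toy example of nn.NLLLoss. Suppose we have log probabilities over 3\n# classes for a single instance, and the correct class is index 1.\ntoy_log_probs = F.log_softmax(autograd.Variable(torch.randn(1, 3)))\ntoy_target = autograd.Variable(torch.LongTensor([1]))\n\nnll = nn.NLLLoss()\nprint(toy_log_probs)\nprint(nll(toy_log_probs, toy_target)) # equals -toy_log_probs[0][1]" ] },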
{ "cell_type": "markdown", "metadata": {}, "source": [ "Optimization and Training\n=========================\n\nSo we can compute a loss function for an instance. What do we do\nwith that? We saw earlier that autograd.Variables know how to compute\ngradients with respect to the things that were used to compute them. Well,\nsince our loss is an autograd.Variable, we can compute gradients with\nrespect to all of the parameters used to compute it! Then we can perform\nstandard gradient updates. Let $\\theta$ be our parameters,\n$L(\\theta)$ the loss function, and $\\eta$ a positive\nlearning rate. Then:\n\n\\begin{align}\\theta^{(t+1)} = \\theta^{(t)} - \\eta \\nabla_\\theta L(\\theta)\\end{align}\n\nThere is a huge collection of algorithms, and much active research,\nattempting to do something smarter than just this vanilla gradient\nupdate. Many attempt to vary the learning rate based on what is\nhappening at train time. You don't need to worry about what specifically\nthese algorithms are doing unless you are really interested. Torch\nprovides many in the torch.optim package, and they are all completely\ntransparent: using the simplest gradient update looks the same in your\ncode as using the more complicated algorithms. Trying different update\nalgorithms and different parameters for the update algorithms (like\ndifferent initial learning rates) is important in optimizing your\nnetwork's performance. Often, just replacing vanilla SGD with an\noptimizer like Adam or RMSProp will boost performance noticeably.\n\n\n" ] },
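{ "cell_type": "markdown", "metadata": {}, "source": [ "As a small sketch of the update rule above (it is not needed for the rest of\nthe tutorial), the cell below creates a single made-up parameter, computes a\ntoy loss, and checks that one step of optim.SGD is exactly\n$\\theta - \\eta \\nabla_\\theta L(\\theta)$.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# A toy check of the vanilla gradient update. \"theta\" and the loss here are\n# made up purely for illustration.\ntheta = autograd.Variable(torch.randn(3), requires_grad=True)\noptimizer = optim.SGD([theta], lr=0.1)\n\nloss = (theta * theta).sum() # a toy loss L(theta)\nloss.backward() # computes dL/dtheta, which is 2 * theta\n\nexpected = theta.data - 0.1 * theta.grad.data # the update rule, by hand\noptimizer.step() # theta <- theta - lr * gradient\nprint(theta.data - expected) # all zeros: SGD did exactly the update above\n\n# Swapping in a fancier optimizer only changes one line, e.g.\n# optimizer = optim.Adam([theta], lr=0.001)" ] },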
{ "cell_type": "markdown", "metadata": {}, "source": [ "Creating Network Components in Pytorch\n======================================\n\nBefore we move on to our focus on NLP, let's do an annotated example of\nbuilding a network in Pytorch using only affine maps and\nnon-linearities. We will also see how to compute a loss function, using\nPytorch's built-in negative log likelihood, and update parameters by\nbackpropagation.\n\nAll network components should inherit from nn.Module and override the\nforward() method. That is about it, as far as the boilerplate is\nconcerned. Inheriting from nn.Module provides functionality to your\ncomponent. For example, it keeps track of its trainable parameters, and\nyou can swap it between CPU and GPU with the .cuda() or .cpu() methods,\netc.\n\nLet's write an annotated example of a network that takes in a sparse\nbag-of-words representation and outputs a probability distribution over\ntwo labels: \"English\" and \"Spanish\". This model is just logistic\nregression.\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Example: Logistic Regression Bag-of-Words classifier\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nOur model will map a sparse BOW representation to log probabilities over\nlabels. We assign each word in the vocab an index. For example, say our\nentire vocab is two words \"hello\" and \"world\", with indices 0 and 1\nrespectively. The BoW vector for the sentence \"hello hello hello hello\"\nis\n\n\\begin{align}\\left[ 4, 0 \\right]\\end{align}\n\nFor \"hello world world hello\", it is\n\n\\begin{align}\\left[ 2, 2 \\right]\\end{align}\n\netc. In general, it is\n\n\\begin{align}\\left[ \\text{Count}(\\text{hello}), \\text{Count}(\\text{world}) \\right]\\end{align}\n\nDenote this BOW vector as $x$. The output of our network is:\n\n\\begin{align}\\log \\text{Softmax}(Ax + b)\\end{align}\n\nThat is, we pass the input through an affine map and then do log\nsoftmax.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "data = [(\"me gusta comer en la cafeteria\".split(), \"SPANISH\"),\n        (\"Give it to me\".split(), \"ENGLISH\"),\n        (\"No creo que sea una buena idea\".split(), \"SPANISH\"),\n        (\"No it is not a good idea to get lost at sea\".split(), \"ENGLISH\")]\n\ntest_data = [(\"Yo creo que si\".split(), \"SPANISH\"),\n             (\"it is lost on me\".split(), \"ENGLISH\")]\n\n# word_to_ix maps each word in the vocab to a unique integer, which will be its\n# index into the Bag of words vector\nword_to_ix = {}\nfor sent, _ in data + test_data:\n    for word in sent:\n        if word not in word_to_ix:\n            word_to_ix[word] = len(word_to_ix)\nprint(word_to_ix)\n\nVOCAB_SIZE = len(word_to_ix)\nNUM_LABELS = 2\n\n\nclass BoWClassifier(nn.Module): # inheriting from nn.Module!\n\n    def __init__(self, num_labels, vocab_size):\n        # calls the init function of nn.Module. Don't get confused by the syntax,\n        # just always do it in an nn.Module\n        super(BoWClassifier, self).__init__()\n\n        # Define the parameters that you will need. In this case, we need A and b,\n        # the parameters of the affine mapping.\n        # Torch defines nn.Linear(), which provides the affine map.\n        # Make sure you understand why the input dimension is vocab_size\n        # and the output is num_labels!\n        self.linear = nn.Linear(vocab_size, num_labels)\n\n        # NOTE! The non-linearity log softmax does not have parameters! So we don't need\n        # to worry about that here\n\n    def forward(self, bow_vec):\n        # Pass the input through the linear layer,\n        # then pass that through log_softmax.\n        # Many non-linearities and other functions are in torch.nn.functional\n        return F.log_softmax(self.linear(bow_vec))\n\n\ndef make_bow_vector(sentence, word_to_ix):\n    vec = torch.zeros(len(word_to_ix))\n    for word in sentence:\n        vec[word_to_ix[word]] += 1\n    return vec.view(1, -1)\n\n\ndef make_target(label, label_to_ix):\n    return torch.LongTensor([label_to_ix[label]])\n\n\nmodel = BoWClassifier(NUM_LABELS, VOCAB_SIZE)\n\n# The model knows its parameters. The first output below is A, the second is b.\n# Whenever you assign a component to a class variable in the __init__ function\n# of a module, as was done here with the line\n# self.linear = nn.Linear(...)\n# then, through some Python magic from the Pytorch devs, your module\n# (in this case, BoWClassifier) will store knowledge of the nn.Linear's parameters\nfor param in model.parameters():\n    print(param)\n\n# To run the model, pass in a BoW vector, but wrapped in an autograd.Variable\nsample = data[0]\nbow_vector = make_bow_vector(sample[0], word_to_ix)\nlog_probs = model(autograd.Variable(bow_vector))\nprint(log_probs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Which of the above values corresponds to the log probability of ENGLISH,\nand which to SPANISH? We never defined that mapping, but we need to if we\nwant to train the model.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "label_to_ix = {\"SPANISH\": 0, \"ENGLISH\": 1}" ] },
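{ "cell_type": "markdown", "metadata": {}, "source": [ "With label_to_ix defined, it is worth a quick look (purely for illustration)\nat what make_bow_vector and make_target produce for one training instance;\nthese are exactly the tensors we will feed to the model and to the loss\nfunction below.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Peek at the inputs for the first training instance.\nsample_instance, sample_label = data[0]\nprint(make_bow_vector(sample_instance, word_to_ix)) # a 1 x VOCAB_SIZE vector of counts\nprint(make_target(sample_label, label_to_ix)) # the label's index, as a LongTensor" ] },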
{ "cell_type": "markdown", "metadata": {}, "source": [ "So let's train! To do this, we pass instances through to get log\nprobabilities, compute a loss function, compute the gradient of the loss\nfunction, and then update the parameters with a gradient step. Loss\nfunctions are provided by Torch in the nn package. nn.NLLLoss() is the\nnegative log likelihood loss we want. Torch also provides optimization\nfunctions in torch.optim. Here, we will just use SGD.\n\nNote that the *input* to NLLLoss is a vector of log probabilities, and a\ntarget label. It doesn't compute the log probabilities for us. This is\nwhy the last layer of our network is log softmax. The loss function\nnn.CrossEntropyLoss() is the same as NLLLoss(), except it does the log\nsoftmax for you.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Run on test data before we train, just to see a before-and-after\nfor instance, label in test_data:\n    bow_vec = autograd.Variable(make_bow_vector(instance, word_to_ix))\n    log_probs = model(bow_vec)\n    print(log_probs)\n\n# Print the matrix column corresponding to \"creo\"\nprint(next(model.parameters())[:, word_to_ix[\"creo\"]])\n\nloss_function = nn.NLLLoss()\noptimizer = optim.SGD(model.parameters(), lr=0.1)\n\n# Usually you want to pass over the training data several times.\n# 100 epochs is much more than you would use on a real data set, but real\n# datasets have more than two instances. Usually, somewhere between 5 and 30\n# epochs is reasonable.\nfor epoch in range(100):\n    for instance, label in data:\n        # Step 1. Remember that Pytorch accumulates gradients.\n        # We need to clear them out before each instance\n        model.zero_grad()\n\n        # Step 2. Make our BOW vector, and wrap the target in a Variable\n        # as an integer. For example, if the target is SPANISH, then\n        # we wrap the integer 0. The loss function then knows that the 0th\n        # element of the log probabilities is the log probability\n        # corresponding to SPANISH\n        bow_vec = autograd.Variable(make_bow_vector(instance, word_to_ix))\n        target = autograd.Variable(make_target(label, label_to_ix))\n\n        # Step 3. Run our forward pass.\n        log_probs = model(bow_vec)\n\n        # Step 4. Compute the loss, gradients, and update the parameters by\n        # calling optimizer.step()\n        loss = loss_function(log_probs, target)\n        loss.backward()\n        optimizer.step()\n\nfor instance, label in test_data:\n    bow_vec = autograd.Variable(make_bow_vector(instance, word_to_ix))\n    log_probs = model(bow_vec)\n    print(log_probs)\n\n# Index corresponding to Spanish goes up, English goes down!\nprint(next(model.parameters())[:, word_to_ix[\"creo\"]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We got the right answer! You can see that the log probability for\nSpanish is much higher for the first test example, and the log probability\nfor English is much higher for the second, as it should be.\n\nNow you see how to make a Pytorch component, pass some data through it,\nand do gradient updates. We are ready to dig deeper into what deep NLP\nhas to offer.\n\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.1" } }, "nbformat": 4, "nbformat_minor": 0 }