{ "cells": [ { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "a569ad9a41c0d43b58bb9425c5bad9df", "grade": false, "grade_id": "cell-2dfc0bc1e6fbbbd3", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "# Part 3: Sequence Classification" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "4e155a44e17248e3d102e1b80e24bf6c", "grade": false, "grade_id": "cell-16d5c7a45d3f9b23", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "__Before starting, we recommend you enable GPU acceleration if you're running on Colab.__" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "b1b77d0af7b67787cd29f90504f29014", "grade": false, "grade_id": "cell-9fa514521b79541d", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "# Execute this code block to install dependencies when running on colab\n", "try:\n", " import torch\n", "except:\n", " from os.path import exists\n", " from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag\n", " platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())\n", " cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\\.\\([0-9]*\\)\\.\\([0-9]*\\)$/cu\\1\\2/'\n", " accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'\n", "\n", " !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-1.0.0-{platform}-linux_x86_64.whl torchvision\n", "\n", "try: \n", " import torchbearer\n", "except:\n", " !pip install torchbearer\n", "\n", "try:\n", " import torchtext\n", "except:\n", " !pip install torchtext\n", " \n", "try:\n", " import spacy\n", "except:\n", " !pip install spacy\n", " \n", "try:\n", " spacy.load('en')\n", "except:\n", " !python -m spacy download en" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "e0fd6d0300fd2e1ce9a1b34ffd2fefe0", "grade": false, "grade_id": "cell-cabb9cac57ae217e", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "## Sequence Classification\n", "The problem that we will use to demonstrate sequence classification in this lab is the IMDB movie review sentiment classification problem. Each movie review is a variable sequence of words and the sentiment of each movie review must be classified.\n", "\n", "The Large Movie Review Dataset (often referred to as the IMDB dataset) contains 25,000 highly-polar movie reviews (good or bad) for training and the same amount again for testing. The problem is to determine whether a given movie review has a positive or negative sentiment. The data was collected by Stanford researchers and was used in a 2011 paper where a split of 50-50 of the data was used for training and test. An accuracy of 88.89% was achieved.\n", "\n", "We'll be using a **recurrent neural network** (RNN) as they are commonly used in analysing sequences. An RNN takes in sequence of words, $X=\\{x_1, ..., x_T\\}$, one at a time, and produces a _hidden state_, $h$, for each word. We use the RNN _recurrently_ by feeding in the current word $x_t$ as well as the hidden state from the previous word, $h_{t-1}$, to produce the next hidden state, $h_t$. 
\n", "\n", "$$h_t = \\text{RNN}(x_t, h_{t-1})$$\n", "\n", "Once we have our final hidden state, $h_T$, (from feeding in the last word in the sequence, $x_T$) we feed it through a linear layer, $f$, (also known as a fully connected layer), to receive our predicted sentiment, $\\hat{y} = f(h_T)$.\n", "\n", "Below shows an example sentence, with the RNN predicting zero, which indicates a negative sentiment. The RNN is shown in orange and the linear layer shown in silver. Note that we use the same RNN for every word, i.e. it has the same parameters. The initial hidden state, $h_0$, is a tensor initialized to all zeros. \n", "\n", "![](http://comp6248.ecs.soton.ac.uk/labs/lab7/assets/sentiment1.png)\n", "\n", "**Note:** some layers and steps have been omitted from the diagram, but these will be explained later.\n", "\n", "\n", "The TorchText library provides easy access to the IMDB dataset. The `IMDB` class allows you to load the dataset in a format that is ready for use in neural network and deep learning models, and TorchText's utility methods allow us to easily create batches of data that are `padded` to the same length (we need to pad shorter sentences in the batch to the length of the longest sentence)." ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "494c01d089301731550196dabe047067", "grade": false, "grade_id": "cell-23e92e167a2ccd52", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "One of the main concepts of TorchText is the `Field`. These define how your data should be processed. In our sentiment classification task the data consists of both the raw string of the review and the sentiment, either \"pos\" or \"neg\".\n", "\n", "The parameters of a `Field` specify how the data should be processed. \n", "\n", "We use the `TEXT` field to define how the review should be processed, and the `LABEL` field to process the sentiment. \n", "\n", "Our `TEXT` field has `tokenize='spacy'` as an argument. This defines that the \"tokenization\" (the act of splitting the string into discrete \"tokens\") should be done using the [spaCy](https://spacy.io) tokenizer. If no `tokenize` argument is passed, the default is simply splitting the string on spaces.\n", "\n", "`LABEL` is defined by a `LabelField`, a special subset of the `Field` class specifically used for handling labels. We will explain the `dtype` argument later.\n", "\n", "For more on `Fields`, go [here](https://github.com/pytorch/text/blob/master/torchtext/data/field.py)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "ecbcba375e6e1012f085383a6ccee602", "grade": false, "grade_id": "cell-e0561eba5550d048", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "import torch\n", "from torchtext import data\n", "\n", "TEXT = data.Field(tokenize='spacy', lower=True, include_lengths=True)\n", "LABEL = data.LabelField(dtype=torch.float)" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "c45641b2f9364a3c13d1c00d9c92682e", "grade": false, "grade_id": "cell-0689e30f35617f29", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "The following code automatically downloads the IMDb dataset and splits it into the canonical train/test splits as `torchtext.datasets` objects. It process the data using the `Fields` we have previously defined. 
Note that the following can take a couple of minutes to run due to the tokenisation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "73091f32f831e502e86041779578b7d0", "grade": false, "grade_id": "cell-bfc816072fd54de0", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "from torchtext import datasets\n", "\n", "train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "2ade9f38fd31f83a376798748847ceb4", "grade": false, "grade_id": "cell-a98e9c37bcda1b30", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "We can see how many examples are in each split by checking their length." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "a31d190aee8c06a8f3fa0fe2957bb31e", "grade": false, "grade_id": "cell-a05363648fa59cae", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "print(f'Number of training examples: {len(train_data)}')\n", "print(f'Number of testing examples: {len(test_data)}')" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "f868c66c17e0c39a48728520e07cb468", "grade": false, "grade_id": "cell-83b4651e016c211e", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "We can also check an example." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "76a3507d5c228460a24aea95f68c5507", "grade": false, "grade_id": "cell-a3aaf4270ecf8c11", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "print(vars(train_data.examples[0]))" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "5f7210df2512451c6d38c52c76ddf6c1", "grade": false, "grade_id": "cell-1c8ef9d389e1ea7c", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "The IMDb dataset only has train/test splits, so we need to create a validation set. We can do this with the `.split()` method. \n", "\n", "By default this splits 70/30; however, by passing a `split_ratio` argument, we can change the ratio of the split, i.e. a `split_ratio` of 0.8 would mean 80% of the examples make up the training set and 20% make up the validation set. \n", "\n", "We could also pass our random seed to the `random_state` argument to ensure that we get the same train/validation split each time; here we simply use the default arguments (a sketch of passing both arguments explicitly is shown a little further below)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "6282e5f4b4da6ea29f0fa5d349055147", "grade": false, "grade_id": "cell-9a9d0a261cd1d62c", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "import random\n", "\n", "train_data, valid_data = train_data.split()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "8b77b32ded3821f4d03d18d256df19ae", "grade": false, "grade_id": "cell-ebac196da95db0fb", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "Again, we'll view how many examples are in each split."
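, "\n", "\n", "As an aside, here is the sketch mentioned above of how the split could have been made explicit and repeatable. It is not part of the lab code, and it assumes the legacy torchtext `Dataset.split` signature with `split_ratio` and `random_state` arguments:\n", "\n", "```python\n", "import random\n", "\n", "random.seed(1234)  # any fixed seed; 1234 is an arbitrary choice\n", "# an 80% training / 20% validation split, repeatable across runs\n", "train_data, valid_data = train_data.split(split_ratio=0.8, random_state=random.getstate())\n", "```"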
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "8e00a4cf9bddfe86221ed1b820fcea6c", "grade": false, "grade_id": "cell-11de3fcbde1d6f7f", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "print(f'Number of training examples: {len(train_data)}')\n", "print(f'Number of validation examples: {len(valid_data)}')\n", "print(f'Number of testing examples: {len(test_data)}')" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "eae6f73416a822fca491ebf20f878aa5", "grade": false, "grade_id": "cell-921d2a5f1737e53b", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "Next, we have to build a _vocabulary_. This is effectively a look up table where every unique word in your data set has a corresponding _index_ (an integer).\n", "\n", "We do this as our machine learning model cannot operate on strings, only numbers. Each _index_ is used to construct a _one-hot_ vector for each word. A one-hot vector is a vector where all of the elements are 0, except one, which is 1, and dimensionality is the total number of unique words in your vocabulary, commonly denoted by $V$.\n", "\n", "![](http://comp6248.ecs.soton.ac.uk/labs/lab7/assets/sentiment5.png)\n", "\n", "The number of unique words in our training set is over 100,000, which means that our one-hot vectors will have over 100,000 dimensions! This will make training slow and possibly won't fit onto your GPU (if you're using one). \n", "\n", "There are two ways to effectively cut-down our vocabulary, we can either only take the top $n$ most common words or ignore words that appear less than $m$ times. We'll do the former, only keeping the top 25,000 words.\n", "\n", "What do we do with words that appear in examples but we have cut from the vocabulary? We replace them with a special _unknown_ or `` token. For example, if the sentence was \"This film is great and I love it\" but the word \"love\" was not in the vocabulary, it would become \"This film is great and I `` it\".\n", "\n", "The following builds the vocabulary, only keeping the most common `max_size` tokens." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "6746775e3c0746e78afa6b2382d0f249", "grade": false, "grade_id": "cell-1cf0d6f0d09b9333", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "TEXT.build_vocab(train_data, max_size=25000)\n", "LABEL.build_vocab(train_data)" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "c406d281eaa4f81948f1e423fd4876ed", "grade": false, "grade_id": "cell-1ca43190cc40ef00", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "Why do we only build the vocabulary on the training set? When testing any machine learning system you do not want to look at the test set in any way. We do not include the validation set as we want it to reflect the test set as much as possible." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "4e073ef87fd96107616b5016c51b53b7", "grade": false, "grade_id": "cell-79a871ebea509deb", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "print(f\"Unique tokens in TEXT vocabulary: {len(TEXT.vocab)}\")\n", "print(f\"Unique tokens in LABEL vocabulary: {len(LABEL.vocab)}\")" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "0d084d49bcadb8addbb1b8cd39bda543", "grade": false, "grade_id": "cell-74d663b76304878f", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "Why is the vocab size 25002 and not 25000? One of the addition tokens is the `` token and the other is a `` token.\n", "\n", "When we feed sentences into our model, we feed a _batch_ of them at a time, i.e. more than one at a time, and all sentences in the batch need to be the same size. Thus, to ensure each sentence in the batch is the same size, any sentences which are shorter than the longest within the batch are padded.\n", "\n", "![](http://comp6248.ecs.soton.ac.uk/labs/lab7/assets/sentiment6.png)\n", "\n", "We can also view the most common words in the vocabulary and their frequencies." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "5c353d4eec3c94b10fb4228a9af8f274", "grade": false, "grade_id": "cell-0efc8b86fea8d6e2", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "print(TEXT.vocab.freqs.most_common(20))" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "6f0044693302832d5b864ea9f20bcaa2", "grade": false, "grade_id": "cell-68985316db3edb24", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "We can also see the vocabulary directly using either the `stoi` (**s**tring **to** **i**nt) or `itos` (**i**nt **to** **s**tring) method." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "862acc66ef827e8a3e33ca822b682728", "grade": false, "grade_id": "cell-3f6931771dfb8b05", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "print(TEXT.vocab.itos[:10])" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "e5f1fcac73f30d4b4f3fff4d3ba247b3", "grade": false, "grade_id": "cell-126deacfb7443e94", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "We can also check the labels, ensuring 0 is for negative and 1 is for positive." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "7d95213f5534ac30134149c90fa6c6c7", "grade": false, "grade_id": "cell-c45080555e1e47a8", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "print(LABEL.vocab.stoi)" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "217fe362a504a59655710774d6bd4f3e", "grade": false, "grade_id": "cell-35a9ad2d4bf17b42", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "The final step of preparing the data is creating the iterators. 
We iterate over these in the training/evaluation loop, and they return a batch of examples (indexed and converted into tensors) at each iteration.\n", "\n", "We'll use a `BucketIterator`, which is a special type of iterator that will return a batch of examples where each example is of a similar length, minimizing the amount of padding per example. Torchtext will pad for us automatically (handled by the `Field` object). We'll request that the items within each batch produced by the `BucketIterator` are sorted by length.\n", "\n", "We also want to place the tensors returned by the iterator on the GPU (if you're using one). PyTorch handles this using `torch.device`; we then pass this device to the iterator." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "526210054aa53f54d8dad8acf67d1dcf", "grade": false, "grade_id": "cell-722b81c8ccb13d25", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "BATCH_SIZE = 64\n", "\n", "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n", "\n", "train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(\n", " (train_data, valid_data, test_data), \n", " batch_size=BATCH_SIZE,\n", " device=device,\n", " sort_key=lambda x: len(x.text),\n", " sort_within_batch=True)" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "def10308b65f2ed87e6fe3d3f2d89727", "grade": false, "grade_id": "cell-5d7acf8d6db191d3", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "## Build the Model\n", "\n", "The next stage is building the model that we'll eventually train and evaluate. \n", "\n", "There is a small amount of boilerplate code when creating models in PyTorch; note how our `RNN` class is a sub-class of `nn.Module` and the use of `super`.\n", "\n", "Within the `__init__` we define the _layers_ of the module. Our three layers are an _embedding_ layer, our RNN, and a _linear_ layer. All layers have their parameters initialized to random values, unless explicitly specified.\n", "\n", "The embedding layer is used to transform our sparse one-hot vector (sparse as most of the elements are 0) into a dense embedding vector (dense as the dimensionality is a lot smaller and all the elements are real numbers). This embedding layer is simply a single fully connected layer. As well as reducing the dimensionality of the input to the RNN, the theory is that words which have a similar impact on the sentiment of the review are mapped close together in this dense vector space.
For more information about word embeddings, see [here](https://monkeylearn.com/blog/word-embeddings-transform-text-numbers/).\n", "\n", "The RNN layer is our RNN which takes in our dense vector and the previous hidden state $h_{t-1}$, which it uses to calculate the next hidden state, $h_t$.\n", "\n", "![](http://comp6248.ecs.soton.ac.uk/labs/lab7/assets/sentiment7.png)\n", "\n", "Finally, the linear layer takes the final hidden state and feeds it through a fully connected layer, $f(h_T)$, transforming it to the correct output dimension.\n", "\n", "The `forward` method is called when we feed examples into our model.\n", "\n", "Each batch is a tuple containing a tensor of size _**[max sentence length, batch size]**_ and a tensor of size _**[batch size]**_ containing the true lengths of each sentence (remember, they won't necessarily be the same; some reviews are much longer than others). \n", "\n", "The first tensor in the tuple contains the ordered word indexes for each review in the batch. The act of converting a list of tokens into a list of indexes is commonly called *numericalizing*.\n", "\n", "The input batch is then passed through the embedding layer to get `embedded`, which gives us a dense vector representation of our sentences. `embedded` is a tensor of size _**[sentence length, batch size, embedding dim]**_. \n", "\n", "`embedded` is then fed into a function called `pack_padded_sequence` before being fed into the RNN. `pack_padded_sequence` is used to create a data structure that allows the RNN to 'mask' off the padding during the BPTT process (we don't want to learn the padding, as this could drastically influence the results!). In some frameworks you must feed the initial hidden state, $h_0$, into the RNN, however in PyTorch, if no initial hidden state is passed as an argument it defaults to a tensor of all zeros.\n", "\n", "The RNN returns 2 outputs: `output` of size _**[sentence length, batch size, hidden dim]**_ (returned as a packed sequence here, hence the name `packed_output` in the code) and `hidden` of size _**[1, batch size, hidden dim]**_. `output` is the concatenation of the hidden state from every time step, whereas `hidden` is simply the final hidden state. \n", "\n", "Finally, we feed the last hidden state, `hidden`, through the linear layer, `fc`, to produce a prediction. Note the `squeeze` method, which is used to remove a dimension of size 1. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "c785b0842586d20d70510c6f8a2c6b39", "grade": false, "grade_id": "cell-fbb5f94b744dd6db", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "import torch.nn as nn\n", "\n", "class RNN(nn.Module):\n", " def __init__(self, input_dim, embedding_dim, hidden_dim, output_dim):\n", " super().__init__()\n", " \n", " self.embedding = nn.Embedding(input_dim, embedding_dim)\n", " self.rnn = nn.RNN(embedding_dim, hidden_dim)\n", " self.fc = nn.Linear(hidden_dim, output_dim)\n", " \n", " def forward(self, text, lengths):\n", " embedded = self.embedding(text)\n", " embedded = nn.utils.rnn.pack_padded_sequence(embedded, lengths)\n", " packed_output, hidden = self.rnn(embedded)\n", "\n", " return self.fc(hidden.squeeze(0))" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "3c834acdf19cf4124c69e3b92c9fbd91", "grade": false, "grade_id": "cell-8b4748e05072b330", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "We now create an instance of our RNN class.
\n", "\n", "The input dimension is the dimension of the one-hot vectors, which is equal to the vocabulary size. \n", "\n", "The embedding dimension is the size of the dense word vectors. This is usually around 50-250 dimensions, but depends on the size of the vocabulary.\n", "\n", "The hidden dimension is the size of the hidden states. This is usually around 100-500 dimensions, but also depends on factors such as on the vocabulary size, the size of the dense vectors and the complexity of the task.\n", "\n", "The output dimension is usually the number of classes, however in the case of only 2 classes the output value is between 0 and 1 and thus can be 1-dimensional, i.e. a single scalar real number." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "a7f66791ac9794a7da9ee4e6fc743b24", "grade": false, "grade_id": "cell-751c2df54b71d158", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "INPUT_DIM = len(TEXT.vocab)\n", "EMBEDDING_DIM = 50\n", "HIDDEN_DIM = 100\n", "OUTPUT_DIM = 1\n", "\n", "model = RNN(INPUT_DIM, EMBEDDING_DIM, HIDDEN_DIM, OUTPUT_DIM)" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "591ccfef47e2bcc226185c87e568821a", "grade": false, "grade_id": "cell-daf23924e258a608", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "# Train the model\n", "\n", "Now we'll set up the training and then train the model.\n", "\n", "First, we'll create an optimizer. This is the algorithm we use to update the parameters of the module. Here, we'll use _stochastic gradient descent_ (SGD). The first argument is the parameters that will be updated by the optimizer, the second is the learning rate, i.e. how much we'll change the parameters by when we do a parameter update." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "4a0e4ee985e7d35fcf80b8b3eaabdbde", "grade": false, "grade_id": "cell-d7566606b6f480ec", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "import torch.optim as optim\n", "\n", "optimizer = optim.SGD(model.parameters(), lr=0.001)" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "072bc1b948afe036e31f60a05469fc27", "grade": false, "grade_id": "cell-8981a2c109df3c53", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "Next, we'll define our loss function. In PyTorch this is commonly called a criterion. \n", "\n", "The loss function here is _binary cross entropy with logits_. \n", "\n", "Our model currently outputs an unbound real number. As our labels are either 0 or 1, we want to restrict the predictions to a number between 0 and 1. We do this using the _sigmoid_ function. \n", "\n", "We then use this this bound scalar to calculate the loss using binary cross entropy. \n", "\n", "The `BCEWithLogitsLoss` criterion carries out both the sigmoid and the binary cross entropy steps." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "dee16f7d05d182806906fc2f3d6c4484", "grade": false, "grade_id": "cell-99068d084d5dfb73", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "criterion = nn.BCEWithLogitsLoss()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "7a7cef0ba9cb2d6b49111244fb8f5841", "grade": false, "grade_id": "cell-71e6ceff6ffba4f7", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "Finally, before creating a Torchbearer trial and copying the model to the GPU (if we have one), we need to adapt the format of the batches returned by TorchText. TorchText's iterators return an object representing a batch of data from which you can access the fields used to store the underlying data (e.g. the labels and words in our case). Torchbearer on the other hand needs to know what the X's and y's are explicitly. \n", "\n", "There are a number of ways in which we can adapt the batch data format, but one of the easiest conceptually is to write a simple wrapper iterator that reads the next torchtext batch and returns a tuple of objects (the X and y). Note that because we specified that we want the lengths of the sentences earlier, the X's will be a tuple of the sentences and their lengths. If we had not requested the lengths, then the X's would just be a tensor encoding the padded sentences." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "fd0f46de194f98826875d7431eee5918", "grade": false, "grade_id": "cell-f011ec7d73d7ef46", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "from torchbearer import Trial\n", " \n", "class MyIter:\n", " def __init__(self, it):\n", " self.it = it\n", " def __iter__(self):\n", " for batch in self.it:\n", " yield (batch.text, batch.label.unsqueeze(1))\n", " def __len__(self):\n", " return len(self.it)\n", "\n", "torchbearer_trial = Trial(model, optimizer, criterion, metrics=['acc', 'loss']).to(device)\n", "torchbearer_trial.with_generators(train_generator=MyIter(train_iterator), val_generator=MyIter(valid_iterator), test_generator=MyIter(test_iterator))\n", "torchbearer_trial.run(epochs=5)\n", "torchbearer_trial.predict()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "9853268a5e221536d6e9650e8a77d473", "grade": false, "grade_id": "cell-03b5931999ab62d1", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "__Use the box below to comment on and give insight into the performance of the above model:__" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "nbgrader": { "checksum": "615ab310b18718e361d870399422e5ae", "grade": true, "grade_id": "cell-5bf61cbc741af01b", "locked": false, "points": 5, "schema_version": 1, "solution": true } }, "source": [ "YOUR ANSWER HERE" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "fbeb865b3ebb7348e9d0e0e2b3f1e6d2", "grade": false, "grade_id": "cell-acff7e648aa99e42", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "Now try and build a better model. Rather than using a plain RNN, we'll instead use a (single layer) LSTM, and we'll use Adam with an initial learning rate of 0.01 as the optimiser. 
__Complete the following code to implement the improved model, and then train it:__" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "checksum": "ef8857370d3a96cdb46acea6f94f99ac", "grade": true, "grade_id": "cell-7c7913d0313ff2e8", "locked": false, "points": 5, "schema_version": 1, "solution": true } }, "outputs": [], "source": [ "class ImprovedRNN(nn.Module):\n", " def __init__(self, input_dim, embedding_dim, hidden_dim, output_dim):\n", " super().__init__()\n", " \n", " self.embedding = nn.Embedding(input_dim, embedding_dim)\n", " # YOUR CODE HERE\n", " raise NotImplementedError()\n", " self.fc = nn.Linear(hidden_dim, output_dim)\n", " \n", " def forward(self, text, lengths):\n", " embedded = self.embedding(text)\n", " embedded = nn.utils.rnn.pack_padded_sequence(embedded, lengths)\n", " \n", " # YOUR CODE HERE\n", " raise NotImplementedError()\n", " \n", "INPUT_DIM = len(TEXT.vocab)\n", "EMBEDDING_DIM = 50\n", "HIDDEN_DIM = 100\n", "OUTPUT_DIM = 1\n", "\n", "imodel = ImprovedRNN(INPUT_DIM, EMBEDDING_DIM, HIDDEN_DIM, OUTPUT_DIM)\n", "\n", "# TODO: Train and evaluate the model\n", "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "a84da93b0d8631bd2279f0a0ca7cecf7", "grade": false, "grade_id": "cell-9da7d879835eafa4", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "__What do you observe about the performance of this model? What would you do next if you wanted to improve it further? Write your answers in the box below:__" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "checksum": "bd9fe3f4f88270f113a2731db2b1b2b7", "grade": true, "grade_id": "cell-856a0834622f664f", "locked": false, "points": 10, "schema_version": 1, "solution": true } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "21752b32fcb29f71d7da22bf21a14a51", "grade": false, "grade_id": "cell-f5e06c722621a0d2", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "## User Input\n", "\n", "We can now use our models to predict the sentiment of any sentence we give them. As the model has been trained on movie reviews, the sentences provided should also be movie reviews.\n", "\n", "Our `predict_sentiment` function does a few things:\n", "- tokenizes the sentence, i.e. splits it from a raw string into a list of tokens\n", "- indexes the tokens by converting them into their integer representation from our vocabulary\n", "- converts the indexes, which are a Python list, into a PyTorch tensor\n", "- adds a batch dimension by `unsqueeze`ing\n", "- feeds the tensor (together with its length) through the model to get a prediction\n", "- squashes the output prediction to a real number between 0 and 1 with the `sigmoid` function\n", "- converts the tensor holding a single value into a Python number with the `item()` method\n", "\n", "We are expecting reviews with a negative sentiment to return a value close to 0 and positive reviews to return a value close to 1."
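, "\n", "\n", "Once you have a score back from `predict_sentiment` (defined next), a sketch of turning it into a hard label (the 0.5 threshold is an assumption that simply mirrors the 0/1 label encoding):\n", "\n", "```python\n", "score = predict_sentiment(imodel, 'An average film')\n", "label = 'pos' if score > 0.5 else 'neg'\n", "print(score, label)\n", "```"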
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "03dd780275233d49f92dc46d21217de6", "grade": false, "grade_id": "cell-256f5d0cab3585a2", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "import spacy\n", "nlp = spacy.load('en')\n", "\n", "def predict_sentiment(model, sentence):\n", " tokenized = [tok.text for tok in nlp.tokenizer(sentence)]\n", " indexed = [TEXT.vocab.stoi[t] for t in tokenized]\n", " tensor = torch.LongTensor(indexed).to(device)\n", " tensor = tensor.unsqueeze(1)\n", " prediction = torch.sigmoid(model((tensor, torch.tensor([tensor.shape[0]]))))\n", " return prediction.item()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "2f0ff29f9b4ff26535014b40ac64f159", "grade": false, "grade_id": "cell-44ca76b13f0ef977", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "An example negative review..." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "d05743968f5f1f6c0665d09c34659f71", "grade": false, "grade_id": "cell-85a9deecd4e90b8b", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "predict_sentiment(imodel, \"This film is terrible\")" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "f890392d505e39def4b5dc80d028019f", "grade": false, "grade_id": "cell-78424acf52854f0e", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "and an example positive review..." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "ff7023e20c6bba3f0a62bd3784853846", "grade": false, "grade_id": "cell-dc6a5b212298be67", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "predict_sentiment(imodel, \"This film is great\")" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "3033258c6669e324eb0eba06b20b0275", "grade": false, "grade_id": "cell-433552e8e38ce037", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "__Use the box below to try classifying some of your own 'movie reviews':__" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.7" } }, "nbformat": 4, "nbformat_minor": 2 }