{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks.\n",
    "- Author: Sebastian Raschka\n",
    "- GitHub Repository: https://github.com/rasbt/deeplearning-models"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "vY4SK0xKAJgm"
   },
   "source": [
    "# Model Zoo -- RNN with LSTM and pre-trained GloVe Word Vectors (IMDB sentiment classification)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "sc6xejhY-NzZ"
   },
   "source": [
    "Demo of a simple RNN for sentiment classification (here: a binary classification problem with two labels, positive and negative) using LSTM (Long Short Term Memory) cells.\n",
    "\n",
    "This version of the RNN uses pre-trained word vectors (here: GloVe [1]), which are readily available in PyTorch via the `build_vocab` method of a tokenized data field.\n",
    "\n",
    "\n",
    "- [1] Pennington, J., Socher, R., & Manning, C. (2014). Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) (pp. 1532-1543).\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "moNmVfuvnImW"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Sebastian Raschka \n",
      "\n",
      "CPython 3.6.8\n",
      "IPython 7.2.0\n",
      "\n",
      "torch 1.1.0\n"
     ]
    }
   ],
   "source": [
    "%load_ext watermark\n",
    "%watermark -a 'Sebastian Raschka' -v -p torch\n",
    "\n",
    "import torch\n",
    "import torch.nn.functional as F\n",
    "from torchtext import data\n",
    "from torchtext import datasets\n",
    "import time\n",
    "import random\n",
    "\n",
    "torch.backends.cudnn.deterministic = True"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "GSRL42Qgy8I8"
   },
   "source": [
    "## General Settings"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "OvW1RgfepCBq"
   },
   "outputs": [],
   "source": [
    "RANDOM_SEED = 123\n",
    "torch.manual_seed(RANDOM_SEED)\n",
    "\n",
    "VOCABULARY_SIZE = 20000\n",
    "LEARNING_RATE = 1e-4\n",
    "BATCH_SIZE = 128\n",
    "NUM_EPOCHS = 15\n",
    "DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
    "\n",
    "EMBEDDING_DIM = 128\n",
    "HIDDEN_DIM = 256\n",
    "OUTPUT_DIM = 1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "mQMmKUEisW4W"
   },
   "source": [
    "## Dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "4GnH64XvsV8n"
   },
   "source": [
    "Load the IMDB Movie Review dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 68
    },
    "colab_type": "code",
    "id": "WZ_4jiHVnMxN",
    "outputId": "dfa51c04-4845-44c3-f50b-d36d41f132b8"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Num Train: 20000\n",
      "Num Valid: 5000\n",
      "Num Test: 25000\n"
     ]
    }
   ],
   "source": [
    "TEXT = data.Field(tokenize='spacy',\n",
    "                  include_lengths=True) # necessary for packed_padded_sequence\n",
    "LABEL = data.LabelField(dtype=torch.float)\n",
    "train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)\n",
    "train_data, valid_data = train_data.split(random_state=random.seed(RANDOM_SEED),\n",
    "                                          split_ratio=0.8)\n",
    "\n",
    "print(f'Num Train: {len(train_data)}')\n",
    "print(f'Num Valid: {len(valid_data)}')\n",
    "print(f'Num Test: {len(test_data)}')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "L-TBwKWPslPa"
   },
   "source": [
    "Build the vocabulary based on the top \"VOCABULARY_SIZE\" words:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 51
    },
    "colab_type": "code",
    "id": "e8uNrjdtn4A8",
    "outputId": "6cf499d7-7722-4da0-8576-ee0f218cc6e3"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Vocabulary size: 20002\n",
      "Number of classes: 2\n"
     ]
    }
   ],
   "source": [
    "TEXT.build_vocab(train_data,\n",
    "                 max_size=VOCABULARY_SIZE,\n",
    "                 vectors='glove.6B.100d',\n",
    "                 unk_init=torch.Tensor.normal_)\n",
    "LABEL.build_vocab(train_data)\n",
    "\n",
    "print(f'Vocabulary size: {len(TEXT.vocab)}')\n",
    "print(f'Number of classes: {len(LABEL.vocab)}')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "JpEMNInXtZsb"
   },
   "source": [
    "The TEXT.vocab dictionary will contain the word counts and indices. The reason why the number of words is VOCABULARY_SIZE + 2 is that it contains to special tokens for padding and unknown words: `<unk>` and `<pad>`."
   ]
  },
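  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, the vocabulary object exposes the raw word counts (`freqs`, a `collections.Counter`), the index-to-token list (`itos`), and the token-to-index mapping (`stoi`). A minimal sketch (the exact counts will depend on the tokenizer and the random split):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# most frequent words in the training corpus\n",
    "print(TEXT.vocab.freqs.most_common(5))\n",
    "\n",
    "# first entries of the index-to-token list; indices 0 and 1\n",
    "# are the special tokens <unk> and <pad>\n",
    "print(TEXT.vocab.itos[:5])\n",
    "\n",
    "# token-to-index lookup\n",
    "print(TEXT.vocab.stoi['the'])"
   ]
  },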
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "eIQ_zfKLwjKm"
   },
   "source": [
    "Make dataset iterators:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "i7JiHR1stHNF"
   },
   "outputs": [],
   "source": [
    "train_loader, valid_loader, test_loader = data.BucketIterator.splits(\n",
    "    (train_data, valid_data, test_data), \n",
    "    batch_size=BATCH_SIZE,\n",
    "    sort_within_batch=True, # necessary for packed_padded_sequence\n",
    "    device=DEVICE)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "R0pT_dMRvicQ"
   },
   "source": [
    "Testing the iterators (note that the number of rows depends on the longest document in the respective batch):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 204
    },
    "colab_type": "code",
    "id": "y8SP_FccutT0",
    "outputId": "fe33763a-4560-4dee-adee-31cc6c48b0b2"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train\n",
      "Text matrix size: torch.Size([133, 128])\n",
      "Target vector size: torch.Size([128])\n",
      "\n",
      "Valid:\n",
      "Text matrix size: torch.Size([63, 128])\n",
      "Target vector size: torch.Size([128])\n",
      "\n",
      "Test:\n",
      "Text matrix size: torch.Size([42, 128])\n",
      "Target vector size: torch.Size([128])\n"
     ]
    }
   ],
   "source": [
    "print('Train')\n",
    "for batch in train_loader:\n",
    "    print(f'Text matrix size: {batch.text[0].size()}')\n",
    "    print(f'Target vector size: {batch.label.size()}')\n",
    "    break\n",
    "    \n",
    "print('\\nValid:')\n",
    "for batch in valid_loader:\n",
    "    print(f'Text matrix size: {batch.text[0].size()}')\n",
    "    print(f'Target vector size: {batch.label.size()}')\n",
    "    break\n",
    "    \n",
    "print('\\nTest:')\n",
    "for batch in test_loader:\n",
    "    print(f'Text matrix size: {batch.text[0].size()}')\n",
    "    print(f'Target vector size: {batch.label.size()}')\n",
    "    break"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "G_grdW3pxCzz"
   },
   "source": [
    "## Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "nQIUm5EjxFNa"
   },
   "outputs": [],
   "source": [
    "import torch.nn as nn\n",
    "\n",
    "class RNN(nn.Module):\n",
    "    def __init__(self, input_dim, embedding_dim, hidden_dim, output_dim):\n",
    "        \n",
    "        super().__init__()\n",
    "        \n",
    "        self.embedding = nn.Embedding(input_dim, embedding_dim)\n",
    "        self.rnn = nn.LSTM(embedding_dim, hidden_dim)\n",
    "        self.fc = nn.Linear(hidden_dim, output_dim)\n",
    "        \n",
    "    def forward(self, text, text_length):\n",
    "\n",
    "        #[sentence len, batch size] => [sentence len, batch size, embedding size]\n",
    "        embedded = self.embedding(text)\n",
    "        \n",
    "        packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, text_length)\n",
    "        \n",
    "        #[sentence len, batch size, embedding size] => \n",
    "        #  output: [sentence len, batch size, hidden size]\n",
    "        #  hidden: [1, batch size, hidden size]\n",
    "        packed_output, (hidden, cell) = self.rnn(packed)\n",
    "        \n",
    "        return self.fc(hidden.squeeze(0)).view(-1)"
   ]
  },
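  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The forward pass above relies on `pack_padded_sequence` so that the LSTM only processes the real (non-padding) time steps of each sequence. The following minimal sketch (toy tensors, sizes chosen purely for illustration) shows what packing does to a padded batch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence\n",
    "\n",
    "# toy batch of 2 sequences (lengths 3 and 2) padded to length 3;\n",
    "# layout is [sentence len, batch size, feature dim], as in the model above\n",
    "toy_batch = torch.randn(3, 2, 4)\n",
    "toy_lengths = [3, 2]  # must be sorted in decreasing order\n",
    "\n",
    "packed = pack_padded_sequence(toy_batch, toy_lengths)\n",
    "# packed.data stacks only the 3 + 2 = 5 real time steps\n",
    "print(packed.data.size())  # torch.Size([5, 4])\n",
    "\n",
    "# pad_packed_sequence restores the padded layout and the lengths\n",
    "unpacked, unpacked_lengths = pad_packed_sequence(packed)\n",
    "print(unpacked.size())  # torch.Size([3, 2, 4])"
   ]
  },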
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "Ik3NF3faxFmZ"
   },
   "outputs": [],
   "source": [
    "INPUT_DIM = len(TEXT.vocab)\n",
    "\n",
    "torch.manual_seed(RANDOM_SEED)\n",
    "model = RNN(INPUT_DIM, EMBEDDING_DIM, HIDDEN_DIM, OUTPUT_DIM)\n",
    "model = model.to(DEVICE)\n",
    "optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "Lv9Ny9di6VcI"
   },
   "source": [
    "## Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "T5t1Afn4xO11"
   },
   "outputs": [],
   "source": [
    "def compute_binary_accuracy(model, data_loader, device):\n",
    "    model.eval()\n",
    "    correct_pred, num_examples = 0, 0\n",
    "    with torch.no_grad():\n",
    "        for batch_idx, batch_data in enumerate(data_loader):\n",
    "            text, text_lengths = batch_data.text\n",
    "            logits = model(text, text_lengths)\n",
    "            predicted_labels = (torch.sigmoid(logits) > 0.5).long()\n",
    "            num_examples += batch_data.label.size(0)\n",
    "            correct_pred += (predicted_labels == batch_data.label.long()).sum()\n",
    "        return correct_pred.float()/num_examples * 100"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 1836
    },
    "colab_type": "code",
    "id": "EABZM8Vo0ilB",
    "outputId": "5d45e293-9909-4588-e793-8dfaf72e5c67"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch: 001/015 | Batch 000/157 | Cost: 0.6968\n",
      "Epoch: 001/015 | Batch 050/157 | Cost: 0.6824\n",
      "Epoch: 001/015 | Batch 100/157 | Cost: 0.6816\n",
      "Epoch: 001/015 | Batch 150/157 | Cost: 0.6831\n",
      "training accuracy: 57.72%\n",
      "valid accuracy: 56.58%\n",
      "Time elapsed: 0.16 min\n",
      "Epoch: 002/015 | Batch 000/157 | Cost: 0.6804\n",
      "Epoch: 002/015 | Batch 050/157 | Cost: 0.6673\n",
      "Epoch: 002/015 | Batch 100/157 | Cost: 0.6163\n",
      "Epoch: 002/015 | Batch 150/157 | Cost: 0.6680\n",
      "training accuracy: 68.96%\n",
      "valid accuracy: 67.60%\n",
      "Time elapsed: 0.32 min\n",
      "Epoch: 003/015 | Batch 000/157 | Cost: 0.6752\n",
      "Epoch: 003/015 | Batch 050/157 | Cost: 0.5386\n",
      "Epoch: 003/015 | Batch 100/157 | Cost: 0.6342\n",
      "Epoch: 003/015 | Batch 150/157 | Cost: 0.5390\n",
      "training accuracy: 73.32%\n",
      "valid accuracy: 72.52%\n",
      "Time elapsed: 0.48 min\n",
      "Epoch: 004/015 | Batch 000/157 | Cost: 0.5875\n",
      "Epoch: 004/015 | Batch 050/157 | Cost: 0.5189\n",
      "Epoch: 004/015 | Batch 100/157 | Cost: 0.4593\n",
      "Epoch: 004/015 | Batch 150/157 | Cost: 0.5351\n",
      "training accuracy: 78.23%\n",
      "valid accuracy: 77.30%\n",
      "Time elapsed: 0.64 min\n",
      "Epoch: 005/015 | Batch 000/157 | Cost: 0.6019\n",
      "Epoch: 005/015 | Batch 050/157 | Cost: 0.4646\n",
      "Epoch: 005/015 | Batch 100/157 | Cost: 0.4797\n",
      "Epoch: 005/015 | Batch 150/157 | Cost: 0.4749\n",
      "training accuracy: 80.94%\n",
      "valid accuracy: 78.84%\n",
      "Time elapsed: 0.80 min\n",
      "Epoch: 006/015 | Batch 000/157 | Cost: 0.4406\n",
      "Epoch: 006/015 | Batch 050/157 | Cost: 0.4994\n",
      "Epoch: 006/015 | Batch 100/157 | Cost: 0.3868\n",
      "Epoch: 006/015 | Batch 150/157 | Cost: 0.3962\n",
      "training accuracy: 83.11%\n",
      "valid accuracy: 80.14%\n",
      "Time elapsed: 0.97 min\n",
      "Epoch: 007/015 | Batch 000/157 | Cost: 0.4196\n",
      "Epoch: 007/015 | Batch 050/157 | Cost: 0.4324\n",
      "Epoch: 007/015 | Batch 100/157 | Cost: 0.4493\n",
      "Epoch: 007/015 | Batch 150/157 | Cost: 0.3913\n",
      "training accuracy: 84.67%\n",
      "valid accuracy: 81.64%\n",
      "Time elapsed: 1.13 min\n",
      "Epoch: 008/015 | Batch 000/157 | Cost: 0.3031\n",
      "Epoch: 008/015 | Batch 050/157 | Cost: 0.4083\n",
      "Epoch: 008/015 | Batch 100/157 | Cost: 0.4039\n",
      "Epoch: 008/015 | Batch 150/157 | Cost: 0.3657\n",
      "training accuracy: 85.99%\n",
      "valid accuracy: 83.06%\n",
      "Time elapsed: 1.29 min\n",
      "Epoch: 009/015 | Batch 000/157 | Cost: 0.3764\n",
      "Epoch: 009/015 | Batch 050/157 | Cost: 0.3988\n",
      "Epoch: 009/015 | Batch 100/157 | Cost: 0.4684\n",
      "Epoch: 009/015 | Batch 150/157 | Cost: 0.4127\n",
      "training accuracy: 85.90%\n",
      "valid accuracy: 81.32%\n",
      "Time elapsed: 1.46 min\n",
      "Epoch: 010/015 | Batch 000/157 | Cost: 0.3277\n",
      "Epoch: 010/015 | Batch 050/157 | Cost: 0.3413\n",
      "Epoch: 010/015 | Batch 100/157 | Cost: 0.3745\n",
      "Epoch: 010/015 | Batch 150/157 | Cost: 0.4328\n",
      "training accuracy: 82.66%\n",
      "valid accuracy: 78.84%\n",
      "Time elapsed: 1.62 min\n",
      "Epoch: 011/015 | Batch 000/157 | Cost: 0.3272\n",
      "Epoch: 011/015 | Batch 050/157 | Cost: 0.2448\n",
      "Epoch: 011/015 | Batch 100/157 | Cost: 0.2647\n",
      "Epoch: 011/015 | Batch 150/157 | Cost: 0.3507\n",
      "training accuracy: 87.94%\n",
      "valid accuracy: 84.28%\n",
      "Time elapsed: 1.79 min\n",
      "Epoch: 012/015 | Batch 000/157 | Cost: 0.2801\n",
      "Epoch: 012/015 | Batch 050/157 | Cost: 0.2928\n",
      "Epoch: 012/015 | Batch 100/157 | Cost: 0.3279\n",
      "Epoch: 012/015 | Batch 150/157 | Cost: 0.1846\n",
      "training accuracy: 90.02%\n",
      "valid accuracy: 85.04%\n",
      "Time elapsed: 1.95 min\n",
      "Epoch: 013/015 | Batch 000/157 | Cost: 0.3107\n",
      "Epoch: 013/015 | Batch 050/157 | Cost: 0.3891\n",
      "Epoch: 013/015 | Batch 100/157 | Cost: 0.3288\n",
      "Epoch: 013/015 | Batch 150/157 | Cost: 0.2808\n",
      "training accuracy: 90.03%\n",
      "valid accuracy: 84.62%\n",
      "Time elapsed: 2.12 min\n",
      "Epoch: 014/015 | Batch 000/157 | Cost: 0.2815\n",
      "Epoch: 014/015 | Batch 050/157 | Cost: 0.2925\n",
      "Epoch: 014/015 | Batch 100/157 | Cost: 0.2910\n",
      "Epoch: 014/015 | Batch 150/157 | Cost: 0.3109\n",
      "training accuracy: 89.25%\n",
      "valid accuracy: 83.54%\n",
      "Time elapsed: 2.29 min\n",
      "Epoch: 015/015 | Batch 000/157 | Cost: 0.3956\n",
      "Epoch: 015/015 | Batch 050/157 | Cost: 0.3538\n",
      "Epoch: 015/015 | Batch 100/157 | Cost: 0.2333\n",
      "Epoch: 015/015 | Batch 150/157 | Cost: 0.1989\n",
      "training accuracy: 90.81%\n",
      "valid accuracy: 85.36%\n",
      "Time elapsed: 2.45 min\n",
      "Total Training Time: 2.45 min\n",
      "Test accuracy: 84.60%\n"
     ]
    }
   ],
   "source": [
    "start_time = time.time()\n",
    "\n",
    "for epoch in range(NUM_EPOCHS):\n",
    "    model.train()\n",
    "    for batch_idx, batch_data in enumerate(train_loader):\n",
    "        \n",
    "        text, text_lengths = batch_data.text\n",
    "        \n",
    "        ### FORWARD AND BACK PROP\n",
    "        logits = model(text, text_lengths)\n",
    "        cost = F.binary_cross_entropy_with_logits(logits, batch_data.label)\n",
    "        optimizer.zero_grad()\n",
    "        \n",
    "        cost.backward()\n",
    "        \n",
    "        ### UPDATE MODEL PARAMETERS\n",
    "        optimizer.step()\n",
    "        \n",
    "        ### LOGGING\n",
    "        if not batch_idx % 50:\n",
    "            print (f'Epoch: {epoch+1:03d}/{NUM_EPOCHS:03d} | '\n",
    "                   f'Batch {batch_idx:03d}/{len(train_loader):03d} | '\n",
    "                   f'Cost: {cost:.4f}')\n",
    "\n",
    "    with torch.set_grad_enabled(False):\n",
    "        print(f'training accuracy: '\n",
    "              f'{compute_binary_accuracy(model, train_loader, DEVICE):.2f}%'\n",
    "              f'\\nvalid accuracy: '\n",
    "              f'{compute_binary_accuracy(model, valid_loader, DEVICE):.2f}%')\n",
    "        \n",
    "    print(f'Time elapsed: {(time.time() - start_time)/60:.2f} min')\n",
    "    \n",
    "print(f'Total Training Time: {(time.time() - start_time)/60:.2f} min')\n",
    "print(f'Test accuracy: {compute_binary_accuracy(model, test_loader, DEVICE):.2f}%')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "jt55pscgFdKZ"
   },
   "outputs": [],
   "source": [
    "import spacy\n",
    "nlp = spacy.load('en')\n",
    "\n",
    "def predict_sentiment(model, sentence):\n",
    "    # based on:\n",
    "    # https://github.com/bentrevett/pytorch-sentiment-analysis/blob/\n",
    "    # master/2%20-%20Upgraded%20Sentiment%20Analysis.ipynb\n",
    "    model.eval()\n",
    "    tokenized = [tok.text for tok in nlp.tokenizer(sentence)]\n",
    "    indexed = [TEXT.vocab.stoi[t] for t in tokenized]\n",
    "    length = [len(indexed)]\n",
    "    tensor = torch.LongTensor(indexed).to(DEVICE)\n",
    "    tensor = tensor.unsqueeze(1)\n",
    "    length_tensor = torch.LongTensor(length)\n",
    "    prediction = torch.sigmoid(model(tensor, length_tensor))\n",
    "    return prediction.item()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 51
    },
    "colab_type": "code",
    "id": "O4__q0coFJyw",
    "outputId": "1a7f84ba-a977-455e-e248-3b7036d496d0"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Probability positive:\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "0.8910857439041138"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "print('Probability positive:')\n",
    "predict_sentiment(model, \"I really love this movie. This movie is so great!\")"
   ]
  },
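  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For contrast, a clearly negative review should yield a probability close to 0 (sketch only; the exact value depends on the trained model):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('Probability positive:')\n",
    "predict_sentiment(model, \"This movie is terrible. I really hated it!\")"
   ]
  },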
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "7lRusB3dF80X"
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "colab": {
   "collapsed_sections": [],
   "name": "rnn_lstm_packed_imdb.ipynb",
   "provenance": [],
   "version": "0.3.2"
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}