{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Assignment 2. Language modeling.\n", "\n", "This task is devoted to language modeling. Your goal is to implement an RNN-based language model in PyTorch. Since word-level language modeling requires long training and a lot of memory because of the large vocabulary, we start with character-level language modeling: we train the model to generate words as sequences of characters, teaching it to predict the characters of the words in the training set.\n", "\n", "\n", "\n", "## Task 1. Character-based language modeling: data preparation (15 points)." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We train the language models on the materials of the **Sigmorphon 2018 Shared Task**. First, download the Russian datasets." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!wget https://raw.githubusercontent.com/sigmorphon/conll2018/master/task1/surprise/russian-train-high\n", "!wget https://raw.githubusercontent.com/sigmorphon/conll2018/master/task1/surprise/russian-dev\n", "!wget https://raw.githubusercontent.com/sigmorphon/conll2018/master/task1/surprise/russian-test" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**1.1 (1 point)**\n", "All the files contain tab-separated triples ```<lemma>-<form>-<tags>```, where ```<form>``` may contain spaces (*будете соответствовать*). Write a function that loads the list of all word forms that do not contain spaces." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def read_infile(infile):\n", "    \"\"\"\n", "    == YOUR CODE HERE ==\n", "    \"\"\"\n", "    return words" ] },
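{ "cell_type": "markdown", "metadata": {}, "source": [ "*For reference, a minimal sketch of one possible solution. It assumes that every line consists of exactly three tab-separated columns with the word form in the second one; forms containing spaces are skipped.*" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def read_infile(infile):\n", "    # Keep the word form (the second tab-separated column), skipping forms with spaces.\n", "    words = []\n", "    with open(infile, \"r\", encoding=\"utf8\") as fin:\n", "        for line in fin:\n", "            columns = line.strip(\"\\n\").split(\"\\t\")\n", "            if len(columns) == 3 and \" \" not in columns[1]:\n", "                words.append(columns[1])\n", "    return words" ] },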
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_words = read_infile(\"russian-train-high\")\n", "dev_words = read_infile(\"russian-dev\")\n", "test_words = read_infile(\"russian-test\")\n", "print(len(train_words), len(dev_words), len(test_words))\n", "print(*train_words[:10])" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**1.2 (2 points)** Write a **Vocabulary** class that transforms symbols into their indexes. The class should have a ```__call__``` method that applies this transformation both to sequences of symbols and to batches of sequences. You may also use [SimpleVocabulary](https://github.com/deepmipt/DeepPavlov/blob/c10b079b972493220c82a643d47d718d5358c7f4/deeppavlov/core/data/simple_vocab.py#L31) from DeepPavlov. Fit an instance of this class on the training data." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from deeppavlov.core.data.simple_vocab import SimpleVocabulary\n", "\"\"\"\n", "== YOUR CODE HERE ==\n", "\"\"\"\n", "\n", "vocab = # == YOUR CODE HERE ==\n", "vocab.fit([list(x) for x in train_words])\n", "print(len(vocab))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**1.3 (2 points)** Write a **Dataset** class inherited from ```torch.utils.data.Dataset```. It should take a list of words and the ```vocab``` as initialization arguments." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import torch\n", "from torch.utils.data import Dataset as TorchDataset\n", "\n", "class Dataset(TorchDataset):\n", "\n", "    \"\"\"Custom data.Dataset compatible with data.DataLoader.\"\"\"\n", "    def __init__(self, data, vocab):\n", "        self.data = data\n", "        self.vocab = vocab\n", "\n", "    def __getitem__(self, index):\n", "        \"\"\"\n", "        Returns one tensor pair (source and target). The source tensor corresponds to the input word,\n", "        with \"BEGIN\" and \"END\" symbols attached. The target tensor should contain the answers\n", "        for the language model that receives this word as input.\n", "        \"\"\"\n", "        \"\"\"\n", "        == YOUR CODE HERE ==\n", "        \"\"\"\n", "\n", "    def __len__(self):\n", "        \"\"\"\n", "        == YOUR CODE HERE ==\n", "        \"\"\"" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_dataset = Dataset(train_words, vocab)\n", "dev_dataset = Dataset(dev_words, vocab)\n", "test_dataset = Dataset(test_words, vocab)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**1.4 (3 points)** Use a standard ```torch.utils.data.DataLoader``` to obtain an iterable over batches. Print the shapes of the first 10 input batches with ```batch_size=1```." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from torch.utils.data import DataLoader\n", "\n", "\"\"\"\n", "== YOUR CODE HERE ==\n", "\"\"\"" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**1.5 (1 point)** Explain why this does not work with a larger batch size." ] },
{ "cell_type": "raw", "metadata": {}, "source": [] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**1.6 (5 points)** Write a **collate** function that allows you to handle batches of larger size. See this [discussion](https://discuss.pytorch.org/t/dataloader-for-various-length-of-data/6418/8) for an example. Implement your function as the ```__call__``` method of a class to make it more flexible." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def pad_tensor(vec, length, dim, pad_symbol):\n", "    \"\"\"\n", "    Pads a vector ``vec`` up to length ``length`` along axis ``dim`` with the padding symbol ``pad_symbol``.\n", "    \"\"\"\n", "    \"\"\"\n", "    == YOUR CODE HERE ==\n", "    \"\"\"\n", "\n", "class Padder:\n", "\n", "    def __init__(self, dim=0, pad_symbol=0):\n", "        self.dim = dim\n", "        self.pad_symbol = pad_symbol\n", "\n", "    def __call__(self, batch):\n", "        \"\"\"\n", "        == YOUR CODE HERE ==\n", "        \"\"\"" ] },
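{ "cell_type": "markdown", "metadata": {}, "source": [ "*For reference, one possible padding implementation. It assumes that every dataset item is a pair of 1-D ``LongTensor``s ``(source, target)`` of equal length and that the padding index is 0.*" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import torch\n", "\n", "def pad_tensor(vec, length, dim, pad_symbol):\n", "    # Append `pad_symbol` entries along axis `dim` until the size reaches `length`.\n", "    pad_size = list(vec.shape)\n", "    pad_size[dim] = length - vec.size(dim)\n", "    padding = torch.full(pad_size, pad_symbol, dtype=vec.dtype)\n", "    return torch.cat([vec, padding], dim=dim)\n", "\n", "class Padder:\n", "\n", "    def __init__(self, dim=0, pad_symbol=0):\n", "        self.dim = dim\n", "        self.pad_symbol = pad_symbol\n", "\n", "    def __call__(self, batch):\n", "        # `batch` is a list of (source, target) pairs; pad both to the batch maximum and stack.\n", "        max_length = max(x.size(self.dim) for x, _ in batch)\n", "        xs = torch.stack([pad_tensor(x, max_length, self.dim, self.pad_symbol) for x, _ in batch])\n", "        ys = torch.stack([pad_tensor(y, max_length, self.dim, self.pad_symbol) for _, y in batch])\n", "        return xs, ys" ] },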
{ "cell_type": "markdown", "metadata": {}, "source": [ "**1.7 (1 point)** Again, use ```torch.utils.data.DataLoader``` to obtain an iterable over batches. Print the shapes of the first 10 input batches with a batch size of your choice." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from torch.utils.data import DataLoader\n", "\n", "\"\"\"\n", "== YOUR CODE HERE ==\n", "\"\"\"" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Task 2. Character-based language modeling. (35 points)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**2.1 (5 points)** Write a network that performs language modeling. It should include three layers:\n", "1. An **Embedding** layer that transforms input symbols into vectors.\n", "2. An **RNN** layer that outputs a sequence of hidden states (you may use [nn.GRU](https://pytorch.org/docs/stable/nn.html#gru)).\n", "3. A **Linear** layer with ``softmax`` activation that produces the output distribution for each symbol." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import torch.nn as nn\n", "\n", "class RNNLM(nn.Module):\n", "\n", "    def __init__(self, vocab_size, embeddings_dim, hidden_size):\n", "        super(RNNLM, self).__init__()\n", "        \"\"\"\n", "        == YOUR CODE HERE ==\n", "        \"\"\"\n", "\n", "    def forward(self, inputs, hidden=None):\n", "        \"\"\"\n", "        == YOUR CODE HERE ==\n", "        \"\"\"" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**2.2 (1 point)** Write a function ``validate_on_batch`` that takes a model, the loss criterion, a batch of inputs and a batch of outputs, and returns the loss tensor for the whole batch. This loss should not be normalized." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def validate_on_batch(model, criterion, x, y):\n", "    \"\"\"\n", "    == YOUR CODE HERE ==\n", "    \"\"\"" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**2.3 (1 point)** Write a function ``train_on_batch`` that accepts all the arguments of ``validate_on_batch`` plus an optimizer, computes the loss and makes a single gradient optimization step. This function should call ``validate_on_batch`` internally." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def train_on_batch(model, criterion, x, y, optimizer):\n", "    \"\"\"\n", "    == YOUR CODE HERE ==\n", "    \"\"\"" ] },
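{ "cell_type": "markdown", "metadata": {}, "source": [ "*For reference, a minimal sketch of one possible model together with the two helpers. It assumes batch-first input tensors, padding index 0, and a criterion such as ``nn.CrossEntropyLoss(ignore_index=0, reduction='sum')``; the model therefore returns raw logits, and ``log_softmax`` can be applied to them whenever probabilities are needed.*" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import torch.nn as nn\n", "\n", "class RNNLM(nn.Module):\n", "\n", "    def __init__(self, vocab_size, embeddings_dim, hidden_size):\n", "        super(RNNLM, self).__init__()\n", "        self.embedding = nn.Embedding(vocab_size, embeddings_dim, padding_idx=0)\n", "        self.rnn = nn.GRU(embeddings_dim, hidden_size, batch_first=True)\n", "        self.output = nn.Linear(hidden_size, vocab_size)\n", "\n", "    def forward(self, inputs, hidden=None):\n", "        # inputs: a (batch, length) tensor of symbol indexes.\n", "        embedded = self.embedding(inputs)\n", "        states, hidden = self.rnn(embedded, hidden)\n", "        logits = self.output(states)  # (batch, length, vocab_size)\n", "        return logits, hidden\n", "\n", "def validate_on_batch(model, criterion, x, y):\n", "    # Unnormalized total loss over all symbols of the batch.\n", "    logits, _ = model(x)\n", "    return criterion(logits.reshape(-1, logits.size(-1)), y.reshape(-1))\n", "\n", "def train_on_batch(model, criterion, x, y, optimizer):\n", "    model.train()\n", "    optimizer.zero_grad()\n", "    loss = validate_on_batch(model, criterion, x, y)\n", "    loss.backward()\n", "    optimizer.step()\n", "    return loss" ] },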
{ "cell_type": "markdown", "metadata": {}, "source": [ "**2.4 (3 points)** Write a training loop. You should define your ``RNNLM`` model, the criterion, the optimizer and the hyperparameters (number of epochs and batch size). Then train the model for the required number of epochs. On each epoch, report the average training loss and the average loss on the validation set.\n", "\n", "**2.5 (3 points)** Do not forget to average the loss over non-padding symbols only, otherwise it will be too optimistic." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "== YOUR CODE HERE ==\n", "\"\"\"" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**2.6 (5 points)** Write a function **predict_on_batch** that outputs the letter probabilities of all words in the batch." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "== YOUR CODE HERE ==\n", "\"\"\"" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**2.7 (1 point)** Calculate the letter probabilities for all words in the test dataset. Print them for the last 20 words. Do not forget to disable shuffling in the ``DataLoader``." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "== YOUR CODE HERE ==\n", "\"\"\"" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**2.8 (5 points)** Write a function that generates a single word (a sequence of indexes) given the model. Do not forget about the hidden state! Be careful with the start and end symbol indexes. Use ``torch.multinomial`` for sampling." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def generate(model, max_length=20, start_index=1, end_index=2):\n", "    \"\"\"\n", "    == YOUR CODE HERE ==\n", "    \"\"\"" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**2.9 (1 point)** Use ``generate`` to sample 20 pseudowords. Do not forget to transform indexes back to letters." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for i in range(20):\n", "    \"\"\"\n", "    == YOUR CODE HERE ==\n", "    \"\"\"" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**2.10 (5 points)** Write a batched version of the generation function. You should sample the next symbol only for the words that are not finished yet, so keep a boolean mask of active words." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def generate_batch(model, batch_size, max_length=20, start_index=1, end_index=2):\n", "    \"\"\"\n", "    == YOUR CODE HERE ==\n", "    \"\"\"" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "generated = []\n", "for _ in range(2):\n", "    generated += generate_batch(model, batch_size=10)\n", "\"\"\"\n", "== YOUR CODE HERE ==\n", "\"\"\"\n", "for elem in transformed:\n", "    print(\"\".join(elem))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**2.11 (5 points)** Experiment with the type of RNN, the number of layers, the number of units and/or dropout to improve the perplexity of the model." ] },
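{ "cell_type": "markdown", "metadata": {}, "source": [ "*When comparing model variants, you can estimate perplexity on the dev set, e.g. with the sketch below. It assumes ``validate_on_batch`` returns the summed loss and that ``dev_dataloader`` (a hypothetical name) yields padded ``(x, y)`` batches with padding index 0.*" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import math\n", "import torch\n", "\n", "def evaluate_perplexity(model, criterion, dataloader, pad_index=0):\n", "    # Perplexity = exp(total negative log-likelihood / number of non-padding symbols).\n", "    model.eval()\n", "    total_loss, total_symbols = 0.0, 0\n", "    with torch.no_grad():\n", "        for x, y in dataloader:\n", "            total_loss += validate_on_batch(model, criterion, x, y).item()\n", "            total_symbols += (y != pad_index).sum().item()\n", "    return math.exp(total_loss / total_symbols)\n", "\n", "# Example usage (dev_dataloader is a hypothetical DataLoader over dev_dataset):\n", "# print(evaluate_perplexity(model, criterion, dev_dataloader))" ] },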
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" } }, "nbformat": 4, "nbformat_minor": 4 }