{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "

Neural Translation Model in PyTorch

\n", "by Mac Brennan" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

\n", " Translation Model Summary\n", "

" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This project will be broken up into several parts as follows:\n", "\n", "__Part 1:__ Preparing the words\n", "\n", "+ Inspecting the Dataset\n", "+ Using Word Embeddings\n", "+ Organizing the Data\n", "\n", "__Part 2:__ Building the Model\n", "\n", "+ Bi-Directional Encoder\n", "+ Building Attention\n", "+ Decoder with Attention\n", "\n", "__Part 3:__ Training the Model\n", "\n", "+ Training Function\n", "+ Training Loop\n", "\n", "__Part 4:__ Evaluation\n", "\n", "\n", "This project closely follows the [PyTorch Sequence to Sequence tutorial](https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html), while attempting to go more in depth with both the model implementation and the explanation. Thanks to [Sean Robertson](https://github.com/spro/practical-pytorch) and [PyTorch](https://pytorch.org/tutorials/) for providing such great tutorials.\n", "\n", "If you are working through this notebook, it is strongly recommended that [Jupyter Notebook Extensions](https://github.com/ipython-contrib/jupyter_contrib_nbextensions) is installed so you can turn on collapsable headings. It makes the notebook much easier to navigate." ] }, { "cell_type": "code", "execution_count": 250, "metadata": {}, "outputs": [], "source": [ "# Before we get started we will load all the packages we will need\n", "\n", "# Pytorch\n", "import torch\n", "import torch.nn as nn\n", "import torch.nn.functional as F\n", "import torch.optim as optim\n", "from torch.utils.data import Dataset, DataLoader\n", "\n", "import numpy as np\n", "import os.path\n", "import time\n", "import math\n", "import random\n", "import matplotlib.pyplot as plt\n", "import string\n", "\n", "# Use gpu if available\n", "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")" ] }, { "cell_type": "code", "execution_count": 251, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "device(type='cuda')" ] }, "execution_count": 251, "metadata": {}, "output_type": "execute_result" } ], "source": [ "device" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true }, "source": [ "## Part 1: Preparing the Words" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "hidden": true }, "source": [ "### Inspecting the Dataset" ] }, { "cell_type": "markdown", "metadata": { "hidden": true }, "source": [ "The dataset that will be used is a text file of english sentences and the corresponding french sentences.\n", "\n", "Each sentence is on a new line. The sentences will be split into a list." ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "hidden": true }, "source": [ "#### Load the data\n", "The data will be stored in two lists where each item is a sentence. The lists are:\n", "+ english_sentences\n", "+ french_sentences\n", "\n", "Download the first dataset from the projects' github repo. Place it in the same folder as the notebook or create a data folder in the notebook's folder." 
] }, { "cell_type": "code", "execution_count": 252, "metadata": { "hidden": true }, "outputs": [], "source": [ "with open('data/small_vocab_en', \"r\") as f:\n", " data1 = f.read()\n", "with open('data/small_vocab_fr', \"r\") as f:\n", " data2 = f.read()\n", " \n", "# The data is just in a text file with each sentence on its own line\n", "english_sentences = data1.split('\\n')\n", "french_sentences = data2.split('\\n')" ] }, { "cell_type": "code", "execution_count": 253, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Number of English sentences: 137861 \n", "Number of French sentences: 137861 \n", "\n", "Example/Target pair:\n", "\n", " california is usually quiet during march , and it is usually hot in june .\n", " california est généralement calme en mars , et il est généralement chaud en juin .\n" ] } ], "source": [ "print('Number of English sentences:', len(english_sentences), \n", " '\\nNumber of French sentences:', len(french_sentences),'\\n')\n", "print('Example/Target pair:\\n')\n", "print(' '+english_sentences[2])\n", "print(' '+french_sentences[2])" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "hidden": true }, "source": [ "#### Vocabulary\n", "Let's take a closer look at the dataset.\n" ] }, { "cell_type": "code", "execution_count": 254, "metadata": { "hidden": true }, "outputs": [ { "data": { "text/plain": [ "['california',\n", " 'is',\n", " 'usually',\n", " 'quiet',\n", " 'during',\n", " 'march',\n", " ',',\n", " 'and',\n", " 'it',\n", " 'is',\n", " 'usually',\n", " 'hot',\n", " 'in',\n", " 'june',\n", " '.']" ] }, "execution_count": 254, "metadata": {}, "output_type": "execute_result" } ], "source": [ "english_sentences[2].split()" ] }, { "cell_type": "code", "execution_count": 255, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The longest english sentence in our dataset is: 17\n" ] } ], "source": [ "max_en_length = 0\n", "for sentence in english_sentences:\n", " length = len(sentence.split())\n", " max_en_length = max(max_en_length, length)\n", "print(\"The longest english sentence in our dataset is:\", max_en_length) " ] }, { "cell_type": "code", "execution_count": 256, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The longest french sentence in our dataset is: 23\n" ] } ], "source": [ "max_fr_length = 0\n", "for sentence in french_sentences:\n", " length = len(sentence.split())\n", " max_fr_length = max(max_fr_length, length)\n", "print(\"The longest french sentence in our dataset is:\", max_fr_length)" ] }, { "cell_type": "code", "execution_count": 257, "metadata": { "hidden": true }, "outputs": [], "source": [ "max_seq_length = max(max_fr_length, max_en_length) + 1\n", "seq_length = max_seq_length" ] }, { "cell_type": "code", "execution_count": 258, "metadata": { "hidden": true }, "outputs": [], "source": [ "en_word_count = {}\n", "fr_word_count = {}\n", "\n", "for sentence in english_sentences:\n", " for word in sentence.split():\n", " if word in en_word_count:\n", " en_word_count[word] +=1\n", " else:\n", " en_word_count[word] = 1\n", " \n", "for sentence in french_sentences:\n", " for word in sentence.split():\n", " if word in fr_word_count:\n", " fr_word_count[word] +=1\n", " else:\n", " fr_word_count[word] = 1\n" ] }, { "cell_type": "code", "execution_count": 259, "metadata": { "hidden": true }, "outputs": [], "source": [ "# Add end of sentence token to word count dict\n", 
"en_word_count[''] = len(english_sentences)\n", "fr_word_count[''] = len(english_sentences)" ] }, { "cell_type": "code", "execution_count": 260, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Number of unique English words: 228\n", "Number of unique French words: 356\n" ] } ], "source": [ "print('Number of unique English words:', len(en_word_count))\n", "print('Number of unique French words:', len(fr_word_count))" ] }, { "cell_type": "code", "execution_count": 261, "metadata": { "hidden": true }, "outputs": [], "source": [ "def get_value(items_tuple):\n", " return items_tuple[1]\n", "\n", "# Sort the word counts to see what words or most/least common\n", "sorted_en_words= sorted(en_word_count.items(), key=get_value, reverse=True)" ] }, { "cell_type": "code", "execution_count": 262, "metadata": { "hidden": true }, "outputs": [ { "data": { "text/plain": [ "[('is', 205858),\n", " (',', 140897),\n", " ('', 137861),\n", " ('.', 129039),\n", " ('in', 75525),\n", " ('it', 75137),\n", " ('during', 74933),\n", " ('the', 67628),\n", " ('but', 63987),\n", " ('and', 59850)]" ] }, "execution_count": 262, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sorted_en_words[:10]" ] }, { "cell_type": "code", "execution_count": 263, "metadata": { "hidden": true }, "outputs": [], "source": [ "sorted_fr_words = sorted(fr_word_count.items(), key=get_value, reverse=True)" ] }, { "cell_type": "code", "execution_count": 264, "metadata": { "hidden": true }, "outputs": [ { "data": { "text/plain": [ "[('est', 196809),\n", " ('', 137861),\n", " ('.', 135619),\n", " (',', 123135),\n", " ('en', 105768),\n", " ('il', 84079),\n", " ('les', 65255),\n", " ('mais', 63987),\n", " ('et', 59851),\n", " ('la', 49861)]" ] }, "execution_count": 264, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sorted_fr_words[:10]" ] }, { "cell_type": "markdown", "metadata": { "hidden": true }, "source": [ "So the dataset is pretty small, we may want to get a bigger data set, but we'll see how this one does." ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "hidden": true }, "source": [ "#### Alternate Dataset\n", "Skip this section for now. You can come back and try training on this second dataset later. It is more diverse so it takes longer to train.\n", "\n", "Download the French-English dataset from [here](http://www.manythings.org/anki/), Although you could train the model on any of the other language pairs. However, you would need different word embeddings or they would need to be trained from scratch." 
] }, { "cell_type": "code", "execution_count": 3, "metadata": { "hidden": true }, "outputs": [], "source": [ "with open('data/fra.txt', \"r\") as f:\n", " data1 = f.read()" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "hidden": true }, "outputs": [], "source": [ "pairs = data1.split('\\n')\n", "english_sentences = []\n", "french_sentences = []\n", "for i, pair in enumerate(pairs):\n", " pair_split = pair.split('\\t')\n", " if len(pair_split)!= 2:\n", " continue\n", " english = pair_split[0].lower()\n", " french = pair_split[1].lower()\n", " \n", " # Remove punctuation and limit sentence length\n", " max_sent_length = 10\n", " punctuation_table = english.maketrans({i:None for i in string.punctuation})\n", " english = english.translate(punctuation_table)\n", " french = french.translate(punctuation_table)\n", " if len(english.split()) > max_sent_length or len(french.split()) > max_sent_length:\n", " continue\n", " \n", " english_sentences.append(english)\n", " french_sentences.append(french)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "139692 139692\n" ] }, { "data": { "text/plain": [ "['i', 'have', 'to', 'fight']" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "print(len(english_sentences), len(french_sentences))\n", "english_sentences[10000].split()\n" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "hidden": true }, "outputs": [ { "data": { "text/plain": [ "['il', 'me', 'faut', 'me', 'battre']" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "french_sentences[10000].split()" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['would', 'you', 'consider', 'taking', 'care', 'of', 'my', 'children', 'next', 'saturday']\n" ] }, { "data": { "text/plain": [ "['pourriezvous',\n", " 'réfléchir',\n", " 'à',\n", " 'vous',\n", " 'occuper',\n", " 'de',\n", " 'mes',\n", " 'enfants',\n", " 'samedi',\n", " 'prochain']" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "print(english_sentences[-100].split())\n", "french_sentences[-100].split()\n" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The longest english sentence in our dataset is: 10\n" ] } ], "source": [ "max_en_length = 0\n", "for sentence in english_sentences:\n", " length = len(sentence.split())\n", " max_en_length = max(max_en_length, length)\n", "print(\"The longest english sentence in our dataset is:\", max_en_length) " ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The longest french sentence in our dataset is: 10\n" ] } ], "source": [ "max_fr_length = 0\n", "for sentence in french_sentences:\n", " length = len(sentence.split())\n", " max_fr_length = max(max_fr_length, length)\n", "print(\"The longest french sentence in our dataset is:\", max_fr_length) " ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "hidden": true }, "outputs": [], "source": [ "max_seq_length = max(max_fr_length, max_en_length) + 1\n", "seq_length = max_seq_length" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "hidden": true }, "outputs": [], "source": [ 
"en_word_count = {}\n", "fr_word_count = {}\n", "\n", "for sentence in english_sentences:\n", " for word in sentence.split():\n", " if word in en_word_count:\n", " en_word_count[word] +=1\n", " else:\n", " en_word_count[word] = 1\n", " \n", "for sentence in french_sentences:\n", " for word in sentence.split():\n", " if word in fr_word_count:\n", " fr_word_count[word] +=1\n", " else:\n", " fr_word_count[word] = 1\n" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "hidden": true }, "outputs": [], "source": [ "en_word_count[''] = len(english_sentences)\n", "fr_word_count[''] = len(english_sentences)" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Number of unique English words: 12603\n", "Number of unique French words: 25809\n" ] } ], "source": [ "print('Number of unique English words:', len(en_word_count))\n", "print('Number of unique French words:', len(fr_word_count))" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "hidden": true }, "outputs": [], "source": [ "fr_word2idx = {k:v+3 for v, k in enumerate(fr_word_count.keys())}\n", "en_word2idx = {k:v+3 for v, k in enumerate(en_word_count.keys())}" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "hidden": true }, "outputs": [], "source": [ "fr_word2idx[''] = 0\n", "fr_word2idx[''] = 1\n", "fr_word2idx[''] = 2\n", "\n", "en_word2idx[''] = 0\n", "en_word2idx[''] = 1\n", "en_word2idx[''] = 2" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "hidden": true }, "outputs": [ { "data": { "text/plain": [ "25812" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "len(fr_word2idx)" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "hidden": true }, "outputs": [], "source": [ "def get_value(items_tuple):\n", " return items_tuple[1]\n", "\n", "sorted_en_words= sorted(en_word_count.items(), key=get_value, reverse=True)" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "hidden": true }, "outputs": [ { "data": { "text/plain": [ "[('impossibilities', 1),\n", " ('offers', 1),\n", " ('profound', 1),\n", " ('insights', 1),\n", " ('promoting', 1),\n", " ('domestically', 1),\n", " ('feat', 1),\n", " ('hummer', 1),\n", " ('limousines', 1),\n", " ('imprison', 1)]" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sorted_en_words[-10:]" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "hidden": true }, "source": [ "### Using Word Embeddings" ] }, { "cell_type": "markdown", "metadata": { "hidden": true }, "source": [ "Here we are building an embedding matrix of pretrained word vectors. The word embeddings used here were downloaded from the [fastText repository](https://github.com/facebookresearch/fastText/blob/master/pretrained-vectors.md). These embeddings have 300 dimensions. To start we will add a few token embeddings for our specific case. We want a token to signal the start of the sentence, A token for words that we do not have an embedding for, and a token to pad sentences so all the sentences we use have the same length. This will allow us to train the model on batches of sentences that are different lengths, rather than one at a time.\n", "\n", "After this step we will have a dictionary and an embedding matrix for each language. 
The dictionary will map words to an index value in the embedding matrix where its corresponding embedding vector is stored." ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "hidden": true }, "source": [ "#### Load Embeddings for the English data" ] }, { "cell_type": "code", "execution_count": 265, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Embeddings loaded from .npy file\n" ] } ], "source": [ "# The data file containing the embeddings is very large, so once we have the embeddings we want\n", "# we will save them as a numpy array. This way we can load them much faster than having to re-read\n", "# the large embedding file\n", "if os.path.exists('data/en_words.npy') and os.path.exists('data/en_vectors.npy'):\n", " en_words = np.load('data/en_words.npy')\n", " en_vectors = np.load('data/en_vectors.npy')\n", " print('Embeddings loaded from .npy file')\n", "else:\n", " # make a list of the top 100,000 words\n", " en_words = ['<PAD>', # Padding Token\n", " '<SOS>', # Start of sentence token\n", " '<UNK>' # Unknown word token\n", " ]\n", "\n", " en_vectors = list(np.random.uniform(-0.1, 0.1, (3, 300)))\n", " en_vectors[0] *= 0 # make the padding vector zeros\n", "\n", " with open('data/wiki.en.vec', \"r\") as f:\n", " f.readline()\n", " for _ in range(100000):\n", " en_vecs = f.readline()\n", " word = en_vecs.split()[0]\n", " vector = np.float32(en_vecs.split()[1:])\n", "\n", " # skip lines that don't have 300 dim\n", " if len(vector) != 300:\n", " continue\n", "\n", " if word not in en_words:\n", " en_words.append(word)\n", " en_vectors.append(vector)\n", " print(word, vector[:10]) # Last word embedding read from the file\n", " en_words = np.array(en_words)\n", " en_vectors = np.array(en_vectors)\n", " # Save the arrays so we don't have to load the full word embedding file\n", " np.save('data/en_words.npy', en_words)\n", " np.save('data/en_vectors.npy', en_vectors)" ] }, { "cell_type": "code", "execution_count": 266, "metadata": { "hidden": true }, "outputs": [], "source": [ "en_word2idx = {word:index for index, word in enumerate(en_words)}" ] }, { "cell_type": "code", "execution_count": 267, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "index for word hemophilia: 99996 \n", "vector for word hemophilia:\n", " [ 0.16189 -0.056121 -0.65560001 0.21569 -0.11878 -0.02066\n", " 0.37613001 -0.24117 -0.098989 -0.010058 ]\n" ] } ], "source": [ "hemophilia_idx = en_word2idx['hemophilia']\n", "print('index for word hemophilia:', hemophilia_idx, \n", " '\\nvector for word hemophilia:\\n',en_vectors[hemophilia_idx][:10])" ] }, { "cell_type": "markdown", "metadata": { "hidden": true }, "source": [ "The word embedding for hemophilia matches the one read from the file, so it looks like everything worked properly."
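] }, { "cell_type": "markdown", "metadata": { "hidden": true }, "source": [ "As a further spot check (a sketch, not part of the original tutorial), we can compare embeddings with cosine similarity: pretrained fastText vectors should score related words higher than unrelated ones. The word choices below are just examples from the dataset and assume all three made it into the 100,000-word vocabulary." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hidden": true }, "outputs": [], "source": [ "# Sketch: cosine similarity between pretrained embedding vectors\n", "def cosine_similarity(u, v):\n", "    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))\n", "\n", "v_june = en_vectors[en_word2idx['june']]\n", "v_march = en_vectors[en_word2idx['march']]\n", "v_quiet = en_vectors[en_word2idx['quiet']]\n", "print('june vs march:', cosine_similarity(v_june, v_march))\n", "print('june vs quiet:', cosine_similarity(v_june, v_quiet))"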
] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "hidden": true }, "source": [ "#### Load Embeddings for the Frech data" ] }, { "cell_type": "code", "execution_count": 268, "metadata": { "hidden": true, "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Embeddings load from .npy file\n" ] } ], "source": [ "if os.path.exists('data/fr_words.npy') and os.path.exists('data/fr_vectors.npy'):\n", " fr_words = np.load('data/fr_words.npy')\n", " fr_vectors = np.load('data/fr_vectors.npy')\n", " print('Embeddings load from .npy file')\n", "else:\n", " # make a dict with the top 100,000 words\n", " fr_words = ['',\n", " '',\n", " '']\n", "\n", " fr_vectors = list(np.random.uniform(-0.1, 0.1, (3, 300)))\n", " fr_vectors[0] = np.zeros(300) # make the padding vector zeros\n", "\n", " with open('data/wiki.fr.vec', \"r\") as f:\n", " f.readline()\n", " for _ in range(100000):\n", " fr_vecs = f.readline()\n", " word = fr_vecs.split()[0]\n", " try:\n", " vector = np.float32(fr_vecs.split()[1:])\n", " except ValueError:\n", " continue\n", "\n", " # skip lines that don't have 300 dim\n", " if len(vector) != 300:\n", " continue\n", "\n", " if word not in fr_words:\n", " fr_words.append(word)\n", " fr_vectors.append(vector)\n", " print(word, vector[:10])\n", " fr_words = np.array(fr_words)\n", " fr_vectors = np.array(fr_vectors)\n", " # Save the arrays so we don't have to load the full word embedding file\n", " np.save('data/fr_words.npy', fr_words)\n", " np.save('data/fr_vectors.npy', fr_vectors)" ] }, { "cell_type": "code", "execution_count": 269, "metadata": { "hidden": true }, "outputs": [], "source": [ "fr_word2idx = {word:index for index, word in enumerate(fr_words)}" ] }, { "cell_type": "code", "execution_count": 270, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "index for word chabeuil: 99783 \n", "vector for word chabeuil:\n", " [-0.18058001 -0.24758001 0.075607 0.17299999 0.24116001 -0.11223\n", " -0.28173 0.27373999 0.37997001 0.48008999]\n" ] } ], "source": [ "chabeuil_idx = fr_word2idx['chabeuil']\n", "print('index for word chabeuil:', chabeuil_idx, \n", " '\\nvector for word chabeuil:\\n',fr_vectors[chabeuil_idx][:10])" ] }, { "cell_type": "code", "execution_count": 271, "metadata": { "hidden": true }, "outputs": [ { "data": { "text/plain": [ "99783" ] }, "execution_count": 271, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fr_word2idx[\"chabeuil\"]" ] }, { "cell_type": "markdown", "metadata": { "hidden": true }, "source": [ "The word embedding for chabeuil matches as well so everything worked correctly for the french vocab." ] }, { "cell_type": "markdown", "metadata": { "hidden": true }, "source": [ "Ok, so we have all the pieces needed to take words and convert them into word embeddings. These word embeddings already have a lot of useful information about how words relate since we loaded the pre-trained word embeddings. Now we can build the translation model with the embedding matrices built in." ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "hidden": true }, "source": [ "### Setting up PyTorch Dataset and Dataloader" ] }, { "cell_type": "markdown", "metadata": { "hidden": true }, "source": [ "Rather than organizing all the data from a file and storing it in a list or some other data structure, PyTorch allows us to create a dataset object. To get an example from a dataset we just index the dataset object like we would a list. 
However, all our processing can be contained in the object's initialization or indexing process.\n", "\n", "This will also make training easier when we want to iterate through batches." ] }, { "cell_type": "code", "execution_count": 272, "metadata": { "hidden": true }, "outputs": [], "source": [ "class French2EnglishDataset(Dataset):\n", " '''\n", " French and associated English sentences.\n", " '''\n", " \n", " def __init__(self, fr_sentences, en_sentences, fr_word2idx, en_word2idx, seq_length):\n", " self.fr_sentences = fr_sentences\n", " self.en_sentences = en_sentences\n", " self.fr_word2idx = fr_word2idx\n", " self.en_word2idx = en_word2idx\n", " self.seq_length = seq_length\n", " self.unk_en = set()\n", " self.unk_fr = set()\n", " \n", " def __len__(self):\n", " return len(self.fr_sentences)\n", " \n", " def __getitem__(self, idx):\n", " '''\n", " Returns a pair of tensors containing word indices\n", " for the specified sentence pair in the dataset.\n", " '''\n", " \n", " # init torch tensors, note that 0 is the padding index\n", " french_tensor = torch.zeros(self.seq_length, dtype=torch.long)\n", " english_tensor = torch.zeros(self.seq_length, dtype=torch.long)\n", " \n", " # Get sentence pair\n", " french_sentence = self.fr_sentences[idx].split()\n", " english_sentence = self.en_sentences[idx].split()\n", " \n", " # Add end-of-sentence tags\n", " french_sentence.append('</s>')\n", " english_sentence.append('</s>')\n", " \n", " # Load word indices; rare words (count <= 5) and out-of-vocab words map to <UNK>\n", " # (fr_word_count and en_word_count are the frequency dicts built earlier)\n", " for i, word in enumerate(french_sentence):\n", " if word in self.fr_word2idx and fr_word_count[word] > 5:\n", " french_tensor[i] = self.fr_word2idx[word]\n", " else:\n", " french_tensor[i] = self.fr_word2idx['<UNK>']\n", " self.unk_fr.add(word)\n", " \n", " for i, word in enumerate(english_sentence):\n", " if word in self.en_word2idx and en_word_count[word] > 5:\n", " english_tensor[i] = self.en_word2idx[word]\n", " else:\n", " english_tensor[i] = self.en_word2idx['<UNK>']\n", " self.unk_en.add(word)\n", " \n", " sample = {'french_tensor': french_tensor, 'french_sentence': self.fr_sentences[idx],\n", " 'english_tensor': english_tensor, 'english_sentence': self.en_sentences[idx]}\n", " return sample" ] }, { "cell_type": "code", "execution_count": 273, "metadata": { "hidden": true }, "outputs": [], "source": [ "french_english_dataset = French2EnglishDataset(french_sentences,\n", " english_sentences,\n", " fr_word2idx,\n", " en_word2idx,\n", " seq_length = seq_length)" ] }, { "cell_type": "markdown", "metadata": { "hidden": true }, "source": [ "#### Example output of dataset" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "hidden": true }, "outputs": [], "source": [ "test_sample = french_english_dataset[-10] # get 10th to last item in dataset" ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Input example:\n", "Sentence: la station spatiale internationale est une étonnante prouesse technologique\n", "Tensor: tensor([ 9, 787, 2, 730, 21, 23, 2, 2, 2, 3,\n", " 0])\n", "\n", "Target example:\n", "Sentence: the international space station is an amazing feat of engineering\n", "Tensor: tensor([ 5, 214, 657, 309, 16, 32, 6425, 2, 7,\n", " 2, 6])\n" ] } ], "source": [ "print('Input example:')\n", "print('Sentence:', test_sample['french_sentence'])\n", "print('Tensor:', test_sample['french_tensor'])\n", "\n", "print('\\nTarget example:')\n", "print('Sentence:', test_sample['english_sentence'])\n", "print('Tensor:', test_sample['english_tensor'])" ] }, { "cell_type": "code", 
"execution_count": 30, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "3\n" ] }, { "data": { "text/plain": [ "6" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Check that both tensors end with the end of sentence token\n", "print(fr_word2idx[''])\n", "en_word2idx['']" ] }, { "cell_type": "code", "execution_count": 31, "metadata": { "hidden": true }, "outputs": [], "source": [ "# Build dataloader to check how the batching works\n", "dataloader = DataLoader(french_english_dataset, batch_size=5,\n", " shuffle=True, num_workers=4)" ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "hidden": true, "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 torch.Size([5, 11]) torch.Size([5, 11])\n", "1 torch.Size([5, 11]) torch.Size([5, 11])\n", "2 torch.Size([5, 11]) torch.Size([5, 11])\n", "3 torch.Size([5, 11]) torch.Size([5, 11])\n" ] } ], "source": [ "# Prints out 10 batches from the dataloader\n", "for i_batch, sample_batched in enumerate(dataloader):\n", " print(i_batch, sample_batched['french_tensor'].shape,\n", " sample_batched['english_tensor'].shape)\n", " if i_batch == 3:\n", " break" ] }, { "cell_type": "code", "execution_count": 33, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "French Sentence: plus nous vieillissons plus notre mémoire faiblit\n", "English Sentence: the older we get the weaker our memory becomes \n", "\n", "French Sentence: personne ne peut aider\n", "English Sentence: no one can help \n", "\n", "French Sentence: cest très gentil de ta part\n", "English Sentence: thats very sweet of you \n", "\n", "French Sentence: quand avezvous commencé à apprendre lallemand \n", "English Sentence: when did you start learning german \n", "\n", "French Sentence: passezle trente secondes au microondes\n", "English Sentence: zap it in the microwave for thirty seconds \n", "\n" ] } ], "source": [ "for i in dataloader:\n", " batch = i\n", " break\n", "\n", "for i in range(5):\n", " print('French Sentence:', batch['french_sentence'][i])\n", " print('English Sentence:', batch['english_sentence'][i],'\\n')" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true }, "source": [ "## Part 2: Building the Model" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "hidden": true }, "source": [ "### Bi-Directional Encoder" ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "hidden": true }, "outputs": [], "source": [ "class EncoderBiLSTM(nn.Module):\n", " def __init__(self, hidden_size, pretrained_embeddings):\n", " super(EncoderBiLSTM, self).__init__()\n", " \n", " # Model Parameters\n", " self.hidden_size = hidden_size\n", " self.embedding_dim = pretrained_embeddings.shape[1]\n", " self.vocab_size = pretrained_embeddings.shape[0]\n", " self.num_layers = 2\n", " self.dropout = 0.1 if self.num_layers > 1 else 0\n", " self.bidirectional = True\n", " \n", " \n", " # Construct the layers\n", " self.embedding = nn.Embedding(self.vocab_size, self.embedding_dim)\n", " \n", " self.embedding.weight.data.copy_(torch.from_numpy(pretrained_embeddings)) #Load the pretrained embeddings\n", " self.embedding.weight.requires_grad = False #Freeze embedding layer\n", " \n", " self.lstm = nn.LSTM(self.embedding_dim,\n", " self.hidden_size,\n", " self.num_layers,\n", " batch_first = True,\n", " dropout=self.dropout,\n", " bidirectional=self.bidirectional)\n", " 
\n", " # Initialize hidden to hidden weights in LSTM to the Identity matrix\n", " # This improves training and prevents exploding gradients\n", " # PyTorch LSTM has the 4 different hidden to hidden weights stacked in one matrix\n", " identity_init = torch.eye(self.hidden_size)\n", " self.lstm.weight_hh_l0.data.copy_(torch.cat([identity_init]*4, dim=0))\n", " self.lstm.weight_hh_l0_reverse.data.copy_(torch.cat([identity_init]*4, dim=0))\n", " self.lstm.weight_hh_l1.data.copy_(torch.cat([identity_init]*4, dim=0))\n", " self.lstm.weight_hh_l1_reverse.data.copy_(torch.cat([identity_init]*4, dim=0))\n", " \n", " def forward(self, input, hidden):\n", " embedded = self.embedding(input)\n", " output = self.lstm(embedded, hidden)\n", " return output\n", " \n", " def initHidden(self, batch_size):\n", " \n", " hidden_state = torch.zeros(self.num_layers*(2 if self.bidirectional else 1),\n", " batch_size,\n", " self.hidden_size, \n", " device=device)\n", " \n", " cell_state = torch.zeros(self.num_layers*(2 if self.bidirectional else 1),\n", " batch_size,\n", " self.hidden_size, \n", " device=device)\n", " \n", " return (hidden_state, cell_state)" ] }, { "cell_type": "code", "execution_count": 35, "metadata": { "hidden": true }, "outputs": [], "source": [ "class EncoderBiGRU(nn.Module):\n", " def __init__(self, hidden_size, pretrained_embeddings):\n", " super(EncoderBiGRU, self).__init__()\n", " \n", " # Model parameters\n", " self.hidden_size = hidden_size\n", " self.embedding_dim = pretrained_embeddings.shape[1]\n", " self.vocab_size = pretrained_embeddings.shape[0]\n", " self.num_layers = 2\n", " self.dropout = 0.1 if self.num_layers > 1 else 0\n", " self.bidirectional = True\n", " \n", " \n", " # Construct the layers\n", " self.embedding = nn.Embedding(self.vocab_size, self.embedding_dim)\n", " self.embedding.weight.data.copy_(torch.from_numpy(pretrained_embeddings))\n", " self.embedding.weight.requires_grad = False\n", " \n", " self.gru = nn.GRU(self.embedding_dim,\n", " self.hidden_size,\n", " self.num_layers,\n", " batch_first = True,\n", " dropout=self.dropout,\n", " bidirectional=self.bidirectional)\n", " \n", " # Initialize hidden to hidden weights in GRU to the Identity matrix\n", " # PyTorch GRU has 3 different hidden to hidden weights stacked in one matrix\n", " identity_init = torch.eye(self.hidden_size)\n", " self.gru.weight_hh_l0.data.copy_(torch.cat([identity_init]*3, dim=0))\n", " self.gru.weight_hh_l0_reverse.data.copy_(torch.cat([identity_init]*3, dim=0))\n", " self.gru.weight_hh_l1.data.copy_(torch.cat([identity_init]*3, dim=0))\n", " self.gru.weight_hh_l1_reverse.data.copy_(torch.cat([identity_init]*3, dim=0))\n", " \n", " def forward(self, input, hidden):\n", " embedded = self.embedding(input)\n", " output = self.gru(embedded, hidden)\n", " return output\n", " \n", " def initHidden(self, batch_size):\n", " \n", " hidden_state = torch.zeros(self.num_layers*(2 if self.bidirectional else 1),\n", " batch_size,\n", " self.hidden_size, \n", " device=device)\n", " \n", " return hidden_state" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "hidden": true }, "source": [ "#### Testing the Encoder" ] }, { "cell_type": "code", "execution_count": 229, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The final output of the BiLSTM Encoder on our test input is: \n", "\n", " torch.Size([1, 3, 10])\n", "\n", "\n", "Encoder output tensor: \n", "\n", " tensor([[[ 0.1696, -0.0685, -0.1059, 0.0245, -0.0932, -0.1734, 
0.0031,\n", " 0.0233, -0.0100, 0.1628],\n", " [ 0.2719, -0.1025, -0.1429, 0.0170, -0.1392, -0.1159, -0.0073,\n", " 0.0053, 0.0103, 0.1306],\n", " [ 0.3649, -0.1420, -0.1676, 0.0222, -0.1329, -0.0704, -0.0404,\n", " 0.0054, -0.0080, 0.0976]]], device='cuda:0')\n" ] } ], "source": [ "# Test the encoder on a sample input; input tensor has dimensions (batch_size, seq_length)\n", "# all the variables have test_ in front of them so they don't reassign variables needed later with the real models\n", "\n", "test_batch_size = 1\n", "test_seq_length = 3\n", "test_hidden_size = 5\n", "test_encoder = EncoderBiLSTM(test_hidden_size, fr_vectors).to(device)\n", "test_hidden = test_encoder.initHidden(test_batch_size)\n", "\n", "# Create an input tensor of random indices\n", "test_inputs = torch.randint(0, 50, (test_batch_size, test_seq_length), dtype=torch.long, device=device)\n", "\n", "test_encoder_output, test_encoder_hidden = test_encoder.forward(test_inputs, test_hidden)\n", "\n", "print(\"The final output of the BiLSTM Encoder on our test input is: \\n\\n\", test_encoder_output.shape)\n", "\n", "print('\\n\\nEncoder output tensor: \\n\\n', test_encoder_output)" ] }, { "cell_type": "code", "execution_count": 230, "metadata": { "hidden": true }, "outputs": [ { "data": { "text/plain": [ "(tensor([[[-0.1083, -0.2036, 0.1844, -0.1568, -0.0570]],\n", " \n", " [[ 0.1193, -0.1132, -0.1703, -0.1203, 0.0673]],\n", " \n", " [[ 0.3649, -0.1420, -0.1676, 0.0222, -0.1329]],\n", " \n", " [[-0.1734, 0.0031, 0.0233, -0.0100, 0.1628]]], device='cuda:0'),\n", " tensor([[[-0.3360, -0.5497, 0.4430, -0.2667, -0.1674]],\n", " \n", " [[ 0.1950, -0.2336, -0.3140, -0.3467, 0.1642]],\n", " \n", " [[ 0.7378, -0.2309, -0.2565, 0.0606, -0.3088]],\n", " \n", " [[-0.3480, 0.0054, 0.0424, -0.0174, 0.4835]]], device='cuda:0'))" ] }, "execution_count": 230, "metadata": {}, "output_type": "execute_result" } ], "source": [ "test_encoder_hidden # Tuple where the first item is the hidden states, the second item is the cell states.\n", "\n", "# The LSTM has 2 layers; each layer has a forward and a backward pass, giving 4 hidden-state vectors" ] }, { "cell_type": "code", "execution_count": 231, "metadata": { "hidden": true }, "outputs": [ { "data": { "text/plain": [ "tensor([[[-0.1083, -0.2036, 0.1844, -0.1568, -0.0570]],\n", "\n", " [[ 0.3649, -0.1420, -0.1676, 0.0222, -0.1329]]], device='cuda:0')" ] }, "execution_count": 231, "metadata": {}, "output_type": "execute_result" } ], "source": [ "test_encoder_hidden[0][::2] # Hidden states from forward pass for both LSTM layers."
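] }, { "cell_type": "markdown", "metadata": { "hidden": true }, "source": [ "A note on the slicing above: PyTorch stacks the hidden states of a multi-layer bidirectional RNN along the first dimension in the order (layer 0 forward, layer 0 backward, layer 1 forward, layer 1 backward), so [::2] picks out the forward-pass states and [1::2] the backward-pass states. The quick shape check below is a sketch to confirm this layout." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hidden": true }, "outputs": [], "source": [ "# Sketch: hidden state layout is (num_layers * num_directions, batch_size, hidden_size)\n", "print(test_encoder_hidden[0].shape)       # (4, 1, 5): 2 layers * 2 directions\n", "print(test_encoder_hidden[0][1::2].shape) # (2, 1, 5): backward-pass states only"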
] }, { "cell_type": "code", "execution_count": 232, "metadata": { "hidden": true }, "outputs": [], "source": [ "test_encoder_gru = EncoderBiGRU(test_hidden_size, fr_vectors).to(device)\n", "test_hidden = test_encoder_gru.initHidden(test_batch_size)\n", "o,h = test_encoder_gru(test_inputs, test_hidden)" ] }, { "cell_type": "code", "execution_count": 233, "metadata": { "hidden": true }, "outputs": [ { "data": { "text/plain": [ "tensor([[[-0.2638, 0.0444, 0.0588, -0.1062, -0.0780, -0.1639, 0.1332,\n", " -0.5315, 0.0886, 0.1702],\n", " [-0.4556, 0.1273, 0.0350, -0.3055, -0.0166, -0.2639, 0.0371,\n", " -0.4623, 0.0706, 0.1357],\n", " [-0.5962, 0.2580, 0.1032, -0.4162, -0.0136, -0.1962, -0.0306,\n", " -0.2940, -0.0043, 0.1003]]], device='cuda:0')" ] }, "execution_count": 233, "metadata": {}, "output_type": "execute_result" } ], "source": [ "o" ] }, { "cell_type": "code", "execution_count": 234, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([[[-0.5236, -0.1632, -0.1012, -0.1132, -0.3256]],\n", "\n", " [[ 0.2437, -0.0019, -0.1655, 0.0607, 0.2190]],\n", "\n", " [[-0.5962, 0.2580, 0.1032, -0.4162, -0.0136]],\n", "\n", " [[-0.1639, 0.1332, -0.5315, 0.0886, 0.1702]]], device='cuda:0')\n" ] }, { "data": { "text/plain": [ "tensor([[[ 0.2437, -0.0019, -0.1655, 0.0607, 0.2190]],\n", "\n", " [[-0.1639, 0.1332, -0.5315, 0.0886, 0.1702]]], device='cuda:0')" ] }, "execution_count": 234, "metadata": {}, "output_type": "execute_result" } ], "source": [ "print(h)\n", "h[1::2]" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "hidden": true }, "source": [ "### Attention\n", "Let's take a moment test how attention is being modeled. Weighted sum of sequence items from encoder output." ] }, { "cell_type": "code", "execution_count": 235, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Attention weights:\n", " tensor([[[ 1., 0., 0.]]], device='cuda:0')\n", "\n", "First sequence item in Encoder output: \n", " tensor([[ 0.1696, -0.0685, -0.1059, 0.0245, -0.0932, -0.1734, 0.0031,\n", " 0.0233, -0.0100, 0.1628]], device='cuda:0')\n", "\n", "Encoder Output after attention is applied: \n", " tensor([ 0.1696, -0.0685, -0.1059, 0.0245, -0.0932, -0.1734, 0.0031,\n", " 0.0233, -0.0100, 0.1628], device='cuda:0')\n", "\n", " torch.Size([10])\n" ] } ], "source": [ "# Initialize attention weights to one, note the dimensions\n", "attn_weights = torch.ones((test_batch_size, test_seq_length),device=device)\n", "\n", "# Set all weights except the weights associated with the first sequence item equal to zero\n", "# This would represent full attention on the first word in the sequence\n", "attn_weights[:, 1:] = 0\n", "\n", "attn_weights.unsqueeze_(1) # Add dimension for batch matrix multiplication\n", "\n", "# BMM(Batch Matrix Multiply) muliplies the [1 x seq_length] matrix by the [seq_length x hidden_size] matrix for\n", "# each batch. 
This produces a single vector (for each batch) of length (encoder_hidden_size) that is the weighted\n", "# sum of the encoder hidden vectors for each item in the sequence.\n", "attn_applied = torch.bmm(attn_weights, test_encoder_output)\n", "attn_applied.squeeze_() # Remove extra dimension\n", "\n", "print('Attention weights:\\n', attn_weights)\n", "print('\\nFirst sequence item in Encoder output: \\n', test_encoder_output[:,0,:])\n", "print('\\nEncoder Output after attention is applied: \\n', attn_applied)\n", "print('\\n', attn_applied.shape)" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "hidden": true }, "source": [ "### Decoder with Attention" ] }, { "cell_type": "code", "execution_count": 36, "metadata": { "hidden": true }, "outputs": [], "source": [ "class AttnDecoderLSTM(nn.Module):\n", " def __init__(self, decoder_hidden_size, pretrained_embeddings, seq_length):\n", " super(AttnDecoderLSTM, self).__init__()\n", " # Embedding parameters\n", " self.embedding_dim = pretrained_embeddings.shape[1]\n", " self.output_vocab_size = pretrained_embeddings.shape[0]\n", " \n", " # LSTM parameters\n", " self.decoder_hidden_size = decoder_hidden_size\n", " self.num_layers = 2 # Potentially add more layers to LSTM later\n", " self.dropout = 0.1 if self.num_layers > 1 else 0 # Potentially add dropout later\n", " \n", " # Attention parameters\n", " self.seq_length = seq_length\n", " self.encoder_hidden_dim = 2*decoder_hidden_size\n", " \n", " # Construct embedding layer for output language\n", " self.embedding = nn.Embedding(self.output_vocab_size, self.embedding_dim)\n", " self.embedding.weight.data.copy_(torch.from_numpy(pretrained_embeddings))\n", " self.embedding.weight.requires_grad = False # we don't want to train the embedding weights\n", " \n", " # Construct layer that calculates attentional weights\n", " self.attn = nn.Linear((self.decoder_hidden_size + self.embedding_dim), self.seq_length)\n", " \n", " # Construct layer that compresses the combined matrix of the input embeddings\n", " # and the encoder inputs after attention has been applied\n", " self.attn_with_input = nn.Linear(self.embedding_dim + self.encoder_hidden_dim, self.embedding_dim)\n", " \n", " # LSTM for Decoder\n", " self.lstm = nn.LSTM(self.embedding_dim,\n", " self.decoder_hidden_size,\n", " self.num_layers,\n", " dropout=self.dropout)\n", " \n", " # Initialize hidden to hidden weights in LSTM to the Identity matrix\n", " # PyTorch LSTM has 4 different hidden to hidden weights stacked in one matrix\n", " identity_init = torch.eye(self.decoder_hidden_size)\n", " self.lstm.weight_hh_l0.data.copy_(torch.cat([identity_init]*4, dim=0))\n", " self.lstm.weight_hh_l1.data.copy_(torch.cat([identity_init]*4, dim=0))\n", " \n", " # Output layer\n", " self.out = nn.Linear(self.decoder_hidden_size, self.output_vocab_size)\n", " \n", " def forward(self, input, hidden, encoder_output):\n", " # Input word indices, should have dim (1, batch_size); output will be (1, batch_size, embedding_dim)\n", " embedded = self.embedding(input)\n", " \n", " # Calculate Attention weights\n", " attn_weights = F.softmax(self.attn(torch.cat((hidden[0][1], embedded[0]), 1)), dim=1)\n", " attn_weights = attn_weights.unsqueeze(1) # Add dimension for batch matrix multiplication\n", " \n", " # Apply Attention weights\n", " attn_applied = torch.bmm(attn_weights, encoder_output)\n", " attn_applied = attn_applied.squeeze(1) # Remove extra dimension, dims are now (batch_size, encoder_hidden_size)\n", " \n", " # Prepare LSTM input tensor\n", " 
attn_combined = torch.cat((embedded[0], attn_applied), 1) # Combine embedding input and attn_applied,\n", " lstm_input = F.relu(self.attn_with_input(attn_combined)) # pass through fully connected with ReLU\n", " lstm_input = lstm_input.unsqueeze(0) # Add seq dimension so tensor has expected dimensions for lstm\n", " \n", " output, hidden = self.lstm(lstm_input, hidden) # Output dim = (1, batch_size, decoder_hidden_size)\n", " output = F.log_softmax(self.out(output[0]), dim=1) # softmax over all words in vocab\n", " \n", " \n", " return output, hidden, attn_weights" ] }, { "cell_type": "code", "execution_count": 37, "metadata": { "hidden": true }, "outputs": [], "source": [ "class AttnDecoderGRU(nn.Module):\n", " def __init__(self, decoder_hidden_size, pretrained_embeddings, seq_length):\n", " super(AttnDecoderGRU, self).__init__()\n", " # Embedding parameters\n", " self.embedding_dim = pretrained_embeddings.shape[1]\n", " self.output_vocab_size = pretrained_embeddings.shape[0]\n", " \n", " # GRU parameters\n", " self.decoder_hidden_size = decoder_hidden_size\n", " self.num_layers = 2 # Potentially add more layers to LSTM later\n", " self.dropout = 0.1 if self.num_layers > 1 else 0 # Potentially add dropout later\n", " \n", " # Attention parameters\n", " self.seq_length = seq_length\n", " self.encoder_hidden_dim = 2*decoder_hidden_size\n", " \n", " # Construct embedding layer for output language\n", " self.embedding = nn.Embedding(self.output_vocab_size, self.embedding_dim)\n", " self.embedding.weight.data.copy_(torch.from_numpy(pretrained_embeddings))\n", " self.embedding.weight.requires_grad = False # we don't want to train the embedding weights\n", " \n", " # Construct layer that calculates attentional weights\n", " self.attn = nn.Linear(self.decoder_hidden_size + self.embedding_dim, self.seq_length)\n", " \n", " # Construct layer that compresses the combined matrix of the input embeddings\n", " # and the encoder inputs after attention has been applied\n", " self.attn_with_input = nn.Linear(self.embedding_dim + self.encoder_hidden_dim, self.embedding_dim)\n", " \n", " # gru for Decoder\n", " self.gru = nn.GRU(self.embedding_dim,\n", " self.decoder_hidden_size,\n", " self.num_layers,\n", " dropout=self.dropout)\n", " \n", " # Initialize hidden to hidden weights in GRU to the Identity matrix\n", " # PyTorch GRU has 3 different hidden to hidden weights stacked in one matrix\n", " identity_init = torch.eye(self.decoder_hidden_size)\n", " self.gru.weight_hh_l0.data.copy_(torch.cat([identity_init]*3, dim=0))\n", " self.gru.weight_hh_l1.data.copy_(torch.cat([identity_init]*3, dim=0))\n", " \n", " # Output layer\n", " self.out = nn.Linear(self.decoder_hidden_size, self.output_vocab_size)\n", " \n", " def forward(self, input, hidden, encoder_output):\n", " # Input word indices, should have dim(1, batch_size), output will be (1, batch_size, embedding_dim)\n", " embedded = self.embedding(input)\n", " \n", " # Calculate Attention weights\n", " attn_weights = F.softmax(self.attn(torch.cat((hidden[0], embedded[0]), 1)), dim=1)\n", " attn_weights = attn_weights.unsqueeze(1) # Add dimension for batch matrix multiplication\n", " \n", " # Apply Attention weights\n", " attn_applied = torch.bmm(attn_weights, encoder_output)\n", " attn_applied = attn_applied.squeeze(1) # Remove extra dimension, dim are now (batch_size, encoder_hidden_size)\n", " \n", " # Prepare GRU input tensor\n", "\n", " attn_combined = torch.cat((embedded[0], attn_applied), 1) # Combine embedding input and attn_applied,\n", " gru_input = 
F.relu(self.attn_with_input(attn_combined)) # pass through fully connected with ReLU\n", " gru_input = gru_input.unsqueeze(0) # Add seq dimension so tensor has expected dimensions for the GRU\n", " \n", " output, hidden = self.gru(gru_input, hidden) # Output dim = (1, batch_size, decoder_hidden_size)\n", " output = F.log_softmax(self.out(output[0]), dim=1) # softmax over all words in vocab\n", " \n", " return output, hidden, attn_weights" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "hidden": true }, "source": [ "#### Testing the Decoder" ] }, { "cell_type": "code", "execution_count": 238, "metadata": { "hidden": true }, "outputs": [], "source": [ "# Test the decoder on sample inputs to check that the dimensions of everything are correct\n", "test_decoder_hidden_size = 5\n", "\n", "test_decoder = AttnDecoderLSTM(test_decoder_hidden_size, en_vectors, test_seq_length).to(device)" ] }, { "cell_type": "code", "execution_count": 239, "metadata": { "hidden": true }, "outputs": [], "source": [ "input_idx = torch.tensor([fr_word2idx['<SOS>']]*test_batch_size, dtype=torch.long, device=device)" ] }, { "cell_type": "code", "execution_count": 240, "metadata": { "hidden": true }, "outputs": [ { "data": { "text/plain": [ "torch.Size([1])" ] }, "execution_count": 240, "metadata": {}, "output_type": "execute_result" } ], "source": [ "input_idx.shape" ] }, { "cell_type": "code", "execution_count": 241, "metadata": { "hidden": true }, "outputs": [], "source": [ "input_idx = input_idx.unsqueeze_(0)\n", "test_decoder_hidden = (test_encoder_hidden[0][1::2].contiguous(), test_encoder_hidden[1][1::2].contiguous())" ] }, { "cell_type": "code", "execution_count": 242, "metadata": { "hidden": true }, "outputs": [ { "data": { "text/plain": [ "torch.Size([1, 1])" ] }, "execution_count": 242, "metadata": {}, "output_type": "execute_result" } ], "source": [ "input_idx.shape" ] }, { "cell_type": "code", "execution_count": 243, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "torch.Size([1, 99997])\n" ] } ], "source": [ "output, hidden, attention = test_decoder.forward(input_idx, test_decoder_hidden, test_encoder_output)\n", "print(output.shape)" ] }, { "cell_type": "code", "execution_count": 244, "metadata": { "hidden": true }, "outputs": [ { "data": { "text/plain": [ "torch.Size([2, 1, 5])" ] }, "execution_count": 244, "metadata": {}, "output_type": "execute_result" } ], "source": [ "test_decoder_hidden[0].shape" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true }, "source": [ "## Part 3: Training the Model" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "hidden": true }, "source": [ "### Training Function" ] }, { "cell_type": "code", "execution_count": 38, "metadata": { "hidden": true }, "outputs": [], "source": [ "def train(input_tensor, target_tensor, encoder, decoder,\n", " encoder_optimizer, decoder_optimizer, criterion):\n", " \n", " # Initialize encoder hidden state\n", " encoder_hidden = encoder.initHidden(input_tensor.shape[0])\n", " \n", " # clear the gradients in the optimizers\n", " encoder_optimizer.zero_grad()\n", " decoder_optimizer.zero_grad()\n", " \n", " # run forward pass through encoder on entire sequence\n", " encoder_output, encoder_hidden = encoder.forward(input_tensor, encoder_hidden)\n", " \n", " # Initialize decoder input (<SOS> tag) and hidden state from encoder\n", " decoder_input = torch.tensor([en_word2idx['<SOS>']]*input_tensor.shape[0], dtype=torch.long, 
device=device).unsqueeze(0)\n", " \n", " # Use correct initial hidden state dimensions depending on type of RNN\n", " try:\n", " encoder.lstm\n", " decoder_hidden = (encoder_hidden[0][1::2].contiguous(), encoder_hidden[1][1::2].contiguous())\n", " except AttributeError:\n", " decoder_hidden = encoder_hidden[1::2].contiguous()\n", " \n", " # Initialize loss\n", " loss = 0\n", " \n", " # Implement teacher forcing\n", " use_teacher_forcing = True if random.random() < 0.5 else False\n", "\n", " if use_teacher_forcing:\n", " # Step through target output sequence\n", " for di in range(seq_length):\n", " output, decoder_hidden, attn_weights = decoder(decoder_input,\n", " decoder_hidden,\n", " encoder_output)\n", " \n", " # Feed target as input to next item in the sequence\n", " decoder_input = target_tensor[di].unsqueeze(0)\n", " loss += criterion(output, target_tensor[di])\n", " else:\n", " # Step through target output sequence\n", " for di in range(seq_length):\n", " \n", " # Forward pass through decoder\n", " output, decoder_hidden, attn_weights = decoder(decoder_input,\n", " decoder_hidden,\n", " encoder_output)\n", " \n", " # Feed output as input to next item in the sequence\n", " decoder_input = output.topk(1)[1].view(1, -1).detach()\n", " \n", " # Calculate loss\n", " loss += criterion(output, target_tensor[di])\n", " \n", " # Compute the gradients\n", " loss.backward()\n", " \n", " # Clip the gradients\n", " nn.utils.clip_grad_norm_(encoder.parameters(), 25)\n", " nn.utils.clip_grad_norm_(decoder.parameters(), 25)\n", " \n", " # Update the weights\n", " encoder_optimizer.step()\n", " decoder_optimizer.step()\n", " \n", " return loss.item()" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "hidden": true }, "source": [ "### Training Loop" ] }, { "cell_type": "code", "execution_count": 39, "metadata": { "hidden": true }, "outputs": [], "source": [ "def trainIters(encoder, decoder, dataloader, epochs, print_every_n_batches=100, learning_rate=0.01):\n", " \n", " # keep track of losses\n", " plot_losses = []\n", "\n", " # Initialize Encoder Optimizer\n", " encoder_parameters = filter(lambda p: p.requires_grad, encoder.parameters())\n", " encoder_optimizer = optim.Adam(encoder_parameters, lr=learning_rate)\n", " \n", " # Initialize Decoder Optimizer\n", " decoder_parameters = filter(lambda p: p.requires_grad, decoder.parameters())\n", " decoder_optimizer = optim.Adam(decoder_parameters, lr=learning_rate)\n", "\n", " # Specify loss function; ignore the <PAD> token index (0) so padding does not contribute to the loss.\n", " criterion = nn.NLLLoss(ignore_index=0)\n", " \n", " # Cycle through epochs\n", " for epoch in range(epochs):\n", " loss_avg = 0\n", " print(f'Epoch {epoch + 1}/{epochs}')\n", " # Cycle through batches\n", " for i, batch in enumerate(dataloader):\n", " \n", " input_tensor = batch['french_tensor'].to(device)\n", " target_tensor = batch['english_tensor'].transpose(1, 0).to(device)\n", " \n", "\n", " loss = train(input_tensor, target_tensor, encoder, decoder,\n", " encoder_optimizer, decoder_optimizer, criterion)\n", " \n", " loss_avg += loss\n", " if i % print_every_n_batches == 0 and i != 0:\n", " loss_avg /= print_every_n_batches\n", " print(f'After {i} batches, average loss/{print_every_n_batches} batches: {loss_avg}')\n", " plot_losses.append(loss_avg)\n", " loss_avg = 0\n", " return plot_losses" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "hidden": true }, "source": [ "### Training the Model" ] }, { "cell_type": "code", "execution_count": 274, 
"metadata": { "hidden": true }, "outputs": [], "source": [ "# Set hyperparameters and construct dataloader\n", "hidden_size = 256\n", "batch_size = 16\n", "dataloader = DataLoader(french_english_dataset, batch_size=batch_size,\n", " shuffle=True, num_workers=4) " ] }, { "cell_type": "code", "execution_count": 275, "metadata": { "hidden": true }, "outputs": [], "source": [ "# Construct encoder and decoder instances\n", "encoder_lstm = EncoderBiLSTM(hidden_size, fr_vectors).to(device)\n", "decoder_lstm = AttnDecoderLSTM(hidden_size, en_vectors, seq_length).to(device)\n", "\n", "encoder_gru = EncoderBiGRU(hidden_size, fr_vectors).to(device)\n", "decoder_gru = AttnDecoderGRU(hidden_size, en_vectors, seq_length).to(device)" ] }, { "cell_type": "code", "execution_count": 276, "metadata": { "hidden": true }, "outputs": [], "source": [ "from_scratch = True # Set to False if you have saved weights and want to load them\n", "\n", "if not from_scratch:\n", " # Load weights from earlier model\n", " encoder_lstm_state_dict = torch.load('models/encoder1_lstm.pth')\n", " decoder_lstm_state_dict = torch.load('models/decoder1_lstm.pth')\n", "\n", " encoder_lstm.load_state_dict(encoder_lstm_state_dict)\n", " decoder_lstm.load_state_dict(decoder_lstm_state_dict)\n", " \n", " # Load weights from earlier model\n", " encoder_gru_state_dict = torch.load('models/encoder1_gru.pth')\n", " decoder_gru_state_dict = torch.load('models/decoder1_gru.pth')\n", "\n", " encoder_gru.load_state_dict(encoder_gru_state_dict)\n", " decoder_gru.load_state_dict(decoder_gru_state_dict)\n", "else:\n", " print('Training model from scratch.')" ] }, { "cell_type": "code", "execution_count": 361, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n" ] } ], "source": [ "# For dataset 1, models were trained for 3 epochs\n", "# For dataset 2, models were trained for 50 epochs\n", "\n", "learning_rate = 0.0001\n", "encoder_lstm.train() # Set model to training mode\n", "decoder_lstm.train() # Set model to training mode\n", "\n", "lstm_losses_cont = trainIters(encoder_lstm, decoder_lstm, dataloader, epochs=50, learning_rate = learning_rate)\n", "\n", "\n", "# For dataset 1, models were trained for 3 epochs\n", "# For dataset 2, models were trained for 50 epochs\n", "print('Training GRU based network.')\n", "learning_rate = 0.0001\n", "encoder_gru.train() # Set model to training mode\n", "decoder_gru.train() # Set model to training mode\n", "\n", "gru_losses = trainIters(encoder_gru, decoder_gru, dataloader, epochs=50, learning_rate = learning_rate)" ] }, { "cell_type": "code", "execution_count": 45, "metadata": { "hidden": true }, "outputs": [], "source": [ "np.save('data/lstm2_losses.npy', lstm_losses)" ] }, { "cell_type": "code", "execution_count": 46, "metadata": { "hidden": true }, "outputs": [], "source": [ "np.save('data/gru2_losses.npy', gru_losses)" ] }, { "cell_type": "code", "execution_count": 277, "metadata": { "hidden": true }, "outputs": [], "source": [ "lstm_losses = np.load('data/lstm1_losses.npy')\n", "gru_losses = np.load('data/gru1_losses.npy')" ] }, { "cell_type": "code", "execution_count": 294, "metadata": { "hidden": true }, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 294, "metadata": {}, "output_type": "execute_result" }, { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAYIAAAEWCAYAAABrDZDcAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMi4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvhp/UCwAAIABJREFUeJzs3Xd8VFX6+PHPk0knvZAEQgoQkB6aIk1E1y5W7K6yKrprd13LFnXVr2X1Z9u19w6ygiCrKKJURek19BpaCqT35Pz+uDchgZQBMjMpz/v1mtfM3PrcyeQ+c86551wxxqCUUqr98vJ0AEoppTxLE4FSSrVzmgiUUqqd00SglFLtnCYCpZRq5zQRKKVUO6eJoA0QkTEiku6G/QSIyNcikisiU1y9P3X8ROQfIvKGi7adLiJjXLHt1kREnhSRDzwdR3PQRHCMRGSHiJzpgf3eKCKVIlIgInkislJELjiO7XwgIk8eZxiXAzFApDFm/HFuo3YsY0Skyj6mAvsE84WIDD2GbTwmIp+caCzNsR8RuUNElopI6bGcIETkr7U+g5Jaf+cCEVl3PPEaY54wxtx2POu2FCJyt4gsE5EyEXmniWVvPuJzq350dFe8rZkmgtblF2NMEBAGvAt8ISIRbtx/IrDJGFNxrCuKiHcDs/baxxQMDAM2AAtE5IzjD9Nj9gJPAu8dy0rGmKeMMUH253Ab9t/ZfvQ5cvlGPsu2Zg/wOPCBk8svqPW5VT8yXBde26GJoBmJyC0iskVEDorIDBHpZE8XEXlRRDLsapXVItLXnneeiKwXkXwR2SMi9ze1H2NMFdbJJgDoWk8cvURkrojkiMg6ERlnT58IXAs8YP9a+tqe/qC973wR2VjfSVhE/gk8Alxpr3uTiHiJyN9FZKd9bB+JSKi9fJKIGHu5XcCPTRyTMcakG2MeAd4Bnq2175dFZLddElomIqPs6ecAf60V0yp7+gQRSbOPZ5uI3FprW1EiMtP+bA6KyAIR8bLndRKRL0UkU0S2i8hdje2nnmOYaoz5Csiub769z5GNfQ4NrOdtf5Z/EpEtWMkSEfmPXYrKE5ElIjK81jo11RYi0t1e//f28pki8lCtZb3sUslWEckSkUkiEl5r/o323zir9noNxBomIp/Y+9ghIg+LiNjzbhaRefb/Qo79tzmroW0ZY/5rjJkOHDzWz6yeuNLt73maiBwSkXdFxK/W/Nvs/91sEflKROJqzesnIj/Y35f9IvJArU372cebLyJrRWTQicbqEcYYfRzDA9gBnFnP9LFAFjAI8AP+Dcy3550NLMP6JS9ALyDOnrcPGGW/DgcGNbDfG4GF9mtv4G4gHwgFxgDp9jwfYAvWicvXjisf6GnP/wB4stZ2ewK7gU72+ySgWwMxPAZ8Uuv9H+x9dQWCgKnAx7W2Y4CPgA5AQD3bq4m7ns+yCuhgv78OiLSP+8/AfsC/vpjsaecD3ezP+jSgqPpzBZ4G3rA/Jx9glL2cl/03esT+3LoC24CzG9pPI9+RJ4EPjvP7VfN3rjXN2/4sZ9nfkQB7+vVAhD3/Qaxf0H5HxgB0t9d/A/DH+o6WAin2/PuBRUBne/67tf6O/YACYATW9/oVoAIY00D8n9nfg2D7M9wC3GDPuxkot783DuBOYLcTn8kzwDtNLHMzMLeR+enAaiAeiAIWA4/Z884CMoBU+/hfA36054UCB7D+3/yAEODkWp9xMdb/twN47si/XWt5aImg+VwLvGeMWW6MKQUeBk4VkSSsL38wcBIgxpg0Y8w+e71yoLeIhBhjDhljljeyj2EikoN1IrwauMQYk3vkMlgn5WeMMWXGmB+Bmfby9anE+oL3FhEfY8wOY8zWYzjmF4wx24wxBfYxXyV1qy4eM8YUGmOKndwmWFUsgpU4McZ8YozJNsZUGGP+nx1vz4ZWNsb8zxiz1VjmAd9jnfDB+rzjgERjTLkxZoGx/quHAtHGmMftz20b8DZw1THE7WpP2d+RYgBjzMfGmIPGqqr7F9ZJqnsj6z9mjCmxv2PrgAH29FuBvxpj9hhjSrCS3hV2SWk88JUxZpH9vf4r1t/mKCLiA1wBPGSMybc/wxexEla1rcaY94wxlcCHQLyIRB3Ph1GPkXZJo/qx8Yj5rxir1JkFPMXh/4lrsRLNSvv4HwJOE5F4YBxWsnrZGFNqjMkzxvxWa5vzjDHf2cfzMVYyaXU0ETSfTsDO6jf2iTEb6GyfjP8DvAocEJG3RCTEXvQy4Dxgp11sPrWRfSw2xoQZY6KMMcOMMT80EMduY1UfVduJ9WvvKMaYLcA9WP/8GXa1QCdnDpgjjtl+7Y3VoFxtt5Pbqq0z1i/YHAAR+bNdpM+1E2Eo1q+6eonIuSKy2C7K52B9vtXLP4f1K/V7u2qiuqojEehU+0SCddKLOWoHnlPnsxSRB0Rkg4jkAoewSl4Nfi7GmP213hZh/WAASAC+rnXca7A+/47Y36da2yig4aqajli/jI/8TtT+7h0ZA7XiOFEL7f+P6seRPxZqf347sY4Njv7fzcP6PDsDXbC+Lw058ng6HG/wnqSJoPnsxTqZACAiHbCqM/YAGGNeMcYMBvoAPYC/2NOXGGMuwvon+gr4ohni6FJd721LqI4D6x+8DmPMZ8aYkXb8hlr1807sK7HW+wSsaoMDtTfv5LZquwRYbowpFKs94EGsX5rhxpgwIJfDv0rrbN+u9/0SeB6IsZf/pnp5+5fqn40xXYELgfvEahPZDWw/4kQSbIw57wSOo7nVxCAipwP3Yf2QCMOqMiqggV/rTUgHfnfEsfvbiWMf1smwer9BWNVR9cnAKmEe+Z3YU//ibtel1usErO8vHP2/G4z1ee7B+l50c1eAnqKJ4Pj4iIh/rYc3Vt3oBBFJtU9GTwG/GmN2iMhQETnFLjoXAiVApYj4isi1IhJqjCkH8rD+kU7Er/Y+HhARH7Gu974QmGTPP0CtBmYR6SkiY+2YS7DqPJ2N4XPgXhFJtk8QTwGTzfFdVSQi0llEHsWq7/2rPSsYK7lkAt4i8ghWFUi1A0BSrcTni1V1lAlUiMi5WHXA1fu5QKzGU+Hw510J/Abk2Q2KASLiEJG+cvhS1iP3U98xeIuIP9avYket70b1fCPNd/199eeShdXW8RjH/2v0DeApEUkAEJGOYl9gAEwBLhKRU+3vyJM0kBTt7/B/7W0FiUgycC9wXJf3NvB5Oo5nW7Y77O9YJFY15mR7+ufATSLS3z7Gp7GuQEoHZgAJYl0a7CsiISJy8gnE0CJpIjg+32CdMKsfjxlj5gD/wPo1ug/rV0R1/XIIVn3zIawiaDbWL1aw6k93iEge1qWD151IYMaYMqx6zXOxThKvAb83xmywF3kXqz0gR0S+wjppPmMvux+rZPLXozZcv/ew6kXnA9uxEsmdxxhyJxEpwPo1uwSrcXKMMeZ7e/53wLfAJqzProS6Rfzqjm3ZIrLcGJMP3IVVsjoEXIP1z1wtBfjB3t8vwGvGmLl2He+FWHW827E+j3ewqqGO2k8Dx/J3rO/DQ1h/x2
J7GnZ9cwFWtUtz+MY+js1YFzDkYX3vjscLWA3Rc0QkH/gZq80EY8xqrIbSL7B+Ie+nbnXIkf4ElGF9hvOw2gE+Os64HsP6DO/HakQvxjqBN2SUHN2PYGCt+Z9jfWZbgY1YP1wwxszCukx1GtZnmIDVboDdBvc7rJJXBtb38LTjPJ4WS6x2MqWUK4nIdUAfY0xjJzLlImL1vL/OGDPX07G0RO2lY4pSHmWMcXnvZ6WOl1YNKaVUO6dVQ0op1c5piUAppdq5VtFGEBUVZZKSkjwdhlJKtSrLli3LMsZEN7Vcq0gESUlJLF261NNhKKVUqyIiO5teSquGlFKq3dNEoJRS7ZwmAqWUaudaRRuBUko5o7y8nPT0dEpKSjwdilv5+/sTHx+Pj4/Pca2viUAp1Wakp6cTHBxMUlIS1riCbZ8xhuzsbNLT00lOTj6ubWjVkFKqzSgpKSEyMrLdJAEAESEyMvKESkGaCJRSbUp7SgLVTvSY23QimLYinU8WO3UZrVJKtVttOhF8s2a/JgKllFsFBR19582NGzcyZswYUlNT6dWrFxMnTuS7774jNTWV1NRUgoKC6NmzJ6mpqfz+979n7ty5iAjvvvtuzTZWrFiBiPD8888ftf0T1aYTQUyIH/vz2tfVA0qplueuu+7i3nvvZeXKlaSlpXHnnXdy9tlns3LlSlauXMmQIUP49NNPWblyJR99ZN3Hp1+/fkyePLlmG5MmTWLAgAEuia9NJ4KTK5YzomQBJeUnevdHpZQ6fvv27SM+Pr7mfb9+/ZpcJyEhgZKSEg4cOIAxhlmzZnHuuee6JL42ffno4Myp9PTeQkbewyREBno6HKWUG/3z63Ws35vXrNvs3SmERy/sc8zr3XvvvYwdO5bhw4dz1llnMWHCBMLCwppc7/LLL2fKlCkMHDiQQYMG4efndzxhN6lNlwiqIrqTJAc4kFvo6VCUUu3YhAkTSEtLY/z48cydO5dhw4ZRWlra5HpXXHEFU6ZM4fPPP+fqq692WXxtukTg07Enfmnl5O3fBl2bHIlVKdWGHM8vd1fq1KkTf/jDH/jDH/5A3759Wbt2LYMHD250ndjYWHx8fJg9ezYvv/wyP//8s0tia9OJIKhzLwDKD2wATvFsMEqpdmvWrFmcccYZ+Pj4sH//frKzs+ncubNT6z7++ONkZGTgcDhcFl+bTgQdOp0EgOPgFg9HopRqL4qKiuo0DN93332kp6dz99134+/vD8Bzzz1HbGysU9sbPny4S+KsrU0nAukQRR5BBORt93QoSql2oqqqqt7pL7zwQoPrzJ07t877MWPGMGbMmKOWe+yxx04gsoa16cZiRNjn04WwYu1UppRSDWnbiQA4FJBIXPluT4ehlFItVptPBEUhyUSaQ5iSXE+HopRSLVKbTwSV4d0BKNy7wcORKKVUy9TmE4F3x54A5O9J83AkSinVMrX5RBAUl0KF8aL8wCZPh6KUUi1Sm08EMeEh7DbRSPZmT4eilGonDhw4wDXXXEPXrl0ZPHgwp556KtOmTWPu3LmEhoYycOBATjrpJO6///6adR577LGjhphOSkoiKyvL5fG6NBGIyA4RWSMiK0VkqT0tQkRmi8hm+znclTF0DPFjm+lEQN42V+5GKaUA6x7CF198MaNHj2bbtm0sW7aMSZMmkZ6eDsCoUaNYsWIFK1asYObMmSxatMjDEbunRHC6MSbVGDPEfv8QMMcYkwLMsd+7jL+Pg3RHPKFFu6CBjh5KKdVcfvzxR3x9fbnttttqpiUmJnLnnXfWWS4gIIDU1FT27Nnj7hCP4omexRcBY+zXHwJzgQdducNDAYn4FJdB7m4IT3TlrpRSLcW3D8H+Nc27zdh+cO4zjS6ybt06Bg0a1OSmDh06xObNmxk9enRzRXfcXF0iMMD3IrJMRCba02KMMfsA7OeO9a0oIhNFZKmILM3MzDyhIAqDk60XWdpOoJRyr9tvv50BAwYwdOhQABYsWED//v2JjY3lggsuqBlzqKEb0J/ojemd4eoSwQhjzF4R6QjMFhGnL+Y3xrwFvAUwZMgQcyJBVIR3gwwgezOknHkim1JKtRZN/HJ3lT59+vDll1/WvH/11VfJyspiyBCrdnzUqFHMnDmTTZs2MXLkSC655BJSU1OJjIxk3759dbaVn5/v1A1sTpRLSwTGmL32cwYwDTgZOCAicQD2c4YrYwAIiogj13SgSksESikXGzt2LCUlJbz++us104qKio5arkePHjz88MM8++yzAIwePZoZM2aQn58PwNSpUxkwYIBLh5+u5rJEICIdRCS4+jVwFrAWmAHcYC92AzDdVTFU6xgawDYTR8WBja7elVKqnRMRvvrqK+bNm0dycjInn3wyN9xwQ80Jv7bbbruN+fPns337dvr3788dd9zByJEjSU1N5Y033uCdd95xS8yurBqKAabZ9VvewGfGmFkisgT4QkRuAnYB410YAwCxIf5sM3H0zdZOZUop14uLi2PSpEn1zqs9vHRAQECdq4ZuvfVWbr31VleHdxSXJQJjzDZgQD3Ts4EzXLXf+sSE+LGiKg6fogVQmg9+we7cvVJKtWhtvmcxWCWCraaT9SZb71amlFK1tYtEEBnkx06xE0GmVg8p1ZYZc0IXGbZKJ3rM7SIROLyE/MBkysQP9q7wdDhKKRfx9/cnOzu7XSUDYwzZ2dk190M+Hm36nsW1RYUGsj0vhZ57lnk6FKWUi8THx5Oens6JdkJtbfz9/YmPjz/u9dtNIogJ8WdNXnd67vsWKsrA29fTISmlmpmPjw/JycmeDqPVaRdVQ2Algl/LkqGyFA6s9XQ4SinVYrSbRBAb6s/PJUnWG60eUkqpGu0mEcSE+LOHKCoDozURKKVULe0oEfgBQl5Ef0hf6ulwlFKqxWg3iSA2xLq0an9wH2sU0uIcD0eklFItQ7tJBImRHfD38WJJRTdrwt7lng1IKaVaiHaTCHy9vRicGM6MjBhrgrYTKKUU0I4SAcCw5EiWZVRRGZEC6ZoIlFIK2lsi6BaJMbA/uC/sWQrtqBu6Uko1pF0lgv7xofj7eLGyqisUZlo3s1dKqXauXSUCP28HgxPDmZXTxZqgl5EqpVT7SgRgtRN8lxWBcfhpg7FSStEeE0G3SMqMNzlhvTURKKUU7TARVLcTbPLqBvvXeDocpZTyuHaXCPy8HQxKCGd9fiCUFUBZkadDUkopj2p3iQBgWNdINuXb9yMoPujZYJRSysPabSI4aIKtN0WaCJRS7Vu7TAQDuoSS7xVivSnK9mwwSinlYe0yEfh5O+jcqbP1RhOBUqqda5eJAKB7UiIApXnt6ybXSil1pAYTgYicVOu13xHzhrkyKHfomZwAQMaBfR6ORCmlPKuxEsFntV7/csS811wQi1v1T4gix3QgJ3u/p0NRSimPaiwRSAOv63vf8EZEHCKyQkRm2u+TReRXEdksIpNFxPcY4m02ER18KfAKoSQ3wxO7V0qpFqOxRGAaeF3f+8bcDaTVev8s8KIxJgU4BNx0DNtqVuV+4ZhCvXxUKdW+NZYI4kXkFRH5d63X1
e87O7NxEYkHzgfesd8LMBb4r73Ih8DFxx39CXIERRFYmUtGfomnQlBKKY/zbmTeX2q9PnK8ZmfHb34JeACwe28RCeQYYyrs9+k0kFREZCIwESAhIcHJ3R2bwNBoHJnrWL07lzN7+7tkH0op1dI1mAiMMR8eOU1EwrFO5E1WDYnIBUCGMWaZiIypnlzfrhrY/1vAWwBDhgxxya3EQqNiqdiSz6r0HM7sHeOKXSilVIvX2OWjj1RfQioifiLyI7AVOCAiZzqx7RHAOBHZAUzCqhJ6CQgTkeoEFA/sPYH4T4hPUBQBUsb6nXrlkFKq/WqsjeBKYKP9+gasX/PRwGnAU01t2BjzsDEm3hiTBFwF/GiMuRb4Cbi81nanH1/ozSAwEoDde/bgRCFHKaXapMYSQVmtKqCzgUnGmEpjTBqNty005UHgPhHZgtVm8O4JbOvEBEYA4FN6iJ3ZOhy1Uqp9auyEXioifYEDwOnA/bXmBR7LTowxc4G59uttwMnHFKWr2CWCcLHaCZKiOng4IKWUcr/GSgT3YF3muQHruv/tACJyHrDCDbG5np0IOnoXsmp3roeDUUopz2jsqqHFwEn1TP8G+MaVQbmNnQh6h5bzbXqOh4NRSinPaDARiMh9ja1ojHmh+cNxM/8wAFKCynhudy7llVX4ONrtgKxKqXaqsbPe88B1WA26QVidwmo/Wj+HN/iH0cW/mNKKKjbuz/d0REop5XaNNRYPwrrs83xgGfA5MMeZzmStSmAkMd6FAKxOz6Vv51APB6SUUu7VYInAGLPSGPOQMSYV6xLPi4D1IjLObdG5Q2AEgZW5xIT48UPaAU9Ho5RSbtdkhbiIRAMDgX5YYwO1rXGbAyORomzGD+7C3I0Z7M0p9nRESinlVo0NMTFBRGYBU7B6FV9hjPmdfTVR2xEYCUWHuHJoFwzwxdLdno5IKaXcqrESwbtAHJCP1bP4HRGZUf1wS3TuEBgBRdl0iQhkVEo0k5fsprKqbTWDKKVUYxprLD7dbVF4UkAEVBRDWRFXD+3CHz9dzrxNGYw9SUcjVUq1D411KJvnzkA8xu5URvFBzuzdiaggPz77dbcmAqVUu6G9p6oTQVE2Pg4vxg+J56eNGezP1buWKaXaB00EtRIBwFVDu1BZZZiijcZKqXZCE4E9FDVF1k3sEyM7MKJ7JJO00Vgp1U4ccyIQkadE5EERiXRFQG5XUyI4WDPpskHx7MkpJm1fnoeCUkop9zmeEsFvQAXwYjPH4hn+YYDUVA0BDEm0SgmrdERSpVQ7cMx3GjPGfOWKQDzG4Q3+oXUSQZeIAMIDfVi1O4drT0n0YHBKKeV6TSYCe4iJW4Ck2ssbY/7gurDcLDCyTiIQEQZ0CdOb1Sil2gVnSgTTgQXAD0Cla8PxkMBIKD5YZ9KA+DDmbdpMQWkFQX4ncotmpZRq2Zw5wwUaYx50eSSeFBgJeel1JqUmhGEMrEnP5dRubaNdXCml6uNMY/FM+z7FbVdgRJ2rhsAqEYA2GCul2r7GblWZDxiskUf/KiKlQLn93hhjQtwTohvUkwgiOviSEBHIqt2aCJRSbVtjYw21jdtROiMwsmbgOXwDayandgljyY6DjayolFKtnzM3prlEREJrvQ8TkYtdG5abBVT3Ls6uM3lAlzD25ZZwIE/HHVJKtV3OtBE8aoypuY7SGJMDPOq6kDzgiPGGqqV2sfKfVg8ppdoyZxJBfcu0respaw1FXVufTqF4e4k2GCul2jRnEsFSEXlBRLqJSFcReRFY5urA3Kqe8YYA/H0cnBQXzEotESil2jBnEsGdQBkwGfgCKAb+1NRKIuIvIr+JyCoRWSci/7SnJ4vIryKyWUQmi4jviRxAswisv40ArMtIV+/OpUpHIlVKtVHOJILzjDEPGWOG2I+/Auc7sV4pMNYYMwBIBc4RkWHAs8CLxpgU4BBw0/EG32zqGXiu2oAuYeSXVrAtq9D9cSmllBs4kwgednJaHcZSYL/1sR8GGAv8157+IeD5K5Ac3hAaD9lbjpqV2sXuWKbVQ0qpNqqxDmXnAucBnUXklVqzQrCGoW6SiDiw2hO6A68CW4EcY0z1+ulA5wbWnQhMBEhISHBmdycmth/sW33U5G7RQQT5efPb9oNcNjje9XEopZSbNVYi2AssBUqwTubVjxnA2c5s3BhTaYxJBeKBk4Fe9S3WwLpvVVdHRUdHO7O7ExPb3yoRlNWtAnJ4CWf1iWHy0t28u3C76+NQSik3a6xn8SpglYh8ZowpP5GdGGNyRGQuMAwIExFvu1QQj5VwPC+uP2DgwDrocnKdWU9f2o+i0kqemLmewtIK7hzbHRHxTJxKKdXMnGkjSBKR/4rIehHZVv1oaiURiRaRMPt1AHAmkAb8BFxuL3YD1jDXnhfb33ret+qoWX7eDv5zzUAuHdiZF2Zv4plvN2CMXkWklGobnOkY9j5WT+IXgdOBCVgDzzUlDvjQbifwAr4wxswUkfXAJBF5ElgBvHtckTe30Hjr6qH9R7cTAHg7vHh+/AAC/Ry8OX8bAxPCOadvrJuDVEqp5udMIggwxswRETHG7AQeE5EFNDHMhDFmNTCwnunbsNoLWhYRq3qongbjal5ewmMX9mH2+gN8sXS3JgKlVJvgTNVQiYh4AZtF5A4RuQTo6OK4PCO2P2SkQWXDTSLeDi8uHRTP3I0ZOhidUqpNcCYR3AMEAncBg4Hrser22564AVBZClmbGl1s/OB4qgxMXb7HTYEppZTrNJkIjDFLjDEFxph0Y8wEY8ylxpjF7gjO7WoajBuuHgLoGh3EkMRwpizbrY3GSqlWr8FEICJRIvKoiNwlIkEi8rqIrBWR6SLS3Z1Buk1UCngHNNhgXNsVQ7qwLbOQ5bu0x7FSqnVrrETwGeAHpAC/AduwLvucCbzj+tA8wMsBMb1h/5omFz2vfxwBPg6mLN3thsCUUsp1GksEMfYAc3cBQcaY54wxG4wxbwNh7gnPA2L7WyWCJqp8gvy8Ob9/HDNX76OozKkRN5RSqkVqLBFUgjV4HJB1xLwql0XkaXH9oSQXcnY2uej4wfEUlFYwa+1+NwSmlFKu0Vg/gq4iMgOr81j1a+z3yS6PzFNiB1jP+1ZDeFKji56cHEFiZCCf/bqLSwZ21mEnlFKtUmOJ4KJar58/Yt6R79uOmN4gXlb1UO9xjS4qItw8Mpl/TF/HjFV7uSi13oFUlVKqRWts0Ll57gykxfAJgKgeTjUYA1xzSiJfLt/D41+v57Qe0YQFev6Ga0opdSyc6VDW/sQ2PtREbQ4v4elL+5FTXM7T32xwcWBKKdX8NBHUJ64/5O+FfOcagXvFhXDzqGQmL93Nr9uOvt2lUkq1ZI0mAhFxiMhz7gqmxeg6xnre+K3Tq9xzRg+6RATw8LQ1lFZUuiQspZRyhUYTgTGmEhgs7e1ymJi+EJ4MaTOaXtYW4OvgyYv7sS2zkMlLtJOZUqr1cKZqaAUwXUSuF5FLqx+uDsyjRKD3RbB9PhQddHq103pEkxzVgZ82ZLgwOKWUal7OJIIIIBsYC1xoPy5wZVAtQu9x
UFVxTNVDAKNSoli87aBWDymlWo0mb0xjjJngjkBanE6DILSLVT008FqnVxuVEs1Hv+xk+c4cTu0W6cIAlVKqeTRZIhCRHiIyR0TW2u/7i8jfXR+ah4lAr3Gw9UcoyXN6tWFdI/D2EhZsznRhcEop1XycqRp6G3gYKIeaW1Be5cqgWoze46CyDDZ95/Qqwf4+DEoIZ8HmI4dnUkqplsmZRBBojPntiGntY7jN+JMhKBbSph+etmMhfHI55KY3uNrIlCjW7s3lYGGZG4JUSqkT40wiyBKRboABEJHLgX0ujaql8PKCXhfC5h+grBAWvw4fjoMts2F9w5eWjkqJwhhYtEVLBUqpls+ZRHA78CZwkojswbqH8W0ujaol6T0OKorhvXNg1kPQ4xyrEXnnogZX6R8fRoi/t7YTKKVaBWeuGtoGnCkiHQAvY0y+68NqQRKGQ2CUNQgfKwYmAAAgAElEQVTd2L/DyD/D9Nth83fWzWvq6Wvn8BJGpkSxYHMWxhgdnlop1aI5c9VQpIi8AiwA5orIyyLSfq6LdHjDlR/DhG9h9F+s6qLEU6EoG7I2NbjaqJRo9uWWsDWzwI3BKqXUsXOmamgSkAlchnXP4kxgsiuDanESh1sn/5r3I6znnT83uMrI7lEAevWQUqrFc6pnsTHmCWPMdvvxJG35nsXOiOgKHTo2mgi6RASSHNVBE4FSqsVzJhH8JCJXiYiX/bgC+J+rA2vRRKxSwq5fGl1sVEoUv2zNZl9usZsCU0qpY+dMIrgV+AwotR+TgPtEJF9EGuxyKyJdROQnEUkTkXUicrc9PUJEZovIZvs5vDkOxO0Sh0PubsjZ1eAiNw5Pwkvgjs9WUF5Z5cbglFLKeU0mAmNMsDHGyxjjYz+87GnBxpiQRlatAP5sjOkFDANuF5HewEPAHGNMCjDHft/6JA63nnc2XCroGh3E05f1Z9nOQ/xrlt69TCnVMrnsDmXGmH3GmOX263wgDegMXAR8aC/2IXCxq2JwqY69wS+00f4EAOMGdOL6YYm8vWA7361z7o5nSinlTm65VaWIJAEDgV+BGGPMPrCSBdCxgXUmishSEVmamdkCO2Z5OSBhWJPtBAB/v6AX/eNDuX/KKnZmF7ohOKWUcp7LE4GIBAFfAvcYY5wextMY85YxZogxZkh0dLTrAjwRiadafQkKGk9Uft4OXr1mEAL8Y/o698SmlFJOcqZDWTcR8bNfjxGRu0TEqctHRcQHKwl8aoyZak8+ICJx9vw4oPXezqu6P4ETpYIuEYH86fTuzN+UybKdh1wcmFJKOc+ZEsGXQKWIdAfeBZKxriJqlH2f43eBNGPMC7VmzQBusF/fAEw/ct1WIy4VvAMa7U9Q2+9PTSSygy8v/dBwj2SllHI3ZxJBlTGmArgEeMkYcy8Q58R6I4DrgbEistJ+nAc8A/xORDYDv7Pft07evhA/xBqa2gmBvt7celpXFmzOYskO5++FrJRSruRMIigXkauxfr3PtKf5NLWSMWahMUaMMf2NMan24xtjTLYx5gxjTIr93LrPiD3PhQNrIMO5y0OvH5ZEVJAfL87WUoFSqmVwJhFMAE4F/s8Ys11EkoFPXBtWK9LvCvDyhlVN1pYBEODr4LbTuvLz1mwWb8t2cXBKKdU0ZzqUrTfG3GWM+dzuBRxsjGm91TnNLSgaUs6CVZOh0rkbt103LJHoYD9emL0JY4yLA1RKqcY5c9XQXBEJEZEIYBXwvoi80NR67cqAq6FgP2z7yanF/X0c3D6mG79tP8ictNZ70ZRSqm1wpmoo1L7+/1LgfWPMYOBM14bVyvQ4BwIiYKVz1UMA15ySSI+YIB6dsY7C0vZxC2ilVMvkTCLwtq/3v4LDjcWqNm9f6DceNvwPip3rI+Dr7cXTl/ZjT04xL2jDsVLKg5xJBI8D3wFbjTFLRKQrsNm1YbVCqddAZSmsndr0srbBiRFcc0oC7y/azpr0XBcGp5RSDZPW0Fg5ZMgQs3TpUk+H0Thj4PUR4BMAt8yBrC2w/ANIXwr+YRAYCYERkHotdDypZrXc4nLOfGEeMSF+fPWnEXg73DL8k1KqHRCRZcaYIU0t50xjcbyITBORDBE5ICJfikh884TZhohYpYI9S+Hds+A/g2Hx62CqIC8dtv5ovf/k0jrVR6EBPjx6YW/W7snj7QXbj9psZZXh8a/Xc83biykpr3TnESml2glnfn6+jzUsRCesYaS/tqepI/W/AnyDIH8/nPEI3LsObvoeblsIf06zXhccgP/dX2e18/vFcVbvGJ6dtYF/fr2OsgrrJjbFZZXc9sky3lu0nZ+3ZvPMt3pPgzatrAgqSj0dhWqHvJ1YJtoYU/vE/4GI3OOqgFq1oI5wX5qVDLzqybGdB8GYh+DHJ60rjfqPB0BEePXaQTz9zQbeW7Sd1em5PHFRX/721RpW7s7hn+P6sD2rkA9+3sEZvToyKqWFjsaqTsynl0N0T7jgRU9HotoZZ0oEWSJynYg47Md1gHaJbYh/SP1JoNrI+6DLMPjfn+vc5tLH4cUjF/bmP9cMZMO+PM57ZQHr9ubx+rWDuGF4Eg+dexLdOwZx/5RV5BSWuOFAlNtlboRMvYJMuZ8zieAPWJeO7gf2AZdjDTuhjoeXAy5902o7mHYblObXmX1B/05Mv2MkF/SP47ObT+Gcvtb4fv4+Dl66MpXKgmx4oTdm8eueiF65SlUlFB+EIv2NpdyvyaohY8wuYFztaXbV0EuuCqrNC0+C85+HabfCi31h2B/hlFshIByA7h2D+M81g45arW/nUN5JnkvYnmy2z36Dh1al4u/jICrIjwkjkujbOdTNB6KaTfEh68dBUZanI1Ht0PFeq3hfs0bRHg24Cm750bq5zdyn4cV+8M0DsH1Bw2MWZW9lwL4pFPhEkFy5g87lu8gpLuf79fu54N8LufnDpazdo/0RWqVCOwEUHYSqKs/Gotqd400E0qxRtFedB8PVn8Fti6DHWbDsA/jwAni+O0z7IxzaWXf5Hx5DHL4ETZgGCC/03cH020ew6KGx3Pe7Hvy2PZsL/r2Qh6euprxSTyatSqF9u1NTCSU5no1FtTvHmwhafi+01iS2L1z+HjywDa74CFLOhrQZ8Pbph+9+tmuxNW3E3dApFRKHw7ppAIT4+3DXGSksfGgst47uyue/7eaWj5bqGEatSe0qoUKtHlLu1WAiEJF8Ecmr55GP1adANTe/IOh9kdWYPHGu1Wbw4ThY+j589zcIjoPhd1jL9rkEMtMgI61m9RB/Hx4+rxfPXNqP+ZsyufrtxWQV6HXprULtk7+2Eyg3azARGGOCjTEh9TyCjTHO9D9QJyIqBW6eA8mjYeY9Vo/lsX8H3w7W/F7jAKkpFdR21ckJvHX9EDYdyOey139m9voDVFVpIa5FK9QSgfIcHdimJQsIg2unWH0Peo2z7ntQLTgGkkZaiaCe8aLO7B3DpzcPo7LKcMtHSznzxXl8/tsuHaaipSrMpKbpTUsEys00EbR0Xg4481G48mPrdW19LoGsTZCxvt5VByeGM/f+Mbxy9UACfR08PHU
NZ780nw378+ouWF5sNUznH3DRQagmFWVBeKL1ulD7Eij30iqe1qzXOPjmfmvo65g+9S7i7fBi3IBOXNg/jgWbs7h/yioufe1n3htTwrDNL0D2NiizO7X5BsPdK6FDlBsPQgHWyT+ks3X5qJYIlJtpImjNgqIhaRSs/sL6NekTaA2D3bEXRHSts6iIMLpHNF/fNpCl79zDsPkzOOQXT/CAq/EOjrHW++5v8OsbVluEE0orKpm7MZNftmZz++ndiQ72c8VRtg+FmdY4Q4ER2kag3E4TQWs38DqYegvMuLPu9LhUq+qoxzlQUQL5+yA3nZhfXuX84u0sirycm/ecR+XPgaR2CeOUrhFc3/knIhe/SVbfW4mIjMSngXsjrEnP5bPfdvHNmn3kFpcDUGUMj1/U19VH23YVZUGHkRAYpSUC5XaaCFq7/ldAyllQVmDV9Zfmw85FViPyD49aj9rCEuGGmYxIHsVbmzOZvymTX7cf5NWftjCP0czwm817L/+DNysv5NSukdx6WldO6xGNiLA/t4R/zdrAlpXzKfcO4vQ+A7l4YGdmrNrL5CW7ueuMFKKCtFRwzKoqrSqhDlHWI3ePpyNS7YwmgrYgIMx6VOs8CIbfCYd2wI6F4B8KwZ0gJA6CYmoanUelRNcMaZ1fUs6mAwVk/e9b7sn5Hr+Bf+SLFVnc+P4SesWFcGpyBHuXTOdmr68Y4reRqg4d8Rr3KwRG0CUikGkr9vD+ou385eyT6glQNaroIGCs0kBgFOxb5emIVDujVw21ZeFJVtVRrwshfjCEdDr6yiNbsL8PgxPDiTr3YQLKsrkvagnzHzidf13Wj4ElvzJ+6VW84XiW1JACOO1BvIoPwjd/AaBbdBDn9Inlo192kl9S7sYDbCOqh5foEAUdIq02glZwC1nVdmgiUHUljYLOQ2DRK/hmreeKDXfzVPETpET6wMVv4H3PKjj9r3Dag7D2v7B+OgB/HNON/JIKPv11VxM7UEepbhPoYJcIqsqhNK/xdZRqRi5LBCLynn2f47W1pkWIyGwR2Ww/h7tq/+o4icCoP0POTnhjBKQvg7OfxvuOXyH1anD4WMuNvNdqkJ55LxRk0j8+jJHdo3h34XbttHasakoE0Ycv3dUrh5QbubJE8AFwzhHTHgLmGGNSgDn2e9XS9DgH+lwKQ2+Gu5bDqX86nACqOXzgkjesxumZ90DuHu4d4ktQwQ5mLK6/g5tqQHUHsuo2AtAb1Ci3clljsTFmvogkHTH5ImCM/fpDYC7woKtiUMfJywvGv9/0ch17WX0OZj8CG2YyGPjJD4rm+JNb/ldCx9zRYJuEqqV6eInACKuNALREoNzK3VcNxRhj9gEYY/aJSMeGFhSRicBEgISEBDeFp47ZqXdYl6QWHwKHD7tzy9n+04eMnv8I5Ru/xOfif0PcAAC2ZhawclcOlw7qjIje0qJGUZaVBLwctUoEmgiU+7TYy0eNMW8BbwEMGTJEL6Foqbwc0OfimrddgP2J47j3/Zd4JOMjwt46nareFzOVsfxtZRhllcK+3GLuGJty1KbKKqrw9W7e2srHZqwjNtSf207r1qzbbVaFmVb7ABxuI9CqIeVG7r5q6ICIxAHYzxlu3r9yg6HJkVxy3V38rux5ZvidT9G6WYxfdzs/B/6Fl+N/4q3vVzBr7b6a5SurDM9/t5E+j87iX7M2UFbRPHdXW7c3lw9+3sH7i7ZjWvLlmIXZh0sCvh3AO0CrhpRbuTsRzABusF/fAEx38/6Vm4zuEc3/XTOK+/Ku5rLA99g4/AWiOnfjoqy3+SXgbnZ88TBpW7eTV1LOLR8t5T8/beGk2BBem7uVS19fxJaM/BOO4fW5WwE4kFfK5oyCE96eyxRm1h3or0OUlgiUW7msakhEPsdqGI4SkXTgUeAZ4AsRuQnYBYx31f6V553dJ5Z5fxlDVJAf/j4O4CbYtxrHT/9i4qavKP34G17zu5n5+SN54uK+XHdKAt+vP8DDU9dw/isLefLivowf0uW49r0ts4D/rdnHRamdmL5yLws2Z9EjJrh5D7C5FGXVTQSBkVoiUG7lshKBMeZqY0ycMcbHGBNvjHnXGJNtjDnDGJNiPx901f5VyxAfHmgnAVtcf/yv+YStl89mpUnh7tLXmXGhcP2wRESEs/vEMuueUQxODOfhqWtI23d8HavenLcNX4cXfz+/N12jO7Bgc2YzHVEzqyy3Gtqr2wjALhFoIlDuoz2LlUek9B1K3K1fIuFJ9F50DxQcbi7qGOzPq9cMIizQhwf+u5qKymNrM9ibU8zUFelcObQL0cF+jE6JZvG2bEorWmBHtyL7t1Bg5OFpgVF6cxrlVpoIlMckdYrFceXHUJID//0DVFbUzAvv4MsTF/VlzZ5c3lqw7Zi2+/aCbRgDE0db92QYlRJFSXkVy3Ycatb4m0XN8BJaIlCeo4lAeVZsXzj/BdixAH58wqomKSuEijLO7RvLef1ieemHzU43HmcVlPL5b7u4KLUz8eGBAAzrGomPQ5i/uQWeXGsPOFctMBLKi6CsyDMxqXanxfYjUO3IwGth92JY9JL1qBYcx//reg73+MTz0JRArh/RnaU7DrFkx0FEhKcu6cvAhMPDVe3NKeamD5dSUWn44xj7Dm0ZaXTwCWBQQjgLNmfy0LktbJjs6kbhwCOuGgKrVOCrnSmV62kiUC3Dec9DwnCrRFBVDhVlsH8VAesn86YpIiejAzP/O4zNXqcTnTCUbVlFjH/jF/5ydk9uGdWV1XtyueWjpZSUVfLODUPoHu4N3/8DfvkP+ARyfcoT3LEshuyCUiJb0s1zCuupGgqsNfBcmCYC5XqaCFTL4O1njW56pLIizNY5lCyezDXps7mucg6U9KC03xnM35zNztn5fLvYl7SCQMYGJHPbteeR7LcV3rgDsrfAwOth/2rOX3cfaxxXsXDzAC4aGO/+42tIURaIFwTUGohXexcrN9NEoFo230Ck14XE9roQSvJg/Vew4lP8lr/LmeKg3M+L0qIqzncUQxnw2VPWeqEJcP1X0O10K5l89SceXv85S+cehN7vgl8L6VNQmGm1CXjVaq4L1IHnlHtpIlCth38IDPq99QAE8AWKisqopAhH9mbI3GDdv3ngdYdP9r6BeI1/n6//E8aF2e9jXk5FTnsABk8Ab98TDuvZWRsQ4IFzjqP9oTCrbvsA1G0jUMoN9Koh1eqFBfriCAyDLkNh0PUw7I9H/+IXoWjYfYwrfYKC0BT49gF4dSikfV3vNtftzeWrFU3fRP67dft5fe5WXp+3lS3HM4xF4RG9igH8QsDLR0sEym00Eah248xeMewP6s2YA/ex+7yPwTcIJl8HPz1dc4/grIJSHp66mgv+vZB7Jq9kYSOXnOYUlfG3aWvpEROEn7cXb87beuxB1RpeIm1fHntyiq27xGlfAuVGmghUuxEZ5MekicPw9vbi4u8C2DBuOqReC/OeoXTKLbz1UxqnPzeXKUvTmTA8mfjwAJ7+No2qqiNGLq2qhAPr+OeMdeQUlfHilalcNTSBaSv2WCfyY2EPQb0ru4hLX/
uZ69751Rp9VXsXKzfSRKDala7RQUyaeCo+Di+ueW8FH0Tdz5ehN+K3fgr9f5rABZ3zmHX3KB65sDd/Obsn6/bmMWPV3sMbOLgd3j8PXh/Oaev+yj2jO9GnUyi32L2Y355/DL2gK8qgJJeqgEge/HI1lVWG7VmFfPrrTutOZVoiUG6iiUC1O8lRHZg0cRh+3l48NjONF0ov4uvuj3Oy91ae3nsz3SeNgm8f4sKAtYyNLeb5WWmUlFXA8o/hjZFUZaxnupzBhY7F3L5lImRsoHNYABcP7MykJbvILig9vLOqKutRH/vy0CWZDn7Zls0/L+rDyO5RvDxnM2W+4dpGoNxGWvQNO2xDhgwxS5cu9XQYqo3JyCthb24JA+JDrVtn5u2Fjd/Apu9g2zyotE7opcaHsoBogkv2cqjjMK7LuoEtZeHMGmdInneXNSTGwOvIKSrju1U76N/Rl15BRZCXbm2zQzRc9i4knlo3gP1r4I2R3FN1H1kJ5/DxTSeTti+f8/+9gMldpnFy7nfw8G4PfDKqrRCRZcaYIU0tp5ePqnarY4g/HUP8D08I6QRDb7YeZYWwdwVkb2HO3IX45++gqNPl3LntZHrGhvL11QNJjgmGXoPgqz/Cik8I8/bnbF8hL9ubXK8umIgBOLqeR4dts/D68AI45xlr2/b9mk1BJgJkE8LTl/ZDROjdKYQrBnfh51VwsiMPKkqtznZKuZAmAqXq49sBkkZC0kiSO13Kea8swGyDm0Ym85ezex6+x0JwLFw/rWa19D25XPraz5TtPlwd1Nl/KB+Hv0fXb+6HPcug3+Xs3ruP7Uu/ZTRw+ahUukQE1iz/57N68PrqUOtN0UEIiXPHEat2TBOBUk3oFRfCy1cNJKqDL8O7RzW6bN/Oocx/4HR2HSziUFEZOUVlzEnL4Iz1t3K/fxx/WjUJWfU5XYAuQLF3KBeOGlpnGx1D/BnaJwXSYO36NfQdVjcR5BSVcdsnyxicGM6tp3UjxN+nmY9YtTfaRqCUG6zdk8srczazLW0ZCQFl/G5wTy44pTfBYR3r7d1ckr4Gn3dGU4UXjr6X4HXKrRA/BER4ZPpaPl68E2MgLNCHO07vznXDEuveCa4eWQWl/PPr9RwsLOWiAZ05t18swceQRErKK5vch2pZnG0j0ESglBvtzy0hLNDHqRPqjwsXsWPWK1znvxDfigJIGsXG4c9z7vtbuX5YIuOHdOFf321k/qZMEiICeXt8V3qWpcHe5dDtDEg4pWZb8zdlct8Xq8grKadTqD87sovw8/bi7D6xPHBOz5p7NzTk33M289aCbcy8cySJkR1O+HNQ7qGJQKlWzhjDFW/+wv6MLL4fm47//P8jr9yLv8md/N/99xIa6AMFGez8/lUq1nxJN1PrCiOHL1z+HgVdz+Xfczbz5vxtpHQM4t/XDKRnTDArd+cwdfkepi5Px9/HwRvXD2ZoUkS9cWzYn8cFryykospwZq8Y3rmhnvNKeQn4+B89XXmUJgKl2oC1e3K58D8LuWlEMqeEZBP/wx/p5bUbTr3D6oew9kuoLKOsywi+yO7GjEOJnDNqOJdtfZjg7NX8vWoin5WN5tpTEvj7+b0J8K1bEtmSUcAtHy0l/VART17clyuH1r3/QWWV4bLXf2bXwSLGD47nzfnb+GDCUMb07Hh4oY2zYMqNcNWn0P2MOusXlFawI6uQnrHB+Di025K7aSJQqo148L+r+XJ5OqEBPnQNE75I+ApZYY+VlHotnDwRorpTUl7JQ1+u5quVewmghPf8X+JUVrN30H106jMKjN25zT8EwhIhKAa8vMgtKueOz5ezYHMW1w1L4MFzTqppO/hg0XYe+3o9L12Zyrn9YjnnpQWIwKy7R+Pr7WUNt/H6cGvU1/BkCm9eyP/SDrFwcxZr9+ayPasQYyClYxBPXNyXYV0jGz/YynJwaON3c9FEoFQbkZlfyunPz6WwrIKv/jSCAV3C4MB6CO0M/qF1ljXGMGvtfry8hNO7heI741ZYP73+DTv8ICIZRt5HRZ/Lefa7jbyzcDuRHfx47pRihm99kf+3tx8bk67lgwlDERF+2pDBhA+W8LfzelnDaqyaDNMmknXSdURt+IRXzeU8V3opsSH+9I8PpW/nUKKD/Xj1py2kHyrm4tRO3HNmD/x9HFRUVVFZZYgLDbCSyvznYO6z0HscnPJHazTZVqJ6PCovL/FwJHVpIlCqDflpQwb780q4+uRjvHVlVZXVMa6q3LoTmnhBcQ7k7IBDO2HHAmt+z/PhghdZnS1s+uJvXFI4hTJ8CJAycofeQ+h5j9V0hLvpgyX8uv0gL17em8Ffn0VWuR9nFz/Bf3xf42zHEjZc8h19+g2yemvbissqeW3uFt6ct42yyrpDbgT6Ovhb1HyuPfgqBdGD8D20Cd+KAtZ7pfBhwPX49TiDYV0jOSU5ot7bjBaXVbIlo4CUmKAmG+ErqwwZ+SXsyy1hf671nBARyJm9OtaJt97PccVHkDgColIoq6hi0ZYslu08xLKdh1iVnoO3lzAwIZxBCeEMTQ7n1K6RCMC6aVCSA4NuAC/3XnWliUAp1bSqSlj8Gsx5AnwDITgOMtazPeEy7si6hJcjptE9/UsYeguc+y/w8mJHViFnvTify5nNUz7v8nLM/xE35CLOS4agt4dbl7leN7UmcdS2I6uQBVuycIjgbf969lozict3Pcl3lUP4U/nd+FHOjYE/c6PXN0RW7OfBqjv4b9kwAGJC/EjpGEz3jkGIwPKdh1i3N4+KKkO36A68cEWqVWKyZeaX8t6i7axOzyH9UDF7c4oprzz6nDcwIYy/ndeLIXaD+cHCMhZuySK/pJxzeoYR+f1d1t3xguP4ecxk/v7TIbZlFeLwEnrHhTAoIYzSiiqW7zrE5owCjIFrEvN41PsD/PYstnbSZRhc/BpEdmvuv2KDNBEopZyXuQmm/8kqJYx7BXqea003BmY/Aj+/Ar0vgtMehJg+LErbzeDpY/GJTMJx8/eHT/q/vmnd9Of8/wf9xh9VdXWUDd9Y94RIGsHOcz5gU3YFveKC6RwWgJQVwGdXYXYuYteIp5nlexabDhRQuXcl5x76lGgOsSp0LAUpFxMd05mX52wmI7+0pl/Fuwu38+HPOyirrKJf51C6RAQSHx5Atw4lREbFEhsWSEyIPz+sP8D/m72RA3mlnNYjmkNFZazZk4sxEEoBb/u+wMleG9jcfQKdt05mV2Ukfwl+htvPHcLoHlEE+tbtl5ufvYc90x8nZedk8ghi64D7Gdw1Bvn2Qatk9rvHYchNdW9P6iKaCJRSx8YYq4Tg8D56+qKXrFKDqYSYvhCeBBtmwo3/s4biqFZZAe+cAftWWu+D4yD6JKtRu++lh6tGyopg7lPwy6vQaSD8fnr995EuK4Ivfg9bZsOoP0NGGmz8BuMXAmEJyIG14OUNKWdTMPAmHlkZwdSVe2vy0rgBnbj7jBS6RgfB/rUw+x+w9UfwDYaYPhDbD2L7UdyxP+9u8OeDX/eQFBHA2UkOTg/PoMtvj+PI2cEjXnfyWeEQzvDbwFteTyEJp
+J1/dS6nQFzdlsJc/lHUFlGft/ruevABfy0q5yTYoPpF1zALYdeoEfBEgoC48k96Uo6nHIjIdFdyM7YQ+GmebBrMd6BoYT2GEVw92FNJ9ImtOhEICLnAC8DDuAdY8wzjS2viUCpFqAwC9ZOhdWTYc9Sq9Pa9VOPXq40H3YstK4kytwE6b9B9haITIHTHrCuVpp5DxzcBoNvtH4hN3bCqyiDL2+CtBnWcsNuh1NuhQC70XzVZ1ajdWEGxA9lWeLNfF3Ul6tPSaRnpA/kpsOiF2HFp9b6J0+06uz3r7GSQ1m+tR+HH0SlWCPGFh+0pvmFwlWfUpEwgpW7c0iO6kDk1q9g2kTo/juI7mmNB1VwALbPs9YZcBWMuBeiulNZZfj4lx3M2ZBBVkEZmXklnFIyn2u85jDCsY4K48Vu05Fkr/0AFBtffCnHIYYqhHSfZBxXvE/nlNTj+pO12EQgIg5gE/A7IB1YAlxtjFnf0DqaCJRqYXJ2QUB4/b/ij1RVZZUe5j0LB9Za08KTrSqo5NHO7a+ywioVJA6vP2mUl8DKT2Dhy5C7y0o25cX8//buPUaqs4zj+PcHy22p7bKwNAgUFlO1pFFAbAErqW0DhaqoqQmJpmg11TYaL7EN2D+8JNoqxjQmxqbSNtKgrWJTsdJQgjWNjdxdWhAoa1crFQXLZatFbvv4x/suTGF3uLi7szvn90k2c86Zd86c98k7+8y5zHM40se9vvwAAAetSURBVJqe7zcgJYCZX4Hakh/OtbXBgZZ0wnxPE+zdkYr8jZwII6+AUZNSwjnd7++DNd9MlWFrh6dYjJuRft9RN7ZsV46faGP3gcPsadlG7dZlXNTaTOuIyRwfew1DGqdysLWVQ7vWUvPKehoONjH+s8upH16+xlVnenMimA58PSJm5/lFABFxT2evcSIwqwJtbbDzN+kb+pQF6eR0VztxLO2xtDwLQ+rTvZ+HNqSEU9/Yxe91/MzDaL1Mb74fwWig9G4bu4GrT28k6TbgNoDLLjvPS+bMrPfp1w+u+ED3vkf/ATD54+mvu/XyJHA+KvGb744u1j1jtyQiHoiIqRExtaGhoQc2y8ysmCqRCHaTSrG3GwP8vZO2ZmbWzSqRCDYAl0tqlDQQmA+sqMB2mJkZFThHEBHHJX0OWEW6fPShiNjW09thZmZJRc52RMRKYGUl3tvMzN7IBcLNzArOicDMrOCcCMzMCq5PFJ2TtA/46wW+fATwry7cnGrj+HTOsSnP8SmvN8RnXESc9YdYfSIR/D8kbTyXn1gXlePTOcemPMenvL4UHx8aMjMrOCcCM7OCK0IieKDSG9DLOT6dc2zKc3zK6zPxqfpzBGZmVl4R9gjMzKwMJwIzs4Kr6kQg6UZJOyU1S1pY6e3pCZLGSnpG0nZJ2yR9IS+vl7Ra0q78OCwvl6Qf5Bg9L2lKyboW5Pa7JC2oVJ+6mqT+kv4o6ck83yhpXe7nY7kqLpIG5fnm/Pz4knUsyst3SppdmZ50PUl1kpZL2pHH0HSPnVMkfSl/rrZK+pmkwVUxfiKiKv9IlU3/DEwABgJbgImV3q4e6PcoYEqefhPp/tATge8CC/PyhcB38vRc4CnSDYOmAevy8nrgpfw4LE8Pq3T/uihGXwZ+CjyZ538OzM/T9wO35+k7gPvz9HzgsTw9MY+nQUBjHmf9K92vLorNT4BP5+mBQJ3HzsnYjAZagCEl4+YT1TB+qnmP4CqgOSJeioijwKPAvApvU7eLiD0RsTlPvwZsJw3geaQPOfnxQ3l6HrA0krVAnaRRwGxgdUTsj4gDwGrgxh7sSreQNAa4CViS5wVcByzPTU6PTXvMlgPX5/bzgEcj4khEtADNpPHWp0m6GJgJPAgQEUcj4iAeO6VqgCGSaoBaYA9VMH6qORF0dG/k0RXalorIu6KTgXXApRGxB1KyAEbmZp3FqVrjdx9wF9CW54cDByPieJ4v7efJGOTnD+X21RqbCcA+4OF86GyJpKF47AAQEa8A3wNeJiWAQ8AmqmD8VHMiOKd7I1crSRcBvwS+GBGt5Zp2sCzKLO+zJL0f2BsRm0oXd9A0zvJc1cUmqwGmAD+KiMnAf0iHgjpTqPjkcyPzSIdz3gwMBeZ00LTPjZ9qTgSFvTeypAGkJLAsIh7Pi/+Zd9vJj3vz8s7iVI3xew/wQUl/IR0qvI60h1CXd/Xhjf08GYP8/CXAfqozNpD6tTsi1uX55aTE4LGT3AC0RMS+iDgGPA7MoArGTzUngkLeGzkfg3wQ2B4R3y95agXQfvXGAuBXJctvyVeATAMO5d3/VcAsScPyN6FZeVmfFRGLImJMRIwnjYffRsTHgGeAm3Oz02PTHrObc/vIy+fnq0IagcuB9T3UjW4TEf8A/ibpbXnR9cCf8Nhp9zIwTVJt/py1x6fvj59Kn4nvzj/SVQ0vks7K313p7emhPl9D2s18HmjKf3NJxybXALvyY31uL+CHOUYvAFNL1nUr6URWM/DJSveti+N0LaeuGppA+iA2A78ABuXlg/N8c35+Qsnr784x2wnMqXR/ujAuk4CNefw8Qbrqx2PnVL++AewAtgKPkK786fPjxyUmzMwKrpoPDZmZ2TlwIjAzKzgnAjOzgnMiMDMrOCcCM7OCcyKwwpF0QlKTpC2SNkuacZb2dZLuOIf1/k5Sn7hZuVkpJwIrosMRMSki3gksAu45S/s6UiVJs6rkRGBFdzFwAFJ9Jklr8l7CC5Laq9XeC7wl70Uszm3vym22SLq3ZH0flbRe0ouS3pvb9pe0WNKGXLf/M3n5KEnP5vVubW9v1tNqzt7ErOoMkdRE+uXnKFLNIYD/Ah+OiFZJI4C1klaQCq9dGRGTACTNIZUavjoiXpdUX7Lumoi4StJc4Guk+jSfIpVfeLekQcBzkp4GPgKsiohvSepPKmts1uOcCKyIDpf8U58OLJV0JalkwrclzSSVqR4NXNrB628AHo6I1wEiYn/Jc+1F/jYB4/P0LOAdktrr0VxCqi+zAXgoFwl8IiKauqh/ZufFicAKLSL+kL/9N5BqMjUA74qIY7lK6eAOXiY6Lxt8JD+e4NTnS8DnI+KMwms56dwEPCJpcUQsveDOmF0gnyOwQpP0dtJtTV8lfVPfm5PA+4BxudlrpNt+tnsauFVSbV5H6aGhjqwCbs/f/JH0VklDJY3L7/djUsXYKeVWYtZdvEdgRdR+jgDSt/UFEXFC0jLg15I2kqq27gCIiFclPSdpK/BURNwpaRKwUdJRYCXw1TLvt4R0mGhzLl+8j3SO4VrgTknHgH8Dt3R1R83OhauPmpkVnA8NmZkVnBOBmVnBORGYmRWcE4GZWcE5EZiZFZwTgZlZwTkRmJkV3P8AGzOgQUhlNNEAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "plt.plot(lstm_losses)\n", "plt.plot(gru_losses)\n", "\n", "plt.title('Loss Plots for Dataset 1; Trained on 1 Epoch')\n", "plt.xlabel('Batches')\n", "plt.xticks([0,20,40,60,80],[0,2000,4000,6000,8000])\n", "plt.ylabel('Loss per Batch, MSE')\n", "plt.legend(['LSTM', 'GRU'])" ] }, { "cell_type": "code", "execution_count": 170, "metadata": { "hidden": true }, "outputs": [], "source": [ "# Save the model weights to continue later\n", "torch.save(encoder_lstm.state_dict(), 'models/encoder2_lstm.pth')\n", "torch.save(decoder_lstm.state_dict(), 'models/decoder2_lstm.pth')" ] }, { "cell_type": "code", "execution_count": 171, "metadata": { "hidden": true }, "outputs": [], "source": [ "torch.save(encoder_gru.state_dict(), 'models/encoder2_gru.pth')\n", "torch.save(decoder_gru.state_dict(), 'models/decoder2_gru.pth')" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true }, "source": [ "## Part 4: Using the Model for Evaluation" ] }, { "cell_type": "code", "execution_count": 295, "metadata": { "hidden": true }, "outputs": [], "source": [ "# Build the idx to word dictionaries to convert predicted indices to words\n", "en_idx2word = {k:i for i, k in en_word2idx.items()}\n", "fr_idx2word = {k:i for i, k in fr_word2idx.items()}" ] }, { "cell_type": "code", "execution_count": 309, "metadata": { "hidden": true }, "outputs": [], "source": [ "def get_batch(dataloader):\n", " for batch in dataloader:\n", " return batch" ] }, { "cell_type": "code", "execution_count": 310, "metadata": { "hidden": true }, "outputs": [], "source": [ "def evaluate(input_tensor, encoder, decoder):\n", " with torch.no_grad():\n", " encoder_hidden = encoder.initHidden(1)\n", " encoder.eval()\n", " decoder.eval()\n", "\n", " encoder_output, encoder_hidden = encoder(input_tensor.to(device), encoder_hidden)\n", "\n", " decoder_input = torch.tensor([fr_word2idx['']]*input_tensor.shape[0], dtype=torch.long, device=device).unsqueeze(0)\n", " try:\n", " encoder.lstm\n", " decoder_hidden = (encoder_hidden[0][1::2].contiguous(), encoder_hidden[1][1::2].contiguous())\n", " except AttributeError:\n", " decoder_hidden = encoder_hidden[1::2].contiguous()\n", "\n", " output_list = []\n", " attn_weight_list = np.zeros((seq_length, seq_length))\n", " for di in range(seq_length):\n", " output, decoder_hidden, attn_weights = decoder(decoder_input,\n", " decoder_hidden,\n", " encoder_output)\n", "\n", " decoder_input = output.topk(1)[1].detach()\n", " output_list.append(output.topk(1)[1])\n", " word = en_idx2word[output.topk(1)[1].item()]\n", "\n", " attn_weight_list[di] += attn_weights[0,0,:].cpu().numpy()\n", " return output_list, attn_weight_list" ] }, { "cell_type": "code", "execution_count": 357, "metadata": { "hidden": true }, "outputs": [], "source": [ "batch = get_batch(dataloader)\n", "input_tensor = batch['french_tensor'][11].unsqueeze_(0)\n", "output_list, attn = evaluate(input_tensor, encoder_lstm, decoder_lstm)\n", "gru_output_list, gru_attn = evaluate(input_tensor, encoder_gru, decoder_gru)" ] }, { "cell_type": "code", "execution_count": 358, "metadata": { "hidden": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Input Sentence:\n", " elle déteste les poires , les fraises et les oranges . \n", "\n", "Target Sentence:\n", " she dislikes pears , strawberries , and oranges .\n", "\n", "LSTM model output:\n", " she dislikes pears , strawberries , and oranges . 
\n", "\n", "GRU model output:\n", " she dislikes pears , strawberries , and oranges . \n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/home/mac/anaconda3/envs/pytorch/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.\n", " warnings.warn(message, mplDeprecation, stacklevel=1)\n" ] }, { "data": { "text/plain": [ "[Text(0,0,'she'),\n", " Text(0,0,'dislikes'),\n", " Text(0,0,'pears'),\n", " Text(0,0,','),\n", " Text(0,0,'strawberries'),\n", " Text(0,0,','),\n", " Text(0,0,'and'),\n", " Text(0,0,'oranges'),\n", " Text(0,0,'.'),\n", " Text(0,0,'')]" ] }, "execution_count": 358, "metadata": {}, "output_type": "execute_result" }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAUYAAAFRCAYAAAAIBATTAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMi4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvhp/UCwAAIABJREFUeJzt3XmYXGWd9vHvnRAIJIQdBCRsSlhlC7LJiAOyqSM6DKggDg5kdBBERB0EF3BXZF7EkZeIsoorIi7IIqIsDpAQEhJAdIRElKAiWwgQknDPH+dpKA5JupKu6qrq3J/rqqurzjn11K+qq+9+zvrINhER8YJhnS4gIqLbJBgjImoSjBERNQnGiIiaBGNERE2CMSKiJsEYyx1Jn5R0SZPL/krS0e2uqZUkPSlps07X0csSjPEikmZK2ncx8z4q6f7yh/cnSd8t0+8q056UtFDSMw2PPyrpXyVZ0pm19g4u0y9YzOvtXeb/sDZ9+zL9V6151wPT8P4OrU3fW9KfatOaDuUmX/slwW17tO37WvUay6MEYzRF0ruAdwL72h4NjAeuA7C9TfljHA3cCLyv77Htz5Ym/gAcJmmFhmaPBH7Xz0v/DdhD0loN097VxPMG07uAR8rPGAISjNGsXYCrbf8BwPZDticuxfMfAqYD+wNIWhPYA/hxP897FvgR8LbyvOHAocC3GheStIekSZIeLz/3aJi3qaRfS5oj6Vpg7dpzd5P0G0mPSZomae9m35SkjYHXAhOA/SWtV6aPAn4ObNDQe34H8FGqfxBPSppWll1N0jckzZb0Z0mfLu+zrzd6k6QzJD1aeuwHlnmfAfYCvlra+2qZbkmvaGj7Ikl/kzRL0qmShvXX9vIuwRjNugU4UtKHJI3v+8NdShdR9RKhCrorgHlL+bz9gbuAB/tmlpD9GfAVYC3gTOBnDb3MS4HbqQLxUzT07CRtWJ77aWBN4CTgMknrNPmejgQm274MuAc4HMD2XOBA4MGG3vOlwGeB75bH25c2LgQWAK8AdgT2AxpXj3cF7i31fxH4hiTZPoUX99Dft4j6zgZWAzajCvAjgaP6a7vJ9z5kJRijKbYvAY6jCqZfA3+V9J9L2czlwN6SVqP6A72oydf+DbCmpHGLed4bgN/bvtj2AtvfBn4LvEnSWKre7sdsz7N9A/CThuceAVxp+0rbz9m+FpgMHNTkezqSKngpP5dqdbr0MA8ETrA91/Zfgf+i9JCLWba/bnshVYiuD6zXRNvDgcOAk23PsT0T+DLVJpEBtT3UJRijaba/ZXtfYHXgPcDpkvZfiuc/TdU7OxVY2/bNS/HyFwPvA15HFbCNNgBm1abNAjYs8x4tPbjGeX02Bv6lrEY/Jukx4DVUAbFEkvYENgW+UyZdCmwnaYfm3tLzrz8CmN3w+ucC6zYs81DfHdtPlbujm2h7bWBFXvx++z6XgbY9pK3Q/yIRL2Z7PvB9SR8BtgWuXoqnXwT8EjhtKV/2YuB/gYtsP1Vb23uQKmAajQWuAmYDa0ga1RCOY4G+y0o9AFxs+5ilrAeq3qGAqbV6jgSmNrxGo/q0B6g2J6xte8Ey1LCky2M9DMyn+mzuLtPGAn9ehtdZrqTHGIsyQtLIhtsKZUP9GyStKmlY2Ui/DXDrUrb9a+D1VNu+mmb7fqptZKcsYvaVwBaS3lFqPQzYGvip7VlUq8anSVpR0muANzU89xKqVe79JQ0v73dvSS9fUj2SRlLtBJoA7NBwOw44vOx9/wuwVtl00OcvwCZ9O0BszwauAb4saUz5bDeX9NomP5q/UG0/fImyevw94DPl97YxcGJ5z7EECcZYlCuBpxtunwSeoNqj+kfgMaoN9e+1fdPSNOzKdbYfWdqibN9k+8FFTP878Ebgg8DfgQ8Db7T9cFnkHVQ7GR4BPkHDNkrbDwBvLu/tb1Q9uA/R/9/GwVSfzUVlD/1Dth8CvgEMBw6w/Vvg28B9ZTV5A+D75fl/lzSl3D+SapX3buBR4Ac0sSpfnAUcUvYqf2UR848D5gL3ATdRre5/s8m2l1vKhWojIl4sPcaIiJoEY0RETYIxIqImwRgRUZNgjIioSTBGRNQkGCMiahKMERE1CcaIiJoEY0RETYIxIqImwRgRUZNgjIioSTBGRNQkGCMiahKMERE1CcaIiJoEY0RETYIxIqImwRgRUZNgjIioSTBGRNQkGCMiahKMERE1CcaIiJoEY0RETYIxIqImwRgRUZNgjIioSTBGRNQkGCMiahKMERE1CcaIiJoEY0RETYIxIqImwRgRUZNgjIioSTBGRNQkGCMiahKMERE1CcaIiJoEY0RETYIxIqImwRgRUZNgjIioSTBGRNQkGCOibSS9X9IYVb4haYqk/TpdV38SjBHRTu+2/QSwH7AOcBTw+c6W1L8EYw+RtLKkcZ2uI2IpqPw8CDjf9rSGaV0rwdgjJL0JmApcVR7vIOnHna0qol+3S7qGKhivlrQq8FyHa+qXbHe6hmiCpNuBfwR+ZXvHMu1O26/qbGURiydpG
LADcJ/txyStBWxo+84Ol7ZE6TH2jgW2H+90ERFLycDWwPHl8ShgZOfKaU6CsXfMkPQOYLikV0o6G/hNp4uK6MfXgN2Bt5fHc4D/7lw5zUkw9o7jgG2AecClwOPA+ztaUSxWrx6m0ga72j4WeAbA9qPAip0tqX8Jxt7xBtun2N6l3E4F/qnTRcVi9eRhKm0wX9JwqlVqJK1DD+x8STD2jpObnNZ1JA2TNKbTdQyynjxMpQ2+AlwOrCvpM8BNwGc7W1L/Vuh0AbFkkg6k+uPaUNJXGmaNARZ0pqr+SboUeA+wELgdWE3Smba/1NnKBk3fYSqbAif3ymEqrWb7W+WIin2o/jEcbPueDpfVrxyu0+UkbU91uMPpwMcbZs0Bri/bbLqOpKm2d5B0OLAz8BHg9uXl8KJePUyl1SStuYjJc2zPH/RilkJ6jF2urIJNk3Rp35dJ0hrARt0aisUISSOAg4Gv2p4vaXn6L9x3mMobqf6p9cRhKm0wBdgIeJSqx7g6MFvSX4FjbN/eyeIWJ9sYe8e1ZS/nmsA04HxJZ3a6qCU4F5hJFQg3SNoYeKKjFQ2unjxMpQ2uAg6yvbbttYADge8B/0H1GXWlrEr3CEl32N5R0tFUvcVP9NqZL5JWsN2120VbSdIU2zv1/d7KtGm2t+90bYNJ0mTb4xc1rW9zS6dqW5L0GHvHCpLWBw4FftrpYvojab1y/N7Py+OtgXd1uKzB1JOHqbTBI5I+Imnjcvsw8Gj5bLr280gw9o7TgauBP9ieJGkz4PcdrmlJLqCqd4Py+HfACR2rZvD15GEqbfAO4OXAj4ArgLFl2nCqf/JdKavS0RaSJtnepbYq2bWrTu0gaUteOEzlul44TCUq6TH2CElbSLpO0ozy+FWSTu10XUswtxyi0rcquRvVaYzLBUmbA/fb/m9gBvB6Sat3uKxBV763EyVdI+mXfbdO19Wf9Bh7hKRfAx8Czm3ogc2wvW1nK1s0STsBZwPbUgXDOsAhy8txfJKmAuOBTaj2zP4EGGf7oE7WNdgkTQP+P9VB/gv7pnfrYTp9chxj71jF9m3Si84q68o9vOXg5pHAa4FxVKuS93b7Qb0t9pztBZLeCpxl+2xJd3S6qA5YYPucThextLIq3TseLqtnfaumhwCzO1vSotl+Dviy7QW277I9YzkLRaj2Sr8dOJIXjiIY0cF6OuUnkv5D0vqS1uy7dbqo/mRVug0kvQZ4pe3zy2Eao23fP8A2NwMmAntQnUVwP3C47VkDLrgNJJ0G3An80Mvhl6wcnvQe4H9sf1vSpsBhtperK+xIWtT33rY3G/RilkKCscUkfYJq29I421tI2gD4vu09B9juprbvlzQKGGZ7Tt+0VtTdapLmUJ31soDqWnyi+oNY3q6yEz0owdhiZaP7jsCUVo7N0ncmRW3a7bZ3Hki70VqSvmf7UEnTKZs9GvXSmUqtImlbqvPGnz9X3PZFnauof9n50nrP2nbfBRNKD2+ZlWPhtqG6bNdbG2aNoQsvSiBpS9u/LXulX8L2lMGuaZD1XVX9jR2tokuUNai9qYLxSqpzpW8CEozLme9JOhdYXdIxwLuBrw+gvXFUf2SrA29qmD4HOGYA7QIg6V+Aq8qq+anATsCnBxBgJwITgC8vYp6pRjrsKq38DGzPLj+7ctvv4rThe9DnEGB74A7bR0laDzhvgG22n+3cWnwDXg98CTgDeH2L2ty9TbXeWX6+BrgReDNwa6c/w0H+fbX8MwB2AyYBTwLPUh3D90Sn3+tgfw+A28rP26nWcgTc1en3298th+u0ge1rbX/I9km2r21Rs39v05kvfQfdvgE4x/YVtGCwIkkjJB0v6Qfl9r5yfcZu1I7P4KtUlxz7PbAycDTVAe/dqi3fA2ByOePn61ThOAW4rQXttlV2vrRI2Qu7qA9zmffGSnoP8CtX2+zacuaLpJ8Cfwb2pbrS9tNU/+UHdHksSedRHbd3YZn0TmCh7aMH0m47tOMzaLi01vM73iT9xvYeLSm6xdr0GQh4ue0HyuNNgDHuhbOfOt1lzW3xN2A0cGG5P6n8vKNh/tQWvMYqwFupjrsEWB/YrwXtTmtmWjfc2vEZADdQ9bguAr4IfKAV7x+4uJlp3fAZlHZu7/Tvd1luWZVukcaj+hd1W5Y2bT9JtQoGbTrzxfZTwF+pti1BddxhKy5ntrDUCzx/gPrCJSzfMW36DN5JdWbZ+4C5VJf3/+cBtgnVEQrPk7QCVQ9vQNr4PbhF0i4taGdQZVW6RcoR/qasOvdNLj/tAR7p364zX9p4QPo+wPnAfWXSJsBRtq8fSLvt0OrPoFyE9ULbR7SwxpOBj1Jtr3yqbzLVjp2Jtgc0lG4bvwd3Ux1ZMZPqH0TfpqWuPp4zwdhi5QIKhwOb2j5d0lhgfdu3LmN7J9YmrUzVE5kLYHtA47608YD0kcAHqa5HCHAt8F+2nxlIu+3Qjs9A0tXAm2w/26Iy+9r9IjAd2Mz2aeX79TLbA9qh0cbvwcbAGsBeZdINwGMD/YfeblmVbr3/pjpUo3EQpK8OoL1Vy2088F6qL9nqVOfhbj2Advs86+q/Y0sOSG9wEdWYyp8qt02Bi1vUdqu14zOYCdws6WOSTuy7taDdMVTfr7eVx60aZKtd34ODqX7va1Ndeu5i4J9a1Hbb5ADv1tvVZRAkANuPSlrmwx5snwagavD2nWzPKY8/CXy/BfW2+oD0PuP84j2a16u6Nl83atlnIOli2+8EDgP+i6rzsWrLKoVXt/L71aBd34N/A3azPRdA0heA/6G7D11KMLZBuwZBGku1PanPs1Tb7QbE9hmSXk81tOk44ONuzbGXd0jazfYtAJJ2BW5uQbvPUzU42CO25w2knRZ/BjuX1cc/0p4//rZ8v9r4PRAv3um2kBe2vXetbGNsMUmHU/UWdqI6hu8Q4FTbA+rdSTqFavCgy6n+KN4CfNf25wZWcXtIuofqD+yPZdJY4B6qP+KWbHyX9Atgc+Ay2ycNtL1WkHQ81SaPTYEHG2fRmp1wbfl+tUvZfPAuqu8tVKvWF9j+f214rZfZfqglbSUYW09tGgSpXJjh+Y3Ytpf5itDtOCC91v7GS5rfqo3v5SDirW3ftQzPbdtnIOkc2+9d1uf303bLvl/t/h6U19iJ6jAgMcDvbT+v8zPbb2hJWwnGiIgXy17piIiaBGNERE2CsY0kTUi77Wm3l2rttXZ7qdZ2tZtgbK+2fBHSbtvaTLvta7On2k0wRkTUZK/0Mhix4iiPXGWNfpeb/+xcRqzY/JlVeuLpppab72cYoaUY7qXJ3/F85jGClZpvt0ntaLeXau21dnup1qVt9xnm8qzn9XuAec58WQYjV1mDHfc6vvXtXtOeM+Y8v6XXMYhGatNJHOmwtMWtvq6p5bIqHRFRk2CMiKhJMEZE1CQYIyJqEowRETXLRTBKmilp7U7XERG9YbkIxoiIpTHkglHSKEk/kzRN0gxJh5VZx0maIml6uZ5d37LflDRJ0h2S3tzB0iOiSwy5YAQOAB60vb3tbYGryvSHbe8EnAP0Xe35FOCXtncB
Xgd8qYWDAEVEjxqKwTgd2FfSFyTtZfvxMv2H5eftvDBWyn7Af5ahI38FjKS6BP9LSJogabKkyfOfndu24iOi84bcKYG2fydpZ+Ag4HNldD2AvgGTFvLC+xbwz7bvbaLdiVQD3rPq6i/P+VoRQ9iQ6zFK2gB4yvYlwBlUgwYtztVU2x5VnrvjIJQYEV1uyAUjsB1wW1k9PgX49BKW/RQwArhT0ozyOCKWc0NxVfpqqp5go00a5k8G9i73nwb+fbBqi4jeMBR7jBERA5JgjIioSTBGRNQkGCMiahKMERE1Q26v9GAYsf48XvbRP7S83Tl3vazlbQIsmPnHtrQbZGyWISo9xoiImgRjRERNgjEioibBGBFRk2CMiKhJMEZE1CQYIyJquvI4RkmfBJ4ExgA32P7FkpazfYak0/uWlTQTGG/74UEqOSKGkK4Mxj62P96OZSMilqRrVqUlnSLpXkm/AMaVaRdIOqTc/7ykuyXdKemMRTz/+WUbpq0s6SpJx5THR0i6TdJUSedKGl5uF5QRBadL+sAgvN2I6GJd0WMsY7S8DdiRqqYpVINW9c1fE3gLsKVtS1q9iWZHA98BLrJ9kaStgMOAPW3Pl/Q14HDgLmDDMqIgi2tb0gRgAsAq641etjcaET2hW3qMewGX237K9hPAj2vznwCeAc6T9FbgqSbavAI43/ZF5fE+wM7ApDLswT7AZsB9wGaSzpZ0QHmtl7A90fZ42+NXWmPk0r6/iOgh3RKMAIs9G9/2AuDVwGXAwbwwVvSS3Awc2DfQFdWIgBfa3qHcxtn+pO1Hge2phk89FjhvAO8hIoaAbgnGG4C3lG2CqwJvapwpaTSwmu0rgROAHZpo8+PA34GvlcfXAYdIWre0uaakjSWtDQyzfRnwMZY8qmBELAe6Yhuj7SmSvgtMBWYBN9YWWRW4QtJIqp5fsztITgC+KemLtj8s6VTgGknDgPlUPcSngfPLNICTB/h2IqLHybme3FJbc6t1vO8339ryducc3p6dOrkeY0TlVl/HE35E/S3XLavSERFdI8EYEVGTYIyIqEkwRkTUJBgjImq64nCdXrPZinP4zqa/bHm7+89s5vDMiGi39BgjImoSjBERNQnGiIiaBGNERE2CMSKiJsEYEVGTYIyIqFnug1FSjuWMiBfpqVCQtAnV1btvpRof5nfAkcBWwJlU47w8DPyr7dllEKwJwIrA/wLvtP2UpAuAR0obUyT9GDirvIyBf7A9Z5DeVkR0mV7sMY4DJtp+FdX4LMcCZwOH2N4Z+CbwmbLsD23vYnt74B7g3xra2QLY1/YHgZOAY23vQDX+zNOD81Yiohv1VI+xeMD2zeX+JcBHgW2Ba8vwLsOB2WX+tpI+DaxO1Zu8uqGd79teWO7fDJwp6VtUYfqn+os2jhI4dsNe/Ngiolm92GOsX3J8DnBXwyBX29ner8y7AHif7e2A04DG4f3mPt+g/XngaGBl4BZJW77kRRtGCVxnreEtfDsR0W16MRjHStq93H87cAuwTt80SSMkbVPmrwrMljSCagzpRZK0ue3ptr8ATAZeEowRsfzoxWC8B3iXpDuBNSnbF4EvSJpGNaDWHmXZj1HtqLkW+O0S2jxB0ozy/KeBn7er+Ijofr24sew52++pTZsK/EN9QdvnAOcsYvq/1h4f18oCI6K39WKPMSKirXqqx2h7JtUe6IiItkmPMSKiJsEYEVGTYIyIqOmpbYzd4p4/rcPuJ9V3jA/cGG5peZvRZtXZVq3n+nkMMZjSY4yIqEkwRkTUJBgjImoSjBERNQnGiIiaBGNERE2CMSKiJsEYEVGTYIyIqEkwRkTUJBgjImoSjE2SNEHSZEmTFzwzt/8nRETPSjA2qXGUwBVGjup0ORHRRgnGGknXSdqw03VEROckGBtIGga8Anik07VEROckGF9sa+Ay2093upCI6JxcqLaB7RnAiZ2uIyI6Kz3GiIiaBGNERE2CMSKiJsEYEVGTnS/LYMEo+MturR/FbcylLW8y2mz4amPa0u7Cxx5vS7ttMQRHSkyPMSKiJsEYEVGTYIyIqEkwRkTUJBgjImoSjBERNQnGiIiaZQpGSSdIWqXVxUi6QNIhLW7zSkmrt7LNiBjalrXHeAKwyGCUNHzZy1l29ddVZZjtg2w/1omaIqI39RuMkkZJ+pmkaZJmSPoEsAFwvaTryzJPSjpd0q3A7pI+LmlSWX5iCal1Jd1elt9ekiWNLY//0NAD3VfSjZJ+J+mNZf5wSV8qbd4p6d/L9L0lXS/pUmC6pE0k3SPpa8AUYCNJMyWtXZY/QtJtkqZKOre0O7z0VGdImi7pAy39hCOi5zRzSuABwIO23wAgaTXgKOB1th8uy4wCZtj+eFnmbtunl/sXA2+0/RNJIyWNAfYCJgN7SboJ+Kvtp1SdWrQJ8Fpgc6rwfQVwJPC47V0krQTcLOma8tqvBra1fb+kTYBxwFG2/6O8PuXnVsBhwJ6255fwPBy4C9jQ9rZluax2RyznmlmVnk7Vi/uCpL1sL+okzoXAZQ2PXyfpVknTgX8EtinTfwPsCfwD8Nnycy/gxobnfs/2c7Z/D9wHbAnsBxwpaSpwK7AW8Mqy/G227294/izbtyyixn2AnYFJpZ19gM3Ka2wm6WxJBwBPLOpDaBwlcOGTTy5qkYgYIvrtMdr+naSdgYOAzzX01Bo9Y3shgKSRwNeA8bYfkPRJYGRZ7kaqINwYuAL4CGDgp40vWS8BEHCc7asbZ0jaG6iPZbq4sU0FXGj75JfMkLYH9geOBQ4F3l1fxvZEYCLASmM36tzZ7RHRds1sY9wAeMr2JcAZwE7AHGDVxTylLwQfljQaaNzLfANwBPB7289RDTp1EHBzwzL/ImmYpM2penT3AlcD75U0otS0haSlHcP0OuAQSeuWNtaUtHHZ/jjM9mXAx8r7i4jlWDPbGLcDviTpOWA+8F5gd+Dnkmbbfl3jwrYfk/R1qlXwmcCkhnkzyza/G8qkm4CX2360oYl7gV8D6wHvsf2MpPOotj1OUdXA34CDl+aN2r5b0qnANWU0wPlUPcSngfPLNICX9CgjYvkid/CaZ71qpbEbef0Pv7/l7b7y+Ftb3ma01/DVV2tLu7keI225HuOtvo4n/Ei/BefMl4iImgRjRERNgjEioibBGBFRk2CMiKjJKIHLYORD89jqc7Na3u7ClVZqeZsAnjevLe0G+Jl8tsPa9L197pln2tJuM9JjjIioSTBGRNQkGCMiahKMERE1CcaIiJoEY0RETYIxIqImwRgRUZNgjIioSTBGRNTklMAmSZoATAAYOXx0h6uJiHZKj7FJtifaHm97/IrDVu50ORHRRgnGiIiaBGONpOskbdjpOiKicxKMDcpIga+gGtY1IpZTCcYX2xq4zPbTnS4kIjone6Ub2J4BnNjpOiKis9JjjIioSTBGRNQkGCMiahKMERE12fmyDBaOWpE5u45tebsr/+ihlrcZ7TVsvXXa0u5zsx5oS7vt8NuztmtLu1v8+6S2tNuM9BgjImoSjBERNQnGiIiaBGNERE2CMSKiJsEYEVG
TYGwg6clO1xARnZdgjIioGXLBKOlHkm6XdFcZpwVJT0r6jKRpkm6RtF6Zvqmk/5E0SdKnOlt5RHSLIReMwLtt7wyMB46XtBYwCrjF9vbADcAxZdmzgHNs7wLktJOIAIZmMB4vaRpwC7AR8ErgWeCnZf7twCbl/p7At8v9i5fUqKQJkiZLmrxg3tyWFx0R3WNIBaOkvYF9gd1L7/AOYCQw37bLYgt58TnipgmNowSusNKoFlYdEd1mSAUjsBrwqO2nJG0J7NbP8jcDbyv3D29rZRHRM4ZaMF4FrCDpTuBTVKvTS/J+4FhJk6hCNSJiaF12zPY84MBFzBrdsMwPgB+U+/cDuzcs9/m2FhgRPWGo9RgjIgYswRgRUZNgjIioSTBGRNQkGCMiahKMERE1euGEkGjWGK3pXbVPp8uIiKV0q6/jCT+i/pZLjzEioibBGBFRk2CMiKhJMEZE1CQYIyJqEowRETUJxoiImq4NRlW6tr6IGLo6GjySTpQ0o9xOkLSJpHskfQ2YAmwk6Zwy1spdkk5reO5MSadJmiJperliN5LWkXRtmX6upFmS1i7zjpB0m6SpZd7wcrug1DBd0gc682lERLfoWDBK2hk4CtiVagiCY4A1gHHARbZ3tD0LOMX2eOBVwGslvaqhmYdt7wScA5xUpn0C+GWZfjkwtrzeVsBhwJ62d6Aa++VwYAdgQ9vb2t4OOL+d7zsiul8ne4yvAS63Pdf2k8APgb2AWbYbhyQ4VNIUqoGttgG2bpj3w/KzceS/1wDfAbB9FfBomb4PsDMwSdLU8ngz4D5gM0lnSzoAeGJRxTaOEjifeQN42xHR7To5tMHizld8fmxSSZtS9QR3sf2opAuoRv3r05dQjSP/La5dARfaPvklM6Ttgf2BY4FDgXfXl7E9EZgI1bnSi3mNiBgCOtljvAE4WNIqkkYBbwFurC0zhiooH5e0Hosez6XuJqpwQ9J+VKvnANcBh0hat8xbU9LGZfvjMNuXAR8Ddhrg+4qIHtexHqPtKaUHeFuZdB4vrPb2LTNN0h3AXVSrvDc30fRpwLclHQb8GpgNzLH9sKRTgWvK3u75VD3Ep4HzG/aAv6RHGRHLlyF32TFJKwELbS+QtDtwTtnZ0jK57FhEb2r2smNDavjUYizwvdIDfJZqb3dERNOGXDDa/j2wY6friIjelTNLIiJqEowRETUJxoiImiG3jTEiBtdlf7ql/4WWwT+/fLe2tNuM9BgjImoSjBERNQnGiIiaBGNERE2CMSKiJsEYEVGTYIyIqEkwRkTUJBgjImoSjBERNQnGiIianCvdJEkTgAkAI1mlw9VERDulx9gk2xNtj7c9fgQrdbqciGijBGNERE2CsYGkKyVt0Ok6IqKzso2xge2DOl1DRHReeowRETUJxoiImgRjRERNgjEioibBGBFRk73SETEgo4eN7HQJLZceY0RETYIxIqImwRgRUZNE+EzkAAACIUlEQVRgjIioSTBGRNQkGCMiahKMERE1QzYYJb1d0imdriMies+QCUZJK0oa1TDpAOCqJpeNiHhezwejpK0kfRm4F9iiTBOwAzBF0mslTS23OyStCqwB3CXpXEm7dK76iOhGPXlKYOntHQr8GyDgfOBVtueURXYEptm2pJOAY23fLGk08IztOZLGAW8BPiNpndLGJbYfWcxrZjCsiOVEr/YYZ1OF4tG297R9XkMoQrUa/fNy/2bgTEnHA6vbXgBge57t79jeD3gzsC/w4OKGNshgWBHLj14NxkOAPwOXS/q4pI1r8/cDrgGw/XngaGBl4BZJW/YtJGldSR8EfgIMB94B/GUQ6o+ILtaTq9K2rwGukbQWcARwhaSHqQLwUWAF238HkLS57enAdEm7A1tKmg1cCGwJXAIcZPvPnXgvEdF9ejIY+5TwOws4S9KrgYXA64FfNCx2gqTXlXl3U61ijwS+Alxv24NbdUR0u54Oxka2bwOQ9AngvIbpxy1i8XnALweptIjoMUMmGPvYPrrTNUREb+vVnS8REW2TYIyIqEkwRkTUJBgjImqUo1WWnqS/AbOaWHRt4OE2lJB2e6vWXmu3l2pd2nY3tr1OfwslGNtI0mTb49Nu69vtpVp7rd1eqrVd7WZVOiKiJsEYEVGTYGyviWm3be32Uq291m4v1dqWdrONMSKiJj3GiIiaBGNERE2CMSKiJsEYEVGTYIyIqPk/04lXkmIK3DwAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAUYAAAFRCAYAAAAIBATTAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMi4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvhp/UCwAAIABJREFUeJzt3Xm4HGWdxfHvSUgIW8QQREAhgBo2JUBQEFEUREUd0WFABFEUEGVUREYHwQWXcUNmGEaRyMjmuDCi484ioijKEgKBAG7DIkoYjGwhCVnP/FHVoVPc5HZyq9Ld957P8/ST21XVv3670vfct9ZXtomIiCeM6nYDIiJ6TYIxIqIiwRgRUZFgjIioSDBGRFQkGCMiKhKMMexJmiTJktbpYNm3SvrV2mhXXSR9WdKHu92O4STBGCsl6Y2SrpM0T9ID5c/vkqRy/vmSFkl6TNKDkq6QtH3b6z8m6WsD1LWkZ63kPe8ua06sTL+5fN2kej/l6pO0QfmZfzzAvLsl7d/2vONQ7vC9nxTcto+z/Yk66kchwRgDkvR+4Ezg88DTgc2A44C9gbFti37O9obAlsBfgP+s4e3vAg5ra8tzgfVqqFuXg4GFwAGSNu92Y6J+CcZ4EklPAT4OvMv2t23PdeEm24fbXlh9je0FwMXAlBqacBFwZNvztwAXVtso6UJJf5V0j6RTJY0q542WdLqkOZLuBF49wGv/U9JsSX+R9ElJo1ejfW8BvgzcAhzeVvciYCvgB2WP8gPA1eXsh8tpe5XLvk3SHZIeknSZpK3b6ljScZL+UM7/ogo7lO+7V1nr4XL58yV9su31x0j6Y9mL/76kLQarvRqffURIMMZA9gLWBb7X6QskbUDRy/tjDe9/LTBe0g5lYB0KVDfJzwKeAmwLvIQiSI8q5x0DvAbYFZhK0cNrdwGwBHhWucwBwNGdNEzSVsC+wH+Vj+UBbvvNwJ+A19re0PbngBeXszcup/1G0kHAh4A3AJsCvwS+UXmr1wB7ALsAhwCvsH0HRa/9N2WtjQdo38uAT5ev2Ry4B/jmYLU7+ewjSYIxBjIRmGN7SWuCpF9LeljSAkkvblv2pLLnMhd4EfDmmtrQ6jW+HPgtxWZ6qy2tsDy57M3eDXyh7b0PAf7N9r22H6QIitZrNwNeBZxge57tB4B/Bd7YYbuOBG6xfTtFmO0kadfV/GzvAD5t+45yHf8LMKW91wh8xvbDtv8EXEXnPfHDga/anlH27E+m6GFOqqH2iJFgjIH8DZjYfsDA9gvLHsrfWPF7c3o5fRKwAJjcNm8JMKa9sKTW88WDtOEi4E3AW6lsRlME91iK3lDLPRT7OQG2AO6tzGvZumzT7DLoHwbOAZ42SHtajqToKWL7PuAXFJvWq2Nr4My2938QUFv7Ae5v+3k+sGGHtbeg7fPafozi/6yO2iNGgjEG8huKgwuv6/QFZe/jvRS/8K0DJX+iCMx22wBLaesBrqTePRQHYQ4EvlOZPYciWNt7WFu11ZwNPLMyr+Veis820fbG5WO87Z1W1R4ASS8Eng2cLOl+SfcDLwAOa/sjUr1d1UC3r7oXeEfb+29sez3bvx6sDSup1+4+2tZLuYtjEwZZ37GiBGM8ie2HgdOAL0k6WNKGkkZJmgJssIrXXUHxi3lsOelSYLKkN0saI2kCxWbjt9s301fh7cDLbM+rvM9SigM9n5K0UbkJeiJP7Ie8GHiPpGdIeirwz22vnQ1cDnxB0vjyc20n6SUdtOctwBXAjhSbn1OAnYH1KTbPAf6PYr9ny1+BZZVpX6YI151g+cGgf+jg/Vv1nyFp7Ermfx04StIUSetSrO/ryt0N0aEEYwyoPHBwIvAB4AGKX8hzgA8Cq+rZfB74gKR1y/13B1LsU3sAmAU8Aryzwzb8r+3pK5n9bmAecCfwK4pA+Go57yvAZcBMYAZP7nEeSbEpfjvwEPBtigMVKyVpHMW+y7Ns39/2uItis7+1Of1p4NRyM/kk2/OBTwHXlNP2tP1d4LPANyU9SrFeXvXkdx3Qz4DbgPslzanOtH0l8GHgEoqe83Z0vv80SsqNaiMiVpQeY0RERYIxIqIiwRgRUZFgjIioSDBGRFQkGCMiKhKMEREVCcaIiIoEY0RERYIxIqIiwRgRUZFgjIioSDBGRFQkGCMiKhKMEREVCcaIiIoEY0RERYIxIqIiwRgRUZFgjIioSDBGRFQkGCMiKhKMEREVCcaIiIoEY0RERYIxIqIiwRgRUZFgjIioSDBGRFQkGCMiKhKMEREVCcaIiIoEY0RERYIxIqIiwRgRUZFgjIioSDBGRFQkGCMiKhKMEREVCcaIiIoEY0RERYIxIqIiwRgRUZFgjIioSDBGRFQkGCMiKhKMEdEYSe+VNF6F/5Q0Q9IB3W7XYBKMEdGkt9l+FDgA2BQ4CvhMd5s0uARjH5G0nqTJ3W5HxGpQ+e+BwHm2Z7ZN61kJxj4h6bXAzcCl5fMpkr7f3VZFDOpGSZdTBONlkjYClnW5TYOS7W63ITog6UbgZcDPbe9aTrvF9vO627KIlZM0CpgC3Gn7YUmbAFvavqXLTVul9Bj7xxLbj3S7ERGrycCOwHvK5xsA47rXnM4kGPvHLElvAkZLeraks4Bfd7tREYP4ErAXcFj5fC7wxe41pzMJxv7xbmAnYCHwdeAR4L1dbVGsVL+eptKAF9g+HngcwPZDwNjuNmlwCcb+8Wrbp9jeo3ycCvxdtxsVK9WXp6k0YLGk0RSb1EjalD44+JJg7B8ndzit50gaJWl8t9uxlvXlaSoN+Hfgu8DTJH0K+BXwL91t0uDW6XYDYtUkvYril2tLSf/eNms8sKQ7rRqcpK8DxwFLgRuBp0g6w/bnu9uytaZ1mso2wMn9cppK3Wz/V3lGxX4UfxgOsn1Hl5s1qJyu0+Mk7UJxusPHgY+0zZoLXFXus+k5km62PUXS4cDuwAeBG0fK6UX9eppK3SRNGGDyXNuL13pjVkN6jD2u3ASbKenrrS+TpKcCz+zVUCyNkTQGOAj4D9uLJY2kv8Kt01ReQ/FHrS9OU2nADOCZwEMUPcaNgdmSHgCOsX1jNxu3MtnH2D+uKI9yTgBmAudJOqPbjVqFc4C7KQLhaklbA492tUVrV1+eptKAS4EDbU+0vQnwKuBi4F0U66gnZVO6T0i6yfauko6m6C1+tN+ufJG0ju2e3S9aJ0kzbO/W+n8rp820vUu327Y2SZpue+pA01q7W7rVtlVJj7F/rCNpc+AQ4IfdbsxgJG1Wnr/3k/L5jsBbutystakvT1NpwIOSPihp6/LxAeChct307PpIMPaPjwOXAf9r+wZJ2wJ/6HKbVuV8ivZuUT7/PXBC11qz9vXlaSoNeBPwDOB/gO8BW5XTRlP8ke9J2ZSORki6wfYelU3Jnt10aoKk7XniNJUr++E0lSikx9gnJD1H0pWSZpXPnyfp1G63axXml
aeotDYl96S4jHFEkLQdcJftLwKzgJdL2rjLzVrryu/tNEmXS/pZ69Htdg0mPcY+IekXwD8B57T1wGbZ3rm7LRuYpN2As4CdKYJhU+DgkXIen6SbganAJIojsz8AJts+sJvtWtskzQS+THGS/9LW9F49Tacl5zH2j/VtXy+tcFVZTx7hLU9uHge8BJhMsSn5u14/qbdmy2wvkfQG4EzbZ0m6qduN6oIlts/udiNWVzal+8eccvOstWl6MDC7u00amO1lwBdsL7F9m+1ZIywUoTgqfRhwJE+cRTCmi+3plh9IepekzSVNaD263ajBZFO6AZJeBDzb9nnlaRob2r5riDW3BaYBL6S4iuAu4HDb9wy5wQ2QdBpwC/Adj8AvWXl60nHAb2x/Q9I2wKG2R9QddiQN9L237W3XemNWQ4KxZpI+SrFvabLt50jaAvhv23sPse42tu+StAEwyvbc1rQ62l03SXMprnpZQnEvPlH8Qoy0u+xEH0ow1qzc6b4rMKPOsVlaV1JUpt1oe/eh1I16SbrY9iGSbqXc7dGun65UqouknSmuG19+rbjtC7vXosHl4Ev9Ftl264YJZQ9vjZXnwu1EcduuN7TNGk8P3pRA0va2f1selX4S2zPWdpvWstZd1V/T1Vb0iHILal+KYPwxxbXSvwISjCPMxZLOATaWdAzwNuArQ6g3meKXbGPgtW3T5wLHDKEuAJL+Abi03DQ/FdgN+OQQAuxE4FjgCwPMM8VIhz2lznVge3b5b0/u+12ZBr4HLQcDuwA32T5K0mbAuUOs2TzbedT8AF4OfB44HXh5TTX3aqitt5T/vgj4JfA64Lpur8O1/P9V+zoA9gRuAB4DFlGcw/dotz/r2v4eANeX/95IsZUj4LZuf97BHjldpwG2r7D9T7ZPsn1FTWX/1tCVL62Tbl8NnG37e9QwWJGkMZLeI+nb5eMfy/sz9qIm1sF/UNxy7A/AesDRFCe896pGvgfA9PKKn69QhOMM4Poa6jYqB19qUh6FHWhlrvHRWEnHAT93sc+ukStfJP0Q+AuwP8WdthdQ/JUf0u2xJJ1Lcd7eBeWkNwNLbR89lLpNaGIdtN1aa/mBN0m/tv3CWhpds4bWgYBn2L63fD4JGO9+uPqp213WPFb+ADYELih/vqH896a2+TfX8B7rA2+gOO8SYHPggBrqzuxkWi88mlgHwNUUPa4Lgc8B76vj8wMXdTKtF9ZBWefGbv//rskjm9I1aT+rf6DHmtS0/RjFJhg0dOWL7fnAAxT7lqA477CO25ktLdsLLD9Bfekqlu+ahtbBmymuLPtHYB7F7f3/fog1oThDYTlJ61D08Iakwe/BtZL2qKHOWpVN6ZqUZ/ibctO5Nbn81x7imf5NXfnS4Anp+wHnAXeWkyYBR9m+aih1m1D3OihvwnqB7SNqbOPJwIco9lfOb02mOLAzzfaQhtJt8HtwO8WZFXdT/IFo7Vrq6fM5E4w1K2+gcDiwje2PS9oK2Nz2dWtY78TKpPUoeiLzAGwPadyXBk9IHwe8n+J+hABXAP9q+/Gh1G1CE+tA0mXAa20vqqmZrbqfA24FtrV9Wvn9errtIR3QaPB7sDXwVGCfctLVwMND/YPetGxK1++LFKdqtA+C9B9DqLdR+ZgKvJPiS7YxxXW4Ow6hbssiF38dazkhvc2FFGMqf6J8bANcVFPtujWxDu4GrpH0YUknth411B1P8f16Y/m8rkG2mvoeHETx/z6R4tZzFwF/V1PtxuQE7/q9wOUgSAC2H5K0xqc92D4NQMXg7bvZnls+/xjw3zW0t+4T0lsme8UjmlepuDdfL6ptHUi6yPabgUOBf6XofGxUW0vh+XV+v9o09T14O7Cn7XkAkj4L/IbePnUpwdiApgZB2opif1LLIor9dkNi+3RJL6cY2nQy8BHXc+7lTZL2tH0tgKQXANfUUHc5FYODPWh74VDq1LwOdi83H/9EM7/8jXy/GvweiBUPui3liX3vPSv7GGsm6XCK3sJuFOfwHQycantIvTtJp1AMHvRdil+K1wPfsv3pobW4GZLuoPgF+1M5aSvgDopf4lp2vkv6KbAdcIntk4Zarw6S3kOxy2Mb4L72WdRzEK6R71dTyt0Hb6H43kKxaX2+7X9r4L2ebvv+WmolGOunhgZBKm/MsHwntu01viN0EyekV+pvvar5de18L08i3tH2bWvw2sbWgaSzbb9zTV8/SO3avl9Nfw/K99iN4jQgMcTv7SDv8yPbr66lVoIxImJFOSodEVGRYIyIqEgwNkjSsanbTN1+amu/1e2ntjZVN8HYrEa+CKnbWM3Uba5mX9VNMEZEVOSo9BoYq3Feb9SGgy63yI8zVqsxLMvYzu7jumjJfMaus37ndRd2drnuIhYylnU7qzmuw+VYvfZ6VGfn/i5eMo8x66zGVWvzFnRWl4WM6XQdAMXZQoNbrXULeIPOvjeLFs9j7JjO1oMWdPg98ALGar2OlgWgwwxZ3d+HpeM7a8PihY8xZt3Bfx8BFs5/kMUL5w36n5YrX9bAeqM2ZM/16x/rSM/YvPaaAL7nz/UXnbxN/TWBpRvUcXXbk+nXzVyNOGpcM+ORLZm6Q+01x8y8c/CF1oAXL2mk7mMvG9I9mAc082dndrRcNqUjIioSjBERFQnGiIiKBGNEREWCMSKiYkQEo6S7JU3sdjsioj+MiGCMiFgdwy4YJW0g6UeSZkqaJenQcta7Jc2QdGt5P7vWsl+VdIOkmyS9rotNj4geMeyCEXglcJ/tXWzvDFxaTp9jezfgbKB1t+dTgJ/Z3gN4KfD5GgcBiog+NRyD8VZgf0mflbSP7UfK6d8p/72RJ8ZKOQD453LoyJ8D4yhuwf8kko6VNF3S9EW9NwJoRNRo2F0SaPv3knYHDgQ+XY6uB9AaMGkpT3xuAX9v+3cd1J1GMeA9Txk9MReYRwxjw67HKGkLYL7trwGnUwwatDKXUex7VPnaXddCEyOixw27YASeC1xfbh6fAnxyFct+AhgD3CJpVvk8Ika44bgpfRlFT7DdpLb504F9y58XAO9YW22LiP4wHHuMERFDkmCMiKhIMEZEVCQYIyIqEowRERXD7qj0WrHOaEZNeGrtZf94xKa11wSY9NG7aq8571nja68JMP7K3zZSd2kjVcFLmhnv5PFN6h/7ZuxGG9VeE2DpX2Y3Unf89ffWXnP0vM4GBEuPMSKiIsEYEVGRYIyIqEgwRkRUJBgjIioSjBERFQnGiIiKnjyPUdLHgMeA8cDVtn+6quVsny7p461lJd0NTLU9Zy01OSKGkZ4MxhbbH2li2YiIVemZTWlJp0j6naSfApPLaedLOrj8+TOSbpd0i6TTB3j98mXbpq0n6VJJx5TPj5B0vaSbJZ0jaXT5OL8cUfBWSe9bCx83InpYT/QYyzFa3gjsStGmGRSDVrXmTwBeD2xv25I27qDshsA3gQttXyhpB+BQYG/biyV9CTgcuA3YshxRkJXVlnQscCzAuNHNXFoVEb2hV3qM+wDftT3f9qPA9yvzHwUeB86V9AZgfgc1vwecZ/vC8vl+wO7A
DeWwB/sB2wJ3AttKOkvSK8v3ehLb02xPtT117Oj1VvfzRUQf6ZVgBFjpyHu2lwDPBy4BDuKJsaJX5RrgVa2BrihGBLzA9pTyMdn2x2w/BOxCMXzq8cC5Q/gMETEM9EowXg28vtwnuBHw2vaZkjYEnmL7x8AJwJQOan4E+BvwpfL5lcDBkp5W1pwgaWtJE4FRti8BPsyqRxWMiBGgJ/Yx2p4h6VvAzcA9wC8ri2wEfE/SOIqeX6cHSE4Avirpc7Y/IOlU4HJJo4DFFD3EBcB55TSAk4f4cSKiz/VEMALY/hTwqVUs8vwBXvOxtp/f2vbzpLbFjmqb/i3gWwPUTi8xIpbrlU3piIiekWCMiKhIMEZEVCQYIyIqEowRERU9c1S6n3jxYpbOvr/2utt9YW7tNQGWLqt/jLyNLp1Ve02ApQseb6Quy8/zr5eXNjP+4Ea3PFB/0VHNrAMa+H4B/OmwSbXXXHRhZ6MvpscYEVGRYIyIqEgwRkRUJBgjIioSjBERFQnGiIiKBGNERMWID0ZJOZczIlbQV6EgaRLF3buvoxgf5vfAkcAOwBkU47zMAd5qe3Y5CNaxwFjgj8Cbbc+XdD7wYFljhqTvA2eWb2PgxbabOds6InpeP/YYJwPTbD+PYnyW44GzgINt7w58lSfu6/gd23vY3gW4A3h7W53nAPvbfj9wEnC87SkU488sWDsfJSJ6UV/1GEv32r6m/PlrwIeAnYEryuFdRgOzy/k7S/oksDFFb/Kytjr/bbt1LdM1wBmS/osiTP9cfdMVRglk/Xo/UUT0lH4MxuqgWXOB22zvNcCy5wMH2Z4p6a3Avm3z5i0vaH9G0o+AA4FrJe1v+7crvKk9DZgGMH7UhJUO3BUR/a8fN6W3ktQKwcOAa4FNW9MkjZG0Uzl/I2C2pDEUY0gPSNJ2tm+1/VlgOrB9c82PiF7Xj8F4B/AWSbcAEyj3LwKflTSTYkCtF5bLfpjiQM0VwG8HqNVygqRZ5esXAD9pqvER0fv6cVN6me3jKtNuBl5cXdD22cDZA0x/a+X5u+tsYET0t37sMUZENKqveoy276Y4Ah0R0Zj0GCMiKhKMEREVCcaIiIq+2sfYMwxesqT+spO2rL0mADc/Un/NUQ39TW1oYKV+8/t3PL32mqMX1V4SgEmn3NtI3S3PurH2mvcunN/RcukxRkRUJBgjIioSjBERFQnGiIiKBGNEREWCMSKiIsEYEVGRYIyIqEgwRkRUJBgjIioSjBERFblWukMZJTBi5EiPsUO2p9meanvqGNbtdnMiokEJxgpJV0pq6DY3EdEPEoxtJI0CngU82O22RET3JBhXtCNwie0F3W5IRHRPDr60sT0LOLHb7YiI7kqPMSKiIsEYEVGRYIyIqEgwRkRU5OBLDxn1f82cJbSsiaJLM5pfk9Z9SLXXvO5dZ9ReE+DvT9mzkbpeuLCBou5osfQYIyIqEowRERUJxoiIigRjRERFgjEioiLBGBFRkWCMiKhYo2CUdIKk2m9jLel8SQfXXPPHkjaus2ZEDG9r2mM8AQa+v7+k0WvenDVXfV8VRtk+0PbD3WhTRPSnQYNR0gaSfiRppqRZkj4KbAFcJemqcpnHJH1c0nXAXpI+IumGcvlpZUg9TdKN5fK7SLKkrcrn/9vWA91f0i8l/V7Sa8r5oyV9vqx5i6R3lNP3lXSVpK8Dt0qaJOkOSV8CZgDPlHS3pInl8kdIul7SzZLOKeuOLnuqsyTdKul9ta7hiOg7nVwS+ErgPtuvBpD0FOAo4KW255TLbADMsv2RcpnbbX+8/Pki4DW2fyBpnKTxwD7AdGAfSb8CHrA9XxLAJOAlwHYU4fss4EjgEdt7SFoXuEbS5eV7Px/Y2fZdkiYBk4GjbL+rfH/Kf3cADgX2tr24DM/DgduALW3vXC6Xze6IEa6TTelbKXpxn5W0j+1HBlhmKXBJ2/OXSrpO0q3Ay4Cdyum/BvYGXgz8S/nvPsAv2157se1ltv8A3AlsDxwAHCnpZuA6YBPg2eXy19u+q+3199i+doA27gfsDtxQ1tkP2LZ8j20lnSXplcCjA60EScdKmi5p+mIauIYzInrGoD1G27+XtDtwIPDptp5au8dtLwWQNA74EjDV9r2SPgaMK5f7JUUQbg18D/ggYOCH7W9ZbQIg4N22L2ufIWlfYF5l+erz5YsDF9g++UkzpF2AVwDHA4cAb6suY3saMA1gvCZ0diV6RPSlTvYxbgHMt/014HRgN2AusNFKXtIKwTmSNgTajzJfDRwB/MH2MopBpw4Ermlb5h8kjZK0HUWP7nfAZcA7JY0p2/QcSRt0+BlbrgQOlvS0ssYESVuX+x9H2b4E+HD5+SJiBOtkH+Nzgc9LWgYsBt4J7AX8RNJs2y9tX9j2w5K+QrEJfjdwQ9u8u8t9fleXk34FPMP2Q20lfgf8AtgMOM7245LOpdj3OENFgb8CB63OB7V9u6RTgcvL0QAXU/QQFwDnldMAntSjjIiRRe7w/mTxhPGa4Bdov9rrrrP502uvCbBk9v211xy1fu2nsQKwbP78Rur2mz9/6IW112zsfozPaOZ+jE24zlfyqB8c9GaXufIlIqIiwRgRUZFgjIioSDBGRFQkGCMiKnJUeg00dVQa1T8yHNDxyGirpZ/a2oeaOOrvhkZ2bGQ0v4bkqHRExBpKMEZEVCQYIyIqEowRERUJxoiIigRjRERFgjEioiLBGBFRkWCMiKhIMEZEVHRyB++gGAwLOBZg3MBDakfEMJEeY4dsT7M91fbUMazb7eZERIMSjBERFQnGCklXStqy2+2IiO5JMLYpRwp8FsWwrhExQiUYV7QjcIntBd1uSER0T45Kt7E9Czix2+2IiO5KjzEioiLBGBFRkWCMiKhIMEZEVOTgSy9RQ3+n3MzocNGgZctqL+lFi2qvOVylxxgRUZFgjIioSDBGRFQkGCMiKhKMEREVCcaIiIoEYxtJj3W7DRHRfQnGiIiKYReMkv5H0o2SbivHaUHSY5I+JWmmpGslbVZO30bSbyTdIOkT3W15RPSKYReMwNts7w5MBd4jaRNgA+Ba27sAVwPHlMueCZxtew/g/q60NiJ6znAMxvdImglcCzwTeDawCPhhOf9GYFL5897AN8qfL1pVUUnHSpouafpiFtbe6IjoHcPqWmlJ+wL7A3vZni/p58A4YLFtl4stZcXPbTpgexowDWC8JnT0mojoT8Otx/gU4KEyFLcH9hxk+WuAN5Y/H95oyyKibwy3YLwUWEfSLcAnKDanV+W9wPGSbqAI1YgI9MQWZnRqvCb4Bdqv/sKjRtdfE2BZA7cdk+qvCZDvIwCjxo2rveayhQ3tG++j/7PrfCWP+sFBv7zDrccYETFkCcaIiIoEY0RERYIxIqIiwRgRUZFgjIioGFZXvvS7y/58YyN1X7HlrrXXHD1xYu01AZb+9a+N1G3q9CKNbuYUq1GbbVp7zWX33ld7zbJyQ3Ub0OGZRekxRkRUJBgjIioSjBERFQnGiIiKBGNEREWCMSKiIsEYEVHRs8GoQs+2LyKGr64Gj6QTJc0qHyd
ImiTpDklfAmYAz5R0djnWym2STmt77d2STpM0Q9Kt5R27kbSppCvK6edIukfSxHLeEZKul3RzOW90+Ti/bMOtkt7XnbUREb2ia8EoaXfgKOAFFEMQHAM8FZgMXGh7V9v3AKfYngo8D3iJpOe1lZljezfgbOCkctpHgZ+V078LbFW+3w7AocDetqdQjP1yODAF2NL2zrafC5zX5OeOiN7XzR7ji4Dv2p5n+zHgO8A+wD2224ckOETSDOAmYCdgx7Z53yn/bR/570XANwFsXwo8VE7fD9gduEHSzeXzbYE7gW0lnSXplcCjAzU2owRGjBzdvFZ6ZRevzlu+gLQNRU9wD9sPSTqfYtS/llZCtY/8t7K6Ai6wffKTZki7AK8AjgcOAd5WXSajBEaMHN3sMV4NHCRpfUkbAK8HfllZZjxFUD4iaTPgVR3U/RVFuCHpAIrNc4ArgYMlPa2cN0HS1uX+x1G2LwE+DOw2xM8VEX2uaz1G2zPKHuD15aRzeWKzt7XMTEk3AbdRbPJe00Hp04BvSDqnKq5pAAADhElEQVQU+AUwG5hre46kU4HLy6Pdiyl6iAuA89qOgD+pRxkRI0tXbztm+wzgjMrknSvLvHUlr53U9vN0YN/y6SPAK2wvkbQX8FLbC8vlvgV8a4By6SVGxHLD8X6MWwEXlz3ARRRHuyMiOjbsgtH2H4D678waESNGriyJiKhIMEZEVCQYIyIqht0+xn72ii2mNFS5/vPRGxu0qilu5px8L1nSSN0l99zbSN3oTHqMEREVCcaIiIoEY0RERYIxIqIiwRgRUZFgjIioSDBGRFQkGCMiKhKMEREVCcaIiIoEY0RERa6V7pCkY4FjAcaxfpdbExFNSo+xQ7an2Z5qe+oY1u12cyKiQQnGiIiKBGMbST+WtEW32xER3ZV9jG1sH9jtNkRE96XHGBFRkWCMiKhIMEZEVCQYIyIqEowRERU5Kt1LpGbqNjRCXl/ps3WrMWNrr+nFi2qvOVylxxgRUZFgjIioSDBGRFQkGCMiKhKMEREVCcaIiIoEY0RExbANRkmHSTql2+2IiP4zbIJR0lhJG7RNeiVwaYfLRkQs1/fBKGkHSV8Afgc8p5wmYAowQ9JLJN1cPm6StBHwVOA2SedI2qN7rY+IXtSXlwSWvb1DgLcDAs4Dnmd7brnIrsBM25Z0EnC87WskbQg8bnuupMnA64FPSdq0rPE12w+u5D0zGFbECNGvPcbZFKF4tO29bZ/bFopQbEb/pPz5GuAMSe8BNra9BMD2QtvftH0A8Dpgf+C+lQ1tkMGwIkaOfg3Gg4G/AN+V9BFJW1fmHwBcDmD7M8DRwHrAtZK2by0k6WmS3g/8ABgNvAn4v7XQ/ojoYX25KW37cuBySZsARwDfkzSHIgAfAtax/TcASdvZvhW4VdJewPaSZgMXANsDXwMOtP2XbnyWiOg9fRmMLWX4nQmcKen5wFLg5cBP2xY7QdJLy3m3U2xijwP+HbjKzj25ImJFfR2M7WxfDyDpo8C5bdPfPcDiC4GfraWmRUSfGTbB2GL76G63ISL6W78efImIaEyCMSKiIsEYEVGRYIyIqFDOVll9kv4K3NPBohOBOQ00IXX7q639Vref2rq6dbe2velgCyUYGyRpuu2pqVt/3X5qa7/V7ae2NlU3m9IRERUJxoiIigRjs6albmN1+6mt/Va3n9raSN3sY4yIqEiPMSKiIsEYEVGRYIyIqEgwRkRUJBgjIir+HwW8p2+L7eiVAAAAAElFTkSuQmCC\n", "text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "print('Input Sentence:')\n", "output = ''\n", "for index in input_tensor[0]:\n", " word = fr_idx2word[index.item()]\n", " if word != '':\n", " output += ' ' + word\n", " else:\n", " output += ' ' + word\n", " print(output)\n", " break\n", "\n", "print('\\nTarget Sentence:')\n", "print(' ' + batch['english_sentence'][11] + '')\n", "input_len = len(batch['french_sentence'][11].split())\n", "\n", "print('\\nLSTM model output:')\n", "output = ''\n", "for index in output_list:\n", " word = en_idx2word[index.item()]\n", " if word != '':\n", " output += ' ' + word\n", " else:\n", " output += ' ' + word\n", " print(output)\n", " break\n", "\n", "fig = plt.figure()\n", "plt.title('LSTM Model Attention\\n\\n\\n\\n\\n')\n", "ax = fig.add_subplot(111)\n", "ax.matshow(attn[:len(output.split()), :input_len])\n", "ax.set_xticks(np.arange(0,input_len, step=1))\n", "ax.set_yticks(np.arange(0,len(output.split())))\n", "ax.set_xticklabels(batch['french_sentence'][11].split(), rotation=90)\n", "ax.set_yticklabels(output.split()+[''])\n", "\n", "\n", "output = ''\n", "print('\\nGRU model output:')\n", "for index in gru_output_list:\n", " word = en_idx2word[index.item()]\n", " if word != '':\n", " output += ' ' + word\n", " else:\n", " output += ' ' + word\n", " print(output)\n", " break\n", " \n", "fig = plt.figure()\n", "plt.title('GRU Model Attention\\n\\n\\n\\n\\n')\n", "ax2 = fig.add_subplot(111)\n", "ax2.matshow(gru_attn[:len(output.split()), :input_len])\n", "ax2.set_xticks(np.arange(0,input_len, step=1))\n", "ax2.set_yticks(np.arange(0,len(output.split())))\n", "ax2.set_xticklabels(batch['french_sentence'][11].split(), rotation=90)\n", "ax2.set_yticklabels(output.split()+[''])\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hidden": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hidden": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python [default]", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" } }, "nbformat": 4, "nbformat_minor": 2 }