{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Important: This notebook will only work with fastai-0.7.x. Do not try to run any fastai-1.x code from this path in the repository because it will load fastai-0.7.x**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## IMDb"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"At Fast.ai we have introduced a new module called fastai.text which replaces the torchtext library that was used in our 2018 dl1 course. The fastai.text module also supersedes the fastai.nlp library but retains many of the key functions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fastai.text import *\n",
"import html"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Fastai.text module introduces several custom tokens.\n",
"\n",
"We need to download the IMDB large movie reviews from this site: http://ai.stanford.edu/~amaas/data/sentiment/\n",
"Direct link : [Link](http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz) and untar it into the PATH location. We use pathlib which makes directory traveral a breeze."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"DATA_PATH=Path('data/')\n",
"DATA_PATH.mkdir(exist_ok=True)\n",
"#! curl -O http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz \n",
"#! tar -xzfv aclImdb_v1.tar.gz -C {DATA_PATH}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"BOS = 'xbos' # beginning-of-sentence tag\n",
"FLD = 'xfld' # data field tag\n",
"\n",
"PATH=Path('data/aclImdb/')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Standardize format"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"CLAS_PATH=Path('data/imdb_clas/')\n",
"CLAS_PATH.mkdir(exist_ok=True)\n",
"\n",
"LM_PATH=Path('data/imdb_lm/')\n",
"LM_PATH.mkdir(exist_ok=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The imdb dataset has 3 classes. positive, negative and unsupervised(sentiment is unknown). \n",
"There are 75k training reviews(12.5k pos, 12.5k neg, 50k unsup)\n",
"There are 25k validation reviews(12.5k pos, 12.5k neg & no unsup)\n",
"\n",
"Refer to the README file in the imdb corpus for further information about the dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"CLASSES = ['neg', 'pos', 'unsup']\n",
"\n",
"def get_texts(path):\n",
" texts,labels = [],[]\n",
" for idx,label in enumerate(CLASSES):\n",
" for fname in (path/label).glob('*.*'):\n",
" texts.append(fname.open('r', encoding='utf-8').read())\n",
" labels.append(idx)\n",
" return np.array(texts),np.array(labels)\n",
"\n",
"trn_texts,trn_labels = get_texts(PATH/'train')\n",
"val_texts,val_labels = get_texts(PATH/'test')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(75000, 25000)"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(trn_texts),len(val_texts)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"col_names = ['labels','text']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We use a random permutation np array to shuffle the text reviews."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"np.random.seed(42)\n",
"trn_idx = np.random.permutation(len(trn_texts))\n",
"val_idx = np.random.permutation(len(val_texts))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"trn_texts = trn_texts[trn_idx]\n",
"val_texts = val_texts[val_idx]\n",
"\n",
"trn_labels = trn_labels[trn_idx]\n",
"val_labels = val_labels[val_idx]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_trn = pd.DataFrame({'text':trn_texts, 'labels':trn_labels}, columns=col_names)\n",
"df_val = pd.DataFrame({'text':val_texts, 'labels':val_labels}, columns=col_names)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The pandas dataframe is used to store text data in a newly evolving standard format of label followed by text columns. This was influenced by a paper by Yann LeCun ([Link to Paper](https://arxiv.org/pdf/1509.01626.pdf) [Link to Paper’s Datasets](https://drive.google.com/drive/u/0/folders/0Bz8a_Dbh9Qhbfll6bVpmNUtUcFdjYmF2SEpmZUZUcVNiMUw1TWN6RDV3a0JHT3kxLVhVR2M)). Fastai adopts this new format for NLP datasets. In the case of IMDB, there is only one text column."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_trn[df_trn['labels']!=2].to_csv(CLAS_PATH/'train.csv', header=False, index=False)\n",
"df_val.to_csv(CLAS_PATH/'test.csv', header=False, index=False)\n",
"\n",
"(CLAS_PATH/'classes.txt').open('w', encoding='utf-8').writelines(f'{o}\\n' for o in CLASSES)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We start by creating the data for the Language Model(LM). The LM's goal is to learn the structure of the english language. It learns language by trying to predict the next word given a set of previous words(ngrams). Since the LM does not classify reviews, the labels can be ignored.\n",
"\n",
"The LM can benefit from all the textual data and there is no need to exclude the unsup/unclassified movie reviews.\n",
"\n",
"We first concat all the train(pos/neg/unsup = **75k**) and test(pos/neg=**25k**) reviews into a big chunk of **100k** reviews. And then we use sklearn splitter to divide up the 100k texts into 90% training and 10% validation sets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"trn_texts,val_texts = sklearn.model_selection.train_test_split(\n",
" np.concatenate([trn_texts,val_texts]), test_size=0.1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(90000, 10000)"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(trn_texts), len(val_texts)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_trn = pd.DataFrame({'text':trn_texts, 'labels':[0]*len(trn_texts)}, columns=col_names)\n",
"df_val = pd.DataFrame({'text':val_texts, 'labels':[0]*len(val_texts)}, columns=col_names)\n",
"\n",
"df_trn.to_csv(LM_PATH/'train.csv', header=False, index=False)\n",
"df_val.to_csv(LM_PATH/'test.csv', header=False, index=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Language model tokens"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this section, we start cleaning up the messy text. There are 2 main activities we need to perform:\n",
"\n",
"1. Clean up extra spaces, tab chars, new ln chars and other characters and replace them with standard ones\n",
"2. Use the awesome [spacy](http://spacy.io) library to tokenize the data. Since spacy does not provide a parallel/multicore version of the tokenizer, the fastai library adds this functionality. This parallel version uses all the cores of your CPUs and runs much faster than the serial version of the spacy tokenizer.\n",
"\n",
"Tokenization is the process of splitting the text into separate tokens so that each token can be assigned a unique index. This means we can convert the text into integer indexes our models can use.\n",
"\n",
"We use an appropriate chunksize as the tokenization process is memory intensive"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chunksize=24000"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"re1 = re.compile(r' +')\n",
"\n",
"def fixup(x):\n",
" x = x.replace('#39;', \"'\").replace('amp;', '&').replace('#146;', \"'\").replace(\n",
" 'nbsp;', ' ').replace('#36;', '$').replace('\\\\n', \"\\n\").replace('quot;', \"'\").replace(\n",
" ' ', \"\\n\").replace('\\\\\"', '\"').replace('','u_n').replace(' @.@ ','.').replace(\n",
" ' @-@ ','-').replace('\\\\', ' \\\\ ')\n",
" return re1.sub(' ', html.unescape(x))"
]
},
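{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, we can run `fixup` on a made-up review snippet (the string below is purely illustrative, not taken from the dataset): the `<br />` tags should become newlines, and escape fragments such as `#36;` and `quot;` should be mapped back to `$` and quotes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# purely illustrative string to sanity-check fixup()\n",
"sample = 'What a movie!<br /><br />I gave it a #36;10 rating... quot;Superb!quot;'\n",
"fixup(sample)"
]
},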
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def get_texts(df, n_lbls=1):\n",
" labels = df.iloc[:,range(n_lbls)].values.astype(np.int64)\n",
" texts = f'\\n{BOS} {FLD} 1 ' + df[n_lbls].astype(str)\n",
" for i in range(n_lbls+1, len(df.columns)): texts += f' {FLD} {i-n_lbls} ' + df[i].astype(str)\n",
" texts = list(texts.apply(fixup).values)\n",
"\n",
" tok = Tokenizer().proc_all_mp(partition_by_cores(texts))\n",
" return tok, list(labels)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def get_all(df, n_lbls):\n",
" tok, labels = [], []\n",
" for i, r in enumerate(df):\n",
" print(i)\n",
" tok_, labels_ = get_texts(r, n_lbls)\n",
" tok += tok_;\n",
" labels += labels_\n",
" return tok, labels"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_trn = pd.read_csv(LM_PATH/'train.csv', header=None, chunksize=chunksize)\n",
"df_val = pd.read_csv(LM_PATH/'test.csv', header=None, chunksize=chunksize)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0\n",
"1\n",
"2\n",
"3\n",
"0\n"
]
}
],
"source": [
"tok_trn, trn_labels = get_all(df_trn, 1)\n",
"tok_val, val_labels = get_all(df_val, 1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"(LM_PATH/'tmp').mkdir(exist_ok=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"np.save(LM_PATH/'tmp'/'tok_trn.npy', tok_trn)\n",
"np.save(LM_PATH/'tmp'/'tok_val.npy', tok_val)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tok_trn = np.load(LM_PATH/'tmp'/'tok_trn.npy')\n",
"tok_val = np.load(LM_PATH/'tmp'/'tok_val.npy')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[('the', 1207984),\n",
" ('.', 991762),\n",
" (',', 985975),\n",
" ('and', 587317),\n",
" ('a', 583569),\n",
" ('of', 524362),\n",
" ('to', 484813),\n",
" ('is', 393574),\n",
" ('it', 341627),\n",
" ('in', 337461),\n",
" ('i', 308563),\n",
" ('this', 270705),\n",
" ('that', 261447),\n",
" ('\"', 236753),\n",
" (\"'s\", 221112),\n",
" ('-', 188249),\n",
" ('was', 180235),\n",
" ('\\n\\n', 178679),\n",
" ('as', 165610),\n",
" ('with', 159164),\n",
" ('for', 158981),\n",
" ('movie', 157676),\n",
" ('but', 150203),\n",
" ('film', 144108),\n",
" ('you', 124114)]"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"freq = Counter(p for o in tok_trn for p in o)\n",
"freq.most_common(25)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The *vocab* is the **unique set of all tokens** in our dataset. The vocab provides us a way for us to simply replace each word in our datasets with a unique integer called an index.\n",
"\n",
"In a large corpus of data one might find some rare words which are only used a few times in the whole dataset. We discard such rare words and avoid trying to learn meaningful patterns out of them.\n",
"\n",
"Here we have set a minimum frequency of occurence to 2 times. It has been observed by NLP practicioners that a maximum vocab of 60k usually yields good results for classification tasks. So we set maz_vocab to 60000."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"max_vocab = 60000\n",
"min_freq = 2"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"itos = [o for o,c in freq.most_common(max_vocab) if c>min_freq]\n",
"itos.insert(0, '_pad_')\n",
"itos.insert(0, '_unk_')"
]
},
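{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick look at the head of itos shows the two special tokens we just inserted at indices 0 and 1, followed by the most frequent tokens:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"itos[:5]"
]
},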
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We create a reverse mapping called stoi which is useful to lookup the index of a given token. stoi also has the same number of elements as itos. We use a high performance container called [collections.defaultdict](https://docs.python.org/2/library/collections.html#collections.defaultdict) to store our stoi mapping."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"60002"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"stoi = collections.defaultdict(lambda:0, {v:k for k,v in enumerate(itos)})\n",
"len(itos)"
]
},
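{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because stoi is a defaultdict with a default of 0, any token that is not in the vocab maps to index 0, i.e. `_unk_`. A quick check with an arbitrary made-up token:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# a made-up out-of-vocab token falls back to index 0 (_unk_)\n",
"stoi['some_made_up_tokenzzz'], itos[stoi['some_made_up_tokenzzz']]"
]
},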
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"trn_lm = np.array([[stoi[o] for o in p] for p in tok_trn])\n",
"val_lm = np.array([[stoi[o] for o in p] for p in tok_val])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"np.save(LM_PATH/'tmp'/'trn_ids.npy', trn_lm)\n",
"np.save(LM_PATH/'tmp'/'val_ids.npy', val_lm)\n",
"pickle.dump(itos, open(LM_PATH/'tmp'/'itos.pkl', 'wb'))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"trn_lm = np.load(LM_PATH/'tmp'/'trn_ids.npy')\n",
"val_lm = np.load(LM_PATH/'tmp'/'val_ids.npy')\n",
"itos = pickle.load(open(LM_PATH/'tmp'/'itos.pkl', 'rb'))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(60002, 90000)"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"vs=len(itos)\n",
"vs,len(trn_lm)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## wikitext103 conversion"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We are now going to build an english language model for the IMDB corpus. We could start from scratch and try to learn the structure of the english language. But we use a technique called transfer learning to make this process easier. In transfer learning (a fairly recent idea for NLP) a pre-trained LM that has been trained on a large generic corpus(_like wikipedia articles_) can be used to transfer it's knowledge to a target LM and the weights can be fine-tuned.\n",
"\n",
"Our source LM is the wikitext103 LM created by Stephen Merity @ Salesforce research. [Link to dataset](https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/)\n",
"The language model for wikitext103 (AWD LSTM) has been pre-trained and the weights can be downloaded here: [Link](http://files.fast.ai/models/wt103/). Our target LM is the IMDB LM. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# ! wget -nH -r -np -P {PATH} http://files.fast.ai/models/wt103/"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The pre-trained LM weights have an embedding size of 400, 1150 hidden units and just 3 layers. We need to match these values with the target IMDB LM so that the weights can be loaded up."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"em_sz,nh,nl = 400,1150,3"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"PRE_PATH = PATH/'models'/'wt103'\n",
"PRE_LM_PATH = PRE_PATH/'fwd_wt103.h5'"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"wgts = torch.load(PRE_LM_PATH, map_location=lambda storage, loc: storage)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We calculate the mean of the layer0 encoder weights. This can be used to assign weights to unknown tokens when we transfer to target IMDB LM."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"enc_wgts = to_np(wgts['0.encoder.weight'])\n",
"row_m = enc_wgts.mean(0)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"itos2 = pickle.load((PRE_PATH/'itos_wt103.pkl').open('rb'))\n",
"stoi2 = collections.defaultdict(lambda:-1, {v:k for k,v in enumerate(itos2)})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before we try to transfer the knowledge from wikitext to the IMDB LM, we match up the vocab words and their indexes. \n",
"We use the defaultdict container once again, to assign mean weights to unknown IMDB tokens that do not exist in wikitext103."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"new_w = np.zeros((vs, em_sz), dtype=np.float32)\n",
"for i,w in enumerate(itos):\n",
" r = stoi2[w]\n",
" new_w[i] = enc_wgts[r] if r>=0 else row_m"
]
},
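{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick diagnostic (not required for training), we can check what fraction of the IMDb vocab was missing from wikitext103 and therefore fell back to the mean embedding row:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# fraction of IMDb tokens absent from the wikitext103 vocab (these were initialized with row_m)\n",
"np.mean([stoi2[w] < 0 for w in itos])"
]
},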
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now overwrite the weights into the wgts odict.\n",
"The decoder module, which we will explore in detail is also loaded with the same weights due to an idea called weight tying."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"wgts['0.encoder.weight'] = T(new_w)\n",
"wgts['0.encoder_with_dropout.embed.weight'] = T(np.copy(new_w))\n",
"wgts['1.decoder.weight'] = T(np.copy(new_w))"
]
},
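{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick optional check that the encoder and decoder entries in wgts now hold identical values (the model itself ties these parameters):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# encoder and (tied) decoder start from the same weight matrix\n",
"np.array_equal(to_np(wgts['0.encoder.weight']), to_np(wgts['1.decoder.weight']))"
]
},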
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have the weights prepared, we are ready to create and start training our new IMDB language pytorch model!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Language model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is fairly straightforward to create a new language model using the fastai library. Like every other lesson, our model will have a backbone and a custom head. The backbone in our case is the IMDB LM pre-trained with wikitext and the custom head is a linear classifier. In this section we will focus on the backbone LM and the next section will talk about the classifier custom head.\n",
"\n",
"bptt (*also known traditionally in NLP LM as ngrams*) in fastai LMs is approximated to a std. deviation around 70, by perturbing the sequence length on a per-batch basis. This is akin to shuffling our data in computer vision, only that in NLP we cannot shuffle inputs and we have to maintain statefulness. \n",
"\n",
"Since we are predicting words using ngrams, we want our next batch to line up with the end-points of the previous mini-batch's items. batch-size is constant and but the fastai library expands and contracts bptt each mini-batch using a clever stochastic implementation of a batch. (original credits attributed to [Smerity](https://twitter.com/jeremyphoward/status/980227258395770882))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"wd=1e-7\n",
"bptt=70\n",
"bs=52\n",
"opt_fn = partial(optim.Adam, betas=(0.8, 0.99))"
]
},
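{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell below is only a rough sketch of the idea behind the perturbed sequence lengths, not the library's exact implementation: most of the time the length is drawn from a normal distribution centred on `bptt`, and occasionally a half-length window is used."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# illustrative only: one way a per-batch sequence length could be drawn around bptt\n",
"def sample_seq_len(mean_len=bptt):\n",
"    base = mean_len if np.random.random() < 0.95 else mean_len / 2\n",
"    return max(5, int(np.random.normal(base, 5)))\n",
"\n",
"[sample_seq_len() for _ in range(10)]"
]
},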
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The goal of the LM is to learn to predict a word/token given a preceeding set of words(tokens). We take all the movie reviews in both the 90k training set and 10k validation set and concatenate them to form long strings of tokens. In fastai, we use the `LanguageModelLoader` to create a data loader which makes it easy to create and use bptt sized mini batches. The `LanguageModelLoader` takes a concatenated string of tokens and returns a loader.\n",
"\n",
"We have a special modeldata object class for LMs called `LanguageModelData` to which we can pass the training and validation loaders and get in return the model itself."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"trn_dl = LanguageModelLoader(np.concatenate(trn_lm), bs, bptt)\n",
"val_dl = LanguageModelLoader(np.concatenate(val_lm), bs, bptt)\n",
"md = LanguageModelData(PATH, 1, vs, trn_dl, val_dl, bs=bs, bptt=bptt)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We setup the dropouts for the model - these values have been chosen after experimentation. If you need to update them for custom LMs, you can change the weighting factor (0.7 here) based on the amount of data you have. For more data, you can reduce dropout factor and for small datasets, you can reduce overfitting by choosing a higher dropout factor. *No other dropout value requires tuning*"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"drops = np.array([0.25, 0.1, 0.2, 0.02, 0.15])*0.7"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We first tune the last embedding layer so that the missing tokens initialized with mean weights get tuned properly. So we freeze everything except the last layer.\n",
"\n",
"We also keep track of the *accuracy* metric."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learner= md.get_model(opt_fn, em_sz, nh, nl, \n",
" dropouti=drops[0], dropout=drops[1], wdrop=drops[2], dropoute=drops[3], dropouth=drops[4])\n",
"\n",
"learner.metrics = [accuracy]\n",
"learner.freeze_to(-1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learner.model.load_state_dict(wgts)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We set learning rates and fit our IMDB LM. We first run one epoch to tune the last layer which contains the embedding weights. This should help the missing tokens in the wikitext103 learn better weights."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lr=1e-3\n",
"lrs = lr"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "e3839cad2e23478a84362ed0a931abf1",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"A Jupyter Widget"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch trn_loss val_loss accuracy \n",
" 0 4.398856 4.175343 0.28551 \n",
"\n"
]
},
{
"data": {
"text/plain": [
"[4.175343, 0.2855095456305303]"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"learner.fit(lrs/2, 1, wds=wd, use_clr=(32,2), cycle_len=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that we print out accuracy and keep track of how often we end up predicting the target word correctly. While this is a good metric to check, it is not part of our loss function as it can get quite bumpy. We only minimize cross-entropy loss in the LM.\n",
"\n",
"The exponent of the cross-entropy loss is called the perplexity of the LM. (low perplexity is better)."
]
},
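{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, the validation cross-entropy loss of about 4.17 printed above corresponds to a perplexity of roughly 65:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# perplexity = exp(cross-entropy loss), using the validation loss from the epoch above\n",
"np.exp(4.175343)"
]
},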
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learner.save('lm_last_ft')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learner.load('lm_last_ft')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learner.unfreeze()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "1602776e4b864be99452869968c64a93",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"A Jupyter Widget"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch trn_loss val_loss accuracy \n",
" 0 4.866224 4.736622 0.236569 \n",
"\n"
]
}
],
"source": [
"learner.lr_find(start_lr=lrs/10, end_lr=lrs*10, linear=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAYsAAAEOCAYAAAB4nTvgAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMi4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvhp/UCwAAIABJREFUeJzt3Xd8VGXWwPHfSSGNJLRQA4RepUjAggVRQcHFrqBY31VXd9XVteHuu/ZXd+11V2TVtcHa1gIqqBQpIoQuvXdIaGmk57x/zE0yk0wKMCXlfD+ffLj3uc/c+0yYz5w8XVQVY4wxpiohwS6AMcaY2s+ChTHGmGpZsDDGGFMtCxbGGGOqZcHCGGNMtSxYGGOMqZYFC2OMMdWyYGGMMaZafg8WIhIqIstEZKqXax1F5EcRWSkis0Uk0e3aDSKy0fm5wd/lNMYYUznx9wxuEbkXSAbiVPWictc+Aaaq6r9FZDhwk6peJyLNgBTndQosAQap6mG/FtYYY4xXYf68uVNTGA08BdzrJUtv4B7neBbwhXM8EvheVQ859/keuACYXNmzWrRooUlJSb4puDHGNBBLliw5oKoJ1eXza7AAXgIeAGIrub4CuBx4GbgUiBWR5kA7YKdbvl1OWqWSkpJISUk54QIbY0xDIiLba5LPb30WInIRkKqqS6rIdh9wtogsA84GdgOFgHjJW6G9TERuFZEUEUlJS0vzRbGNMcZ44c8O7qHAGBHZBkwBhovIB+4ZVHWPql6mqgOBPztp6bhqEu3dsiYCe8o/QFUnqmqyqiYnJFRbizLGGHOc/BYsVHWCqiaqahIwFpipquPd84hICxEpKcME4G3neDowQkSaikhTYISTZowxJggCPs9CRB4XkTHO6TBgvYhsAFrh6gjH6dh+Aljs/Dxe0tltjDEm8Pw+dDZQkpOT1Tq4jTHm2IjIElVNri6fzeA2xhhTLQsWxhhTB2XnFbL9YHbAnmfBwhhj6qCb3lnM2c/ODtjzLFgYY0wdtGiba8xPdl5hQJ5nwcIYY+qYb1btLT3OKywOyDMtWBhjTB2Ssu0Qd3y4tPT8pw2BWb3CgoUxxtQh6TkFHuffr90fkOdasDDGmDpEyq2ctyUtMCOiLFgYY0wdIuXWWb14QNuAPNeChTHG1CEFRZ4d2o1CA/M1bsHCGGPqkNxyo59yCooC8lwLFsYYU4fklgsOeRYsjDHGlFd+XsXRfAsWxhhjyilfk8i2YGGMMaa8DfszPc4nL9rB+wtrtI32CbFgYYwxdcjHKbsqpL3/8za/P9eChTHG1BEjXpzjNT01M8/vz7ZgYYwxdYCqsmF/ltdr4jXVtyxYGGNMHVDV6rJSfg0QP7BgYYwxdcC3v+6t9FpEmP+/yi1YGGNMHXDPf1ZUes2aoYwxxlRwUb82AHRuEQNYM5QxxhggNSPX4/yMri2Asr0tdh/J8XsZ/B4sRCRURJaJyFQv1zqIyCzn+koRGeWkJ4lIjogsd37+6e9yGmNMbXX4qOeGR42cPoqD2fkBK0NYAJ5xN7AWiPNy7S/Ax6r6DxHpDXwDJDnXNqvqgACUzxhjajX3VqaBHZoQGuJKOLNbC+ZuPBCQMvi1ZiEiicBoYFIlWZSyIBIP7PFneYwxpi7Kdxs2++6NQ0pXns3OK+SmoUl88rvT/F4Gf9csXgIeAGIruf4oMENE7gRigPPcrnUSkWVABvAXVZ3rz4IaY0xt9ehXq0uP46LCSGwaDUBkeCiP/KZPQMrgt5qFiFwEpKrqkiqyjQPeVdVEYBTwvoiEAHuBDqo6ELgX+EhEKjRjicitIpIiIilpaWl+eBfGGBN8IW7tUCJlG6sWqwauDH6891BgjIhsA6YAw0Xkg3J5/gf4GEBVfwYigRaqmqeqB530JcBmoHv5B6jqRFVNVtXkhIQE/70TY4wJopF9WwMQHuo5RDaAscJ/wUJVJ6hqoqomAWOBmao6vly2HcC5ACLSC1ewSBORBBEJddI7A92ALf4qqzHG1GYl+26veGQEAEVOlAgLDcR0PJdAjIbyICKPAymq+hXwJ+AtEbkHV2f3jaqqInIW8LiIFAJFwO9U9VCgy2qMMbXBt6tcS32Eh7r+vu/eytUNPG5Ih4CVQTSQ9Rg/Sk5O1pSUlGAXwxhjfC7poWkAbH16lM9na4vIElVNri6fzeA2xphabnjPlkBglvWojAULY4ypRdKPFrBmT4ZHWnxUOB2aRQepRC4B77MwxhhTuasn/sy6fZn86fzu7EnP5enLTiInv4jI8OD+bW81C2OMqUXW7csE4PnvNzB50Q6KipXcwiKiwkODWi4LFsYYU4vlFBSRk19EhAULY4wxQOmaT+7yC4vJLSy2moUxxhj4fOkuev7vdxXS5206wIqdRzhyNHDLkXtjwcIYY2qBb3/d5zX9rsnLAFixKz2QxanAgoUxxtQCtX1+tA2dNcaYIPp580Ge/nYtK6upOTSLaRSgEnlnwcIYY4Lowc9WsuPQ0WrzHQrgFqreWDOUMcYEUV5hxRFQreIiKqQNaN8kEMWplAULY4wJojy3LVNLXHdqR5642HMHvHvOr7ClT0BZsDDGmCDyNrciJiKMqwd7Lj8eGWbLfRhjTIOVW1CxZhERFkqjsBAuPzmxNK2RBQtjjDHuIpzAcNPQpNK0fC/NVYFkwcIYY2qZCGeF2b7t4kvTkpOaBas4gA2dNcaYWqdRaNnf8VufHkVhsRIaEryNj8CChTHG1GoiQnhocAMFWDOUMcYETUGRZz9Ei8auWdrFtXDtDwsWxhgTJEfzPYfNRjVyLUNeXPtihQULY4wJlvJzLH4/rCtxkWGc0im4ndneWJ+FMcYESfmaxeldWrDy0ZFBKk3V/F6zEJFQEVkmIlO9XOsgIrOc6ytFZJTbtQkisklE1otI7fztGWOMF0kPTSPpoWlsPZBdZb6ccsEiPCz4HdmVCUQz1N3A2kqu/QX4WFUHAmOBNwBEpLdz3ge4AHhDRIK7p6Axxhyj8ZN+qfJ6TrlmKPchs7WNX0smIonAaGBSJVkUiHOO44E9zvHFwBRVzVPVrcAmYIg/y2qMMb5Q5NY73a5pVJV5525M8zgP9pIeVfF3n8VLwANAbCXXHwVmiMidQAxwnpPeDljolm+Xk2aMMbXavE0HSo8Hdqh8WfEtaVm89MNGj7TaHCz8VjIRuQhIVdUlVWQbB7yrqonAKOB9EQkBvDXcVRhMJiK3ikiKiKSkpaV5eYkxxgTWBwu3l51UMQQ2O6/iarMNtRlqKDBGRLYBU4DhIvJBuTz/A3wMoKo/A5FAC1w1ifZu+RIpa6IqpaoTVTVZVZMTEhJ8/w6MMeYYuW9SVFBUebQIcfv2/fR3p/HP8YMQaYAd3Ko6QVUTVTUJV2f1TFUdXy7bDuBcABHphStYpAFfAWNFJEJEOgHdgEX+KqsxxvhK0+iyvbJzveyCB3A0v5CHP19Vet45oTEX9G3t97KdiIDPsxCRx4EUVf0K+BPwlojcg6vCdqOqKrBaRD4G1gCFwO9V1ftv3Rhja
pHsvEIAmkaHVxgaW2LS3K2s2JVeeh7dqPYP9gxIsFDV2cBs5/ivbulrcDVXeXvNU8BTASieMcb4THa+K1g0bxxRabDYfTjH4zwyvPYHi9rbm2KMMXVMZm5B6QinmIiwCvMoSvRoXdkA0drLgoUxxvjIt6v2lR5HhYdUGizy3VabfWXcQL+XyxcsWBhjjI+o21jZ6EZhlTZDpecUlB6f3a1ujOS0YGGMMT7iPvQ1Kjy00ppFhluwiI8O93u5fMGChTHG+Eh+YVnzUmR4aLU1i9euqRtNUGDBwhhjfCYz1zUSau3jFxDdqIqaRW4h/ds34aJ+bQNZvBNiwcIYY3wkI7eAsBAhMjyEqEZV1yziIuvWdkIWLIwxxkcycwuIjQxDRFzNUAVFaLn9tAuLilmx80it3Ge7KhYsjDHGRzJzC4mLcnVYl8zKzi0o9sjz7oJtAMzfdDCgZTtRFiyMMcZHNqVmEeqMiIpyZmWX77d4clple8HVbhYsjDHGR1bvyWCLs5VqSbA45f9+8Jp34nWDAlYuX7BgYYwxPlDozMpOah4NQJTTDFXZMuVndGsRmIL5iAULY4zxgaU7jgAwrEdLoPpd76Ib2WgoY4xpcL5asRuADfszAeiSEFN6rfwQ2h6tbCFBY4xpkE7p1ByAh0f1AqBry7KA8OZPm0uPm8c0IjmpaWAL5wMWLIwxxgeOOvtYNHFb66mkBlGybDlAbkEREWG1f/+K8ixYGGOMD8xclwpA44iyvojIcjvgzVi9j+z8IuKj6sbige4sWBhjjA9MX70fcG16VKJ3G8++iVvfXwLAlgNZgSuYj1iwMMYYHwoPLftafeQ3fbzmKamF1CV1a+yWMcbUMtsPZpeuNlue+97ab8/bWnp834gefi+Xr1mwMMaYE3D2s7NLj8/qXvmud49PXVN6PP7Ujv4skl9YM5QxxvjIFYMSq81zx7AuhIZItflqGwsWxpgG6VB2Pue9MIdNqb7rbI4Orzgk9qNbTvE4H5zUzGfPCyS/BwsRCRWRZSIy1cu1F0VkufOzQUSOuF0rcrv2lb/LaYxpWGas3sem1CyecGseOlb70nM9zqMbVQwWcZGew2TT3fbfrksC0WdxN7AWiCt/QVXvKTkWkTsB9w1pc1R1gP+LZ4xpiPKdhf/mbEjjyalr+MtFvY/5HnM2eI5qivISLFrGRXicd3AWGqxr/FqzEJFEYDQwqQbZxwGT/VkeY4wp4T6CaZLbSKWaUlWenb7BI83b4oAtYyNpGesKGMN7tuTkDnVvqQ/wfzPUS8ADQHFVmUSkI9AJmOmWHCkiKSKyUEQu8WMZjTEN0K7DRz3OC4qq/Jqq4MjRAg5k5XmkeWuGAkoDRF3bd9ud34KFiFwEpKrqkhpkHwt8qqruSzN2UNVk4BrgJRHp4uUZtzoBJSUtLc03BTfGNAiTF+30OM/O8z5XojJHvPQ9eGuGck//YvmeY3pGbeLPmsVQYIyIbAOmAMNF5INK8o6lXBOUqu5x/t0CzMazP6Mkz0RVTVbV5ISEysc3G2NMdQ5l5x9T/tdmbqqQVlnNYtvB7OMqU23it2ChqhNUNVFVk3AFg5mqOr58PhHpATQFfnZLayoiEc5xC1yB5/iHLBhjjBvVirvX7UvPRVW9XvNmb3pOhbTISlaTXeZsjHTHsAoNJHVGjYKFiNwtInHi8i8RWSoiI47ngSLyuIiMcUsaB0xRz/+hXkCKiKwAZgHPqKoFC2PMCduXnsst76VUSH9i2lo6TfiG+z9dWaP7lMyXGNmnVWlaSCWT7f5+eT8AurVqfKzFrTVq2ttys6q+LCIjgQTgJuAdYEZNXqyqs3E1JaGqfy137VEv+RcAJ9WwbMYYUyNFxcqpT/9Yev73y/txWpfmnPn3WazdmwHAp0t28dyV/au9V25hEY3CQnjzumS+X7Of+ZsOVJr3yuREzuqeQOv4yBN/E0FS02BREi5HAe+o6goRqXvz1Y0xDdrKXUc8zgclNaVtk6gK+fILi6vdQzuvoJhIJ8/5vVtxfu9WleYVkTodKKDmfRZLRGQGrmAxXURiqWY4rDHG1HZNosK9rtP06ZJdXvMXFyt3T1nGsh2HyS0o8lhVtr6rabD4H+AhYLCqHgXCcTVFGWNMnZFb4Pk3bsmOdRf0ae2R/vB/V3nt6F645SBfLt/DpW8ssGBRidOA9ap6RETGA38B0v1XLGOM8b11+zI8zsOcjYqUioFh6Y7DgGvUU8kcjJIaR3xUOLkFxUSGN5y1WGv6Tv8BHBWR/rhmZG8H3vNbqYwxxsc27s/ksa+9D6os2RLVnYhQVKyc9vRMrpn0CwADOzQBYFiPBPIKi4ioZKhsfVTTYFHoDG29GHhZVV8GYqt5jTHG1Brnv/iTx3nP1mVfYSVBwJ1q2US9FTtdHeOFxa4ayJfL91BYrISFNpxxPjUNFpkiMgG4DpgmIqG4+i2MMabOWf3YSL78w9DS8ym3ngrAH87pWpqWV1BEeo4rWIQ7QWH+poOl1+duPEB4iDVDlXc1kIdrvsU+oB3wrN9KZYwxPpbY1DVE9pnLTiImIsyjCSkiLJRtz4zmvpFle2PnFhZx5Khr/acopyP7h7WezVUZuXVzb4rjUaNg4QSID4F4Z4HAXFW1PgtjTJ0RGR7KqJNaM3ZIhyrzffF7V40jJ7+YNc5EvYzcQj5J2cnok9p45F23L9M/ha2Farrcx1XAIuBK4CrgFxG5wp8FM8YYX8ktKGJTahYLNh+sNm+z6EaAawnzv365ujT9/k9XsutIDj1aNczu2prO4P4zrjkWqQAikgD8AHzqr4IZY4yv7M9wbX9asglRVZo3dgWLp79dV+FaSUd3Q1TTPouQkkDhOHgMrzXGmKDKcuZJ3Ht+j2pyQkxEzTcomnBhz+MuU11T0y/870RkuojcKCI3AtOAb/xXLGOM8Z3sPNe+ao2PIRBUZmjX5qXHdX29p2NR0w7u+4GJQD+gPzBRVR/0Z8GMMcZXsvJco5ZiIk58El2/xLI5GVG23EdFqvqZqt6rqveo6n/9WShjjPGlrGOsWax6tPLtetwDRGXbqNZHVQYLEckUkQwvP5kiklHVa40xprbIdOZDxEbWbC5xbGR4pUuOu68H1Tqu4TRDVRlmVbVhjhEzxtQrWbmuDu7GkTXvs3jwgp6kZubRrWVjjyXL3VeabdG4+tFV9YWNaDLG1HtZeYWIQMwxNBt1bdmYL38/lOSOTT3S3ffZtmYoY4ypRzJzC2kcEYYvNviMjy5ryoqoZje9+qThvFNjTIOVmVtI7HEOmx0zoC2XDGhb2rHt3vTUkHaXtmBhjKn3MnMLaty5XV50ozBeGjuwdCHCVnERvDJuIFcOSvRlEWu9E5+hYowxtVhuQREz1uync4uYE7rPy2MHMnXlHto1iSKxaTRj+rf1UQnrBgsWxph67avlewDYciD7hO7Tu20cvdvG+aJIdZLfm6FEJFRElonIVC/XXhSR5c7PBhE54nbtBhHZ6Pzc4O9yGmPqp5Ld7cyJCUTN4m5gLVAhJKvqPSXHInInMNA5bgY8AiQDCiwRka9U
9XAAymuMqUcUCxa+4NeahYgkAqOBSTXIPg6Y7ByPBL5X1UNOgPgeuMA/pTTG1GfPz9gAwHNX9g9ySeo2fzdDvQQ8ABRXlUlEOgKdgJlOUjtgp1uWXU6aMcYck5LFA3/Tv001OU1V/BYsnO1XU1V1SQ2yjwU+VdWikpd7yVOhLikit4pIioikpKWlnUBpjTG1TXGx8sCnKzjvhTmoHntT0oqdR+j+52+JiwynbXykx57b5tj5s2YxFBgjItuAKcBwEfmgkrxjKWuCAldNor3beSKwp/yLVHWiqiaranJCQoJvSm2MqRX+MWczH6fsYlNqFoey84/59Re/Pp/8omJW78k4pjWhjHd+CxaqOkFVE1U1CVcwmKmq48vnE5EeQFPgZ7fk6cAIEWkqIk2BEU6aMaaBeHb6+tLj2etPrOXgeCfkmTIBn8EtIo+LyBi3pHHAFHWrZ6rqIeAJYLHz87iTZoyph3ILivhly8HS81W70j2uxzo1g2enr+Otn7bU6J6dE8om4cVazeKEBeQ3qKqzgdnO8V/LXXu0kte8Dbzt56IZY2qBPo9Mp6hYadckivkPDee/y3Z7XP91TwZn90jg9VmbAfifMzoRElLNukxu3RxWszhxtjaUMSag0nMKWOhWiwAocibO7T6Sw85DR4l2lv5e8NBwAF75cSM7D+WU5j+Qncc787eS9NA0Coq8D7ZMzykoPfbF3tsNnf0GjTEB9ccpy5i1Po0Vj4wgPqriX/xn/n0W4NqFLiG2bIXXP3y0tPT4rL/PIrfAFSSycgtpGtPI4x6qypGcAkRAFRqFNpzVYf3FahbGmIBavy8TgH3puQBsSs30mi8jt4Dw0BCuTnYNjFy3ryxfSaAA2JyW5eW1hRQVa+m2pzaH+8RZsDDG+E1qRi6TF+3wSItw9oXYfeQoAF8sc42K79XGc0WgUKdPYnCnZqVp3raPuOKfroGUe47kcMt7KSQ9NI3+j80AoJGzOZF7k5Q5PtYMZYzxmyH/9yMAZ3RtwTer9nIoO5+tzuqvOw66gkXXlo0B+Pvl/Rj31kKy8lz7ZXdoFg14jmpqGRvB/ow8r886/ZmZFdLeuj6ZJ6etZcKFvXz0jhouCxbGGL/Idr70AWas2c/T367zuL79kCtYLN3hWh+0dXxk6bVze7bkyUv7AnByh6ac0bUF8zYdoHurWK/BorJJe91bxfLezUNO7I0YwJqhjPGbA1l53P7BEtIyvf8lXN+t2l02V+KJqWsqXH9n/jYA3vt5O+CaC1HSyvTomD60iY8qzTush2uFhryCYrY9M7o0PTRECBE49/nZFe7/4AU9T/AdGHcWLIzxg4KiYp6bvp5vf93H67M2Bbs4QTF/04FKr5X0Jahq6d7WkeGh/PO6QYzo3Yo2brUMgIEdmgKweo8rAG17ZjTbnhnNfSN6UKxw+GjFPonTujT3yfswLhYsjPGxomLlDx8tZcpi18LJ7y7Y1iBrF7kFRZVeu2t4VwB2Hsohp6CICRe6agFDu7Zg4vXJhIV6fjX1befq/HYfSgtlK8qWmHHPWaXHNrfCtyxYGHOCtqRleYy2+eiX7UxfvZ+OzaO57ezOACze1vBWqzmQlU9i0yheu2Ygv+nflvtH9gBg0vXJxEe75kWU9FeUdGZXJiIslInXDeKD357ikb54m+d+aN1bxZYeN4m2Wdu+ZKHXmBMwZ0MaN76ziKjwUD665VTaNYniw1920L1VY2bcczYFRcV8uHAHXy7fzaiTarafgqryScouerSOpX/7Jn5+B/5zICuP5o0juKhfWy7q1xZVZXjPlvRqE8fnS3cB8Mf/LAcgvgZf7CP6tK6QNm5Ie75e4bkgdaPQEPKLimka3ahCfnP8rGZh6i1VZfKiHV4nbfnKP2dvJio8lKP5Rfxn8Q6uf3sRW9Ky+f05rmaW8NAQrjmlA9+v2c/R/MJq7uYye30aD3y2kvs+WeG3cp+Ijfsz+dXpvC4sKubuKctK+xLcHcjKJ6Fx2Re2iJTOpejTNt4jr7eZ3DUxqGPT0uOSe0+96wz+dvlJpfM0jG9YsDD11rKdR5jw+SrOfX4Ouw4f9fn996XnsnDrQW49qzPn9WrJ5EU7Wbs3g4cu7MnFA8o2dkzu2JRihd5/nc6Y1+Z5Xcsov7C4tI1/2qq9AOxNzz2uTX/87fwXf+KiV+cBsOPQUb5cvofRr8zz6KPIyS9ic1oWHZvHeL1Hx+aezU5JleSrjvuGRv+57VTA1RR19eAOx3U/UzkLFqbeet8ZkhkicOfkZcf9xauq/GfxjgrNHVNX7kEVLhnQjscu7sttZ3Xmt2d04qrB7T3yDerYtLSzdeWudH7e7LmIHsBv30vhjL/NYu7GND5d4mqiycorZMHmg6gqj329ml+2HPT6Hg5n5/Nxys7SmkvSQ9P408f+r5V8u2qvxxIc7u8rZfsh8guLOau7903JIsND2fjUhaXnMSfQGb3q0RFseupC4mxlWb+yPgtTL70zfyv/Xbab287qTOeEGB78bBVzNqQxrEfLY77XTxsP8OBnqwAY0adV6V+zKdsOk9Q8mqQWrr+KJ4zyPku4eeMIfn1sJEfzC+n7yHRSth/mlM7N+GDhDrYeyOLUzs35aYNrc5/r/rUIgCcv6ctfvviVDxZuJ7+omHfmbyudl3Ba5+Z8+NtTSpfoHvjE9wCs2eOq1QB8tnQXz1/V/5jfa3Xcg9XtHy4tnX0N8MbsTZzT0/X73X3YtUKs+/XywkND+P05XRjS6cSGuNry44FhwcLUO18u381jX69hRO9W3D/SNQ7/xe838s78bccVLL51moUA5qxPY0Sf1izbcZjvVu9jRO9WNb5PdKMwEptG88qPG3nlx42l6R8s9Fw7adyQ9ow/tSOz16fy65501n3tudDez1sOsmDzQc7o1sIj/d0F2yo07/haRo5nv8um1CzCQ4WCImXV7nSSHprGZ7efzidO7SihcYS325S6f6RNnKsrrBnK1CupGbnc/8lKhnRqxivjBhIWGkKjsBAuH9SOuRvTSM3MPab7ZecV8s2qvYw+qQ3NYhrx1Yo9qGrplp/lv7Crc8nAsr6M0Se14b4R3QG4c3hXbhqaBEAPZ/jnSe2asPNQDlsPZHPtKR2IjQhj6p1nEBcZ5rE5UEu3uQePfV02U/pY32tN7M1w1RhKZlQDXDEokUsGtC1dCXbS3C0s2e4a0loy+c7UfVazMPXK9NX7yC8q5slL+hIZXtb5eenAdrw+azMzVu9n/Kkda3y/aSv3kpFbyM1ndCIhNoJ3F2xjzvo0MvMKGXVSa8afUvN7Adx7fnf6to2jU4sYurZsTLHC1YM7kBAbweHsfAqLlMsGJQLQq03ZnIGHR/XiyUv6IiKc2T2Bz5bu4pKBbRnSqRmplUz4W7r9MBf0rdlwXXfr9mWwfMcRxg6p2Em811lW/JYzO7NyVzrhocJjY/ry3Iyy/bK//XUfABcPaHvMzza1lwULU69MXbmXzgkxdCvXVt4loTGt4iL4ZeuhGgeLlbuO8MBnK4mNDOPkDk1o2ySSH9bup1OLGDq1iOHmoTXY2tM
L9/kCoVI2K7lpTCOeuKRv6bVTOrva8k9qF+/RAVzSkXvdvxaVTvp77sr+dE6I4bI3FnDJgLZMW7WXpTuO0DmhMV0TGte4nEXFygUvzQXgrO4JtIqL9BiCustZ/K9TixiW/u/5Vd7rfy/qXaNnmrrBgoWpN2au288vWw8x4cKeSLmND0SEUzo1Z+GWgxQUFRMeWn3zyKszXWs6XX5yIiJCm/go5j043C9l9yY+Kpyf7j+nwoS1O4Z1Kd0j4s05WwAY2KEJXRIas+jhc4mLCmdTWhYTf9rCxJ+28NSlfbm2hjWguRvTSo9Pf2Ym44a05+nL+pWmzV6fRtv4yAprN7Utdw7YpLh6xhoUTb3x8g8b6dwihpuGdvJ6fWSf1qRm5vHgZyurvVduQRG/bDnIuT1blo4wCoYOzaMrTFhr3yyad28aXHp+4+lJdElw1aRaxkUSGR5Kj1ZlGwn9e8G2Gj8sPxN9AAAYR0lEQVRv1+Ecj/PJi3Z6nG87mE3fdvEVgvENpycx9c4zeOv65NI0mxRXv1iwMHVeyrZD9Prf71ixK51rT+1Yaafq6H5tuOG0jny+dDfbnA14vMkvLOaPU5aTkVvILWd19uj7qC1O69KcW87sRMpfzuPRMX0qXO/RuqwZbsP+LJ6cuobi4urnmRw56toX4tTOZbvTZeS61r06lJ3P5rRsmnsZ4SQi9G0Xz5CkZsQ0Ci1dKNDUHxYsTJ1UMt4/PaeAP3y0jJyCIsJDpdpO1ZKZvcOem82Pa/dXuOekuVv423fr+G71Pu46txundq6dy1xHhIXy59G9aVHJ0NQOzTxnRE+at5WJc7dUe99D2QU0jghjyq2n8eq4gQC8M28bULY/hXsgKS8+Opzlj4zgnvO71+RtmDrE730WIhIKpAC7VfUiL9evAh7Ftaf6ClW9xkkvAlY52Xao6hh/l9XUDarKbe8vYf3+THLyi0jLyuOz208jsWl0pV+eJXq3jeOFq/rz/IwN3PvxCr6/9yxaxkby+w+XsmjbodKlxLskxHDPed0C8Xb8omWc6/fQJDqcK05OZNK8rTw/Yz192sZxZjfvs6oBDh/Np2mMq9nrdGc/iBd/2MDd53Uj06lhlN8ru7ya9AeZuicQHdx3A2uBCp8wEekGTACGquphEXGfMZWjqgMCUD5Tx3y9ci8z1pTVCsYN6cCgjpX/tVveZScn0i+xCSNenMNdk5excEvZ8uFhIcLIvq259czOFdrl65LWca4O56uS2/PwqF7M23SAdfsyue5fizx2mgNYsfMI+zNyOb93Kw5k5ZV2TDdvHMGQTs3YkuZqsvt1dwZg+0Q0VH79XxeRRGA08BRwr5cstwCvq+phAFVN9Wd5TN2XfrSAx79eTf/EeD6/Yyh7juTQKq7iSJzqdG3ZmJPaxXsEiuE9W/Lob/rQwc+zoAOhbZMoZv7p7NIF+lw1LtdM8Lkb0zizWwJ703OIbhTGxa/PB+Ce87qz/eBRTkosWxH2lE7NWLTVtc7T2/O2Ase/Qqyp2/z9J8JLwANAbCXXuwOIyHwgFHhUVb9zrkWKSApQCDyjql/4uaymDnh7/lYOZefz7k1DCA0R2lezaU5VuraMZcWudF67ZiBndk2o0Z4KdUnnhLJO7pfHDmDaqr389cvV/LQhjY7NYjjr2Vn0aVtW4X/xhw0AHkuYlGzq1P0v33JmtxYs2HzwhBb9M3WX3xoXReQiIFVVl1SRLQzoBgwDxgGTRKRkt5cOqpoMXAO8JCJdvDzjVhFJEZGUtLS08pdNLZKTX8Ts9SdecZyzIY0B7ZvQt1189Zmr8eCFPZhwYU8u7Num3gWK8po3juD605Lo2y6OdfsymbLYNU9j9R5X01L3VmWBpbdbAHEfMHAwK58hSTVv7jP1iz97ooYCY0RkGzAFGC4iH5TLswv4UlULVHUrsB5X8EBV9zj/bgFmAwPLP0BVJ6pqsqomJyRU3mlngu/VmRu58Z3FpJzA9qLpOQWs3HWEM7oe23pMlWkZG8ltZ3dpUPMBerWOY+3eDKav3ueR/sJVZd2DF/UrCxCDOjbj+Stdq9duSssiLspqFQ2V34KFqk5Q1URVTQLGAjNVdXy5bF8A5wCISAtczVJbRKSpiES4pQ8F1mDqpNyCIqYsdk3uen/h9uO+z8+bD1KsMNRHwaIh6tkmjgNZrvkS7n0PfdrGER4qhIZIhXkqJaOj8gttq9KGLOBj3ETkcREpGQY7HTgoImuAWcD9qnoQ6AWkiMgKJ/0ZVbVgUUd9sWw3h7Lz6ZcYzzer9pYOTy3xw5r9jHhxDqkZVa+SOn/TAaIbhTKwQ9Mq85nKdXTr47lvZA+GdGrGAxf0QERY8NC5/PxQxeVMmrgFiCYWLBqsgAQLVZ1dMsdCVf+qql85x6qq96pqb1U9SVWnOOkLnPP+zr//CkQ5je+98uNGHv7vKvq2i+OFqwZQWKy8u2CrR55//7yNDfuzmPD5qgqzjFWVX3enU1SszN90gFM6NbNlr09A2yZRpcctYhrx8W2ncccw12zrhNgIWnoZWdbUI1jU774dUzlrgDR+8+Xy3bzw/QYuHtCWJy7pS1xkOKP6tuHfC7bTvVUsPVvH0bxxIxZsPkjnFjH8uC6VS96YjypcfnI7xg7pwAcLt/PktLW0jotkX0Yu1x7D8uKmonZuwcJbYPCmmXuwsGGzDZYFC3NC8gqLeOH7DcRFhvP7c8rWA1q1K52HP1/F4KSmPH9lf8KcWb13nduNaav2cveU5SQ1j2b8qR0pKlZev/ZkUrYf5l9ztxDVKIxHv17DczM2kFNQxID2Tdi43zVH4OxK9nQ2NePeQX1yhyZV5PT+mtZeVpc1DYMFC3PcUjNzufW9JSzfeQSAIZ2aMTipGQs2H+CWf6fQJLoRL48dWBooAHq0juWdGwezek86z83YwNPfrmNIp2b0bB1LrzZxXHdqR1SVBZsPMnXlXuIiw7h9WBcaR4SRmpnn0Yxijp2I8O3dZ9I6LrLGM9Td8/VPrFmAMfWPuG/AXpclJydrSkpKsIvRYGw9kM31b//Cgcx8nrq0L8/P2EBkeAiXnZzIyz9sJKlFNO/dfEqlf4kWFysXvjyX7Yey+fbus+jUIsZrPlM7fLViD/vTc7nlrM7BLorxMRFZ4sxpqzqfBQtzrFSV0a/MY19GLm/fOJgB7Zswe30qt72/hLzCYk7u0IS3bxxc7ciZ3UdyOJyd75MJdsaY41PTYGHNUOaYzVqfypq9GTx3ZX8GtHc1Swzr0ZIVj4zgUHY+reMia7SNZ7smUR4drsaY2suChTkmqsqrMzeR2DSqwt4RkeGh1qdgTD1lA9ZNjR3Mcm1JumzHEW47u4vtW2BMA2I1C1Mjuw4f5cp//kxaZh63ndWZcYPbB7tIxpgAsmBhqpWamcv4Sb+QnVfIf+8Y6rHfgTGmYbB2BOPhUHY+e47klJ4XFBVz+wdL2Z+Rxzs3DbFAYUwDZcGiErPWp/L379aRW1AU7KIE1P2frODi1+eTnVcIwNPfrGPJ9sP87Yp+DOpoC/gZ01BZM5QX36zay52Tl1FUrPy0MY
03r0tuEEM8c/KLmLvpAPmFxfxr3lY6J8Tw9vyt3Hh6EmP6t63+BsaYestqFuVMXbmHOycvY0D7JrwybiDbDxxlzKvzWLjlYLCL5ncLtxwkv7CYdk2ieHPOZh78dCUnd2jCw6N6Bbtoxpggs2Dh5qsVe7h7ynJO7tCEf988hDH92/LFH4YSHx3OtZN+4d8LtlFfZrx7M3t9KpHhIbx1fTK5hcVEhofyxrWDbElwY4wFixJfLNvNH6csY1DHprx70xAaO5vSd0lozBe/H8o5PRJ45KvVPPRZxT0X6ovZG9I4rXNzereN463rB/HRLafaKqPGGMCCBQCfL93FvR8v55ROzXn3psHERHh25cRFhjPxumTuGNaF/6Ts5PVZm4JUUv/ZeiCb7QePMqxHSwCG92xFj9axQS6VMaa2aPAd3JtSs7jvkxWc1qU5k64fTFSjUK/5QkKE+0f2YM+RHF74YQN9E+M5x/lirasOZ+czZfFOZq1PZW+6a7hsXX9Pxhj/aPDBomvLxrx+zcmc07MlkeHeA0UJEeHpy/qxfn8Wd09extd3nkHH5nVzae15Gw/w2/cWk1tQTL/EeHq0iuPSAe3o0Dy6+hcbYxocW6L8OOw4eJTfvDaPNvGRfH7H6UQ3qlsxt6ComJEv/YQq/GP8yfRsHRfsIhljgqSmS5Rbn8Vx6NA8mpfHDmD9/kwmfL6qzo2QmrJoB1vSsnl4VC8LFMaYGrFgcZyG9WjJfSN68OXyPbzw/YY6M0IqI7eAF3/YyKmdm3FeL+ufMMbUTN1qP6llbj+7C1vSsnl15iZW7U7nhasG0Cym6t3hgu0fszdzKDufv4zuXeM9mI0xxu81CxEJFZFlIjK1kutXicgaEVktIh+5pd8gIhudnxv8Xc7jERIiPHdlP564pC8LNh1k9CtzWbL9ULCLVandR3L417ytXDawnW1laow5JoFohrobWOvtgoh0AyYAQ1W1D/BHJ70Z8AhwCjAEeEREauUqdiLCdad25PM7Tic8NISr31zICzPWsyUtK9hFq+DZ79YhwH0jewS7KMaYOsavwUJEEoHRwKRKstwCvK6qhwFUNdVJHwl8r6qHnGvfAxf4s6wnqm+7eKbedQYj+7TmlZmbGP78HIY/N5unpq1hxc4jwS4eK3cd4Yvle/jtmZ1s61NjzDHzd83iJeABoLiS692B7iIyX0QWikhJQGgH7HTLt8tJ8yAit4pIioikpKWl+bLcxyUuMpzXrz2ZuQ+cw2Nj+tCuaRTvLtjGxa/P549TlpGakRu0sr05ZwtNosP53dldglYGY0zd5bcObhG5CEhV1SUiMqyK53cDhgGJwFwR6Qt463mtMNxIVScCE8E1z8IHxfaJ9s2iueH0JG44PYnM3AIm/rSFN+ds4Ye1qfzxvG7ccHpSQPevPpiVx4w1+7j+tCRiI8MD9lxjTP3hz2+socAYEdkGTAGGi8gH5fLsAr5U1QJV3QqsxxU8dgHumzwnAnv8WFa/iY0M508jejDjnrMYnNSUJ6et5fYPlgR0bsZ/l+2moEi52vbNNsYcJ78FC1WdoKqJqpoEjAVmqur4ctm+AM4BEJEWuJqltgDTgREi0tTp2B7hpNVZSS1iePvGwTw8qic/rE3lvZ+3B+S5qsp/Fu9kYIcmdG9lCwMaY45PwCflicjjIjLGOZ0OHBSRNcAs4H5VPaiqh4AngMXOz+NOWp0mItxyZmfO6ZHAU9+sZf2+TL8/c9nOI2xMzWKs1SqMMSfA1oYKgrTMPC58+Seax0Tw5R+GVruAYU3NWL2P12dt4sELe3J6lxbk5Bdx2wdLWLLtEIv+fF6FpdeNMcbWhqrFEmIjePaK/qzfn8kz367zyT2/X7OfOz5cyuo9GYyf9At//24dl74xn7kb03jggp4WKIwxJ8S+QYLknJ4tufH0JN5dsI2YiFDuPb8HoSHVL78xe30qRcVKq7hI4qPC2Z+Ry+o9GTw5bQ192sXz5vhB/N83a3lj9mbio8J596YhnN09IQDvyBhTn1mwCKKHR/Uit6CI12dtZuWudF4ZO5CmVawttWjrIW58Z7HXa/0T43nv5iHER4Xz8tgBXDygLT3bxNHOJuAZY3zA+ixqgcmLdvDIl6tpGRfBb/q3JSo8lOhGoYzs05r2zco2Ixo/6RfW7cvgzesGkZaZR3pOAa3iImnXJIpOLWIIC+DcDWNM/VDTPgurWdQC44Z0oFebOO79eDmT5m6hoMgVwN+et5Wpd51Js5hGLNl+mHmbDvDwqJ4M6tgsyCU2xjQ0FixqiQHtmzDzT8MA1052v+5O5+qJC7lr8jL+ffMQXp25kWYxjbj2lI7BLagxpkGydotaKDw0hIEdmvLkJX2Zt+kAd3y4hNnr0/jtmZ1sVJMxJijsm6cWuyq5Pct2HGHyoh00iQ7n+tOSgl0kY0wDZcGilnt0TG8ycwsY1qMlja1WYYwJEvv2qeUiwkJ57ZqTg10MY0wDZ30WxhhjqmXBwhhjTLUsWBhjjKmWBQtjjDHVsmBhjDGmWhYsjDHGVMuChTHGmGpZsDDGGFOterNEuYikAxuP8WXxQLoP81aV53iutQAO1Kh0wXEsv79g3f947lHT15zoZ6K66/a58M+9/fmZqGne2vRd0VFVq98hTVXrxQ8w0Z+vqUneqvIczzUgJdi/V1//zgN9f39+Lk70M2Gfi/r3mfDF56K2fibqUzPU135+TU3yVpXneK/VZv4uty/u78/PxYl+Jqq7bp8L/9zbviuOQ71phqqPRCRFa7CDlWlY7HNhygvEZ6I+1Szqo4nBLoCplexzYcrz+2fCahbGGGOqZTULY4wx1bJgYYwxploWLIwxxlTLgkUdJiIxIrJERC4KdllM8IlILxH5p4h8KiK3B7s8pnYQkUtE5C0R+VJERhzvfSxYBIGIvC0iqSLya7n0C0RkvYhsEpGHanCrB4GP/VNKE0i++Eyo6lpV/R1wFWBDa+sBH30uvlDVW4AbgauPuyw2GirwROQsIAt4T1X7OmmhwAbgfGAXsBgYB4QCT5e7xc1AP1xT/COBA6o6NTClN/7gi8+EqqaKyBjgIeA1Vf0oUOU3/uGrz4XzuueBD1V16fGUJey43oE5Iar6k4gklUseAmxS1S0AIjIFuFhVnwYqNDOJyDlADNAbyBGRb1S12K8FN37ji8+Ec5+vgK9EZBpgwaKO89F3hQDPAN8eb6AACxa1STtgp9v5LuCUyjKr6p8BRORGXDULCxT1zzF9JkRkGHAZEAF849eSmWA6ps8FcCdwHhAvIl1V9Z/H81ALFrWHeEmrto1QVd/1fVFMLXFMnwlVnQ3M9ldhTK1xrJ+LV4BXTvSh1sFde+wC2rudJwJ7glQWUzvYZ8J4E5TPhQWL2mMx0E1EOolII2As8FWQy2SCyz4TxpugfC4sWASBiEwGfgZ6iMguEfkfVS0E/gBMB9YCH6vq6mCW0wSOfSaMN7Xpc2FDZ40xxlTLahbGGGOqZcHCGGNMtSxYGGOMqZYFC2OMMdWyYGGMMaZaFiyMMcZUy4KFCRoRyQrAM8bUcLl3Xz5zm
IicfhyvGygik5zjG0XkNd+X7tiJSFL5JbK95EkQke8CVSYTeBYsTJ3nLNnslap+parP+OGZVa2rNgw45mABPAy8elwFCjJVTQP2isjQYJfF+IcFC1MriMj9IrJYRFaKyGNu6V84uwGuFpFb3dKzRORxEfkFOE1EtonIYyKyVERWiUhPJ1/pX+gi8q6IvCIiC0Rki4hc4aSHiMgbzjOmisg3JdfKlXG2iPyfiMwB7haR34jILyKyTER+EJFWznLSvwPuEZHlInKm81f3Z877W+ztC1VEYoF+qrrCy7WOIvKj87v5UUQ6OOldRGShc8/HvdXUxLWb4jQRWSEiv4rI1U76YOf3sEJEFolIrFODmOv8Dpd6qx2JSKiIPOv2f3Wb2+UvgGu9/gebuk9V7cd+gvIDZDn/jgAm4lpNMwSYCpzlXGvm/BsF/Ao0d84VuMrtXtuAO53jO4BJzvGNuDYCAngX+MR5Rm9cewIAXIFrSe8QoDVwGLjCS3lnA2+4nTelbBWE3wLPO8ePAve55fsIOMM57gCs9XLvc4DP3M7dy/01cINzfDPwhXM8FRjnHP+u5PdZ7r6XA2+5nccDjYAtwGAnLQ7XCtTRQKST1g1IcY6TgF+d41uBvzjHEUAK0Mk5bwesCvbnyn7882NLlJvaYITzs8w5b4zry+on4C4RudRJb++kHwSKgM/K3edz598luPZ18OYLde39sUZEWjlpZwCfOOn7RGRWFWX9j9txIvAfEWmD6wt4ayWvOQ/o7dqDBoA4EYlV1Uy3PG2AtEpef5rb+3kf+Ltb+iXO8UfAc15euwp4TkT+BkxV1bkichKwV1UXA6hqBrhqIcBrIjIA1++3u5f7jQD6udW84nH9n2wFUoG2lbwHU8dZsDC1gQBPq+qbHomuzXzOA05T1aMiMhvXNrIAuapaVO4+ec6/RVT+2c5zO5Zy/9ZEttvxq8ALqvqVU9ZHK3lNCK73kFPFfXMoe2/VqfGCbqq6QUQGAaOAp0VkBq7mIm/3uAfYD/R3ypzrJY/gqsFN93ItEtf7MPWQ9VmY2mA6cLOINAYQkXYi0hLXX62HnUDREzjVT8+fB1zu9F20wtVBXRPxwG7n+Aa39Ewg1u18Bq5VQgFw/nIvby3QtZLnLMC1DDW4+gTmOccLcTUz4Xbdg4i0BY6q6ge4ah4nA+uAtiIy2MkT63TYx+OqcRQD1+Ha07m86cDtIhLuvLa7UyMBV02kylFTpu6yYGGCTlVn4GpG+VlEVgGf4vqy/Q4IE5GVwBO4vhz94TNcG8r8CrwJ/AKk1+B1jwKfiMhc4IBb+tfApSUd3MBdQLLTIbwGV/+CB1Vdh2vby9jy15zX3+T8Hq4D7nbS/wjcKyKLcDVjeSvzScAiEVkO/Bl4UlXzgauBV0VkBfA9rlrBG8ANIrIQ1xd/tpf7TQLWAEud4bRvUlaLOweY5uU1ph6wJcqNAUSksapmiUhzYBEwVFX3BbgM9wCZqjqphvmjgRxVVREZi6uz+2K/FrLq8vwEXKyqh4NVBuM/1mdhjMtUEWmCq6P6iUAHCsc/gCuPIf8gXB3SAhzBNVIqKEQkAVf/jQWKespqFsYYY6plfRbGGGOqZcHCGGNMtSxYGGOMqZYFC2OMMdWyYGGMMaZaFiyMMcZU6/8BYcxvhokpKrYAAAAASUVORK5CYII=\n",
"text/plain": [
"