{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "colab": { "name": "01_Intro.ipynb", "provenance": [], "collapsed_sections": [] }, "kernelspec": { "name": "python3", "display_name": "Python 3" }, "accelerator": "GPU" }, "cells": [ { "cell_type": "markdown", "metadata": { "id": "6O6Bq4eHwAIM", "colab_type": "text" }, "source": [ "# 01 - Introduction to NLP in `fastai2`\n", "\n", "Things work a little differently in `fastai2` for text compared to the other two modules (vision and tab)\n", "\n", "\n", "* We pre-tokenize our text\n", "* The training outline is different\n", "* ULM-FiT get's fine-tuned differently too\n", "\n", "In today's lesson we'll explore the *high-level* API for Text, and understand what makes it different" ] }, { "cell_type": "code", "metadata": { "id": "7hr_QXrqv-Cj", "colab_type": "code", "colab": {} }, "source": [ "!pip install fastai2 nbdev --quiet\n", "!pip show fastai2" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "Su-SKRnkxRHH", "colab_type": "text" }, "source": [ "## Starting with the data" ] }, { "cell_type": "code", "metadata": { "id": "AES9USzGxM9y", "colab_type": "code", "colab": {} }, "source": [ "from fastai2.text.all import *" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "QW86zV2MxU0j", "colab_type": "text" }, "source": [ "We're going to use a subset of the IMDB dataset, a sentiment-analysis dataset where you try to see if a review was positive or negative:" ] }, { "cell_type": "code", "metadata": { "id": "umg1EXMgxUfY", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 17 }, "outputId": "260b0b2a-4829-4b2f-dba4-2f4fd0762d8f" }, "source": [ "path = untar_data(URLs.IMDB_SAMPLE)" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } } ] }, { "cell_type": "markdown", "metadata": { "id": "Grjht8LExZ7c", 
"colab_type": "text" }, "source": [ "What's in our path?" ] }, { "cell_type": "code", "metadata": { "id": "mZpO9HcPxZZI", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "outputId": "02be03ff-fde0-4d86-a3f8-546ff5978e83" }, "source": [ "path.ls()" ], "execution_count": null, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "(#1) [Path('/root/.fastai/data/imdb_sample/texts.csv')]" ] }, "metadata": { "tags": [] }, "execution_count": 4 } ] }, { "cell_type": "code", "metadata": { "id": "IbD1541Ex5TU", "colab_type": "code", "colab": {} }, "source": [ "df = pd.read_csv(path/'texts.csv')" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "Py2247P3x7Gn", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 733 }, "outputId": "3ff0001c-968e-4761-9d90-afde60794403" }, "source": [ "df.head()" ], "execution_count": null, "outputs": [ { "output_type": "execute_result", "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
label | text | is_valid
0 | negative | Un-bleeping-believable! Meg Ryan doesn't even look her usual pert lovable self in this, which normally makes me forgive her shallow ticky acting schtick. Hard to believe she was the producer on this dog. Plus Kevin Kline: what kind of suicide trip has his career been on? Whoosh... Banzai!!! Finally this was directed by the guy who did Big Chill? Must be a replay of Jonestown - hollywood style. Wooofff! | False
1 | positive | This is a extremely well-made film. The acting, script and camera-work are all first-rate. The music is good, too, though it is mostly early in the film, when things are still relatively cheery. There are no really superstars in the cast, though several faces will be familiar. The entire cast does an excellent job with the script.<br /><br />But it is hard to watch, because there is no good end to a situation like the one presented. It is now fashionable to blame the British for setting Hindus and Muslims against each other, and then cruelly separating them into two countries. There is som... | False
2 | negative | Every once in a long while a movie will come along that will be so awful that I feel compelled to warn people. If I labor all my days and I can save but one soul from watching this movie, how great will be my joy.<br /><br />Where to begin my discussion of pain. For starters, there was a musical montage every five minutes. There was no character development. Every character was a stereotype. We had swearing guy, fat guy who eats donuts, goofy foreign guy, etc. The script felt as if it were being written as the movie was being shot. The production value was so incredibly low that it felt li... | False
3 | positive | Name just says it all. I watched this movie with my dad when it came out and having served in Korea he had great admiration for the man. The disappointing thing about this film is that it only concentrate on a short period of the man's life - interestingly enough the man's entire life would have made such an epic bio-pic that it is staggering to imagine the cost for production.<br /><br />Some posters elude to the flawed characteristics about the man, which are cheap shots. The theme of the movie \"Duty, Honor, Country\" are not just mere words blathered from the lips of a high-brassed offic... | False
4 | negative | This movie succeeds at being one of the most unique movies you've seen. However this comes from the fact that you can't make heads or tails of this mess. It almost seems as a series of challenges set up to determine whether or not you are willing to walk out of the movie and give up the money you just paid. If you don't want to feel slighted you'll sit through this horrible film and develop a real sense of pity for the actors involved, they've all seen better days, but then you realize they actually got paid quite a bit of money to do this and you'll lose pity for them just like you've alr... | False
\n", "
" ], "text/plain": [ " label ... is_valid\n", "0 negative ... False\n", "1 positive ... False\n", "2 negative ... False\n", "3 positive ... False\n", "4 negative ... False\n", "\n", "[5 rows x 3 columns]" ] }, "metadata": { "tags": [] }, "execution_count": 6 } ] }, { "cell_type": "markdown", "metadata": { "id": "kJllXF1mxb_t", "colab_type": "text" }, "source": [ "Alright! So what's the general plan for training?\n", "\n", "1. Language Model (LM) DataLoaders\n", "2. LM Training\n", "3. Classification DataLoaders\n", "4. Fine-Tune with LM encoder" ] }, { "cell_type": "markdown", "metadata": { "id": "ooaA9sWpxuY3", "colab_type": "text" }, "source": [ "We're just going to touch on how to use the API, in the next lesson we'll touch on the nitty-gritty of what is going on. For now, let's look at how to build this LM DataLoader using the `DataBlock` API:" ] }, { "cell_type": "markdown", "metadata": { "id": "hCMZXXA3x-jx", "colab_type": "text" }, "source": [ "### `TextBlock`\n", "\n", "As it's a text problem, we'll probably want something like a `TextBlock` right? Well we can't simply do this:" ] }, { "cell_type": "code", "metadata": { "id": "Zs_6pPZpxbDY", "colab_type": "code", "colab": {} }, "source": [ "block = [TextBlock]" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "KtsyF03fyH21", "colab_type": "text" }, "source": [ "It won't throw any errors yet, so *why*?" ] }, { "cell_type": "code", "metadata": { "id": "wI3NfuRdyPW5", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 168 }, "outputId": "37727a80-8aa0-4ceb-fd3a-404e5b6e9a13" }, "source": [ "doc(TextBlock)" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "

class TextBlock[source]

TextBlock(tok_tfm, vocab=None, is_lm=False, seq_len=72, min_freq=3, max_vocab=60000, special_toks=None) :: TransformBlock

\n", "
\n", "

A TransformBlock for texts

\n", "


\n" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } } ] }, { "cell_type": "markdown", "metadata": { "id": "EHXds2onyXFH", "colab_type": "text" }, "source": [ "`TextBlock` needs to know how we plan to tokenize our words (our `tok_tfm`), if we want to use a vocab already, if it's a language model, our sequence length, and a few other parameters. So there's a lot going on there!\n", "\n", "Along with this there's a few class methods for this too:" ] }, { "cell_type": "code", "metadata": { "id": "pnmlws5-ySS0", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 177 }, "outputId": "5365d9ed-1bb9-491e-ff68-84e49ad02d1b" }, "source": [ "doc(TextBlock.from_df)" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "

TextBlock.from_df[source]

TextBlock.from_df(text_cols, vocab=None, is_lm=False, seq_len=72, min_freq=3, max_vocab=60000, tok_func='SpacyTokenizer', rules=None, sep=' ', n_workers=2, mark_fields=None, res_col_name='text', **kwargs)

\n", "
\n", "

Build a TextBlock from a dataframe using text_cols

\n", "


\n" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } } ] }, { "cell_type": "code", "metadata": { "id": "4hXwCjUnyriT", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 177 }, "outputId": "6a771ba4-f1bf-476c-8d69-346d34f3e231" }, "source": [ "doc(TextBlock.from_folder)" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "

TextBlock.from_folder[source]

TextBlock.from_folder(path, vocab=None, is_lm=False, seq_len=72, min_freq=3, max_vocab=60000, tok_func='SpacyTokenizer', rules=None, extensions=None, folders=None, output_dir=None, n_workers=2, encoding='utf8', **kwargs)

\n", "
\n", "

Build a TextBlock from a path

\n", "


\n" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } } ] }, { "cell_type": "markdown", "metadata": { "id": "7In6V0Zgyt_T", "colab_type": "text" }, "source": [ "So we can see if we don't necissarily want to define everything ourself we can use quick and easy `from_df` and `from_folder` methods. We'll use `from_df` here. \n", "\n", "But what? What is this `res_col_name`? `res_col_name` is the column where our tokenized text will be added to. This becomes *very* important as our `get_x` is going to want to pull from this column rather than where our untokenized input is. So let's build a `TextBlock` for our problem. So we can see a different output, we'll change our `res_col_name` to `tok_text`:" ] }, { "cell_type": "code", "metadata": { "id": "oHvRTZQxytVy", "colab_type": "code", "colab": {} }, "source": [ "lm_block = TextBlock.from_df('text', is_lm=True, res_col_name='tok_text')" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "5AvcUtCxzRd1", "colab_type": "text" }, "source": [ "For the rest of our `DataBlock`, we want our `get_x` to read that `res_col_name` column, and our splitter to split our data 90%, 10%: \n", "\n", "> The more data you can train your LM on, the better:" ] }, { "cell_type": "code", "metadata": { "id": "AYsnwPl5zQ6B", "colab_type": "code", "colab": {} }, "source": [ "dblock = DataBlock(blocks=lm_block,\n", " get_x=ColReader('tok_text'),\n", " splitter=RandomSplitter(0.1))" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "TNKcYV_L0Uy3", "colab_type": "text" }, "source": [ "And now we build the `DataLoaders`. 
We need to declare how long our sequence length is going to be here as well:\n", "\n", "> We'll also set `num_workers` to 4, the rule of thumb is 4 workers / 1 GPU, [source](https://discuss.pytorch.org/t/guidelines-for-assigning-num-workers-to-dataloader/813/5)" ] }, { "cell_type": "code", "metadata": { "id": "-OzaNmsKzaui", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 50 }, "outputId": "85000e38-68b1-416f-a7a4-ea4a51f84bf5" }, "source": [ "%%time\n", "dls = dblock.dataloaders(df, bs=64, seq_len=72, num_workers=4)" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } }, { "output_type": "stream", "text": [ "CPU times: user 766 ms, sys: 79.2 ms, total: 845 ms\n", "Wall time: 3.47 s\n" ], "name": "stdout" } ] }, { "cell_type": "markdown", "metadata": { "id": "Ej1E6ER01eaH", "colab_type": "text" }, "source": [ "Let's look at a batch of data, both in a raw form and as a show_batch:" ] }, { "cell_type": "code", "metadata": { "id": "amyAR4oT09kT", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 438 }, "outputId": "1bab5fac-3ff5-4664-c1c7-a5cb6e4ac96c" }, "source": [ "dls.show_batch(max_n=3)" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
text | text_
0 | xxbos i xxunk this movie a xxunk days xxunk … what the xxunk was that ? \\n\\n i like movies with xxmaj xxunk xxunk , they are xxunk and xxunk . xxmaj when i xxunk a xxunk of this xxunk and xxunk i xxunk great , this one could be really good … some xxunk for xxunk or xxunk xxunk movies … but xxunk then i xxunk a xxunk and xxunk xxunk | i xxunk this movie a xxunk days xxunk … what the xxunk was that ? \\n\\n i like movies with xxmaj xxunk xxunk , they are xxunk and xxunk . xxmaj when i xxunk a xxunk of this xxunk and xxunk i xxunk great , this one could be really good … some xxunk for xxunk or xxunk xxunk movies … but xxunk then i xxunk a xxunk and xxunk xxunk it
1 | xxunk . xxmaj the xxunk is , i xxunk n't xxunk this film at all xxunk . \\n\\n xxmaj it 's the xxunk of xxunk you xxunk xxunk to see xxunk on a xxunk as a xxunk xxunk xxunk and as an xxunk in xxunk xxunk xxunk , xxunk xxunk . and just xxunk the xxunk , it xxunk on some xxunk . xxmaj as a xxunk xxunk of film though , | . xxmaj the xxunk is , i xxunk n't xxunk this film at all xxunk . \\n\\n xxmaj it 's the xxunk of xxunk you xxunk xxunk to see xxunk on a xxunk as a xxunk xxunk xxunk and as an xxunk in xxunk xxunk xxunk , xxunk xxunk . and just xxunk the xxunk , it xxunk on some xxunk . xxmaj as a xxunk xxunk of film though , it
2 | were xxunk xxunk on such a xxunk of a movie … if you can even xxunk it a movie . xxbos xxmaj the only xxunk xxunk of this film is the xxunk xxunk … xxunk , this movie was xxunk . xxmaj the acting was xxunk xxunk , and the xxunk xxunk was xxunk and very xxunk . xxmaj the xxunk was xxunk , but it was very hard to xxunk xxunk | xxunk xxunk on such a xxunk of a movie … if you can even xxunk it a movie . xxbos xxmaj the only xxunk xxunk of this film is the xxunk xxunk … xxunk , this movie was xxunk . xxmaj the acting was xxunk xxunk , and the xxunk xxunk was xxunk and very xxunk . xxmaj the xxunk was xxunk , but it was very hard to xxunk xxunk to
" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } } ] }, { "cell_type": "markdown", "metadata": { "id": "xxnfOUDS1j6z", "colab_type": "text" }, "source": [ "So this looks a bit odd, what happened?\n", "\n", "* Tokenized and Numericalized our text (we'll see the latter in a moment)\n", "* LM's want to predict the *next* word in a sequence" ] }, { "cell_type": "code", "metadata": { "id": "L39lo6UQ1jfX", "colab_type": "code", "colab": {} }, "source": [ "xb,yb = next(iter(dls[0]))" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "aV2i9cN-1sow", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 118 }, "outputId": "eb57213a-6665-40e4-8e7a-7abbf8cace5d" }, "source": [ "xb[0]" ], "execution_count": null, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "tensor([ 2, 25, 0, 0, 8, 0, 38, 0, 0, 86, 16, 0, 66, 11,\n", " 12, 27, 0, 86, 25, 0, 9, 0, 21, 18, 26, 11, 12, 102,\n", " 73, 48, 86, 25, 0, 19, 8, 0, 92, 0, 16, 0, 10, 8,\n", " 9, 26, 11, 0, 11, 27, 14, 0, 0, 10, 8, 9, 0, 28,\n", " 0, 0, 9, 0, 12, 0, 0, 13, 9, 0, 0, 11, 12, 9,\n", " 0, 17], device='cuda:0')" ] }, "metadata": { "tags": [] }, "execution_count": 33 } ] }, { "cell_type": "code", "metadata": { "id": "mhw3gHSy1tZN", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 118 }, "outputId": "e3c1586e-a4d3-4a9a-c252-9f1dbc078af9" }, "source": [ "yb[0]" ], "execution_count": null, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "tensor([ 25, 0, 0, 8, 0, 38, 0, 0, 86, 16, 0, 66, 11, 12,\n", " 27, 0, 86, 25, 0, 9, 0, 21, 18, 26, 11, 12, 102, 73,\n", " 48, 86, 25, 0, 19, 8, 0, 92, 0, 16, 0, 10, 8, 9,\n", " 26, 11, 0, 11, 27, 14, 0, 0, 10, 8, 9, 0, 28, 0,\n", " 0, 9, 0, 12, 0, 0, 13, 9, 0, 0, 11, 12, 9, 0,\n", " 17, 0], device='cuda:0')" ] }, "metadata": { "tags": [] }, "execution_count": 34 } ] }, { "cell_type": "markdown", "metadata": { "id": "jZeViWAp1vX_", 
"colab_type": "text" }, "source": [ "This is where that `Numericalization` comes into play. Each token/vocabulary gets converted into a number for us to pass to our model, also:" ] }, { "cell_type": "code", "metadata": { "id": "ysX_slwr1uLb", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "outputId": "b6a2ad75-217d-4f49-bafe-7a3c8024e459" }, "source": [ "xb[0].shape, yb[0].shape" ], "execution_count": null, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "(torch.Size([72]), torch.Size([72]))" ] }, "metadata": { "tags": [] }, "execution_count": 36 } ] }, { "cell_type": "markdown", "metadata": { "id": "3G1WAYhX14X3", "colab_type": "text" }, "source": [ "We can see that since we passed in a `seq_len` of 72, each individual text input is 72 words!" ] }, { "cell_type": "markdown", "metadata": { "id": "aHuLhP3F2DFX", "colab_type": "text" }, "source": [ "Cool, can we train already?" ] }, { "cell_type": "markdown", "metadata": { "id": "Xv088vEJ2ETS", "colab_type": "text" }, "source": [ "## Training the LM:\n", "\n", "\n", "We have a special `Learner` for language models, `language_model_learner` we'll use. The only thing you need to pass in is your `DataLoaders`, specify the `arch`, and pass in some metrics:" ] }, { "cell_type": "code", "metadata": { "id": "IY1RqZPK12D7", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 194 }, "outputId": "0580affa-4218-4d71-f552-b9421e40d980" }, "source": [ "doc(language_model_learner)" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "

language_model_learner[source]

language_model_learner(dls, arch, config=None, drop_mult=1.0, pretrained=True, pretrained_fnames=None, loss_func=None, opt_func='Adam', lr=0.001, splitter='trainable_params', cbs=None, metrics=None, path=None, model_dir='models', wd=None, wd_bn_bias=False, train_bn=True, moms=(0.95, 0.85, 0.95))

\n", "
\n", "

Create a Learner with a language model from dls and arch.

\n", "


\n" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } } ] }, { "cell_type": "markdown", "metadata": { "id": "BaGA5eYH2R3C", "colab_type": "text" }, "source": [ "Potentially, if you have your own pre-trained base model you want to use you can pass in a `pretrained_fname`, otherwise we'll use a pretrained `WikiText103` model.\n", "\n", "For our metrics we'll be using both accuracy and `Perplexity`\n", "\n", "What is Perplexity? Teaching our model how to deal with uncertainty in language.\n", "\n", "> Perplexity metric in NLP is a way to capture the degree of 'uncertainty' a model has in predicting (assigning probabilities to) some text. Lower the entropy (uncertainty), lower the perplexity. If a model, which is trained on good blogs and is being evaluated on similarly looking good blogs, assigns higher probability, we say the model has lower perplexity than a model which assigns lower probability. [source](https://www.quora.com/What-is-perplexity-in-NLP)" ] }, { "cell_type": "code", "metadata": { "id": "keIgEFBp3ZEH", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 151 }, "outputId": "2d65c4e6-4152-4264-9c67-0a9a912cc49b" }, "source": [ "doc(Perplexity)" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "

class Perplexity[source]

Perplexity() :: AvgLoss

\n", "
\n", "

Perplexity (exponential of cross-entropy loss) for Language Models

\n", "


\n" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } } ] }, { "cell_type": "markdown", "metadata": { "id": "mqWwl3WS2-vG", "colab_type": "text" }, "source": [ "So now let's build our `Learner`. For an arch we'll use the `AWD_LSTM` (we'll explore more later on) pretrained on `WikiText`:" ] }, { "cell_type": "code", "metadata": { "id": "j28hqXI-2Pkg", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 17 }, "outputId": "25dae27e-c705-4a82-8d87-48f6fab9c0e4" }, "source": [ "learn = language_model_learner(dls, AWD_LSTM, metrics=[accuracy, Perplexity()])" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } } ] }, { "cell_type": "markdown", "metadata": { "id": "08n9QWN13u1o", "colab_type": "text" }, "source": [ "For training our model, we'll simply use `fine_tune` for a few epochs (mostly due to the small sample size):" ] }, { "cell_type": "code", "metadata": { "id": "9HYqodLU3tWL", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 255 }, "outputId": "91519141-f840-4743-d1e1-a8b852a4317b" }, "source": [ "learn.fine_tune(5)" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
epoch | train_loss | valid_loss | accuracy | perplexity | time
0 | 3.058100 | 2.642663 | 0.390180 | 14.050574 | 00:10
" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } }, { "output_type": "display_data", "data": { "text/html": [ "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
epoch | train_loss | valid_loss | accuracy | perplexity | time
0 | 2.833319 | 2.535338 | 0.411559 | 12.620695 | 00:12
1 | 2.716148 | 2.460023 | 0.418889 | 11.705077 | 00:12
2 | 2.635717 | 2.426812 | 0.421273 | 11.322727 | 00:12
3 | 2.590852 | 2.411244 | 0.423066 | 11.147822 | 00:12
4 | 2.569319 | 2.407586 | 0.423487 | 11.107115 | 00:12
" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } } ] }, { "cell_type": "markdown", "metadata": { "id": "8U952CEZ4I0S", "colab_type": "text" }, "source": [ "That's about what we want in accuracy, ~30-40%, don't expect higher unless you *know* the model can. Remember: we're predicting the next word given the previous few, a hard task! Now that we have our LM, we want to save those embeddings away.\n", "\n", "Embeddings?" ] }, { "cell_type": "code", "metadata": { "id": "-Z-5YH_Y31PP", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "outputId": "985132c8-bb35-4cb7-fbfa-f0c1d5f4a6b9" }, "source": [ "learn.model[0].encoder" ], "execution_count": null, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "Embedding(184, 400, padding_idx=1)" ] }, "metadata": { "tags": [] }, "execution_count": 45 } ] }, { "cell_type": "markdown", "metadata": { "id": "i4DIuZp74hSx", "colab_type": "text" }, "source": [ "These embeddings! This is essentially our ImageNet weights. Along with this, for the downstream task we'll also want our vocab, so we'll actually go ahead and save our `DataLoaders` too.\n", "\n", "How do we do this? Using `torch.save` and `save_encoder`:" ] }, { "cell_type": "code", "metadata": { "id": "nvby1xzQ4Xfn", "colab_type": "code", "colab": {} }, "source": [ "torch.save(learn.dls, 'lm_dls.pth')" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "DkGWT5ql4yMr", "colab_type": "code", "colab": {} }, "source": [ "learn.save_encoder('fine_tuned_enc')" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "SJ40DO8_40dy", "colab_type": "text" }, "source": [ "Our model has been saved to the `models` directory now. 
Let's work on our downstream task:" ] }, { "cell_type": "markdown", "metadata": { "id": "0pTMC4Q75HbU", "colab_type": "text" }, "source": [ "## Classification" ] }, { "cell_type": "markdown", "metadata": { "id": "_iJjande5hy0", "colab_type": "text" }, "source": [ "Now classification is what we actually want to do: is this review `pos` or `neg`?\n", "\n", "Let's look at our `DataFrame` one more time to figure out how to frame our `DataBlock`:" ] }, { "cell_type": "code", "metadata": { "id": "wHHXSO2_40EZ", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 733 }, "outputId": "ecb5318d-6be8-4305-d2b2-c81f4bf7287e" }, "source": [ "df.head()" ], "execution_count": null, "outputs": [ { "output_type": "execute_result", "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
label | text | is_valid
0 | negative | Un-bleeping-believable! Meg Ryan doesn't even look her usual pert lovable self in this, which normally makes me forgive her shallow ticky acting schtick. Hard to believe she was the producer on this dog. Plus Kevin Kline: what kind of suicide trip has his career been on? Whoosh... Banzai!!! Finally this was directed by the guy who did Big Chill? Must be a replay of Jonestown - hollywood style. Wooofff! | False
1 | positive | This is a extremely well-made film. The acting, script and camera-work are all first-rate. The music is good, too, though it is mostly early in the film, when things are still relatively cheery. There are no really superstars in the cast, though several faces will be familiar. The entire cast does an excellent job with the script.<br /><br />But it is hard to watch, because there is no good end to a situation like the one presented. It is now fashionable to blame the British for setting Hindus and Muslims against each other, and then cruelly separating them into two countries. There is som... | False
2 | negative | Every once in a long while a movie will come along that will be so awful that I feel compelled to warn people. If I labor all my days and I can save but one soul from watching this movie, how great will be my joy.<br /><br />Where to begin my discussion of pain. For starters, there was a musical montage every five minutes. There was no character development. Every character was a stereotype. We had swearing guy, fat guy who eats donuts, goofy foreign guy, etc. The script felt as if it were being written as the movie was being shot. The production value was so incredibly low that it felt li... | False
3 | positive | Name just says it all. I watched this movie with my dad when it came out and having served in Korea he had great admiration for the man. The disappointing thing about this film is that it only concentrate on a short period of the man's life - interestingly enough the man's entire life would have made such an epic bio-pic that it is staggering to imagine the cost for production.<br /><br />Some posters elude to the flawed characteristics about the man, which are cheap shots. The theme of the movie \"Duty, Honor, Country\" are not just mere words blathered from the lips of a high-brassed offic... | False
4 | negative | This movie succeeds at being one of the most unique movies you've seen. However this comes from the fact that you can't make heads or tails of this mess. It almost seems as a series of challenges set up to determine whether or not you are willing to walk out of the movie and give up the money you just paid. If you don't want to feel slighted you'll sit through this horrible film and develop a real sense of pity for the actors involved, they've all seen better days, but then you realize they actually got paid quite a bit of money to do this and you'll lose pity for them just like you've alr... | False
\n", "
" ], "text/plain": [ " label ... is_valid\n", "0 negative ... False\n", "1 positive ... False\n", "2 negative ... False\n", "3 positive ... False\n", "4 negative ... False\n", "\n", "[5 rows x 3 columns]" ] }, "metadata": { "tags": [] }, "execution_count": 49 } ] }, { "cell_type": "markdown", "metadata": { "id": "iKIQWSmI5vo2", "colab_type": "text" }, "source": [ "So we'll want another `ColReader` to grab our label and to split our data by the `is_valid` column. First let's get everything ready by loading in those `DataLoaders` from earlier:" ] }, { "cell_type": "code", "metadata": { "id": "nsF-E58i5sbJ", "colab_type": "code", "colab": {} }, "source": [ "lm_dls = torch.load('lm_dls.pth')" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "2SfWkrwS59o8", "colab_type": "text" }, "source": [ "We need these because we already have a vocab to use, and a sequence length:" ] }, { "cell_type": "code", "metadata": { "id": "q0GSjHJr58jO", "colab_type": "code", "colab": {} }, "source": [ "blocks = (TextBlock.from_df('text', res_col_name='tok_text', seq_len=lm_dls.seq_len,\n", " vocab=lm_dls.vocab), CategoryBlock())" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "Bo-Tgf836XMV", "colab_type": "text" }, "source": [ "Then we'll build our `DataBlock`:" ] }, { "cell_type": "code", "metadata": { "id": "jfWZ2DRr6WVf", "colab_type": "code", "colab": {} }, "source": [ "imdb_class = DataBlock(blocks=blocks,\n", " get_x=ColReader('tok_text'),\n", " get_y=ColReader('label'),\n", " splitter=ColSplitter(col='is_valid'))" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "QuAWYZ0c6i3D", "colab_type": "text" }, "source": [ "And finally the `DataLoaders`:" ] }, { "cell_type": "code", "metadata": { "id": "9pVWoORw6hS-", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 17 }, "outputId": "955544af-f925-4852-f50e-e6efb9ca7adf" }, 
"source": [ "dls = imdb_class.dataloaders(df, bs=64)" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } } ] }, { "cell_type": "markdown", "metadata": { "id": "jPqNMQiF6p9h", "colab_type": "text" }, "source": [ "Those with a keen eye will notice how we called `.vocab`, but before that meant how we grab our classes! So how do we do this here?\n", "\n", "Since `Categorize` (what `CategoryBlock` adds) is a transform that's stored as an attribute, we can grab our classes that way:" ] }, { "cell_type": "code", "metadata": { "id": "fOxy1QmF6noG", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "outputId": "a45ba64d-c253-40de-9f99-510529d728ae" }, "source": [ "dls.categorize.vocab" ], "execution_count": null, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "(#2) ['negative','positive']" ] }, "metadata": { "tags": [] }, "execution_count": 55 } ] }, { "cell_type": "markdown", "metadata": { "id": "aKJ7xbZ67FvR", "colab_type": "text" }, "source": [ "## `text_classifier_learner`\n", "\n", "Next up is the `text_classifier_learner`. This is what we'll use to make our classification AWD_LSTM:" ] }, { "cell_type": "code", "metadata": { "id": "fQrY0jUQ65QT", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 210 }, "outputId": "00a07c94-23fd-4aac-c045-b6b91035a6be" }, "source": [ "doc(text_classifier_learner)" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "

text_classifier_learner[source]

text_classifier_learner(dls, arch, seq_len=72, config=None, pretrained=True, drop_mult=0.5, n_out=None, lin_ftrs=None, ps=None, max_len=1440, y_range=None, loss_func=None, opt_func='Adam', lr=0.001, splitter='trainable_params', cbs=None, metrics=None, path=None, model_dir='models', wd=None, wd_bn_bias=False, train_bn=True, moms=(0.95, 0.85, 0.95))

\n", "
\n", "

Create a Learner with a text classifier from dls and arch.

\n", "

Show in docs

\n" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } } ] }, { "cell_type": "code", "metadata": { "id": "RhFWxBxl7P0_", "colab_type": "code", "colab": {} }, "source": [ "learn = text_classifier_learner(dls, AWD_LSTM, metrics=[accuracy])" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "zR6h_Y1071yi", "colab_type": "text" }, "source": [ "Now we have our pretrained embeddings right? Let's look at it in our `learn`:" ] }, { "cell_type": "code", "metadata": { "id": "mIR1ECV8706_", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "outputId": "e3372dd6-6cfc-4959-f390-5d6ae74ca064" }, "source": [ "learn.model[0].module.encoder" ], "execution_count": null, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "Embedding(184, 400, padding_idx=1)" ] }, "metadata": { "tags": [] }, "execution_count": 62 } ] }, { "cell_type": "markdown", "metadata": { "id": "GuScoWnZ7_A-", "colab_type": "text" }, "source": [ "It's right there! So we have a `load_encoder` function that will copy over our weights to there:" ] }, { "cell_type": "code", "metadata": { "id": "rlBfNN0B8LZc", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 143 }, "outputId": "ca50b261-1ea7-4e52-e37a-d86558542ddd" }, "source": [ "doc(learn.load_encoder)" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "

TextLearner.load_encoder[source]

TextLearner.load_encoder(file, device=None)

\n", "
\n", "

Load the encoder file from the model directory, optionally ensuring it's on device

\n", "

Show in docs

\n" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } } ] }, { "cell_type": "code", "metadata": { "id": "2qoZdI7F76gs", "colab_type": "code", "colab": {} }, "source": [ "learn.load_encoder('fine_tuned_enc');" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "o8mRXeX48QN2", "colab_type": "text" }, "source": [ "> Note: It will automatically assume that your model is saved in `learn.path` in the `models` folder and has the extention `.pth`" ] }, { "cell_type": "markdown", "metadata": { "id": "3JykJcEc84ES", "colab_type": "text" }, "source": [ "So now we have our frozen model and our pretrained weights. Let's train!\n", "\n", "We're going to use a training methodology that Jeremy and Sebastian Ruder came up with for fine-tuning:\n", "\n", "1. Find a learning rate\n", "2. Lower that learning rate each time, slowly unfreezing fitting for 1 epoch at a time\n", "3. Finally unfreeze the model and fit for two epochs:" ] }, { "cell_type": "code", "metadata": { "id": "QSmtxVOx8Jj7", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 283 }, "outputId": "0abebb62-126d-4fb3-b77d-b5cbadc18896" }, "source": [ "lr = learn.lr_find(suggestions=True)" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } }, { "output_type": "display_data", "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAYgAAAEKCAYAAAAIO8L1AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAgAElEQVR4nO3deXjU5dXw8e+ZyUYgCUsSEBJ2AgRlUUCUVUFFquJSK6i1tgraFvtota22Vq1PfdU+drOCSq37Qi11QaVSF3ADFJB9kz0kbAkhG9mT8/4xExzCJJlAfjOT5HyuKxcz9287CUlO7l1UFWOMMaY2V6gDMMYYE54sQRhjjPHLEoQxxhi/LEEYY4zxyxKEMcYYvyxBGGOM8Ssi1AE0lcTERO3Zs2eowzDGmGZl1apVOaqa5O+YowlCRCYDfwXcwDOq+kit4z2AZ4EkIBe4XlUzRWQo8CQQD1QBD6nqP+t7Vs+ePVm5cqUDn4UxxrRcIrKnrmOONTGJiBuYDVwMpAPTRSS91mmPAS+q6mDgQeBhb3kxcIOqDgImA38RkfZOxWqMMeZETvZBjAS2q+pOVS0H5gFTa52TDnzsfb245riqfqOq27yv9wGH8NQyjDHGBImTCaIbsNfnfaa3zNda4Erv6yuAOBHp5HuCiIwEooAdDsVpjDHGj1CPYroLGC8iq4HxQBaePgcAROQ04CXgh6paXftiEZkpIitFZGV2dnawYjbGmFbByQSRBaT6vE/xlh2jqvtU9UpVHQb8xluWByAi8cB7wG9Udbm/B6jqXFUdrqrDk5KsBcoYY5qSkwliBdBPRHqJSBQwDVjge4KIJIpITQz34BnRhPf8N/F0YM93MEZjjDF1cCxBqGolMAtYBGwGXlfVjSLyoIhc5j1tArBVRL4BOgMPecu/B4wDbhSRNd6PoU7FGqisvBIOF5WFOgxjjAkKaSn7QQwfPlydnAdRXa2M+7/FDEltz+xrz3TsOcYYE0wiskpVh/s71mJmUjvtq925ZB4poV20fcmMMa1DqEcxNRtvrfb0r+/NLaal1LqMMaY+liACUFpRxXvr9xPldnG0vIojxRWhDskYYxxnCSIAS7YeorC0ku+NSAE8tQhjjGnpLEEE4K3V+0hsF820Ed0B2HvEEoQxpuWzBNGA/OIKPt5yiMuGdKVHp1gAMo+UhDgqY4xxniWIBvxnw37Kq6q5fFhX4mIiaR8baU1MxphWwRJEA95cnUXvpLac0S0BgNQOsey1GoQxphWwBFGP/fklfLkrl8uHdkNEAEjt2IZMq0EYY1oBSxD12Ly/AIDRfROPlaV2iCXzSAnV1TYXwhjTslmCqEdOYTkAyXHRx8pSOsZSXlXNoUJbk8kY07JZgqhHtndhviSfBJHaoQ1gQ12NMS2fJYh65BSV0S46gphI97Gy1I6eoa42kskY09JZgqhHTlE5ie2ijivr1t5bg8i1kUzGmJbNEkQ9cgrLSGwXfVxZTKSb5Lhoa2IyxrR4jiYIEZksIltFZLuI3O3neA8R+UhE1onIEhFJ8Tn2AxHZ5v34gZNx1iW76MQEAZ5mJmtiMsa0dI4lCBFxA7OBi4F0YLqIpNc67TE824oOBh4EHvZe2xG4HzgbGAncLyIdnIq1LjlFZSTGRZ1QntqhjS23YYxp8ZysQYwEtqvqTlUtB+YBU2udkw587H292Of4RcAHqpqrqkeAD4DJDsZ6goqqavKKK+qsQezPL6GiqjqYIRljTFA5mSC6AXt93md6y3ytBa70vr4CiBORTgFe66jDRZ45EL5DXGukdoilWmF/XmkwQzLGmKAKdSf1XcB4EVkNjAeygKpALxaRmSKyUkRWZmdnN2lgOd45EP5qECkdbS6EMablczJBZAGpPu9TvGXHqOo+Vb1SVYcBv/GW5QVyrffcuao6XFWHJyUlNWnw2fUkiNQONhfCGNPyOZkgVgD9RKSXiEQB04AFvieISKKI1MRwD/C
s9/Ui4EIR6eDtnL7QWxY0Od6lNJL8JIjTEmJwu8RqEMaYFs2xBKGqlcAsPL/YNwOvq+pGEXlQRC7znjYB2Coi3wCdgYe81+YC/4snyawAHvSWBU2Otw/C3yimCLeLru1jbLKcMaZFi3Dy5qq6EFhYq+w+n9fzgfl1XPss39Yogi67sIzYKDexUf6/RJ59IawGYYxpuULdSR22cuqYJFcjtUOs1SCMMS2aJYg65BSV+R3iWiOlQxtyisooKQ940JUxxjQrliDq4KlBnNj/UKNnYlsAduUcDVZIxhgTVJYg6uBZybXuGkRa5zgAvjlYGKyQjDEmqCxB+FFZVc2R4voTRK/EtkS4pNEJ4pUv9zD30x2nGqIxxjjO0VFMzVXu0XJUIbGePoioCBe9EtvyzcGiRt177qc72Z9fyjUjupPQJvJUQzXGGMdYDcKPQ8cmydXdBwGeZqbG1CByj5az53Ax5ZXVvLtu3ynFaIwxTrME4Ud96zD5Suscx94jxQGPZFq7Nw+AmEgX81dlnlqQXoWlFfzfoi227IcxpslZgvDj2CzqBhNEO1Rh+6HAmplW783DJXDr+D6szshj+6FT7+B+YvF2Zi/ewbS5yy1JGGOalCUIP2pqEPXNgwDo5x3JtDXAZqY1e/NI6xzHtWd3x+0S5q86Yf3BRtmbW8xzn+9mTN9EisoqLUkYY5qUJQg/cgrLaBPppm10/X34PTvFEuV2sS2ABKGqrN2bx9DU9iTHxTAhLYk3V2dSVa0nHeej72/B5YLHrh7CKzefbUnCGNOkLEH4UddWo7VFuF30TmobUEf1rpyj5JdUMDS1PQBXD0/hYEEZn207uX0sVu05wrvr9jNzXB+6JMRwereEY0niphdWUFrhzAzvh/+zmbv/vY71mfmO3N8YEz4sQfjR0CQ5X56RTA33QazxdlAP7e5JEOcP6EyH2MiT6qxWVX7/3iaS46K5ZVzvY+Wnd0vg8enD+OZgEY++v6XR923Ixn35PP3JTv65ci+XPvE5U2d/wcvL9/DVrlz255dQfRK1oUMFpcxZsp384oomj9cYc2psHoQf2YVldO8UG9C5aZ3bsWDtPorKKmlXT5PUmr15tI1y0y/Z028RFeFi6tBuvPpVBnnF5bSPbbjGUuO99ftZnZHHH64afEIz2Pi0JG48tyfPfbGb8/onMy7Ns5FS7tFynvh4O1ERLi4a1JkhKe1xuSTgZwI8/clO2ka5WXTHOD7cdJCXv8zg3rc2HDseFeHi8qFd+cVFAxrsvwFPTW3635ezI/so/1yxl6e/fxYDusT7PfdAfin3vrWemeP6MLJXx0bFbYw5OVaD8KOhlVx91Sy50VA/xJq9eZyRkoDb55fytJGplFdW88LSPY2K7/kvdtMnqS1XnZXi9/jdFw+gX3I77vrXWo4cLefjLQe56C+f8uKy3Tzz2U6umLOUcx/5mEf+s4XyyuqAnplxuJh31+3julE9SOkQy42je/HBHeNYctcEXvzRSH5/+el896wU3lydxfmPLeHZz3dRUVX3vfOKy/n+P74iK6+EBy5Np6S8iitmL+WdtSfODzmQX8q0ucv4cPMh7nt7w0nVVIwxjWcJopbKqmpyi8sbnCRXI5A1mUorqti8v4ChqR2OKx/QJZ5JAzvz7Be7KCwNrIklK6+ElXuOcOWZKcclG18xkW7+Mm0oR4rLufSJz/nR8yvp1DaKBbPGsOreC/jzNUMYnJLAU5/s4OYXV3K0rLLB5/79s524XcKPRvc6ViYi9Exsy7i0JK4f1YP/d8UZvH/7OIb16MCD727iu08u9ft5FZZW8IPnVrDjUBF/v2E4N47uxbu3jWFQ13hue201s179mtUZRwBPcpj+9+XkFJUzY2wvthwo5L31+wP6WhljTo2jCUJEJovIVhHZLiJ3+zneXUQWi8hqEVknIlO85ZEi8oKIrBeRzSJyj5Nx+sotbniZDV+pHWOJjnDV2w+xcV8BFVV6rIPa188m9iW/pIK
XlgdWi3jPOwP70sFd6z1vUNcEfjV5APvySrh1fB/enjWa9K7xJMRGcsWwFObeMJxHrzqDz7dlc+0zX5J7tLzOe+UUlfH6yr1cMawbXRJi6n1un6R2vPDDETxx7TA27ivgJ698fVxN4nBRGTc8+xUbs/KZc92ZjO3naQJLjo/h1Rmj+MmEPnyyNZsr5ixl6hOfM23uMg4VlPLCj0Zwz8UD6d85jj9/8A2V9dROjDFNw7EEISJuYDZwMZAOTBeR9Fqn3YtnK9JhePasnuMtvxqIVtUzgLOAW0Skp1Ox+sop9Pyi9LcXtT9ul9Cvc7t6axA1HdTDup+YIAantGd8WhLPfLaL4vKG/5J/Z+1+hqS2D6iP5OaxvVl7/4XcffEAoiPcJxy/ZkR3nv7+cLbsL+C7Ty3lQH6p3/u8sHQ35VXVzBzXp8Fngqdmccngrjx85Rl8ti2HX7+xHlVld85RrnpyKZv2FfDEtWcyKb3zcddFRbj45eQBLPv1RB6cOojCskoOF5Xz4k0jOatHR1wu4ecXprEz5yhvrj61OSTGmIY5WYMYCWxX1Z2qWg7MA6bWOkeBml7JBGCfT3lbEYkA2gDlQIGDsR5zbJmNAGsQAGnJJ67J5PtX85q9eZyWEEPneP9/ff9sYl9yj5bzyvKMep+zK+co67PyuXTwaQHHFhdT/4KAF6R35uWbz+ZAfil3v7EO1ePb94vKKnlh6W4uTO9M3+R2AT8X4OrhqfzPxH78a1Umv/r3Oq58cin5JRW8OmMUk0/vUud17aIjuOGcnnx4x3hW3DuJs3p82yl9YXpnBqck8NePtgXcf2KMOTlOJohuwF6f95neMl8PANeLSCaevatv85bPB44C+4EM4DFVza39ABGZKSIrRWRldvbJzSeoLdB1mHz16xzHwYIy8ksqOJBfyveeXsag+xZxy0sreX/DAVZnHPHbvFTjrB4dGd23E09/urPe+Qvvrt2HCFzSQPNSY43o2ZE7L+zPkq3ZvLPu2/Z9VeU3b66nsKySn0zoe1L3vn1SP646M4XXV2YSFxPBGz8ZzVk9OjR8IeByCTGRx9d8RIQ7L+xP5pES/rlybx1XGmOaQqg7qacDz6tqCjAFeElEXHhqH1VAV6AXcKeI9K59sarOVdXhqjo8KSmpSQLKLqxJEIEPO+3fxfOX9YtLd/Odxz9jQ1Y+U4d2ZdWePG59eRWZR0rqTRAAt53fj5yiMl77yn8tQlVZsHYfI3p2bLAf4GTceG5PhqQk8OA7G8kr9jSzPffFbt5es487L0hjSAPx10VEePjKM3j0qjN448fn0su7E9+pGNcvkRE9O/Cn/261WePGOMjJBJEFpPq8T/GW+boJeB1AVZcBMUAicC3wvqpWqOoh4AtguIOxHpNTVEZ0hKveOQ211cxt+OMH39CxbRQLZo3m/64ewvJ7zuf5H47g5jG9uOLM2pWn443q3Ylzenfirx9t44ifDuOtBwvZdqiIS4c0be2hhtslPHzlYI4UV/DQe5tZvvMwDy3czIXpnU+69lAjKsLFNSO606kRtbL6iAiPXjWYympl5kurAuq7McY0npMJYgXQT0R6iUgUnk7oBbXOyQAmAojIQDwJIttbfr63vC0wCmj6qcF+1MyiFgl8Elm39m0Yktqeq85M4e1Zo+nrTRgRbhcT+idz7yXpJMc1/Ff//ZelU1hayWP/3XrCsXfW7sPtEi6up+3+VKV3jWfG2N78a1UmM15cSc9Osfzxe0MaPaEuGHonteNv04ex9UABv/jXiX0nxphT51iCUNVKYBawCNiMZ7TSRhF5UEQu8552JzBDRNYCrwE3qucnfTbQTkQ24kk0z6nqOqdi9XWkuJxOjWheAk9b+ds/Hc0fvzeE2KiTn5w+oEs8N5zTg1e/ymBD1rdrHZWUV7Fg7T7O7dOpUX0jJ+P2Sf3o0SkWVXj6+8Mb7OQOpQn9k/nV5AG8t34/c5bYNq7GNDVHl9pQ1YV4Op99y+7zeb0JGO3
nuiI8Q12DLq+4IqRbgd4+KY0Fa/Zx39sbmH/ruWTkFh/rx/jtd2qPEm56MZFuXr/lHIrLq5qkv8BpM8f1ZvP+Ah7771YO5Jfys4n9AlrmwxjTMFuLqZaCkgpSOrQJ2fMT2kTyq4sH8Mv567hvwQbeXrMPlwjP3TiCCf2TgxJDXcNxw5GI8MhVg4mLieS1rzL499eZzBjbmxnjejeqH8kYc6JQj2IKO3klFbSPDW2zynfPTGFoanteXp5B946xvHvbmKAlh+YoJtLN/15+Oh/8fDzn9U/mrx9t45LHP2NHdmA7/Rlj/LME4aO6WskrLg9pExN4+jT+Om0od188gH//+FxSOwa2smxr1yuxLbOvO5N5M0dRWFrJ5bO/OOn9NoxpLj7fluPY97klCB9F5ZVUK7Rv07hOaif06NSWW8f3OWGimGnYqN6deOuno+nWvg03PreCuZ/uYPuhIsoqndlEyZhQmr14O3/+4BtH7m2NtD5qNq1JCHETkzl1qR1jmf/jc/mf11bz/xZu4f8t3IIIdE1owwXpnbl1fB9HJhwaE2wZucWO7ZFiCcJHfoknQbQPcROTaRrtoiP4+w3DWZeVz66cIvYcLmbrgUJeWr6HV7/M4JoRqdw6oQ/d2oduUIIxp6K8spp9+SWONUNbgvCRV1ODsATRYrhcwtDU9sctdbI3t5g5S7Yzb0UGr3y5h/P6J3PNiFTOG5BMpNtFdbWSX1LBlgOFrNidy4rduezNLebmsb25dmT3sJw4aFqnzCPFqEIPSxDOO1aDaMT2n6b5Se0Yy8NXDmbW+f14Zfke5q/K5KMth+gQG0mk20Xu0XIqvbvWiUD/znG0j43i3rc28PaaLB6+8oxjs+WNCaU93rXIegS4RXJjWYLwkVfiWQMp1MNcTXB0a9+GX04ewM8vSGPJ1mwWbthPpMtFYlwUndpG06NTLMN7dCQhNhJVZf6qTB5auJkpf/2cR646gyvP9L/lqzHBUrNYZXerQTjPmphapwi3i0npnU/YwMiXiHD1cE8z1MwXV/L79zYz+fQup7S0ijGnas/hYtpEuh1bPcCGufrIL6kgOsJlQ0tNnRLbRfOb76QHtMGTMU7bc7iY7h1jG7W4aGNYgvCRXxz6WdQm/J3Vo0NAGzwZ47S9ucWOTqS1BOEjr6Q8LCbJmfBXs8HTvDo2eDLGaapKRm6xYx3UYAniOKFeydU0H6N6d2Jkz4489clOm6FtQiK7sIySiipLEMGSX1Jhs6hNwG6b2JcDBaX8a2VmqEMxrVCGdwRTs21iEpHJIrJVRLaLyN1+jncXkcUislpE1onIFJ9jg0VkmYhsFJH1IuL4ugj5JRU2i9oEbEzfRIamtufJJTsoLK0IdTimldlz2DsHojkmCBFx49kZ7mIgHZguIrV3vLkXz05zw/BsSTrHe20E8DJwq6oOAiYAjv8E5lkntWkEEeHXUwZysKCUW19eRXlldahDMq3IntxiRCClQzNMEMBIYLuq7lTVcmAeMLXWOQrEe18nAPu8ry8E1qnqWgBVPayqjjb0llVWUVJRZX0QplFG9urII1cN5ovth/nVv21vbBM8e3OL6ZrQhqgI536NO5kgugF7fd5nest8PQBcLyKZeLYmvc1bngaoiCwSka9F5Jf+HiAiM0VkpYiszM4+tfXQa5bZSLBlNkwjffesFO68II03V2fxh0VbQx2OaSX2HD7q2AzqGqHupJ4OPK+qKcAU4CURceGZ4T0GuM777xUiMrH2xao6V1WHq+rwpKSkUwqkZqlv64MwJ2PW+X2ZPrI7Ty7ZwXNf7Ap1OKYVcHqIKzi71EYWkOrzPsVb5usmYDKAqi7zdkQn4qltfKqqOQAishA4E/jIqWDzji3UZwnCNJ6I8L9TB3G4qIzfvbOJ2Cg314zoHuqwTAt1tKySnKJyx3ebdLIGsQLoJyK9RCQKTyf0glrnZAATAURkIBADZAOLgDNEJNbbYT0e2ORgrLYOkzllEW4Xf7t2GOP
Tkrj7jfW8vab230PGNI0Mh1dxreFYglDVSmAWnl/2m/GMVtooIg+KyGXe0+4EZojIWuA14Eb1OAL8CU+SWQN8rarvORUr+G4WZH0Q5uRFR7h5+vtnMapXJ37++lre37A/1CGZFujbIa5tHX2Oo0tRqupCPJ3PvmX3+bzeBIyu49qX8Qx1DYq8Ys9S3zZRzpyqmEg3z/xgONc+8yW/nL+O8wd0dnSkiWl9MnKPAs4t813Dvmu98ksqEIG4aFu+2Zy6ttER/M/EvhSUVvLFjpxQh2NamIzcYhLaRDr+B60lCK/8Es86TLadpGkqo/smEhcdwcJ11sxkmtaew86PYAJLEMfkFdsyG6ZpRUe4mZTemf9uOkhFlc2yNk0nw+FlvmtYgvDKK6mwSXKmyU054zTySypYtuNwqEMxLURlVTVZR0ocXYOphiUIr/zichviaprc2H6JtI1ys3C9NTOZprE/v5TKarUaRDDZSq7GCTGRbiYO7MyijQeotGYm0wQOFZYB0CXe8QWuLUHUyCuxlVyNM6accRpHiitYvjM31KGYFiCnyJMgEttFO/4sSxBAdbVaDcI4ZkL/JGKj3Cy0SXOmCRxLEHHO95laggAKyypRhXhLEMYBMZFuzhuQzKIN1sxkTl1OoWdSb6e2VoMIimMrudooJuOQ75xxGoePlrNsp41mMqcmp6iM9rGRQZmdbwkCyCvxZGRrYjJOOX9AMp3aRvHC0t2hDsU0czlFZUHpfwBLEMC3K7laJ7VxSkykm+tG9eDDzYfYmV0U6nBMM+ZJEMFp7bAEgc9uclaDMA76/qgeRLldPGsbCplTkF1oNYigyju23aglCOOcpLhoLh/WlfmrMo+tHmxMY+UUlVuCCKb8mqW+rQZhHHbTmN6UVlTzypcZoQ7FNEOlFVUUlVWSFNcCEoSITBaRrSKyXUTu9nO8u4gsFpHVIrJORKb4OV4kInc5GWdecQVtIt1ER7idfIwx9O8Sx9h+ibywdDfllTbk1TROdmHNJLlm3gchIm5gNnAxkA5MF5H0Wqfdi2enuWF4tiSdU+v4n4D/OBVjjXybRW2C6KYxvThUWMa76/aFOhTTzARzFjUEmCBEpK2IuLyv00TkMhFp6DfqSGC7qu5U1XJgHjC11jkKxHtfJwDHfmJE5HJgF7AxkBhPRZ53LwhjgmF8WhJpndvx6Ptb2OvdW9iYQOQUeZrDw62J6VMgRkS6Af8Fvg8838A13YC9Pu8zvWW+HgCuF5FMPFuT3gYgIu2AXwG/q+8BIjJTRFaKyMrs7OzAPhM/8outBmGCR0R4fPowSsqruOHZr479VWhMQ8KyBgGIqhYDVwJzVPVqYFATPH868LyqpgBTgJe8NZUHgD+rar0DxlV1rqoOV9XhSUlJJx1EvtUgTJAN6BLPcz8cwf78En7w7FcUllaEOiTTDOR4+yA6hVkfhIjIOcB1wHvesoZ6dLOAVJ/3Kd4yXzcBrwOo6jIgBkgEzgb+ICK7gduBX4vIrABjbbS8knLat7FlNkxwndWjI09edxZbDxQy48WVtuucaVBOURnxMRFBG1ATaIK4HbgHeFNVN4pIb2BxA9esAPqJSC8RicLTCb2g1jkZwEQAERmIJ0Fkq+pYVe2pqj2BvwD/T1WfCDDWRsuzJiYTIucNSOYP3x3M8p25PGcT6EwDcorKSQxS/wMEmCBU9RNVvUxVH/U2AeWo6s8auKYSmAUsAjbjGa20UUQeFJHLvKfdCcwQkbXAa8CNqqon/dmchNKKKsoqq22SnAmZK89MYeKAZP764TYOFpSGOhwTxoI5ixoCH8X0qojEi0hbYAOwSUR+0dB1qrpQVdNUtY+qPuQtu09VF3hfb1LV0ao6RFWHqup//dzjAVV9rHGfVuAKbJkNEwbuv3QQFdXKQ+9tDnUoJozlFJWRFG4JAkhX1QLgcjzzEnrhGcnU7CXHx7D9oYu5+qzUhk82xiHdO8Vy6/g+LFi7j2U7bElw4192EBfqg8A
TRKR33sPlwAJVrcAzh6FFiHC7grK2ujH1+cmEPqR0aMP9CzZYh7U5QWlFFYWlwVtmAwJPEE8Du4G2wKci0gMocCooY1qjmEg3912SzjcHi3hp2Z5Qh2PCzOGjnklyYdcHoaqPq2o3VZ2iHnuA8xyOzZhW54L0zozpm8gTi7dTVFYZ6nBMGMkpDO4kOQi8kzpBRP5UM2tZRP6IpzZhjGlCIsJdF/Un92g5z35uw17Nt47Nog7DJqZngULge96PAuA5p4IypjUbmtqeC9M78/dPd9q+EeaYb5fZCL9O6j6qer934b2dqvo7oLeTgRnTmt15YX+Kyit56pOdoQ7FhImahfrCrokJKBGRMTVvRGQ0UOJMSMaY/l3imDqkK88v3cWhQps8ZzyT5OKiI4iJDN6+NYEmiFuB2SKy27s+0hPALY5FZYzh9klpVFQpf/lwG0FeYMCEoeyisqD2PwBEBHKSqq4FhohIvPd9gYjcDqxzMjhjWrOeiW2ZNiKVV77M4PNtOVw+rBtXDOtGr0QbH9Ia5RQGdxY1NHJHOVUt8M6oBvi5A/EYY3zcf+kgHrt6CKkd2/C3j7dx3mNLeHtN7UWRTWuQU1RGYlxwV50+lenD0mRRGGP8iopw8d2zUnjl5lEsu3sip3eL57H/brWZ1q1QTlF5UDuo4dQShDWKGhNEXRJiuGNSGntzS3hztdUiWpPyymrySyrCK0GISKGIFPj5KAS6BilGY4zX+QOSOaNbArMXb6fSahGtxuGjwZ9FDQ0kCFWNU9V4Px9xqhpQB7cxpumICD+b2I89h4t5a82+UIdjgiSnsGYORPPpg2iQiEwWka0isl1E7vZzvLuILBaR1SKyTkSmeMsvEJFVIrLe++/5TsZpTHMyaWAyg7rG88TH26wW0UqEYpkNcDBBiIgbmA1cDKQD00UkvdZp9+LZaW4Yni1J53jLc4BLVfUM4AfAS07FaUxzU1OL2H24mAVrrRbRGmR7E0RYD3NtpJHAdu/SHOXAPGBqrXMUiPe+TgD2AajqalWt+c7fCLQRkeB+ZYwJYxcM7MzA0+L53TubeHtNlk2ka+G+XYep5SSIbsBen/eZ3jJfDwDXi0gmsBC4zc99rgK+Vjg+BY4AABmMSURBVNWy2gdEZGbNCrPZ2dlNE7UxzYDLJTx53Zn0TmrL/8xbw4wXV3Ig/9SW5CitqOLhhZtP+T6m6R3ILyUuOoI2UcFbZgMc7oMIwHTgeVVNAaYAL4nIsZhEZBDwKHUs66Gqc1V1uKoOT0pKCkrAxoSLnoltmX/rudz7nYF8vj2HC//8CXtzi0/6fq9+mcHTn+7kqU92NGGUpils3l9AWpe4oD/XyQSRBfhu9JziLfN1E/A6gKouA2KARAARSQHeBG5QVfuONcYPt0u4eWxv3v7pGApKK3l33f6Tuk9ZZRVPf+r5Mfv3qkyO2mZFYaOqWtm4r4AzuiUE/dlOJogVQD8R6SUiUXg6oRfUOicDmAggIgPxJIhsEWkPvAfcrapfOBijMS1C/y5xpJ8Wz+Ith07q+vmrMjlYUMYdk9IoLKvkbRtCGzZ25RyluLyKQV3jGz65iTmWIFS1EpgFLAI24xmttFFEHhSRy7yn3QnMEJG1wGvAjerpbZsF9AXuE5E13o9kp2I1piU4f0AyqzKOkF9c0ajrKqqqeXLJDoamtudnE/sy8LR4Xly22zq+w8TGffkAnN7CahCo6kJVTVPVPqr6kLfsPlVd4H29SVVHq+oQVR2qqv/1lv9eVdt6y2o+Tu5PI2NaifMGJFFVrXy2vXEDNt5es4/MIyXcdn5fRITvj+rBlgOFfJ1xxKFITWNsyMonKsJF3+R2QX92qDupjTFNZGhqB9rHRvJxI5qZqqqVOYu3M/C0eM4f4KmkTx3albjoCF5atsepUE0jbMgqYGCXOCLdwf91bctlGNNCuF3C+LQkPtmaTXW14nL5X3D58205rMvKo7isiozcYnbmHGXOdWci4jm/bXQEV52
VwqtfZnDvJWVBH3tvvqWqbNiXz6VDQrP0ndUgjGlBzh+QzOGj5azLyvd7fNO+Aq7/x5f84f2tzFmyncVbD3FBemcmD+py3HnXj+pOeVU1r6/c6/c+JjgycospLK0MyQgmsBqEMS3KuH5JuAQWbznE0NT2Jxz/60ffEBcTweK7JtCpbdSxWkNtfZPjOLdPJ57/Yjc/PLdX0CdoGY8NWZ792U7vGpoEYTUIY1qQDm2jGNa9A4u3ntgPsXFfPos2HuSmMb1IbBddZ3KocccFaRwqLOO5pbucCtc0YMO+fCJcQlqX4HdQgyUIY1qc8/onsS4zn+zC41en+cuH24iPieBHY3oFdJ8RPTsycUAyTy7ZQV5xuROhmgZsyMonrXMc0RGhqcFZgjCmhTnPOxppiU8tYn1mPh9sOsjNY3sTHxMZ8L1+Mbk/RWWVPLnEFjMINlXPDOrTuwV/glwNSxDGtDDpp8VzWkIMDy3czP8t2sKB/FL+8uE3JLSJ5IejezbqXgO6xHPFsG48v3Q3+/NLnAnY+LUvv5Tco+UhmSBXwxKEMS2MiPDMD4YzsmdH5izZwZhHP+ajLYeYMbYXcY2oPdS4Y1IaqvDXD7c5EK2py4as0M2grmGjmIxpgQZ1TWDuDcPZm1vMC0t3s/VgIT84t+dJ3Su1YyzXjerOC0t38+MJfejRqW3TBmv82piVj0tgYBdrYjLGOCC1Yyz3XpLOSzedfVK1hxq3ju+DiPDaVzYvIlg27Cugb3K7kA4xtgRhjGlQ5/gYJg5IZv6qvZRX2j7YwbAhKz9k8x9qWIIwxgRk+tndySkq58PNB0MdSotXUVXNocKykDfnWYIwxgRkXL8kurVvw6tfZoQ6lBavoMSzZHtCm9B2E1uCMMYExO0Spo1I5fPtOew5fDTU4bRo+TUJIvbk+42agqMJQkQmi8hWEdkuInf7Od5dRBaLyGoRWSciU3yO3eO9bquIXORknMaYwFw9PBW3S5i3wjqrnXQsQbRpoQlCRNzAbOBiIB2YLiLptU67F89Oc8PwbEk6x3ttuvf9IGAyMMd7P2NMCHVJiOH8Acn8a6V1VjupxScIYCSwXVV3qmo5MA+YWuscBWoG+SYANRvhTgXmqWqZqu4CtnvvZ4wJsWtHejqrP7LOasfUJIjGLIviBCcTRDfAtx6a6S3z9QBwvYhkAguB2xpxrTEmBMaleTqrn/3CVnl1SkErqEEEYjrwvKqmAFOAl0Qk4JhEZKaIrBSRldnZjduH1xhzctwuYea43qzYfYQvdx4OdTgtUkFpJQDxLThBZAGpPu9TvGW+bgJeB1DVZUAMkBjgtajqXFUdrqrDk5KSmjB0Y0x9rhmRSmK7KJ5YvD3UobRI+SUVREe4iIkMbderkwliBdBPRHqJSBSeTucFtc7JACYCiMhAPAki23veNBGJFpFeQD/gKwdjNcY0Qkykm5vG9OazbTms3ZsX6nBanPziipA3L4GDCUJVK4FZwCJgM57RShtF5EERucx72p3ADBFZC7wG3KgeG/HULDYB7wM/VdUqp2I1xjTe9aO6Ex8TwWyrRTS5/JLwSBCOTtNT1YV4Op99y+7zeb0JGF3HtQ8BDzkZnzHm5MXFRHLj6F48/tE2th4opH+XuBPOyS+uoKC0gtSOsSGIsPkKlwQR6k5qY0wz9sNzexIb5WbOkhNrEfklFVzx5Bdc+OdPWbXnSAiia77ySypC3kENliCMMaegQ9sovn9OD95es49/fP7tsNfKqmpmvfo1GYeL6dg2ih89v4JvDhaGMNLmxWoQxpgW4Y5JaUwe1IX/fXcTj/xnC6rK79/bzGfbcvj95aczb+YooiJc3PCPr8g8UhzqcJuFglJLEMaYFiAm0s3s687k+lHdeeqTHVz55FKeX7qbm8b0YtrI7qR2jOXFH43kaHklN/zjK/KKy0MdclirqlYKSyutickY0zK4XcL/Tj2dn1+QxuqMPCb0T+LXUwYeOz7wtHievXEEGbnFPPj
OphBGGv4KS8NjFjXYntTGmCYiIvxsYj8mDexM76S2uF1y3PERPTvykwl9ePzj7Vw6pCvnDUgOUaThLVwW6gOrQRhjmlh61/g6ZwD/9Py+9Etux2/eXH/sL2VzPEsQxphWKTrCzaPfHcz+glIefX9LqMMJS5YgjDGt1pndO/Cj0b14eXkGy22xvxMcW+o7xNuNgiUIY0wI3HlhGj06xXLzCyt5Z+2+hi9oRQpKPCu5Wg3CGNMqxUZF8NqMUaR1bsdtr63m3rfWU1phy62BNTEZYwxd27fhn7ecwy3jevPy8gy++9RSissrQx1WyOWXVBDpFtqEeKlvsARhjAmhSLeLe6YM5Ilrh7Ehq4B/rtjb8EUtXM0yGyLS8MkOswRhjAm5SwZ3ZXiPDjzz2S4qqqpDHU5IFYTJQn1gCcIYEyZuGd+HrLwSFq7fH+pQQipcFuoDhxOEiEwWka0isl1E7vZz/M8issb78Y2I5Pkc+4OIbBSRzSLyuIRDfcsY45iJA5Lpm9yOpz7ZiaqGOpyQyS+pID6mhScIEXEDs4GLgXRguoik+56jqneo6lBVHQr8DXjDe+25eDYSGgycDowAxjsVqzEm9FwuYea43mzeX8Bn23JCHU7IhMtKruBsDWIksF1Vd6pqOTAPmFrP+dPxbDsKoHj2p44CooFI4KCDsRpjwsDUoV3pHB/N05/uCHUoIdNampi6Ab5DEjK9ZScQkR5AL+BjAFVdBiwG9ns/FqnqZgdjNcaEgegINz8a3Ysvth9mXWZewxe0MNXVSkErSRCNMQ2Yr6pVACLSFxgIpOBJKueLyNjaF4nITBFZKSIrs7OzgxqwMcYZ08/uTkKbSGa8uJKvM1rXVqVF5ZVUa3hMkgNnE0QWkOrzPsVb5s80vm1eArgCWK6qRapaBPwHOKf2Rao6V1WHq+rwpKSkJgrbGBNK8TGRvDbDswvdtKeX8+qXGaEOKWjyi8NnFjU4myBWAP1EpJeIROFJAgtqnyQiA4AOwDKf4gxgvIhEiEgkng5qa2IyppVI7xrPO7PGMKpPJ3795np+OX8tBa1gefBvF+pr4QlCVSuBWcAiPL/cX1fVjSLyoIhc5nPqNGCeHj+ubT6wA1gPrAXWquo7TsVqjAk/7WOjeO7GEcw6ry/zV2VywZ8+4f0NB0IdlqMKwmgdJnB4RzlVXQgsrFV2X633D/i5rgq4xcnYjDHhz+0S7rqoPxekd+buN9Zz68uruDC9M7+//HSS42NCHV6Tq6klhcNS3xA+ndTGGFOnIantWTBrNPdcPIBPt2Vz0V8+5YNNLW/kezit5AqWIIwxzUSk28Ut4/vw7m1jOC2hDTNeXMlv3lxPSXnLWSbcEoQxxpyCvslxvPnTc5k5rjevfJnBhMcW88f/bmVvbnGoQztl+SUVuF1Cu2hrYjLGmJMSHeHm11MG8tqMUQw8LZ4nFm9n7B8Wc8OzX5F7tDzU4Z00zzpMEWGx1Dc43EltjDFOOqdPJ87p04msvBL+tXIvcxbv4LdvbWD2dWeGOrSTkl9SGTbNS2A1CGNMC9CtfRtun5TG7Rf04731+3l3XfPc5zqc1mECSxDGmBZk5tjeDEltz2/f2kB2YVmow2m0cNosCCxBGGNakAi3iz9ePZij5VXc+9b6ZrevhCUIY4xxUN/kOO66MI1FGw+yYG3zamqyJiZjjHHYTWM8TU0PvbeZ4vLKUIcTEFW1BGGMMU5zu4TffmcghwrL+Mdnu0IdTkCKy6uorFZLEMYY47ThPTty0aDOPPXJDnKKwr/DOtxmUYMlCGNMC/bLyQMorazm8Y+2hTqUBlmCMMaYIOqT1I7pI1N59csMdmYXhTqcetUs9R0fYwnCGGOC4n8mphEV4eIP728NdSj1shqEMcYEWVJcND8e34f3Nx7gofc2UV0d2NyIzfsLePg/m/n3qkyHI/TIC8ME4ehaTCIyGfgr4AaeUdVHah3/M3Ce920skKyq7b3HugPP4NnXWoEpqrrbyXi
NMS3TT87rS3ZRGX//bBcHCsp47OrBREe4OVxUxpurs9i8v5D4NhEktIlEEN7feIDN+wsAiIpwcXbvjqR0iHU0xtUZebSLjqBLQvhshORYghARNzAbuADIBFaIyAJV3VRzjqre4XP+bcAwn1u8CDykqh+ISDug2qlYjTEtm9sl/O6yQZyW0IZH39/CwYJSEttF8cGmg1RUKclx0RSXV1FU5pkzMSS1PQ9OHcRZPTpw5ZylPLZoK3+ZNqyBp5w8VWXJ1kOM6ZtIVET4NOw4WYMYCWxX1Z0AIjIPmApsquP86cD93nPTgQhV/QBAVcO7d8kYE/ZEhB9P6MNpCTH8Yv5a2kVHcMM5PblmRCppneMAqKyqpqSiijifjuKbx/Zi9uId/GhMLwantHcktm8OFrE/v5TbJyU5cv+T5WSC6Abs9XmfCZzt70QR6QH0Aj72FqUBeSLyhrf8Q+Bu717VvtfNBGYCdO/evUmDN8a0TJcP68a5fTqREBtJdIT7uGMRbhdx7uP/gr91fB/mfbWXh97bzLyZowLaqyErr4Qu8TG4XYHt67B46yEAxqclB/hZBEe41GWmAfN9EkAEMBa4CxgB9AZurH2Rqs5V1eGqOjwpKbwyrzEmfCXHx5yQHOoSFxPJ7ZP68eWuXD7afOhYeV5xOaUVx293unFfPje/sILRj3zM955eRsbhwHa5W7zlEANPiw+r/gdwtgaRhaeDuUaKt8yfacBPfd5nAmt8mqfeAkYB/3AgTmOMqde0kd15bulu7l+wkReX72HL/gIOFZYR4RLSOsdxerd4Cksr+c+GA8THRHDDOT148+sspjz+GQ9cNoirzuxWZ82joLSCVXuOMGNc7yB/Vg1zMkGsAPqJSC88iWEacG3tk0RkANABWFbr2vYikqSq2cD5wEoHYzXGmDpFul3cd0k6t726mpzCMsb0S6R/5zjySypYn5XPB5sOUl5Zzc/O78tNY3uT0CaSmeN68/N/ruWuf61l5e5cHr7yDL9J4ottOVRWK+f1D6/mJXAwQahqpYjMAhbhGeb6rKpuFJEHgZWqusB76jRgnvos3K6qVSJyF/CReL6iq4C/OxWrMcY0ZEL/ZNb/7iK/x1SVauW4PoeUDrG8NnMUj76/hbmf7uTcvolcNqTrCdcu2ZpNXEwEZ3Z3pgP8VDg6D0JVFwILa5XdV+v9A3Vc+wEw2LHgjDGmiYgIbj8tSG6X8KvJA/hqVy73v72Bc3p3Iiku+thxVWXJN4cY1y+JCHe4dAl/K/wiMsaYFsTtEh7z7nL327c2HLfL3eb9hRwsKGNC//AcZGMJwhhjHNY3OY47JqXx/sYDvLtu/7HyY8NbwzRBONrEZIwxxmPG2F68v2E/9761gX+u2EtOURl7Dhdzerd4kuPCa3hrDatBGGNMEES4Xfzxe0PondSWorJKUjrEcvmwrtz7nfRQh1Ynq0EYY0yQ9E2O482fjA51GAGzGoQxxhi/LEEYY4zxyxKEMcYYvyxBGGOM8csShDHGGL8sQRhjjPHLEoQxxhi/LEEYY4zxS3wXjmrORCQb2ON9mwDkB/i65t9EIKeRj/W9XyDHGiqrLz6n4gw01vreh3Os4fT/35xite/V4Mcaqv//HqrqfzEoVW1xH8DcQF/7/LvyVJ4TyLGGyhqIz5E4A421vvfhHGs4/f83p1jte7X1fK/W99FSm5jeacRr37JTeU4gxxoqqy8+p+Ks63jtsvreh3Os4fT/7688XGO179WGtZTv1Tq1mCamUyUiK1V1eKjjaEhziRMsVqc0l1ibS5xgsdalpdYgTsbcUAcQoOYSJ1isTmkusTaXOMFi9ctqEMYYY/yyGoQxxhi/LEEYY4zxyxKEMcYYvyxBNEBExorIUyLyjIgsDXU89RERl4g8JCJ/E5EfhDqe+ojIBBH5zPu1nRDqeBoiIm1FZKWIXBLqWOoiIgO9X8/5IvLjUMdTHxG5XET+LiL/FJELQx1PfUSkt4j8Q0TmhzqW2rz
fly94v5bXNfX9W3SCEJFnReSQiGyoVT5ZRLaKyHYRubu+e6jqZ6p6K/Au8EI4xwpMBVKACiAzzGNVoAiIaQaxAvwKeN2ZKJvse3Wz93v1e4Bj+1o2UaxvqeoM4FbgmjCPdaeq3uRUjLU1MuYrgfner+VlTR7Mycyuay4fwDjgTGCDT5kb2AH0BqKAtUA6cAaeJOD7kexz3etAXDjHCtwN3OK9dn6Yx+ryXtcZeCXMY70AmAbcCFwSrnF6r7kM+A9wbTh/TX2u+yNwZjOJ1bGfqVOI+R5gqPecV5s6lghaMFX9VER61ioeCWxX1Z0AIjIPmKqqDwN+mw9EpDuQr6qF4RyriGQC5d63VeEcq48jQLQTcUKTfV0nAG3x/ECWiMhCVa0Otzi991kALBCR94BXmzLGpoxVRAR4BPiPqn7tRJxNFWuwNSZmPLXvFGANDrQItegEUYduwF6f95nA2Q1ccxPwnGMR1a2xsb4B/E1ExgKfOhmYH42KVUSuBC4C2gNPOBvaCRoVq6r+BkBEbgRymjo51KOxX9MJeJocooGFjkZ2osZ+r94GTAISRKSvqj7lZHC1NPbr2gl4CBgmIvd4E0mw1RXz48ATIvIdTm0pDr9aY4JoNFW9P9QxBEJVi/Eks7Cnqm/gSWjNhqo+H+oY6qOqS4AlIQ4jIKr6OJ5fbmFPVQ/j6SsJO6p6FPihU/dv0Z3UdcgCUn3ep3jLwpHF6ozmEmtziRMsVqeFJObWmCBWAP1EpJeIROHpfFwQ4pjqYrE6o7nE2lziBIvVaaGJORi98qH6AF4D9vPtsM+bvOVTgG/wjAr4TajjtFgt1uYSp8XaumK2xfqMMcb41RqbmIwxxgTAEoQxxhi/LEEYY4zxyxKEMcYYvyxBGGOM8csShDHGGL8sQZgWTUSKgvy8JtkzRDz7ZeSLyBoR2SIijwVwzeUikt4UzzcGLEEY0ygiUu/6Zap6bhM+7jNVHQoMAy4RkYb2eLgcz4qzxjQJSxCm1RGRPiLyvoisEs+udgO85ZeKyJcislpEPhSRzt7yB0TkJRH5AnjJ+/5ZEVkiIjtF5Gc+9y7y/jvBe3y+twbwineJa0RkirdslYg8LiLv1hevqpbgWc65m/f6GSKyQkTWisi/RSRWRM7FsxfE/3lrHX3q+jyNCZQlCNMazQVuU9WzgLuAOd7yz4FRqjoMmAf80ueadGCSqk73vh+AZ7nykcD9IhLp5znDgNu91/YGRotIDPA0cLH3+UkNBSsiHYB+fLuE+xuqOkJVhwCb8SzFsBTP2jy/UNWhqrqjns/TmIDYct+mVRGRdsC5wL+8f9DDtxsWpQD/FJHT8Ozatcvn0gXev+RrvKeqZUCZiBzCszNe7a1Tv1LVTO9z1wA98WyzulNVa+79GjCzjnDHishaPMnhL6p6wFt+uoj8Hs9eGu2ARY38PI0JiCUI09q4gDxv235tfwP+pKoLvJvvPOBz7Gitc8t8Xlfh/2cpkHPq85mqXiIivYDlIvK6qq4BngcuV9W13k2MJvi5tr7P05iAWBOTaVVUtQDYJSJXg2frSxEZ4j2cwLdr7P/AoRC2Ar19tpS8pqELvLWNR4BfeYvigP3eZq3rfE4t9B5r6PM0JiCWIExLFysimT4fP8fzS/Umb/PNRjx7+4KnxvAvEVkF5DgRjLeZ6ifA+97nFAL5AVz6FDDOm1h+C3wJfAFs8TlnHvALbyd7H+r+PI0JiC33bUyQiUg7VS3yjmqaDWxT1T+HOi5jarMahDHBN8Pbab0RT7PW0yGOxxi/rAZhjDHGL6tBGGOM8csShDHGGL8sQRhjjPHLEoQxxhi/LEEYY4zxyxKEMcYYv/4/nVcY9ZmgXgsAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": { "tags": [], "needs_background": "light" } } ] }, { "cell_type": "code", "metadata": { "id": "8OKI5Sjb820h", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "outputId": "0c5f65b9-7c8a-4225-f27e-3b52f889ecb2" }, "source": [ "lr" ], "execution_count": null, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "SuggestedLRs(lr_min=0.04365158379077912, lr_steep=0.0014454397605732083)" ] }, "metadata": { "tags": [] }, "execution_count": 69 } ] }, { "cell_type": "markdown", "metadata": { "id": "D16bOBGU9qDC", "colab_type": "text" }, "source": [ "That's about 4e-2, so I'll agree with it, let's train on this schema:" ] }, { "cell_type": "code", "metadata": { "id": "xaIlwqRE9o0Z", "colab_type": "code", "colab": {} }, "source": [ "lr = 0.04365158379077912" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "AcwmEv-F91IQ", "colab_type": "text" }, "source": [ "First one epoch completely frozen:" ] }, { "cell_type": "code", "metadata": { "id": "z9vGw4lC9naS", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 77 }, "outputId": "5a617143-b353-4885-9d59-ae23252d19cd" }, "source": [ "learn.fit_one_cycle(1, lr)" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
epochtrain_lossvalid_lossaccuracytime
00.7253290.6609200.61000000:03
" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } } ] }, { "cell_type": "markdown", "metadata": { "id": "dS1Z_u0596pB", "colab_type": "text" }, "source": [ "Then we freeze to `-2` and adjust our `lr`:" ] }, { "cell_type": "code", "metadata": { "id": "aMwzXKYy928Z", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 143 }, "outputId": "e8a59575-6565-4067-c8dc-cb2471d6b012" }, "source": [ "doc(learn.freeze_to)" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "

Learner.freeze_to[source]

Learner.freeze_to(n)

\n", "
\n", "

Freeze parameter groups up to n

\n", "

Show in docs

\n" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } } ] }, { "cell_type": "code", "metadata": { "id": "mHOB-Sws-ImC", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "outputId": "14638429-c3f1-4638-baf2-c645e9a4dfd6" }, "source": [ "adj = 2.6**4; adj" ], "execution_count": null, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "45.69760000000001" ] }, "metadata": { "tags": [] }, "execution_count": 77 } ] }, { "cell_type": "markdown", "metadata": { "id": "iJSvlpxn-KYd", "colab_type": "text" }, "source": [ "This adjuster schema is how we will divide our `lr` during fitting (we'll also adjust our learning rate outside of it):" ] }, { "cell_type": "code", "metadata": { "id": "ml3CfzzB9-2A", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 77 }, "outputId": "d8d3c1a2-88db-47bc-cb58-42e25ef20261" }, "source": [ "learn.freeze_to(-2)\n", "learn.fit_one_cycle(1, slice(lr/adj, lr))" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
epochtrain_lossvalid_lossaccuracytime
00.7286620.6635440.59500000:05
" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } } ] }, { "cell_type": "markdown", "metadata": { "id": "ztK9fioO-kZP", "colab_type": "text" }, "source": [ "Then -3:" ] }, { "cell_type": "code", "metadata": { "id": "aAqB8LH4-j7u", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 77 }, "outputId": "17072c88-9ee6-4cc2-f856-e9f56a2f4c8a" }, "source": [ "learn.freeze_to(-3)\n", "lr /= 2\n", "learn.fit_one_cycle(1, slice(lr/adj, lr))" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
epochtrain_lossvalid_lossaccuracytime
00.6310150.5837450.68000000:05
" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } } ] }, { "cell_type": "code", "metadata": { "id": "Yv4XD48J-rFU", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 106 }, "outputId": "1734dca7-f468-4925-cd60-259fb2c36a58" }, "source": [ "learn.unfreeze()\n", "lr /= 5\n", "learn.fit_one_cycle(2, slice(lr/adj, lr))" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
epochtrain_lossvalid_lossaccuracytime
00.4924730.5313310.72500000:08
10.4487030.5332410.73500000:08
" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } } ] }, { "cell_type": "markdown", "metadata": { "id": "zTzkglUW-1pn", "colab_type": "text" }, "source": [ "73.5% is as high as we got given a subset of only 1,000 texts! Not to shabby, with the full version we can expect around 94.5% accuracy (this was their results in the paper). \n", "\n", "Now let's show an example predict, and for fun, compare `fastai2` to `fastinference`\n", "\n", "> **Warning**: the `ULMFiT` model will *not* export to ONNX" ] }, { "cell_type": "code", "metadata": { "id": "VDxpMkeY-w_x", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 50 }, "outputId": "e8a2c135-cce9-4e76-e964-7363dd8e95b0" }, "source": [ "%%time\n", "out = learn.predict(df.iloc[0]['text'])" ], "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } }, { "output_type": "stream", "text": [ "CPU times: user 39.6 ms, sys: 2.15 ms, total: 41.7 ms\n", "Wall time: 46.4 ms\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "id": "ewEQC7yt_eJf", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "outputId": "ffbe3a46-bc2b-43a5-8d33-f1f92d6058f2" }, "source": [ "out" ], "execution_count": null, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "('negative', tensor(0), tensor([0.6171, 0.3829]))" ] }, "metadata": { "tags": [] }, "execution_count": 88 } ] }, { "cell_type": "code", "metadata": { "id": "r4SyXmK2_UDC", "colab_type": "code", "colab": {} }, "source": [ "dl = learn.dls.test_dl(df.iloc[:1]['text'])" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "iRrdBLHs_Xrg", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 50 }, "outputId": "597a303b-1ed1-4106-faa1-b974751f9b61" }, "source": [ "%%time\n", "preds = learn.get_preds(dl=dl)" ], "execution_count": 
null, "outputs": [ { "output_type": "display_data", "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": { "tags": [] } }, { "output_type": "stream", "text": [ "CPU times: user 33.9 ms, sys: 52.5 ms, total: 86.4 ms\n", "Wall time: 127 ms\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "id": "mOHfxnTx_blh", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "outputId": "8035a140-331a-4153-e6c1-c1d6d7408738" }, "source": [ "preds" ], "execution_count": null, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "(tensor([[0.6171, 0.3829]]), None)" ] }, "metadata": { "tags": [] }, "execution_count": 91 } ] }, { "cell_type": "markdown", "metadata": { "id": "psHTGiHt_hdM", "colab_type": "text" }, "source": [ "What about `fastinference`?" ] }, { "cell_type": "code", "metadata": { "id": "zJC9Naw1_glZ", "colab_type": "code", "colab": {} }, "source": [ "!pip install fastinference --quiet" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "RPqGTjxj_sCP", "colab_type": "code", "colab": {} }, "source": [ "from fastinference.inference import *" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "hrCvWFMM_jvd", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 50 }, "outputId": "239c8e2c-d9bd-4df3-b6e4-ec05a56e0af6" }, "source": [ "%%time\n", "out = learn.predict(df.iloc[0]['text'])" ], "execution_count": null, "outputs": [ { "output_type": "stream", "text": [ "CPU times: user 22.9 ms, sys: 4.6 ms, total: 27.5 ms\n", "Wall time: 33.7 ms\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "id": "T1ZSLDA0_6h8", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "outputId": "193c6cfb-9ec7-44b6-d644-4f204f262dc9" }, "source": [ "out" ], "execution_count": null, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ 
"[['negative'], array([[0.6171321, 0.3828678]], dtype=float32)]" ] }, "metadata": { "tags": [] }, "execution_count": 102 } ] }, { "cell_type": "code", "metadata": { "id": "VCKQQ8jd_v3-", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 50 }, "outputId": "8ccb51d9-60f5-482a-dac9-575973e02c74" }, "source": [ "%%time\n", "preds = learn.get_preds(dl=dl)" ], "execution_count": null, "outputs": [ { "output_type": "stream", "text": [ "CPU times: user 20.3 ms, sys: 52.6 ms, total: 72.9 ms\n", "Wall time: 105 ms\n" ], "name": "stdout" } ] }, { "cell_type": "markdown", "metadata": { "id": "260db_Q6ABbx", "colab_type": "text" }, "source": [ "We can still shave off some time!" ] }, { "cell_type": "code", "metadata": { "id": "re7pgrJ0AAKv", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "outputId": "f1109afc-fa98-4f2c-af30-14fc366d86f7" }, "source": [ "preds" ], "execution_count": null, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "[['negative'], array([[0.6171321, 0.3828678]], dtype=float32)]" ] }, "metadata": { "tags": [] }, "execution_count": 104 } ] }, { "cell_type": "markdown", "metadata": { "id": "0-JbyWpnAEIS", "colab_type": "text" }, "source": [ "While still getting our classes back each time." ] } ] }