{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# NLP model creation and training" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.gen_doc.nbdoc import *\n", "from fastai.text import * \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The main thing here is [`RNNLearner`](/text.learner.html#RNNLearner). There are also some utility functions to help create and update text models." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Quickly get a learner" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

language_model_learner[source]

\n", "\n", "> language_model_learner(`data`:[`DataBunch`](/basic_data.html#DataBunch), `bptt`:`int`=`70`, `emb_sz`:`int`=`400`, `nh`:`int`=`1150`, `nl`:`int`=`3`, `pad_token`:`int`=`1`, `drop_mult`:`float`=`1.0`, `tie_weights`:`bool`=`True`, `bias`:`bool`=`True`, `qrnn`:`bool`=`False`, `pretrained_model`:`str`=`None`, `pretrained_fnames`:`OptStrTuple`=`None`, `kwargs`) → `LanguageLearner`\n", "\n", "Create a [`Learner`](/basic_train.html#Learner) with a language model from `data`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(language_model_learner)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`bptt` (for backprop trough time) is the number of words we will store the gradient for, and use for the optimization step. \n", "\n", "The model used is an [AWD-LSTM](https://arxiv.org/abs/1708.02182) that is built with embeddings of size `emb_sz`, a hidden size of `nh`, and `nl` layers (the `vocab_size` is inferred from the [`data`](/text.data.html#text.data)). All the dropouts are put to values that we found worked pretty well and you can control their strength by adjusting `drop_mult`. If qrnn is True, the model uses [QRNN cells](https://arxiv.org/abs/1611.01576) instead of LSTMs. The flag `tied_weights` control if we should use the same weights for the encoder and the decoder, the flag `bias` controls if the last linear layer (the decoder) has bias or not.\n", "\n", "You can specify `pretrained_model` if you want to use the weights of a pretrained model. If you have your own set of weights and the corrsesponding dictionary, you can pass them in `pretrained_fnames`. This should be a list of the name of the weight file and the name of the corresponding dictionary. The dictionary is needed because the function will internally convert the embeddings of the pretrained models to match the dictionary of the [`data`](/text.data.html#text.data) passed (a word may have a different id for the pretrained model). Those two files should be in the models directory of `data.path`. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "path = untar_data(URLs.IMDB_SAMPLE)\n", "data = TextLMDataBunch.from_csv(path, 'texts.csv')\n", "learn = language_model_learner(data, pretrained_model=URLs.WT103, drop_mult=0.5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

text_classifier_learner[source]

\n", "\n", "> text_classifier_learner(`data`:[`DataBunch`](/basic_data.html#DataBunch), `bptt`:`int`=`70`, `emb_sz`:`int`=`400`, `nh`:`int`=`1150`, `nl`:`int`=`3`, `pad_token`:`int`=`1`, `drop_mult`:`float`=`1.0`, `qrnn`:`bool`=`False`, `max_len`:`int`=`1400`, `lin_ftrs`:`Collection`\\[`int`\\]=`None`, `ps`:`Collection`\\[`float`\\]=`None`, `pretrained_model`:`str`=`None`, `kwargs`) → `TextClassifierLearner`\n", "\n", "Create a RNN classifier from `data`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(text_classifier_learner)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`bptt` (for backprop trough time) is the number of words we will store the gradient for, and use for the optimization step. \n", "\n", "The model used is the encoder of an [AWD-LSTM](https://arxiv.org/abs/1708.02182) that is built with embeddings of size `emb_sz`, a hidden size of `nh`, and `nl` layers (the `vocab_size` is inferred from the [`data`](/text.data.html#text.data)). All the dropouts are put to values that we found worked pretty well and you can control their strength by adjusting `drop_mult`. If qrnn is True, the model uses [QRNN cells](https://arxiv.org/abs/1611.01576) instead of LSTMs.\n", "\n", "The input texts are fed into that model by bunch of `bptt` and only the last `max_len` activations are considerated. This gives us the backbone of our model. The head then consists of:\n", "- a layer that concatenates the final outputs of the RNN with the maximum and average of all the intermediate outputs (on the sequence length dimension),\n", "- blocks of ([`nn.BatchNorm1d`](https://pytorch.org/docs/stable/nn.html#torch.nn.BatchNorm1d), [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout), [`nn.Linear`](https://pytorch.org/docs/stable/nn.html#torch.nn.Linear), [`nn.ReLU`](https://pytorch.org/docs/stable/nn.html#torch.nn.ReLU)) layers.\n", "\n", "The blocks are defined by the `lin_ftrs` and `drops` arguments. Specifically, the first block will have a number of inputs inferred from the backbone arch and the last one will have a number of outputs equal to data.c (which contains the number of classes of the data) and the intermediate blocks have a number of inputs/outputs determined by `lin_ftrs` (of course a block has a number of inputs equal to the number of outputs of the previous block). The dropouts all have a the same value ps if you pass a float, or the corresponding values if you pass a list. Default is to have an intermediate hidden size of 50 (which makes two blocks model_activation -> 50 -> n_classes) with a dropout of 0.1." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
Note: Using QRNN requires CUDA to be installed (same version as PyTorch is using).
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "jekyll_note(\"Using QRNN require to have cuda installed (same version as pytorhc is using).\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "path = untar_data(URLs.IMDB_SAMPLE)\n", "data = TextClasDataBunch.from_csv(path, 'texts.csv')\n", "learn = text_classifier_learner(data, drop_mult=0.5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class RNNLearner[source]

\n", "\n", "> RNNLearner(`data`:[`DataBunch`](/basic_data.html#DataBunch), `model`:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), `bptt`:`int`=`70`, `split_func`:`OptSplitFunc`=`None`, `clip`:`float`=`None`, `alpha`:`float`=`2.0`, `beta`:`float`=`1.0`, `metrics`=`None`, `kwargs`) :: [`Learner`](/basic_train.html#Learner)\n", "\n", "Basic class for a [`Learner`](/basic_train.html#Learner) in NLP. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNLearner)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Handles the whole creation from data and a `model` with a text data using a certain `bptt`. The `split_func` is used to properly split the model in different groups for gradual unfreezing and differential learning rates. Gradient clipping of `clip` is optionally applied. `alpha` and `beta` are all passed to create an instance of [`RNNTrainer`](/callbacks.rnn.html#RNNTrainer). Can be used for a language model or an RNN classifier. It also handles the conversion of weights from a pretrained model as well as saving or loading the encoder." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

get_preds[source]

\n", "\n", "> get_preds(`ds_type`:[`DatasetType`](/basic_data.html#DatasetType)=``, `with_loss`:`bool`=`False`, `n_batch`:`Optional`\\[`int`\\]=`None`, `pbar`:`Union`\\[`MasterBar`, `ProgressBar`, `NoneType`\\]=`None`, `ordered`:`bool`=`False`) → `List`\\[`Tensor`\\]\n", "\n", "Return predictions and targets on the valid, train, or test set, depending on `ds_type`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNLearner.get_preds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If `ordered=True`, returns the predictions in the order of the dataset, otherwise they will be ordered by the sampler (from the longest text to the shortest). The other arguments are passed [`Learner.get_preds`](/basic_train.html#Learner.get_preds)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Loading and saving" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

load_encoder[source]

\n", "\n", "> load_encoder(`name`:`str`)\n", "\n", "Load the encoder `name` from the model directory. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNLearner.load_encoder)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

save_encoder[source]

\n", "\n", "> save_encoder(`name`:`str`)\n", "\n", "Save the encoder to `name` inside the model directory. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNLearner.save_encoder)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

load_pretrained[source]

\n", "\n", "> load_pretrained(`wgts_fname`:`str`, `itos_fname`:`str`, `strict`:`bool`=`True`)\n", "\n", "Load a pretrained model and adapts it to the data vocabulary. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNLearner.load_pretrained)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Opens the weights in the `wgts_fname` of `self.model_dir` and the dictionary in `itos_fname` then adapts the pretrained weights to the vocabulary of the data. The two files should be in the models directory of the `learner.path`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Utility functions" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

lm_split[source]

\n", "\n", "> lm_split(`model`:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)) → `List`\\[[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)\\]\n", "\n", "Split a RNN `model` in groups for differential learning rates. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(lm_split)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

rnn_classifier_split[source]

\n", "\n", "> rnn_classifier_split(`model`:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)) → `List`\\[[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)\\]\n", "\n", "Split a RNN `model` in groups for differential learning rates. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(rnn_classifier_split)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

convert_weights[source]

\n", "\n", "> convert_weights(`wgts`:`Weights`, `stoi_wgts`:`Dict`\\[`str`, `int`\\], `itos_new`:`StrList`) → `Weights`\n", "\n", "Convert the model `wgts` to go with a new vocabulary. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(convert_weights)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Uses the dictionary `stoi_wgts` (mapping of word to id) of the weights to map them to a new dictionary `itos_new` (mapping id to word)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Get predictions" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class LanguageLearner[source]

\n", "\n", "> LanguageLearner(`data`:[`DataBunch`](/basic_data.html#DataBunch), `model`:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), `bptt`:`int`=`70`, `split_func`:`OptSplitFunc`=`None`, `clip`:`float`=`None`, `alpha`:`float`=`2.0`, `beta`:`float`=`1.0`, `metrics`=`None`, `kwargs`) :: [`RNNLearner`](/text.learner.html#RNNLearner)\n", "\n", "Subclass of RNNLearner for predictions. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(LanguageLearner, title_level=3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

predict[source]

\n", "\n", "> predict(`text`:`str`, `n_words`:`int`=`1`, `no_unk`:`bool`=`True`, `temperature`:`float`=`1.0`, `min_p`:`float`=`None`)\n", "\n", "Return the `n_words` that come after `text`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(LanguageLearner.predict)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If `no_unk=True` the unknown token is never picked. Words are taken randomly with the distribution of probabilities returned by the model. If `min_p` is not `None`, that value is the minimum probability to be considered in the pool of words. Lowering `temperature` will make the texts less randomized. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Undocumented Methods - Methods moved below this line will intentionally be hidden" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

get_preds[source]

\n", "\n", "> get_preds(`ds_type`:[`DatasetType`](/basic_data.html#DatasetType)=``, `with_loss`:`bool`=`False`, `n_batch`:`Optional`\\[`int`\\]=`None`, `pbar`:`Union`\\[`MasterBar`, `ProgressBar`, `NoneType`\\]=`None`, `ordered`:`bool`=`False`) → `List`\\[`Tensor`\\]\n", "\n", "Return predictions and targets on the valid, train, or test set, depending on `ds_type`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNLearner.get_preds)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

show_results[source]

\n", "\n", "> show_results(`ds_type`=``, `rows`:`int`=`5`, `max_len`:`int`=`20`)\n", "\n", "Show `rows` result of predictions on `ds_type` dataset. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(LanguageLearner.show_results)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## New Methods - Please document or move to the undocumented section" ] } ], "metadata": { "jekyll": { "keywords": "fastai", "summary": "Easy access of language models and ULMFiT", "title": "text.learner" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }