{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## NLP model creation and training" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.gen_doc.nbdoc import *\n", "from fastai.text import * \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The main thing here is [`RNNLearner`](/text.learner.html#RNNLearner). There are also some utility functions to help create and update text models." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Quickly get a learner" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
language_model_learner[source]
\n", "\n", "> language_model_learner(**`data`**:[`DataBunch`](/basic_data.html#DataBunch), **`arch`**, **`config`**:`dict`=***`None`***, **`drop_mult`**:`float`=***`1.0`***, **`pretrained`**:`bool`=***`True`***, **`pretrained_fnames`**:`OptStrTuple`=***`None`***, **\\*\\*`learn_kwargs`**) → `LanguageLearner`\n", "\n", "Create a [`Learner`](/basic_train.html#Learner) with a language model from `data` and `arch`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(language_model_learner)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model used is given by `arch` and `config`. It can be:\n", "\n", "- an [`AWD_LSTM`](/text.models.awd_lstm.html#AWD_LSTM)([Merity et al.](https://arxiv.org/abs/1708.02182))\n", "- a [`Transformer`](/text.models.transformer.html#Transformer) decoder ([Vaswani et al.](https://arxiv.org/abs/1706.03762))\n", "- a [`TransformerXL`](/text.models.transformer.html#TransformerXL) ([Dai et al.](https://arxiv.org/abs/1901.02860))\n", "\n", "They each have a default config for language modelling that is in {lower_case_class_name}_lm_config if you want to change the default parameter. At this stage, only the AWD LSTM support `pretrained=True` but we hope to add more pretrained models soon. `drop_mult` is applied to all the dropouts weights of the `config`, `learn_kwargs` are passed to the [`Learner`](/basic_train.html#Learner) initialization." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
Note: Using QRNN (change the flag in the config of the AWD LSTM) requires CUDA to be installed (the same version that PyTorch is using).
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "jekyll_note(\"Using QRNN (change the flag in the config of the AWD LSTM) requires to have cuda installed (same version as pytorch is using).\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "path = untar_data(URLs.IMDB_SAMPLE)\n", "data = TextLMDataBunch.from_csv(path, 'texts.csv')\n", "learn = language_model_learner(data, AWD_LSTM, drop_mult=0.5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
text_classifier_learner[source]
\n", "\n", "> text_classifier_learner(**`data`**:[`DataBunch`](/basic_data.html#DataBunch), **`arch`**:`Callable`, **`bptt`**:`int`=***`70`***, **`max_len`**:`int`=***`1400`***, **`config`**:`dict`=***`None`***, **`pretrained`**:`bool`=***`True`***, **`drop_mult`**:`float`=***`1.0`***, **`lin_ftrs`**:`Collection`\\[`int`\\]=***`None`***, **`ps`**:`Collection`\\[`float`\\]=***`None`***, **\\*\\*`learn_kwargs`**) → `TextClassifierLearner`\n", "\n", "Create a [`Learner`](/basic_train.html#Learner) with a text classifier from `data` and `arch`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(text_classifier_learner)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here again, the backbone of the model is determined by `arch` and `config`. The input texts are fed into that model by bunch of `bptt` and only the last `max_len` activations are considered. This gives us the backbone of our model. The head then consists of:\n", "- a layer that concatenates the final outputs of the RNN with the maximum and average of all the intermediate outputs (on the sequence length dimension),\n", "- blocks of ([`nn.BatchNorm1d`](https://pytorch.org/docs/stable/nn.html#torch.nn.BatchNorm1d), [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout), [`nn.Linear`](https://pytorch.org/docs/stable/nn.html#torch.nn.Linear), [`nn.ReLU`](https://pytorch.org/docs/stable/nn.html#torch.nn.ReLU)) layers.\n", "\n", "The blocks are defined by the `lin_ftrs` and `drops` arguments. Specifically, the first block will have a number of inputs inferred from the backbone arch and the last one will have a number of outputs equal to data.c (which contains the number of classes of the data) and the intermediate blocks have a number of inputs/outputs determined by `lin_ftrs` (of course a block has a number of inputs equal to the number of outputs of the previous block). The dropouts all have a the same value ps if you pass a float, or the corresponding values if you pass a list. Default is to have an intermediate hidden size of 50 (which makes two blocks model_activation -> 50 -> n_classes) with a dropout of 0.1." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "path = untar_data(URLs.IMDB_SAMPLE)\n", "data = TextClasDataBunch.from_csv(path, 'texts.csv')\n", "learn = text_classifier_learner(data, AWD_LSTM, drop_mult=0.5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
class RNNLearner[source]
\n", "\n", "> RNNLearner(**`data`**:[`DataBunch`](/basic_data.html#DataBunch), **`model`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), **`split_func`**:`OptSplitFunc`=***`None`***, **`clip`**:`float`=***`None`***, **`alpha`**:`float`=***`2.0`***, **`beta`**:`float`=***`1.0`***, **`metrics`**=***`None`***, **\\*\\*`learn_kwargs`**) :: [`Learner`](/basic_train.html#Learner)\n", "\n", "Basic class for a [`Learner`](/basic_train.html#Learner) in NLP. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNLearner)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Handles the whole creation from data and a `model` with a text data using a certain `bptt`. The `split_func` is used to properly split the model in different groups for gradual unfreezing and differential learning rates. Gradient clipping of `clip` is optionally applied. `alpha` and `beta` are all passed to create an instance of [`RNNTrainer`](/callbacks.rnn.html#RNNTrainer). Can be used for a language model or an RNN classifier. It also handles the conversion of weights from a pretrained model as well as saving or loading the encoder." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
get_preds[source]
\n", "\n", "> get_preds(**`ds_type`**:[`DatasetType`](/basic_data.html#DatasetType)=***``***, **`with_loss`**:`bool`=***`False`***, **`n_batch`**:`Optional`\\[`int`\\]=***`None`***, **`pbar`**:`Union`\\[`MasterBar`, `ProgressBar`, `NoneType`\\]=***`None`***, **`ordered`**:`bool`=***`False`***) → `List`\\[`Tensor`\\]\n", "\n", "Return predictions and targets on the valid, train, or test set, depending on `ds_type`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNLearner.get_preds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If `ordered=True`, returns the predictions in the order of the dataset, otherwise they will be ordered by the sampler (from the longest text to the shortest). The other arguments are passed [`Learner.get_preds`](/basic_train.html#Learner.get_preds)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Loading and saving" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
load_encoder[source]
\n", "\n", "> load_encoder(**`name`**:`str`)\n", "\n", "Load the encoder `name` from the model directory. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNLearner.load_encoder)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
save_encoder[source]
\n", "\n", "> save_encoder(**`name`**:`str`)\n", "\n", "Save the encoder to `name` inside the model directory. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNLearner.save_encoder)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
load_pretrained[source]
\n", "\n", "> load_pretrained(**`wgts_fname`**:`str`, **`itos_fname`**:`str`, **`strict`**:`bool`=***`True`***)\n", "\n", "Load a pretrained model and adapts it to the data vocabulary. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNLearner.load_pretrained)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Opens the weights in the `wgts_fname` of `self.model_dir` and the dictionary in `itos_fname` then adapts the pretrained weights to the vocabulary of the data. The two files should be in the models directory of the `learner.path`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Utility functions" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
convert_weights[source]
\n", "\n", "> convert_weights(**`wgts`**:`Weights`, **`stoi_wgts`**:`Dict`\\[`str`, `int`\\], **`itos_new`**:`StrList`) → `Weights`\n", "\n", "Convert the model `wgts` to go with a new vocabulary. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(convert_weights)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Uses the dictionary `stoi_wgts` (mapping of word to id) of the weights to map them to a new dictionary `itos_new` (mapping id to word)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Get predictions" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
class LanguageLearner[source]
\n", "\n", "> LanguageLearner(**`data`**:[`DataBunch`](/basic_data.html#DataBunch), **`model`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), **`split_func`**:`OptSplitFunc`=***`None`***, **`clip`**:`float`=***`None`***, **`alpha`**:`float`=***`2.0`***, **`beta`**:`float`=***`1.0`***, **`metrics`**=***`None`***, **\\*\\*`learn_kwargs`**) :: [`RNNLearner`](/text.learner.html#RNNLearner)\n", "\n", "Subclass of RNNLearner for predictions. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(LanguageLearner, title_level=3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
predict[source]
\n", "\n", "> predict(**`text`**:`str`, **`n_words`**:`int`=***`1`***, **`no_unk`**:`bool`=***`True`***, **`temperature`**:`float`=***`1.0`***, **`min_p`**:`float`=***`None`***, **`sep`**:`str`=***`' '`***, **`decoder`**=***`'decode_spec_tokens'`***)\n", "\n", "Return the `n_words` that come after `text`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(LanguageLearner.predict)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If `no_unk=True` the unknown token is never picked. Words are taken randomly with the distribution of probabilities returned by the model. If `min_p` is not `None`, that value is the minimum probability to be considered in the pool of words. Lowering `temperature` will make the texts less randomized. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
beam_search[source]
\n", "\n", "> beam_search(**`text`**:`str`, **`n_words`**:`int`, **`no_unk`**:`bool`=***`True`***, **`top_k`**:`int`=***`10`***, **`beam_sz`**:`int`=***`1000`***, **`temperature`**:`float`=***`1.0`***, **`sep`**:`str`=***`' '`***, **`decoder`**=***`'decode_spec_tokens'`***)\n", "\n", "Return the `n_words` that come after `text` using beam search. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(LanguageLearner.beam_search)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Basic functions to get a model" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
get_language_model[source]
\n", "\n", "> get_language_model(**`arch`**:`Callable`, **`vocab_sz`**:`int`, **`config`**:`dict`=***`None`***, **`drop_mult`**:`float`=***`1.0`***)\n", "\n", "Create a language model from `arch` and its `config`, maybe `pretrained`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(get_language_model)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
get_text_classifier[source]
\n", "\n", "> get_text_classifier(**`arch`**:`Callable`, **`vocab_sz`**:`int`, **`n_class`**:`int`, **`bptt`**:`int`=***`70`***, **`max_len`**:`int`=***`1400`***, **`config`**:`dict`=***`None`***, **`drop_mult`**:`float`=***`1.0`***, **`lin_ftrs`**:`Collection`\\[`int`\\]=***`None`***, **`ps`**:`Collection`\\[`float`\\]=***`None`***) → [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)\n", "\n", "Create a text classifier from `arch` and its `config`, maybe `pretrained`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(get_text_classifier)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This model uses an encoder taken from the `arch` on `config`. This encoder is fed the sequence by successive bits of size `bptt` and we only keep the last `max_seq` outputs for the pooling layers.\n", "\n", "The decoder use a concatenation of the last outputs, a `MaxPooling` of all the outputs and an `AveragePooling` of all the outputs. It then uses a list of `BatchNorm`, `Dropout`, `Linear`, `ReLU` blocks (with no `ReLU` in the last one), using a first layer size of `3*emb_sz` then following the numbers in `n_layers`. The dropouts probabilities are read in `drops`.\n", "\n", "Note that the model returns a list of three things, the actual output being the first, the two others being the intermediate hidden states before and after dropout (used by the [`RNNTrainer`](/callbacks.rnn.html#RNNTrainer)). Most loss functions expect one output, so you should use a Callback to remove the other two if you're not using [`RNNTrainer`](/callbacks.rnn.html#RNNTrainer)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Undocumented Methods - Methods moved below this line will intentionally be hidden" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## New Methods - Please document or move to the undocumented section" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
forward[source]
\n", "\n", "> forward(**`input`**:`LongTensor`) → `Tuple`\\[`Tensor`, `Tensor`\\]\n", "\n", "Defines the computation performed at every call. Should be overridden by all subclasses.\n", "\n", ".. note::\n", " Although the recipe for forward pass needs to be defined within\n", " this function, one should call the :class:`Module` instance afterwards\n", " instead of this since the former takes care of running the\n", " registered hooks while the latter silently ignores them. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MultiBatchEncoder.forward)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
show_results[source]
\n", "\n", "> show_results(**`ds_type`**=***``***, **`rows`**:`int`=***`5`***, **`max_len`**:`int`=***`20`***)\n", "\n", "Show `rows` result of predictions on `ds_type` dataset. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(LanguageLearner.show_results)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
concat[source]
\n", "\n", "> concat(**`arrs`**:`Collection`\\[`Tensor`\\]) → `Tensor`\n", "\n", "Concatenate the `arrs` along the batch dimension. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MultiBatchEncoder.concat)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
class MultiBatchEncoder[source]
\n", "\n", "> MultiBatchEncoder(**`bptt`**:`int`, **`max_len`**:`int`, **`module`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)) :: [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)\n", "\n", "Create an encoder over `module` that can process a full sentence. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MultiBatchEncoder)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
decode_spec_tokens[source]
\n", "\n", "> decode_spec_tokens(**`tokens`**)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(decode_spec_tokens)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
reset[source]
\n", "\n", "> reset()" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MultiBatchEncoder.reset)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] } ], "metadata": { "jekyll": { "keywords": "fastai", "summary": "Easy access of language models and ULMFiT", "title": "text.learner" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }