{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## NLP model creation and training" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.gen_doc.nbdoc import *\n", "from fastai.text import * \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The main thing here is [`RNNLearner`](/text.learner.html#RNNLearner). There are also some utility functions to help create and update text models." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Quickly get a learner" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

language_model_learner[source][test]

\n", "\n", "> language_model_learner(**`data`**:[`DataBunch`](/basic_data.html#DataBunch), **`arch`**, **`config`**:`dict`=***`None`***, **`drop_mult`**:`float`=***`1.0`***, **`pretrained`**:`bool`=***`True`***, **`pretrained_fnames`**:`OptStrTuple`=***`None`***, **\\*\\*`learn_kwargs`**) → `LanguageLearner`\n", "\n", "
×

Tests found for language_model_learner:

To run tests please refer to this guide.

\n", "\n", "Create a [`Learner`](/basic_train.html#Learner) with a language model from `data` and `arch`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(language_model_learner)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model used is given by `arch` and `config`. It can be:\n", "\n", "- an [AWD_LSTM](/text.models.html#AWD_LSTM) ([Merity et al.](https://arxiv.org/abs/1708.02182))\n", "- a [Transformer](/text.models.html#Transformer) decoder ([Vaswani et al.](https://arxiv.org/abs/1706.03762))\n", "- a [TransformerXL](/text.models.html#TransformerXL) ([Dai et al.](https://arxiv.org/abs/1901.02860))\n", "\n", "They each have a default config for language modelling stored in {lower_case_class_name}\\_lm\\_config, which you can modify to change the default parameters. At this stage, only the AWD LSTM and Transformer support `pretrained=True`, but we hope to add more pretrained models soon. `drop_mult` is applied to all the dropout weights of the `config`, and `learn_kwargs` are passed to the [`Learner`](/basic_train.html#Learner) initialization.\n", "\n", "If your [`data`](/text.data.html#text.data) is backward, the pretrained model downloaded will also be a backward one (only available for [`AWD_LSTM`](/text.models.awd_lstm.html#AWD_LSTM))." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
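If you want to tweak those defaults, here is a minimal sketch (it assumes the fastai v1 config dict `awd_lstm_lm_config` and a `TextLMDataBunch` named `data` like the one built in the example cell below; the values are illustrative, and changing sizes means the pretrained weights no longer fit, hence `pretrained=False`):

```python
# Minimal sketch: customize the AWD_LSTM language-model config before creating the learner.
# awd_lstm_lm_config is the {lower_case_class_name}_lm_config dict mentioned above (assumed name).
config = awd_lstm_lm_config.copy()
config['n_hid'], config['n_layers'] = 575, 3      # illustrative values
learn = language_model_learner(data, AWD_LSTM, config=config, pretrained=False, drop_mult=0.5)
```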
Note: Using QRNN (change the flag in the config of the AWD LSTM) requires CUDA to be installed (the same version PyTorch is using).
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "jekyll_note(\"Using QRNN (change the flag in the config of the AWD LSTM) requires to have cuda installed (same version as pytorch is using).\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "path = untar_data(URLs.IMDB_SAMPLE)\n", "data = TextLMDataBunch.from_csv(path, 'texts.csv')\n", "learn = language_model_learner(data, AWD_LSTM, drop_mult=0.5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

text_classifier_learner[source][test]

\n", "\n", "> text_classifier_learner(**`data`**:[`DataBunch`](/basic_data.html#DataBunch), **`arch`**:`Callable`, **`bptt`**:`int`=***`70`***, **`max_len`**:`int`=***`1400`***, **`config`**:`dict`=***`None`***, **`pretrained`**:`bool`=***`True`***, **`drop_mult`**:`float`=***`1.0`***, **`lin_ftrs`**:`Collection`\\[`int`\\]=***`None`***, **`ps`**:`Collection`\\[`float`\\]=***`None`***, **\\*\\*`learn_kwargs`**) → `TextClassifierLearner`\n", "\n", "
×

Tests found for text_classifier_learner:

  • pytest -sv tests/test_text_train.py::test_classifier [source]
  • pytest -sv tests/test_text_train.py::test_order_preds [source]

To run tests please refer to this guide.

\n", "\n", "Create a [`Learner`](/basic_train.html#Learner) with a text classifier from `data` and `arch`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(text_classifier_learner)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here again, the backbone of the model is determined by `arch` and `config`. The input texts are fed into that model in chunks of length `bptt`, and only the last `max_len` activations are considered. This gives us the backbone of our model. The head then consists of:\n", "- a layer that concatenates the final outputs of the RNN with the maximum and average of all the intermediate outputs (on the sequence length dimension),\n", "- blocks of ([`nn.BatchNorm1d`](https://pytorch.org/docs/stable/nn.html#torch.nn.BatchNorm1d), [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout), [`nn.Linear`](https://pytorch.org/docs/stable/nn.html#torch.nn.Linear), [`nn.ReLU`](https://pytorch.org/docs/stable/nn.html#torch.nn.ReLU)) layers.\n", "\n", "The blocks are defined by the `lin_ftrs` and `ps` arguments. Specifically, the first block has a number of inputs inferred from the backbone arch, the last one has a number of outputs equal to data.c (which contains the number of classes of the data), and the intermediate blocks have their numbers of inputs/outputs determined by `lin_ftrs` (each block has as many inputs as the previous block has outputs). The dropouts all take the same value `ps` if you pass a float, or the corresponding values if you pass a list. The default is an intermediate hidden size of 50 (which makes two blocks: model_activation -> 50 -> n_classes) with a dropout of 0.1." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "path = untar_data(URLs.IMDB_SAMPLE)\n", "data = TextClasDataBunch.from_csv(path, 'texts.csv')\n", "learn = text_classifier_learner(data, AWD_LSTM, drop_mult=0.5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
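A hedged sketch of customizing the head described above, reusing the `data` from the cell just shown (the sizes and dropout probabilities are illustrative):

```python
# Sketch: two intermediate blocks (backbone -> 100 -> 50 -> n_classes) instead of the default
# single block of 50, with one dropout probability per block passed through `ps`.
learn = text_classifier_learner(data, AWD_LSTM, drop_mult=0.5,
                                lin_ftrs=[100, 50], ps=[0.2, 0.1])
```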

class RNNLearner[source][test]

\n", "\n", "> RNNLearner(**`data`**:[`DataBunch`](/basic_data.html#DataBunch), **`model`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), **`split_func`**:`OptSplitFunc`=***`None`***, **`clip`**:`float`=***`None`***, **`alpha`**:`float`=***`2.0`***, **`beta`**:`float`=***`1.0`***, **`metrics`**=***`None`***, **\\*\\*`learn_kwargs`**) :: [`Learner`](/basic_train.html#Learner)\n", "\n", "
×

No tests found for RNNLearner. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Basic class for a [`Learner`](/basic_train.html#Learner) in NLP. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNLearner)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Handles the whole creation of a learner from text `data` and a `model`, using a certain `bptt`. The `split_func` is used to properly split the model into different groups for gradual unfreezing and differential learning rates. Gradient clipping of `clip` is optionally applied. `alpha` and `beta` are both passed to create an instance of [`RNNTrainer`](/callbacks.rnn.html#RNNTrainer). Can be used for a language model or an RNN classifier. It also handles the conversion of weights from a pretrained model as well as saving or loading the encoder." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
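[`RNNLearner`](/text.learner.html#RNNLearner) is normally created for you by the factory functions above; a hedged sketch of building one directly (it assumes a hypothetical `TextLMDataBunch` called `data_lm` and uses the `get_language_model` helper documented further down):

```python
# Sketch: wrap a raw language model in an RNNLearner by hand.
# clip, alpha and beta are forwarded to the RNNTrainer callback as described above.
model = get_language_model(AWD_LSTM, len(data_lm.vocab.itos), drop_mult=0.5)
learn = RNNLearner(data_lm, model, clip=0.1, alpha=2., beta=1.)
```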

get_preds[source][test]

\n", "\n", "> get_preds(**`ds_type`**:[`DatasetType`](/basic_data.html#DatasetType)=***``***, **`activ`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)=***`None`***, **`with_loss`**:`bool`=***`False`***, **`n_batch`**:`Optional`\\[`int`\\]=***`None`***, **`pbar`**:`Union`\\[`MasterBar`, `ProgressBar`, `NoneType`\\]=***`None`***, **`ordered`**:`bool`=***`True`***) → `List`\\[`Tensor`\\]\n", "\n", "
×

No tests found for get_preds. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Return predictions and targets on the valid, train, or test set, depending on `ds_type`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNLearner.get_preds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If `ordered=True`, returns the predictions in the order of the dataset; otherwise they will be ordered by the sampler (from the longest text to the shortest). The other arguments are passed to [`Learner.get_preds`](/basic_train.html#Learner.get_preds)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
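A minimal usage sketch (assuming the classifier `learn` created above):

```python
# Sketch: predictions and targets for the validation set, put back in dataset order
# instead of the sampler's longest-to-shortest order.
preds, targets = learn.get_preds(ds_type=DatasetType.Valid, ordered=True)
```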

class TextClassificationInterpretation[source][test]

\n", "\n", "> TextClassificationInterpretation(**`learn`**:[`Learner`](/basic_train.html#Learner), **`preds`**:`Tensor`, **`y_true`**:`Tensor`, **`losses`**:`Tensor`, **`ds_type`**:[`DatasetType`](/basic_data.html#DatasetType)=***``***) :: [`ClassificationInterpretation`](/train.html#ClassificationInterpretation)\n", "\n", "
×

No tests found for TextClassificationInterpretation. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Provides an interpretation of classification based on input sensitivity. This was designed for AWD-LSTM only for the moment, because Transformer already has its own attentional model. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextClassificationInterpretation,title_level=3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The darker the word-shading in the below example, the more it contributes to the classification. Results here are without any fitting. After fitting to acceptable accuracy, this class can show you what is being used to produce the classification of a particular case." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/html": [ "xxbos xxmaj xxunk was perhaps the xxup greatest movie i have ever seen !" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "import matplotlib.cm as cm\n", "\n", "txt_ci = TextClassificationInterpretation.from_learner(learn)\n", "test_text = \"Zombiegeddon was perhaps the GREATEST movie i have ever seen!\"\n", "txt_ci.show_intrinsic_attention(test_text,cmap=cm.Purples)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can also view the raw attention values with `.intrinsic_attention(text)`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "tensor([0.6078, 0.4961, 0.4707, 0.4946, 0.5228, 0.5393, 0.5656, 0.6153, 0.6893,\n", " 0.8047, 0.9329, 1.0000, 0.9080, 0.5786], device='cuda:0')" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "txt_ci.intrinsic_attention(test_text)[1]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create a tabulation showing the first `k` texts in top_losses along with their prediction, actual,loss, and probability of actual class. `max_len` is the maximum number of tokens displayed. If `max_len=None`, it will display all tokens." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
TextPredictionActualLossProbability
xxbos i have to agree with what many of the other reviewers concluded . a subject which could have been thought - provoking and shed light on a reversed double - standard , failed miserably . \\n \\n xxmaj rape being a crime of violence and forced abusive control , the scenes here were for the most part pathetic . xxmaj it would have been a better idea toposneg8.250.00
xxbos xxmaj betty xxmaj sizemore ( xxmaj renee xxmaj zellweger ) lives her life through soap xxmaj opera \" a xxmaj reason to xxmaj love \" as a way to escape her slob husband and dull life . xxmaj after a shocking incident involving two hit men ( xxmaj morgan xxmaj freeman and xxmaj chris xxmaj rock ) , xxmaj betty goes into shock and travels to xxup la ,pospos7.711.00
xxbos xxmaj when people harp on about how \" they do n't make 'em like they used to \" then just point them towards this fantastically entertaining , and quaint - looking , comedy horror from writer - director xxmaj glenn mcquaid . \\n \\n xxmaj it 's a tale of graverobbers ( played by xxmaj dominic xxmaj monaghan and xxmaj larry xxmaj fessenden ) who end up diggingpospos7.471.00
xxbos i have to agree with all the previous xxunk -- this is simply the best of all frothy comedies , with xxmaj bardot as sexy as xxmaj marilyn xxmaj monroe ever was , and definitely with a prettier face ( maybe there 's less mystique , but look how xxmaj marilyn paid for that . ) i do n't think i 've ever seen such a succulent - lookingpospos6.551.00
xxbos i will freely admit that i have n't seen the original movie , but i 've read the play , so i 've some background with the \" original . \" xxmaj if you shuck off the fact that this is a remake of an old classic , this movie is smart , witty , fresh , and hilarious . xxmaj yes , the casting decisions may seem strangepospos6.381.00
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "txt_ci.show_top_losses(5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Loading and saving" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

load_encoder[source][test]

\n", "\n", "> load_encoder(**`name`**:`str`, **`device`**:[`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch-device)=***`None`***)\n", "\n", "
×

No tests found for load_encoder. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Load the encoder `name` from the model directory. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNLearner.load_encoder)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

save_encoder[source][test]

\n", "\n", "> save_encoder(**`name`**:`str`)\n", "\n", "
×

No tests found for save_encoder. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Save the encoder to `name` inside the model directory. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNLearner.save_encoder)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
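A hedged sketch of the usual ULMFiT round trip with these two methods (`data_lm`, `data_clas` and the encoder name `'ft_enc'` are illustrative; both DataBunches are assumed to share the same `path` so the saved file can be found):

```python
# Sketch: fine-tune a language model, save its encoder, then reuse it in a classifier.
learn_lm = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.5)
learn_lm.fit_one_cycle(1)
learn_lm.save_encoder('ft_enc')        # writes models/ft_enc.pth under learn_lm.path

learn_clas = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
learn_clas.load_encoder('ft_enc')      # loads the fine-tuned encoder weights
```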

load_pretrained[source][test]

\n", "\n", "> load_pretrained(**`wgts_fname`**:`str`, **`itos_fname`**:`str`, **`strict`**:`bool`=***`True`***)\n", "\n", "
×

No tests found for load_pretrained. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Load a pretrained model and adapt it to the data vocabulary. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNLearner.load_pretrained)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Opens the weights in `wgts_fname` and the dictionary in `itos_fname`, both in `self.model_dir`, then adapts the pretrained weights to the vocabulary of the data. The two files should be in the models directory of the `learner.path`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Utility functions" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

convert_weights[source][test]

\n", "\n", "> convert_weights(**`wgts`**:`Weights`, **`stoi_wgts`**:`Dict`\\[`str`, `int`\\], **`itos_new`**:`StrList`) → `Weights`\n", "\n", "
×

No tests found for convert_weights. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Convert the model `wgts` to go with a new vocabulary. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(convert_weights)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Uses `stoi_wgts` (the word-to-id mapping of the vocabulary the weights were trained with) to remap the weights to the new vocabulary `itos_new` (an id-to-word mapping)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Get predictions" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class LanguageLearner[source][test]

\n", "\n", "> LanguageLearner(**`data`**:[`DataBunch`](/basic_data.html#DataBunch), **`model`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), **`split_func`**:`OptSplitFunc`=***`None`***, **`clip`**:`float`=***`None`***, **`alpha`**:`float`=***`2.0`***, **`beta`**:`float`=***`1.0`***, **`metrics`**=***`None`***, **\\*\\*`learn_kwargs`**) :: [`RNNLearner`](/text.learner.html#RNNLearner)\n", "\n", "
×

No tests found for LanguageLearner. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Subclass of RNNLearner for predictions. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(LanguageLearner, title_level=3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

predict[source][test]

\n", "\n", "> predict(**`text`**:`str`, **`n_words`**:`int`=***`1`***, **`no_unk`**:`bool`=***`True`***, **`temperature`**:`float`=***`1.0`***, **`min_p`**:`float`=***`None`***, **`sep`**:`str`=***`' '`***, **`decoder`**=***`'decode_spec_tokens'`***)\n", "\n", "
×

No tests found for predict. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Return `text` and the `n_words` that come after it. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(LanguageLearner.predict)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If `no_unk=True`, the unknown token is never picked. Words are sampled according to the probability distribution returned by the model. If `min_p` is not `None`, that value is the minimum probability for a word to be considered in the pool of candidates. Lowering `temperature` will make the texts less random. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
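A usage sketch (assuming `learn` is a fitted language-model learner as created with `language_model_learner` above; the prompt text and the sampled continuation are illustrative):

```python
# Sketch: sample 20 words after the prompt; a lower temperature gives less random text.
learn.predict("This movie is about", n_words=20, temperature=0.75, no_unk=True)
```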

beam_search[source][test]

\n", "\n", "> beam_search(**`text`**:`str`, **`n_words`**:`int`, **`no_unk`**:`bool`=***`True`***, **`top_k`**:`int`=***`10`***, **`beam_sz`**:`int`=***`1000`***, **`temperature`**:`float`=***`1.0`***, **`sep`**:`str`=***`' '`***, **`decoder`**=***`'decode_spec_tokens'`***)\n", "\n", "
×

No tests found for beam_search. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Return the `n_words` that come after `text` using beam search. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(LanguageLearner.beam_search)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Basic functions to get a model" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

get_language_model[source][test]

\n", "\n", "> get_language_model(**`arch`**:`Callable`, **`vocab_sz`**:`int`, **`config`**:`dict`=***`None`***, **`drop_mult`**:`float`=***`1.0`***)\n", "\n", "
×

No tests found for get_language_model. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Create a language model from `arch` and its `config`, maybe `pretrained`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(get_language_model)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

get_text_classifier[source][test]

\n", "\n", "> get_text_classifier(**`arch`**:`Callable`, **`vocab_sz`**:`int`, **`n_class`**:`int`, **`bptt`**:`int`=***`70`***, **`max_len`**:`int`=***`1400`***, **`config`**:`dict`=***`None`***, **`drop_mult`**:`float`=***`1.0`***, **`lin_ftrs`**:`Collection`\\[`int`\\]=***`None`***, **`ps`**:`Collection`\\[`float`\\]=***`None`***, **`pad_idx`**:`int`=***`1`***) → [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)\n", "\n", "
×

No tests found for get_text_classifier. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Create a text classifier from `arch` and its `config`, maybe `pretrained`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(get_text_classifier)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This model uses an encoder built from `arch` and `config`. The encoder is fed the sequence in successive chunks of size `bptt`, and only the last `max_len` outputs are kept for the pooling layers.\n", "\n", "The decoder uses a concatenation of the last outputs, a `MaxPooling` of all the outputs and an `AveragePooling` of all the outputs. It then uses a list of `BatchNorm`, `Dropout`, `Linear`, `ReLU` blocks (with no `ReLU` in the last one), using a first layer size of `3*emb_sz`, then following the sizes in `lin_ftrs`. The dropout probabilities are read from `ps`.\n", "\n", "Note that the model returns a list of three things: the actual output is the first, and the two others are the intermediate hidden states before and after dropout (used by the [`RNNTrainer`](/callbacks.rnn.html#RNNTrainer)). Most loss functions expect one output, so you should use a Callback to remove the other two if you're not using [`RNNTrainer`](/callbacks.rnn.html#RNNTrainer)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Undocumented Methods - Methods moved below this line will intentionally be hidden" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## New Methods - Please document or move to the undocumented section" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

forward[source][test]

\n", "\n", "> forward(**`input`**:`LongTensor`) → `Tuple`\\[`List`\\[`Tensor`\\], `List`\\[`Tensor`\\], `Tensor`\\]\n", "\n", "
×

No tests found for forward. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Defines the computation performed at every call. Should be overridden by all subclasses.\n", "\n", "Note: although the recipe for the forward pass needs to be defined within this function, one should call the [`Module`](/torch_core.html#Module) instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MultiBatchEncoder.forward)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

show_results[source][test]

\n", "\n", "> show_results(**`ds_type`**=***``***, **`rows`**:`int`=***`5`***, **`max_len`**:`int`=***`20`***)\n", "\n", "
×

No tests found for show_results. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Show `rows` results of predictions on the `ds_type` dataset. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(LanguageLearner.show_results)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

concat[source][test]

\n", "\n", "> concat(**`arrs`**:`Sequence`\\[`Sequence`\\[`Tensor`\\]\\]) → `List`\\[`Tensor`\\]\n", "\n", "
×

No tests found for concat. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Concatenate the `arrs` along the batch dimension. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MultiBatchEncoder.concat)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class MultiBatchEncoder[source][test]

\n", "\n", "> MultiBatchEncoder(**`bptt`**:`int`, **`max_len`**:`int`, **`module`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), **`pad_idx`**:`int`=***`1`***) :: [`PrePostInitMeta`](/core.html#PrePostInitMeta) :: [`Module`](/torch_core.html#Module)\n", "\n", "
×

No tests found for MultiBatchEncoder. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Create an encoder over `module` that can process a full sentence. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MultiBatchEncoder)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

decode_spec_tokens[source][test]

\n", "\n", "> decode_spec_tokens(**`tokens`**)\n", "\n", "
×

No tests found for decode_spec_tokens. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(decode_spec_tokens)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

reset[source][test]

\n", "\n", "> reset()\n", "\n", "
×

No tests found for reset. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MultiBatchEncoder.reset)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] } ], "metadata": { "jekyll": { "keywords": "fastai", "summary": "Easy access of language models and ULMFiT", "title": "text.learner" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }