{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Implementation of the language models" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.gen_doc.nbdoc import *\n", "from fastai.text import * \n", "from fastai.text.models import * " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[`text.models`](/text.models.html#text.models) module fully implements the encoder for an [AWD-LSTM](https://arxiv.org/pdf/1708.02182.pdf), the [transformer model](https://arxiv.org/abs/1706.03762) and the [transformer XL model](https://arxiv.org/abs/1901.02860). They can then plugged in with a decoder to make a language model, or some classifying layers to make a text classifier." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Language model modules" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": false }, "outputs": [ { "data": { "text/markdown": [ "
### `class` `AWD_LSTM` [source]

> `AWD_LSTM`(**`vocab_sz`**:`int`, **`emb_sz`**:`int`, **`n_hid`**:`int`, **`n_layers`**:`int`, **`pad_token`**:`int`=***`1`***, **`hidden_p`**:`float`=***`0.2`***, **`input_p`**:`float`=***`0.6`***, **`embed_p`**:`float`=***`0.1`***, **`weight_p`**:`float`=***`0.5`***, **`qrnn`**:`bool`=***`False`***, **`bidir`**:`bool`=***`False`***) :: [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
#### `reset` [source]

> `reset`()
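A minimal sketch of driving the encoder on its own; all sizes are made up for illustration, and the batch-first input layout is an assumption based on the fastai v1 implementation:

```python
import torch
from fastai.text.models import AWD_LSTM

# Illustrative, hypothetical sizes (not from any pretrained model).
enc = AWD_LSTM(vocab_sz=100, emb_sz=20, n_hid=30, n_layers=2, pad_token=1)

x = torch.randint(0, 100, (8, 10))   # (batch, sequence length) of token ids
raw_outputs, outputs = enc(x)        # lists with one tensor per LSTM layer
enc.reset()                          # clear the hidden state before a new set of sequences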
### `class` `Transformer` [source]

> `Transformer`(**`vocab_sz`**:`int`, **`ctx_len`**:`int`, **`n_layers`**:`int`, **`n_heads`**:`int`, **`d_model`**:`int`, **`d_head`**:`int`, **`d_inner`**:`int`, **`resid_p`**:`float`=***`0.0`***, **`attn_p`**:`float`=***`0.0`***, **`ff_p`**:`float`=***`0.0`***, **`embed_p`**:`float`=***`0.0`***, **`bias`**:`bool`=***`True`***, **`scale`**:`bool`=***`True`***, **`act`**:[`Activation`](/text.models.transformer.html#Activation)=***`…`***, …)
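As a hedged sketch (tiny, made-up hyper-parameters; the only constraint relied on is that the sequence length stays within `ctx_len`), the transformer encoder is driven the same way:

```python
import torch
from fastai.text.models.transformer import Transformer

# Hypothetical, deliberately small hyper-parameters.
tfm = Transformer(vocab_sz=100, ctx_len=32, n_layers=2, n_heads=4,
                  d_model=64, d_head=16, d_inner=128)

x = torch.randint(0, 100, (8, 16))   # sequences no longer than ctx_len
raw_outputs, outputs = tfm(x)        # same (raw_outputs, outputs) convention as AWD_LSTM
```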
### `class` `TransformerXL` [source]

> `TransformerXL`(**`vocab_sz`**:`int`, **`ctx_len`**:`int`, **`n_layers`**:`int`, **`n_heads`**:`int`, **`d_model`**:`int`, **`d_head`**:`int`, **`d_inner`**:`int`, **`resid_p`**:`float`=***`0.0`***, **`attn_p`**:`float`=***`0.0`***, **`ff_p`**:`float`=***`0.0`***, **`embed_p`**:`float`=***`0.0`***, **`bias`**:`bool`=***`False`***, **`scale`**:`bool`=***`True`***, **`act`**:[`Activation`](/text.models.transformer.html#Activation)=***`…`***, …)
#### `reset` [source]

> `reset`()
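A sketch along the same lines, again with made-up sizes; `reset` is assumed here to clear the internal memory Transformer-XL keeps between batches:

```python
import torch
from fastai.text.models.transformer import TransformerXL

# Hypothetical, deliberately small hyper-parameters.
txl = TransformerXL(vocab_sz=100, ctx_len=32, n_layers=2, n_heads=4,
                    d_model=64, d_head=16, d_inner=128)

x = torch.randint(0, 100, (8, 16))
raw_outputs, outputs = txl(x)
txl.reset()                          # drop the stored memory between independent texts
```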
### `class` `LinearDecoder` [source]

> `LinearDecoder`(**`n_out`**:`int`, **`n_hid`**:`int`, **`output_p`**:`float`, **`tie_encoder`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)=***`None`***, **`bias`**:`bool`=***`True`***) :: [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
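This is where the "encoder plus decoder" assembly from the introduction comes together. A minimal sketch with illustrative sizes, tying the decoder weights to the encoder's embedding by hand (the library's higher-level helpers do the equivalent for you):

```python
import torch
from fastai.text.models import AWD_LSTM, LinearDecoder, SequentialRNN

vocab_sz, emb_sz, n_hid, n_layers = 100, 20, 30, 2   # illustrative sizes
enc = AWD_LSTM(vocab_sz, emb_sz, n_hid, n_layers, pad_token=1)
dec = LinearDecoder(vocab_sz, emb_sz, output_p=0.1, tie_encoder=enc.encoder)
lm  = SequentialRNN(enc, dec)

x = torch.randint(0, 100, (8, 10))
decoded, raw_outputs, outputs = lm(x)   # per-position scores over the vocabulary
lm.reset()                              # SequentialRNN forwards reset() to its children
```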
### `class` `PoolingLinearClassifier` [source]

> `PoolingLinearClassifier`(**`layers`**:`Collection`\[`int`\], **`drops`**:`Collection`\[`float`\]) :: [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
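A sketch of how the head can be sized, assuming the usual concat-pooling setup (last hidden state plus max- and mean-pooled states, hence the factor 3); the numbers are illustrative:

```python
from fastai.text.models import PoolingLinearClassifier

emb_sz, n_class = 400, 2   # illustrative sizes
head = PoolingLinearClassifier(layers=[emb_sz * 3, 50, n_class], drops=[0.2, 0.1])
```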
### `class` `EmbeddingDropout` [source]

> `EmbeddingDropout`(**`emb`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), **`embed_p`**:`float`) :: [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
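Dropout is applied to whole embedding rows, so with `embed_p=0.5` roughly half of the looked-up vectors come back as zeros (and the rest are rescaled). A small sketch:

```python
import torch
from torch import nn
from fastai.text.models import EmbeddingDropout

enc = nn.Embedding(100, 7, padding_idx=1)
enc_dp = EmbeddingDropout(enc, 0.5)

tst_input = torch.randint(0, 100, (8,))
enc_dp(tst_input)   # (8, 7); some rows are entirely zeroed out
```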
### `class` `RNNDropout` [source]

> `RNNDropout`(**`p`**:`float`=***`0.5`***) :: [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
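This is the "locked"/variational dropout used throughout the AWD-LSTM: one mask is sampled and then reused at every position of the sequence dimension. A small sketch:

```python
import torch
from fastai.text.models import RNNDropout

dp = RNNDropout(0.3)
tst_input = torch.randn(3, 3, 7)
dp(tst_input)   # the zeroed channels repeat across the sequence dimension
```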
### `class` `WeightDropout` [source]

> `WeightDropout`(**`module`**:[`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), **`weight_p`**:`float`, **`layer_names`**:`StrList`=***`['weight_hh_l0']`***) :: [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
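The module wraps another module and applies dropout to the named weight tensors (by default the LSTM's hidden-to-hidden matrix) before each forward pass. A small sketch:

```python
import torch
from torch import nn
from fastai.text.models import WeightDropout

module = nn.LSTM(5, 2)
dp_module = WeightDropout(module, 0.4)

tst_input = torch.randn(4, 20, 5)                     # (seq_len, batch, input) for a plain nn.LSTM
h = (torch.zeros(1, 20, 2), torch.zeros(1, 20, 2))
out, new_h = dp_module(tst_input, h)                  # weight_hh_l0 had dropout applied for this pass
```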
### `class` `PositionalEncoding` [source]

> `PositionalEncoding`(**`d`**:`int`) :: [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
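A sketch assuming the module maps a 1D tensor of (float) positions to one `d`-dimensional sinusoidal vector per position:

```python
import torch
from fastai.text.models.transformer import PositionalEncoding

pe = PositionalEncoding(20)
pe(torch.arange(0, 10, dtype=torch.float))   # (10, 20): sin/cos features for 10 positions
```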
### `class` `DecoderLayer` [source]

> `DecoderLayer`(**`n_heads`**:`int`, **`d_model`**:`int`, **`d_head`**:`int`, **`d_inner`**:`int`, **`resid_p`**:`float`=***`0.0`***, **`attn_p`**:`float`=***`0.0`***, **`ff_p`**:`float`=***`0.0`***, **`bias`**:`bool`=***`True`***, **`scale`**:`bool`=***`True`***, **`act`**:[`Activation`](/text.models.transformer.html#Activation)=***`…`***, …)
### `class` `MultiHeadAttention` [source]

> `MultiHeadAttention`(**`n_heads`**:`int`, **`d_model`**:`int`, **`d_head`**:`int`=***`None`***, **`resid_p`**:`float`=***`0.0`***, **`attn_p`**:`float`=***`0.0`***, **`bias`**:`bool`=***`True`***, **`scale`**:`bool`=***`True`***) :: [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)
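A sketch assuming the module performs self-attention over a `(batch, seq_len, d_model)` input and returns a tensor of the same shape (with the residual connection and layer norm included):

```python
import torch
from fastai.text.models.transformer import MultiHeadAttention

attn = MultiHeadAttention(n_heads=4, d_model=64, d_head=16)   # illustrative sizes
x = torch.randn(2, 10, 64)
attn(x).shape   # torch.Size([2, 10, 64])
```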
### `class` `MultiHeadRelativeAttention` [source]

> `MultiHeadRelativeAttention`(**`n_heads`**:`int`, **`d_model`**:`int`, **`d_head`**:`int`, **`resid_p`**:`float`=***`0.0`***, **`attn_p`**:`float`=***`0.0`***, **`bias`**:`bool`=***`True`***, **`scale`**:`bool`=***`True`***) :: [`MultiHeadAttention`](/text.models.transformer.html#MultiHeadAttention)
### `class` `SequentialRNN` [source]

> `SequentialRNN`(**\*`args`**) :: [`Sequential`](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential)
#### `reset` [source]

> `reset`()
### `dropout_mask` [source]

> `dropout_mask`(**`x`**:`Tensor`, **`sz`**:`Collection`\[`int`\], **`p`**:`float`)
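The mask is drawn once with the requested size and rescaled by `1/(1-p)`, so it can be broadcast against `x` (this is what `RNNDropout` relies on). A small sketch:

```python
import torch
from fastai.text.models import dropout_mask

tst_input = torch.randn(3, 3, 7)
dropout_mask(tst_input, (3, 1, 7), 0.3)   # entries are 0 or 1/(1-0.3), broadcastable over dim 1
```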
### `feed_forward` [source]

> `feed_forward`(**`d_model`**:`int`, **`d_ff`**:`int`, **`ff_p`**:`float`=***`0.0`***, **`act`**:[`Activation`](/text.models.transformer.html#Activation)=***`…`***, …)
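A sketch assuming the returned block is the transformer's position-wise feed-forward layer, mapping `(…, d_model)` inputs back to the same size through an inner `d_ff` projection:

```python
import torch
from fastai.text.models.transformer import feed_forward

ff = feed_forward(d_model=64, d_ff=256, ff_p=0.1)   # illustrative sizes
ff(torch.randn(2, 10, 64)).shape                    # torch.Size([2, 10, 64])
```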
#### `forward` [source]

> `forward`(**\*`args`**:`ArgStar`)
#### `forward` [source]

> `forward`(**`words`**:`LongTensor`, **`scale`**:`Optional`\[`float`\]=***`None`***) → `Tensor`
#### `forward` [source]

> `forward`(**`x`**:`Tensor`) → `Tensor`
#### `reset` [source]

> `reset`()
#### `forward` [source]

> `forward`(**`input`**:`Tuple`\[`Tensor`, `Tensor`, `Tensor`\]) → `Tuple`\[`Tensor`, `Tensor`, `Tensor`\]
#### `forward` [source]

> `forward`(**`input`**:`Tuple`\[`Tensor`, `Tensor`\]) → `Tuple`\[`Tensor`, `Tensor`, `Tensor`\]