{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# NLP datasets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [],
"source": [
"from fastai.gen_doc.nbdoc import *\n",
"from fastai.text import * \n",
"from fastai.gen_doc.nbdoc import *\n",
"from fastai import *"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This module contains the [`TextDataset`](/text.data.html#TextDataset) class, which is the main dataset you should use for your NLP tasks. It automatically does the preprocessing steps described in [`text.transform`](/text.transform.html#text.transform). It also contains all the functions to quickly get a [`TextDataBunch`](/text.data.html#TextDataBunch) ready."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Quickly assemble your data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You should get your data in one of the following formats to make the most of the fastai library and use one of the factory methods of one of the [`TextDataBunch`](/text.data.html#TextDataBunch) classes:\n",
"- raw text files in folders train, valid, test in an ImageNet style,\n",
"- a csv where some column(s) gives the label(s) and the folowwing one the associated text,\n",
"- a dataframe structured the same way,\n",
"- tokens and labels arrays,\n",
"- ids, vocabulary (correspondance id to word) and labels.\n",
"\n",
"If you are assembling the data for a language model, you should define your labels as always 0 to respect those formats. The first time you create a [`DataBunch`](/basic_data.html#DataBunch) with one of those functions, your data will be preprocessed automatically. You can save it, so that the next time you call it is almost instantaneous. \n",
"\n",
"Below are the classes that help assembling the raw data in a [`DataBunch`](/basic_data.html#DataBunch) suitable for NLP."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"
class TextLMDataBunch[source]
\n",
"\n",
"> TextLMDataBunch(`train_dl`:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), `valid_dl`:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), `test_dl`:`Optional`\\[[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader)\\]=`None`, `device`:[`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch-device)=`None`, `tfms`:`Optional`\\[`Collection`\\[`Callable`\\]\\]=`None`, `path`:`PathOrStr`=`'.'`, `collate_fn`:`Callable`=`'data_collate'`) :: [`TextDataBunch`](/text.data.html#TextDataBunch)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextLMDataBunch, title_level=3, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a [`DataBunch`](/basic_data.html#DataBunch) suitable for language modeling: all the texts in the [`datasets`](/datasets.html#datasets) are concatenated and the labels are ignored. Instead, the target is the next word in the sentence."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> show_batch(`rows`:`int`=`5`, `ds_type`:[`DatasetType`](/basic_data.html#DatasetType)=``, `kwargs`)\n",
"\n",
"Show a batch of data in `ds_type` on a few `rows`. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextLMDataBunch.show_batch)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"class TextClasDataBunch[source]
\n",
"\n",
"> TextClasDataBunch(`train_dl`:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), `valid_dl`:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), `test_dl`:`Optional`\\[[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader)\\]=`None`, `device`:[`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch-device)=`None`, `tfms`:`Optional`\\[`Collection`\\[`Callable`\\]\\]=`None`, `path`:`PathOrStr`=`'.'`, `collate_fn`:`Callable`=`'data_collate'`) :: [`TextDataBunch`](/text.data.html#TextDataBunch)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextClasDataBunch, title_level=3, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a [`DataBunch`](/basic_data.html#DataBunch) suitable for a text classifier: all the texts are grouped by length (with a bit of randomness for the training set) then padded."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> show_batch(`rows`:`int`=`5`, `ds_type`:[`DatasetType`](/basic_data.html#DatasetType)=``, `kwargs`)\n",
"\n",
"Show a batch of data in `ds_type` on a few `rows`. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextClasDataBunch.show_batch)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"class TextDataBunch[source]
\n",
"\n",
"> TextDataBunch(`train_dl`:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), `valid_dl`:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), `test_dl`:`Optional`\\[[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader)\\]=`None`, `device`:[`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch-device)=`None`, `tfms`:`Optional`\\[`Collection`\\[`Callable`\\]\\]=`None`, `path`:`PathOrStr`=`'.'`, `collate_fn`:`Callable`=`'data_collate'`) :: [`DataBunch`](/basic_data.html#DataBunch)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextDataBunch, title_level=3, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a [`DataBunch`](/basic_data.html#DataBunch) with the raw texts. This is only going to work if they all ahve the same lengths."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Factory methods (TextDataBunch)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"All those classes have the following factory methods."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> from_folder(`path`:`PathOrStr`, `train`:`str`=`'train'`, `valid`:`str`=`'valid'`, `test`:`Optional`\\[`str`\\]=`None`, `classes`:`ArgStar`=`None`, `tokenizer`:[`Tokenizer`](/text.transform.html#Tokenizer)=`None`, `vocab`:[`Vocab`](/text.transform.html#Vocab)=`None`, `kwargs`)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextDataBunch.from_folder, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This function will create a [`DataBunch`](/basic_data.html#DataBunch) from texts placed in `path` in a [`train`](/train.html#train), `valid` and maybe `test` folders. Text files in the [`train`](/train.html#train) and `valid` folders should be places in subdirectories according to their classes (always the same for a language model) and the ones for the `test` folder should all be placed there directly. `tokenizer` will be used to parse those texts into tokens. The `shuffle` flag will optionally shuffle the texts found.\n",
"\n",
"You can pass a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned for instance). kwargs will be split between the [`TextDataset`](/text.data.html#TextDataset) function and to the class initialization, you can precise there parameters such as `max_vocab`, `chunksize`, `min_freq`, `n_labels` (see the [`TextDataset`](/text.data.html#TextDataset) documentation) or `bs`, `bptt` and `pad_idx` (see the sections LM data and classifier data)."
]
},
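{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch (the `my_texts` folder and its `neg`/`pos` subdirectories are hypothetical), a call for a two-class dataset might look like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fastai.text import *\n",
"\n",
"# Hypothetical layout:\n",
"# my_texts/train/neg/*.txt  my_texts/train/pos/*.txt\n",
"# my_texts/valid/neg/*.txt  my_texts/valid/pos/*.txt\n",
"path = Path('my_texts')\n",
"data_clas = TextClasDataBunch.from_folder(path, train='train', valid='valid',\n",
"                                          classes=['neg', 'pos'])"
]
},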
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> from_csv(`path`:`PathOrStr`, `csv_name`, `valid_pct`:`float`=`0.2`, `test`:`Optional`\\[`str`\\]=`None`, `tokenizer`:[`Tokenizer`](/text.transform.html#Tokenizer)=`None`, `vocab`:[`Vocab`](/text.transform.html#Vocab)=`None`, `classes`:`StrList`=`None`, `header`=`'infer'`, `text_cols`:`Union`\\[`int`, `Collection`\\[`int`\\], `str`, `StrList`\\]=`1`, `label_cols`:`Union`\\[`int`, `Collection`\\[`int`\\], `str`, `StrList`\\]=`0`, `label_delim`:`str`=`None`, `kwargs`) → [`DataBunch`](/basic_data.html#DataBunch)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextDataBunch.from_csv, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This function will create a [`DataBunch`](/basic_data.html#DataBunch) from texts placed in `path` in a csv file and maybe `test` csv file opened with `header`. You can specify `txt_cols` and `lbl_cols` or just an integer `n_labels` in which case the label(s) should be the first column(s). `tokenizer` will be used to parse those texts into tokens.\n",
"\n",
"You can pass a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned for instance). kwargs will be split between the [`TextDataset`](/text.data.html#TextDataset) function and to the class initialization, you can precise there parameters such as `max_vocab`, `chunksize`, `min_freq`, `n_labels` (see the [`TextDataset`](/text.data.html#TextDataset) documentation) or `bs`, `bptt` and `pad_idx` (see the sections LM data and classifier data)."
]
},
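{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, with the IMDB sample csv used later in this page (columns `label` and `text`), a sketch of an explicit call:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fastai.text import *\n",
"\n",
"path = untar_data(URLs.IMDB_SAMPLE)\n",
"data = TextClasDataBunch.from_csv(path, 'texts.csv',\n",
"                                  text_cols='text', label_cols='label')"
]
},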
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> from_df(`path`:`PathOrStr`, `train_df`:`DataFrame`, `valid_df`:`DataFrame`, `test_df`:`OptDataFrame`=`None`, `tokenizer`:[`Tokenizer`](/text.transform.html#Tokenizer)=`None`, `vocab`:[`Vocab`](/text.transform.html#Vocab)=`None`, `classes`:`StrList`=`None`, `text_cols`:`Union`\\[`int`, `Collection`\\[`int`\\], `str`, `StrList`\\]=`1`, `label_cols`:`Union`\\[`int`, `Collection`\\[`int`\\], `str`, `StrList`\\]=`0`, `label_delim`:`str`=`None`, `kwargs`) → [`DataBunch`](/basic_data.html#DataBunch)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextDataBunch.from_df, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This function will create a [`DataBunch`](/basic_data.html#DataBunch) in `path` from texts in `train_df`, `valid_df` and maybe `test_df`. By default, those are opened with `header=infer` but you can specify another value in the kwargs. You can specify `txt_cols` and `lbl_cols` or just an integer `n_labels` in which case the label(s) should be the first column(s). `tokenizer` will be used to parse those texts into tokens.\n",
"\n",
"You can pass a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned for instance). kwargs will be split between the [`TextDataset`](/text.data.html#TextDataset) function and to the class initialization, you can precise there parameters such as `max_vocab`, `chunksize`, `min_freq`, `n_labels` (see the [`TextDataset`](/text.data.html#TextDataset) documentation) or `bs`, `bptt` and `pad_idx` (see the sections LM data and classifier data)."
]
},
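{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch using the IMDB sample and its `is_valid` column to split the rows (assuming the csv layout shown later in this page):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fastai.text import *\n",
"\n",
"path = untar_data(URLs.IMDB_SAMPLE)\n",
"df = pd.read_csv(path/'texts.csv')\n",
"# Split the rows on the is_valid flag\n",
"train_df, valid_df = df[~df['is_valid']], df[df['is_valid']]\n",
"data = TextClasDataBunch.from_df(path, train_df, valid_df,\n",
"                                 text_cols='text', label_cols='label')"
]
},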
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> from_tokens(`path`:`PathOrStr`, `trn_tok`:`Tokens`, `trn_lbls`:`Collection`\\[`Union`\\[`int`, `float`\\]\\], `val_tok`:`Tokens`, `val_lbls`:`Collection`\\[`Union`\\[`int`, `float`\\]\\], `vocab`:[`Vocab`](/text.transform.html#Vocab)=`None`, `tst_tok`:`Tokens`=`None`, `classes`:`ArgStar`=`None`, `kwargs`) → [`DataBunch`](/basic_data.html#DataBunch)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextDataBunch.from_tokens, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This function will create a [`DataBunch`](/basic_data.html#DataBunch) from `trn_tok`, `trn_lbls`, `val_tok`, `val_lbls` and maybe `tst_tok`.\n",
"\n",
"You can pass a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned for instance). kwargs will be split between the [`TextDataset`](/text.data.html#TextDataset) function and to the class initialization, you can precise there parameters such as `max_vocab`, `chunksize`, `min_freq`, `n_labels`, `tok_suff` and `lbl_suff` (see the [`TextDataset`](/text.data.html#TextDataset) documentation) or `bs`, `bptt` and `pad_idx` (see the sections LM data and classifier data)."
]
},
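{
"cell_type": "markdown",
"metadata": {},
"source": [
"A toy sketch with texts already tokenized by your own pipeline (the tokens and labels below are made up for illustration):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fastai.text import *\n",
"\n",
"# Toy, pre-tokenized texts with binary labels\n",
"trn_tok = [['xxbos', 'this', 'movie', 'is', 'great'],\n",
"           ['xxbos', 'this', 'movie', 'is', 'awful']]\n",
"trn_lbls = [1, 0]\n",
"val_tok = [['xxbos', 'this', 'movie', 'is', 'great']]\n",
"val_lbls = [1]\n",
"data = TextClasDataBunch.from_tokens('.', trn_tok, trn_lbls, val_tok, val_lbls,\n",
"                                     classes=['neg', 'pos'])"
]
},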
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> from_ids(`path`:`PathOrStr`, `vocab`:[`Vocab`](/text.transform.html#Vocab), `train_ids`:`Collection`\\[`Collection`\\[`int`\\]\\], `valid_ids`:`Collection`\\[`Collection`\\[`int`\\]\\], `test_ids`:`Collection`\\[`Collection`\\[`int`\\]\\]=`None`, `train_lbls`:`Collection`\\[`Union`\\[`int`, `float`\\]\\]=`None`, `valid_lbls`:`Collection`\\[`Union`\\[`int`, `float`\\]\\]=`None`, `classes`:`ArgStar`=`None`, `processor`:[`PreProcessor`](/data_block.html#PreProcessor)=`None`, `kwargs`) → [`DataBunch`](/basic_data.html#DataBunch)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextDataBunch.from_ids, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This function will create a [`DataBunch`](/basic_data.html#DataBunch) in `path` from texts already processed into `trn_ids`, `trn_lbls`, `val_ids`, `val_lbls` and maybe `tst_ids`. You can specify the corresponding `classes` if applciable. You must specify the `vocab` so that the [`RNNLearner`](/text.learner.html#RNNLearner) class can later infer the corresponding sizes in the model it will create. kwargs will be passed to the class initialization."
]
},
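{
"cell_type": "markdown",
"metadata": {},
"source": [
"A toy sketch with texts already numericalized (the ids and the six-token vocabulary below are made up for illustration):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"from fastai.text import *\n",
"\n",
"# Toy vocabulary and pre-numericalized texts\n",
"vocab = Vocab(['xxunk', 'xxpad', 'the', 'movie', 'good', 'bad'])\n",
"train_ids = [np.array([2, 3, 4]), np.array([2, 3, 5])]\n",
"valid_ids = [np.array([2, 3, 4])]\n",
"data = TextClasDataBunch.from_ids('.', vocab, train_ids, valid_ids,\n",
"                                  train_lbls=[1, 0], valid_lbls=[1],\n",
"                                  classes=['neg', 'pos'])"
]
},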
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load and save"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To avoid losing time preprocessing the text data more than once, you should save/load your [`TextDataBunch`](/text.data.html#TextDataBunch) using thse methods."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> load(`path`:`PathOrStr`, `cache_name`:`PathOrStr`=`'tmp'`, `processor`:[`PreProcessor`](/data_block.html#PreProcessor)=`None`, `kwargs`)\n",
"\n",
"Load a [`TextDataBunch`](/text.data.html#TextDataBunch) from `path/cache_name`. `kwargs` are passed to the dataloader creation. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextDataBunch.load)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> save(`cache_name`:`PathOrStr`=`'tmp'`)\n",
"\n",
"Save the [`DataBunch`](/basic_data.html#DataBunch) in `self.path/cache_name` folder. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextDataBunch.save)"
]
},
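{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the save/load round trip on the IMDB sample (the cache folder name `tmp_clas` is arbitrary):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fastai.text import *\n",
"\n",
"path = untar_data(URLs.IMDB_SAMPLE)\n",
"data_clas = TextClasDataBunch.from_csv(path, 'texts.csv')  # slow the first time\n",
"data_clas.save('tmp_clas')                                 # cache the preprocessing\n",
"data_clas = TextClasDataBunch.load(path, 'tmp_clas')       # near-instant afterwards"
]
},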
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Example"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Untar the IMDB sample dataset if not already done:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"PosixPath('/home/ubuntu/.fastai/data/imdb_sample')"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"path = untar_data(URLs.IMDB_SAMPLE)\n",
"path"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since it comes in the form of csv files, we will use the corresponding `text_data` method. Here is an overview of what your file you should look like:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"
\n",
" \n",
" \n",
" | \n",
" label | \n",
" text | \n",
" is_valid | \n",
"
\n",
" \n",
" \n",
" \n",
" | 0 | \n",
" negative | \n",
" Un-bleeping-believable! Meg Ryan doesn't even ... | \n",
" False | \n",
"
\n",
" \n",
" | 1 | \n",
" positive | \n",
" This is a extremely well-made film. The acting... | \n",
" False | \n",
"
\n",
" \n",
" | 2 | \n",
" negative | \n",
" Every once in a long while a movie will come a... | \n",
" False | \n",
"
\n",
" \n",
" | 3 | \n",
" positive | \n",
" Name just says it all. I watched this movie wi... | \n",
" False | \n",
"
\n",
" \n",
" | 4 | \n",
" negative | \n",
" This movie succeeds at being one of the most u... | \n",
" False | \n",
"
\n",
" \n",
"
\n",
"
"
],
"text/plain": [
" label text is_valid\n",
"0 negative Un-bleeping-believable! Meg Ryan doesn't even ... False\n",
"1 positive This is a extremely well-made film. The acting... False\n",
"2 negative Every once in a long while a movie will come a... False\n",
"3 positive Name just says it all. I watched this movie wi... False\n",
"4 negative This movie succeeds at being one of the most u... False"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pd.read_csv(path/'texts.csv').head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And here is a simple way of creating your [`DataBunch`](/basic_data.html#DataBunch) for language modelling or classification."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data_lm = TextLMDataBunch.from_csv(Path(path), 'texts.csv')\n",
"data_clas = TextClasDataBunch.from_csv(Path(path), 'texts.csv')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The TextList input classes"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Behind the scenes, the previous functions will create a training, validation and maybe test [`TextList`](/text.data.html#TextList) that will be tokenized and numericalized (if needed) using [`PreProcessor`](/data_block.html#PreProcessor)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> Text(`ids`, `text`, `is_lm`) :: [`ItemBase`](/core.html#ItemBase)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(Text, doc_string=False, title_level=3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Basic item for text data, contains the numericalized `ids` and the corresponding [`text`](/text.html#text)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> show_xys(`xs`, `ys`, `max_len`:`int`=`70`)\n",
"\n",
"Show the `xs` and `ys`. `max_len` is the maximum number of tokens displayed. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(Text.show_xys)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> show_xyzs(`xs`, `ys`, `zs`, `max_len`:`int`=`70`)\n",
"\n",
"Show `xs` (inputs), `ys` (targets) and `zs` (predictions). `max_len` is the maximum number of tokens displayed. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(Text.show_xyzs)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> TextList(`items`:`Iterator`, `vocab`:[`Vocab`](/text.transform.html#Vocab)=`None`, `pad_idx`:`int`=`1`, `kwargs`) :: [`ItemList`](/data_block.html#ItemList)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextList, title_level=3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The basic [`ItemList`](/data_block.html#ItemList) for text data in `items` with the corresponding `vocab`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> label_for_lm(`kwargs`)\n",
"\n",
"A special labelling method for language models. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextList.label_for_lm)"
]
},
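{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch of how a [`TextList`](/text.data.html#TextList) fits in the data block API (method names such as `random_split_by_pct` may differ across fastai versions):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fastai.text import *\n",
"\n",
"path = untar_data(URLs.IMDB_SAMPLE)\n",
"data_lm = (TextList.from_csv(path, 'texts.csv', cols='text')\n",
"           .random_split_by_pct(0.2)\n",
"           .label_for_lm()\n",
"           .databunch())"
]
},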
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> from_folder(`path`:`PathOrStr`=`'.'`, `extensions`:`StrList`=`{'.txt'}`, `vocab`:[`Vocab`](/text.transform.html#Vocab)=`None`, `processor`:[`PreProcessor`](/data_block.html#PreProcessor)=`None`, `kwargs`) → `TextList`\n",
"\n",
"Get the list of files in `path` that have a text suffix. `recurse` determines if we search subfolders. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextList.from_folder)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"class OpenFileProcessor[source]
\n",
"\n",
"> OpenFileProcessor(`ds`:`Collection`=`None`) :: [`PreProcessor`](/data_block.html#PreProcessor)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(OpenFileProcessor, title_level=3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Simple `Preprocessor` that opens the files in items and reads the texts inside them."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> open_text(`fn`:`PathOrStr`, `enc`=`'utf-8'`)\n",
"\n",
"Read the text in `fn`. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(open_text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"class TokenizeProcessor[source]
\n",
"\n",
"> TokenizeProcessor(`ds`:[`ItemList`](/data_block.html#ItemList)=`None`, `tokenizer`:[`Tokenizer`](/text.transform.html#Tokenizer)=`None`, `chunksize`:`int`=`10000`, `mark_fields`:`bool`=`False`) :: [`PreProcessor`](/data_block.html#PreProcessor)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TokenizeProcessor, title_level=3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Simple [`PreProcessor`](/data_block.html#PreProcessor) that tokenizes the texts in `items` using `tokenizer` by bits of `chunsize`. If `mark_fields` is `True`, add field tokens."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"class NumericalizeProcessor[source]
\n",
"\n",
"> NumericalizeProcessor(`ds`:[`ItemList`](/data_block.html#ItemList)=`None`, `vocab`:[`Vocab`](/text.transform.html#Vocab)=`None`, `max_vocab`:`int`=`60000`, `min_freq`:`int`=`2`) :: [`PreProcessor`](/data_block.html#PreProcessor)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(NumericalizeProcessor, title_level=3, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Numericalize the tokens with `vocab` (if not None) otherwise create one with `max_vocab` and `min_freq` from tokens."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Language Model data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A language model is trained to guess what the next word is inside a flow of words. We don't feed it the different texts separately but concatenate them all together in a big array. To create the batches, we split this array into `bs` chuncks of continuous texts. Note that in all NLP tasks, we use the pytoch convention of sequence length being the first dimension (and batch size being the second one) so we transpose that array so that we can read the chunks of texts in columns. Here is an example of batch from our imdb sample dataset. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"
\n",
" \n",
" \n",
" | \n",
" 0 | \n",
" 1 | \n",
" 2 | \n",
" 3 | \n",
" 4 | \n",
" 5 | \n",
" 6 | \n",
" 7 | \n",
" 8 | \n",
" 9 | \n",
"
\n",
" \n",
" \n",
" \n",
" | 0 | \n",
" xxbos | \n",
" and | \n",
" - | \n",
" on | \n",
" xxmaj | \n",
" ) | \n",
" the | \n",
" worst | \n",
" . | \n",
" of | \n",
"
\n",
" \n",
" | 1 | \n",
" xxfld | \n",
" severely | \n",
" life | \n",
" stage | \n",
" victor | \n",
" . | \n",
" actors | \n",
" effects | \n",
" xxmaj | \n",
" the | \n",
"
\n",
" \n",
" | 2 | \n",
" 1 | \n",
" retarded | \n",
" marching | \n",
" in | \n",
" xxmaj | \n",
" xxmaj | \n",
" make | \n",
" ever | \n",
" may | \n",
" movie | \n",
"
\n",
" \n",
" | 3 | \n",
" xxmaj | \n",
" at | \n",
" band | \n",
" a | \n",
" vargas | \n",
" from | \n",
" you | \n",
" . | \n",
" not | \n",
" lying | \n",
"
\n",
" \n",
" | 4 | \n",
" this | \n",
" it | \n",
" whose | \n",
" very | \n",
" is | \n",
" there | \n",
" believe | \n",
" xxmaj | \n",
" be | \n",
" in | \n",
"
\n",
" \n",
" | 5 | \n",
" film | \n",
" 's | \n",
" xxunk | \n",
" xxunk | \n",
" that | \n",
" , | \n",
" they | \n",
" the | \n",
" spectacular | \n",
" bed | \n",
"
\n",
" \n",
" | 6 | \n",
" was | \n",
" worst | \n",
" for | \n",
" environment | \n",
" he | \n",
" he | \n",
" are | \n",
" xxmaj | \n",
" , | \n",
" . | \n",
"
\n",
" \n",
" | 7 | \n",
" a | \n",
" . | \n",
" a | \n",
" . | \n",
" 's | \n",
" is | \n",
" that | \n",
" vietnam | \n",
" but | \n",
" xxmaj | \n",
"
\n",
" \n",
" | 8 | \n",
" big | \n",
" xxmaj | \n",
" xxmaj | \n",
" ( | \n",
" a | \n",
" xxunk | \n",
" person | \n",
" scenes | \n",
" it | \n",
" the | \n",
"
\n",
" \n",
" | 9 | \n",
" disappointment | \n",
" xxunk | \n",
" labor | \n",
" xxmaj | \n",
" sexually | \n",
" to | \n",
" , | \n",
" are | \n",
" 's | \n",
" best | \n",
"
\n",
" \n",
" | 10 | \n",
" . | \n",
" xxmaj | \n",
" xxmaj | \n",
" with | \n",
" active | \n",
" seek | \n",
" it | \n",
" shot | \n",
" concept | \n",
" actresses | \n",
"
\n",
" \n",
" | 11 | \n",
" \\n\\n | \n",
" xxunk | \n",
" day | \n",
" songs | \n",
" teenager | \n",
" out | \n",
" 's | \n",
" in | \n",
" is | \n",
" in | \n",
"
\n",
" \n",
" | 12 | \n",
" i | \n",
" was | \n",
" parade | \n",
" of | \n",
" with | \n",
" a | \n",
" a | \n",
" xxunk | \n",
" pretty | \n",
" the | \n",
"
\n",
" \n",
" | 13 | \n",
" take | \n",
" xxunk | \n",
" provide | \n",
" this | \n",
" a | \n",
" camp | \n",
" much | \n",
" xxunk | \n",
" xxunk | \n",
" world | \n",
"
\n",
" \n",
" | 14 | \n",
" the | \n",
" the | \n",
" discipline | \n",
" strength | \n",
" xxunk | \n",
" where | \n",
" more | \n",
" , | \n",
" and | \n",
" can | \n",
"
\n",
" \n",
" | 15 | \n",
" opposite | \n",
" bottom | \n",
" and | \n",
" , | \n",
" the | \n",
" xxmaj | \n",
" enjoyable | \n",
" i | \n",
" it | \n",
" not | \n",
"
\n",
" \n",
" | 16 | \n",
" view | \n",
" of | \n",
" purpose | \n",
" you | \n",
" size | \n",
" american | \n",
" movie | \n",
" xxunk | \n",
" delivers | \n",
" make | \n",
"
\n",
" \n",
" | 17 | \n",
" of | \n",
" the | \n",
" to | \n",
" do | \n",
" of | \n",
" xxunk | \n",
" ! | \n",
" ! | \n",
" in | \n",
" anything | \n",
"
\n",
" \n",
" | 18 | \n",
" the | \n",
" xxunk | \n",
" their | \n",
" n't | \n",
" xxmaj | \n",
" are | \n",
" xxbos | \n",
" xxmaj | \n",
" the | \n",
" very | \n",
"
\n",
" \n",
" | 19 | \n",
" critics | \n",
" here | \n",
" lives | \n",
" need | \n",
" manhattan | \n",
" supposedly | \n",
" xxfld | \n",
" lou | \n",
" right | \n",
" interesting | \n",
"
\n",
" \n",
"
\n",
"
"
],
"text/plain": [
" 0 1 2 3 4 5 \\\n",
"0 xxbos and - on xxmaj ) \n",
"1 xxfld severely life stage victor . \n",
"2 1 retarded marching in xxmaj xxmaj \n",
"3 xxmaj at band a vargas from \n",
"4 this it whose very is there \n",
"5 film 's xxunk xxunk that , \n",
"6 was worst for environment he he \n",
"7 a . a . 's is \n",
"8 big xxmaj xxmaj ( a xxunk \n",
"9 disappointment xxunk labor xxmaj sexually to \n",
"10 . xxmaj xxmaj with active seek \n",
"11 \\n\\n xxunk day songs teenager out \n",
"12 i was parade of with a \n",
"13 take xxunk provide this a camp \n",
"14 the the discipline strength xxunk where \n",
"15 opposite bottom and , the xxmaj \n",
"16 view of purpose you size american \n",
"17 of the to do of xxunk \n",
"18 the xxunk their n't xxmaj are \n",
"19 critics here lives need manhattan supposedly \n",
"\n",
" 6 7 8 9 \n",
"0 the worst . of \n",
"1 actors effects xxmaj the \n",
"2 make ever may movie \n",
"3 you . not lying \n",
"4 believe xxmaj be in \n",
"5 they the spectacular bed \n",
"6 are xxmaj , . \n",
"7 that vietnam but xxmaj \n",
"8 person scenes it the \n",
"9 , are 's best \n",
"10 it shot concept actresses \n",
"11 's in is in \n",
"12 a xxunk pretty the \n",
"13 much xxunk xxunk world \n",
"14 more , and can \n",
"15 enjoyable i it not \n",
"16 movie xxunk delivers make \n",
"17 ! ! in anything \n",
"18 xxbos xxmaj the very \n",
"19 xxfld lou right interesting "
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"path = untar_data(URLs.IMDB_SAMPLE)\n",
"data = TextLMDataBunch.from_csv(path, 'texts.csv')\n",
"x,y = next(iter(data.train_dl))\n",
"example = x[:20,:10].cpu()\n",
"texts = pd.DataFrame([data.train_ds.vocab.textify(l).split(' ') for l in example])\n",
"texts"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, as suggested in [this article](https://arxiv.org/abs/1708.02182) from Stephen Merity et al., we don't use a fixed `bptt` through the different batches but slightly change it from batch to batch."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([61, 64])\n",
"torch.Size([69, 64])\n",
"torch.Size([70, 64])\n",
"torch.Size([73, 64])\n",
"torch.Size([68, 64])\n"
]
}
],
"source": [
"iter_dl = iter(data.train_dl)\n",
"for _ in range(5):\n",
" x,y = next(iter_dl)\n",
" print(x.size())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is all done internally when we use [`TextLMDataBunch`](/text.data.html#TextLMDataBunch), by creating [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) using the following class:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"class LanguageModelLoader[source]
\n",
"\n",
"> LanguageModelLoader(`dataset`:[`LabelList`](/data_block.html#LabelList), `bs`:`int`=`64`, `bptt`:`int`=`70`, `backwards`:`bool`=`False`, `shuffle`:`bool`=`False`, `max_len`:`int`=`25`)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(LanguageModelLoader, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Takes the texts from `dataset` and concatenate them all, then create a big array with `bs` columns (transposed from the data source so that we read the texts in the columns). Spits batches with a size approximately equal to `bptt` but changing at every batch. If `backwards` is True, reverses the original text. If `shuffle` is True, we shuffle the texts before concatenating them together at the start of each epoch. `max_len` is the maximum amount we add to `bptt`."
]
},
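{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch of the idea (not the library's exact code), the sequence length used for each batch can be drawn like this, following Merity et al.:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def sample_seq_len(bptt: int = 70, max_len: int = 25) -> int:\n",
"    # Use bptt most of the time, occasionally half of it, then add\n",
"    # gaussian jitter; clamp the result to [5, bptt + max_len].\n",
"    base = bptt if np.random.random() < 0.95 else bptt / 2\n",
"    return max(5, min(int(np.random.normal(base, 5)), bptt + max_len))\n",
"\n",
"[sample_seq_len() for _ in range(5)]"
]
},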
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> batchify(`data`:`ndarray`) → `LongTensor`"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(LanguageModelLoader.batchify, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Called at the inialization to create the big array of text ids from the [`data`](/text.data.html#text.data) array."
]
},
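{
"cell_type": "markdown",
"metadata": {},
"source": [
"A simplified sketch of what such a `batchify` does, assuming a flat array of ids (not the library's exact code):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import torch\n",
"\n",
"def batchify_sketch(ids: np.ndarray, bs: int = 64) -> torch.LongTensor:\n",
"    # Keep a multiple of bs ids, then reshape so that each of the bs\n",
"    # columns is a contiguous stream of text (sequence dimension first).\n",
"    n = len(ids) // bs\n",
"    return torch.LongTensor(ids[:n * bs].reshape(bs, n).T.copy())\n",
"\n",
"batchify_sketch(np.arange(10), bs=2)"
]
},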
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> get_batch(`i`:`int`, `seq_len`:`int`) → `Tuple`\\[`LongTensor`, `LongTensor`\\]\n",
"\n",
"Create a batch at `i` of a given `seq_len`. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(LanguageModelLoader.get_batch)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Classifier data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When preparing the data for a classifier, we keep the different texts separate, which poses another challenge for the creation of batches: since they don't all have the same length, we can't easily collate them together in batches. To help with this we use two different techniques:\n",
"- padding: each text is padded with the `PAD` token to get all the ones we picked to the same size\n",
"- sorting the texts (ish): to avoid having together a very long text with a very short one (which would then have a lot of `PAD` tokens), we regroup the texts by order of length. For the training set, we still add some randomness to avoid showing the same batches at every step of the training.\n",
"\n",
"Here is an example of batch with padding (the padding index is 1, and the padding is applied before the sentences start)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 44, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 45, 44, 44, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 41, 45, 45, 44, 1, 1, 1, 1, 1, 1],\n",
" [ 7, 41, 41, 45, 44, 44, 44, 44, 1, 1],\n",
" [378, 2, 2, 41, 45, 45, 45, 45, 44, 44],\n",
" [ 0, 15, 48, 2, 41, 41, 41, 41, 45, 45],\n",
" [ 12, 22, 382, 57, 2, 13, 2, 2, 41, 41]], device='cuda:0')"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"path = untar_data(URLs.IMDB_SAMPLE)\n",
"data = TextClasDataBunch.from_csv(path, 'texts.csv')\n",
"iter_dl = iter(data.train_dl)\n",
"_ = next(iter_dl)\n",
"x,y = next(iter_dl)\n",
"x[:20,-10:]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is all done internally when we use [`TextClasDataBunch`](/text.data.html#TextClasDataBunch), by using the following classes:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"class SortSampler[source]
\n",
"\n",
"> SortSampler(`data_source`:`NPArrayList`, `key`:`KeyFunc`) :: [`Sampler`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Sampler)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(SortSampler, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"pytorch [`Sampler`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Sampler) to batchify the `data_source` by order of length of the texts. Used for the validation and (if applicable) the test set. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"class SortishSampler[source]
\n",
"\n",
"> SortishSampler(`data_source`:`NPArrayList`, `key`:`KeyFunc`, `bs`:`int`) :: [`Sampler`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Sampler)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(SortishSampler, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"pytorch [`Sampler`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Sampler) to batchify with size `bs` the `data_source` by order of length of the texts with a bit of randomness. Used for the training set."
]
},
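{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch of the sortish idea (not the library's exact code): shuffle the indices, cut them into megabatches, and sort each megabatch by text length, so batches are roughly homogeneous in length but differ between epochs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def sortish_order(lengths, bs):\n",
"    # Shuffle indices, cut into megabatches of bs*50 texts, then sort\n",
"    # each megabatch by text length (longest first).\n",
"    idxs = np.random.permutation(len(lengths))\n",
"    megabatches = [idxs[i:i + bs * 50] for i in range(0, len(idxs), bs * 50)]\n",
"    return np.concatenate([sorted(mb, key=lambda i: lengths[i], reverse=True)\n",
"                           for mb in megabatches])\n",
"\n",
"sortish_order(lengths=[5, 50, 12, 30, 7, 44], bs=2)"
]
},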
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> pad_collate(`samples`:`BatchSamples`, `pad_idx`:`int`=`1`, `pad_first`:`bool`=`True`) → `Tuple`\\[`LongTensor`, `LongTensor`\\]"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(pad_collate, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Function used by the pytorch [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) to collate the `samples` in batches while adding padding with `pad_idx`. If `pad_first` is True, padding is applied at the beginning (before the sentence starts) otherwise it's applied at the end."
]
},
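{
"cell_type": "markdown",
"metadata": {},
"source": [
"A simplified sketch of such a collate function, sequence-first like the batch shown above (not the library's exact code):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"def pad_collate_sketch(samples, pad_idx=1, pad_first=True):\n",
"    # samples: list of (sequence of ids, label) pairs of varying lengths\n",
"    max_len = max(len(x) for x, y in samples)\n",
"    res = torch.zeros(max_len, len(samples)).long() + pad_idx\n",
"    for i, (x, y) in enumerate(samples):\n",
"        if pad_first: res[-len(x):, i] = torch.LongTensor(x)\n",
"        else:         res[:len(x), i] = torch.LongTensor(x)\n",
"    return res, torch.tensor([y for x, y in samples])\n",
"\n",
"x, y = pad_collate_sketch([([5, 6, 7], 0), ([8, 9], 1)])\n",
"x"
]
},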
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Undocumented Methods - Methods moved below this line will intentionally be hidden"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> create(`train_ds`, `valid_ds`, `test_ds`=`None`, `path`:`PathOrStr`=`'.'`, `kwargs`) → [`DataBunch`](/basic_data.html#DataBunch)\n",
"\n",
"Create a [`TextDataBunch`](/text.data.html#TextDataBunch) in `path` from the `datasets` for language modelling. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextLMDataBunch.create)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> create(`train_ds`, `valid_ds`, `test_ds`=`None`, `path`:`PathOrStr`=`'.'`, `bs`=`64`, `pad_idx`=`1`, `pad_first`=`True`, `kwargs`) → [`DataBunch`](/basic_data.html#DataBunch)\n",
"\n",
"Function that transform the `datasets` in a [`DataBunch`](/basic_data.html#DataBunch) for classification. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextClasDataBunch.create)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> new(`items`:`Iterator`, `kwargs`) → `NumericalizedTextList`"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextList.new)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> get(`i`)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextList.get)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> process_one(`item`)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TokenizeProcessor.process_one)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> process(`ds`)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TokenizeProcessor.process)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> process_one(`item`)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(OpenFileProcessor.process_one)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> process(`ds`)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(NumericalizeProcessor.process)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> process_one(`item`)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(NumericalizeProcessor.process_one)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## New Methods - Please document or move to the undocumented section"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> Text(`ids`, `text`, `is_lm`) :: [`ItemBase`](/core.html#ItemBase)\n",
"\n",
"All transformable dataset items use this type. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(Text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> from_folder(`path`:`PathOrStr`=`'.'`, `extensions`:`StrList`=`{'.txt'}`, `vocab`:[`Vocab`](/text.transform.html#Vocab)=`None`, `processor`:[`PreProcessor`](/data_block.html#PreProcessor)=`None`, `kwargs`) → `TextList`\n",
"\n",
"Get the list of files in `path` that have a text suffix. `recurse` determines if we search subfolders. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextList.from_folder)"
]
}
],
"metadata": {
"jekyll": {
"keywords": "fastai",
"summary": "Basic dataset for NLP tasks and helper functions to create a DataBunch",
"title": "text.data"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}