{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# NLP datasets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [],
"source": [
"from fastai.gen_doc.nbdoc import *\n",
"from fastai.text import * \n",
"from fastai.gen_doc.nbdoc import *\n",
"from fastai import *"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This module contains the [`TextDataset`](/text.data.html#TextDataset) class, which is the main dataset you should use for your NLP tasks. It automatically does the preprocessing steps described in [`text.transform`](/text.transform.html#text.transform). It also contains all the functions to quickly get a [`TextDataBunch`](/text.data.html#TextDataBunch) ready."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Quickly assemble your data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You should get your data in one of the following formats to make the most of the fastai library and use one of the factory methods of one of the [`TextDataBunch`](/text.data.html#TextDataBunch) classes:\n",
"- raw text files in folders train, valid, test in an ImageNet style,\n",
"- a csv where some column(s) gives the label(s) and the folowwing one the associated text,\n",
"- a dataframe structured the same way,\n",
"- tokens and labels arrays,\n",
"- ids, vocabulary (correspondance id to word) and labels.\n",
"\n",
"If you are assembling the data for a language model, you should define your labels as always 0 to respect those formats. The first time you create a [`DataBunch`](/basic_data.html#DataBunch) with one of those functions, your data will be preprocessed automatically. You can save it, so that the next time you call it is almost instantaneous. \n",
"\n",
"Below are the classes that help assembling the raw data in a [`DataBunch`](/basic_data.html#DataBunch) suitable for NLP."
]
},
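{
"cell_type": "markdown",
"metadata": {},
"source": [
"Each of those formats maps to one of the factory methods below. Here is a minimal sketch of the correspondence; `path`, the dataframes and the token/id/label arrays are hypothetical placeholders:\n",
"\n",
"```python\n",
"# A sketch: one factory method per supported format (placeholder variables).\n",
"data = TextClasDataBunch.from_folder(path)                  # ImageNet-style folders\n",
"data = TextClasDataBunch.from_csv(path, 'texts.csv')        # csv with label and text columns\n",
"data = TextClasDataBunch.from_df(path, train_df, valid_df)  # dataframes\n",
"data = TextClasDataBunch.from_tokens(path, trn_tok, trn_lbls, val_tok, val_lbls)\n",
"data = TextClasDataBunch.from_ids(path, vocab, train_ids, valid_ids,\n",
"                                  train_lbls=train_lbls, valid_lbls=valid_lbls)\n",
"```"
]
},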
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"
class
TextLMDataBunch
[source]
\n",
"\n",
"> TextLMDataBunch
(`train_dl`:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), `valid_dl`:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), `test_dl`:`Optional`\\[[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader)\\]=`None`, `device`:[`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch-device)=`None`, `tfms`:`Optional`\\[`Collection`\\[`Callable`\\]\\]=`None`, `path`:`PathOrStr`=`'.'`, `collate_fn`:`Callable`=`'data_collate'`) :: [`TextDataBunch`](/text.data.html#TextDataBunch)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextLMDataBunch, title_level=3, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a [`DataBunch`](/basic_data.html#DataBunch) suitable for language modeling: all the texts in the [`datasets`](/datasets.html#datasets) are concatenated and the labels are ignored. Instead, the target is the next word in the sentence."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> show_batch
(`sep`=`' '`, `ds_type`:[`DatasetType`](/basic_data.html#DatasetType)=``, `rows`:`int`=`10`, `max_len`:`int`=`100`)\n",
"\n",
"Show `rows` texts from a batch of `ds_type`, tokens are joined with `sep`, truncated at `max_len`. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextLMDataBunch.show_batch)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"class
TextClasDataBunch
[source]
\n",
"\n",
"> TextClasDataBunch
(`train_dl`:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), `valid_dl`:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), `test_dl`:`Optional`\\[[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader)\\]=`None`, `device`:[`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch-device)=`None`, `tfms`:`Optional`\\[`Collection`\\[`Callable`\\]\\]=`None`, `path`:`PathOrStr`=`'.'`, `collate_fn`:`Callable`=`'data_collate'`) :: [`TextDataBunch`](/text.data.html#TextDataBunch)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextClasDataBunch, title_level=3, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a [`DataBunch`](/basic_data.html#DataBunch) suitable for a text classifier: all the texts are grouped by length (with a bit of randomness for the training set) then padded."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> show_batch
(`rows`:`int`=`None`, `ds_type`:[`DatasetType`](/basic_data.html#DatasetType)=``, `kwargs`)\n",
"\n",
"Show a batch of data in `ds_type` on a few `rows`. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextClasDataBunch.show_batch)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"class
TextDataBunch
[source]
\n",
"\n",
"> TextDataBunch
(`train_dl`:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), `valid_dl`:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), `test_dl`:`Optional`\\[[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader)\\]=`None`, `device`:[`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch-device)=`None`, `tfms`:`Optional`\\[`Collection`\\[`Callable`\\]\\]=`None`, `path`:`PathOrStr`=`'.'`, `collate_fn`:`Callable`=`'data_collate'`) :: [`DataBunch`](/basic_data.html#DataBunch)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextDataBunch, title_level=3, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a [`DataBunch`](/basic_data.html#DataBunch) with the raw texts. This is only going to work if they all ahve the same lengths."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Factory methods (TextDataBunch)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"All those classes have the following factory methods."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> from_folder
(`path`:`PathOrStr`, `train`:`str`=`'train'`, `valid`:`str`=`'valid'`, `test`:`Optional`\\[`str`\\]=`None`, `classes`:`ArgStar`=`None`, `tokenizer`:[`Tokenizer`](/text.transform.html#Tokenizer)=`None`, `vocab`:[`Vocab`](/text.transform.html#Vocab)=`None`, `kwargs`)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextDataBunch.from_folder, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This function will create a [`DataBunch`](/basic_data.html#DataBunch) from texts placed in `path` in a [`train`](/train.html#train), `valid` and maybe `test` folders. Text files in the [`train`](/train.html#train) and `valid` folders should be places in subdirectories according to their classes (always the same for a language model) and the ones for the `test` folder should all be placed there directly. `tokenizer` will be used to parse those texts into tokens. The `shuffle` flag will optionally shuffle the texts found.\n",
"\n",
"You can pass a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned for instance). kwargs will be split between the [`TextDataset`](/text.data.html#TextDataset) function and to the class initialization, you can precise there parameters such as `max_vocab`, `chunksize`, `min_freq`, `n_labels` (see the [`TextDataset`](/text.data.html#TextDataset) documentation) or `bs`, `bptt` and `pad_idx` (see the sections LM data and classifier data)."
]
},
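{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, on a hypothetical ImageNet-style layout with `train/neg`, `train/pos`, `valid/neg` and `valid/pos` subfolders of text files, the call could look like this (a sketch, not run here):\n",
"\n",
"```python\n",
"# Assumed layout: path/train/{neg,pos}/*.txt and path/valid/{neg,pos}/*.txt\n",
"data = TextClasDataBunch.from_folder(path, train='train', valid='valid',\n",
"                                     classes=['neg', 'pos'])\n",
"```"
]
},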
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> from_csv
(`path`:`PathOrStr`, `csv_name`, `valid_pct`:`float`=`0.2`, `test`:`Optional`\\[`str`\\]=`None`, `tokenizer`:[`Tokenizer`](/text.transform.html#Tokenizer)=`None`, `vocab`:[`Vocab`](/text.transform.html#Vocab)=`None`, `classes`:`StrList`=`None`, `header`=`'infer'`, `text_cols`:`Union`\\[`int`, `Collection`\\[`int`\\], `str`, `StrList`\\]=`1`, `label_cols`:`Union`\\[`int`, `Collection`\\[`int`\\], `str`, `StrList`\\]=`0`, `label_delim`:`str`=`None`, `kwargs`) → [`DataBunch`](/basic_data.html#DataBunch)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextDataBunch.from_csv, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This function will create a [`DataBunch`](/basic_data.html#DataBunch) from texts placed in `path` in a csv file and maybe `test` csv file opened with `header`. You can specify `txt_cols` and `lbl_cols` or just an integer `n_labels` in which case the label(s) should be the first column(s). `tokenizer` will be used to parse those texts into tokens.\n",
"\n",
"You can pass a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned for instance). kwargs will be split between the [`TextDataset`](/text.data.html#TextDataset) function and to the class initialization, you can precise there parameters such as `max_vocab`, `chunksize`, `min_freq`, `n_labels` (see the [`TextDataset`](/text.data.html#TextDataset) documentation) or `bs`, `bptt` and `pad_idx` (see the sections LM data and classifier data)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> from_df
(`path`:`PathOrStr`, `train_df`:`DataFrame`, `valid_df`:`DataFrame`, `test_df`:`OptDataFrame`=`None`, `tokenizer`:[`Tokenizer`](/text.transform.html#Tokenizer)=`None`, `vocab`:[`Vocab`](/text.transform.html#Vocab)=`None`, `classes`:`StrList`=`None`, `text_cols`:`Union`\\[`int`, `Collection`\\[`int`\\], `str`, `StrList`\\]=`1`, `label_cols`:`Union`\\[`int`, `Collection`\\[`int`\\], `str`, `StrList`\\]=`0`, `label_delim`:`str`=`None`, `kwargs`) → [`DataBunch`](/basic_data.html#DataBunch)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextDataBunch.from_df, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This function will create a [`DataBunch`](/basic_data.html#DataBunch) in `path` from texts in `train_df`, `valid_df` and maybe `test_df`. By default, those are opened with `header=infer` but you can specify another value in the kwargs. You can specify `txt_cols` and `lbl_cols` or just an integer `n_labels` in which case the label(s) should be the first column(s). `tokenizer` will be used to parse those texts into tokens.\n",
"\n",
"You can pass a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned for instance). kwargs will be split between the [`TextDataset`](/text.data.html#TextDataset) function and to the class initialization, you can precise there parameters such as `max_vocab`, `chunksize`, `min_freq`, `n_labels` (see the [`TextDataset`](/text.data.html#TextDataset) documentation) or `bs`, `bptt` and `pad_idx` (see the sections LM data and classifier data)."
]
},
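{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, starting from a single csv like the IMDB sample shown later, you could build the dataframes yourself and split on a boolean column (a sketch; the `text`, `label` and `is_valid` column names follow that sample file):\n",
"\n",
"```python\n",
"import pandas as pd\n",
"\n",
"# Split one dataframe into train/valid with a boolean column, then point\n",
"# text_cols/label_cols at the right columns.\n",
"df = pd.read_csv(path/'texts.csv')\n",
"train_df, valid_df = df[~df['is_valid']], df[df['is_valid']]\n",
"data = TextClasDataBunch.from_df(path, train_df, valid_df,\n",
"                                 text_cols='text', label_cols='label')\n",
"```"
]
},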
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> from_tokens
(`path`:`PathOrStr`, `trn_tok`:`Tokens`, `trn_lbls`:`Collection`\\[`Union`\\[`int`, `float`\\]\\], `val_tok`:`Tokens`, `val_lbls`:`Collection`\\[`Union`\\[`int`, `float`\\]\\], `vocab`:[`Vocab`](/text.transform.html#Vocab)=`None`, `tst_tok`:`Tokens`=`None`, `classes`:`ArgStar`=`None`, `kwargs`) → [`DataBunch`](/basic_data.html#DataBunch)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextDataBunch.from_tokens, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This function will create a [`DataBunch`](/basic_data.html#DataBunch) from `trn_tok`, `trn_lbls`, `val_tok`, `val_lbls` and maybe `tst_tok`.\n",
"\n",
"You can pass a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned for instance). kwargs will be split between the [`TextDataset`](/text.data.html#TextDataset) function and to the class initialization, you can precise there parameters such as `max_vocab`, `chunksize`, `min_freq`, `n_labels`, `tok_suff` and `lbl_suff` (see the [`TextDataset`](/text.data.html#TextDataset) documentation) or `bs`, `bptt` and `pad_idx` (see the sections LM data and classifier data)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> from_ids
(`path`:`PathOrStr`, `vocab`:[`Vocab`](/text.transform.html#Vocab), `train_ids`:`Collection`\\[`Collection`\\[`int`\\]\\], `valid_ids`:`Collection`\\[`Collection`\\[`int`\\]\\], `test_ids`:`Collection`\\[`Collection`\\[`int`\\]\\]=`None`, `train_lbls`:`Collection`\\[`Union`\\[`int`, `float`\\]\\]=`None`, `valid_lbls`:`Collection`\\[`Union`\\[`int`, `float`\\]\\]=`None`, `classes`:`ArgStar`=`None`, `processor`:[`PreProcessor`](/data_block.html#PreProcessor)=`None`, `kwargs`) → [`DataBunch`](/basic_data.html#DataBunch)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextDataBunch.from_ids, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This function will create a [`DataBunch`](/basic_data.html#DataBunch) in `path` from texts already processed into `trn_ids`, `trn_lbls`, `val_ids`, `val_lbls` and maybe `tst_ids`. You can specify the corresponding `classes` if applciable. You must specify the `vocab` so that the [`RNNLearner`](/text.learner.html#RNNLearner) class can later infer the corresponding sizes in the model it will create. kwargs will be passed to the class initialization."
]
},
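{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, if you kept the numericalized ids and the matching [`Vocab`](/text.transform.html#Vocab) from a previous run, you could rebuild a [`DataBunch`](/basic_data.html#DataBunch) like this (a sketch with placeholder arrays):\n",
"\n",
"```python\n",
"# train_ids/valid_ids are collections of collections of ints, the lbls arrays\n",
"# collections of ints, and vocab a previously built Vocab -- all placeholders.\n",
"data = TextClasDataBunch.from_ids(path, vocab, train_ids, valid_ids,\n",
"                                  train_lbls=train_lbls, valid_lbls=valid_lbls,\n",
"                                  classes=['neg', 'pos'])\n",
"```"
]
},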
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load and save"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To avoid losing time preprocessing the text data more than once, you should save/load your [`TextDataBunch`](/text.data.html#TextDataBunch) using thse methods."
]
},
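{
"cell_type": "markdown",
"metadata": {},
"source": [
"A typical round trip looks like this (a sketch; `'tmp_clas'` is an arbitrary cache folder name, the default being `'tmp'`):\n",
"\n",
"```python\n",
"data = TextClasDataBunch.from_csv(path, 'texts.csv')  # preprocessing happens here\n",
"data.save('tmp_clas')                                 # cache the ids and vocab in path/tmp_clas\n",
"data = TextClasDataBunch.load(path, 'tmp_clas')       # near-instant reload\n",
"```"
]
},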
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> load
(`path`:`PathOrStr`, `cache_name`:`PathOrStr`=`'tmp'`, `processor`:[`PreProcessor`](/data_block.html#PreProcessor)=`None`, `kwargs`)\n",
"\n",
"Load a [`TextDataBunch`](/text.data.html#TextDataBunch) from `path/cache_name`. `kwargs` are passed to the dataloader creation. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextDataBunch.load)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> save
(`cache_name`:`PathOrStr`=`'tmp'`)\n",
"\n",
"Save the [`DataBunch`](/basic_data.html#DataBunch) in `self.path/cache_name` folder. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextDataBunch.save)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Example"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Untar the IMDB sample dataset if not already done:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"PosixPath('/home/ubuntu/.fastai/data/imdb_sample')"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"path = untar_data(URLs.IMDB_SAMPLE)\n",
"path"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since it comes in the form of csv files, we will use the corresponding `text_data` method. Here is an overview of what your file you should look like:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"
\n",
" \n",
" \n",
" | \n",
" label | \n",
" text | \n",
" is_valid | \n",
"
\n",
" \n",
" \n",
" \n",
" 0 | \n",
" negative | \n",
" Un-bleeping-believable! Meg Ryan doesn't even ... | \n",
" False | \n",
"
\n",
" \n",
" 1 | \n",
" positive | \n",
" This is a extremely well-made film. The acting... | \n",
" False | \n",
"
\n",
" \n",
" 2 | \n",
" negative | \n",
" Every once in a long while a movie will come a... | \n",
" False | \n",
"
\n",
" \n",
" 3 | \n",
" positive | \n",
" Name just says it all. I watched this movie wi... | \n",
" False | \n",
"
\n",
" \n",
" 4 | \n",
" negative | \n",
" This movie succeeds at being one of the most u... | \n",
" False | \n",
"
\n",
" \n",
"
\n",
"
"
],
"text/plain": [
" label text is_valid\n",
"0 negative Un-bleeping-believable! Meg Ryan doesn't even ... False\n",
"1 positive This is a extremely well-made film. The acting... False\n",
"2 negative Every once in a long while a movie will come a... False\n",
"3 positive Name just says it all. I watched this movie wi... False\n",
"4 negative This movie succeeds at being one of the most u... False"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pd.read_csv(path/'texts.csv').head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And here is a simple way of creating your [`DataBunch`](/basic_data.html#DataBunch) for language modelling or classification."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data_lm = TextLMDataBunch.from_csv(Path(path), 'texts.csv')\n",
"data_clas = TextClasDataBunch.from_csv(Path(path), 'texts.csv')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The TextList input classes"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Behind the scenes, the previous functions will create a training, validation and maybe test [`TextList`](/text.data.html#TextList) that will be tokenized and numericalized (if needed) using [`PreProcessor`](/data_block.html#PreProcessor)."
]
},
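{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you need more flexibility than the factory methods offer, you can assemble the same thing step by step with the data block API. A minimal sketch for a language model on a folder of text files (note that the name of the splitting method may vary between fastai versions):\n",
"\n",
"```python\n",
"# A sketch using the data block API on an imdb-style folder of .txt files.\n",
"data_lm = (TextList.from_folder(path)  # collect and read the text files\n",
"           .random_split_by_pct(0.1)   # hold out 10% for validation (name may differ by version)\n",
"           .label_for_lm()             # a language model ignores the labels\n",
"           .databunch())\n",
"```"
]
},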
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> Text
(`ids`, `text`) :: [`ItemBase`](/core.html#ItemBase)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(Text, doc_string=False, title_level=3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Basic item for text data, contains the numericalized `ids` and the corresponding [`text`](/text.html#text)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> show_batch
(`idxs`:`Collection`\\[`int`\\], `rows`:`int`, `ds`:[`Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset), `max_len`:`int`=`50`)\n",
"\n",
"Show the texts in `idx` on a few `rows` from `ds`. `max_len` is the maximum number of tokens displayed. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(Text.show_batch)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> TextList
(`items`:`Iterator`, `vocab`:[`Vocab`](/text.transform.html#Vocab)=`None`, `kwargs`) :: [`ItemList`](/data_block.html#ItemList)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextList, title_level=3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The basic [`ItemList`](/data_block.html#ItemList) for text data in `items` with the corresponding `vocab`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> label_for_lm
(`kwargs`)\n",
"\n",
"A special labelling method for language models. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextList.label_for_lm)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> from_folder
(`path`:`PathOrStr`=`'.'`, `extensions`:`StrList`=`['.txt']`, `processor`:[`PreProcessor`](/data_block.html#PreProcessor)=`None`, `vocab`:[`Vocab`](/text.transform.html#Vocab)=`None`, `kwargs`) → `TextList`\n",
"\n",
"Get the list of files in `path` that have a text suffix. `recurse` determines if we search subfolders. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextList.from_folder)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"class
OpenFileProcessor
[source]
\n",
"\n",
"> OpenFileProcessor
() :: [`PreProcessor`](/data_block.html#PreProcessor)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(OpenFileProcessor, title_level=3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Simple `Preprocessor` that opens the files in items and reads the texts inside them."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> open_text
(`fn`:`PathOrStr`, `enc`=`'utf-8'`)\n",
"\n",
"Read the text in `fn`. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(open_text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"class
TokenizeProcessor
[source]
\n",
"\n",
"> TokenizeProcessor
(`tokenizer`:[`Tokenizer`](/text.transform.html#Tokenizer)=`None`, `chunksize`:`int`=`10000`, `mark_fields`:`bool`=`True`) :: [`PreProcessor`](/data_block.html#PreProcessor)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TokenizeProcessor, title_level=3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Simple [`PreProcessor`](/data_block.html#PreProcessor) that tokenizes the texts in `items` using `tokenizer` by bits of `chunsize`. If `mark_fields` is `True`, add field tokens."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"class
NumericalizeProcessor
[source]
\n",
"\n",
"> NumericalizeProcessor
(`vocab`:[`Vocab`](/text.transform.html#Vocab)=`None`, `max_vocab`:`int`=`60000`, `min_freq`:`int`=`2`) :: [`PreProcessor`](/data_block.html#PreProcessor)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(NumericalizeProcessor, title_level=3, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Numericalize the tokens with `vocab` (if not None) otherwise create one with `max_vocab` and `min_freq` from tokens."
]
},
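{
"cell_type": "markdown",
"metadata": {},
"source": [
"Put together, the default preprocessing amounts to these three processors applied in order. A sketch of the explicit pipeline (assuming, as fastai usually allows, that a list of processors can be passed):\n",
"\n",
"```python\n",
"# The default text preprocessing written out explicitly (a sketch).\n",
"processors = [OpenFileProcessor(),       # filenames -> raw texts\n",
"              TokenizeProcessor(tokenizer=Tokenizer(), chunksize=10000),  # texts -> tokens\n",
"              NumericalizeProcessor(max_vocab=60000, min_freq=2)]         # tokens -> ids\n",
"tl = TextList.from_folder(path, processor=processors)\n",
"```"
]
},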
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Language Model data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A language model is trained to guess what the next word is inside a flow of words. We don't feed it the different texts separately but concatenate them all together in a big array. To create the batches, we split this array into `bs` chuncks of continuous texts. Note that in all NLP tasks, we use the pytoch convention of sequence length being the first dimension (and batch size being the second one) so we transpose that array so that we can read the chunks of texts in columns. Here is an example of batch from our imdb sample dataset. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"
\n",
" \n",
" \n",
" | \n",
" 0 | \n",
" 1 | \n",
" 2 | \n",
" 3 | \n",
" 4 | \n",
" 5 | \n",
" 6 | \n",
" 7 | \n",
" 8 | \n",
" 9 | \n",
"
\n",
" \n",
" \n",
" \n",
" 0 | \n",
" xxfld | \n",
" in | \n",
" michael | \n",
" that | \n",
" movie | \n",
" xxunk | \n",
" \\n\\n | \n",
" watch | \n",
" worst | \n",
" , | \n",
"
\n",
" \n",
" 1 | \n",
" 1 | \n",
" fact | \n",
" moore | \n",
" they | \n",
" down | \n",
" \" | \n",
" once | \n",
" a | \n",
" mistakes | \n",
" xxunk | \n",
"
\n",
" \n",
" 2 | \n",
" i | \n",
" , | \n",
" , | \n",
" have | \n",
" was | \n",
" , | \n",
" this | \n",
" love | \n",
" of | \n",
" and | \n",
"
\n",
" \n",
" 3 | \n",
" really | \n",
" other | \n",
" but | \n",
" more | \n",
" that | \n",
" \" | \n",
" daily | \n",
" story | \n",
" my | \n",
" much | \n",
"
\n",
" \n",
" 4 | \n",
" enjoyed | \n",
" than | \n",
" he | \n",
" in | \n",
" there | \n",
" sailor | \n",
" ( | \n",
" this | \n",
" life | \n",
" much | \n",
"
\n",
" \n",
" 5 | \n",
" girl | \n",
" a | \n",
" also | \n",
" common | \n",
" would | \n",
" moon | \n",
" painful | \n",
" one | \n",
" so | \n",
" more | \n",
"
\n",
" \n",
" 6 | \n",
" fight | \n",
" few | \n",
" follows | \n",
" in | \n",
" be | \n",
" \" | \n",
" ) | \n",
" will | \n",
" far | \n",
" . | \n",
"
\n",
" \n",
" 7 | \n",
" . | \n",
" good | \n",
" in | \n",
" their | \n",
" a | \n",
" and | \n",
" xxunk | \n",
" suffice | \n",
" , | \n",
" it | \n",
"
\n",
" \n",
" 8 | \n",
" it | \n",
" scenes | \n",
" his | \n",
" old | \n",
" hour | \n",
" co. | \n",
" is | \n",
" . | \n",
" and | \n",
" stars | \n",
"
\n",
" \n",
" 9 | \n",
" something | \n",
" , | \n",
" xxunk | \n",
" age | \n",
" of | \n",
" are | \n",
" over | \n",
" xxfld | \n",
" it | \n",
" kim | \n",
"
\n",
" \n",
" 10 | \n",
" i | \n",
" this | \n",
" by | \n",
" than | \n",
" footage | \n",
" xxunk | \n",
" , | \n",
" 1 | \n",
" 's | \n",
" bassenger | \n",
"
\n",
" \n",
" 11 | \n",
" could | \n",
" character | \n",
" using | \n",
" they | \n",
" , | \n",
" . | \n",
" eric | \n",
" i | \n",
" only | \n",
" and | \n",
"
\n",
" \n",
" 12 | \n",
" watch | \n",
" seems | \n",
" several | \n",
" thought | \n",
" then | \n",
" not | \n",
" rushes | \n",
" am | \n",
" half | \n",
" xxunk | \n",
"
\n",
" \n",
" 13 | \n",
" over | \n",
" pretty | \n",
" of | \n",
" . | \n",
" basically | \n",
" to | \n",
" down | \n",
" glad | \n",
" done | \n",
" baldwin | \n",
"
\n",
" \n",
" 14 | \n",
" and | \n",
" much | \n",
" moore | \n",
" even | \n",
" that | \n",
" mention | \n",
" to | \n",
" to | \n",
" . | \n",
" as | \n",
"
\n",
" \n",
" 15 | \n",
" over | \n",
" wasted | \n",
" 's | \n",
" the | \n",
" same | \n",
" the | \n",
" his | \n",
" read | \n",
" i | \n",
" the | \n",
"
\n",
" \n",
" 16 | \n",
" again | \n",
" . | \n",
" propaganda | \n",
" xxunk | \n",
" hour | \n",
" xxunk | \n",
" basement | \n",
" so | \n",
" seriously | \n",
" xxunk | \n",
"
\n",
" \n",
" 17 | \n",
" . | \n",
" \\n\\n | \n",
" film | \n",
" willie | \n",
" repeated | \n",
" racial | \n",
" , | \n",
" many | \n",
" thought | \n",
" xxunk | \n",
"
\n",
" \n",
" 18 | \n",
" the | \n",
" while | \n",
" - | \n",
" xxunk | \n",
" 4 | \n",
" / | \n",
" where | \n",
" negative | \n",
" it | \n",
" 's | \n",
"
\n",
" \n",
" 19 | \n",
" acting | \n",
" i | \n",
" making | \n",
" that | \n",
" times | \n",
" gender | \n",
" all | \n",
" comments | \n",
" was | \n",
" . | \n",
"
\n",
" \n",
"
\n",
"
"
],
"text/plain": [
" 0 1 2 3 4 5 6 \\\n",
"0 xxfld in michael that movie xxunk \\n\\n \n",
"1 1 fact moore they down \" once \n",
"2 i , , have was , this \n",
"3 really other but more that \" daily \n",
"4 enjoyed than he in there sailor ( \n",
"5 girl a also common would moon painful \n",
"6 fight few follows in be \" ) \n",
"7 . good in their a and xxunk \n",
"8 it scenes his old hour co. is \n",
"9 something , xxunk age of are over \n",
"10 i this by than footage xxunk , \n",
"11 could character using they , . eric \n",
"12 watch seems several thought then not rushes \n",
"13 over pretty of . basically to down \n",
"14 and much moore even that mention to \n",
"15 over wasted 's the same the his \n",
"16 again . propaganda xxunk hour xxunk basement \n",
"17 . \\n\\n film willie repeated racial , \n",
"18 the while - xxunk 4 / where \n",
"19 acting i making that times gender all \n",
"\n",
" 7 8 9 \n",
"0 watch worst , \n",
"1 a mistakes xxunk \n",
"2 love of and \n",
"3 story my much \n",
"4 this life much \n",
"5 one so more \n",
"6 will far . \n",
"7 suffice , it \n",
"8 . and stars \n",
"9 xxfld it kim \n",
"10 1 's bassenger \n",
"11 i only and \n",
"12 am half xxunk \n",
"13 glad done baldwin \n",
"14 to . as \n",
"15 read i the \n",
"16 so seriously xxunk \n",
"17 many thought xxunk \n",
"18 negative it 's \n",
"19 comments was . "
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"path = untar_data(URLs.IMDB_SAMPLE)\n",
"data = TextLMDataBunch.from_csv(path, 'texts.csv')\n",
"x,y = next(iter(data.train_dl))\n",
"example = x[:20,:10].cpu()\n",
"texts = pd.DataFrame([data.train_ds.vocab.textify(l).split(' ') for l in example])\n",
"texts"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, as suggested in [this article](https://arxiv.org/abs/1708.02182) from Stephen Merity et al., we don't use a fixed `bptt` through the different batches but slightly change it from batch to batch."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([68, 64])\n",
"torch.Size([64, 64])\n",
"torch.Size([40, 64])\n",
"torch.Size([69, 64])\n",
"torch.Size([66, 64])\n"
]
}
],
"source": [
"iter_dl = iter(data.train_dl)\n",
"for _ in range(5):\n",
" x,y = next(iter_dl)\n",
" print(x.size())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is all done internally when we use [`TextLMDataBunch`](/text.data.html#TextLMDataBunch), by creating [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) using the following class:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"class
LanguageModelLoader
[source]
\n",
"\n",
"> LanguageModelLoader
(`dataset`:[`LabelList`](/data_block.html#LabelList), `bs`:`int`=`64`, `bptt`:`int`=`70`, `backwards`:`bool`=`False`, `shuffle`:`bool`=`False`, `max_len`:`int`=`25`)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(LanguageModelLoader, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Takes the texts from `dataset` and concatenate them all, then create a big array with `bs` columns (transposed from the data source so that we read the texts in the columns). Spits batches with a size approximately equal to `bptt` but changing at every batch. If `backwards` is True, reverses the original text. If `shuffle` is True, we shuffle the texts before concatenating them together at the start of each epoch. `max_len` is the maximum amount we add to `bptt`."
]
},
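{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also build this loader by hand from a labelled dataset to inspect its behavior (a sketch using the signature above; the exact shapes of `x` and `y` may vary slightly with the version):\n",
"\n",
"```python\n",
"# A sketch: wrap the training dataset in a LanguageModelLoader directly.\n",
"lm_loader = LanguageModelLoader(data.train_ds, bs=32, bptt=10)\n",
"x, y = next(iter(lm_loader))\n",
"x.size()  # roughly (bptt, bs); y holds the same chunk shifted by one token\n",
"```"
]
},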
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> batchify
(`data`:`ndarray`) → `LongTensor`"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(LanguageModelLoader.batchify, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Called at the inialization to create the big array of text ids from the [`data`](/text.data.html#text.data) array."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> get_batch
(`i`:`int`, `seq_len`:`int`) → `Tuple`\\[`LongTensor`, `LongTensor`\\]\n",
"\n",
"Create a batch at `i` of a given `seq_len`. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(LanguageModelLoader.get_batch)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Classifier data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When preparing the data for a classifier, we keep the different texts separate, which poses another challenge for the creation of batches: since they don't all have the same length, we can't easily collate them together in batches. To help with this we use two different techniques:\n",
"- padding: each text is padded with the `PAD` token to get all the ones we picked to the same size\n",
"- sorting the texts (ish): to avoid having together a very long text with a very short one (which would then have a lot of `PAD` tokens), we regroup the texts by order of length. For the training set, we still add some randomness to avoid showing the same batches at every step of the training.\n",
"\n",
"Here is an example of batch with padding (the padding index is 1, and the padding is applied before the sentences start)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 43, 43, 1, 1, 1, 1, 1, 1, 1, 1],\n",
" [ 40, 40, 43, 43, 43, 43, 43, 43, 43, 1],\n",
" [ 2, 10, 40, 40, 40, 40, 40, 40, 40, 43],\n",
" [1061, 9, 297, 5400, 2, 14, 12, 7, 12, 40],\n",
" [ 18, 667, 89, 263, 75, 9, 273, 41, 103, 2],\n",
" [ 65, 8, 462, 47, 465, 6, 14, 2, 29, 1632],\n",
" [ 3, 5047, 47, 2667, 13, 81, 70, 120, 264, 135],\n",
" [ 2, 14, 155, 1115, 4282, 229, 1531, 12, 10, 9],\n",
" [5761, 51, 2, 246, 66, 20, 22, 36, 68, 567],\n",
" [ 18, 100, 0, 13, 14, 4, 4682, 137, 12, 56],\n",
" [ 65, 102, 3, 9, 20, 10, 5, 3, 333, 1343],\n",
" [ 3, 5237, 248, 29, 9, 9, 6107, 5, 14, 181],\n",
" [ 288, 25, 9, 522, 0, 46, 859, 13, 20, 3],\n",
" [ 33, 7, 487, 89, 4, 195, 286, 16, 11, 23],\n",
" [ 596, 2, 248, 377, 10, 20, 41, 112, 77, 6]],\n",
" device='cuda:0')"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"path = untar_data(URLs.IMDB_SAMPLE)\n",
"data = TextClasDataBunch.from_csv(path, 'texts.csv')\n",
"iter_dl = iter(data.train_dl)\n",
"_ = next(iter_dl)\n",
"x,y = next(iter_dl)\n",
"x[:20,-10:]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is all done internally when we use [`TextClasDataBunch`](/text.data.html#TextClasDataBunch), by using the following classes:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"class
SortSampler
[source]
\n",
"\n",
"> SortSampler
(`data_source`:`NPArrayList`, `key`:`KeyFunc`) :: [`Sampler`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Sampler)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(SortSampler, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"pytorch [`Sampler`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Sampler) to batchify the `data_source` by order of length of the texts. Used for the validation and (if applicable) the test set. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"class
SortishSampler
[source]
\n",
"\n",
"> SortishSampler
(`data_source`:`NPArrayList`, `key`:`KeyFunc`, `bs`:`int`) :: [`Sampler`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Sampler)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(SortishSampler, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"pytorch [`Sampler`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Sampler) to batchify with size `bs` the `data_source` by order of length of the texts with a bit of randomness. Used for the training set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> pad_collate
(`samples`:`BatchSamples`, `pad_idx`:`int`=`1`, `pad_first`:`bool`=`True`) → `Tuple`\\[`LongTensor`, `LongTensor`\\]"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(pad_collate, doc_string=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Function used by the pytorch [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) to collate the `samples` in batches while adding padding with `pad_idx`. If `pad_first` is True, padding is applied at the beginning (before the sentence starts) otherwise it's applied at the end."
]
},
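{
"cell_type": "markdown",
"metadata": {},
"source": [
"These pieces can be wired into a plain pytorch [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) by hand. A sketch (the `key` below assumes each item's `data` attribute holds its ids, as for [`Text`](/text.data.html#Text)):\n",
"\n",
"```python\n",
"from functools import partial\n",
"from torch.utils.data import DataLoader\n",
"\n",
"# Sort-ish sampling by text length plus padding collation, wired by hand.\n",
"train_ds = data.train_ds\n",
"sampler = SortishSampler(train_ds.x, key=lambda i: len(train_ds.x[i].data), bs=64)\n",
"dl = DataLoader(train_ds, batch_size=64, sampler=sampler,\n",
"                collate_fn=partial(pad_collate, pad_idx=1, pad_first=True))\n",
"```"
]
},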
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Undocumented Methods - Methods moved below this line will intentionally be hidden"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> create
(`train_ds`, `valid_ds`, `test_ds`=`None`, `path`:`PathOrStr`=`'.'`, `kwargs`) → [`DataBunch`](/basic_data.html#DataBunch)\n",
"\n",
"Create a [`TextDataBunch`](/text.data.html#TextDataBunch) in `path` from the `datasets` for language modelling. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextLMDataBunch.create)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> create
(`train_ds`, `valid_ds`, `test_ds`=`None`, `path`:`PathOrStr`=`'.'`, `bs`=`64`, `pad_idx`=`1`, `pad_first`=`True`, `kwargs`) → [`DataBunch`](/basic_data.html#DataBunch)\n",
"\n",
"Function that transform the `datasets` in a [`DataBunch`](/basic_data.html#DataBunch) for classification. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextClasDataBunch.create)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> new
(`items`:`Iterator`, `kwargs`) → `NumericalizedTextList`"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextList.new)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> get
(`i`)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextList.get)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> process_one
(`item`)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TokenizeProcessor.process_one)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> process
(`ds`)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TokenizeProcessor.process)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> process_one
(`item`)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(OpenFileProcessor.process_one)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> process
(`ds`)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(NumericalizeProcessor.process)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> process_one
(`item`)"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(NumericalizeProcessor.process_one)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## New Methods - Please document or move to the undocumented section"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": false
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> show_batch
(`idxs`:`Collection`\\[`int`\\], `rows`:`int`, `ds`:[`Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset), `max_len`:`int`=`50`)\n",
"\n",
"Show the texts in `idx` on a few `rows` from `ds`. `max_len` is the maximum number of tokens displayed. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(Text.show_batch)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> Text
(`ids`, `text`) :: [`ItemBase`](/core.html#ItemBase)\n",
"\n",
"All transformable dataset items use this type. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(Text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide_input": true
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"\n",
"> from_folder
(`path`:`PathOrStr`=`'.'`, `extensions`:`StrList`=`['.txt']`, `processor`:[`PreProcessor`](/data_block.html#PreProcessor)=`None`, `vocab`:[`Vocab`](/text.transform.html#Vocab)=`None`, `kwargs`) → `TextList`\n",
"\n",
"Get the list of files in `path` that have a text suffix. `recurse` determines if we search subfolders. "
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_doc(TextList.from_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"jekyll": {
"keywords": "fastai",
"summary": "Basic dataset for NLP tasks and helper functions to create a DataBunch",
"title": "text.data"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}