{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## NLP datasets" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.gen_doc.nbdoc import *\n", "from fastai.text import * \n", "from fastai.gen_doc.nbdoc import *\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This module contains the [`TextDataset`](/text.data.html#TextDataset) class, which is the main dataset you should use for your NLP tasks. It automatically does the preprocessing steps described in [`text.transform`](/text.transform.html#text.transform). It also contains all the functions to quickly get a [`TextDataBunch`](/text.data.html#TextDataBunch) ready." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Quickly assemble your data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You should get your data in one of the following formats to make the most of the fastai library and use one of the factory methods of one of the [`TextDataBunch`](/text.data.html#TextDataBunch) classes:\n", "- raw text files in folders train, valid, test in an ImageNet style,\n", "- a csv where some column(s) gives the label(s) and the following one the associated text,\n", "- a dataframe structured the same way,\n", "- tokens and labels arrays,\n", "- ids, vocabulary (correspondence id to word) and labels.\n", "\n", "If you are assembling the data for a language model, you should define your labels as always 0 to respect those formats. The first time you create a [`DataBunch`](/basic_data.html#DataBunch) with one of those functions, your data will be preprocessed automatically. You can save it, so that the next time you call it is almost instantaneous. \n", "\n", "Below are the classes that help assembling the raw data in a [`DataBunch`](/basic_data.html#DataBunch) suitable for NLP." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class TextLMDataBunch[source][test]

\n", "\n", "> TextLMDataBunch(**`train_dl`**:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), **`valid_dl`**:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), **`fix_dl`**:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader)=***`None`***, **`test_dl`**:`Optional`\\[[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader)\\]=***`None`***, **`device`**:[`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch-device)=***`None`***, **`dl_tfms`**:`Optional`\\[`Collection`\\[`Callable`\\]\\]=***`None`***, **`path`**:`PathOrStr`=***`'.'`***, **`collate_fn`**:`Callable`=***`'data_collate'`***, **`no_check`**:`bool`=***`False`***) :: [`TextDataBunch`](/text.data.html#TextDataBunch)\n", "\n", "

Tests found for TextLMDataBunch:

Some other tests where TextLMDataBunch is used:

To run tests please refer to this guide.

\n", "\n", "Create a [`TextDataBunch`](/text.data.html#TextDataBunch) suitable for training a language model. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextLMDataBunch, title_level=3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All the texts in the [`datasets`](/datasets.html#datasets) are concatenated and the labels are ignored. Instead, the target is the next word in the sentence." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

create[source][test]

\n", "\n", "> create(**`train_ds`**, **`valid_ds`**, **`test_ds`**=***`None`***, **`path`**:`PathOrStr`=***`'.'`***, **`no_check`**:`bool`=***`False`***, **`bs`**=***`64`***, **`val_bs`**:`int`=***`None`***, **`num_workers`**:`int`=***`0`***, **`device`**:[`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch-device)=***`None`***, **`collate_fn`**:`Callable`=***`'data_collate'`***, **`dl_tfms`**:`Optional`\\[`Collection`\\[`Callable`\\]\\]=***`None`***, **`bptt`**:`int`=***`70`***, **`backwards`**:`bool`=***`False`***, **\\*\\*`dl_kwargs`**) → [`DataBunch`](/basic_data.html#DataBunch)\n", "\n", "

No tests found for create. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Create a [`TextDataBunch`](/text.data.html#TextDataBunch) in `path` from the `datasets` for language modelling. Passes `**dl_kwargs` on to `DataLoader()` " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextLMDataBunch.create)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class TextClasDataBunch[source][test]

\n", "\n", "> TextClasDataBunch(**`train_dl`**:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), **`valid_dl`**:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), **`fix_dl`**:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader)=***`None`***, **`test_dl`**:`Optional`\\[[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader)\\]=***`None`***, **`device`**:[`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch-device)=***`None`***, **`dl_tfms`**:`Optional`\\[`Collection`\\[`Callable`\\]\\]=***`None`***, **`path`**:`PathOrStr`=***`'.'`***, **`collate_fn`**:`Callable`=***`'data_collate'`***, **`no_check`**:`bool`=***`False`***) :: [`TextDataBunch`](/text.data.html#TextDataBunch)\n", "\n", "

Tests found for TextClasDataBunch:

Some other tests where TextClasDataBunch is used:

  • pytest -sv tests/test_text_data.py::test_from_csv_and_from_df [source]
  • pytest -sv tests/test_text_data.py::test_backwards_cls_databunch [source]
  • pytest -sv tests/test_text_data.py::test_load_and_save_test [source]
  • pytest -sv tests/test_text_data.py::test_from_ids_works_for_equally_length_sentences [source]
  • pytest -sv tests/test_text_data.py::test_from_ids_works_for_variable_length_sentences [source]

To run tests please refer to this guide.

\n", "\n", "Create a [`TextDataBunch`](/text.data.html#TextDataBunch) suitable for training an RNN classifier. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextClasDataBunch, title_level=3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

create[source][test]

\n", "\n", "> create(**`train_ds`**, **`valid_ds`**, **`test_ds`**=***`None`***, **`path`**:`PathOrStr`=***`'.'`***, **`bs`**:`int`=***`32`***, **`val_bs`**:`int`=***`None`***, **`pad_idx`**=***`1`***, **`pad_first`**=***`True`***, **`device`**:[`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch-device)=***`None`***, **`no_check`**:`bool`=***`False`***, **`backwards`**:`bool`=***`False`***, **\\*\\*`dl_kwargs`**) → [`DataBunch`](/basic_data.html#DataBunch)\n", "\n", "

No tests found for create. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Function that transform the `datasets` in a [`DataBunch`](/basic_data.html#DataBunch) for classification. Passes `**dl_kwargs` on to `DataLoader()` " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextClasDataBunch.create)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All the texts are grouped by length (with a bit of randomness for the training set) then padded so that the samples have the same length to get in a batch." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class TextDataBunch[source][test]

\n", "\n", "> TextDataBunch(**`train_dl`**:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), **`valid_dl`**:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), **`fix_dl`**:[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader)=***`None`***, **`test_dl`**:`Optional`\\[[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader)\\]=***`None`***, **`device`**:[`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch-device)=***`None`***, **`dl_tfms`**:`Optional`\\[`Collection`\\[`Callable`\\]\\]=***`None`***, **`path`**:`PathOrStr`=***`'.'`***, **`collate_fn`**:`Callable`=***`'data_collate'`***, **`no_check`**:`bool`=***`False`***) :: [`DataBunch`](/basic_data.html#DataBunch)\n", "\n", "

No tests found for TextDataBunch. To contribute a test please refer to this guide and this discussion.

\n", "\n", "General class to get a [`DataBunch`](/basic_data.html#DataBunch) for NLP. Subclassed by [`TextLMDataBunch`](/text.data.html#TextLMDataBunch) and [`TextClasDataBunch`](/text.data.html#TextClasDataBunch). " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextDataBunch, title_level=3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
Warning: This class can only work directly if all the texts have the same length.
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "jekyll_warn(\"This class can only work directly if all the texts have the same length.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Factory methods (TextDataBunch)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All those classes have the following factory methods." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

from_folder[source][test]

\n", "\n", "> from_folder(**`path`**:`PathOrStr`, **`train`**:`str`=***`'train'`***, **`valid`**:`str`=***`'valid'`***, **`test`**:`Optional`\\[`str`\\]=***`None`***, **`classes`**:`ArgStar`=***`None`***, **`tokenizer`**:[`Tokenizer`](/text.transform.html#Tokenizer)=***`None`***, **`vocab`**:[`Vocab`](/text.transform.html#Vocab)=***`None`***, **`chunksize`**:`int`=***`10000`***, **`max_vocab`**:`int`=***`60000`***, **`min_freq`**:`int`=***`2`***, **`mark_fields`**:`bool`=***`False`***, **\\*\\*`kwargs`**)\n", "\n", "

Tests found for from_folder:

Some other tests where from_folder is used:

  • pytest -sv tests/test_text_data.py::test_from_folder [source]
  • pytest -sv tests/test_text_data.py::test_filter_classes [source]

To run tests please refer to this guide.

\n", "\n", "Create a [`TextDataBunch`](/text.data.html#TextDataBunch) from text files in folders. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextDataBunch.from_folder)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The floders are scanned in `path` with a train, `valid` and maybe `test` folders. Text files in the train and `valid` folders should be places in subdirectories according to their classes (not applicable for a language model). `tokenizer` will be used to parse those texts into tokens.\n", "\n", "You can pass a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned for instance). kwargs will be split between the [`TextDataset`](/text.data.html#TextDataset) function and to the class initialization, you can precise there parameters such as `max_vocab`, `chunksize`, `min_freq`, `n_labels` (see the [`TextDataset`](/text.data.html#TextDataset) documentation) or `bs`, `bptt` and `pad_idx` (see the sections LM data and classifier data)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

from_csv[source][test]

\n", "\n", "> from_csv(**`path`**:`PathOrStr`, **`csv_name`**, **`valid_pct`**:`float`=***`0.2`***, **`test`**:`Optional`\\[`str`\\]=***`None`***, **`tokenizer`**:[`Tokenizer`](/text.transform.html#Tokenizer)=***`None`***, **`vocab`**:[`Vocab`](/text.transform.html#Vocab)=***`None`***, **`classes`**:`StrList`=***`None`***, **`delimiter`**:`str`=***`None`***, **`header`**=***`'infer'`***, **`text_cols`**:`IntsOrStrs`=***`1`***, **`label_cols`**:`IntsOrStrs`=***`0`***, **`label_delim`**:`str`=***`None`***, **`chunksize`**:`int`=***`10000`***, **`max_vocab`**:`int`=***`60000`***, **`min_freq`**:`int`=***`2`***, **`mark_fields`**:`bool`=***`False`***, **\\*\\*`kwargs`**) → [`DataBunch`](/basic_data.html#DataBunch)\n", "\n", "

Tests found for from_csv:

  • pytest -sv tests/test_text_data.py::test_from_csv_and_from_df [source]

To run tests please refer to this guide.

\n", "\n", "Create a [`TextDataBunch`](/text.data.html#TextDataBunch) from texts in csv files. `kwargs` are passed to the dataloader creation. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextDataBunch.from_csv)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This method will look for `csv_name`, and optionally a `test` csv file, in `path`. These will be opened with [`header`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html#pandas-read-csv), using `delimiter`. You can specify which are the `text_cols` and `label_cols`; by default a single label column is assumed to come before a single text column. If your csv has no header, you must specify these as indices. If you're training a language model and don't have labels, you must specify the `text_cols`. If there are several `text_cols`, the texts will be concatenated together with an optional field token. If there are several `label_cols`, the labels will be assumed to be one-hot encoded and `classes` will default to `label_cols` (you can ignore that argument for a language model). `label_delim` can be used to specify the separator between multiple labels in a column.\n", "\n", "You can pass a `tokenizer` to be used to parse the texts into tokens and/or a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned for instance). Otherwise you can specify parameters such as `max_vocab`, `min_freq`, `chunksize` for the Tokenizer and Numericalizer (processors). Other parameters (e.g. `bs`, `val_bs` and `num_workers`, etc.) will be passed to [`LabelLists.databunch()`](/data_block.html#LabelLists.databunch) documentation) (see the LM data and classifier data sections for more info)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

from_df[source][test]

\n", "\n", "> from_df(**`path`**:`PathOrStr`, **`train_df`**:`DataFrame`, **`valid_df`**:`DataFrame`, **`test_df`**:`OptDataFrame`=***`None`***, **`tokenizer`**:[`Tokenizer`](/text.transform.html#Tokenizer)=***`None`***, **`vocab`**:[`Vocab`](/text.transform.html#Vocab)=***`None`***, **`classes`**:`StrList`=***`None`***, **`text_cols`**:`IntsOrStrs`=***`1`***, **`label_cols`**:`IntsOrStrs`=***`0`***, **`label_delim`**:`str`=***`None`***, **`chunksize`**:`int`=***`10000`***, **`max_vocab`**:`int`=***`60000`***, **`min_freq`**:`int`=***`2`***, **`mark_fields`**:`bool`=***`False`***, **\\*\\*`kwargs`**) → [`DataBunch`](/basic_data.html#DataBunch)\n", "\n", "

Tests found for from_df:

  • pytest -sv tests/test_text_data.py::test_from_csv_and_from_df [source]

Some other tests where from_df is used:

  • pytest -sv tests/test_text_data.py::test_should_load_backwards_lm_1 [source]
  • pytest -sv tests/test_text_data.py::test_should_load_backwards_lm_2 [source]
  • pytest -sv tests/test_text_data.py::test_backwards_cls_databunch [source]
  • pytest -sv tests/test_text_data.py::test_load_and_save_test [source]
  • pytest -sv tests/test_text_data.py::test_regression [source]

To run tests please refer to this guide.

\n", "\n", "Create a [`TextDataBunch`](/text.data.html#TextDataBunch) from DataFrames. `kwargs` are passed to the dataloader creation. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextDataBunch.from_df)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This method will use `train_df`, `valid_df` and optionally `test_df` to build the [`TextDataBunch`](/text.data.html#TextDataBunch) in `path`. You can specify `text_cols` and `label_cols`; by default a single label column comes before a single text column. If you're training a language model and don't have labels, you must specify the `text_cols`. If there are several `text_cols`, the texts will be concatenated together with an optional field token. If there are several `label_cols`, the labels will be assumed to be one-hot encoded and `classes` will default to `label_cols` (you can ignore that argument for a language model).\n", "\n", "You can pass a `tokenizer` to be used to parse the texts into tokens and/or a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned for instance). Otherwise you can specify parameters such as `max_vocab`, `min_freq`, `chunksize` for the default Tokenizer and Numericalizer (processors). Other parameters (e.g. `bs`, `val_bs` and `num_workers`, etc.) will be passed to [`LabelLists.databunch()`](/data_block.html#LabelLists.databunch) documentation) (see the LM data and classifier data sections for more info)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

from_tokens[source][test]

\n", "\n", "> from_tokens(**`path`**:`PathOrStr`, **`trn_tok`**:`Tokens`, **`trn_lbls`**:`Collection`\\[`Union`\\[`int`, `float`\\]\\], **`val_tok`**:`Tokens`, **`val_lbls`**:`Collection`\\[`Union`\\[`int`, `float`\\]\\], **`vocab`**:[`Vocab`](/text.transform.html#Vocab)=***`None`***, **`tst_tok`**:`Tokens`=***`None`***, **`classes`**:`ArgStar`=***`None`***, **`max_vocab`**:`int`=***`60000`***, **`min_freq`**:`int`=***`3`***, **\\*\\*`kwargs`**) → [`DataBunch`](/basic_data.html#DataBunch)\n", "\n", "

No tests found for from_tokens. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Create a [`TextDataBunch`](/text.data.html#TextDataBunch) from tokens and labels. `kwargs` are passed to the dataloader creation. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextDataBunch.from_tokens)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This function will create a [`DataBunch`](/basic_data.html#DataBunch) from `trn_tok`, `trn_lbls`, `val_tok`, `val_lbls` and maybe `tst_tok`.\n", "\n", "You can pass a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned for instance). kwargs will be split between the [`TextDataset`](/text.data.html#TextDataset) function and to the class initialization, you can precise there parameters such as `max_vocab`, `chunksize`, `min_freq`, `n_labels`, `tok_suff` and `lbl_suff` (see the [`TextDataset`](/text.data.html#TextDataset) documentation) or `bs`, `bptt` and `pad_idx` (see the sections LM data and classifier data)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

from_ids[source][test]

\n", "\n", "> from_ids(**`path`**:`PathOrStr`, **`vocab`**:[`Vocab`](/text.transform.html#Vocab), **`train_ids`**:`Collection`\\[`Collection`\\[`int`\\]\\], **`valid_ids`**:`Collection`\\[`Collection`\\[`int`\\]\\], **`test_ids`**:`Collection`\\[`Collection`\\[`int`\\]\\]=***`None`***, **`train_lbls`**:`Collection`\\[`Union`\\[`int`, `float`\\]\\]=***`None`***, **`valid_lbls`**:`Collection`\\[`Union`\\[`int`, `float`\\]\\]=***`None`***, **`classes`**:`ArgStar`=***`None`***, **`processor`**:[`PreProcessor`](/data_block.html#PreProcessor)=***`None`***, **\\*\\*`kwargs`**) → [`DataBunch`](/basic_data.html#DataBunch)\n", "\n", "

Tests found for from_ids:

  • pytest -sv tests/test_text_data.py::test_from_ids_works_for_equally_length_sentences [source]
  • pytest -sv tests/test_text_data.py::test_from_ids_works_for_variable_length_sentences [source]

To run tests please refer to this guide.

\n", "\n", "Create a [`TextDataBunch`](/text.data.html#TextDataBunch) from ids, labels and a `vocab`. `kwargs` are passed to the dataloader creation. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextDataBunch.from_ids)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Texts are already preprocessed into `train_ids`, `train_lbls`, `valid_ids`, `valid_lbls` and maybe `test_ids`. You can specify the corresponding `classes` if applicable. You must specify a `path` and the `vocab` so that the [`RNNLearner`](/text.learner.html#RNNLearner) class can later infer the corresponding sizes in the model it will create. kwargs will be passed to the class initialization." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Load and save" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To avoid losing time preprocessing the text data more than once, you should save and load your [`TextDataBunch`](/text.data.html#TextDataBunch) using [`DataBunch.save`](/basic_data.html#DataBunch.save) and [`load_data`](/basic_data.html#load_data)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

load[source][test]

\n", "\n", "> load(**`path`**:`PathOrStr`, **`cache_name`**:`PathOrStr`=***`'tmp'`***, **`processor`**:[`PreProcessor`](/data_block.html#PreProcessor)=***`None`***, **\\*\\*`kwargs`**)\n", "\n", "

Tests found for load:

Some other tests where load is used:

  • pytest -sv tests/test_text_data.py::test_should_load_backwards_lm_1 [source]
  • pytest -sv tests/test_text_data.py::test_should_load_backwards_lm_2 [source]
  • pytest -sv tests/test_text_data.py::test_load_and_save_test [source]

To run tests please refer to this guide.

\n", "\n", "Load a [`TextDataBunch`](/text.data.html#TextDataBunch) from `path/cache_name`. `kwargs` are passed to the dataloader creation. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextDataBunch.load)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
Warning: This method should only be used to load back `TextDataBunch` saved in v1.0.43 or before, it is now deprecated.
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "jekyll_warn(\"This method should only be used to load back `TextDataBunch` saved in v1.0.43 or before, it is now deprecated.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Untar the IMDB sample dataset if not already done:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "PosixPath('/home/ubuntu/.fastai/data/imdb_sample')" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "path = untar_data(URLs.IMDB_SAMPLE)\n", "path" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since it comes in the form of csv files, we will use the corresponding `text_data` method. Here is an overview of what your file you should look like:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
labeltextis_valid
0negativeUn-bleeping-believable! Meg Ryan doesn't even ...False
1positiveThis is a extremely well-made film. The acting...False
2negativeEvery once in a long while a movie will come a...False
3positiveName just says it all. I watched this movie wi...False
4negativeThis movie succeeds at being one of the most u...False
\n", "
" ], "text/plain": [ " label text is_valid\n", "0 negative Un-bleeping-believable! Meg Ryan doesn't even ... False\n", "1 positive This is a extremely well-made film. The acting... False\n", "2 negative Every once in a long while a movie will come a... False\n", "3 positive Name just says it all. I watched this movie wi... False\n", "4 negative This movie succeeds at being one of the most u... False" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pd.read_csv(path/'texts.csv').head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here is a simple way of creating your [`DataBunch`](/basic_data.html#DataBunch) for language modelling or classification." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data_lm = TextLMDataBunch.from_csv(Path(path), 'texts.csv')\n", "data_clas = TextClasDataBunch.from_csv(Path(path), 'texts.csv')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The TextList input classes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Behind the scenes, the previous functions will create a training, validation and maybe test [`TextList`](/text.data.html#TextList) that will be tokenized and numericalized (if needed) using [`PreProcessor`](/data_block.html#PreProcessor)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class Text[source][test]

\n", "\n", "> Text(**`ids`**, **`text`**) :: [`ItemBase`](/core.html#ItemBase)\n", "\n", "

No tests found for Text. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Basic item for text data in numericalized `ids`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Text, title_level=3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class TextList[source][test]

\n", "\n", "> TextList(**`items`**:`Iterator`\\[`T_co`\\], **`vocab`**:[`Vocab`](/text.transform.html#Vocab)=***`None`***, **`pad_idx`**:`int`=***`1`***, **\\*\\*`kwargs`**) :: [`ItemList`](/data_block.html#ItemList)\n", "\n", "

Tests found for TextList:

Some other tests where TextList is used:

  • pytest -sv tests/test_text_data.py::test_from_folder [source]
  • pytest -sv tests/test_text_data.py::test_filter_classes [source]
  • pytest -sv tests/test_text_data.py::test_regression [source]

To run tests please refer to this guide.

\n", "\n", "Basic [`ItemList`](/data_block.html#ItemList) for text data. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextList, title_level=3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`vocab` contains the correspondence between ids and tokens, `pad_idx` is the id used for padding. You can pass a custom `processor` in the `kwargs` to change the defaults for tokenization or numericalization. It should have the following form:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "processor = [TokenizeProcessor(tokenizer=SpacyTokenizer('en')), NumericalizeProcessor(max_vocab=30000)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "See below for all the arguments those tokenizers can take." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

label_for_lm[source][test]

\n", "\n", "> label_for_lm(**\\*\\*`kwargs`**)\n", "\n", "

No tests found for label_for_lm. To contribute a test please refer to this guide and this discussion.

\n", "\n", "A special labelling method for language models. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextList.label_for_lm)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

from_folder[source][test]

\n", "\n", "> from_folder(**`path`**:`PathOrStr`=***`'.'`***, **`extensions`**:`StrList`=***`{'.txt'}`***, **`vocab`**:[`Vocab`](/text.transform.html#Vocab)=***`None`***, **`processor`**:[`PreProcessor`](/data_block.html#PreProcessor)=***`None`***, **\\*\\*`kwargs`**) → `TextList`\n", "\n", "

Tests found for from_folder:

Some other tests where from_folder is used:

  • pytest -sv tests/test_text_data.py::test_from_folder [source]
  • pytest -sv tests/test_text_data.py::test_filter_classes [source]

To run tests please refer to this guide.

\n", "\n", "Get the list of files in `path` that have a text suffix. `recurse` determines if we search subfolders. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextList.from_folder)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

show_xys[source][test]

\n", "\n", "> show_xys(**`xs`**, **`ys`**, **`max_len`**:`int`=***`70`***)\n", "\n", "

No tests found for show_xys. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Show the `xs` (inputs) and `ys` (targets). `max_len` is the maximum number of tokens displayed. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextList.show_xys)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

show_xyzs[source][test]

\n", "\n", "> show_xyzs(**`xs`**, **`ys`**, **`zs`**, **`max_len`**:`int`=***`70`***)\n", "\n", "

No tests found for show_xyzs. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Show `xs` (inputs), `ys` (targets) and `zs` (predictions). `max_len` is the maximum number of tokens displayed. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextList.show_xyzs)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class OpenFileProcessor[source][test]

\n", "\n", "> OpenFileProcessor(**`ds`**:`Collection`\\[`T_co`\\]=***`None`***) :: [`PreProcessor`](/data_block.html#PreProcessor)\n", "\n", "

No tests found for OpenFileProcessor. To contribute a test please refer to this guide and this discussion.

\n", "\n", "[`PreProcessor`](/data_block.html#PreProcessor) that opens the filenames and read the texts. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(OpenFileProcessor, title_level=3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

open_text[source][test]

\n", "\n", "> open_text(**`fn`**:`PathOrStr`, **`enc`**=***`'utf-8'`***)\n", "\n", "

No tests found for open_text. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Read the text in `fn`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(open_text)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class TokenizeProcessor[source][test]

\n", "\n", "> TokenizeProcessor(**`ds`**:[`ItemList`](/data_block.html#ItemList)=***`None`***, **`tokenizer`**:[`Tokenizer`](/text.transform.html#Tokenizer)=***`None`***, **`chunksize`**:`int`=***`10000`***, **`mark_fields`**:`bool`=***`False`***) :: [`PreProcessor`](/data_block.html#PreProcessor)\n", "\n", "

No tests found for TokenizeProcessor. To contribute a test please refer to this guide and this discussion.

\n", "\n", "[`PreProcessor`](/data_block.html#PreProcessor) that tokenizes the texts in `ds`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TokenizeProcessor, title_level=3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`tokenizer` is used on bits of `chunksize`. If `mark_fields=True`, add field tokens between each parts of the texts (given when the texts are read in several columns of a dataframe). See more about tokenizers in the [transform documentation](/text.transform.html)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class NumericalizeProcessor[source][test]

\n", "\n", "> NumericalizeProcessor(**`ds`**:[`ItemList`](/data_block.html#ItemList)=***`None`***, **`vocab`**:[`Vocab`](/text.transform.html#Vocab)=***`None`***, **`max_vocab`**:`int`=***`60000`***, **`min_freq`**:`int`=***`3`***) :: [`PreProcessor`](/data_block.html#PreProcessor)\n", "\n", "

No tests found for NumericalizeProcessor. To contribute a test please refer to this guide and this discussion.

\n", "\n", "[`PreProcessor`](/data_block.html#PreProcessor) that numericalizes the tokens in `ds`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(NumericalizeProcessor, title_level=3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Uses `vocab` for this (if not None), otherwise create one with `max_vocab` and `min_freq` from tokens." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Language Model data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A language model is trained to guess what the next word is inside a flow of words. We don't feed it the different texts separately but concatenate them all together in a big array. To create the batches, we split this array into `bs` chunks of continuous texts. Note that in all NLP tasks, we don't use the usual convention of sequence length being the first dimension so batch size is the first dimension and sequence length is the second. Here you can read the chunks of texts in lines. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
01234567891011121314
0xxbosxxmajnamejustsaysitall.iwatchedthismoviewithmydad
1goingtoseeanythingyou'llremember.xxbosxxmajevidentlywhenyouoffera
2ofthexxunkxxunkxxmajsuperblyshot,thisthrillingadultadventurecertainlycontainssome
3\\n\\nxxmajonebrightlightinthemidstofthisisxxmajfredxxmajxxunk
4leaveyouxxunkthatyouwatchedit.ifeelreallybadforthosexxmaj
5xxmajginger'sblondehair)hasacoupleofemotionalsolonumbers,including
6.xxmajthisiswhatreallyattractedmetothisfilm.iwasimpressed
7intothehypethatoneyouaresomehowwhiteorsuperior...youarenot
8comeunintentionally,likewhentheytrytoexplainthataninvisibleman'sxxunk
9justwatchthemovie,andidearsayyou'llseethingsabit
10forthereformmovementandmeetsxxmajeponine.xxmajexcept...notxxmajeponine
11xxmajwell,giventhetargetaudience,thatmaynothavebeentoobad
12getsrolesinmovies,inmyopinionthoughsheshouldsticktomoviesof
13itsshare,thoughfarsmallerthanxxunkevenincludingabasicviewofthe
14ruins-sceneisxxupalmosteuropean-likecinema(themovieiseager
\n", "
" ], "text/plain": [ " 0 1 2 3 4 5 6 \\\n", "0 xxbos xxmaj name just says it all \n", "1 going to see anything you 'll remember \n", "2 of the xxunk xxunk xxmaj superbly shot \n", "3 \\n\\n xxmaj one bright light in the \n", "4 leave you xxunk that you watched it \n", "5 xxmaj ginger 's blonde hair ) has \n", "6 . xxmaj this is what really attracted \n", "7 into the hype that one you are \n", "8 come unintentionally , like when they try \n", "9 just watch the movie , and i \n", "10 for the reform movement and meets xxmaj \n", "11 xxmaj well , given the target audience \n", "12 gets roles in movies , in my \n", "13 its share , though far smaller than \n", "14 ruins - scene is xxup almost european \n", "\n", " 7 8 9 10 11 12 13 \\\n", "0 . i watched this movie with my \n", "1 . xxbos xxmaj evidently when you offer \n", "2 , this thrilling adult adventure certainly contains \n", "3 midst of this is xxmaj fred xxmaj \n", "4 . i feel really bad for those \n", "5 a couple of emotional solo numbers , \n", "6 me to this film . i was \n", "7 somehow white or superior ... you are \n", "8 to explain that an invisible man 's \n", "9 dear say you 'll see things a \n", "10 eponine . xxmaj except ... not xxmaj \n", "11 , that may not have been too \n", "12 opinion though she should stick to movies \n", "13 xxunk even including a basic view of \n", "14 - like cinema ( the movie is \n", "\n", " 14 \n", "0 dad \n", "1 a \n", "2 some \n", "3 xxunk \n", "4 xxmaj \n", "5 including \n", "6 impressed \n", "7 not \n", "8 xxunk \n", "9 bit \n", "10 eponine \n", "11 bad \n", "12 of \n", "13 the \n", "14 eager " ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "path = untar_data(URLs.IMDB_SAMPLE)\n", "data = TextLMDataBunch.from_csv(path, 'texts.csv')\n", "x,y = next(iter(data.train_dl))\n", "example = x[:15,:15].cpu()\n", "texts = pd.DataFrame([data.train_ds.vocab.textify(l).split(' ') for l in example])\n", "texts" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
Warning: If you are used to another convention, beware! fastai always uses batch as a first dimension, even in NLP.
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "jekyll_warn(\"If you are used to another convention, beware! fastai always uses batch as a first dimension, even in NLP.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is all done internally when we use [`TextLMDataBunch`](/text.data.html#TextLMDataBunch), by wrapping the dataset in the following pre-loader before calling a [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class LanguageModelPreLoader[source][test]

\n", "\n", "> LanguageModelPreLoader(**`dataset`**:[`LabelList`](/data_block.html#LabelList), **`lengths`**:`Collection`\\[`int`\\]=***`None`***, **`bs`**:`int`=***`32`***, **`bptt`**:`int`=***`70`***, **`backwards`**:`bool`=***`False`***, **`shuffle`**:`bool`=***`False`***) :: [`Callback`](/callback.html#Callback)\n", "\n", "

No tests found for LanguageModelPreLoader. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Transforms the tokens in `dataset` to a stream of contiguous batches for language modelling. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(LanguageModelPreLoader)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "LanguageModelPreLoader is an internal class uses for training a language model. It takes the sentences passed as a jagged array of numericalised sentences in `dataset` and returns contiguous batches to the pytorch dataloader with batch size `bs` and a sequence length `bptt`. \n", "- `lengths` can be provided for the jagged training data else lengths is calculated internally \n", "- `backwards=True` will reverses the sentences. \n", "- `shuffle=True`, will shuffle the order of the sentences, at the start of each epoch - except the first\n", "\n", "The following description is usefull for understanding the implementation of [`LanguageModelPreLoader`](/text.data.html#LanguageModelPreLoader):\n", "- idx: instance of CircularIndex that indexes items while taking the following into account 1) shuffle, 2) direction of indexing, 3) wraps around to head (reading forward) or tail (reading backwards) of the ragged array as needed in order to fill the last batch(s)\n", "\n", "- ro: index of the first rag of each row in the batch to be extract. Returns as index to the next rag to be extracted\n", "\n", "- ri: Reading forward: index to the first token to be extracted in the current rag (ro). Reading backwards: one position after the last token to be extracted in the rag\n", "\n", "- overlap: overlap between batches is 1, because we only predict the next token \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Classifier data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When preparing the data for a classifier, we keep the different texts separate, which poses another challenge for the creation of batches: since they don't all have the same length, we can't easily collate them together in batches. To help with this we use two different techniques:\n", "- padding: each text is padded with the `PAD` token to get all the ones we picked to the same size\n", "- sorting the texts (ish): to avoid having together a very long text with a very short one (which would then have a lot of `PAD` tokens), we regroup the texts by order of length. For the training set, we still add some randomness to avoid showing the same batches at every step of the training.\n", "\n", "Here is an example of batch with padding (the padding index is 1, and the padding is applied before the sentences start)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "tensor([[ 1, 1, 1, 1, 1, 1, 1, 2, 18, 310, 9, 0,\n", " 11, 0, 9, 48, 8, 0, 11, 2301],\n", " [ 1, 1, 1, 1, 1, 1, 1, 2, 4, 1427, 15, 8,\n", " 521, 10, 4, 90, 131, 9, 1427, 242],\n", " [ 1, 1, 1, 1, 1, 1, 1, 2, 18, 175, 55, 2063,\n", " 4677, 14, 8, 209, 22, 1343, 26, 20],\n", " [ 1, 1, 1, 1, 1, 1, 1, 1, 2, 4, 20, 30,\n", " 24, 8, 110, 616, 30, 164, 745, 18],\n", " [ 1, 1, 1, 1, 1, 1, 1, 1, 2, 18, 24, 3560,\n", " 14, 130, 8, 30, 26, 85, 193, 9],\n", " [ 1, 1, 1, 1, 1, 1, 1, 1, 2, 18, 101, 0,\n", " 20, 153, 71, 18, 24, 4055, 17, 4],\n", " [ 1, 1, 1, 1, 1, 1, 1, 1, 2, 4, 1998, 256,\n", " 4, 0, 4, 0, 273, 34, 8, 0],\n", " [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 4, 20,\n", " 24, 12, 119, 30, 19, 83, 12, 202],\n", " [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 4, 8,\n", " 79, 1031, 185, 13, 20, 30, 24, 8],\n", " [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 18,\n", " 61, 36, 143, 104, 20, 30, 1408, 51]], device='cuda:0')" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "path = untar_data(URLs.IMDB_SAMPLE)\n", "data = TextClasDataBunch.from_csv(path, 'texts.csv')\n", "iter_dl = iter(data.train_dl)\n", "_ = next(iter_dl)\n", "x,y = next(iter_dl)\n", "x[-10:,:20]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is all done internally when we use [`TextClasDataBunch`](/text.data.html#TextClasDataBunch), by using the following classes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class SortSampler[source][test]

\n", "\n", "> SortSampler(**`data_source`**:`NPArrayList`, **`key`**:`KeyFunc`) :: [`Sampler`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Sampler)\n", "\n", "

No tests found for SortSampler. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Go through the text data by order of length. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(SortSampler)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This pytorch [`Sampler`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Sampler) is used for the validation and (if applicable) the test set. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class SortishSampler[source][test]

\n", "\n", "> SortishSampler(**`data_source`**:`NPArrayList`, **`key`**:`KeyFunc`, **`bs`**:`int`) :: [`Sampler`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Sampler)\n", "\n", "

Tests found for SortishSampler:

  • pytest -sv tests/test_text_data.py::test_sortish_sampler [source]

To run tests please refer to this guide.

\n", "\n", "Go through the text data by order of length with a bit of randomness. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(SortishSampler)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This pytorch [`Sampler`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Sampler) is generally used for the training set." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

pad_collate[source][test]

\n", "\n", "> pad_collate(**`samples`**:`BatchSamples`, **`pad_idx`**:`int`=***`1`***, **`pad_first`**:`bool`=***`True`***, **`backwards`**:`bool`=***`False`***) → `Tuple`\\[`LongTensor`, `LongTensor`\\]\n", "\n", "

No tests found for pad_collate. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Function that collect samples and adds padding. Flips token order if needed " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(pad_collate)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This will collate the `samples` in batches while adding padding with `pad_idx`. If `pad_first=True`, padding is applied at the beginning (before the sentence starts) otherwise it's applied at the end." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Undocumented Methods - Methods moved below this line will intentionally be hidden" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

new[source][test]

\n", "\n", "> new(**`items`**:`Iterator`\\[`T_co`\\], **`processor`**:`Union`\\[[`PreProcessor`](/data_block.html#PreProcessor), `Collection`\\[[`PreProcessor`](/data_block.html#PreProcessor)\\]\\]=***`None`***, **\\*\\*`kwargs`**) → `ItemList`\n", "\n", "

No tests found for new. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Create a new [`ItemList`](/data_block.html#ItemList) from `items`, keeping the same attributes. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextList.new)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

get[source][test]

\n", "\n", "> get(**`i`**)\n", "\n", "

No tests found for get. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Subclass if you want to customize how to create item `i` from `self.items`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextList.get)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

process_one[source][test]

\n", "\n", "> process_one(**`item`**)\n", "\n", "

No tests found for process_one. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TokenizeProcessor.process_one)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

process[source][test]

\n", "\n", "> process(**`ds`**)\n", "\n", "

No tests found for process. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TokenizeProcessor.process)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

process_one[source][test]

\n", "\n", "> process_one(**`item`**)\n", "\n", "

No tests found for process_one. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(OpenFileProcessor.process_one)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

process[source][test]

\n", "\n", "> process(**`ds`**)\n", "\n", "

No tests found for process. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(NumericalizeProcessor.process)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

process_one[source][test]

\n", "\n", "> process_one(**`item`**)\n", "\n", "

No tests found for process_one. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(NumericalizeProcessor.process_one)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

reconstruct[source][test]

\n", "\n", "> reconstruct(**`t`**:`Tensor`)\n", "\n", "

No tests found for reconstruct. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Reconstruct one of the underlying item for its data `t`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(TextList.reconstruct)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

on_epoch_begin[source][test]

\n", "\n", "> on_epoch_begin(**\\*\\*`kwargs`**)\n", "\n", "

No tests found for on_epoch_begin. To contribute a test please refer to this guide and this discussion.

\n", "\n", "At the beginning of each epoch. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(LanguageModelPreLoader.on_epoch_begin)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "

on_epoch_end[source][test]

\n", "\n", "> on_epoch_end(**\\*\\*`kwargs`**)\n", "\n", "

No tests found for on_epoch_end. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Called at the end of an epoch. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(LanguageModelPreLoader.on_epoch_end)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## New Methods - Please document or move to the undocumented section" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class LMLabelList[source][test]

\n", "\n", "> LMLabelList(**`items`**:`Iterator`\\[`T_co`\\], **\\*\\*`kwargs`**) :: [`EmptyLabelList`](/data_block.html#EmptyLabelList)\n", "\n", "

No tests found for LMLabelList. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Basic [`ItemList`](/data_block.html#ItemList) for dummy labels. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(LMLabelList)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

allocate_buffers[source][test]

\n", "\n", "> allocate_buffers()\n", "\n", "

No tests found for allocate_buffers. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Create the ragged array that will be filled when we ask for items. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(LanguageModelPreLoader.allocate_buffers)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

shuffle[source][test]

\n", "\n", "> shuffle()\n", "\n", "

No tests found for shuffle. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(LanguageModelPreLoader.CircularIndex.shuffle)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

fill_row[source][test]

\n", "\n", "> fill_row(**`forward`**, **`items`**, **`idx`**, **`row`**, **`ro`**, **`ri`**, **`overlap`**, **`lengths`**)\n", "\n", "

No tests found for fill_row. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Fill the row with tokens from the ragged array. --OBS-- overlap != 1 has not been implemented " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(LanguageModelPreLoader.fill_row)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] } ], "metadata": { "jekyll": { "keywords": "fastai", "summary": "Basic dataset for NLP tasks and helper functions to create a DataBunch", "title": "text.data" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }