{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## NLP Preprocessing" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.gen_doc.nbdoc import *\n", "from fastai.text import * \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`text.transform` contains the functions that deal behind the scenes with the two main tasks when preparing texts for modelling: *tokenization* and *numericalization*.\n", "\n", "*Tokenization* splits the raw texts into tokens (which can be words, or punctuation signs...). The most basic way to do this would be to split on spaces, but it's possible to be more subtle; for instance, contractions like \"isn't\" or \"don't\" should be split into \\[\"is\",\"n't\"\\] or \\[\"do\",\"n't\"\\]. By default, fastai uses the powerful [spacy tokenizer](https://spacy.io/api/tokenizer).\n", "\n", "*Numericalization* is easier, as it simply consists of assigning a unique id to each token and mapping each of those tokens to their respective ids." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Tokenization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Introduction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This step is actually divided into two phases: first, we apply a list of `rules` to the raw texts as preprocessing, then we use the tokenizer to split them into lists of tokens. Combining those `rules`, the `tok_func` and the `lang` to process the texts is the role of the [`Tokenizer`](/text.transform.html#Tokenizer) class." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class Tokenizer[source][test]

\n", "\n", "> Tokenizer(**`tok_func`**:`Callable`=***`'SpacyTokenizer'`***, **`lang`**:`str`=***`'en'`***, **`pre_rules`**:`ListRules`=***`None`***, **`post_rules`**:`ListRules`=***`None`***, **`special_cases`**:`StrList`=***`None`***, **`n_cpus`**:`int`=***`None`***)\n", "\n", "
×

Tests found for Tokenizer:

To run tests please refer to this guide.

\n", "\n", "Put together rules and a tokenizer function to tokenize text with multiprocessing. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Tokenizer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This class will process texts by applying the `pre_rules` to them, tokenizing them with `tok_func(lang)` and then applying the `post_rules`. `special_cases` is a list of tokens passed as special to the tokenizer and `n_cpus` is the number of CPUs to use for multiprocessing (by default, half the CPUs available). We don't pass a tokenizer directly because of multiprocessing: each process needs to instantiate its own tokenizer. The rules and special_cases default to:\n", "\n", "`defaults.text_pre_rules = [fix_html, replace_rep, replace_wrep, spec_add_spaces, rm_useless_spaces]`
[source]
\n", "\n", "`defaults.text_post_rules = [replace_all_caps, deal_caps]`
[source]
\n", "\n", "and\n", "\n", "`defaults.text_spec_tok = [UNK,PAD,BOS,FLD,TK_MAJ,TK_UP,TK_REP,TK_WREP]`
[source]
\n", "\n", "The rules are all listed below; here is the meaning of the special tokens:\n", "- `UNK` (xxunk) is for an unknown word (one that isn't present in the current vocabulary)\n", "- `PAD` (xxpad) is the token used for padding, if we need to regroup several texts of different lengths in a batch\n", "- `BOS` (xxbos) represents the beginning of a text in your dataset\n", "- `FLD` (xxfld) is used if you set `mark_fields=True` in your [`TokenizeProcessor`](/text.data.html#TokenizeProcessor) to separate the different fields of texts (if your texts are loaded from several columns in a dataframe)\n", "- `TK_MAJ` (xxmaj) is used to indicate the next word begins with a capital letter in the original text\n", "- `TK_UP` (xxup) is used to indicate the next word is written in all caps in the original text\n", "- `TK_REP` (xxrep) is used to indicate the next character is repeated n times in the original text (usage: xxrep n {char})\n", "- `TK_WREP` (xxwrep) is used to indicate the next word is repeated n times in the original text (usage: xxwrep n {word})" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
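To make the processing order concrete, here is a minimal plain-Python sketch of the same pipeline shape: pre-rules transform the raw string, the tokenizer splits it, and post-rules transform the token list. The rule and tokenizer functions below are hypothetical stand-ins for illustration, not the fastai implementations:

```python
import re

def collapse_spaces_pre_rule(t: str) -> str:
    # hypothetical pre-rule: operates on the raw string
    return re.sub(' {2,}', ' ', t)

def simple_tokenize(t: str) -> list:
    # stand-in for tok_func(lang): split into words and punctuation
    return re.findall(r"\w+|[^\w\s]", t)

def mark_caps_post_rule(tokens: list) -> list:
    # hypothetical post-rule: operates on the token list
    out = []
    for tok in tokens:
        if tok[:1].isupper():
            out += ['xxmaj', tok.lower()]
        else:
            out.append(tok)
    return out

def process(t: str, pre_rules, post_rules) -> list:
    for r in pre_rules: t = r(t)              # 1. pre-rules on raw text
    tokens = simple_tokenize(t)               # 2. tokenization
    for r in post_rules: tokens = r(tokens)   # 3. post-rules on tokens
    return tokens

process("This  is  nice!", [collapse_spaces_pre_rule], [mark_caps_post_rule])
# → ['xxmaj', 'this', 'is', 'nice', '!']
```

This mirrors the order described above; the real tokenizer is much smarter about contractions and special cases.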

process_text[source][test]

\n", "\n", "> process_text(**`t`**:`str`, **`tok`**:[`BaseTokenizer`](/text.transform.html#BaseTokenizer)) → `List`\\[`str`\\]\n", "\n", "
×

No tests found for process_text. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Process one text `t` with tokenizer `tok`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Tokenizer.process_text)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

process_all[source][test]

\n", "\n", "> process_all(**`texts`**:`StrList`) → `List`\\[`List`\\[`str`\\]\\]\n", "\n", "
×

Tests found for process_all:

Some other tests where process_all is used:

  • pytest -sv tests/test_text_transform.py::test_tokenize [source]
  • pytest -sv tests/test_text_transform.py::test_tokenize_handles_empty_lines [source]
  • pytest -sv tests/test_text_transform.py::test_tokenize_ignores_extraneous_space [source]

To run tests please refer to this guide.

\n", "\n", "Process a list of `texts`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Tokenizer.process_all)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For an example, we're going to grab some IMDB reviews." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "PosixPath('/home/ubuntu/.fastai/data/imdb_sample')" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "path = untar_data(URLs.IMDB_SAMPLE)\n", "path" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'This is a extremely well-made film. The acting, script and camera-work are all first-rate. The music is good, too, though it is mostly early in the film, when things are still relatively cheery. There are no really superstars in the cast, though several faces will be familiar. The entire cast does an excellent job with the script.

But it is hard to watch, because there is no good end to a situation like the one presented. It is now fashionable to blame the British for setting Hindus and Muslims against each other, and then cruelly separating them into two countries. There is some merit in this view, but it\\'s also true that no one forced Hindus and Muslims in the region to mistreat each other as they did around the time of partition. It seems more likely that the British simply saw the tensions between the religions and were clever enough to exploit them to their own ends.

The result is that there is much cruelty and inhumanity in the situation and this is very unpleasant to remember and to see on the screen. But it is never painted as a black-and-white case. There is baseness and nobility on both sides, and also the hope for change in the younger generation.

There is redemption of a sort, in the end, when Puro has to make a hard choice between a man who has ruined her life, but also truly loved her, and her family which has disowned her, then later come looking for her. But by that point, she has no option that is without great pain for her.

This film carries the message that both Muslims and Hindus have their grave faults, and also that both can be dignified and caring people. The reality of partition makes that realisation all the more wrenching, since there can never be real reconciliation across the India/Pakistan border. In that sense, it is similar to \"Mr & Mrs Iyer\".

In the end, we were glad to have seen the film, even though the resolution was heartbreaking. If the UK and US could deal with their own histories of racism with this kind of frankness, they would certainly be better off.'" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = pd.read_csv(path/'texts.csv', header=None)\n", "example_text = df.iloc[2][1]; example_text" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'xxmaj this is a extremely well - made film . xxmaj the acting , script and camera - work are all first - rate . xxmaj the music is good , too , though it is mostly early in the film , when things are still relatively cheery . xxmaj there are no really superstars in the cast , though several faces will be familiar . xxmaj the entire cast does an excellent job with the script . \\n\\n xxmaj but it is hard to watch , because there is no good end to a situation like the one presented . xxmaj it is now fashionable to blame the xxmaj british for setting xxmaj hindus and xxmaj muslims against each other , and then cruelly separating them into two countries . xxmaj there is some merit in this view , but it \\'s also true that no one forced xxmaj hindus and xxmaj muslims in the region to mistreat each other as they did around the time of partition . xxmaj it seems more likely that the xxmaj british simply saw the tensions between the religions and were clever enough to exploit them to their own ends . \\n\\n xxmaj the result is that there is much cruelty and inhumanity in the situation and this is very unpleasant to remember and to see on the screen . xxmaj but it is never painted as a black - and - white case . xxmaj there is baseness and nobility on both sides , and also the hope for change in the younger generation . 
\\n\\n xxmaj there is redemption of a sort , in the end , when xxmaj puro has to make a hard choice between a man who has ruined her life , but also truly loved her , and her family which has disowned her , then later come looking for her . xxmaj but by that point , she has no option that is without great pain for her . \\n\\n xxmaj this film carries the message that both xxmaj muslims and xxmaj hindus have their grave faults , and also that both can be dignified and caring people . xxmaj the reality of partition makes that realisation all the more wrenching , since there can never be real reconciliation across the xxmaj india / xxmaj pakistan border . xxmaj in that sense , it is similar to \" xxmaj mr & xxmaj mrs xxmaj iyer \" . \\n\\n xxmaj in the end , we were glad to have seen the film , even though the resolution was heartbreaking . xxmaj if the xxup uk and xxup us could deal with their own histories of racism with this kind of frankness , they would certainly be better off .'" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenizer = Tokenizer()\n", "tok = SpacyTokenizer('en')\n", "' '.join(tokenizer.process_text(example_text, tok))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As explained before, the tokenizer splits the text on words and punctuation signs, but in a smart manner. The rules (see below) have also modified the text a little. We can also tokenize a whole list of texts at once:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'xxmaj this is a extremely well - made film . xxmaj the acting , script and camera - work are all first - rate . xxmaj the music is good , too , though it is mostly early in the film , when things are still relatively cheery . xxmaj there are no really superstars in the cast , though several faces will be familiar . xxmaj the entire cast does an excellent job with the script . 
\\n\\n xxmaj but it is hard to watch , because there is no good end to a situation like the one presented . xxmaj it is now fashionable to blame the xxmaj british for setting xxmaj hindus and xxmaj muslims against each other , and then cruelly separating them into two countries . xxmaj there is some merit in this view , but it \\'s also true that no one forced xxmaj hindus and xxmaj muslims in the region to mistreat each other as they did around the time of partition . xxmaj it seems more likely that the xxmaj british simply saw the tensions between the religions and were clever enough to exploit them to their own ends . \\n\\n xxmaj the result is that there is much cruelty and inhumanity in the situation and this is very unpleasant to remember and to see on the screen . xxmaj but it is never painted as a black - and - white case . xxmaj there is baseness and nobility on both sides , and also the hope for change in the younger generation . \\n\\n xxmaj there is redemption of a sort , in the end , when xxmaj puro has to make a hard choice between a man who has ruined her life , but also truly loved her , and her family which has disowned her , then later come looking for her . xxmaj but by that point , she has no option that is without great pain for her . \\n\\n xxmaj this film carries the message that both xxmaj muslims and xxmaj hindus have their grave faults , and also that both can be dignified and caring people . xxmaj the reality of partition makes that realisation all the more wrenching , since there can never be real reconciliation across the xxmaj india / xxmaj pakistan border . xxmaj in that sense , it is similar to \" xxmaj mr & xxmaj mrs xxmaj iyer \" . \\n\\n xxmaj in the end , we were glad to have seen the film , even though the resolution was heartbreaking . 
xxmaj if the xxup uk and xxup us could deal with their own histories of racism with this kind of frankness , they would certainly be better off .'" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = pd.read_csv(path/'texts.csv', header=None)\n", "texts = df[1].values\n", "tokenizer = Tokenizer()\n", "tokens = tokenizer.process_all(texts)\n", "' '.join(tokens[2])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Customize the tokenizer" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `tok_func` must return an instance of [`BaseTokenizer`](/text.transform.html#BaseTokenizer):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class BaseTokenizer[source][test]

\n", "\n", "> BaseTokenizer(**`lang`**:`str`)\n", "\n", "
×

No tests found for BaseTokenizer. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Basic class for a tokenizer function. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(BaseTokenizer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

tokenizer[source][test]

\n", "\n", "> tokenizer(**`t`**:`str`) → `List`\\[`str`\\]\n", "\n", "
×

Tests found for tokenizer:

Some other tests where tokenizer is used:

  • pytest -sv tests/test_text_transform.py::test_tokenize [source]
  • pytest -sv tests/test_text_transform.py::test_tokenize_handles_empty_lines [source]
  • pytest -sv tests/test_text_transform.py::test_tokenize_ignores_extraneous_space [source]

To run tests please refer to this guide.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(BaseTokenizer.tokenizer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Takes a text `t` and returns the list of its tokens." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

add_special_cases[source][test]

\n", "\n", "> add_special_cases(**`toks`**:`StrList`)\n", "\n", "
×

No tests found for add_special_cases. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(BaseTokenizer.add_special_cases)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Record a list of special tokens `toks`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The fastai library uses [spacy](https://spacy.io/) tokenizers as its default. The following class wraps it as [`BaseTokenizer`](/text.transform.html#BaseTokenizer)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class SpacyTokenizer[source][test]

\n", "\n", "> SpacyTokenizer(**`lang`**:`str`) :: [`BaseTokenizer`](/text.transform.html#BaseTokenizer)\n", "\n", "
×

No tests found for SpacyTokenizer. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Wrapper around a spacy tokenizer to make it a [`BaseTokenizer`](/text.transform.html#BaseTokenizer). " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(SpacyTokenizer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want to use your own tokenizer, just subclass [`BaseTokenizer`](/text.transform.html#BaseTokenizer) and override its `tokenizer` and `add_special_cases` functions." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Rules" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Rules are just functions that take a string and return the modified string. This allows you to customize the list of `default_pre_rules` or `default_post_rules` as you please. Those are:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

deal_caps[source][test]

\n", "\n", "> deal_caps(**`x`**:`StrList`) → `StrList`\n", "\n", "
×

Tests found for deal_caps:

  • pytest -sv tests/test_text_transform.py::test_rules [source]

To run tests please refer to this guide.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(deal_caps, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Every word in `x` is lower-cased; if a word begins with a capital, we put a token `TK_MAJ` in front of it." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
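A minimal illustrative sketch of this behavior in plain Python (an assumption for illustration: only tokens whose first letter is capitalized and whose remainder is lower-case are marked; this is not the fastai source):

```python
def deal_caps_sketch(tokens):
    # Prefix capitalized words with the 'xxmaj' (TK_MAJ) marker and lower-case them
    out = []
    for tok in tokens:
        if len(tok) > 1 and tok[0].isupper() and tok[1:].islower():
            out.append('xxmaj')
            tok = tok.lower()
        out.append(tok)
    return out

deal_caps_sketch(['This', 'movie', 'Rocks', '!'])
# → ['xxmaj', 'this', 'movie', 'xxmaj', 'rocks', '!']
```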

fix_html[source][test]

\n", "\n", "> fix_html(**`x`**:`str`) → `str`\n", "\n", "
×

Tests found for fix_html:

  • pytest -sv tests/test_text_transform.py::test_rules [source]

To run tests please refer to this guide.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(fix_html, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This rule replaces a number of HTML characters or entities with plain-text ones. For instance `
` are replaced by `\\n`, ` ` by spaces etc..." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Some HTML& text\\n'" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fix_html(\"Some HTML text
\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
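An illustrative sketch covering just two of these replacements, `&nbsp;` entities and `<br />` tags (the real rule handles many more cases):

```python
import html

def fix_html_sketch(x: str) -> str:
    # Undo two common HTML artifacts, then decode any remaining entities
    x = x.replace('&nbsp;', ' ').replace('<br />', '\n')
    return html.unescape(x)

fix_html_sketch('Some HTML&nbsp;text<br />')
# → 'Some HTML text\n'
```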

replace_all_caps[source][test]

\n", "\n", "> replace_all_caps(**`x`**:`StrList`) → `StrList`\n", "\n", "
×

Tests found for replace_all_caps:

  • pytest -sv tests/test_text_transform.py::test_rules [source]

To run tests please refer to this guide.

\n", "\n", "Replace tokens in ALL CAPS in `x` by their lower version and add `TK_UP` before. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(replace_all_caps)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
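A minimal sketch of this rule in plain Python (assuming, for illustration, that single-letter tokens like 'I' are left untouched; not the fastai source):

```python
def replace_all_caps_sketch(tokens):
    # Lower-case ALL CAPS tokens and prefix them with the 'xxup' (TK_UP) marker
    out = []
    for tok in tokens:
        if len(tok) > 1 and tok.isupper():
            out.append('xxup')
            tok = tok.lower()
        out.append(tok)
    return out

replace_all_caps_sketch(['I', 'am', 'SHOUTING', 'now'])
# → ['I', 'am', 'xxup', 'shouting', 'now']
```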

replace_rep[source][test]

\n", "\n", "> replace_rep(**`t`**:`str`) → `str`\n", "\n", "
×

Tests found for replace_rep:

  • pytest -sv tests/test_text_transform.py::test_rules [source]

To run tests please refer to this guide.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(replace_rep, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Whenever a character is repeated more than three times in `t`, we replace the whole thing by 'TK_REP n char' where n is the number of occurrences and char the character." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"I'm so excited xxrep 8 ! \"" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "replace_rep(\"I'm so excited!!!!!!!!\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
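This rule can be sketched as a single regex substitution with a replacement function (an illustration reproducing the behavior above, not the fastai source; it assumes a threshold of four or more repeated characters):

```python
import re

def replace_rep_sketch(t: str) -> str:
    # Replace a character repeated 4+ times with ' xxrep n c '
    def _repl(m):
        c, cc = m.group(1), m.group(2)
        return f' xxrep {len(cc) + 1} {c} '
    return re.sub(r'(\S)(\1{3,})', _repl, t)

replace_rep_sketch("I'm so excited!!!!!!!!")
# → "I'm so excited xxrep 8 ! "
```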

replace_wrep[source][test]

\n", "\n", "> replace_wrep(**`t`**:`str`) → `str`\n", "\n", "
×

Tests found for replace_wrep:

  • pytest -sv tests/test_text_transform.py::test_rules [source]

To run tests please refer to this guide.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(replace_wrep, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Whenever a word is repeated more than four times in `t`, we replace the whole thing by 'TK_WREP n w' where n is the number of occurrences and w the word repeated." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"I've never xxwrep 7 ever done this.\"" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "replace_wrep(\"I've never ever ever ever ever ever ever ever done this.\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
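A plain-Python sketch of this rule (an illustration, not the fastai source; it assumes a threshold of four or more consecutive occurrences, which may differ from the library's exact rule):

```python
import re

def replace_wrep_sketch(t: str) -> str:
    # Replace a word repeated 4+ times in a row with 'xxwrep n word'
    def _repl(m):
        words = m.group(0).split()
        return f'xxwrep {len(words)} {words[0]}'
    return re.sub(r'\b(\w+)(\s+\1\b){3,}', _repl, t)

replace_wrep_sketch("I've never ever ever ever ever ever ever ever done this.")
# → "I've never xxwrep 7 ever done this."
```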

rm_useless_spaces[source][test]

\n", "\n", "> rm_useless_spaces(**`t`**:`str`) → `str`\n", "\n", "
×

Tests found for rm_useless_spaces:

  • pytest -sv tests/test_text_transform.py::test_rules [source]

To run tests please refer to this guide.

\n", "\n", "Remove multiple spaces in `t`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(rm_useless_spaces)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Inconsistent use of spaces.'" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "rm_useless_spaces(\"Inconsistent use of spaces.\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
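This rule amounts to a one-line regex substitution; a sketch of the idea (illustrative, not the fastai source):

```python
import re

def rm_useless_spaces_sketch(t: str) -> str:
    # Collapse any run of 2+ spaces into a single space
    return re.sub(' {2,}', ' ', t)

rm_useless_spaces_sketch("Inconsistent  use  of   spaces.")
# → 'Inconsistent use of spaces.'
```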

spec_add_spaces[source][test]

\n", "\n", "> spec_add_spaces(**`t`**:`str`) → `str`\n", "\n", "
×

Tests found for spec_add_spaces:

  • pytest -sv tests/test_text_transform.py::test_rules [source]

To run tests please refer to this guide.

\n", "\n", "Add spaces around / and # in `t`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(spec_add_spaces)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": false }, "outputs": [ { "data": { "text/plain": [ "'I # like to # put # hashtags # everywhere!'" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "spec_add_spaces('I #like to #put #hashtags #everywhere!')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Numericalization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To convert our set of tokens to unique ids (and be able to have them go through embeddings), we use the following class:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class Vocab[source][test]

\n", "\n", "> Vocab(**`itos`**:`StrList`)\n", "\n", "
×

No tests found for Vocab. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Contain the correspondence between numbers and tokens and numericalize. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Vocab)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`itos` contains the id to token correspondence." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

create[source][test]

\n", "\n", "> create(**`tokens`**:`Tokens`, **`max_vocab`**:`int`, **`min_freq`**:`int`) → `Vocab`\n", "\n", "
×

No tests found for create. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Create a vocabulary from a set of `tokens`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Vocab.create)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Only keeps at most `max_vocab` tokens, and only those that appear at least `min_freq` times; the rest are mapped to `UNK`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
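A minimal sketch of this kind of vocabulary building in plain Python (the helper `create_vocab` below is hypothetical and only illustrates the idea, not the fastai implementation):

```python
from collections import Counter

UNK, PAD = 'xxunk', 'xxpad'   # same special token strings as above

def create_vocab(token_lists, max_vocab, min_freq):
    # Count every token across all tokenized texts
    freq = Counter(tok for toks in token_lists for tok in toks)
    # Keep at most max_vocab tokens, each seen at least min_freq times
    itos = [tok for tok, c in freq.most_common(max_vocab) if c >= min_freq]
    for special in (PAD, UNK):           # make sure the special tokens exist
        if special not in itos: itos.insert(0, special)
    stoi = {tok: i for i, tok in enumerate(itos)}
    def numericalize(toks):              # tokens -> ids, unknown -> UNK
        return [stoi.get(t, stoi[UNK]) for t in toks]
    def textify(nums, sep=' '):          # ids -> tokens
        return sep.join(itos[i] for i in nums)
    return itos, numericalize, textify

itos, numericalize, textify = create_vocab([['a', 'b', 'a'], ['a', 'c']],
                                           max_vocab=10, min_freq=2)
textify(numericalize(['a', 'b', 'z']))
# → 'a xxunk xxunk'
```

Here 'b' and 'z' both map to the `UNK` id: 'b' because it appears fewer than `min_freq` times, 'z' because it was never seen.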

numericalize[source][test]

\n", "\n", "> numericalize(**`t`**:`StrList`) → `List`\\[`int`\\]\n", "\n", "
×

No tests found for numericalize. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Convert a list of tokens `t` to their ids. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Vocab.numericalize)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

textify[source][test]

\n", "\n", "> textify(**`nums`**:`Collection`\\[`int`\\], **`sep`**=***`' '`***) → `List`\\[`str`\\]\n", "\n", "
×

No tests found for textify. To contribute a test please refer to this guide and this discussion.

\n", "\n", "Convert a list of `nums` to their tokens. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Vocab.textify)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[4, 20, 15, 12, 623, 89, 23, 115, 31, 10]" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "vocab = Vocab.create(tokens, max_vocab=1000, min_freq=2)\n", "vocab.numericalize(tokens[2])[:10]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Undocumented Methods - Methods moved below this line will intentionally be hidden" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

tokenizer[source][test]

\n", "\n", "> tokenizer(**`t`**:`str`) → `List`\\[`str`\\]\n", "\n", "
×

Tests found for tokenizer:

Some other tests where tokenizer is used:

  • pytest -sv tests/test_text_transform.py::test_tokenize [source]
  • pytest -sv tests/test_text_transform.py::test_tokenize_handles_empty_lines [source]
  • pytest -sv tests/test_text_transform.py::test_tokenize_ignores_extraneous_space [source]

To run tests please refer to this guide.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(SpacyTokenizer.tokenizer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

add_special_cases[source][test]

\n", "\n", "> add_special_cases(**`toks`**:`StrList`)\n", "\n", "
×

No tests found for add_special_cases. To contribute a test please refer to this guide and this discussion.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(SpacyTokenizer.add_special_cases)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## New Methods - Please document or move to the undocumented section" ] } ], "metadata": { "jekyll": { "keywords": "fastai", "summary": "NLP data processing; tokenizes text and creates vocab indexes", "title": "text.transform" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }