{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# NLP Preprocessing" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.gen_doc.nbdoc import *\n", "from fastai.text import * \n", "from fastai import *" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`text.tranform` contains the functions that deal behind the scenes with the two main tasks when preparing texts for modelling: *tokenization* and *numericalization*.\n", "\n", "*Tokenization* splits the raw texts into tokens (wich can be words, or punctuation signs...). The most basic way to do this would be to separate according to spaces, but it's possible to be more subtle; for instance, the contractions like \"isn't\" or \"don't\" should be split in \\[\"is\",\"n't\"\\] or \\[\"do\",\"n't\"\\]. By default fastai will use the powerful [spacy tokenizer](https://spacy.io/api/tokenizer).\n", "\n", "*Numericalization* is easier as it just consists in attributing a unique id to each token and mapping each of those tokens to their respective ids." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Tokenization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Introduction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This step is actually divided in two phases: first, we apply a certain list of `rules` to the raw texts as preprocessing, then we use the tokenizer to split them in lists of tokens. Combining together those `rules`, the `tok_func`and the `lang` to process the texts is the role of the [`Tokenizer`](/text.transform.html#Tokenizer) class." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class Tokenizer[source]

\n", "\n", "> Tokenizer(`tok_func`:`Callable`=`'SpacyTokenizer'`, `lang`:`str`=`'en'`, `rules`:`ListRules`=`None`, `special_cases`:`StrList`=`None`, `n_cpus`:`int`=`None`)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Tokenizer, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This class will process texts by appling them the `rules` then tokenizing them with `tok_func(lang)`. `special_cases` are a list of tokens passed as special to the tokenizer and `n_cpus` is the number of cpus to use for multi-processing (by default, half the cpus available). We don't directly pass a tokenizer for multi-processing purposes: each process needs to initiate a tokenizer of its own. The rules and special_cases default to\n", "\n", "`default_rules = [fix_html, replace_rep, replace_wrep, deal_caps, spec_add_spaces, rm_useless_spaces]`
[source]
\n", "\n", "and\n", "\n", "`default_spec_tok = [BOS, FLD, UNK, PAD]`
[source]
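To make these defaults concrete, here is one way to build a tokenizer equivalent to a plain `Tokenizer()`, but with every argument spelled out. This is only a sketch: the extra special token `xxmyfld` is made up for illustration, and it assumes `default_rules` and `default_spec_tok` are the module-level defaults listed above.

```python
from fastai.text import *

# Equivalent to Tokenizer() with the defaults written out explicitly,
# plus one made-up extra special token that spacy should never split.
custom_tokenizer = Tokenizer(tok_func=SpacyTokenizer, lang='en',
                             rules=default_rules,
                             special_cases=default_spec_tok + ['xxmyfld'],
                             n_cpus=2)
```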
" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

process_text[source]

\n", "\n", "> process_text(`t`:`str`, `tok`:[`BaseTokenizer`](/text.transform.html#BaseTokenizer)) → `List`\\[`str`\\]\n", "\n", "Processe one text `t` with tokenizer `tok`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Tokenizer.process_text)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

process_all[source]

\n", "\n", "> process_all(`texts`:`StrList`) → `List`\\[`List`\\[`str`\\]\\]\n", "\n", "Process a list of `texts`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Tokenizer.process_all)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For an example, we're going to grab some IMDB reviews." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "PosixPath('/home/ubuntu/fastai/fastai/../data/imdb_sample')" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "path = untar_data(URLs.IMDB_SAMPLE)\n", "path" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Every once in a long while a movie will come along that will be so awful that I feel compelled to warn people. If I labor all my days and I can save but one soul from watching this movie, how great will be my joy.

Where to begin my discussion of pain. For starters, there was a musical montage every five minutes. There was no character development. Every character was a stereotype. We had swearing guy, fat guy who eats donuts, goofy foreign guy, etc. The script felt as if it were being written as the movie was being shot. The production value was so incredibly low that it felt like I was watching a junior high video presentation. Have the directors, producers, etc. ever even seen a movie before? Halestorm is getting worse and worse with every new entry. The concept for this movie sounded so funny. How could you go wrong with Gary Coleman and a handful of somewhat legitimate actors. But trust me when I say this, things went wrong, VERY WRONG.'" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = pd.read_csv(path/'train.csv', header=None)\n", "example_text = df.iloc[2][1]; example_text" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'every once in a long while a movie will come along that will be so awful that i feel compelled to warn people . if i labor all my days and i can save but one soul from watching this movie , how great will be my joy . \\n\\n where to begin my discussion of pain . for starters , there was a musical montage every five minutes . there was no character development . every character was a stereotype . we had swearing guy , fat guy who eats donuts , goofy foreign guy , etc . the script felt as if it were being written as the movie was being shot . the production value was so incredibly low that it felt like i was watching a junior high video presentation . have the directors , producers , etc . ever even seen a movie before ? halestorm is getting worse and worse with every new entry . the concept for this movie sounded so funny . how could you go wrong with gary coleman and a handful of somewhat legitimate actors . but trust me when i say this , things went wrong , xxup very xxup wrong .'" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenizer = Tokenizer()\n", "tok = SpacyTokenizer('en')\n", "' '.join(tokenizer.process_text(example_text, tok))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As explained before, the tokenizer split the text according to words/punctuations signs but in a smart manner. The rules (see below) also have modified the text a little bit. We can tokenize a list of texts directly at the same time:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'every once in a long while a movie will come along that will be so awful that i feel compelled to warn people . if i labor all my days and i can save but one soul from watching this movie , how great will be my joy . \\n\\n where to begin my discussion of pain . for starters , there was a musical montage every five minutes . there was no character development . every character was a stereotype . we had swearing guy , fat guy who eats donuts , goofy foreign guy , etc . the script felt as if it were being written as the movie was being shot . the production value was so incredibly low that it felt like i was watching a junior high video presentation . have the directors , producers , etc . ever even seen a movie before ? halestorm is getting worse and worse with every new entry . the concept for this movie sounded so funny . 
how could you go wrong with gary coleman and a handful of somewhat legitimate actors . but trust me when i say this , things went wrong , xxup very xxup wrong .'" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = pd.read_csv(path/'train.csv', header=None)\n", "texts = df[1].values\n", "tokenizer = Tokenizer()\n", "tokens = tokenizer.process_all(texts)\n", "' '.join(tokens[2])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Customize the tokenizer" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `tok_func` must return an instance of [`BaseTokenizer`](/text.transform.html#BaseTokenizer):" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class BaseTokenizer[source]

\n", "\n", "> BaseTokenizer(`lang`:`str`)\n", "\n", "Basic class for a tokenizer function. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(BaseTokenizer)" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

tokenizer[source]

\n", "\n", "> tokenizer(`t`:`str`) → `List`\\[`str`\\]" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(BaseTokenizer.tokenizer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Take a text `t` and returns the list of its tokens." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

add_special_cases[source]

\n", "\n", "> add_special_cases(`toks`:`StrList`)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(BaseTokenizer.add_special_cases)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Record a list of special tokens `toks`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The fastai library uses [spacy](https://spacy.io/) tokenizers as its default. The following class wraps it as [`BaseTokenizer`](/text.transform.html#BaseTokenizer)." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class SpacyTokenizer[source]

\n", "\n", "> SpacyTokenizer(`lang`:`str`) :: [`BaseTokenizer`](/text.transform.html#BaseTokenizer)\n", "\n", "Wrapper around a spacy tokenizer to make it a [`BaseTokenizer`](/text.transform.html#BaseTokenizer). " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(SpacyTokenizer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want to use your custom tokenizer, just subclass the [`BaseTokenizer`](/text.transform.html#BaseTokenizer) and override its `tokenizer` and `add_spec_cases` functions." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Rules" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Rules are just functions that take a string and return the modified string. This allows you to customize the list of `default_rules` as you please. Those `default_rules` are:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

deal_caps[source]

\n", "\n", "> deal_caps(`t`:`str`) → `str`" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(deal_caps, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In `t`, if a word is written in all caps, we put it in a lower case and add a special token before. A model will more easily learn this way the meaning of the sentence. The rest of the capitals are removed." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"i'm suddenly xxup shouting xxup for no xxup reason!\"" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "deal_caps(\"I'm suddenly SHOUTING FOR NO REASON!\")" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

fix_html[source]

\n", "\n", "> fix_html(`x`:`str`) → `str`" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(fix_html, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This rules replaces a bunch of HTML characters or norms in plain text ones. For instance `
` are replaced by `\\n`, ` ` by spaces etc..." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Some HTML& text\\n'" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fix_html(\"Some HTML text
\")" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

replace_rep[source]

\n", "\n", "> replace_rep(`t`:`str`) → `str`" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(replace_rep, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Whenever a character is repeated more than three times in `t`, we replace the whole thing by 'TK_REP n char' where n is the number of occurences and char the character." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"I'm so excited xxrep 8 ! \"" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "replace_rep(\"I'm so excited!!!!!!!!\")" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

replace_wrep[source]

\n", "\n", "> replace_wrep(`t`:`str`) → `str`" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(replace_wrep, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Whenever a word is repeated more than four times in `t`, we replace the whole thing by 'TK_WREP n w' where n is the number of occurences and w the word repeated." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"I've never xxwrep 7 ever done this.\"" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "replace_wrep(\"I've never ever ever ever ever ever ever ever done this.\")" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

rm_useless_spaces[source]

\n", "\n", "> rm_useless_spaces(`t`:`str`) → `str`\n", "\n", "Remove multiple spaces in `t`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(rm_useless_spaces)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Inconsistent use of spaces.'" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "rm_useless_spaces(\"Inconsistent use of spaces.\")" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

spec_add_spaces[source]

\n", "\n", "> spec_add_spaces(`t`:`str`) → `str`\n", "\n", "Add spaces around / and # in `t`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(spec_add_spaces)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": false }, "outputs": [ { "data": { "text/plain": [ "'I # like to # put # hashtags # everywhere!'" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "spec_add_spaces('I #like to #put #hashtags #everywhere!')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Numericalization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To convert our set of tokens to unique ids (and be able to have them go through embeddings), we use the following class:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class Vocab[source]

\n", "\n", "> Vocab(`path`:`PathOrStr`)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Vocab, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Contain the correspondance between numbers and tokens and numericalize. `path` should point to the 'tmp' directory with the token and id files." ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

create[source]

\n", "\n", "> create(`path`:`PathOrStr`, `tokens`:`Tokens`, `max_vocab`:`int`, `min_freq`:`int`) → `Vocab`" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Vocab.create, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create a [`Vocab`](/text.transform.html#Vocab) dictionary from a set of `tokens` in `path`. Only keeps `max_vocab` tokens, and only if they appear at least `min_freq` times, set the rest to `UNK`." ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

numericalize[source]

\n", "\n", "> numericalize(`t`:`StrList`) → `List`\\[`int`\\]\n", "\n", "Convert a list of tokens `t` to their ids. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Vocab.numericalize)" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

textify[source]

\n", "\n", "> textify(`nums`:`Collection`\\[`int`\\]) → `List`\\[`str`\\]\n", "\n", "Convert a list of `nums` to their tokens. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Vocab.textify)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[207, 321, 11, 6, 246, 144, 6, 22, 88, 240]" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "vocab = Vocab.create(path, tokens, max_vocab=1000, min_freq=2)\n", "vocab.numericalize(tokens[2])[:10]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Undocumented Methods - Methods moved below this line will intentionally be hidden" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

tokenizer[source]

\n", "\n", "> tokenizer(`t`:`str`) → `List`\\[`str`\\]" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(SpacyTokenizer.tokenizer)" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

add_special_cases[source]

\n", "\n", "> add_special_cases(`toks`:`StrList`)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(SpacyTokenizer.add_special_cases)" ] } ], "metadata": { "jekyll": { "keywords": "fastai", "summary": "NLP data processing; tokenizes text and creates vocab indexes", "title": "text.transform" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.6" } }, "nbformat": 4, "nbformat_minor": 2 }