Coverage for nltk.tokenize.treebank : 95%
# Natural Language Toolkit: Tokenizers
#
# Copyright (C) 2001-2012 NLTK Project
# Author: Edward Loper <edloper@gradient.cis.upenn.edu>
#         Michael Heilman <mheilman@cmu.edu> (re-port from http://www.cis.upenn.edu/~treebank/tokenizer.sed)
#
# URL: <http://nltk.sourceforge.net>
# For license information, see LICENSE.TXT
"""
Penn Treebank Tokenizer

The Treebank tokenizer uses regular expressions to tokenize text as in
Penn Treebank. This implementation is a port of the tokenizer sed script
written by Robert McIntyre and available at
http://www.cis.upenn.edu/~treebank/tokenizer.sed.
"""
""" The Treebank tokenizer uses regular expressions to tokenize text as in Penn Treebank. This is the method that is invoked by ``word_tokenize()``. It assumes that the text has already been segmented into sentences, e.g. using ``sent_tokenize()``.
This tokenizer performs the following steps:
- split standard contractions, e.g. ``don't`` -> ``do n't`` and ``they'll`` -> ``they 'll`` - treat most punctuation characters as separate tokens - split off commas and single quotes, when followed by whitespace - separate periods that appear at the end of line
>>> from nltk.tokenize import TreebankWordTokenizer >>> s = '''Good muffins cost $3.88\\nin New York. Please buy me\\ntwo of them.\\n\\nThanks.''' >>> TreebankWordTokenizer().tokenize(s) ['Good', 'muffins', 'cost', '$', '3.88', 'in', 'New', 'York.', 'Please', 'buy', 'me', 'two', 'of', 'them', '.', 'Thanks', '.'] >>> s = "They'll save and invest more." >>> TreebankWordTokenizer().tokenize(s) ['They', "'ll", 'save', 'and', 'invest', 'more', '.']
NB. this tokenizer assumes that the text is presented as one sentence per line, where each line is delimited with a newline character. The only periods to be treated as separate tokens are those appearing at the end of a line. """
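The steps listed in the docstring above can be sketched in a few lines of plain ``re`` code. This is a simplified toy, not the actual NLTK implementation (``toy_treebank_tokenize`` is a hypothetical name, and the patterns cover only a fraction of the real tokenizer's rules), but it reproduces the second doctest example:

```python
import re

# Toy sketch of the Treebank approach: pad punctuation with spaces,
# split contractions, split a sentence-final period, then split on
# whitespace. NOT the real implementation -- illustration only.
def toy_treebank_tokenize(text):
    # treat most punctuation characters as separate tokens
    text = re.sub(r"([,;:@#$%&?!])", r" \1 ", text)
    # split standard contractions: don't -> do n't
    text = re.sub(r"\b(\w+)(n't)\b", r"\1 \2", text, flags=re.I)
    # split clitics: they'll -> they 'll
    text = re.sub(r"('ll|'re|'ve|'s|'m|'d)\b", r" \1", text, flags=re.I)
    # separate a period that appears at the end of the line
    text = re.sub(r"\.$", r" .", text.strip())
    return text.split()

print(toy_treebank_tokenize("They'll save and invest more."))
```

The result matches the docstring's doctest: ``['They', "'ll", 'save', 'and', 'invest', 'more', '.']``.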
# List of contractions adapted from Robert MacIntyre's tokenizer.
CONTRACTIONS2 = [re.compile(r"(?i)\b(d)('ye)\b"),
                 re.compile(r"(?i)\b(gim)(me)\b"),
                 re.compile(r"(?i)\b(gon)(na)\b"),
                 re.compile(r"(?i)\b(got)(ta)\b"),
                 re.compile(r"(?i)\b(lem)(me)\b"),
                 re.compile(r"(?i)\b(mor)('n)\b"),
                 re.compile(r"(?i)\b(wan)(na) ")]
CONTRACTIONS3 = [re.compile(r"(?i) ('t)(was)\b")]
CONTRACTIONS4 = [re.compile(r"(?i)\b(wha)(t)(cha)\b")]
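To see how one of these two-group contraction patterns behaves when applied with a substitution of the form ``r" \1 \2 "`` (the replacement the tokenizer uses for two-group contractions), here is a small standalone illustration using only the ``re`` module:

```python
import re

# One CONTRACTIONS2-style pattern: the two capture groups are re-emitted
# with spaces around and between them, so a later whitespace split yields
# two tokens.
gimme = re.compile(r"(?i)\b(gim)(me)\b")
text = gimme.sub(r" \1 \2 ", "Gimme a break")
print(text.split())
```

Because the pattern is case-insensitive but the groups echo the matched text, the original casing is preserved: ``Gimme`` becomes the two tokens ``Gim`` and ``me``.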
#starting quotes
#punctuation
#parens, brackets, etc.
#add extra space to make things easier
#ending quotes
# We are not using CONTRACTIONS4 since
# they are also commented out in the SED scripts
# for regexp in self.CONTRACTIONS4:
#     text = regexp.sub(r' \1 \2 \3 ', text)
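The CONTRACTIONS4 patterns capture three pieces rather than two, which is why the commented-out loop substitutes three groups (``r' \1 \2 \3 '``). A standalone illustration of what that substitution would do if it were enabled:

```python
import re

# A CONTRACTIONS4 pattern splits a word into three tokens:
# "whatcha" -> "wha", "t", "cha".
whatcha = re.compile(r"(?i)\b(wha)(t)(cha)\b")
tokens = whatcha.sub(r" \1 \2 \3 ", "whatcha doing").split()
print(tokens)
```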
#add space at end to match up with MacIntyre's output (for debugging)
if __name__ == '__main__':
    import doctest
    doctest.testmod(optionflags=doctest.NORMALIZE_WHITESPACE)