{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Tokenization Exercises\n", "\n", "In the lecture we took a look at a simple tokenizer and sentence segmenter. In this exercise we will expand our understanding of the problem by asking a few important questions, and looking at the problem from a different perspectives." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup 1: Load Libraries" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import re" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Task 1: Improving tokenization\n", "\n", "Write a tokenizer to correctly tokenize the following text:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[\"'Curiouser\", 'and', 'curiouser', \"'\", 'cried', 'Alice', 'she', 'was', 'so', 'much', 'surprised', 'that', 'for', 'the', 'moment', 'she', 'quite', 'forgot', 'how', 'to', 'speak', 'good', 'English', \"'now\", \"I'm\", 'opening', 'out', 'like', 'the', 'largest', 'telescope', 'that', 'ever', 'was', 'Good', 'bye', 'feet', \"'\", 'for', 'when', 'she', 'looked', 'down', 'at', 'her', 'feet', 'they', 'seemed', 'to', 'be', 'almost', 'out', 'of', 'sight', 'they', 'were', 'getting', 'so', 'far', 'off', '.', \"'Oh\", 'my', 'poor', 'little', 'feet', 'I', 'wonder', 'who', 'will', 'put', 'on', 'your', 'shoes', 'and', 'stockings', 'for', 'you', 'now', 'dears', '?', \"I'm\", 'sure', 'I', \"shan't\", 'be', 'able', 'I', 'shall', 'be', 'a', 'great', 'deal', 'too', 'far', 'off', 'to', 'trouble', 'myself', 'about', 'you', 'you', 'must', 'manage', 'the', 'best', 'way', 'you', 'can', 'but', 'I', 'must', 'be', 'kind', 'to', 'them', \"'\", 'thought', 'Alice', \"'or\", 'perhaps', 'they', \"won't\", 'walk', 'the', 'way', 'I', 'want', 'to', 'go', 'Let', 'me', 'see', \"I'll\", 'give', 'them', 'a', 'new', 'pair', 'of', 'boots', 'every', 'Christmas', '.', \"'\"]\n" ] } ], "source": [ "text = \"\"\"'Curiouser and curiouser!' cried Alice (she was so much surprised, that for the moment she quite\n", "forgot how to speak good English); 'now I'm opening out like the largest telescope that ever was! Good-bye,\n", "feet!' (for when she looked down at her feet, they seemed to be almost out of sight, they were getting so far\n", "off). 'Oh, my poor little feet, I wonder who will put on your shoes and stockings for you now, dears? I'm sure I\n", "shan't be able! I shall be a great deal too far off to trouble myself about you: you must manage the best\n", "way you can; —but I must be kind to them,' thought Alice, 'or perhaps they won't walk the way I want to go!\n", "Let me see: I'll give them a new pair of boots every Christmas.'\n", "\"\"\"\n", "\n", "token = re.compile('Mr.|[\\w\\']+|[.?]')\n", "tokens = token.findall(text)\n", "print(tokens)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Questions:\n", "- should one separate 'm, 'll, n't, possessives, and other forms of contractions from the word?\n", "- should elipsis be considered as three '.'s or one '...'?\n", "- there's a bunch of these small rules - will you implement all of them to create a 'perfect' tokenizer?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Task 2: Twitter Tokenization\n", "As you might imagine, tokenizing tweets differs from standard tokenization. There are 'rules' on what specific elements of a tweet might be (mentions, hashtags, links), and how they are tokenized. The goal of this exercise is not to create a bullet-proof Twitter tokenizer but to understand tokenization in a different domain.\n", "\n", "Tokenize the following [UCLMR tweet](https://twitter.com/IAugenstein/status/766628888843812864) correctly:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'#emnlp2016 paper on numerical grounding for error correction http://arxiv.org/abs/1608.04147 @geospith @riedelcastro #NLProc'" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tweet = \"#emnlp2016 paper on numerical grounding for error correction http://arxiv.org/abs/1608.04147 @geospith @riedelcastro #NLProc\"\n", "tweet" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['emnlp2016 paper on numerical grounding for error correction http', 'arxiv', 'org', 'abs', '1608', '04147 ', 'geospith ', 'riedelcastro ', 'NLProc']\n" ] } ], "source": [ "token = re.compile('[\\w\\s]+')\n", "tokens = token.findall(tweet)\n", "print(tokens)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Questions:\n", "- what does 'correctly' mean, when it comes to Twitter tokenization?\n", "- what defines correct tokenization of each tweet element?\n", "- how will your tokenizer tokenize elipsis (...)?\n", "- will it correctly tokenize emojis?\n", "- what about composite emojis?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Task 3: Improving sentence segmenter\n", "\n", "Sentence segmentation is not a trivial task either. There might be some cases where your simple sentence segmentation won't work properly.\n", "\n", "First, make sure you understand the following sentence segmentation code used in the lecture:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "import re\n", "\n", "def sentence_segment(match_regex, tokens):\n", " \"\"\"\n", " Splits a sequence of tokens into sentences, splitting wherever the given matching regular expression\n", " matches.\n", "\n", " Parameters\n", " ----------\n", " match_regex the regular expression that defines at which token to split.\n", " tokens the input sequence of string tokens.\n", "\n", " Returns\n", " -------\n", " a list of token lists, where each inner list represents a sentence.\n", "\n", " >>> tokens = ['the','man','eats','.','She', 'sleeps', '.']\n", " >>> sentence_segment(re.compile('\\.'), tokens)\n", " [['the', 'man', 'eats', '.'], ['She', 'sleeps', '.']]\n", " \"\"\"\n", " current = []\n", " sentences = [current]\n", " for tok in tokens:\n", " current.append(tok)\n", " if match_regex.match(tok):\n", " current = []\n", " sentences.append(current)\n", " if not sentences[-1]:\n", " sentences.pop(-1)\n", " return sentences\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, modify the following code so that sentence segmentation returns correctly segmented sentences on the following text:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch', 'is', 'the', 'longest', 'official', 'one', 'word', 'placename', 'in', 'U', '.']\n", "['K', '.']\n", "[\"Isn't\", 'that', 'weird', '?', 'I', 'mean', 'someone', 'took', 'the', 'effort', 'to', 'really', 'make', 'this', 'name', 'as', 'complicated', 'as', 'possible', 'huh', '?', 'Of', 'course', 'U', '.']\n", "['S', '.']\n", "['A', '.']\n", "['also', 'has', 'its', 'own', 'record', 'in', 'the', 'longest', 'name', 'albeit', 'a', 'bit', 'shorter', '.']\n", "['.']\n", "['.']\n", "['This', 'record', 'belongs', 'to', 'the', 'place', 'called', 'Chargoggagoggmanchauggagoggchaubunagungamaugg', '.']\n", "[\"There's\", 'so', 'many', 'wonderful', 'little', 'details', 'one', 'can', 'find', 'out', 'while', 'browsing', 'http', 'www', '.']\n", "['wikipedia', '.']\n", "['org', 'during', 'their', 'Ph', '.']\n", "['D', '.']\n", "['or', 'an', 'M', '.']\n", "['Sc', '.']\n" ] } ], "source": [ "text = \"\"\"\n", "Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch is the longest official one-word placename in U.K. Isn't that weird? I mean, someone took the effort to really make this name as complicated as possible, huh?! Of course, U.S.A. also has its own record in the longest name, albeit a bit shorter... This record belongs to the place called Chargoggagoggmanchauggagoggchaubunagungamaugg. There's so many wonderful little details one can find out while browsing http://www.wikipedia.org during their Ph.D. or an M.Sc.\n", "\"\"\"\n", "\n", "token = re.compile('Mr.|[\\w\\']+|[.?]')\n", "\n", "tokens = token.findall(text)\n", "sentences = sentence_segment(re.compile('\\.'), tokens)\n", "for sentence in sentences:\n", " print(sentence)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Questions:\n", "- what elements of a sentence did you have to take care of here?\n", "- is it useful or possible to enumerate all such possible examples?\n", "- how would you deal with all URLs effectively?\n", "- are there any specific punctuation not covered in the example you might think of?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Solutions\n", "\n", "You can find the solutions to this exercises [here](tokenization_solutions.ipynb)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.1" } }, "nbformat": 4, "nbformat_minor": 1 }