{ "cells": [ { "cell_type": "markdown", "metadata": { "collapsed": true, "deletable": true, "editable": true }, "source": [ "OK. We've spent some time thinking about how to get and work with data, but we haven't really touched on what you can do with data once you have it. The reason for this is that data munging and data analysis are really two separate concepts in their own way. And the kinds of analysis you can perform on data are as vast as the types of data you could find. As a digital humanist, you might be interested in any number of things: georeferencing, statistical measurements, network analysis, or many more. And, then, once you've analyzed things, you'll likely want to visualize your results. For the purposes of showing you what you can do with Python, we will just barely scratch the surface of these areas by showing some very basic methods. We will then visualize our results, which will hopefully show how you can use programming to carry out interpretations. Our goal here will be to use some of the data from the previous lesson on web scraping. Since the previous data was text, we will be working with basic text analysis to analyze author style by word counts." ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "First let's get the data we need by copying over some of the work that we did last time. First we will import the modules that we need. Then we will use Beautiful Soup to scrape down the corpus. Be sure to check out the previous lesson or ask a neighbor if you have any questions about what any of these lines are doing. This will take a few moments." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "from bs4 import BeautifulSoup\n", "from urllib import request\n", "\n", "url = \"https://raw.githubusercontent.com/humanitiesprogramming/scraping-corpus/master/full-text.txt\"\n", "html = request.urlopen(url).read()\n", "soup = BeautifulSoup(html, 'lxml')\n", "raw_text = soup.text\n", "texts = eval(soup.text)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "The eval() function is new here, and it tells Python to take a string it is passed and interpret it as code. Beautiful Soup pulls down the contents of a website, but it assumes that the result of soup.text is going to be a string. If you actually look at the contents of that link, though, you'll see that I dumped the contents of the texts as a list of texts. So we need to interpret that big long text file as code, and Python can help us do that. Calling eval on it looks for special characters like the [], which indicate lists, and runs Python on it as expected. To actually work with this code again. We can prove that this is going on by taking the length of our two different versions of the soup results:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "4398113\n", "10\n" ] } ], "source": [ "print(len(raw_text))\n", "print(len(texts))" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "The first len() function is way larger, as it is taking the length of a giant string. So it returns the total number of characters in the collected text. 
The second statement gives us the expected result \"10\", because it is measuring the length of our list of texts. We have 10 of them. As always, it is important that we remember what data types we have and when we have them." ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Now that we have our data, we can start processing it as text. The package we are using is NLTK, the Natural Language Toolkit, which is something of a Swiss army knife for text analysis. Other packages might give you better baked in functionality, but NLTK is great for learning because it expects that you'll be working your own text functions from scratch. It also has a fantastic [text book](https://nltk.org/book) that I often use for teaching text analysis. The exercises rapidly engage you in real-world text questions. Let's start by importing what we'll need:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "import nltk\n", "from nltk import word_tokenize" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "NLTK is a massive package, with lots of moving pieces. We'll be calling lots of lower-level functions, so expect to do a lot of dot typing. The second line here is a way of shortening that process so that instead of typing nltk.word_tokenize a lot we can just type word_tokenize. Before we get going, we'll have to load in some additional nltk data. Fire this up from within the interpreter (you'll have to import nltk first)" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "showing info https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml\n" ] }, { "data": { "text/plain": [ "True" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "nltk.download()" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Depending on what operating system you're running, one of a couple things will happen. On a Mac, a pop up will appear asking you what you want to download. You'll want to select \"All\" and hit download. It will take a few seconds to download things to your computer. On Linux, your prompt will change, and you'll want to hit 'd all' and then enter. This will download things for you. After all the downloads are done, you'll need to close the download window, restart python and reimport nltk." ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "As we've been learning all along, computers have to work with structured data. By default, humanities texts are pretty darn unstructured. The goal in this lesson is to take unstructured text data and turn it into something that a program could read. We'll take this:\n", "\n", "Moby Dick\n", "by\n", "Herman Melville\n", "1851\n", "ETYMOLOGY.\n", "\n", "and turn it into this:\n", "\n", "['[',\n", " 'Moby',\n", " 'Dick',\n", " 'by',\n", " 'Herman',\n", " 'Melville',\n", " '1851',\n", " ']',\n", " 'ETYMOLOGY',\n", " '.']\n", "\n", "\n", "We're trying to take that text and turn it into a list, or a list of lists. We could think of any number of ways to structure a text, but the way done here is to break that text down into smaller units:\n", "\n", "* A text is made of many sentences. 
(Breaking down texts into sentences is called **segmentation**)\n", "* A Sentence is made of many words. (Breaking a large text into words is called **tokenization**, and those words become called **tokens**.\n", "\n", "Of course, this process quickly becomes subject to interpretation: are you going to count punctuation as tokens? The pre-packaged NLTK texts come with a lot of those decisions already made. We're going to go through the whole process ourselves so that you have a sense of how each part of it works. Here are the steps for the one we'll be using:\n", "\n", "* tokenization\n", "* normalization\n", "* removing stopwords\n", "* analysis\n", "* visualization\n", "\n", "Those are some basics, but, depending on your interests, you might have more steps. You might, for example care about sentence boundaries. Or, you might be interested in tagging the part of speech for each word. The process will change depending on your interests." ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Tokenization\n", "\n", "The first step in our process is to break the text into smaller units that we can work with. In any tokenization process, you have to decide what kinds of things count as tokens - does punctuation count? How do we deal with word boundaries? You could tokenize things yourself, but it's not necessary to reinvent the wheel. We'll use NLTK to tokenize for us. \n", "\n", "This will take a bit of time to process, as we're working with a lot of text. So that you know things aren't broken, I've included a timer that prints out as it moves through each text." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false, "deletable": true, "editable": true, "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "=====\n", "136227\n", "['Project', 'Gutenberg', \"'s\", 'The', 'Return', 'of', 'Sherlock', 'Holmes', ',', 'by', 'Arthur', 'Conan', 'DoyleThis', 'eBook', 'is', 'for', 'the', 'use', 'of', 'anyone']\n", "=====\n", "127419\n", "['Project', 'Gutenberg', \"'s\", 'The', 'Adventures', 'of', 'Sherlock', 'Holmes', ',', 'by', 'Arthur', 'Conan', 'DoyleThis', 'eBook', 'is', 'for', 'the', 'use', 'of', 'anyone']\n", "=====\n", "55005\n", "['The', 'Project', 'Gutenberg', 'EBook', 'of', 'A', 'Study', 'In', 'Scarlet', ',', 'by', 'Arthur', 'Conan', 'DoyleThis', 'eBook', 'is', 'for', 'the', 'use', 'of']\n", "=====\n", "72206\n", "['Project', 'Gutenberg', \"'s\", 'The', 'Hound', 'of', 'the', 'Baskervilles', ',', 'by', 'A.', 'Conan', 'DoyleThis', 'eBook', 'is', 'for', 'the', 'use', 'of', 'anyone']\n", "=====\n", "54520\n", "['The', 'Project', 'Gutenberg', 'EBook', 'of', 'The', 'Sign', 'of', 'the', 'Four', ',', 'by', 'Arthur', 'Conan', 'DoyleThis', 'eBook', 'is', 'for', 'the', 'use']\n", "=====\n", "225845\n", "['Jane', 'Eyre', ',', 'by', 'Charlotte', 'BronteThe', 'Project', 'Gutenberg', 'eBook', ',', 'Jane', 'Eyre', ',', 'by', 'Charlotte', 'Bronte', ',', 'Illustratedby', 'F.', 'H.']\n", "=====\n", "113812\n", "['The', 'Project', 'Gutenberg', 'EBook', 'of', 'Villette', ',', 'by', 'Charlotte', 'BrontëThis', 'eBook', 'is', 'for', 'the', 'use', 'of', 'anyone', 'anywhere', 'at', 'no']\n", "=====\n", "12814\n", "['Project', 'Gutenberg', \"'s\", 'The', 'Search', 'After', 'Happiness', ',', 'by', 'Charlotte', 'BronteThis', 'eBook', 'is', 'for', 'the', 'use', 'of', 'anyone', 'anywhere', 'in']\n", "=====\n", "136933\n", "['The', 'Project', 'Gutenberg', 'EBook', 'of', 'Shirley', ',', 'by', 'Charlotte', 
'BrontëThis', 'eBook', 'is', 'for', 'the', 'use', 'of', 'anyone', 'anywhere', 'at', 'no']\n", "=====\n", "12814\n", "['Project', 'Gutenberg', \"'s\", 'The', 'Search', 'After', 'Happiness', ',', 'by', 'Charlotte', 'BronteThis', 'eBook', 'is', 'for', 'the', 'use', 'of', 'anyone', 'anywhere', 'in']\n" ] } ], "source": [ "tokenized_texts = []\n", "for text in texts:\n", " tokenized_texts.append(word_tokenize(text))\n", "\n", "for tokenized_text in tokenized_texts:\n", " print('=====')\n", " print(len(tokenized_text))\n", " print(tokenized_text[0:20])\n" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Success! We've got a series of texts, all of which are tokenized. But wow those are big numbers. Lots of words! Five texts by Charlotte Bronte and five by Sir Arthur Conan Doyle. Let's get a little more organized by separating the two corpora by author" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "5\n", "5\n" ] } ], "source": [ "doyle = tokenized_texts[:5]\n", "bronte = tokenized_texts[5:]\n", "\n", "print(len(doyle))\n", "print(len(bronte))" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Normalization\n", "\n", "Humanities data is messy. And as we've often noted, computers don't deal with mess well. We'll take a few steps to help our friendly neighborhood computer. We'll do two things here:\n", "\n", "* lowercase all words (for a computer, \"The\" is a different word from \"the\")\n", "* remove the Project Gutenberg frontmatter (you may have noticed that all the texts above started the same way)\n", "\n", "For the second one, Project Gutenberg actually makes things a little tricky. Their frontmatter is not consistent from text to text. We can grab nine of our texts by using the following phrases: \"START OF THIS PROJECT GUTENBERG EBOOK.\" This won't perfectly massage out all the frontmatter, but for the sake of simplicity I will leave it as is. For the sake of practice, we'll be defining a function for doing these things." 
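, "\n", "\n", "If you wanted to hunt for that marker instead of hard-coding positions, a minimal sketch might look like the following. The function name strip_frontmatter and this exact approach are only illustrative (it works on the raw string before tokenizing, and it is not the function we define below):\n", "\n", "```python\n", "def strip_frontmatter(raw):\n", "    \"\"\"Rough cut: drop everything up to and including the start marker.\"\"\"\n", "    marker = \"START OF THIS PROJECT GUTENBERG EBOOK\"\n", "    position = raw.upper().find(marker)\n", "    if position == -1:\n", "        return raw  # marker not found; leave the text untouched\n", "    return raw[position + len(marker):]\n", "```\n", "\n", "Since the wording around that marker still varies a bit from book to book, we'll keep things simple here and slice by rough token positions in the function below."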
] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['us-ascii', ')', '***start', 'of', 'the', 'project', 'gutenberg', 'ebook', 'jane', 'eyre***transcribed', 'from', 'the', '1897', 'service', '&', 'paton', 'edition', 'by', 'david', 'price', ',', 'email', 'ccx074', '@', 'pglaf.orgjane', 'eyrean', 'autobiographybycharlotte', 'brontëillustrated', 'by', 'f.', 'h.', 'townsendlondonservice', '&', 'paton5', 'henrietta', 'street1897the', 'illustrationsin', 'this', 'volume', 'are', 'the', 'copyright', 'ofservice', '&', 'paton', ',', 'londontow', '.', 'm.', 'thackeray', ',', 'esq.', ',', 'this', 'workis', 'respectfully', 'inscribedbythe', 'authorprefacea', 'preface', 'to', 'the', 'first', 'edition', 'of', '“', 'jane', 'eyre', '”', 'being', 'unnecessary', ',', 'i', 'gave', 'none', ':', 'this', 'second', 'edition', 'demands', 'a', 'few', 'words', 'both', 'of', 'acknowledgment', 'and', 'miscellaneous', 'remark.my', 'thanks', 'are', 'due', 'in', 'three', 'quarters.to', 'the', 'public', ',', 'for', 'the', 'indulgent', 'ear', 'it', 'has', 'inclined', 'to', 'a', 'plain', 'tale', 'with', 'few', 'pretensions.to', 'the', 'press', ',', 'for', 'the', 'fair', 'field', 'its', 'honest', 'suffrage', 'has', 'opened', 'to', 'an', 'obscure', 'aspirant.to', 'my', 'publishers', ',', 'for', 'the', 'aid', 'their', 'tact', ',', 'their', 'energy', ',', 'their', 'practical', 'sense', 'and', 'frank', 'liberality', 'have', 'afforded', 'an', 'unknown', 'and', 'unrecommended', 'author.the', 'press', 'and', 'the', 'public', 'are', 'but', 'vague', 'personifications', 'for', 'me', ',', 'and', 'i', 'must', 'thank', 'them', 'in', 'vague', 'terms', ';', 'but', 'my', 'publishers', 'are', 'definite', ':', 'so', 'are', 'certain', 'generous', 'critics', 'who', 'have', 'encouraged', 'me', 'as', 'only', 'large-hearted', 'and', 'high-minded', 'men', 'know', 'how', 'to', 'encourage', 'a', 'struggling', 'stranger']\n", "['and', 'faithful.st', '.', 'john', 'is', 'unmarried', ':', 'he', 'never', 'will', 'marry', 'now', '.', 'himself', 'has', 'hitherto', 'sufficed', 'to', 'the', 'toil', ',', 'and', 'the', 'toil', 'draws', 'near', 'its', 'close', ':', 'his', 'glorious', 'sun', 'hastens', 'to', 'its', 'setting', '.', 'the', 'last', 'letter', 'i', 'received', 'from', 'him', 'drew', 'from', 'my', 'eyes', 'human', 'tears', ',', 'and', 'yet', 'filled', 'my', 'heart', 'with', 'divine', 'joy', ':', 'he', 'anticipated', 'his', 'sure', 'reward', ',', 'his', 'incorruptible', 'crown', '.', 'i', 'know', 'that', 'a', 'stranger', '’', 's', 'hand', 'will', 'write', 'to', 'me', 'next', ',', 'to', 'say', 'that', 'the', 'good', 'and', 'faithful', 'servant', 'has', 'been', 'called', 'at', 'length', 'into', 'the', 'joy', 'of', 'his', 'lord', '.', 'and', 'why', 'weep', 'for', 'this', '?', 'no', 'fear', 'of', 'death', 'will', 'darken', 'st.', 'john', '’', 's', 'last', 'hour', ':', 'his', 'mind', 'will', 'be', 'unclouded', ',', 'his', 'heart', 'will', 'be', 'undaunted', ',', 'his', 'hope', 'will', 'be', 'sure', ',', 'his', 'faith', 'steadfast', '.', 'his', 'own', 'words', 'are', 'a', 'pledge', 'of', 'this—', '“', 'my', 'master', ',', '”', 'he', 'says', ',', '“', 'has', 'forewarned', 'me', '.', 'daily', 'he', 'announces', 'more', 'distinctly', ',', '—', '‘', 'surely', 'i', 'come', 'quickly', '!', '’', 'and', 'hourly', 'i', 'more', 'eagerly', 'respond', ',', '—', '‘', 'amen', ';', 'even', 'so', 'come', ',', 'lord', 'jesus', '!', '’', 
'”']\n" ] } ], "source": [ "def normalize(tokens):\n", " \"\"\"Takes a list of tokens and returns a list of tokens \n", " that has been normalized by lowercasing all tokens and \n", " removing Project Gutenberg frontmatter.\"\"\"\n", " \n", "# lowercase all words\n", " normalized = [token.lower() for token in tokens]\n", " \n", "# very rough end of front matter.\n", " end_of_front_matter = 90\n", "# very rough beginning of end matter.\n", " start_of_end_matter = -2973\n", "# get only the text between the end matter and front matter\n", " normalized = normalized[end_of_front_matter:start_of_end_matter]\n", "\n", " return normalized\n", "\n", "print(normalize(bronte[0])[:200])\n", "print(normalize(bronte[0])[-200:])" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Above I've printed about the first 200 and the last 200 words of Jane Eyre to see how we're doing. Pretty rough! That's because we're mostly just guessing where the beginning and the ending of the actual text is. The problem here is that PG changes the structure of its paratext for each text, so we would need something pretty sophisticated to work through it cleanly. If you wanted a more refined approach, you could [this package](https://pypi.python.org/pypi/Gutenberg), though it has enough installation requirements that we didn't want to deal with it in this course. Essentially, it uses a lot of complicated formulae to determine how to strip off the gutenberg material. They use a syntax called **regular expressions** that is (thankfully) out of the scope of this course. For now, we'll just accept our rough cut with the understanding that we would want to clean things up more were we working on our own. \n", "\n", "Let's normalize everything with these caveats in mind. Below, we essentially say,\n", "\n", "* Go through each in the list of texts\n", "* For each of those texts, normalize the text in them using the function we defined above.\n", "* Take the results of that normalization process and make a new list out of them.\n", "* The result will be a list of normalized tokens stored in a variable of the same name as the original list." ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['of', 'this', 'project', 'gutenberg', 'ebook', 'the', 'return', 'of', 'sherlock', 'holmes', '***produced', 'by', 'an', 'anonymous', 'volunteer', 'and', 'david', 'widgerthe', 'return', 'of', 'sherlock', 'holmes', ',', 'a', 'collection', 'of', 'holmes', 'adventuresby', 'sir', 'arthur']\n" ] } ], "source": [ "doyle = [normalize(text) for text in doyle]\n", "bronte = [normalize(text) for text in bronte]\n", "\n", "print(doyle[0][:30])" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Removing Stopwords\n", "\n", "The last step in this basic text analysis pipeline is to remove those words that we don't care about. The most common words in any text are articles, pronouns, and punctuation, words that might not carry a lot of information in them about the text themselves. While there are sometimes good reasons for keeping this list of **stopwords** in the text, we usually take them out to get a better read of things we actually care about in a text. NLTK actually comes with a big packet of stopwords. 
Let's import it and take a look:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', \"you're\", \"you've\", \"you'll\", \"you'd\", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', 'she', \"she's\", 'her', 'hers', 'herself', 'it', \"it's\", 'its', 'itself']\n" ] } ], "source": [ "from nltk.corpus import stopwords\n", "\n", "print(stopwords.words('english')[0:30])" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "We'll loop over the cleaned texts and get rid of those words that exist in the stopwords list. To do this, we'll compare both lists.\n", "\n", "Also grab the count for the text pre-stopwording to make clear how many words are lost when you do this:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "def remove_stopwords(tokens):\n", " return [token for token in tokens if token not in stopwords.words('english')]" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "We could loop over each text again, as we have been doing (will take bit of time):" ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "133164\n", "start cleaning\n", "doyle done\n", "bronte done\n", "71419\n" ] } ], "source": [ "print(len(doyle[0]))\n", "print('start cleaning')\n", "doyle = [remove_stopwords(text) for text in doyle]\n", "print('doyle done')\n", "bronte = [remove_stopwords(text) for text in bronte]\n", "print('bronte done')\n", "\n", "print(len(doyle[0]))" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Before we did all this, this same text had 133164 tokens in it. Now, that text only has 71419 tokens in it. Look how many have been removed because they were common words! Our new text is almost half the size of the original one. We have a much smaller, potentially more meaningful set to work with. We're ready to do some basic analysis. " ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Analysis\n", "\n", "While a larger lesson in natural language processing is outside the scope of this workshop, we will cover a few quick things you can do with it here. One thing we can do quite easily is count up the frequencies with which particular words occur in a text. NLTK has a particular way of doing this using an object called a Frequency Distribution, which is exactly what it sounds like. 
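A frequency distribution behaves like a dictionary of counts: for instance, nltk.FreqDist(['a', 'b', 'a'])['a'] is 2, because 'a' appears twice in that list. 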
We make a frequency distribution of a text like so:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false, "deletable": true, "editable": true, "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "[(',', 8293),\n", " ('the', 6045),\n", " ('.', 4691),\n", " ('and', 2830),\n", " ('of', 2802),\n", " ('a', 2612),\n", " ('to', 2586),\n", " ('i', 2511),\n", " ('that', 2052),\n", " ('in', 1818),\n", " ('was', 1803),\n", " ('he', 1665),\n", " ('it', 1612),\n", " ('you', 1457),\n", " ('his', 1344),\n", " ('is', 1104),\n", " ('had', 1013),\n", " ('have', 931),\n", " ('with', 883),\n", " ('my', 827)]" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "example = nltk.FreqDist(doyle[0])\n", "print(example.most_common(20))" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Above, we take the first Doyle text, make a Frequency Distribution out of it, and store it in a variable called example. We can then do frequency distribution things with it, like find the most common words! Let's do this for all our texts:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false, "deletable": true, "editable": true, "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "=====\n", "[(',', 8293), ('the', 6045), ('.', 4691), ('and', 2830), ('of', 2802), ('a', 2612), ('to', 2586), ('i', 2511), ('that', 2052), ('in', 1818)]\n", "=====\n", "=====\n", "[(',', 7647), ('the', 5482), ('.', 3720), ('and', 2868), ('to', 2656), ('of', 2647), ('a', 2585), ('i', 2549), ('in', 1730), ('that', 1658)]\n", "=====\n", "=====\n", "[(',', 2992), ('the', 2502), ('.', 1773), ('and', 1332), ('of', 1221), ('to', 1088), ('a', 985), ('i', 761), ('he', 756), ('in', 735)]\n", "=====\n", "=====\n", "[(',', 3435), ('the', 3227), ('.', 2160), ('of', 1599), ('and', 1555), ('to', 1395), ('a', 1268), ('i', 1238), ('that', 1098), ('in', 895)]\n", "=====\n", "=====\n", "[(',', 3268), ('.', 2509), ('the', 2329), ('``', 1233), ('i', 1215), ('and', 1175), ('of', 1136), ('a', 1077), ('to', 1075), ('it', 678)]\n", "=====\n", "=====\n", "[(',', 14536), ('the', 7583), ('and', 6333), ('i', 6264), ('to', 5065), ('.', 4405), ('of', 4323), ('a', 4275), (';', 3475), (':', 2765)]\n", "=====\n", "=====\n", "[(',', 8261), ('the', 3893), ('.', 3404), ('and', 2994), ('i', 2741), ('to', 2352), ('of', 2326), ('a', 2222), ('``', 2078), ('in', 1541)]\n", "=====\n", "=====\n", "[('the', 680), ('of', 401), ('and', 368), ('a', 211), (',', 205), ('he', 195), ('to', 183), ('in', 172), ('.', 169), ('was', 134)]\n", "=====\n", "=====\n", "[(',', 9436), ('.', 5203), ('the', 4647), ('and', 3517), ('``', 3156), ('to', 3127), ('of', 2875), ('a', 2557), ('i', 1846), ('in', 1729)]\n", "=====\n", "=====\n", "[('the', 680), ('of', 401), ('and', 368), ('a', 211), (',', 205), ('he', 195), ('to', 183), ('in', 172), ('.', 169), ('was', 134)]\n", "=====\n" ] } ], "source": [ "doyle_freq_dist = [nltk.FreqDist(text) for text in doyle]\n", "bronte_freq_dist = [nltk.FreqDist(text) for text in bronte]\n", "\n", "def print_top_words(freq_dist_text):\n", " \"\"\"Takes a frequency distribution of a text and prints out the top 10 words in it.\"\"\"\n", " print('=====')\n", " print(freq_dist_text.most_common(10))\n", " print('=====')\n", " \n", "for text in doyle_freq_dist:\n", " print_top_words(text)\n", "for text in bronte_freq_dist:\n", " print_top_words(text)" ] }, { "cell_type": "markdown", "metadata": { 
"deletable": true, "editable": true }, "source": [ "We can also query particular words:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false, "deletable": true, "editable": true, "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "573\n" ] }, { "data": { "text/plain": [ "649" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "print(doyle_freq_dist[0]['holmes'])\n", "print(bronte_freq_dist[0]['would'])" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Let's make a quick function that would, given a particular word, return the frequencies of that word in both corpora." ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[41, 17, 6, 14, 4], [5, 2, 0, 1, 0]]\n", "[[4, 0, 1, 1, 0], [32, 25, 1, 22, 1]]\n", "[[343, 346, 85, 182, 127], [929, 405, 8, 456, 8]]\n", "[[823, 737, 208, 518, 215], [1497, 509, 14, 779, 14]]\n" ] } ], "source": [ "def get_counts_in_corpora(token, corpus_one, corpus_two):\n", " \"\"\"Take two corpora, represented as lists of frequency distributions, and token query.\n", " Return the frequency of that token in all the texts in the corpus. The result\n", " Should be a list of two lists, one for each text.\"\"\"\n", " corpus_one_counts = [text_freq_dist[token] for text_freq_dist in corpus_one]\n", " corpus_two_counts = [text_freq_dist[token] for text_freq_dist in corpus_two]\n", " return [corpus_one_counts, corpus_two_counts]\n", "\n", "print(get_counts_in_corpora('evidence', doyle_freq_dist, bronte_freq_dist))\n", "print(get_counts_in_corpora('reader', doyle_freq_dist, bronte_freq_dist))\n", "print(get_counts_in_corpora('!', doyle_freq_dist, bronte_freq_dist))\n", "print(get_counts_in_corpora('?', doyle_freq_dist, bronte_freq_dist))\n" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "We now have an easy way to get the total counts for any word, and we could get one corpus or the other by slicing the one list out:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[343, 346, 85, 182, 127]\n", "[929, 405, 8, 456, 8]\n" ] } ], "source": [ "results = get_counts_in_corpora('!', doyle_freq_dist, bronte_freq_dist)\n", "corpus_one_results = results[0]\n", "corpus_two_results = results[1]\n", "\n", "print(corpus_one_results)\n", "print(corpus_two_results)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "We could go far deeper in text analysis, and there are many ways to adapt statistical methods and machine learning to your text analysis pipeline to develop sophisticated ways of reading texts from a distance. Keep going if you're interested! We're happy to talk more. " ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Visualization" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Just as there are all sorts of ways to dig into text analysis, visualization methods are a vast topic in their own way. We just want to give you a couple of methods to whet your appetite. 
Here are a few.\n", "\n", "A dispersion plot gives you a rough indication of the word usage in a particular text, and it has the added benefit of showing where particular usages cluster. You can pass a list of terms to it." ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAZUAAAEWCAYAAACufwpNAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzt3XmcnFWd7/HPlzQSJAwIZJQ1jQgioAZoFBgwQVYREWbQ\nyJULQTDKdZiLDiq8YExzB++wuTN3JDgYLwiCDIzcMBoQJqIgSweBsC8CohgJMgGjqCy/+8dzHvpJ\npaq6uvt0LeT7fr3qVVXnOcvvnFp+eZZUKyIwMzPLYY1OB2BmZq8eTipmZpaNk4qZmWXjpGJmZtk4\nqZiZWTZOKmZmlo2Tir3qSPq+pKPG2cdsST8ZZx/3SJo5nj5yyrEuYxhzUNJF7RzTOstJxTpK0mOS\n9snZZ0S8JyK+lbPPKkn9kkLSinT7jaQFkvatiWP7iFg0UXGM1kSti6T5kv6c1uIZSddK2nYM/WR/\nL1j7OamYjd36ETEFeDtwLXClpNmdCkZSX6fGBs5Ka7EZ8BQwv4OxWAc5qVjXknSQpDskLZd0k6S3\npfKt0r+Id0rPN5G0rDzUJGmRpGMr/XxU0n2Sfifp3kq7kyQ9Uik/dCxxRsTSiPgKMAicKWmN1P8r\n//KW9A5JQ5KeS3s2X0zl5V7PHElPSvq1pBMrsa9RifO3ki6TtEFN22Mk/QK4XtJkSRelussl3Sbp\n9bXrkvo9VdLjkp6S9H8lrVfT71GSfiHpaUmntLgWfwAuBnaot13Swemw4PIUz1tS+YXAFsD/S3s8\nnxnt62DdwUnFupKkHYELgI8BGwLnAVdJWisiHgE+C1wk6bXAN4Fv1TvUJOkDFF/2RwJ/ARwM/DZt\nfgTYE1gPOC31t/E4wr4C+EvgzXW2fQX4SkT8BbAVcFnN9r2ArYH9gM9WDgMdDxwCzAA2Af4L+Oea\ntjOAtwD7A0el+WxOsW4fB56vE8/sdNsLeCMwBTi3ps4eaS57A58rE0AzkqYAHwZ+VmfbNsAlwAnA\nVOA/KJLIayLivwO/AN4XEVMi4qyRxrLu5KRi3WoOcF5E3BIRL6VzAX8CdgWIiPOBh4FbgI2BRv+S\nPpbi0MxtUXg4Ih5PfXw3Ip6MiJcj4lLgIeAd44j5yXS/QZ1tLwBvkrRRRKyIiJtrtp8WEb+PiCUU\nSfLwVP5x4JSI+GVE/IkiQR5Wc6hrMLV9Po2zIfCmtG6LI+K5OvF8GPhiRPw8IlYAJwMfqun3tIh4\nPiLuBO6kOMzXyImSllO8JlMoElatWcDVEXFtRLwAnAOsDezepF/rMU4q1q2mAX+fDpMsT19Ym1P8\na710PsVhlq+lL9x6NqfYI1mFpCMrh9eWp742GkfMm6b7Z+psOwbYBrg/HZI6qGb7E5XHjzM8z2kU\n52rKGO8DXgJe36DthcBC4DvpcNpZktasE88maZzqmH01/S6tPP4DRbJo5JyIWD8i3hARB6e9yaZj\nRsTLKfZN69S1HuWkYt3qCeDz6YuqvL02Ii6BVw6zfBn4V2CwPM/QoJ+tagslTaNISn8LbBgR6wN3\nAxpHzIdSnKR+oHZDRDwUEYdTHB47E7hc0jqVKptXHm/B8F7PE8B7atZhckT8qtp9ZZwXIuK0iNiO\nYg/gIIpDf7WepEhY1TFfBH7T4lzHYqUxJYli3uVc/JPprwJOKtYN1kwnmMtbH8UX/sclvVOFdSS9\nV9K6qc1XgKGIOBa4Gvh6g76/QXFoZufUz5tSQlmH4ktsGYCko2lwcnkkkl4v6W+BucDJ6V/gtXWO\nkDQ1bVueiqv1/kHSayVtDxwNXJrKvw58PsWMpKmS3t8klr0kvVXSJOA5isNhq8RDcW7jk5K2TAn6\nfwOXRsSLo5n7KF0GvFfS3mnv6e8pDmnelLb/huL8jvUwJxXrBv9BcTK5vA1GxBDwUYqTx/9Fcax+\nNkD6Uj0AOC61/xSwk6QP13YcEd8FPk9xRdLvgH8HNoiIe4EvAD+l+DJ7K3DjKONeLun3wBLgQOAD\nEXFBg7oHAPdIWkGRED+UzoGUfpTmeB3FoaRrUvlXgKuAayT9DrgZeGeTmN4AXE6RUO5L/V5Yp94F\nqfwG4FHgjxQXBUyYiHgAOAL4GvA08D6KE/N/TlX+CTg1Heo7sUE31uXkP9Jl1jmS+im+1Nec4L0E\ns7bwnoqZmWXjpGJmZtn48JeZmWXjPRUzM8umkz9A1xEbbbRR9Pf3dzoMM7OesXjx4qcjYmordVe7\npNLf38/Q0FCnwzAz6xmSHh+5VsGHv8zMLBsnFTMzy8ZJxczMsnFSMTOzbJxUzMwsGycVMzPLxknF\nzMyycVIxM7NsnFTMzCwbJxUzM8vGScXMzLJxUjEzs2ycVMzMLBsnFTMzy8ZJxczMsnFSMTOzbJxU\nzMwsGycVMzPLxknFzMyycVIxM7NsnFTMzCwbJxUzM8vGScXMzLJxUjEzs2ycVMzMLBsnFTMzyyZ7\nUpG4qUH5fInDco/XToODMHNm/n7r9Tk4OHyrt61RH2WM1Tr16o/UH0B//8rbR+pnIo1m7ImMs9Fr\nMlIcnVy7esYaTyvtxlKnlXWtrdNtawpji2ms85iI76IcFBHtGUjMBxZEcHlbBmxgYGAghoaGxtRW\nKu5zL5m0ap/lWPXGa1Q/on67evVH6q+2z+p9J4xm7ImMczTvgWocnVy7esYaTyvtxlKnlXWtrdNt\nawpji2kiX4tcJC2OiIFW6ra0pyJxhMStEndInCfxCYmzK9tnS5ybHq9I95I4V+IBiR8Cf1mpv7PE\njyQWSyyU2DiVL5I4M431oMSeqXySxDkSd0vcJXF8s37MzKwzRkwqEm8BZgF/FcF04CVgBXBopdos\n4Ds1TQ8F3gxsBxwJ7J76WxP4GnBYBDsDFwCfr7Tri+AdwAnA3FQ2B+gHpkfwNuDbLfRTmYPmSBqS\nNLRs2bKRpmxmZmPU10KdvYGdgdvS7ufawFPAzyV2BR4CtgVurGn3LuCSCF4CnpS4PpW/GdgBuDb1\nNwn4daXdFel+MUUiAdgH+HoELwJE8IzEDiP084qImAfMg+LwVwtzNjOzMWglqQj4VgQnr1QoPgJ8\nELgfuDKCVr+sBdwTwW4Ntv8p3b80Qnwj9WNmZm3WSlK5Dvie
xJcieEpiA2Bd4ErgFGBH4LN12t0A\nfEziWxTnU/YCLgYeAKZK7BbBT9NhrG0iuKdJDNemvv4zghdTDGPpZ1zmzoVFi/L3O2NG/bGaxdGo\njzLG6pUhzfpqtn3atJW3j9TPRBrN2BMZ51jj6OTa1TPWeFppN5Y6E9Wm3cYS01jnUe97oxu0dPWX\nxCzgZIpzMC8An4jgZokFwHYRvLFSd0UEUyREcc5jX+AXqd0FEVwuMR34KrAeRWL7cgTnSywCToxg\nSGIjYCiCfok+4CzggNTP+RGc26ifZnMZz9VfZmaro9Fc/dW2S4q7hZOKmdnoZL+k2MzMrBVOKmZm\nlo2TipmZZeOkYmZm2TipmJlZNk4qZmaWjZOKmZll46RiZmbZOKmYmVk2TipmZpaNk4qZmWXjpGJm\nZtk4qZiZWTZOKmZmlo2TipmZZeOkYmZm2TipmJlZNk4qZmaWjZOKmZll46RiZmbZOKmYmVk2Tipm\nZpZNVyYViUGJEzsdRzsNDg4/njmzU1H0lnrrVK5jdT1zqu13rOO0K74cbVrpc6LXPcdnYqJia2R1\n/RwrIjodwyokBoEVEZyTu++BgYEYGhrK3e24SVC+FNXH1li9dSrLJmoNa/sd6zjtii9Hm1b6bPe6\nd6qPbh5vIklaHBEDrdTtij0ViSMl7pK4U+LCmm2LJAbS440kHkuPJ0mcLXFbavuxDoRuZmYVfZ0O\nQGJ74FRg9wieltgA+LsWmh4DPBvBLhJrATdKXBPBo6uOoTnAHIAtttgiY/RmZlbVDXsq7wa+G8HT\nABE802K7/YAjJe4AbgE2BLauVzEi5kXEQEQMTJ06NUfMZmZWR8f3VFrwIsPJb3KlXMDxESxsf0hm\nZlZPN+ypXA98QGJDgHT4q+oxYOf0+LBK+ULgOIk1U7ttJNaZ4FgnzNy5w49nzOhcHL2k3jqV61hd\nz5xq+x3rOO2KL0ebVvqc6HXP8ZmYqNgaWV0/x11x9ZfEUcCngZeAn1EkkhURnCOxLXBZ2nY1cEQE\n/RJrAKcD76PYa1kGHBLBs83G6tarv8zMutVorv7qiqTSTk4qZmaj03OXFJuZ2auDk4qZmWXjpGJm\nZtk4qZiZWTZOKmZmlo2TipmZZeOkYmZm2TipmJlZNk4qZmaWjZOKmZll46RiZmbZOKmYmVk2Tipm\nZpaNk4qZmWXjpGJmZtk4qZiZWTZOKmZmlo2TipmZZeOkYmZm2TipmJlZNk4qZmaWjZOKmZll05ak\nIjEg8dV2jGVmZp3TlqQSwVAEf9eOsczMmhkc7HQEr27jSioS60hcLXGnxN0SsyR2kbgpld0qsa7E\nTIkFlTYXpG0/k3h/Kp8tcYXEDyQekjirMs4BErenPq9r1o+ZWTOnndbpCF7d+sbZ/gDgyQjeCyCx\nHvAzYFYEt0n8BfB8TZtTgOsj+IjE+sCtEj9M26YDOwJ/Ah6Q+BrwR+B84F0RPCqxQbN+Ivj9OOdk\nZmZjNN6ksgT4gsSZwAJgOfDrCG4DiOA5AGmlNvsBB0ucmJ5PBrZIj6+L4NnU5l5gGvA64IYIHk19\nPjNCP/fVBilpDjAHYIsttqjdbGZmmYwrqUTwoMROwIHA6cD1LTQT8DcRPLBSoXgnxR5K6aUR4qvb\nT/04Yx4wD2BgYCBaiNHMzMZgvOdUNgH+EMFFwNnAO4GNJXZJ29eVVkkMC4HjJZTq7DjCMDcD75LY\nMtUvD3+Nth8zM5tg4z389VbgbImXgReA4yj2IL4msTbF+ZR9atr8I/Bl4C6JNYBHgYMaDRDBMok5\nwBWp/lPAvqPtx8wMYO7cTkfw6qaI1eto0MDAQAwNDXU6DDOzniFpcUQMtFLX/6PezMyycVIxM7Ns\nnFTMzCwbJxUzM8vGScXMzLJxUjEzs2ycVMzMLBsnFTMzy8ZJxczMsnFSMTOzbJxUzMwsGycVMzPL\nxknFzMyycVIxM7NsnFTMzCwbJxUzM8vGScXMzLJxUjEzs2ycVMzMLBsnFTMzy8ZJxczMsnFSMTOz\nbLoiqUis6HQMZmY2fl2RVKz9BgdXvm/HWKPV39+8/UTF3o416Xa9tAa9FGupXZ+7TqyNIqL9o9YG\nIVZEMEVCwFnAe4AATo/gUonvABdGcHWqPx9YAFwJnAHMBNYC/jmC85qNNTAwEENDQxM2l14hQcTw\nfTvGGmu7Ru0nKvZ2rEm366U16KVYS+363EGecSQtjoiBVup2257KXwPTgbcD+wBnS2wMXAp8EEDi\nNcDewNXAMcCzEewC7AJ8VGLLTgRuZmbdl1T2AC6J4KUIfgP8iCJZfB/YS2Itir2YGyJ4HtgPOFLi\nDuAWYENg69pOJc2RNCRpaNmyZe2ai5nZaqev0wG0IoI/SiwC9gdmAd9JmwQcH8HC5u1jHjAPisNf\nExiqmdlqrdv2VH4MzJKYJDEVeBdwa9p2KXA0sCfwg1S2EDhOYk0AiW0k1mlzzGZmlnTbnsqVwG7A\nnRQn6j8TwdK07RrgQuB7Efw5lX0D6AduTyf5lwGHtDXiHjV37sr37RhrtKZNa95+omJvx5p0u15a\ng16KtdTNn7vx6oqrv9rJV3+ZmY1OL1/9ZWZmPcxJxczMsnFSMTOzbJxUzMwsGycVMzPLxknFzMyy\ncVIxM7NsnFTMzCwbJxUzM8vGScXMzLJxUjEzs2ycVMzMLBsnFTMzy8ZJxczMsnFSMTOzbJxUzMws\nGycVMzPLxknFzMyycVIxM7NsnFTMzCwbJxUzM8vGScXMzLJxUjEzs2ycVEZhcLD+/Wjbz5zZfHtt\nWW39kdrPnNm4r2Z1qs8bta/XV/Vxs34aPS7n0yyeZnK0G2m9qnVamVOpOrfBQejvr1+33N5oLRrV\nazReM9U+ms1vpBiavQdajaHeOPXKy3mVMTcba6TP2Ujtmm1rtiaNtjf63NSqxtvoNa7XvlrW6P3V\nToqIzo3eAQMDAzE0NDSmthJErHo/1vaNtteWwcrlI7Wv16a2r5H6bRRLdXujvhr1M9Lj2jFbXd8c\n7VpZ+3pxtrpmZV9lf/Vibra9Ub1mc2qk2kez+bUSQzXeRjE1i6HZ697oPVIbd72+x/P5HEufzbY3\n+tw0G7+Vz3C9srHOfSSSFkfEQCt1u25PRWIdiasl7pS4W2KWxOckbkvP50lIYiuJ2yvttq4+NzOz\n9uu6pAIcADwZwdsj2AH4AXBuBLuk52sDB0XwCPCsxPTU7mjgm/U6lDRH0pCkoWXLlrVjDmZmq6Vu\nTCpLgH0lzpTYM4Jngb0kbpFYArwb2D7V/QZwtMQkYBZwcb0OI2JeRAxExMDUqVPbMQczs9VSX6cD\nqBXBgxI7AQcCp0tcB3wCGIjgCYlBYHKq/m/AXOB6YHEEv+1EzGZmVui6pCKxCfBMBBdJLAeOTZue\nlpgCHAZcDhDBHyUWAv8CHDPRsc2dW/9+tO1nzGi+vbZs0aKVy0ZqP2NG/StHqv3Xq1Pd3iiWVuq2\nUq82lpH6aSZHu5H
mW33e6jrAqnObP7953+Vr3WgOtfUajddMtY/qe6DV93cra9VqDPXa1hu/nFej\n93a99q2sRaOYGm0baU1G87mpVY233ue+lf6nTRt5nInWdVd/SewPnA28DLwAHAccAhwOLAUeBB6P\nYDDV35UiyUyL4KWR+h/P1V9mZquj0Vz91XV7KhEsBBbWFA8BpzZosgfwzVYSipmZTayuSyqjIXEl\nsBXFyXszM+uwnk4qERza6RjMzGxYN15SbGZmPcpJxczMsnFSMTOzbJxUzMwsGycVMzPLxknFzMyy\ncVIxM7NsnFTMzCwbJxUzM8vGScXMzLJxUjEzs2ycVMzMLBsnFTMzy8ZJxczMsnFSMTOzbJxUzMws\nGycVMzPLxknFzMyycVIxM7NsnFTMzCybnksqEv9LYp865TMlFrQrjpkzi9v666+6bXBwuE71+eAg\nTJ5ctCnbl+Xl4/7+lR/39Q3fl+3KOtX2g4NFedl/X1+xvdp3X99wvdqxy7mU2/v7h+dWbiu3l/Op\nti/blM+r5dUxateprFveqvMr5zN58srzqK5ntZ+yXhlz9dbfP1ze11c8rvZbG3vt3KpzrqrOuzp2\n7VqV9apxlPOpXf/qWtQbtzbech4jvQbVsUvV/qvxVeOujlu+h6sxlrFXX8tqLOX7rlq/usb13s/V\nbdXt1fd5ddyyfu3alG3L5+V7uuyn+prVvk/L16M2rmr92te4Gnu917H281Ptp3xejbH2vV77mS/V\nW9967+V2UES0Z6QJJjETODGCg5rVGxgYiKGhoRzjvaJ2CaWirN59rWp5o8fNjKdeK22bxQ2tjV2v\n3VjaVuNptq6jXetm4zSKvdTK2LVlo32d672/xqq6bmVfta9lvTUe77jNYmkU32jGG039Zq/HSHUa\nrUUrr2ltndp+RvouqLc2rbw29eqPlqTFETHQSt2u2FORWEfiaok7Je6WmCXxOYnb0vN5Ekp150sc\nlh4fIHG/xO3AX3d0EmZm1h1JBTgAeDKCt0ewA/AD4NwIdknP14aV90AkJgPnA+8Ddgbe0KhzSXMk\nDUkaWrZs2YRNwsxsddctSWUJsK/EmRJ7RvAssJfELRJLgHcD29e02RZ4NIKHIgjgokadR8S8iBiI\niIGpU6dO2CTMzFZ3fZ0OACCCByV2Ag4ETpe4DvgEMBDBExKDwOROxmhmZiPriqQisQnwTAQXSSwH\njk2bnpaYAhwGXF7T7H6gX2KrCB4BDm9fxDBjRnF/xx2rbps7d+U65fO5c+GMM4orV6ZPX7n+okXF\n42nThq+4mTYNfvlL2Gyz4n7KlKLdY4+tfFVO2f/8+bB0adH/ihWwxx7FFR9l35MmwamnFvXK9tWx\n77gDTjih2A6wfPnwPMp5nnDCqnNdtKiICWD27OJ5Oe5jjw2X1Vunsm6pbNPfX9wvXVqUn3TScB/V\n9az2c/PNRb0y5qoyvhkz4Cc/KR6vtdZwv9U4y3iqc5s9e9X4oXiNynmXyvWqrlVZr1qn9nFtm9q1\nqdavxlvOu3zdGr0G5fbq2OUa1sZYllfLZswo+jzjjOF1rr53pk8fjrcay+mnF++7av3qGlffj9Wx\ny22w8pVr5ft8112Hxy3r176W5XupXIvyfVz2Xb5u5dyq79P11itej9r3XbV+1bRpw/1V69a+p6pq\n+6l+1qp1G32n1G6vrm+993I7dMXVXxL7A2cDLwMvAMcBh1AkiqXAg8DjEQxKzAcWRHC5xAHAl4E/\nAD8GtmrX1V9mZquL0Vz91RVJpZ2cVMzMRqfnLik2M7NXBycVMzPLxknFzMyycVIxM7NsnFTMzCwb\nJxUzM8vGScXMzLJxUjEzs2ycVMzMLBsnFTMzy8ZJxczMsnFSMTOzbJxUzMwsGycVMzPLxknFzMyy\ncVIxM7NsnFTMzCwbJxUzM8vGScXMzLJxUjEzs2ycVMzMLBsnFTMzy8ZJxczMsnFSMTOzbJxUzMws\nG0VEp2NoK0nLgMfH0HQj4OnM4bSb59B5vR4/eA7dop1zmBYRU1upuNollbGSNBQRA52OYzw8h87r\n9fjBc+gW3ToHH/4yM7NsnFTMzCwbJ5XWzet0ABl4Dp3X6/GD59AtunIOPqdiZmbZeE/FzMyycVIx\nM7NsnFRaIOkASQ9IeljSSR2OZXNJ/ynpXkn3SPqfqXwDSddKeijdv67S5uQU+wOS9q+U7yxpSdr2\nVUlK5WtJujSV3yKpfwLmMUnSzyQt6NH415d0uaT7Jd0nabcenMMn03vobkmXSJrc7XOQdIGkpyTd\nXSlrS8ySjkpjPCTpqMxzODu9l+6SdKWk9bt5Dk1FhG9NbsAk4BHgjcBrgDuB7ToYz8bATunxusCD\nwHbAWcBJqfwk4Mz0eLsU81rAlmkuk9K2W4FdAQHfB96Tyv8H8PX0+EPApRMwj08BFwML0vNei/9b\nwLHp8WuA9XtpDsCmwKPA2un5ZcDsbp8D8C5gJ+DuStmExwxsAPw83b8uPX5dxjnsB/Slx2d2+xya\nzi93h6+2G7AbsLDy/GTg5E7HVYnne8C+wAPAxqlsY+CBevECC9OcNgbur5QfDpxXrZMe91H8r11l\njHkz4Drg3QwnlV6Kfz2KL2TVlPfSHDYFnkhfMH3AgvTF1vVzAPpZ+Qt5wmOu1knbzgMOzzWHmm2H\nAt/u9jk0uvnw18jKD1/pl6ms49Ju7Y7ALcDrI+LXadNS4PXpcaP4N02Pa8tXahMRLwLPAhtmDP3L\nwGeAlytlvRT/lsAy4JvpEN43JK3TS3OIiF8B5wC/AH4NPBsR1/TSHCraEXM7vwc+QrHnsVI8NeN2\n7RycVHqUpCnAvwEnRMRz1W1R/DOkK68Vl3QQ8FRELG5Up5vjT/ooDl/8S0TsCPye4rDLK7p9Dum8\nw/spEuQmwDqSjqjW6fY51NOLMVdJOgV4Efh2p2MZKyeVkf0K2LzyfLNU1jGS1qRIKN+OiCtS8W8k\nbZy2bww8lcobxf+r9Li2fKU2kvooDvf8NlP4fwUcLOkx4DvAuyVd1EPxQ/EvvF9GxC3p+eUUSaaX\n5rAP8GhELIuIF4ArgN17bA6ldsQ84d8DkmYDBwEfTsmx5+YATiqtuA3YWtKWkl5DceLrqk4Fk67w\n+Ffgvoj4YmXTVUB5NcdRFOdayvIPpStCtgS2Bm5Nhwuek7Rr6vPImjZlX4cB11fe5OMSESdHxGYR\n0U+xltdHxBG9En+aw1LgCUlvTkV7A/f20hwoDnvtKum1aey9gft6bA6ldsS8ENhP0uvSXt5+qSwL\nSQdQHBI+OCL+UDO3npjDK3KfpHk13oADKa6yegQ4pcOx7EGxe38XcEe6HUhxzPQ64CHgh8AGlTan\npNgfIF0hksoHgLvTtnMZ/oWFycB3gYcprjB54wTNZSbDJ+p7Kn5gOjCUXod/p7iaptfmcBpwfxr/\nQoorjLp6DsAlFOeAXqDYYzymXTFTnOt4ON2OzjyHhynOd5Sf6a938xya3fwz
LWZmlo0Pf5mZWTZO\nKmZmlo2TipmZZeOkYmZm2TipmJlZNk4qZjUkfUnSCZXnCyV9o/L8C5I+NY7+ByWd2GDbnPRrtfdL\nulXSHpVte6r4VeE7JK2dftn2Hklnj3L8fkn/bazxmzXjpGK2qhsp/nc5ktYANgK2r2zfHbiplY7S\n/2huSfoJm48Be0TEtsDHgYslvSFV+TDwTxExPSKeB+YAb4uIT7c6RtIPOKnYhHBSMVvVTRS/BAtF\nMrkb+F36n8hrAW8BblfhbBV/j2SJpFkAkmZK+rGkqyj+pz2STpH0oKSfAG9edUgAPgt8OiKeBoiI\n2yl+Yv8Tko4FPgj8o6Rvp76nAIslzZL0gRTHnZJuSGNOSvHdpuLvdHwsjXMGsGfa4/lkzoUza/lf\nUWari4h4UtKLkrag2Cv5KcWvue5G8YuvSyLiz5L+huJ/1r+dYm/mtvILneK3wHaIiEcl7UzxkzTT\nKT5ztwP1flBz+zrlQ8BREfEP6VDYgoi4HEDSioiYnh4vAfaPiF9p+A88HUPx68O7pGR4o6RrKH78\n8sSIOGh8K2W2KicVs/puokgouwNfpEgqu1MklRtTnT2ASyLiJYofNfwRsAvwHMXvMz2a6u0JXBnp\nN53SXkZuNwLzJV1G8eOQUPy209skHZaer0fx21F/noDxzQAf/jJrpDyv8laKw183U+yptHo+5fdj\nGPNeYOeasp2Be0ZqGBEfB06l+BXaxZI2pPjDTMenczDTI2LLKP5mitmEcVIxq+8mip8hfyYiXoqI\nZyj+ZPBuDCeVHwOz0rmLqRR/JvbWOn3dABySrthaF3hfgzHPAs5MCQFJ0yn+xO//GSlYSVtFxC0R\n8TmKPyC2OcUv0B6n4k8lIGkbFX9M7HcUf4raLDsf/jKrbwnFeZKLa8qmlCfSgSspksydFL8c/ZmI\nWCpp22pHEXG7pEtTvaco/pzCKiLiKkmbAjdJCoov/yNi+K8aNnO2pK0p9k6uS2PdRXGl1+3p59GX\nAYek8peCNwyDAAAAQklEQVQk3QnMj4gvtdC/WUv8K8VmZpaND3+ZmVk2TipmZpaNk4qZmWXjpGJm\nZtk4qZiZWTZOKmZmlo2TipmZZfP/AYuQpktyF8wSAAAAAElFTkSuQmCC\n", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAZUAAAEWCAYAAACufwpNAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzt3XucHGWd7/HPlwQJEk0EsnIRMoh4AzXCoMjhEkQBERV2\n0ejRA0EwynHZw+6yLr5gzai4y0VdEPasBheHBeUiK0cOrAaEE1FYLhMMd7lJEIFAEMNFURB+5496\nyqlUqnt6Zp6Znkm+79erX1391HP51VPV/UtVV3oUEZiZmeWwXrcDMDOztYeTipmZZeOkYmZm2Tip\nmJlZNk4qZmaWjZOKmZll46Riax1JP5B06Cj7mC/pp6Ps43ZJc0fTR0455mUEY/ZJOnc8x7TuclKx\nrpK0XNK7cvYZEe+JiLNz9lklqUdSSHomPR6VdKmkd9fi2D4iloxVHMM1VvMiqV/Sc2kunpB0haTX\nj6Cf7MeCjT8nFbORmxkR04G3AFcAF0ua361gJE3t1tjAyWkuXgU8BvR3MRbrIicVm7AkHSBpmaRV\nkq6V9OZUvm36F/GO6fUWklaWl5okLZF0RKWfT0i6U9LTku6otDtW0n2V8oNGEmdErIiI04A+4CRJ\n66X+//Qvb0lvkzQg6al0ZvPVVF6e9SyQ9LCkRyQdU4l9vUqcv5Z0oaSNa20Pl/RL4CpJ0ySdm+qu\nknSjpFfW5yX1e7ykByQ9JunfJc2o9XuopF9KelzScR3Oxe+A7wA7NK2X9P50WXBViucNqfwcYGvg\n/6Yzns8Mdz/YxOCkYhOSpLcCZwGfBDYBvgFcImmDiLgP+HvgXEkvBb4FnN10qUnSByk+7A8BXg68\nH/h1Wn0fsDswA/h86m/zUYT9PeDPgNc1rDsNOC0iXg5sC1xYW78XsB2wD/D3lctARwEHAnsCWwC/\nAf6l1nZP4A3AvsChaXu2opi3TwHPNsQzPz32Al4NTAfOqNXZLW3L3sDnygTQjqTpwEeBnzWsey1w\nHnA0MAv4T4ok8pKI+B/AL4H3RcT0iDh5qLFsYnJSsYlqAfCNiLg+Il5I3wX8AdgFICLOBO4Frgc2\nB1r9S/oIikszN0bh3oh4IPXx3Yh4OCJejIgLgHuAt40i5ofT88YN654HXiNp04h4JiKuq63/fET8\nNiJupUiSH0nlnwKOi4hfRcQfKBLkwbVLXX2p7bNpnE2A16R5WxoRTzXE81HgqxHxi4h4Bvgs8OFa\nv5+PiGcj4mbgZorLfK0cI2kVxT6ZTpGw6uYBl0XEFRHxPPBlYENg1zb92iTjpGIT1Wzgb9NlklXp\nA2srin+tl86kuMxyevrAbbIVxRnJGiQdUrm8tir1tekoYt4yPT/RsO5w4LXAz9MlqQNq6x+sLD/A\n4HbOpviupozxTuAF4JUt2p4DLAbOT5fTTpa0fkM8W6RxqmNOrfW7orL8O4pk0cqXI2JmRGwWEe9P\nZ5Ntx4yIF1PsWzbUtUnKScUmqgeBL6UPqvLx0og4D/50meVU4N+AvvJ7hhb9bFsvlDSbIin9JbBJ\nRMwEbgM0ipgPoviS+q76ioi4JyI+QnF57CTgIkkbVapsVVnemsGzngeB99TmYVpEPFTtvjLO8xHx\n+Yh4I8UZwAEUl/7qHqZIWNUx/wg82uG2jsRqY0oSxXaX2+KfTF8LOKnYRLB++oK5fEyl+MD/lKS3\nq7CRpPdKellqcxowEBFHAJcBX2/R9zcpLs3slPp5TUooG1F8iK0EkHQYLb5cHoqkV0r6S2Ah8Nn0\nL/B6nY9JmpXWrUrF1Xr/IOmlkrYHDgMuSOVfB76UYkbSLEkfaBPLXpLeJGkK8BTF5bA14qH4buOv\nJW2TEvQ/AhdExB+Hs+3DdCHwXkl7p7Onv6W4pHltWv8oxfc7Nok5qdhE8J8UXyaXj76IGAA+QfHl\n8W8ortXPB0gfqvsBR6b2fwPsKOmj9Y4j4rvAlyjuSHoa+D/AxhFxB/AV4L8oPszeBFwzzLhXSfot\ncCuwP/DBiDirRd39gNslPUORED+cvgMp/Tht45UUl5IuT+WnAZcAl0t6GrgOeHubmDYDLqJIKHem\nfs9pqHdWKr8auB/4PcVNAWMmIu4CPgacDjwOvI/ii/nnUpV/Ao5Pl/qOadGNTXDyH+ky6x5JPRQf\n6uuP8VmC2bjwmYqZmWXjpGJmZtn48peZmWXjMxUzM8ummz9A1xWbbrpp9PT0dDsMM7NJY+nSpY9H\nxKxO6q5zSaWnp4eBgYFuh2F
mNmlIemDoWgVf/jIzs2ycVMzMLBsnFTMzy8ZJxczMsnFSMTOzbJxU\nzMwsGycVMzPLxknFzMyycVIxM7NsnFTMzCwbJxUzM8vGScXMzLJxUjEzs2ycVMzMLBsnFTMzy8ZJ\nxczMsnFSMTOzbJxUzMwsGycVMzPLxknFzMyycVIxM7NsnFTMzCwbJxUzM8vGScXMzLJxUjEzs2yc\nVMzMLJvsSUXi2hbl/RIH5x7PrKqvr9sRrL3qc9turr0f8plsc6mIGJ+BRD9waQQXjcuALfT29sbA\nwEA3Q7AxJME4HdLrnPrctptr74d8JsJcSloaEb2d1O3oTEXiYxI3SCyT+IbEpyVOqayfL3FGWn4m\nPUviDIm7JH4E/Fml/k4SP5ZYKrFYYvNUvkTipDTW3RK7p/IpEl+WuE3iFomj2vVjZmbdMWRSkXgD\nMA/4bxHMAV4AngEOqlSbB5xfa3oQ8DrgjcAhwK6pv/WB04GDI9gJOAv4UqXd1AjeBhwNLExlC4Ae\nYE4Ebwa+3UE/lW3QAkkDkgZWrlw51CabmdkITe2gzt7ATsCNEgAbAo8Bv5DYBbgHeD1wTa3dHsB5\nEbwAPCxxVSp/HbADcEXqbwrwSKXd99LzUopEAvAu4OsR/BEggickdhiinz+JiEXAIiguf3WwzWZm\nNgKdJBUBZ0fw2dUKxceBDwE/By6OoNMPawG3R/COFuv/kJ5fGCK+ofoxM7Nx1sl3KlcCB0vFdyIS\nG0vMBi4GPgB8hDUvfQFcDcxL34dsDuyVyu8CZklFMpBYX2L7IWK4AvikVCQZiY1H2I+t5RYuHLqO\njUx9btvNtfdDPpNtLju6+0tiHvBZiiT0PPDpCK6TuBR4YwSvrtR9JoLpEqL4zuPdwC9Tu7MiuEhi\nDvA1YAbF2cipEZwpsQQ4JoIBiU2BgQh6UjI5Gdgv9XNmBGe06qfdtvjuLzOz4RnO3V/jdkvxROGk\nYmY2PNlvKTYzM+uEk4qZmWXjpGJmZtk4qZiZWTZOKmZmlo2TipmZZeOkYmZm2TipmJlZNk4qZmaW\njZOKmZll46RiZmbZOKmYmVk2TipmZpaNk4qZmWXjpGJmZtk4qZiZWTZOKmZmlo2TipmZZeOkYmZm\n2TipmJlZNk4qZmaWjZOKmZllMyGTikSfxDHdjsPMzIZnQiYVMzObnCZEUpE4ROIWiZslzqmtWyLR\nm5Y3lVielqdInCJxY2r7yS6EbmZmFVO7HYDE9sDxwK4RPC6xMfBXHTQ9HHgygp0lNgCukbg8gvvX\nHEMLgAUAW2+9dcbozcysaiKcqbwT+G4EjwNE8ESH7fYBDpFYBlwPbAJs11QxIhZFRG9E9M6aNStH\nzGZm1qDrZyod+CODyW9apVzAUREsHv+QzMysyUQ4U7kK+KDEJgDp8lfVcmCntHxwpXwxcKTE+qnd\nayU2GuNYzcysja6fqURwu8SXgB9LvAD8jCKRlL4MXCixALisUv5NoAe4SULASuDAcQnazMwaKSK6\nHcO46u3tjYGBgW6HYWY2aUhaGhG9ndSdCJe/zMxsLeGkYmZm2TipmJlZNk4qZmaWjZOKmZll46Ri\nZmbZOKmYmVk2TipmZpaNk4qZmWXjpGJmZtk4qZiZWTZOKmZmlo2TipmZZeOkYmZm2TipmJlZNk4q\nZmaWjZOKmZll46RiZmbZOKmYmVk2TipmZpaNk4qZmWXjpGJmZtmMS1KR6JX42niMZWZm3TMuSSWC\ngQj+ajzGsomvr29yjt3NuG1yWhePGUXEyBuLjYALgVcBU4AvAr8ATgM2Av4A7A3sBBwTwQGpzenA\nDsD6QF8E35eYD7wfeCmwLXBxBJ9J4+wH/GMa4/EI9m7Vz1Ax9/b2xsDAwIi32UZPglEcdl0bu5tx\n2+S0thwzkpZGRG8ndaeOcqz9gIcjeG8xMDOAnwHzIrhR4uXAs7U2xwFXRfBxiZnADRI/SuvmAG+l\nSEZ3SZwO/B44E9gjgvslNm7XTwS/HeU2mZnZCI02qdwKfEXiJOBSYBXwSAQ3AkTwFBTZumIf4P0S\nx6TX04Ct0/KVETyZ2twBzAZeAVwdwf2pzyeG6OfOepCSFgALALbeeuv6ajMzy2RUSSWCuyV2BPYH\nTgCu6qCZgL+I4K7VCsXbKc5QSi8MEV9jP81xxiJgERSXvzqI0czMRmBUX9RLbAH8LoJzgVOAtwOb\nS+yc1r9MWiMxLAaOklCq89YhhrkO2ENim1S/vPw13H7MzGyMjfby15uAUyReBJ4HjqQ4gzhdYkOK\n71PeVWvzReBU4BaJ9YD7gQNaDRDBSokFwPdS/ceAdw+3H5s4Fi6cnGN3M26bnNbFY2ZUd39NRr77\ny8xseIZz95f/R72ZmWXjpGJmZtk4qZiZWTZOKmZmlo2TipmZZeOkYmZm2TipmJlZNk4qZmaWjZOK\nmZll46RiZmbZOKmYmVk2TipmZpaNk4qZmWXjpGJmZtk4qZiZWTZOKmZmlo2TipmZZeOkYmZm2Tip\nmJlZNk4qZmaWjZOKmZll46RiZmbZTIikIvFMt2MwM7PRmxBJZbLp61uzrKdn+G0A5s4dWQxDtavH\n0zR+q5halXe6PpdyG4czXl/f0HPT11fMT19f631Z7aPV3I3XPHQyTk/P4KOMberUwXVNfcyc2TxG\nOT/V1+3moP4o21bb1PfJzJlFWTnXueeyVX+t3qf1/d0u9nJdtbxVnWpf1bFnzlx9zsr5qLat9jXU\n50vV1KnN8Yz0s2a4FBHjM1K7IMQzEUyXEHAy8B4ggBMiuEDifOCcCC5L9fuBS4GLgROBucAGwL9E\n8I12Y/X29sbAwMBo46U+bU1lnawfqt1w+2u1fjgxj3RbcivHGc54UvE8VPxVTfNSLW81d0ONk0sn\n21/fplI5f+Vyq37ry9X67dq3Ut9vTcdjU5tchntsD7X9Te+lobavPvdNY9Q1He/DPdaa6o/2eJW0\nNCJ6O6k70c5U/hyYA7wFeBdwisTmwAXAhwAkXgLsDVwGHA48GcHOwM7AJyS26UbgZmY28ZLKbsB5\nEbwQwaPAjymSxQ+AvSQ2oDiLuTqCZ4F9gEMklgHXA5sA29U7lbRA0oCkgZUrV47XtpiZrXOmdjuA\nTkTwe4klwL7APOD8tErAUREsbt8+FgGLoLj8NYahmpmt0ybamcpPgHkSUyRmAXsAN6R1FwCHAbsD\nP0xli4EjJdYHkHitxEbjHLOZmSUT7UzlYuAdwM0UX9R/JoIVad3lwDnA9yN4LpV9E+gBbkpf8q8E\nDhzrIBcuXLNs9uzhtwHYc8+RxTBUu3o8TeO3iqlVeafrcym3cTjjLVwIS5YMXae/H+bPb14/e/bq\nd9sMZ+7GQidjVfd3uV0nnDC4rmlbZ8xoHqOcn6HGb1Vetq2urx+vM2bAnDmwfPmad9vl0Cq2Vu/T\nanz1tvXYy/XV8lZ1qn1Vx54xA44+evD1qacW89E0fn1/DGXKFNhttzX7GOp9kcuEuPtr
POW4+8vM\nbF0yme/+MjOzScxJxczMsnFSMTOzbJxUzMwsGycVMzPLxknFzMyycVIxM7NsnFTMzCwbJxUzM8vG\nScXMzLJxUjEzs2ycVMzMLBsnFTMzy8ZJxczMsnFSMTOzbJxUzMwsGycVMzPLxknFzMyycVIxM7Ns\nnFTMzCwbJxUzM8vGScXMzLJxUjEzs2ycVIahrw/mzh18PXcu9PQMLjfV7+tbc31ZBoPte3qa+6jX\nr9etrquOV+17qP7q6rHOnbv6eGW/rbav077r5fX4qzFUn2HNGKr7pql+fbk6Xrt29TjqbZv6byqb\nNm3N9tOmrVm32u9Q+6lpnFbHWX3eO9nWJjNnNrcvy+vHSid9tlrXtM/qx3TT8d90HA21ve3mp/6+\nrx57TVq9D5uOtXr9VrGX7/mm90/1eOzpGf6256aIGJ+RJoje3t4YGBgYUVupeC6nrPpaGixvql9d\n37Rc77veT7W8Xb/1+Drpr936ss/qeK2WOzmU2sVUHaepTbt5q8c41JzXtWrXKsZ6rK32f7sxm9qP\ndk47Wa6PN5xjpN6u6blV/O22Zzjz16rPpmNhpNvbdHw17at2x0mn+7bVe6reX9N2Na1vaj+c46mJ\npKUR0dtJ3Ql3piKxkcRlEjdL3CYxT+JzEjem14skJLGtxE2VdttVX5uZ2fibcEkF2A94OIK3RLAD\n8EPgjAh2Tq83BA6I4D7gSYk5qd1hwLeaOpS0QNKApIGVK1eOxzaYma2TJmJSuRV4t8RJErtH8CSw\nl8T1ErcC7wS2T3W/CRwmMQWYB3ynqcOIWBQRvRHRO2vWrPHYBjOzddLUbgdQF8HdEjsC+wMnSFwJ\nfBrojeBBiT6g/IrzP4CFwFXA0gh+3Y2YzcysMOGSisQWwBMRnCuxCjgirXpcYjpwMHARQAS/l1gM\n/Ctw+FjHtnAhLFky+HrPPWH58sHlpvrVuk3ls2cPPre6W6tav163uq6pXif91dVjrW7zwoXQ379m\nP03bP1Tf9fJWd4aV4zTNW7WsjLOpfn15yZLB8dq1a4qj2rap/6ayDTaAY49dvf2JJ65Zt93+bKWT\n46w+751ua92MGc3ty/JW47frs9W6pn1Wvt9a1Rmq/1Z1hpqf6nug6dhrFVO9fv1Ya7dcLevvb/35\nUH3v9PfD/PnN8XR6PI3WhLv7S2Jf4BTgReB54EjgQOAjwArgbuCBCPpS/V0okszsCF4Yqv/R3P1l\nZrYuGs7dXxPuTCWCxcDiWvEAcHyLJrsB3+okoZiZ2diacEllOCQuBral+PLezMy6bFInlQgO6nYM\nZmY2aCLeUmxmZpOUk4qZmWXjpGJmZtk4qZiZWTZOKmZmlo2TipmZZeOkYmZm2TipmJlZNk4qZmaW\njZOKmZll46RiZmbZOKmYmVk2TipmZpaNk4qZmWXjpGJmZtk4qZiZWTZOKmZmlo2TipmZZeOkYmZm\n2TipmJlZNpMuqUh8QeJdDeVzJS4d6/H7+ornnh6YO7d4hmK5XFeunzlzsLx8wOrlZZ9lf2VZtV05\nRlm33mf5utrH3LnFY+bM1eOqxtFUXm5T2a6nZ/WYqo96f6XqvPT1FX3NnLnmnJT1qs9lvep41VjL\n19OmFctTpxbrpk1rjnHmzKK8fK7PcdlfuZ31uenpKR5Tpw4ul9tQ1ltvvcHlqVMHl6dNW73fcrym\n+S37ro5bPpdjl23KOS3XV7d32rTV+60eQ/XtqM5DOV51nsrtqB5vZd3q/i3Lq/PVdBzXVY/X+v6t\n74fqfJXzWj1eqsvlsVB9TzS9D+rvq+oct9rmcn6r81Q+qvNQbVfdp9X3QfUYqb+nq31V56WsVx5n\nrT6DqttaxjpeFBHjO+IYkZgLHBPBAe3q9fb2xsDAwGjGIaJ4LlVfl9NZXV/Vru1QZe36HUq9fTlG\np/0OFWf1MKqPMdJ4m2Jtmv92MTa1b9V/u/qdxtgujk6NZPzRjtU07nDH7+R1VavjabhxdBJbPYZO\n368jHaOT93SnY5TxDfW50+69PdqPeUlLI6K3k7oT4kxFYiOJyyRulrhNYp7E5yRuTK8XSSjV7Zc4\nOC3vJ/FziZuAP+/qRpiZ2cRIKsB+wMMRvCWCHYAfAmdEsHN6vSGsfgYiMQ04E3gfsBOwWavOJS2Q\nNCBpYOXKlWO2EWZm67qJklRuBd4tcZLE7hE8Cewlcb3ErcA7ge1rbV4P3B/BPREEcG6rziNiUUT0\nRkTvrFmzxmwjzMzWdVO7HQBABHdL7AjsD5wgcSXwaaA3ggcl+oBp3YzRzMyGNiGSisQWwBMRnCux\nCjgirXpcYjpwMHBRrdnPgR6JbSO4D/jIeMS6cGHxPHt2cdfF8uXF6z33XP1ujdmzYdUqmDNn9XKA\nGTNWL1+4EPr7B+8CqffX37/6+EuWrN5n+brax5Ilxbply+Doo9eMv9XykiXFNq1aVbTr74f58wdj\nqqv2Udpzz8F5WbgQTj21WJ45c/U5Kccq57Gnp4i3vFulOl45Tlm2wQawyy7w05/C8cfDiSc2x7hs\nWfE8Y8bg+uocl2bPHtzO6nizZxfPv/oVvOpVxXK5DWW9L3xhcPmEEwaXTzwRNttssN9ynzTNb7mP\ny7rluLNnD45dtoHBOS2Pw9J118Gxx66+32Cw/+p2VPdrub+q81RuR/V4K/dZfZ7KbSvjrvZdP/5L\nZezV9dVxq88zZgzO14oVxbxWj5fq8ooVg23L90TT+6D+vurvH5zjVtu8YkUxv+X8V4+h6vuunLfl\ny1ffp6tWDb4Pjj568Bipz0M5bn1eyvksj7PqvqvOdfV9We7T8TIh7v6S2Bc4BXgReB44EjiQIlGs\nAO4GHoigT6IfuDSCiyT2A04Ffgf8BNh2rO/+MjNb1wzn7q8JkVTGk5OKmdnwTLpbis3MbO3gpGJm\nZtk4qZiZWTZOKmZmlo2TipmZZeOkYmZm2TipmJlZNk4qZmaWjZOKmZll46RiZmbZOKmYmVk2Tipm\nZpaNk4qZmWXjpGJmZtk4qZiZWTZOKmZmlo2TipmZZeOkYmZm2TipmJlZNk4qZmaWjZOKmZll46Ri\nZmbZOKmYmVk2TipmZpaNk4qZmWWjiOh2DONK0krggRE03RR4PHM4axvPUXuen/Y8P+11c35mR8Ss\nTiquc0llpCQNRERvt+OYyDxH7Xl+2vP8tDdZ5seXv8zMLBsnFTMzy8ZJpXOLuh3AJOA5as/z057n\np71JMT/+TsXMzLLxmYqZmWXjpGJmZtk4qXRA0n6S7pJ0r6Rjux3PWJK0XNKtkpZJGkhlG0u6QtI9\n6fkVlfqfTfNyl6R9K+U7pX7ulfQ1SUrlG0i6IJVfL6lnvLdxuCSdJekxSbdVysZlTiQdmsa4R9Kh\n47PFw9NifvokPZSOo2WS9q+sW2fmR9JWkv6fpDsk3S7pf6Xytff4iQg/2jyAKcB9wKuBlwA3A2/s\ndlxjuL3LgU1rZScDx6blY4GT0vIb03xsAGy
T5mlKWncDsAsg4AfAe1L5/wS+npY/DFzQ7W3uYE72\nAHYEbhvPOQE2Bn6Rnl+Rll/R7fnocH76gGMa6q5T8wNsDuyYll8G3J3mYK09fnymMrS3AfdGxC8i\n4jngfOADXY5pvH0AODstnw0cWCk/PyL+EBH3A/cCb5O0OfDyiLguiqP732ttyr4uAvYu/8U1UUXE\n1cATteLxmJN9gSsi4omI+A1wBbBf/i0cnRbz08o6NT8R8UhE3JSWnwbuBLZkLT5+nFSGtiXwYOX1\nr1LZ2iqAH0laKmlBKntlRDySllcAr0zLreZmy7RcL1+tTUT8EXgS2CT3RoyD8ZiTyX7sHSXplnR5\nrLy8s87OT7os9Vbgetbi48dJxep2i4g5wHuAT0vao7oy/SvJ96FXeE4a/SvFJeM5wCPAV7obTndJ\nmg78B3B0RDxVXbe2HT9OKkN7CNiq8vpVqWytFBEPpefHgIspLv89mk6/Sc+Ppeqt5uahtFwvX62N\npKnADODXY7EtY2w85mTSHnsR8WhEvBARLwJnUhxHsA7Oj6T1KRLKtyPie6l4rT1+nFSGdiOwnaRt\nJL2E4ouwS7oc05iQtJGkl5XLwD7AbRTbW945cijw/bR8CfDhdPfJNsB2wA3ptP4pSbuka7uH1NqU\nfR0MXJX+pTbZjMecLAb2kfSKdPlon1Q24ZUfmMlBFMcRrGPzk7bl34A7I+KrlVVr7/HTrbsiJtMD\n2J/iro37gOO6Hc8YbuerKe48uRm4vdxWiuuzVwL3AD8CNq60OS7Ny12ku1FSeS/FB8l9wBkM/nrD\nNOC7FF9A3gC8utvb3cG8nEdxCed5iuvSh4/XnAAfT+X3Aod1ey6GMT/nALcCt1B86G2+Ls4PsBvF\npa1bgGXpsf/afPz4Z1rMzCwbX/4yM7NsnFTMzCwbJxUzM8vGScXMzLJxUjEzs2ycVMxqJP2zpKMr\nrxdL+mbl9Vck/c0o+u+TdEyLdQsk/Tw9bpC0W2Xd7umXbpdJ2lDSKen1KcMcv0fSfx9p/GbtOKmY\nrekaYFcASesBmwLbV9bvClzbSUfpfzh3RNIBwCcpfirn9cCngO9I2ixV+SjwTxExJyKeBRYAb46I\nv+t0jKQHcFKxMeGkYrama4F3pOXtKf7D2dPpfyZvALwBuEmFUyTdlv7OxTwASXMl/UTSJcAdqew4\nSXdL+inwuhbj/j3wdxHxOEAUv257NsVvsB0BfAj4oqRvp76nA0slzZP0wRTHzZKuTmNOSfHdmH7Y\n8ZNpnBOB3dMZz1/nnDizjv8VZbauiIiHJf1R0tYUZyX/RfHrru+g+AXYWyPiOUl/QfGDiW+hOJu5\nsfxAp/j7IjtExP2SdqL4eZ85FO+5m4ClDUNv31A+ABwaEf+QLoVdGhEXAUh6Joof/0TSrcC+EfGQ\npJmp7eHAkxGxc0qG10i6nOLvdxwTEQeMbqbM1uSkYtbsWoqEsivwVYqksitFUrkm1dkNOC8iXqD4\ngcAfAzsDT1H8XtP9qd7uwMUR8TuAdJaR2zVAv6QLgfJHC/cB3izp4PR6BsVvST03BuObAb78ZdZK\n+b3Kmyguf11HcabS6fcpvx3BmHcAO9XKdqL4Hba2IuJTwPEUv0q7VNImFH8h8Kj0HcyciNgmIi4f\nQVxmHXNSMWt2LXAA8EQUP+H+BDCTIrGUSeUnwLz03cUsij+re0NDX1cDB6Y7tl4GvK/FmCcDJ6WE\ngKQ5wHzgfw8VrKRtI+L6iPgcsJIiuSwGjkw/vY6k16Zfn36a4k/bmmXny19mzW6l+J7kO7Wy6eUX\n6RR/b+YdFL/qHMBnImKFpNdXO4qImyRdkOo9RvHnFNYQEZdI2hK4VlJQfPh/LAb/QmA7p0jajuLs\n5Mo01i2pIHE0AAAAUUlEQVQUd3rdlH4ufSXFn6C9BXhB0s1Af0T8cwf9m3XEv1JsZmbZ+PKXmZll\n46RiZmbZOKmYmVk2TipmZpaNk4qZmWXjpGJmZtk4qZiZWTb/H72RB5XgGDsjAAAAAElFTkSuQmCC\n", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "nltk.Text(doyle[0]).dispersion_plot(['evidence', 'clue', 'science', 'love', 'say', 'said'])\n", "nltk.Text(bronte[0]).dispersion_plot(['evidence', 'clue', 'science', 'love', 'say', 'said'])" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Looking at the above, we can see 'love' shows up far more in this Bronte text than it did in the Sherlock Holmes text. Similarly, the word 'evidence', understandably, shows up a lot more in Doyle. It would be even better if we could represent them on the same plot, and we can, using a slightly more complicated nltk function. If you're interested, you might try exploring NLTK's ConditionalFreqDist class." ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "For the sake of simplicity, we will stop there. But if you want to learn more about visualization, you'll want to explore the [matplotlib](http://nbviewer.jupyter.org/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.01-Simple-Line-Plots.ipynb) package." ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Exercises\n", "\n", "1. Use the above methods to find a meaningful point of comparison between the doyle corpus and the bronte corpus.\n", "2. 
Rewrite your answer to number 1 to use a class and object oriented programming.\n", "3. Rework your answer to 2 to include a Corpus class composed of individual text classes." ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Frequency of He in Doyle\n", "[1727, 1460, 797, 910, 636]\n", "Frequency of He in Bronte\n", "[1860, 793, 195, 1453, 195]\n", "Frequency of She in Doyle\n", "[374, 423, 88, 135, 106]\n", "Frequency of She in Bronte\n", "[1455, 1104, 0, 1327, 0]\n" ] } ], "source": [ "# Potential Answer to 1. Use the above methods to\n", "# find a meaningful point of comparison between the doyle corpus and the bronte corpus.\n", "\n", "# import everything\n", "from bs4 import BeautifulSoup\n", "from urllib import request\n", "import nltk\n", "from nltk import word_tokenize\n", "from nltk.corpus import stopwords\n", "\n", "# store all of our functions for later use\n", "\n", "def normalize(tokens):\n", " \"\"\"Takes a list of tokens and returns a list of tokens \n", " that has been normalized by lowercasing all tokens and \n", " removing Project Gutenberg frontmatter.\"\"\"\n", " \n", " # lowercase all words\n", " normalized = [token.lower() for token in tokens]\n", " \n", " # very rough end of front matter.\n", " end_of_front_matter = 90\n", " # very rough beginning of end matter.\n", " start_of_end_matter = -2973\n", " # get only the text between the end matter and front matter\n", " normalized = normalized[end_of_front_matter:start_of_end_matter]\n", " return normalized\n", "\n", "def remove_stopwords(tokens):\n", " # remove all the stopwords from a list of tokens\n", " return [token for token in tokens if token not in stopwords.words('english')]\n", "\n", "def get_counts_in_corpora(token, corpus_one, corpus_two):\n", " \"\"\"Take two corpora, represented as lists of frequency distributions, and token query.\n", " Return the frequency of that token in all the texts in the corpus. 
The result\n", " Should be a list of two lists, one for each text.\"\"\"\n", " corpus_one_counts = [text_freq_dist[token] for text_freq_dist in corpus_one]\n", " corpus_two_counts = [text_freq_dist[token] for text_freq_dist in corpus_two]\n", " return [corpus_one_counts, corpus_two_counts]\n", "\n", "# do the thing\n", "# get the texts from the github repo\n", "url = \"https://raw.githubusercontent.com/humanitiesprogramming/scraping-corpus/master/full-text.txt\"\n", "html = request.urlopen(url).read()\n", "soup = BeautifulSoup(html, 'lxml')\n", "raw_text = soup.text\n", "# take that long string and evaluate it as Python code (because I put it up as such)\n", "texts = eval(soup.text)\n", "# tokenize each of those texts\n", "tokenized_texts = [word_tokenize(text) for text in texts]\n", "# split the corpus up by author.\n", "doyle = tokenized_texts[:5]\n", "bronte = tokenized_texts[5:]\n", "doyle = [normalize(text) for text in doyle]\n", "bronte = [normalize(text) for text in bronte]\n", "\n", "# don't remove the stopwords, because we're going to look at frequencies of he and she.\n", "# those are stopwords, so we'd get zero counts for every text if we take stopwords out!\n", "# stopwords can be meaningful!\n", "\n", "# print('start cleaning')\n", "# doyle = [remove_stopwords(text) for text in doyle]\n", "# print('doyle done')\n", "# bronte = [remove_stopwords(text) for text in bronte]\n", "# print('bronte done')\n", "doyle_freq_dist = [nltk.FreqDist(text) for text in doyle]\n", "bronte_freq_dist = [nltk.FreqDist(text) for text in bronte]\n", "\n", "\n", "he_results = get_counts_in_corpora('he', doyle_freq_dist, bronte_freq_dist)\n", "he_corpus_one_results = he_results[0]\n", "he_corpus_two_results = he_results[1]\n", "she_results = get_counts_in_corpora('she', doyle_freq_dist, bronte_freq_dist)\n", "she_corpus_one_results = she_results[0]\n", "she_corpus_two_results = she_results[1]\n", "\n", "print(\"Frequency of He in Doyle\")\n", "print(he_corpus_one_results)\n", "print(\"Frequency of He in Bronte\")\n", "print(he_corpus_two_results)\n", "print(\"Frequency of She in Doyle\")\n", "print(she_corpus_one_results)\n", "print(\"Frequency of She in Bronte\")\n", "print(she_corpus_two_results)" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "making a text\n", "making a text\n", "making a text\n", "making a text\n", "making a text\n", "making a text\n", "making a text\n", "making a text\n", "making a text\n", "making a text\n", "['project', 'gutenberg', 'ebook', 'return', 'sherlock', 'holmes', '***produced', 'anonymous', 'volunteer', 'david']\n" ] } ], "source": [ "# Potential Answer to 2.\n", "# Create a Text class and object oriented programming. It should provide access to the raw text,\n", "# the tokenized text, the normalized tokens, and a frequency distribution. 
\n", "\n", "from bs4 import BeautifulSoup\n", "from urllib import request\n", "import nltk\n", "from nltk import word_tokenize\n", "from nltk.corpus import stopwords\n", "\n", "class Text:\n", " def __init__(self, raw_tokens):\n", " print(\"making a text\")\n", " self.raw_tokens = raw_tokens\n", " self.raw_tokens = word_tokenize(self.raw_tokens)\n", " # how could you make this more DRY?\n", " self.clean_tokens = self.normalize(self.raw_tokens)\n", " self.clean_tokens = self.remove_stopwords(self.clean_tokens)\n", " self.fq = nltk.FreqDist(self.clean_tokens)\n", " \n", " def normalize(self, tokens):\n", " \"\"\"Takes a list of tokens and returns a list of tokens \n", " that has been normalized by lowercasing all tokens and \n", " removing Project Gutenberg frontmatter.\"\"\"\n", "\n", " # lowercase all words\n", " normalized = [token.lower() for token in tokens]\n", "\n", " # very rough end of front matter.\n", " end_of_front_matter = 90\n", " # very rough beginning of end matter.\n", " start_of_end_matter = -2973\n", " # get only the text between the end matter and front matter\n", " normalized = normalized[end_of_front_matter:start_of_end_matter]\n", " return normalized\n", "\n", " def remove_stopwords(self, tokens):\n", " # remove all the stopwords from a list of tokens\n", " return [token for token in tokens if token not in stopwords.words('english')]\n", "\n", "\n", "# do the thing\n", "# get the texts from the github repo\n", "# scraping corpus\n", "url = \"https://raw.githubusercontent.com/humanitiesprogramming/scraping-corpus/master/full-text.txt\"\n", "html = request.urlopen(url).read()\n", "soup = BeautifulSoup(html, 'lxml')\n", "raw_text = soup.text\n", "# take that long string and evaluate it as Python code (because I put it up as such)\n", "texts = eval(soup.text)\n", "# tokenize each of those texts\n", "\n", "doyle = [Text(raw_text) for raw_text in texts[:5]]\n", "bronte = [Text(raw_text) for raw_text in texts[:5]]\n", "\n", "print(doyle[0].clean_tokens[0:10])" ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Creating Text\n", "Creating Text\n", "Creating Text\n", "Creating Text\n", "Creating Text\n", "Creating Text\n", "Creating Text\n", "Creating Text\n", "Creating Text\n", "Creating Text\n", "['project', 'gutenberg', 'ebook', 'return', 'sherlock', 'holmes', '***produced', 'anonymous', 'volunteer', 'david', 'widgerthe', 'return', 'sherlock', 'holmes', ',', 'collection', 'holmes', 'adventuresby', 'sir', 'arthur']\n" ] } ], "source": [ "# Potential Answer to 3.\n", "# Rework your answer to 2 to include a Corpus class composed of individual text classes.\n", "\n", "from bs4 import BeautifulSoup\n", "from urllib import request\n", "import nltk\n", "from nltk import word_tokenize\n", "from nltk.corpus import stopwords\n", "\n", "class Text:\n", " def __init__(self, raw_tokens):\n", " print('Creating Text')\n", " self.raw_tokens = raw_tokens\n", " self.raw_tokens = word_tokenize(self.raw_tokens)\n", " # how could you make this more DRY?\n", " self.clean_tokens = self.normalize(self.raw_tokens)\n", " self.clean_tokens = self.remove_stopwords(self.clean_tokens)\n", " self.fq = nltk.FreqDist(self.clean_tokens)\n", " \n", " def normalize(self, tokens):\n", " \"\"\"Takes a list of tokens and returns a list of tokens \n", " that has been normalized by lowercasing all tokens and \n", " removing Project Gutenberg frontmatter.\"\"\"\n", "\n", " # lowercase all words\n", " normalized = 
[token.lower() for token in tokens]\n", "\n", " # very rough end of front matter.\n", " end_of_front_matter = 90\n", " # very rough beginning of end matter.\n", " start_of_end_matter = -2973\n", " # keep only the text between the front matter and end matter\n", " normalized = normalized[end_of_front_matter:start_of_end_matter]\n", " return normalized\n", "\n", " def remove_stopwords(self, tokens):\n", " # remove all the stopwords from a list of tokens\n", " return [token for token in tokens if token not in stopwords.words('english')]\n", "\n", "class Corpus:\n", " def __init__(self, url):\n", " self.url = url\n", " self.raw_texts = self.scrape()\n", " self.doyle, self.bronte = self.make_texts()\n", " \n", " \n", " def scrape(self):\n", " html = request.urlopen(self.url).read()\n", " soup = BeautifulSoup(html, 'lxml')\n", " raw_text = soup.text\n", " # take that long string and evaluate it as Python code (because I put it up as such)\n", " texts = eval(soup.text)\n", " return texts\n", " \n", " def make_texts(self):\n", " doyle = [Text(raw_text) for raw_text in self.raw_texts[:5]]\n", " bronte = [Text(raw_text) for raw_text in self.raw_texts[5:]]\n", " return doyle, bronte\n", " \n", "# do the thing\n", "# get the texts from the github repo\n", "url = \"https://raw.githubusercontent.com/humanitiesprogramming/scraping-corpus/master/full-text.txt\"\n", "corpus = Corpus(url)\n", "print(corpus.doyle[0].clean_tokens[0:20])" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" } }, "nbformat": 4, "nbformat_minor": 2 }