{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Counting Word Frequencies with Python" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The debates that occur in the Canadian Parliament are transcribed and published in a document known as Hansard. Since 2006, Hansard has been available for public download in .xml file format. To date, this archive contains over 57 million words. By converting these transcripts from .xml to .txt, the files can be processed in numerous ways with the Python coding language.

\n", "

The purpose of this document is to illustrate the counting of specific words in the collection of text files that will be referred to from now on as the corpus. Terms that may be unfamiliar to the reader are highlighted in bold and carry a definition that can be accessed by hovering the mouse pointer over the word. While it is not necessary to read every line of code to understand the process, explanatory comments within the code are marked with a # and coloured light blue." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 1: Determining the Word Frequency for a Single File" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before we can begin working with a piece of text, it must be loaded into and read by Python. Rather than altering (and potentially irreversibly changing) our original text file, we will work with the contents of the file as a string of text contained within a variable. Now we can close our original text file, keeping it intact.\n", "\n", "In the next piece of code we will open the file that contains the textual transcripts of all of the Parliamentary debates that occurred in the House of Commons during the 39th sitting of Parliament.\n", "\n", "While working with one long string may be useful for other applications, our purposes require that we split the text into pieces. In this case, each word will become its own unique string." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# 1. open the text file\n", "infile = open('data/39.txt')\n", "# 2. read the file and assign it to the variable 'text'\n", "text = infile.read()\n", "# 3. close the text file\n", "infile.close()\n", "# 4. split the variable 'text' into distinct word strings\n", "words = text.split()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have loaded our file, we can begin to work on it. Python offers us a lot of pre-built tools to make the task of coding easier. Some of the most commonly used tools are known as functions. Functions are useful for automating tasks that would otherwise require writing the same code over and over. While Python has many built-in functions, the language's true power comes from the ability to define unique functions based on programming needs. In the code above, we've already used four Python functions: open, read, close, and split.

\n", "

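For example, the built-in len function will tell us how many word strings our split produced (a quick sketch; the exact number will depend on the file):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# count how many word strings the split produced\n", "print(len(words))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "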
In the next piece of code we will define our own function, called count_in_list. This function will allow us to count the occurrences of any word in the corpus." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# 5. define the 'count_in_list' function\n", "def count_in_list(item_to_count, list_to_search):\n", "    \"Counts the number of times a specified word appears in a list of words\"\n", "    number_of_hits = 0\n", "    for item in list_to_search:\n", "        if item == item_to_count:\n", "            number_of_hits += 1\n", "    return number_of_hits" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can call the function for any word we choose. The next example shows that there are 392 occurrences of the word privacy contained in the transcripts for the 39th sitting of Parliament." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Instances of the word 'privacy': 392\n" ] } ], "source": [ "# 6. here the function counts the instances of the word 'privacy'\n", "print(\"Instances of the word 'privacy':\", count_in_list(\"privacy\", words))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Unfortunately, there are two distinct problems here, both stemming from the fact that our function only counts the string privacy exactly as it appears.

\n", "

The first problem is that text strings are case-sensitive. If the word contains UPPERCASE and lowercase letters, the word that is searched for will only be counted if the cases match exactly. The following example counts the number of instances of Privacy with the first letter capitalized." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Instances of the word 'Privacy': 454\n" ] } ], "source": [ "print(\"Instances of the word 'Privacy':\", count_in_list(\"Privacy\", words))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is a more extreme example to illustrate the point." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Instances of the word 'pRiVaCy': 0\n" ] } ], "source": [ "print(\"Instances of the word 'pRiVaCy':\", count_in_list(\"pRiVaCy\", words))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The second problem is punctuation. Just as words are case-sensitive, they are also punctuation-sensitive. If a piece of punctuation has been included in the string, it will be included in the search. Here we count the occurrences of privacy, shown here with a comma after the word." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Instances of the word 'privacy,': 41\n" ] } ], "source": [ "print(\"Instances of the word 'privacy,':\", count_in_list(\"privacy,\", words))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here we count privacy., with the word followed by a period." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Instances of the word 'privacy.': 65\n" ] } ], "source": [ "print(\"Instances of the word 'privacy.':\", count_in_list(\"privacy.\", words))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We could comb through the text to find all of the different variations of privacy, run the code for each one, and add the numbers together, but that would be time-consuming and potentially inaccurate. Instead, we must process the text further to make it uniform. In this case we want to make all of the characters lowercase and filter out tokens that contain anything other than letters.

\n", "

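As a small illustration before we apply them, the string method lower() converts a string to lowercase, while isalpha() reports whether a string is made up entirely of letters (the example strings here are made up):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# lower() converts a string to lowercase\n", "print('PrIvAcY'.lower())\n", "\n", "# isalpha() is True only when every character is a letter\n", "print('privacy,'.isalpha())\n", "print('privacy'.isalpha())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "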
Together, these built-in string methods will do the cleaning for us." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will reload the text file, split the text into distinct words or tokens, and then clean the tokens with a list comprehension." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false }, "outputs": [], "source": [ "infile = open('data/39.txt')\n", "text = infile.read()\n", "infile.close()\n", "tokens = text.split()\n", "\n", "# here we lowercase each token and keep only the fully alphabetic ones\n", "words = [w.lower() for w in tokens if w.isalpha()]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, when we count the word privacy, we are presented with a total of 846 instances, which is exactly the sum of the 392 lowercase and 454 capitalized instances counted above. (Note that isalpha() drops tokens such as privacy, and privacy. outright rather than stripping their punctuation.)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Instances of the word 'privacy': 846\n" ] } ], "source": [ "print(\"Instances of the word 'privacy':\", count_in_list(\"privacy\", words))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---------" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 2: Determining Word Frequencies for the Entire Corpus" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's see how this compares to the rest of the corpus. To accomplish this, we must write another function that will read all of the text files in our file folder.\n", "\n", "

First we need to introduce a feature of Python that we've yet to see: modules. Modules are packages of functions and code that serve specific purposes. They are much like functions, but more complex.\n", "\n", "

The next piece of code imports the listdir function from a module called os. We will use listdir to print a list of all the files in a specific directory. Each of the listed files corresponds to a textual transcript of Hansard. The first ten files refer to the complete transcript for each year from 2006 to 2015, while the last three files are the transcripts corresponding to each sitting of Parliament, in this case the 39th through to the end of the second sitting of the 41st Parliament." ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "['.DS_Store',\n", " '2006.txt',\n", " '2007.txt',\n", " '2008.txt',\n", " '2009.txt',\n", " '2010.txt',\n", " '2011.txt',\n", " '2012.txt',\n", " '2013.txt',\n", " '2014.txt',\n", " '2015.txt',\n", " '39.txt',\n", " '40.txt',\n", " '41.txt']" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# imports the listdir function from the os module\n", "from os import listdir\n", "# calls the listdir function to list the files in a specific directory\n", "listdir(\"data\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Although we can display the contents of a directory by using the listdir function, Python needs those names stored in a list in order to iterate over it. We also want to specify that only files with the extension .txt are included. Here we create another function called list_textfiles." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def list_textfiles(directory):\n", "    \"Return a list of filenames ending in '.txt'\"\n", "    textfiles = []\n", "    for filename in listdir(directory):\n", "        if filename.endswith(\".txt\"):\n", "            textfiles.append(directory + \"/\" + filename)\n", "    return textfiles" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Rather than writing code to open each file individually, we can create another custom function to open the file we pass to it. We'll call this one read_file." ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def read_file(filename):\n", "    \"Read the contents of FILENAME and return as a string.\"\n", "    infile = open(filename)\n", "    contents = infile.read()\n", "    infile.close()\n", "    return contents" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can open all of the files in our directory, strip each file of uppercase letters and punctuation, split the whole of each text into tokens, and store all the data as separate lists in our variable corpus." ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": true }, "outputs": [], "source": [ "corpus = []\n", "for filename in list_textfiles(\"data\"):\n", "    # reads the file\n", "    text = read_file(filename)\n", "    # splits the text into tokens\n", "    tokens = text.split()\n", "    # lowercases each token and keeps only the fully alphabetic ones\n", "    words = [w.lower() for w in tokens if w.isalpha()]\n", "    # adds the file's word list to the corpus\n", "    corpus.append(words)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's check to make sure the code worked by using the len function to count the number of items in our corpus list."
] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "There are 13 files in the list, named: data/2006.txt, data/2007.txt, data/2008.txt, data/2009.txt, data/2010.txt, data/2011.txt, data/2012.txt, data/2013.txt, data/2014.txt, data/2015.txt, data/39.txt, data/40.txt, data/41.txt\n" ] } ], "source": [ "print\"There are\", len(corpus), \"files in the list, named: \", ', '.join(list_textfiles('data'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create a function to make the names of the files more readable. First we'll have to strip the the file extension .txt." ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from os.path import splitext\n", "\n", "def remove_ext(filename):\n", " \"Removes the file extension, such as .txt\"\n", " name, extension = splitext(filename)\n", " return name\n", "\n", "for files in list_textfiles('data'):\n", " remove_ext(files)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's make a function to remove the data/." ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from os.path import basename\n", "\n", "def remove_dir(filepath):\n", " \"Removes the path from the file name\"\n", " name = basename(filepath)\n", " return name\n", "for files in list_textfiles('data'):\n", " remove_dir(files)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And finally, we'll write a function to tie the two functions together." ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def get_filename(filepath):\n", " \"Removes the path and file extension from the file name\"\n", " filename = remove_ext(filepath)\n", " name = remove_dir(filename)\n", " return name\n", "\n", "\n", "filenames = []\n", "for files in list_textfiles('data'):\n", " files = get_filename(files)\n", " filenames.append(files)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can display a readable list of the files within our directory." ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "There are 13 files in the list, named: 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 39, 40, 41 .\n" ] } ], "source": [ "print\"There are\", len(corpus), \"files in the list, named:\", ', '.join(filenames),\".\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next step involves iterating through both lists: corpus and filenames, in order to generate a word frequency for each file in the corpus. For this we will use Python's zip function." 
] }, { "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Instances of the word 'privacy' in 2006 : 356\n", "Instances of the word 'privacy' in 2007 : 258\n", "Instances of the word 'privacy' in 2008 : 252\n", "Instances of the word 'privacy' in 2009 : 612\n", "Instances of the word 'privacy' in 2010 : 533\n", "Instances of the word 'privacy' in 2011 : 624\n", "Instances of the word 'privacy' in 2012 : 552\n", "Instances of the word 'privacy' in 2013 : 918\n", "Instances of the word 'privacy' in 2014 : 1567\n", "Instances of the word 'privacy' in 2015 : 806\n", "Instances of the word 'privacy' in 39 : 846\n", "Instances of the word 'privacy' in 40 : 1643\n", "Instances of the word 'privacy' in 41 : 3989\n" ] } ], "source": [ "for words, names in zip(corpus, filenames):\n", " print\"Instances of the word \\'privacy\\' in\",names, \":\", count_in_list(\"privacy\", words)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What's exciting about this code is that we can now search the entire corpus for any word we choose. Let's search for information." ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Instances of the word 'information' in 2006 : 2724\n", "Instances of the word 'information' in 2007 : 2328\n", "Instances of the word 'information' in 2008 : 1610\n", "Instances of the word 'information' in 2009 : 3554\n", "Instances of the word 'information' in 2010 : 3913\n", "Instances of the word 'information' in 2011 : 3250\n", "Instances of the word 'information' in 2012 : 4137\n", "Instances of the word 'information' in 2013 : 3281\n", "Instances of the word 'information' in 2014 : 4999\n", "Instances of the word 'information' in 2015 : 2728\n", "Instances of the word 'information' in 39 : 6615\n", "Instances of the word 'information' in 40 : 9688\n", "Instances of the word 'information' in 41 : 16221\n" ] } ], "source": [ "for words, names in zip(corpus, filenames):\n", " print\"Instances of the word \\'information\\' in\",names, \":\", count_in_list(\"information\", words)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "How about ethics?" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Instances of the word 'ethics' in 2006 : 456\n", "Instances of the word 'ethics' in 2007 : 177\n", "Instances of the word 'ethics' in 2008 : 680\n", "Instances of the word 'ethics' in 2009 : 333\n", "Instances of the word 'ethics' in 2010 : 473\n", "Instances of the word 'ethics' in 2011 : 211\n", "Instances of the word 'ethics' in 2012 : 285\n", "Instances of the word 'ethics' in 2013 : 615\n", "Instances of the word 'ethics' in 2014 : 339\n", "Instances of the word 'ethics' in 2015 : 156\n", "Instances of the word 'ethics' in 39 : 1304\n", "Instances of the word 'ethics' in 40 : 911\n", "Instances of the word 'ethics' in 41 : 1510\n" ] } ], "source": [ "for words, names in zip(corpus, filenames):\n", " print\"Instances of the word \\'ethics\\' in\",names, \":\", count_in_list(\"ethics\", words)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While word frequencies, by themselves, do not give us a tremendous amount of contextual information, they are a valuable first step in conducting large scale text analyses. 
For instance, returning to our frequency list for privacy, we can observe a general trend suggesting that the use of privacy has been increasing since 2006. It is important to note that our counts are raw numbers. For a more contextual analysis we could account for how often Parliament was in session during each period, or we could compare the count of privacy to the total number of words in each file.

\n", "

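As a minimal sketch of that second idea, using the corpus and filenames lists built above, we can divide each raw count by the length of the corresponding word list; scaling to occurrences per million words makes the numbers easier to read:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# sketch: relative frequency of 'privacy', in occurrences per million words\n", "for words, names in zip(corpus, filenames):\n", "    per_million = count_in_list(\"privacy\", words) / len(words) * 1000000\n", "    print(\"Frequency of 'privacy' in\", names, \":\", round(per_million, 1), \"per million words\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "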
Stay tuned for the next section: Adding Context to Word Frequency Counts" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "--------" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.0" } }, "nbformat": 4, "nbformat_minor": 0 }