{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "Created by [Nathan Kelber](http://nkelber.com) and Ted Lawless for [JSTOR Labs](https://labs.jstor.org/) under [Creative Commons CC BY License](https://creativecommons.org/licenses/by/4.0/)
\n", "For questions/comments/improvements, email nathan.kelber@ithaka.org.
\n", "____\n", "**Exploring Word Frequencies**\n", "\n", "**Description:**\n", "This [notebook](https://docs.tdm-pilot.org/key-terms/#jupyter-notebook) shows how to find the most common words in a \n", "[dataset](https://docs.tdm-pilot.org/key-terms/#dataset). The following processes are described:\n", "\n", "* Using the `tdm_client` to create a Pandas DataFrame\n", "* Filtering based on a pre-processed ID list\n", "* Filtering based on a [stop words list](https://docs.tdm-pilot.org/key-terms/#stop-words)\n", "* Using a `Counter()` object to get the most common words\n", "\n", "**Difficulty:** Intermediate\n", "\n", "**Completion time:** 60 minutes\n", "\n", "**Knowledge Required:** \n", "* Python Basics ([Start Python Basics I](./python-basics-1.ipynb))\n", "\n", "**Knowledge Recommended:**\n", "\n", "* [Working with Dataset Files](./working-with-dataset-files.ipynb)\n", "* [Pandas I](./pandas-1.ipynb)\n", "* [Counter Objects](./counter-objects.ipynb)\n", "* [Creating a Stopwords List](./creating-stopwords-list.ipynb)\n", "\n", "**Data Format:** [JSON Lines (.jsonl)](https://docs.tdm-pilot.org/key-terms/#jsonl)\n", "\n", "**Libraries Used:**\n", "* **[tdm_client](https://docs.tdm-pilot.org/key-terms/#tdm-client)** to collect, unzip, and read our dataset\n", "* **[NLTK](https://docs.tdm-pilot.org/key-terms/#nltk)** to help [clean](https://docs.tdm-pilot.org/key-terms/#clean-data) up our dataset\n", "* [Counter](https://docs.tdm-pilot.org/key-terms/#python-counter) from **Collections** to help sum up our word frequencies\n", "\n", "**Research Pipeline:**\n", "\n", "1. Build a dataset\n", "2. Create a \"Pre-Processing CSV\" with [Exploring Metadata](./exploring-metadata.ipynb) (Optional)\n", "3. Create a \"Custom Stopwords List\" with [Creating a Stopwords List](./creating-stopwords-list.ipynb) (Optional)\n", "4. Complete the word frequencies analysis with this notebook\n", "___" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Import your dataset\n", "\n", "We'll use the tdm_client library to automatically retrieve the dataset in the JSON file format. \n", "\n", "Enter a [dataset ID](https://docs.tdm-pilot.org/key-terms/#dataset-ID) in the next code cell. \n", "\n", "If you don't have a dataset ID, you can:\n", "* Use the sample dataset ID already in the code cell\n", "* [Create a new dataset](https://tdm-pilot.org/builder)\n", "* [Use a dataset ID from other pre-built sample datasets](https://tdm-pilot.org/dataset/dashboard)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Creating a variable `dataset_id` to hold our dataset ID\n", "# The default dataset is Shakespeare Quarterly, 1950-present\n", "dataset_id = \"7e41317e-740f-e86a-4729-20dab492e925\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, import the `tdm_client`, passing the `dataset_id` as an argument using the `get_dataset` method." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Importing your dataset with a dataset ID\n", "import tdm_client\n", "# Pull in the dataset that matches `dataset_id`\n", "# in the form of a gzipped JSON lines file.\n", "dataset_file = tdm_client.get_dataset(dataset_id)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Apply Pre-Processing Filters (if available)\n", "If you completed pre-processing with the \"Exploring Metadata and Pre-processing\" notebook, you can use your CSV file of dataset IDs to automatically filter the dataset. 
Your pre-processed CSV file must be in the same directory as this notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Import a pre-processed CSV file of filtered dataset IDs.\n", "# If you do not have a pre-processed CSV file, the analysis\n", "# will run on the full dataset and may take longer to complete.\n", "import pandas as pd\n", "import os\n", "\n", "pre_processed_file_name = f'data/pre-processed_{dataset_id}.csv'\n", "\n", "if os.path.exists(pre_processed_file_name):\n", "    df = pd.read_csv(pre_processed_file_name)\n", "    filtered_id_list = df[\"id\"].tolist()\n", "    use_filtered_list = True\n", "    print('Pre-Processed CSV found. Successfully read in ' + str(len(df)) + ' documents.')\n", "else:\n", "    use_filtered_list = False\n", "    print('No pre-processed CSV file found. Full dataset will be used.')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Extract the Unigram Counts from the dataset JSON file\n", "\n", "We pulled in our dataset using a `dataset_id`. The file, which resides in the datasets/ folder, is a compressed JSON Lines file (jsonl.gz) that contains all the metadata information found in the metadata CSV *plus* the textual data necessary for analysis, including:\n", "\n", "* Unigram Counts\n", "* Bigram Counts\n", "* Trigram Counts\n", "* Full-text (if available)\n", "\n", "To complete our analysis, we are going to pull out the unigram counts for each document and store them in a Counter() object. We will import `Counter`, which allows us to use Counter() objects for counting unigrams. Then we will initialize an empty Counter() object `word_frequency` to hold all of our unigram counts." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Import Counter()\n", "from collections import Counter\n", "\n", "# Create an empty Counter object called `word_frequency`\n", "word_frequency = Counter()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Gather unigramCounts from documents in `filtered_id_list` if it is available\n", "\n", "for document in tdm_client.dataset_reader(dataset_file):\n", "    if use_filtered_list is True:\n", "        document_id = document['id']\n", "        # Skip documents not in our filtered_id_list\n", "        if document_id not in filtered_id_list:\n", "            continue\n", "    unigrams = document.get(\"unigramCount\", {})\n", "    for gram, count in unigrams.items():\n", "        word_frequency[gram] += count\n", "\n", "# Print success message\n", "if use_filtered_list is True:\n", "    print('Unigrams have been collected for documents in filtered_id_list')\n", "else:\n", "    print('Unigrams have been collected for all documents without filtering')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Find Most Common Unigrams\n", "Now that we have the frequency of every unigram in our corpus, we need to sort them to find which are the most common." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "for gram, count in word_frequency.most_common(25):\n", "    print(gram.ljust(20), count)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Clean Up Tokens\n", "\n", "We have successfully created a word frequency list. There are a couple of small issues, however, that we still need to address:\n", "1. 
There are many [function words](https://docs.tdm-pilot.org/key-terms/#function-words), words like \"the\", \"in\", and \"of\" that are grammatically important but do not carry as much semantic meaning as [content words](https://docs.tdm-pilot.org/key-terms/#content-words), such as nouns and verbs. \n", "2. The words represented here are actually case-sensitive [strings](https://docs.tdm-pilot.org/key-terms/#string). That means that the string \"the\" is different from the string \"The\". You may notice this in your results above.\n", "\n", "To solve these issues, we need to find a way to remove common [function words](https://docs.tdm-pilot.org/key-terms/#function-words) and combine [strings](https://docs.tdm-pilot.org/key-terms/#string) that may have capital letters in them. We can solve these issues by:\n", "\n", "1. Using a [stopwords](https://docs.tdm-pilot.org/key-terms/#stop-words) list to remove common [function words](https://docs.tdm-pilot.org/key-terms/#function-words)\n", "2. Lowercasing all the characters in each string to combine our counts" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load Stopwords List\n", "\n", "If you have created a stopwords list in the stopwords notebook, we will import it here. (You can always modify the CSV file to add or subtract words and then reload the list.) Otherwise, we'll load the NLTK [stopwords](https://docs.tdm-pilot.org/key-terms/#stop-words) list automatically." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Load a custom data/stop_words.csv if available\n", "# Otherwise, load the nltk stopwords list in English\n", "\n", "# Create an empty Python list to hold the stopwords\n", "stop_words = []\n", "\n", "# The filename of the custom data/stop_words.csv file\n", "stopwords_list_filename = 'data/stop_words.csv'\n", "\n", "if os.path.exists(stopwords_list_filename):\n", "    import csv\n", "    with open(stopwords_list_filename, 'r') as f:\n", "        # Read the first row of the CSV as the list of stopwords\n", "        stop_words = list(csv.reader(f))[0]\n", "    print('Custom stopwords list loaded from CSV')\n", "else:\n", "    # Load the NLTK stopwords list\n", "    from nltk.corpus import stopwords\n", "    stop_words = stopwords.words('english')\n", "    print('NLTK stopwords list loaded')\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Apply Processing\n", "In addition to using a stopwords list, we will clean up the tokens by lowercasing them, which combines the counts of tokens that differ only in capitalization, such as \"quarterly\" and \"Quarterly\". We will also remove any tokens that are not composed entirely of alphabetic characters."
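, "\n", "\n", "Before applying these steps to the whole dataset, the short cell below is a minimal, illustrative sketch of how `.lower()` and `.isalpha()` behave on a few sample strings. (The sample tokens are made up for demonstration; they are not drawn from the dataset.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative sketch only: the tokens below are hypothetical examples,\n", "# not values taken from the dataset.\n", "sample_tokens = ['The', 'the', 'Quarterly', '1950', 'twenty-one']\n", "\n", "for token in sample_tokens:\n", "    clean_token = token.lower()  # lowercase so 'The' and 'the' are counted together\n", "    # .isalpha() is True only when every character is a letter\n", "    print(token.ljust(12), clean_token.ljust(12), clean_token.isalpha())"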
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Gather unigramCounts from documents in `filtered_id_list` if available\n", "# and apply the processing.\n", "\n", "transformed_word_frequency = Counter()\n", "\n", "for document in tdm_client.dataset_reader(dataset_file):\n", " if use_filtered_list is True:\n", " document_id = document['id']\n", " # Skip documents not in our filtered_id_list\n", " if document_id not in filtered_id_list:\n", " continue\n", " unigrams = document.get(\"unigramCount\", [])\n", " for gram, count in unigrams.items():\n", " clean_gram = gram.lower()\n", " if clean_gram in stop_words:\n", " continue\n", " if not clean_gram.isalpha():\n", " continue\n", " transformed_word_frequency[clean_gram] += count" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we will display the 20 most common words by using the `.most_common()` method on the `Counter()` object." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Print the most common processed unigrams and their counts\n", "for gram, count in transformed_word_frequency.most_common(25):\n", " print(gram.ljust(20), count)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": null }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": true, "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 4 }