{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "Created by [Nathan Kelber](http://nkelber.com) and Ted Lawless for [JSTOR Labs](https://labs.jstor.org/) under [Creative Commons CC BY License](https://creativecommons.org/licenses/by/4.0/)
\n", "For questions/comments/improvements, email nathan.kelber@ithaka.org.
\n", "___\n", "\n", "**Finding Significant Words Using TF/IDF**\n", "\n", "**Description:**\n", "This [notebook](https://docs.tdm-pilot.org/key-terms/#jupyter-notebook) shows how to discover significant words. The method for finding significant terms is [tf-idf](https://docs.tdm-pilot.org/key-terms/#tf-idf). The following processes are described:\n", "\n", "* An educational overview of TF-IDF, including how it is calculated\n", "* Using the `tdm_client` to retrieve a dataset\n", "* Filtering based on a pre-processed ID list\n", "* Filtering based on a [stop words list](https://docs.tdm-pilot.org/key-terms/#stop-words)\n", "* Cleaning the tokens in the dataset\n", "* Creating a [gensim dictionary](https://docs.tdm-pilot.org/key-terms/#gensim-dictionary)\n", "* Creating a [gensim](https://docs.tdm-pilot.org/key-terms/#gensim) [bag of words](https://docs.tdm-pilot.org/key-terms/#bag-of-words) [corpus](https://docs.tdm-pilot.org/key-terms/#corpus)\n", "* Computing the most significant words in your [corpus](https://docs.tdm-pilot.org/key-terms/#corpus) using [gensim](https://docs.tdm-pilot.org/key-terms/#gensim) implementation of [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf)\n", "\n", "**Use Case:** For Learners (Detailed explanation, not ideal for researchers)\n", "\n", "**Difficulty:** Intermediate\n", "\n", "**Completion time:** 60 minutes\n", "\n", "**Knowledge Required:** \n", "* Python Basics Series ([Start Python Basics I](./python-basics-1.ipynb))\n", "\n", "**Knowledge Recommended:**\n", "* [Exploring Metadata](./metadata.ipynb)\n", "* [Working with Dataset Files](./working-with-dataset-files.ipynb)\n", "* [Pandas I](./pandas-1.ipynb)\n", "* [Creating a Stopwords List](./creating-stopwords-list.ipynb)\n", "* A familiarity with [gensim](https://docs.tdm-pilot.org/key-terms/#gensim) is helpful but not required.\n", "\n", "**Data Format:** [JSON Lines (.jsonl)](https://docs.tdm-pilot.org/key-terms/#jsonl)\n", "\n", "**Libraries Used:**\n", "* `pandas` to load a preprocessing list\n", "* `csv` to load a custom stopwords list\n", "* [gensim](https://docs.tdm-pilot.org/key-terms/#gensim) to help compute the [tf-idf](https://docs.tdm-pilot.org/key-terms/#tf-idf) calculations\n", "* [NLTK](https://docs.tdm-pilot.org/key-terms/#nltk) to create a stopwords list (if no list is supplied)\n", "\n", "**Research Pipeline:**\n", "\n", "1. Build a dataset\n", "2. Create a \"Pre-Processing CSV\" with [Exploring Metadata](./exploring-metadata.ipynb) (Optional)\n", "3. Create a \"Custom Stopwords List\" with [Creating a Stopwords List](./creating-stopwords-list.ipynb) (Optional)\n", "4. Complete the TF-IDF analysis with this notebook\n", "____" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# What is \"Term Frequency- Inverse Document Frequency\" (TF-IDF)?\n", "\n", "[TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) is used in [machine learning](https://docs.tdm-pilot.org/key-terms/#machine-learning) and [natural language processing](https://docs.tdm-pilot.org/key-terms//#nlp) for measuring the significance of particular terms for a given document. It consists of two parts that are multiplied together:\n", "\n", "1. Term Frequency- A measure of how many times a given word appears in a document\n", "2. 
Inverse Document Frequency: a measure of how rare the word is across the documents of the corpus, based on how many documents contain it\n", "\n", "If we were to merely consider [word frequency](https://docs.tdm-pilot.org/key-terms/#word-frequency), the most frequent words would be common [function words](https://docs.tdm-pilot.org/key-terms/#function-words) like \"the\", \"and\", and \"of\". We could use a [stopwords list](https://docs.tdm-pilot.org/key-terms/#stop-words) to remove the common [function words](https://docs.tdm-pilot.org/key-terms/#function-words), but that still may not give us results that describe the unique terms in the document, since the uniqueness of terms depends on the context of a larger body of documents. In other words, the same term could be significant or insignificant depending on the context. Consider these examples:\n", "\n", "* Given a set of scientific journal articles in biology, the term \"lab\" may not be significant since biologists often rely on and mention labs in their research. However, if the term \"lab\" were to occur frequently in a history or English article, then it is likely to be significant since humanities articles rarely discuss labs. \n", "* If we were to look at thousands of articles in literary studies, then the term \"postcolonial\" may be significant for any given article. However, if we were to look at a few hundred articles on the topic of \"the global south,\" then the term \"postcolonial\" may occur so frequently that it is not a significant way to differentiate between the articles.\n", "\n", "The [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) calculation reveals the words that are frequent in a given document **yet rare in other documents**. The goal is to find out what is unique or remarkable about a document given the context (and *the given context* can change the results of the analysis). \n", "\n", "Here is how the calculation is mathematically written:\n", "\n", "$$tfidf_{t,d} = tf_{t,d} \cdot idf_{t,D}$$\n", "\n", "In plain English, this means: **The value of [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) is the product of a given term's frequency and its inverse document frequency.** Let's unpack these terms one at a time.\n", "\n", "## Term Frequency Function\n", "\n", "$$tf_{t,d}$$\n", "The number of times a term (t) occurs in a given document (d)\n", "\n", "## Inverse Document Frequency Function\n", "\n", "$$idf_{t,D} = \mbox{log} \frac{N}{|\{d \in D : t \in d\}|}$$\n", "The inverse document frequency can be expanded to the calculation above. In plain English, this means: **The log of the total number of documents (N) divided by the number of documents that contain the term**\n", "\n", "## TF-IDF Calculation in Plain English\n", "\n", "$$(Times-the-word-occurs-in-given-document) \cdot \mbox{log} \frac{(Total-number-of-documents)}{(Number-of-documents-containing-word)}$$\n", "\n", "There are variations on the [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) formula, but this is the most widely used version." ] },
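 { "cell_type": "markdown", "metadata": {}, "source": [ "To make the formula concrete, here is a small helper function that computes a TF-IDF score using log base 10, matching the hand calculations in the next section. (This `tf_idf` helper is our own illustration; it is not part of the analysis pipeline later in this notebook.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# An illustrative TF-IDF calculation (log base 10)\n", "from math import log10\n", "\n", "def tf_idf(term_count, total_docs, docs_with_term):\n", "    # Term frequency multiplied by inverse document frequency\n", "    return term_count * log10(total_docs / docs_with_term)" ] },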
{ "cell_type": "markdown", "metadata": {}, "source": [ "## An Example Calculation of TF-IDF\n", "\n", "Let's take a look at an example to illustrate the fundamentals of [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf). First, we need several texts to compare. Our texts will be very simple.\n", "\n", "* text1 = 'The grass was green and spread out the distance like the sea.'\n", "* text2 = 'Green eggs and ham were spread out like the book.'\n", "* text3 = 'Green sailors were met like the sea met troubles.'\n", "* text4 = 'The grass was green.'\n", "\n", "The first step is to discover the unique words in each text. \n", "\n", "|text1|text2|text3|text4|\n", "| --- | --- | --- | --- |\n", "|the|green|green|the|\n", "|grass|eggs|sailors|grass|\n", "|was|and|were|was|\n", "|green|ham|met|green|\n", "|and|were|like| |\n", "|spread|spread|the| |\n", "|out|out|sea| |\n", "|distance|like|troubles| |\n", "|like|the| | |\n", "|sea|book| | |\n", "\n", "\n", "Our four texts share some similar words. Next, we create a single list of unique words that occur across all four texts. (When we use the [gensim](https://docs.tdm-pilot.org/key-terms/#gensim) library later, we will call this list a [gensim dictionary](https://docs.tdm-pilot.org/key-terms/#gensim-dictionary).) \n", "\n", "|Unique Words|\n", "| --- |\n", "|and|\n", "|book|\n", "|distance|\n", "|eggs|\n", "|grass|\n", "|green|\n", "|ham|\n", "|like|\n", "|met|\n", "|out|\n", "|sailors|\n", "|sea|\n", "|spread|\n", "|the|\n", "|troubles|\n", "|was|\n", "|were|\n", "\n", "Now let's count the occurrences of each unique word in each text.\n", "\n", "|word|text1|text2|text3|text4|\n", "|---|---|---|---|---|\n", "|and|1|1|0|0|\n", "|book|0|1|0|0|\n", "|distance|1|0|0|0|\n", "|eggs|0|1|0|0|\n", "|grass|1|0|0|1|\n", "|green|1|1|1|1|\n", "|ham|0|1|0|0|\n", "|like|1|1|1|0|\n", "|met|0|0|2|0|\n", "|out|1|1|0|0|\n", "|sailors|0|0|1|0|\n", "|sea|1|0|1|0|\n", "|spread|1|1|0|0|\n", "|the|3|1|1|1|\n", "|troubles|0|0|1|0|\n", "|was|1|0|0|1|\n", "|were|0|1|1|0|" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Computing TF-IDF (Example 1)\n", "\n", "We have enough information now to compute [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) for every word in our corpus. Recall the plain English formula.\n", "\n", "$$(Times-the-word-occurs-in-given-document) \cdot \mbox{log} \frac{(Total-number-of-documents)}{(Number-of-documents-containing-word)}$$\n", "\n", "We can use the formula to compute [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) for the most common word in our corpus: 'the'. In total, we will compute [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) four times (once for each of our texts). \n", "\n", "|word|text1|text2|text3|text4|\n", "|---|---|---|---|---|\n", "|the|3|1|1|1|\n", "\n", "text1: $$ tf-idf = 3 \cdot \mbox{log} \frac{4}{(4)} = 3 \cdot \mbox{log} 1 = 3 \cdot 0 = 0$$\n", "text2: $$ tf-idf = 1 \cdot \mbox{log} \frac{4}{(4)} = 1 \cdot \mbox{log} 1 = 1 \cdot 0 = 0$$\n", "text3: $$ tf-idf = 1 \cdot \mbox{log} \frac{4}{(4)} = 1 \cdot \mbox{log} 1 = 1 \cdot 0 = 0$$\n", "text4: $$ tf-idf = 1 \cdot \mbox{log} \frac{4}{(4)} = 1 \cdot \mbox{log} 1 = 1 \cdot 0 = 0$$\n", "\n", "The results of our analysis suggest 'the' has a weight of 0 in every document. The word 'the' exists in all of our documents, and therefore it is not a significant term to differentiate one document from another.\n", "\n", "Given that idf is\n", "\n", "$$\mbox{log} \frac{(Total-number-of-documents)}{(Number-of-documents-containing-word)}$$\n", "\n", "and \n", "\n", "$$\mbox{log} 1 = 0$$\n", "we can see that [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) will be 0 for any word that occurs in every document. That is, if a word occurs in every document, then it is not a significant term for any individual document." ] },
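 { "cell_type": "markdown", "metadata": {}, "source": [ "We can confirm this result with the illustrative `tf_idf` helper defined earlier:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Verify Example 1: 'the' occurs in all four documents, so every score is 0\n", "print(tf_idf(3, 4, 4))  # 'the' in text1\n", "print(tf_idf(1, 4, 4))  # 'the' in text2\n", "print(tf_idf(1, 4, 4))  # 'the' in text3\n", "print(tf_idf(1, 4, 4))  # 'the' in text4" ] },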
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Computing TF-IDF (Example 2)\n", "\n", "Let's try a second example with the word 'out'. Recall the plain English formula.\n", "\n", "$$(Times-the-word-occurs-in-given-document) \cdot \mbox{log} \frac{(Total-number-of-documents)}{(Number-of-documents-containing-word)}$$\n", "\n", "We will compute [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) four times, once for each of our texts. \n", "\n", "|word|text1|text2|text3|text4|\n", "|---|---|---|---|---|\n", "|out|1|1|0|0|\n", "\n", "text1: $$ tf-idf = 1 \cdot \mbox{log} \frac{4}{(2)} = 1 \cdot \mbox{log} 2 = 1 \cdot .3010 = .3010$$\n", "text2: $$ tf-idf = 1 \cdot \mbox{log} \frac{4}{(2)} = 1 \cdot \mbox{log} 2 = 1 \cdot .3010 = .3010$$\n", "text3: $$ tf-idf = 0 \cdot \mbox{log} \frac{4}{(2)} = 0 \cdot \mbox{log} 2 = 0 \cdot .3010 = 0$$\n", "text4: $$ tf-idf = 0 \cdot \mbox{log} \frac{4}{(2)} = 0 \cdot \mbox{log} 2 = 0 \cdot .3010 = 0$$\n", "\n", "The results of our analysis suggest 'out' has some significance in text1 and text2, but no significance for text3 and text4 where the word does not occur.\n", "\n", "## Computing TF-IDF (Example 3)\n", "\n", "Let's try one last example with the word 'met'. Here's the [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) formula again:\n", "\n", "$$(Times-the-word-occurs-in-given-document) \cdot \mbox{log} \frac{(Total-number-of-documents)}{(Number-of-documents-containing-word)}$$\n", "\n", "And here's how many times the word 'met' occurs in each text.\n", "\n", "|word|text1|text2|text3|text4|\n", "|---|---|---|---|---|\n", "|met|0|0|2|0|\n", "\n", "text1: $$ tf-idf = 0 \cdot \mbox{log} \frac{4}{(1)} = 0 \cdot \mbox{log} 4 = 0 \cdot .6021 = 0$$\n", "text2: $$ tf-idf = 0 \cdot \mbox{log} \frac{4}{(1)} = 0 \cdot \mbox{log} 4 = 0 \cdot .6021 = 0$$\n", "text3: $$ tf-idf = 2 \cdot \mbox{log} \frac{4}{(1)} = 2 \cdot \mbox{log} 4 = 2 \cdot .6021 = 1.2042$$\n", "text4: $$ tf-idf = 0 \cdot \mbox{log} \frac{4}{(1)} = 0 \cdot \mbox{log} 4 = 0 \cdot .6021 = 0$$\n", "\n", "As should be expected, we can see that the word 'met' is very significant in text3 but not significant in any other text since it does not occur in any other text. \n", "\n", "## The Full TF-IDF Example Table\n", "\n", "Here are the original sentences for each text:\n", "\n", "* text1 = 'The grass was green and spread out the distance like the sea.'\n", "* text2 = 'Green eggs and ham were spread out like the book.'\n", "* text3 = 'Green sailors were met like the sea met troubles.'\n", "* text4 = 'The grass was green.'\n", "\n", "And here are the corresponding [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) scores for each word in each text:\n", "\n", "|word|text1|text2|text3|text4|\n", "|---|---|---|---|---|\n", "|and|.3010|.3010|0|0|\n", "|book|0|.6021|0|0|\n", "|distance|.6021|0|0|0|\n", "|eggs|0|.6021|0|0|\n", "|grass|.3010|0|0|.3010|\n", "|green|0|0|0|0|\n", "|ham|0|.6021|0|0|\n", "|like|.1249|.1249|.1249|0|\n", "|met|0|0|1.2042|0|\n", "|out|.3010|.3010|0|0|\n", "|sailors|0|0|.6021|0|\n", "|sea|.3010|0|.3010|0|\n", "|spread|.3010|.3010|0|0|\n", "|the|0|0|0|0|\n", "|troubles|0|0|.6021|0|\n", "|was|.3010|0|0|.3010|\n", "|were|0|.3010|.3010|0|" ] },
\n", "\n", "* The [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) score for any word that does not occur in a text is 0.\n", "* The scores for almost every word in text4 are 0 since it is a shorter version of text1. There are no unique words in text4 since text1 contains all the same words. It is also a short text which means that there are only four words to consider. The words 'the' and 'green' occur in every text, leaving only 'was' and 'grass' which are also found in text1.\n", "* The words 'book', 'eggs', and 'ham' are significant in text2 since they only occur in that text.\n", "\n", "Now that you have a basic understanding of how [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) is computed at a small scale, let's try computing [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) on a [corpus](https://docs.tdm-pilot.org/key-terms/#corpus) which could contain millions of words.\n", "\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Computing TF-IDF with your Dataset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll use the tdm_client library to automatically retrieve the dataset in the JSON file format. \n", "\n", "Enter a [dataset ID](https://docs.tdm-pilot.org/key-terms/#dataset-ID) in the next code cell. \n", "\n", "If you don't have a dataset ID, you can:\n", "* Use the sample dataset ID already in the code cell\n", "* [Create a new dataset](https://tdm-pilot.org/builder)\n", "* [Use a dataset ID from other pre-built sample datasets](https://tdm-pilot.org/dataset/dashboard)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dataset_id = \"b4668c50-a970-c4d7-eb2c-bb6d04313542\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, import the `tdm_client`, passing the `dataset_id` as an argument using the `get_dataset` method." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Importing your dataset with a dataset ID\n", "import tdm_client\n", "# Pull in the dataset that matches `dataset_id`\n", "# in the form of a gzipped JSON lines file.\n", "dataset_file = tdm_client.get_dataset(dataset_id)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Apply Pre-Processing Filters (if available)\n", "If you completed pre-processing with the \"Exploring Metadata and Pre-processing\" notebook, you can use your CSV file of dataset IDs to automatically filter the dataset. Your pre-processed CSV file must be in the root folder." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Import a pre-processed CSV file of filtered dataset IDs.\n", "# If you do not have a pre-processed CSV file, the analysis\n", "# will run on the full dataset and may take longer to complete.\n", "import pandas as pd\n", "import os\n", "\n", "pre_processed_file_name = f'data/pre-processed_{dataset_id}.csv'\n", "\n", "if os.path.exists(pre_processed_file_name):\n", " df = pd.read_csv(pre_processed_file_name)\n", " filtered_id_list = df[\"id\"].tolist()\n", " use_filtered_list = True\n", " print('Pre-Processed CSV found. Successfully read in ' + str(len(df)) + ' documents.')\n", "else: \n", " use_filtered_list = False\n", " print('No pre-processed CSV file found. Full dataset will be used.')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Load Stopwords List\n", "\n", "If you have created a stopword list in the stopwords notebook, we will import it here. 
(You can always modify the CSV file to add or remove words, then reload the list.) Otherwise, we'll load the NLTK [stopwords](https://docs.tdm-pilot.org/key-terms/#stop-words) list automatically." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Load a custom data/stop_words.csv if available\n", "# Otherwise, load the nltk stopwords list in English\n", "\n", "# Create an empty Python list to hold the stopwords\n", "stop_words = []\n", "\n", "# The filename of the custom data/stop_words.csv file\n", "stopwords_list_filename = 'data/stop_words.csv'\n", "\n", "if os.path.exists(stopwords_list_filename):\n", "    import csv\n", "    with open(stopwords_list_filename, 'r') as f:\n", "        stop_words = list(csv.reader(f))[0]\n", "    print('Custom stopwords list loaded from CSV')\n", "else:\n", "    # Load the NLTK stopwords list\n", "    from nltk.corpus import stopwords\n", "    stop_words = stopwords.words('english')\n", "    print('NLTK stopwords list loaded')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Define a Unigram Processing Function\n", "In this step, we gather the unigrams. If there is a Pre-Processing Filter, we will only analyze documents from the filtered ID list. We will also process each unigram, assessing each one individually. We will complete the following tasks:\n", "\n", "* Lowercase all tokens\n", "* Remove tokens in stopwords list\n", "* Remove tokens with fewer than 4 characters\n", "* Remove tokens with non-alphabetic characters\n", "\n", "We can define this process in a function." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Define a function that will process individual tokens\n", "# Only a token that passes all three `if` statements\n", "# will be returned. If any `if` statement evaluates to\n", "# `True`, the token is not returned.\n", "\n", "def process_token(token):\n", "    token = token.lower()\n", "    if token in stop_words: # If True, do not return token\n", "        return\n", "    if len(token) < 4: # If True, do not return token\n", "        return\n", "    if not(token.isalpha()): # If True, do not return token\n", "        return\n", "    return token # If all are False, return the lowercased token" ] },
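 { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick check, we can pass a few sample tokens through the function. (These tokens are arbitrary examples of ours; a result of `None` means the token was filtered out. We assume the stopwords list contains 'the', as the NLTK list does.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Try the function on a few sample tokens\n", "print(process_token('The'))          # Stopword, returns None\n", "print(process_token('cat'))          # Fewer than 4 characters, returns None\n", "print(process_token('Words123'))     # Non-alphabetic characters, returns None\n", "print(process_token('Significant'))  # Passes every check, returns 'significant'" ] },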
{ "cell_type": "markdown", "metadata": {}, "source": [ "Next, we process all the unigrams into a list called `documents`. For demonstration purposes, this code runs on a limit of 500 documents, but we can change this to process all the documents." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Collecting the unigrams and processing them into `documents`\n", "\n", "limit = 500 # Change number of documents being analyzed. Set to `None` to do all documents.\n", "n = 0\n", "documents = []\n", "document_ids = []\n", " \n", "for document in tdm_client.dataset_reader(dataset_file):\n", "    processed_document = []\n", "    document_id = document['id']\n", "    if use_filtered_list is True:\n", "        # Skip documents not in our filtered_id_list\n", "        if document_id not in filtered_id_list:\n", "            continue\n", "    unigrams = document.get(\"unigramCount\", {})\n", "    for gram, count in unigrams.items():\n", "        clean_gram = process_token(gram)\n", "        if clean_gram is None:\n", "            continue\n", "        processed_document.append(clean_gram)\n", "    if len(processed_document) > 0:\n", "        # Append the ID only when we keep the document, so\n", "        # `document_ids` stays aligned with `documents`\n", "        document_ids.append(document_id)\n", "        documents.append(processed_document)\n", "        n += 1\n", "    if (limit is not None) and (n >= limit):\n", "        break\n", "print('Unigrams collected and processed.')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have all the cleaned unigrams in a list, we can use Gensim to compute TF-IDF." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "# Using Gensim to Compute \"Term Frequency-Inverse Document Frequency\"\n", "\n", "It will be helpful to remember the basic steps we did in the explanatory [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) example:\n", "\n", "1. Create a list of the frequency of every word in every document\n", "2. Create a list of every word in the [corpus](https://docs.tdm-pilot.org/key-terms/#corpus)\n", "3. Compute [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) based on that data\n", "\n", "So far, we have completed the first item by creating a list of the frequency of every word in every document. Now we need to create a list of every word in the corpus. In [gensim](https://docs.tdm-pilot.org/key-terms/#gensim), this is called a \"dictionary\". A [gensim dictionary](https://docs.tdm-pilot.org/key-terms/#gensim-dictionary) is similar to a [Python dictionary](https://docs.tdm-pilot.org/key-terms/#python-dictionary), but here it is called a [gensim dictionary](https://docs.tdm-pilot.org/key-terms/#gensim-dictionary) to show it is a specialized kind of dictionary.\n", "\n", "## Creating a Gensim Dictionary\n", "\n", "Let's create our [gensim dictionary](https://docs.tdm-pilot.org/key-terms/#gensim-dictionary). A [gensim dictionary](https://docs.tdm-pilot.org/key-terms/#gensim-dictionary) is a kind of master list of all the words across all the documents in our corpus. Each unique word is assigned an ID in the gensim dictionary. The result is a set of key/value pairs of unique tokens and their unique IDs." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "import gensim\n", "dictionary = gensim.corpora.Dictionary(documents)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have a [gensim dictionary](https://docs.tdm-pilot.org/key-terms/#gensim-dictionary), we can get a preview that displays the number of unique tokens across all of our texts." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(dictionary)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The [gensim dictionary](https://docs.tdm-pilot.org/key-terms/#gensim-dictionary) stores a unique identifier (starting with 0) for every unique token in the corpus. The [gensim dictionary](https://docs.tdm-pilot.org/key-terms/#gensim-dictionary) does not contain information on word frequencies; it only catalogs all the words in the corpus. 
You can see the unique ID for each token in the text using the `.token2id` attribute. Your corpus may have hundreds of thousands of unique words, so here we just give a preview of the first ten." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dict(list(dictionary.token2id.items())[0:10]) # Print the first ten tokens and their associated IDs." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also look up the corresponding ID for a token using the ``.get`` method." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dictionary.token2id.get('people', 0) # Get the value for the key 'people'. Return 0 if there is no token matching 'people'. The number returned is the gensim dictionary ID for the token. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Creating a Bag of Words Corpus\n", "\n", "\n", "### Example: A Single Document\n", "\n", "The next step is to combine our word frequency data found within ``documents`` with our [gensim dictionary](https://docs.tdm-pilot.org/key-terms/#gensim-dictionary) token IDs. For every document, we want to know how many times a word (notated by its ID) occurs. We can do a single document first to show how this works. We will create a [Python list](https://docs.tdm-pilot.org/key-terms/#python-list) called ``example_bow_corpus`` that will turn our word counts into a series of [tuples](https://docs.tdm-pilot.org/key-terms/#tuple) where the first number is the [gensim dictionary](https://docs.tdm-pilot.org/key-terms/#gensim-dictionary) token ID and the second number is the word frequency." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "example_bow_corpus = [dictionary.doc2bow(documents[0])] # Create an example bag of words corpus using the first document as our sample\n", "list(example_bow_corpus[0][:10]) # List out the first ten tuples in ``example_bow_corpus``" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using IDs can seem a little abstract, but we can discover the word associated with a particular ID. For demonstration purposes, the following code will replace the token IDs in the last example with the actual tokens." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "word_counts = [[(dictionary[id], count) for id, count in line] for line in example_bow_corpus]\n", "list(word_counts[0][:10])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We saw before that you could discover the [gensim dictionary](https://docs.tdm-pilot.org/key-terms/#gensim-dictionary) ID number by running:\n", "\n", "> dictionary.token2id.get('people', 0)\n", "\n", "If you wanted to discover the token given only the ID number, the method is a little more involved. You could use [list comprehension](https://docs.tdm-pilot.org/key-terms/#list-comprehensions) to find the **key** token based on the **value** ID. Normally, [Python dictionaries](https://docs.tdm-pilot.org/key-terms/#python-dictionary) only map from keys to values (not from values to keys). However, we can write a quick [list comprehension](https://docs.tdm-pilot.org/key-terms/#list-comprehensions) to go the other direction. 
(It is unlikely one would ever use these methods in practice, but they are shown here to demonstrate how the [gensim dictionary](https://docs.tdm-pilot.org/key-terms/#gensim-dictionary) is connected to the list entries in the [gensim](https://docs.tdm-pilot.org/key-terms/#gensim) ``bow_corpus``.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "[token for dict_id, token in dictionary.items() if dict_id == 100] # Find the corresponding token in our gensim dictionary for the gensim dictionary ID" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example: All Documents\n", "\n", "We have seen an example that demonstrates how the [gensim](https://docs.tdm-pilot.org/key-terms/#gensim) [bag of words](https://docs.tdm-pilot.org/key-terms/#bag-of-words) [corpus](https://docs.tdm-pilot.org/key-terms/#corpus) works on a single document. Let's apply it now to all of our documents. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bow_corpus = [dictionary.doc2bow(doc) for doc in documents]\n", "#print(bow_corpus[:3]) #Show the bag of words corpus for the first 3 documents" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next step is to create the [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) model, which will set the parameters for our implementation of [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf). In our [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) example, the formula for [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) was:\n", "\n", "$$(Times-the-word-occurs-in-given-document) \cdot \mbox{log} \frac{(Total-number-of-documents)}{(Number-of-documents-containing-word)}$$\n", "\n", "In [gensim](https://docs.tdm-pilot.org/key-terms/#gensim), the default formula for measuring [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) uses log base 2 instead of log base 10, as shown:\n", "\n", "$$(Times-the-word-occurs-in-given-document) \cdot \log_{2} \frac{(Total-number-of-documents)}{(Number-of-documents-containing-the-word)}$$\n", "\n", "If you would like to use a different formula for your [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) calculation, there is a description of [parameters you can pass](https://radimrehurek.com/gensim/models/tfidfmodel.html)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create the `TfidfModel`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = gensim.models.TfidfModel(bow_corpus) # Create our gensim TF-IDF model" ] },
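 { "cell_type": "markdown", "metadata": {}, "source": [ "If you do want a different weighting scheme, `TfidfModel` also accepts a `smartirs` parameter written in SMART notation (see the parameters documentation linked above). Here is an optional sketch using log-scaled term frequency; the name `alternate_model` is our own, and the rest of this notebook uses the default `model` created above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional: a TF-IDF model with an explicit SMART weighting scheme\n", "# 'l' = logarithmic term frequency, 't' = idf weighting, 'c' = cosine normalization\n", "alternate_model = gensim.models.TfidfModel(bow_corpus, smartirs='ltc')" ] },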
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "corpus_tfidf = model[bow_corpus] # Create TF-IDF scores for the ``bow_corpus`` using our model\n", "list(corpus_tfidf[0][:10]) # List out the TF-IDF scores for the first 10 tokens of the first text in the corpus" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's display the tokens instead of the [gensim dictionary](https://docs.tdm-pilot.org/key-terms/#gensim-dictionary) IDs." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "example_tfidf_scores = [[(dictionary[id], count) for id, count in line] for line in corpus_tfidf]\n", "list(example_tfidf_scores[0][:10]) # List out the TF-IDF scores for the first 10 tokens of the first text in the corpus" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, let's sort the terms by their [TF-IDF](https://docs.tdm-pilot.org/key-terms/#tf-idf) weights to find the most significant terms in the document." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Sort the tuples in our tf-idf scores list\n", "\n", "def Sort(tfidf_tuples): \n", " tfidf_tuples.sort(key = lambda x: x[1], reverse=True) \n", " return tfidf_tuples \n", "\n", "list(Sort(example_tfidf_scores[0])[:10]) #List the top ten tokens in our example document by their TF-IDF scores" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We could also analyze across the entire corpus to find the most unique terms. These are terms that appear frequently in a single text, but rarely or never appear in other texts. (Often, these will be proper names since a particular article may mention a name often but the name may rarely appear in other articles.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "td = { # Define a dictionary ``td`` where each document gather\n", " dictionary.get(_id): value for doc in corpus_tfidf\n", " for _id, value in doc\n", " }\n", "sorted_td = sorted(td.items(), key=lambda kv: kv[1], reverse=True) # Sort the items of ``td`` into a new variable ``sorted_td``, the ``reverse`` starts from highest to lowest" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "for term, weight in sorted_td[:25]: # Print the top 25 terms in the entire corpus\n", " print(term, weight)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And, finally, we can see the most significant term in every document." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# For each document, print the ID, most common word, and TF/IDF score\n", "\n", "for n, doc in enumerate(corpus_tfidf):\n", " if len(doc) < 1:\n", " continue\n", " word_id, score = max(doc, key=lambda x: x[1])\n", " print(document_ids[n], dictionary.get(word_id), score)\n", " if n >= 10:\n", " break" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": null }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": true, "toc_window_display": true } }, "nbformat": 4, "nbformat_minor": 4 }