{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Text Analysis" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Motivation: Text Analysis on University of Maryland Patents\n", "\n", "Recall that we were able to use the [Patentsview API](https://www.patentsview.org/api/query-language.html) to pull data from patents awarded to inventors at University of Maryland, including the abstract from each of the patents. Suppose we wanted to know what types of patents were awarded to UMD. We can look at the information from the abstracts and read through them, but this would take a very long time since there are almost 1,300 abstracts. Instead, we can use **text analysis** to help us.\n", "\n", "However, as-is, the text from abstracts can be difficult to analyze. We aren't able to use traditional statistical techniques without some heavy data manipulation, because the text is essentially a categorical variable with unique values for each patent. We need to basically break it apart and clean the data before we apply our data analysis techniques. \n", "\n", "In this notebook, we will go through the process of cleaning and processing the text data to prepare it for **topic modeling**, which is the process of automatically assigning topics to individual **documents** (in this case, an individual document is an abstract from a patent). We will use a technique called **Latent Dirichlet Allocation** as our topic modeling technique, and try to determine what sorts of patents were awarded to University of Maryland." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction to Text Analysis\n", "\n", "**Text analysis** is used to extract useful information from or summarize a large amount of unstructured text stored in documents. This opens up the opportunity of using text data alongside more conventional data sources (e.g. surveys and administrative data). The goal of text analysis is to take a large corpus of complex and unstructured text data and extract important and meaningful messages in a comprehensible way. \n", "\n", "Text analysis can help with the following tasks:\n", "\n", "* **Information Retrieval**: Find relevant information in a large database, such as a systematic literature review, that would be very time-consuming for humans to do manually. \n", "\n", "* **Clustering and Text Categorization**: Summarize a large corpus of text by finding the most important phrases, using methods like topic modeling. \n", "\n", "* **Text Summarization**: Create category-sensitive text summaries of a large corpus of text. \n", "\n", "* **Machine Translation**: Translate documents from one language to another. \n", " \n", "## Glossary of Terms\n", "\n", "* **Corpus**: A corpus is the set of all text documents used in your analysis; for example, your corpus of text may include hundreds of abstracts from patent data.\n", "\n", "* **Tokenize**: Tokenization is the process by which text is separated into meaningful terms or phrases. In English this is easy to do for individual words, as they are separated by whitespace; however, it can get more complicated to automate determining which groups of words constitute meaningful phrases. \n", "\n", "* **Stemming**: Stemming is normalizing text by reducing all forms or conjugations of a word to the word's most basic form. In English, this can mean making a rule of removing the suffixes \"ed\" or \"ing\" from the end of all words, but it gets more complex. 
For example, \"to go\" is irregular, so you need to tell the algorithm that \"went\" and \"goes\" stem from a common lemma, and should be considered alternate forms of the word \"go.\"\n", "\n", "* **TF-IDF**: TF-IDF (term frequency-inverse document frequency) is an example of feature engineering where the most important words are extracted by taking account their frequency in documents and the entire corpus of documents as a whole.\n", "\n", "* **Topic Modeling**: Topic modeling is an unsupervised learning method where groups of words that often appear together are clustered into topics. Typically, the words in one topic should be related and make sense (e.g. boat, ship, captain). Individual documents can fall under one topic or multiple topics. \n", "\n", "* **LDA**: LDA (Latent Dirichlet Allocation) is a type of probabilistic model commonly used for topic modeling. \n", "\n", "* **Stop Words**: Stop words are words that have little semantic meaning but occur very frequently, like prepositions, articles and common nouns. For example, every document (in English) will probably contain the words \"and\" and \"the\" many times. You will often remove them as part of preprocessing using a list of stop words.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Setup" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# interacting with websites and web-APIs\n", "import requests # easy way to interact with web sites and services\n", "\n", "# data manipulation\n", "import pandas as pd\n", "import numpy as np\n", "\n", "import nltk\n", "\n", "from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer\n", "from sklearn.decomposition import LatentDirichletAllocation\n", "from sklearn import preprocessing\n", "\n", "from nltk.corpus import stopwords\n", "from nltk import SnowballStemmer\n", "import string" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load the Data\n", "\n", "To start, we'll load the data using the PatentsView API. For more information about how this part works, view the API notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "url = 'http://www.patentsview.org/api/patents/query?'\n", "PARAMS = {'q': '{\"assignee_organization\":\"university of maryland\"}',\n", " 'f': '[\"patent_title\",\"patent_year\", \"patent_type\", \"patent_abstract\"]',\n", " 'o': '{\"per_page\":1300}'}\n", "r = requests.get(url, params=PARAMS) # Get response from the URL\n", "r.status_code # Check status code" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "json = r.json()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "patents = pd.DataFrame(json['patents'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "patents.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "abstracts = patents.patent_abstract" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Topic Modeling\n", "\n", "We are going to apply topic modeling, an unsupervised learning method, to our corpus to find the high-level topics in our corpus. Through this process, we'll discuss how to clean and preprocess our data to get the best results. Topic modeling is a broad subfield of machine learning and natural language processing. 
We are going to focus on a common modeling approach called Latent Dirichlet Allocation (LDA). \n", "\n", "To use topic modeling, we first have to assume that topics exist in our corpus, and that some small number of these topics can \"explain\" the corpus. Topics in this context refer to words from the corpus, in a list that is ranked by probability. A single document can be explained by multiple topics. For instance, an article on net neutrality would fall under the topic \"technology\" as well as the topic \"politics.\" The set of topics used by a document is known as the document's allocation, hence the name Latent Dirichlet Allocation: each document has an allocation of latent topics drawn from a Dirichlet distribution. \n", "\n", "We will use topic modeling in order to determine what types of inventions have been produced at the University of Maryland. \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Preparing Text Data for Natural Language Processing (NLP)\n", "\n", "The first important step in working with text data is cleaning and processing the data, which includes (but is not limited to):\n", "\n", "- forming a corpus of text\n", "- stemming and lemmatization\n", "- tokenization\n", "- removing stop-words\n", "- finding words co-located together (N-grams)\n", "\n", "\n", "\n", "The ultimate goal is to transform our text data into a form an algorithm can work with, because a document or a corpus of text cannot be fed directly into an algorithm. Algorithms expect numerical feature vectors with certain fixed sizes, and can't handle documents, which are basically sequences of symbols with variable length. We will be transforming our text corpus into a *bag of n-grams* to be further analyzed. In this form our text data is represented as a matrix where each row refers to a specific patent abstract (document) and each column corresponds to a word (feature), with each cell counting how often that word occurs." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Stemming and Lemmatization - Distilling text data\n", "\n", "We want to process our text through *stemming and lemmatization*, or replacing words with their root or simplest form. For example, \"systems,\" \"systematic,\" and \"system\" are all different words, but we can replace all these words with \"system\" without sacrificing much meaning. \n", "\n", "- A **lemma** is the original dictionary form of a word (e.g. the lemma for \"lies,\" \"lied,\" and \"lying\" is \"lie\").\n", "- The process of turning a word into its simplest form is **stemming**. There are several well-known stemming algorithms -- Porter, Snowball, Lancaster -- that all have their respective strengths and weaknesses.\n", "\n", "In this notebook, we'll use the Snowball Stemmer:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Examples of how a Stemmer works:\n", "stemmer = SnowballStemmer(\"english\")\n", "print(stemmer.stem('lies'))\n", "print(stemmer.stem(\"lying\"))\n", "print(stemmer.stem('systematic'))\n", "print(stemmer.stem(\"running\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Removing Punctuation\n", "\n", "For some purposes, we might want to preserve punctuation. For example, if we wanted to detect the sentiment of text, we might want to keep exclamation points, because they signify something about the text. For our purposes, however, we will simply strip the punctuation so that it does not affect our analysis. 
To do this, we use the `string` package, creating a translator that takes any string and \"translates\" it into a string in which every punctuation character has been replaced with a space.\n", "\n", "An example using the first abstract in our corpus is shown below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Before\n", "abstracts[0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create translator\n", "translator=str.maketrans(string.punctuation, ' ' * len(string.punctuation))\n", "\n", "# After\n", "abstracts[0].translate(translator)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Tokenizing\n", "\n", "We want to separate text into individual tokens (generally individual words). To do this, we'll first write a function that takes a string and splits it up into individual words. We'll do the whole process of removing punctuation, stemming, and tokenizing all in one function." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def tokenize(text):\n", "    translator=str.maketrans(string.punctuation, ' '*len(string.punctuation)) # translator that replaces punctuation with spaces\n", "    return [stemmer.stem(i) for i in text.translate(translator).split()]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `tokenize` function actually does several things at the same time. First, it replaces any punctuation with spaces using the `translate` method. Next, the `split` method breaks the text apart into individual words. Finally, `stemmer.stem` creates a list of the stemmed versions of each of those words. \n", "\n", "Let's take a look at an example of how this works using the first abstract in our corpus." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tokenize(abstracts[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What we get out of it is something called a **bag of words**. This is a list of all of the words that are in the abstract, cleaned of all punctuation and stemmed. The paragraph is now represented as a vector of individual words rather than as one whole entity. \n", "\n", "We can apply this to each abstract in our corpus using `CountVectorizer`. This will not only do the tokenizing, but it will also count any duplicates of words and create a matrix that contains the frequency of each word. This will be quite a large matrix (the number of columns will be the number of unique words), so it outputs the data as a sparse matrix.\n", "\n", "Similar to how we fit models using `sklearn`, we will first create the `vectorizer` object (you can think of this like a model object), and then fit it with our abstracts. This should give us back our overall corpus bag of words, as well as a list of features (that is, the unique words in all the abstracts)."
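] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before running this on the full set of abstracts, here is a minimal sketch of what `CountVectorizer` produces, using a tiny made-up corpus of three short \"documents\" (the sentences below are invented purely for illustration and are not part of the patent data):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# A minimal sketch on a made-up toy corpus (not part of the patent data)\n", "toy_corpus = ['the laser system uses a laser',\n", "              'a polymer coating protects the laser',\n", "              'the polymer coating is heated']\n", "toy_vectorizer = CountVectorizer(analyzer='word')\n", "toy_bag = toy_vectorizer.fit_transform(toy_corpus)\n", "print(toy_vectorizer.get_feature_names()) # the unique words become the columns\n", "print(toy_bag.toarray()) # one row per document, cells are word counts"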
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vectorizer = CountVectorizer(analyzer= \"word\", # unit of features are single words rather then phrases of words \n", " tokenizer=tokenize, # function to create tokens\n", " ngram_range=(0,1),\n", " strip_accents='unicode',\n", " min_df = 0.05,\n", " max_df = 0.95)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bag_of_words = vectorizer.fit_transform(abstracts) #transform our corpus is a bag of words \n", "features = vectorizer.get_feature_names()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(bag_of_words[0])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "features[0:10]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have our bag of words, we can start using it for models such as Latent Dirichlet Allocation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Latent Dirichlet Allocation\n", "\n", "Latent Dirichlet Allocation (LDA) is a statistical model that generates groups based on similarities. This is an example of an **unsupervised machine learning model**. That is, we don't have any sort of outcome variable -- we're just trying to group the abstracts into rough categories.\n", "\n", "Let's try fitting an LDA model. The way we do it is very similar to the models we've fit before from `sklearn`. We first create a `LatentDirichletAllocation` object, then fit it using our corpus bag of words." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lda = LatentDirichletAllocation(learning_method='online') \n", "\n", "doctopic = lda.fit_transform( bag_of_words )" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ls_keywords = []\n", "for i,topic in enumerate(lda.components_):\n", " word_idx = np.argsort(topic)[::-1][:5]\n", " keywords = ', '.join( features[i] for i in word_idx)\n", " ls_keywords.append(keywords)\n", " print(i, keywords)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This doesn't look very helpful! There are way too many common words in the corpus, such as 'a', 'of', and so on. We need to remove them, because they don't actually have any interesting information about the documents." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Removing meaningless text - Stopwords\n", "\n", "Stopwords are words that are found commonly throughout a text and carry little semantic meaning. Examples of common stopwords are prepositions (\"to\", \"on\", \"in\"), articles (\"the\", \"an\", \"a\"), conjunctions (\"and\", \"or\", \"but\") and common nouns. For example, the words *the* and *of* are totally ubiquitous, so they won't serve as meaningful features, whether to distinguish documents from each other or to tell what a given document is about. You may also run into words that you want to remove based on where you obtained your corpus of text or what it's about. There are many lists of common stopwords available for you to use, both for general documents and for specific contexts, so you don't have to start from scratch. \n", "\n", "We can eliminate stopwords by checking all the words in our corpus against a list of commonly occuring stopwords that comes with NLTK." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Download most current stopwords\n", "nltk.download('stopwords')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "stop = stopwords.words('english')\n", "stop[0:10]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Tokenize stop words to match\n", "eng_stopwords = [tokenize(s)[0] for s in stop]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vectorizer = CountVectorizer(analyzer= \"word\", # unit of features are single words rather then phrases of words \n", " tokenizer=tokenize, # function to create tokens\n", " ngram_range=(0,1),\n", " strip_accents='unicode',\n", " stop_words=eng_stopwords,\n", " min_df = 0.05,\n", " max_df = 0.95)\n", "\n", "# Creating bag of words\n", "bag_of_words = vectorizer.fit_transform(abstracts) #transform our corpus is a bag of words \n", "features = vectorizer.get_feature_names()\n", "\n", "# Fitting LDA model\n", "lda = LatentDirichletAllocation(n_components = 5, learning_method='online') \n", "doctopic = lda.fit_transform( bag_of_words )\n", "\n", "# Displaying the top keywords in each topic\n", "ls_keywords = []\n", "for i,topic in enumerate(lda.components_):\n", " word_idx = np.argsort(topic)[::-1][:5]\n", " keywords = ', '.join( features[i] for i in word_idx)\n", " ls_keywords.append(keywords)\n", " print(i, keywords)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### N-grams - Adding context by creating N-grams\n", "\n", "Obviously, reducing a document to a bag of words means losing much of its meaning - we put words in certain orders, and group words together in phrases and sentences, precisely to give them more meaning. If you follow the processing steps we've gone through so far, splitting your document into individual words and then removing stopwords, you'll completely lose all phrases like \"kick the bucket,\" \"commander in chief,\" or \"sleeps with the fishes.\" \n", "\n", "One way to address this is to break down each document similarly, but rather than treating each word as an individual unit, treat each group of 2 words, or 3 words, or *n* words, as a unit. We call this a \"bag of *n*-grams,\" where *n* is the number of words in each chunk. Then you can analyze which groups of words commonly occur together (in a fixed order). 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vectorizer = CountVectorizer(analyzer= \"word\", # unit of features are single words rather then phrases of words \n", " tokenizer=tokenize, # function to create tokens\n", " ngram_range=(0,2), # Allow for bigrams\n", " strip_accents='unicode',\n", " stop_words=eng_stopwords,\n", " min_df = 0.05,\n", " max_df = 0.95)\n", "\n", "# Creating bag of words\n", "bag_of_words = vectorizer.fit_transform(abstracts) #transform our corpus is a bag of words \n", "features = vectorizer.get_feature_names()\n", "\n", "# Fitting LDA model\n", "lda = LatentDirichletAllocation(n_components = 5, learning_method='online') \n", "doctopic = lda.fit_transform( bag_of_words )\n", "\n", "# Displaying the top keywords in each topic\n", "ls_keywords = []\n", "for i,topic in enumerate(lda.components_):\n", " word_idx = np.argsort(topic)[::-1][:10]\n", " keywords = ', '.join( features[i] for i in word_idx)\n", " ls_keywords.append(keywords)\n", " print(i, keywords)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### TF-IDF - Weighting terms based on frequency\n", "\n", "A final step in cleaning and processing our text data is **Term Frequency-Inverse Document Frequency (TF-IDF)**. TF-IDF is based on the idea that the words (or terms) that are most related to a certain topic will occur frequently in documents on that topic, and infrequently in unrelated documents. TF-IDF re-weights words so that we emphasize words that are unique to a document and suppress words that are common throughout the corpus by inversely weighting terms based on their frequency within the document and across the corpus.\n", "\n", "Let's look at how to use TF-IDF:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "stop = stopwords.words('english') + ['invent', 'produce', 'method', 'use', 'first', 'second']\n", "full_stopwords = [tokenize(s)[0] for s in stop]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vectorizer = CountVectorizer(analyzer= \"word\", # unit of features are single words rather then phrases of words \n", " tokenizer=tokenize, # function to create tokens\n", " ngram_range=(0,2),\n", " strip_accents='unicode',\n", " stop_words=full_stopwords,\n", " min_df = 0.05,\n", " max_df = 0.95)\n", "# Creating bag of words\n", "bag_of_words = vectorizer.fit_transform(abstracts) #transform our corpus is a bag of words \n", "features = vectorizer.get_feature_names()\n", "\n", "# Use TfidfTransformer to re-weight bag of words \n", "transformer = TfidfTransformer(norm = None, smooth_idf = True, sublinear_tf = True)\n", "tfidf = transformer.fit_transform(bag_of_words)\n", "\n", "# Fitting LDA model\n", "lda = LatentDirichletAllocation(n_components = 5, learning_method='online') \n", "doctopic = lda.fit_transform(tfidf)\n", "\n", "# Displaying the top keywords in each topic\n", "ls_keywords = []\n", "for i,topic in enumerate(lda.components_):\n", " word_idx = np.argsort(topic)[::-1][:10]\n", " keywords = ', '.join( features[i] for i in word_idx)\n", " ls_keywords.append(keywords)\n", " print(i, keywords)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ls_keywords" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "doctopic" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "topic_df = 
, { "cell_type": "markdown", "metadata": {}, "source": [ "# Supervised Learning: Document Classification\n", "\n", "Previously, we used topic modeling to uncover themes in the patent abstracts. That is an example of unsupervised learning: we were looking to uncover structure in the form of topics, or groups of related abstracts, but we did not necessarily know the ground truth of how many topics we should find or which abstracts belonged to which topic. \n", "\n", "We can also do supervised learning with text data. In supervised learning, we have a *known* outcome or label (*Y*) that we want to produce given some data (*X*), and in general, we want to be able to produce this *Y* when we *don't* know it, or when we *only* have *X*. \n", "\n", "In order to produce labels, we first need to have examples our algorithm can learn from, a \"training set.\" In the context of text analysis, developing a training set can be very expensive, as it can require a large amount of human labor or linguistic expertise. **Document classification** is an example of supervised learning in which we want to characterize our documents based on their contents (*X*). A common example of document classification is spam e-mail detection. Another example of supervised learning in text analysis is *sentiment analysis*, where *X* is our documents and *Y* is the state of the author. This \"state\" is dependent on the question you're trying to answer, and can range from the author being happy or unhappy with a product to the author being politically conservative or liberal. Another example is *part-of-speech tagging*, where each *X* is an individual word and *Y* is its part of speech. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Further Resources\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A great resource for NLP in Python is \n", "[Natural Language Processing with Python](https://www.amazon.com/Natural-Language-Processing-Python-Analyzing/dp/0596516495)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": "block", "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 4 }