{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Unsupervised methods\n", "\n", "In this lesson, we'll cover unsupervised computational text anlalysis approaches. The central methods covered are TF-IDF and Topic Modeling. Both of these are common approachs in the social sciences and humanities.\n", "\n", "[DTM/TF-IDF](#dtm)
\n", "\n", "[Topic modeling](#topics)
\n", "\n", "### Today you will\n", "* Understand the DTM and why it's important to text analysis\n", "* Learn how to create a DTM in Python\n", "* Learn basic functionality of Python's package scikit-learn\n", "* Understand tf-idf scores\n", "* Learn a simple way to identify distinctive words\n", "* Implement a basic topic modeling algorithm and learn how to tweak it\n", "* In the process, gain more familiarity and comfort with the Pandas package and manipulating data\n", "\n", "### Time\n", "- Teaching: 30 minutes\n", "- Exercises: 30 minutes\n", "\n", "\n", "### Key Jargon\n", "* *Document Term Matrix*:\n", " * a matrix that describes the frequency of terms that occur in a collection of documents. In a document-term matrix, rows correspond to documents in the collection and columns correspond to terms.\n", "* *TF-IDF Scores*: \n", " * short for term frequency–inverse document frequency, is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus.\n", "* *Topic Modeling*:\n", " * A general class of statistical models that uncover abstract topics within a text. It uses the co-occurrence of words within documents, compared to their distribution across documents, to uncover these abstract themes. The output is a list of weighted words, which indicate the subject of each topic, and a weight distribution across topics for each document.\n", " \n", "* *LDA*:\n", " * Latent Dirichlet Allocation. A particular model for topic modeling. It does not take document order into account, unlike other topic modeling algorithms." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## DTM/TF-IDF \n", "\n", "In this lesson we will use Python's scikit-learn package learn to make a document term matrix from a .csv Music Reviews dataset (collected from MetaCritic.com). We will then use the DTM and a word weighting technique called tf-idf (term frequency inverse document frequency) to identify important and discriminating words within this dataset (utilizing the Pandas package). The illustrating question: **what words distinguish reviews of Rap albums, Indie Rock albums, and Jazz albums?**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "import numpy as np\n", "import pandas as pd\n", "\n", "DATA_DIR = '../data'\n", "music_fname = 'music_reviews.csv'\n", "music_fname = os.path.join(DATA_DIR, music_fname)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### First attempt at reading in file" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "reviews = pd.read_csv(music_fname)\n", "reviews.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Challenge\n", "\n", "Our first attempt at reading in the csv file failed. Why?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Print the text of the first review." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(reviews['body'][0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Explore the Data using Pandas\n", "\n", "Let's first look at some descriptive statistics about this dataset, to get a feel for what's in it. We'll do this using the Pandas package. \n", "\n", "Note: this is always good practice. It serves two purposes. It checks to make sure your data is correct, and there's no major errors. It also keeps you in touch with your data, which will help with interpretation. <3 your data!\n", "\n", "First, what genres are in this dataset, and how many reviews in each genre?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#We can count this using the value_counts() function\n", "reviews['genre'].value_counts()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The first thing most people do is to `describe` their data. (This is the `summary` command in R, or the `sum` command in Stata)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#There's only one numeric column in our data so we only get one column for output.\n", "reviews.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This only gets us numerical summaries. To get summaries of some of the other columns, we can explicitly ask for it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "reviews.describe(include=['O'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Who were the reviewers?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "reviews['critic'].value_counts().head(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And the artists?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "reviews['artist'].value_counts().head(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can get the average score as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "reviews['score'].mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What's the distribution of scores?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "reviews['score'].plot(kind='hist');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we want to know the average score for each genre? To do this, we use Pandas `groupby` function. You'll want to get very familiar with the `groupby` function. It's quite powerful." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "reviews_grouped_by_genre = reviews.groupby(\"genre\")\n", "reviews_grouped_by_genre['score'].mean().sort_values(ascending=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Creating the DTM using scikit-learn\n", "\n", "Ok, that's the summary of the metadata. Next, we turn to analyzing the text of the reviews. Remember, the text is stored in the 'body' column. First, a preprocessing step to remove numbers." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def remove_digits(comment):\n", " return ''.join([ch for ch in comment if not ch.isdigit()])\n", "\n", "reviews['body_without_digits'] = reviews['body'].apply(remove_digits)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "reviews['body_without_digits'].head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### CountVectorizer Function\n", "\n", "Our next step is to turn the text into a document term matrix using the scikit-learn function called `CountVectorizer`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.feature_extraction.text import CountVectorizer\n", "\n", "countvec = CountVectorizer()\n", "sparse_dtm = countvec.fit_transform(reviews['body_without_digits'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Great! We made a DTM! Let's look at it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sparse_dtm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This format is called Compressed Sparse Format. It save a lot of memory to store the dtm in this format, but it is difficult to look at for a human. To illustrate the techniques in this lesson we will first convert this matrix back to a Pandas DataFrame, a format we're more familiar with. For larger datasets, you will have to use the Compressed Sparse Format. Putting it into a DataFrame, however, will enable us to get more comfortable with Pandas!" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dtm = pd.DataFrame(sparse_dtm.toarray(), columns=countvec.get_feature_names(), index=reviews.index)\n", "dtm.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Challenge\n", "\n", "Read in all the Jane Austen books from day 2 and turn them into a DTM. What will be the rows and columns?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import glob\n", "DAY2_DATA_DIR = '../../day-2/data'\n", "AUSTEN_DIR = os.path.join(DAY2_DATA_DIR, 'austen', '*.txt')\n", "fnames = glob.glob(AUSTEN_DIR)\n", "books = []\n", "for fname in fnames:\n", " with open(fname) as f:\n", " text = f.read()\n", " books.append(text)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### What can we do with a DTM?\n", "\n", "We can quickly identify the most frequent words" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dtm.sum().sort_values(ascending=False).head(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Challenge\n", "\n", "Print out the most infrequent words rather than the most frequent words. You can look at the [Pandas documentation](http://pandas.pydata.org/pandas-docs/stable/api.html#api-dataframe-stats) for more information.\n", "\n", "### Gold star challenge: \n", "* Print the average number of times each word is used in a review.\n", "* Print this out sorted from lowest to highest." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### TF-IDF scores\n", "\n", "How to find distinctive words in a corpus is a long-standing question in text analysis? Today, we'll learn one simple approach to this: TF-IDF. The idea behind words scores is to weight words not just by their frequency, but by their frequency in one document compared to their distribution across all documents. Words that are frequent, but are also used in every single document, will not be distinguising. We want to identify words that are unevenly distributed across the corpus.\n", "\n", "One of the most popular ways to weight words (beyond frequency counts) is `tf-idf score`. By offsetting the frequency of a word by its document frequency (the number of documents in which it appears) will in theory filter out common terms such as 'the', 'of', and 'and'.\n", "\n", "Traditionally, the inverse document frequency is calculated as such:\n", "\n", "number_of_documents / number_documents_with_term\n", "\n", "so:\n", "\n", "tfidf_word1 = word1_frequency_document1 * (number_of_documents / number_document_with_word1)\n", "\n", "You can, and often should, normalize the numerator: \n", "\n", "tfidf_word1 = (word1_frequency_document1 / word_count_document1) * (number_of_documents / number_document_with_word1)\n", "\n", "We can calculate this manually, but scikit-learn has a built-in function to do so. This function also uses log frequencies, so the numbers will not correspond excactly to the calculations above. We'll use the [scikit-learn calculation](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html), but a challenge for you: use Pandas to calculate this manually. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### TF-IDFVectorizer Function\n", "\n", "To do so, we simply do the same thing we did above with CountVectorizer, but instead we use the function TfidfVectorizer." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.feature_extraction.text import TfidfVectorizer\n", "\n", "tfidfvec = TfidfVectorizer()\n", "sparse_tfidf = tfidfvec.fit_transform(reviews['body_without_digits'])\n", "sparse_tfidf" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tfidf = pd.DataFrame(sparse_tfidf.toarray(), columns=tfidfvec.get_feature_names(), index=reviews.index)\n", "tfidf.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the 20 words with highest tf-idf weights." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tfidf.max().sort_values(ascending=False).head(20)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ok! We have successfully identified content words, without removing stop words. What else do you notice about this list?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Identifying Distinctive Words\n", "\n", "What can we do with this? These scores are best used when you want to identify distinctive words for individual documents, or groups of documents, compared to other groups or the corpus as a whole. To illustrate this, let's compare three genres and identify the most distinctive words by genre.\n", "\n", "First we add in a column of genre." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tfidf['genre_'] = reviews['genre']\n", "tfidf.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now lets compare the words with the highest tf-idf weight for each genre. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rap = tfidf[tfidf['genre_']=='Rap']\n", "indie = tfidf[tfidf['genre_']=='Indie']\n", "jazz = tfidf[tfidf['genre_']=='Jazz']\n", "\n", "rap.max(numeric_only=True).sort_values(ascending=False).head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "indie.max(numeric_only=True).sort_values(ascending=False).head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "jazz.max(numeric_only=True).sort_values(ascending=False).head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There we go! A method of identifying distinctive words. You notice there are some proper nouns in there. How might we remove those if we're not interested in them?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Challenge\n", "\n", "Instead of outputting the highest weighted words, output the lowest weighted words. How should we interpret these words?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Topic modeling \n", "\n", "There are many topic modeling algorithms, but we'll use LDA. This is a standard model to use. Again, the goal is not to learn everything you need to know about topic modeling. Instead, this will provide you some starter code to run a simple model, with the idea that you can use this base of knowledge to explore this further.\n", "\n", "We will run Latent Dirichlet Allocation, the most basic and the oldest version of topic modeling. We will run this in one big chunk of code. Our challenge: use our knowledge of scikit-learn that we gained aboe to walk through the code to understand what it is doing. Your challenge: figure out how to modify this code to work on your own data, and/or tweak the parameters to get better output.\n", "\n", "Note: we will be using a different dataset for this technique. The music reviews in the above dataset are often short, one word or one sentence reviews. Topic modeling is not really appropriate for texts that are this short. Instead, we want texts that are longer and are composed of multiple topics each. For this exercise we will use a database of children's literature from the 19th century. \n", "\n", "The data were compiled by students in this course: http://english197s2015.pbworks.com/w/page/93127947/FrontPage\n", "Found here: http://dhresourcesforprojectbuilding.pbworks.com/w/page/69244469/Data%20Collections%20and%20Datasets#demo-corpora\n", "\n", "That page has additional corpora, for those interested in exploring text analysis further.\n", "\n", "I did some minimal cleaning to get the children's literature data in .csv format for our use." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "literature_fname = os.path.join(DATA_DIR, 'childrens_lit.csv.bz2')\n", "df_lit = pd.read_csv(literature_fname, sep='\\t', encoding = 'utf-8', compression = 'bz2', index_col=0)\n", "\n", "#drop rows where the text is missing\n", "df_lit = df_lit.dropna(subset=['text'])\n", "df_lit.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we're ready to fit the model. This requires the use of CountVecorizer, which we've already used, and the scikit-learn function LatentDirichletAllocation.\n", "\n", "See [here](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.LatentDirichletAllocation.html) for more information about this function. \n", "\n", "First, we have to import it from sklearn. We tell the model how many topics we expect to find." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.decomposition import LatentDirichletAllocation\n", "n_topics = 5" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In sklearn, the input to LDA is a DTM (with either counts or TF-IDF scores)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tfidf_vectorizer = TfidfVectorizer(max_df=0.80, min_df=50,\n", " max_features=5000,\n", " stop_words='english')\n", "tfidf = tfidf_vectorizer.fit_transform(df_lit['text'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tf_vectorizer = CountVectorizer(max_df=0.80, min_df=50,\n", " max_features=5000,\n", " stop_words='english'\n", " )\n", "tf = tf_vectorizer.fit_transform(df_lit['text'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is where we fit the model." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import warnings\n", "warnings.filterwarnings(\"ignore\", category=DeprecationWarning) \n", "lda = LatentDirichletAllocation(n_components=n_topics, max_iter=20, random_state=0)\n", "lda = lda.fit(tf)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is a function to print out the top words for each topic in a pretty way. Don't worry too much about understanding every line of this code." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def print_top_words(model, feature_names, n_top_words):\n", " for topic_idx, topic in enumerate(model.components_):\n", " print(\"\\nTopic #{}:\".format(topic_idx))\n", " print(\" \".join([feature_names[i]\n", " for i in topic.argsort()[:-n_top_words - 1:-1]]))\n", " print()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tf_feature_names = tf_vectorizer.get_feature_names()\n", "print_top_words(lda, tf_feature_names, 20)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Further resources\n", "\n", "[This blog post](https://de.dariah.eu/tatom/feature_selection.html) goes through finding distinctive words using Python in more detail \n", "\n", "Paper: [Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict](http://languagelog.ldc.upenn.edu/myl/Monroe.pdf), Burt Monroe, Michael Colaresi, Kevin Quinn\n", "\n", "[More detailed description of implementing LDA using scikit-learn](http://scikit-learn.org/stable/auto_examples/applications/topics_extraction_with_nmf_lda.html#sphx-glr-auto-examples-applications-topics-extraction-with-nmf-lda-py).\n", "\n", "[Topic modeling with Textacy](https://github.com/repmax/topic-model/blob/master/topic-modelling.ipynb)" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.4" } }, "nbformat": 4, "nbformat_minor": 1 }