{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Topic Modeling" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another popular text analysis technique is called topic modeling. The ultimate goal of topic modeling is to find various topics that are present in your corpus. Each document in the corpus will be made up of at least one topic, if not multiple topics.\n", "\n", "In this notebook, we will be covering the steps on how to do **Latent Dirichlet Allocation (LDA)**, which is one of many topic modeling techniques. It was specifically designed for text data.\n", "\n", "To use a topic modeling technique, you need to provide (1) a document-term matrix and (2) the number of topics you would like the algorithm to pick up.\n", "\n", "Once the topic modeling technique is applied, your job as a human is to interpret the results and see if the mix of words in each topic make sense. If they don't make sense, you can try changing up the number of topics, the terms in the document-term matrix, model parameters, or even try a different model." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Topic Modeling - Attempt #1 (All Text)" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ " aaaaah aaaaahhhhhhh aaaaauuugghhhhhh aaaahhhhh aaah aah abc \\\n", "ali 0 0 0 0 0 0 1 \n", "anthony 0 0 0 0 0 0 0 \n", "bill 1 0 0 0 0 0 0 \n", "bo 0 1 1 1 0 0 0 \n", "dave 0 0 0 0 1 0 0 \n", "hasan 0 0 0 0 0 0 0 \n", "jim 0 0 0 0 0 0 0 \n", "joe 0 0 0 0 0 0 0 \n", "john 0 0 0 0 0 0 0 \n", "louis 0 0 0 0 0 3 0 \n", "mike 0 0 0 0 0 0 0 \n", "ricky 0 0 0 0 0 0 0 \n", "\n", " abcs ability abject ... zee zen zeppelin zero zillion \\\n", "ali 0 0 0 ... 0 0 0 0 0 \n", "anthony 0 0 0 ... 0 0 0 0 0 \n", "bill 1 0 0 ... 0 0 0 1 1 \n", "bo 0 1 0 ... 0 0 0 1 0 \n", "dave 0 0 0 ... 0 0 0 0 0 \n", "hasan 0 0 0 ... 2 1 0 1 0 \n", "jim 0 0 0 ... 0 0 0 0 0 \n", "joe 0 0 0 ... 0 0 0 0 0 \n", "john 0 0 0 ... 0 0 0 0 0 \n", "louis 0 0 0 ... 0 0 0 2 0 \n", "mike 0 0 0 ... 0 0 2 1 0 \n", "ricky 0 1 1 ... 0 0 0 0 0 \n", "\n", " zombie zombies zoning zoo éclair \n", "ali 1 0 0 0 0 \n", "anthony 0 0 0 0 0 \n", "bill 1 1 1 0 0 \n", "bo 0 0 0 0 0 \n", "dave 0 0 0 0 0 \n", "hasan 0 0 0 0 0 \n", "jim 0 0 0 0 0 \n", "joe 0 0 0 0 0 \n", "john 0 0 0 0 1 \n", "louis 0 0 0 0 0 \n", "mike 0 0 0 0 0 \n", "ricky 0 0 0 1 0 \n", "\n", "[12 rows x 7468 columns]" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Let's read in our document-term matrix\n", "import pandas as pd\n", "import pickle\n", "\n", "data = pd.read_pickle('dtm_stop.pkl')\n", "data" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "# # Uncomment to setuo LDA logging to a file\n", "# import logging\n", "# logging.basicConfig(filename='lda_model.log', format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)\n", "\n", "# Import the necessary modules for LDA with gensim\n", "# Terminal / Anaconda Navigator: conda install -c conda-forge gensim\n", "from gensim import matutils, models\n", "import scipy.sparse # sparse matrix format is required for gensim" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ " ali anthony bill bo dave hasan jim joe john louis \\\n", "aaaaah 0 0 1 0 0 0 0 0 0 0 \n", "aaaaahhhhhhh 0 0 0 1 0 0 0 0 0 0 \n", "aaaaauuugghhhhhh 0 0 0 1 0 0 0 0 0 0 \n", "aaaahhhhh 0 0 0 1 0 0 0 0 0 0 \n", "aaah 0 0 0 0 1 0 0 0 0 0 \n", "\n", " mike ricky \n", "aaaaah 0 0 \n", "aaaaahhhhhhh 0 0 \n", "aaaaauuugghhhhhh 0 0 \n", "aaaahhhhh 0 0 \n", "aaah 0 0 " ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# One of the required inputs is a term-document matrix (transpose of document-term)\n", "tdm = data.transpose()\n", "tdm.head()" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "# We're going to put the term-document matrix into a new gensim format, from df --> sparse matrix --> gensim corpus\n", "sparse_counts = scipy.sparse.csr_matrix(tdm)\n", "corpus = matutils.Sparse2Corpus(sparse_counts)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "# Gensim also requires a dictionary of all the terms and their respective location in the term-document matrix\n", "cv = pickle.load(open(\"cv_stop.pkl\", \"rb\")) # CountVectorizor creates dtm\n", "id2word = dict((v, k) for k, v in cv.vocabulary_.items())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have the corpus (term-document matrix) and id2word (dictionary of location: term), we're ready to train the LDA model. We need to specify two other parameters - the number of topics and the number of training passes. Let's start the number of topics at 2, see if the results make sense, and increase the number from there." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Topic 0 \n", " 0.009*\"shit\" + 0.008*\"fucking\" + 0.007*\"fuck\" + 0.005*\"theyre\" + 0.005*\"didnt\" + 0.005*\"man\" + 0.004*\"cause\" + 0.004*\"hes\" + 0.004*\"say\" + 0.004*\"did\" \n", "\n", "Topic 1 \n", " 0.006*\"fucking\" + 0.006*\"say\" + 0.005*\"going\" + 0.005*\"went\" + 0.005*\"want\" + 0.005*\"thing\" + 0.005*\"good\" + 0.005*\"day\" + 0.005*\"love\" + 0.004*\"hes\" \n", "\n" ] } ], "source": [ "# Now that we have the corpus (term-document matrix) and id2word (dictionary of location: term),\n", "# we need to specify two other parameters as well - the number of topics and the number of passes.\n", "\n", "# *Note: gensim refers to it as corpus, we call it term-document matrix\n", "# passes is how many times the algorithm is supposed to pass over the whole corpus\n", "import numpy as np\n", "lda = models.LdaModel(corpus=corpus, \n", " id2word=id2word, \n", " num_topics=2, \n", " passes=10, \n", " random_state=np.random.RandomState(seed=10))\n", "\n", "for topic, topwords in lda.show_topics():\n", " print(\"Topic\", topic, \"\\n\", topwords, \"\\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Increment the number of topics to see if it improves**" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Topic 0 \n", " 0.008*\"shit\" + 0.006*\"fucking\" + 0.005*\"didnt\" + 0.005*\"fuck\" + 0.005*\"did\" + 0.005*\"say\" + 0.005*\"day\" + 0.004*\"hes\" + 0.004*\"little\" + 0.004*\"guys\" \n", "\n", "Topic 1 \n", " 0.008*\"love\" + 0.007*\"want\" + 0.007*\"dad\" + 0.005*\"going\" + 0.005*\"say\" + 0.004*\"stuff\" + 0.004*\"good\" + 0.004*\"shes\" + 0.004*\"bo\" + 0.004*\"did\" \n", "\n", "Topic 2 \n", " 
0.010*\"fucking\" + 0.007*\"theyre\" + 0.006*\"fuck\" + 0.006*\"went\" + 0.006*\"theres\" + 0.006*\"cause\" + 0.006*\"say\" + 0.006*\"thing\" + 0.005*\"going\" + 0.005*\"hes\" \n", "\n" ] } ], "source": [ "# LDA for num_topics = 3\n", "lda = models.LdaModel(corpus=corpus, \n", " id2word=id2word, \n", " num_topics=3, \n", " passes=10, \n", " random_state=np.random.RandomState(seed=10))\n", "\n", "for topic, topwords in lda.show_topics():\n", " print(\"Topic\", topic, \"\\n\", topwords, \"\\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Increment the number of topics again**" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Topic 0 \n", " 0.010*\"fucking\" + 0.006*\"fuck\" + 0.006*\"shit\" + 0.006*\"going\" + 0.006*\"theyre\" + 0.006*\"say\" + 0.005*\"went\" + 0.005*\"day\" + 0.005*\"hes\" + 0.005*\"want\" \n", "\n", "Topic 1 \n", " 0.006*\"didnt\" + 0.005*\"want\" + 0.005*\"fucking\" + 0.005*\"shit\" + 0.005*\"good\" + 0.005*\"love\" + 0.004*\"really\" + 0.004*\"fuck\" + 0.004*\"man\" + 0.004*\"says\" \n", "\n", "Topic 2 \n", " 0.009*\"life\" + 0.007*\"thing\" + 0.006*\"hes\" + 0.006*\"theres\" + 0.006*\"cause\" + 0.005*\"shit\" + 0.005*\"good\" + 0.005*\"theyre\" + 0.005*\"tit\" + 0.004*\"really\" \n", "\n", "Topic 3 \n", " 0.008*\"joke\" + 0.006*\"anthony\" + 0.006*\"day\" + 0.006*\"say\" + 0.005*\"guys\" + 0.004*\"tell\" + 0.004*\"grandma\" + 0.004*\"thing\" + 0.004*\"good\" + 0.004*\"did\" \n", "\n" ] } ], "source": [ "# LDA for num_topics = 4\n", "lda = models.LdaModel(corpus=corpus, \n", " id2word=id2word, \n", " num_topics=4, \n", " passes=10)\n", "\n", "for topic, topwords in lda.show_topics():\n", " print(\"Topic\", topic, \"\\n\", topwords, \"\\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These topics aren't looking too meaningful, and there's a lot of overlap between the topics. We've tried modifying our parameters. Let's try modifying our terms list as well." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Topic Modeling - Attempt #2 (Nouns Only)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One popular trick is to look only at terms that are from one part of speech (only nouns, only adjectives, etc.). Check out the UPenn tag set: https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html.\n", "\n", "For the 2nd attempt let's look at nouns only. The tag for nouns is NN." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "# Let's create a function to pull out nouns from a string of text\n", "from nltk import word_tokenize, pos_tag\n", "\n", "def nouns(text):\n", " '''Given a string of text, tokenize the text and pull out only the nouns.'''\n", " is_noun = lambda pos: pos[:2] == 'NN' # pos = part-of-speech\n", " tokenized = word_tokenize(text)\n", " all_nouns = [word for (word, pos) in pos_tag(tokenized) if is_noun(pos)] \n", " return ' '.join(all_nouns)" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "# Read in the cleaned data, before the CountVectorizer step\n", "data_clean = pd.read_pickle('data_clean.pkl')" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ " transcript\n", "ali ladies gentlemen stage ali hi thank hello na s...\n", "anthony thank thank people i em i francisco city world...\n", "bill thank thank pleasure georgia area oasis i june...\n", "bo macdonald farm e i o farm pig e i i snort macd...\n", "dave jokes living stare work profound train thought...\n", "hasan whats davis whats home i netflix la york i son...\n", "jim ladies gentlemen stage mr jim jefferies thank ...\n", "joe ladies gentlemen joe fuck thanks phone fuckfac...\n", "john petunia thats hello hello chicago thank crowd ...\n", "louis music lets lights lights thank i i place place...\n", "mike wow hey thanks look insane years everyone i id...\n", "ricky hello thank fuck thank im gon youre weve money..." ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Apply the nouns function to the transcripts to filter only on nouns\n", "data_nouns = pd.DataFrame(data_clean.transcript.apply(nouns))\n", "data_nouns" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ " aaaaahhhhhhh aaaaauuugghhhhhh aaaahhhhh aah abc abcs ability \\\n", "ali 0 0 0 0 1 0 0 \n", "anthony 0 0 0 0 0 0 0 \n", "bill 0 0 0 0 0 1 0 \n", "bo 1 1 1 0 0 0 1 \n", "dave 0 0 0 0 0 0 0 \n", "hasan 0 0 0 0 0 0 0 \n", "jim 0 0 0 0 0 0 0 \n", "joe 0 0 0 0 0 0 0 \n", "john 0 0 0 0 0 0 0 \n", "louis 0 0 0 3 0 0 0 \n", "mike 0 0 0 0 0 0 0 \n", "ricky 0 0 0 0 0 0 1 \n", "\n", " abortion abortions abuse ... yummy ze zealand zee zeppelin \\\n", "ali 0 0 0 ... 0 0 0 0 0 \n", "anthony 2 0 0 ... 0 0 10 0 0 \n", "bill 0 0 0 ... 0 1 0 0 0 \n", "bo 0 0 0 ... 0 0 0 0 0 \n", "dave 0 1 0 ... 0 0 0 0 0 \n", "hasan 0 0 0 ... 0 0 0 1 0 \n", "jim 0 0 0 ... 0 0 0 0 0 \n", "joe 0 0 1 ... 0 0 0 0 0 \n", "john 0 0 0 ... 0 0 0 0 0 \n", "louis 0 0 0 ... 0 0 0 0 0 \n", "mike 0 0 0 ... 0 0 0 0 2 \n", "ricky 0 0 0 ... 1 0 0 0 0 \n", "\n", " zillion zombie zombies zoo éclair \n", "ali 0 1 0 0 0 \n", "anthony 0 0 0 0 0 \n", "bill 1 1 1 0 0 \n", "bo 0 0 0 0 0 \n", "dave 0 0 0 0 0 \n", "hasan 0 0 0 0 0 \n", "jim 0 0 0 0 0 \n", "joe 0 0 0 0 0 \n", "john 0 0 0 0 1 \n", "louis 0 0 0 0 0 \n", "mike 0 0 0 0 0 \n", "ricky 0 0 0 1 0 \n", "\n", "[12 rows x 4635 columns]" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Create a new document-term matrix using only nouns\n", "from sklearn.feature_extraction import text\n", "from sklearn.feature_extraction.text import CountVectorizer\n", "\n", "# Re-add the additional stop words since we are recreating the document-term matrix\n", "add_stop_words = ['like', 'im', 'know', 'just', 'dont', 'thats', 'right', 'people',\n", " 'youre', 'got', 'gonna', 'time', 'think', 'yeah', 'said']\n", "stop_words = text.ENGLISH_STOP_WORDS.union(add_stop_words)\n", "\n", "# Recreate a document-term matrix with only nouns\n", "cv_nouns = CountVectorizer(stop_words=stop_words)\n", "data_cv_nouns = cv_nouns.fit_transform(data_nouns.transcript)\n", "data_dtm_nouns = pd.DataFrame(data_cv_nouns.toarray(), columns=cv_nouns.get_feature_names())\n", "data_dtm_nouns.index = data_nouns.index\n", "data_dtm_nouns" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "# Create the gensim corpus - this time with nouns only\n", "corpus_nouns = matutils.Sparse2Corpus(scipy.sparse.csr_matrix(data_dtm_nouns.transpose()))\n", "\n", "# Create the vocabulary dictionary the all terms and their respective location\n", "id2word_nouns = dict((v, k) for k, v in cv_nouns.vocabulary_.items())" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[(0,\n", " '0.011*\"dad\" + 0.006*\"life\" + 0.006*\"shes\" + 0.005*\"mom\" + 0.005*\"parents\" + 0.005*\"school\" + 0.004*\"girl\" + 0.004*\"home\" + 0.004*\"hes\" + 0.003*\"hey\"'),\n", " (1,\n", " '0.010*\"thing\" + 0.009*\"day\" + 0.008*\"shit\" + 0.008*\"man\" + 0.007*\"cause\" + 0.007*\"life\" + 0.007*\"hes\" + 0.007*\"way\" + 0.007*\"fuck\" + 0.007*\"guy\"')]" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Let's start with 2 topics\n", "lda_nouns = models.LdaModel(corpus=corpus_nouns, num_topics=2, id2word=id2word_nouns, passes=10)\n", "lda_nouns.print_topics()" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[(0,\n", " '0.010*\"thing\" + 0.009*\"cause\" + 0.009*\"day\" + 0.009*\"life\" + 0.008*\"man\" + 0.008*\"guy\" + 0.008*\"way\" + 0.008*\"hes\" + 0.007*\"shit\" + 0.007*\"fuck\"'),\n", " (1,\n", " 
'0.012*\"shit\" + 0.009*\"man\" + 0.008*\"fuck\" + 0.006*\"lot\" + 0.006*\"didnt\" + 0.005*\"ahah\" + 0.005*\"money\" + 0.005*\"room\" + 0.005*\"hes\" + 0.004*\"guy\"'),\n", " (2,\n", " '0.010*\"day\" + 0.008*\"dad\" + 0.008*\"joke\" + 0.007*\"thing\" + 0.006*\"life\" + 0.006*\"hes\" + 0.006*\"shit\" + 0.006*\"lot\" + 0.006*\"years\" + 0.006*\"shes\"')]" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Let's try topics = 3\n", "lda_nouns = models.LdaModel(corpus=corpus_nouns, num_topics=3, id2word=id2word_nouns, passes=10)\n", "lda_nouns.print_topics()" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[(0,\n", " '0.013*\"day\" + 0.009*\"thing\" + 0.009*\"cause\" + 0.007*\"women\" + 0.007*\"lot\" + 0.006*\"man\" + 0.006*\"shit\" + 0.006*\"way\" + 0.006*\"guy\" + 0.005*\"baby\"'),\n", " (1,\n", " '0.008*\"joke\" + 0.008*\"hes\" + 0.008*\"stuff\" + 0.007*\"thing\" + 0.007*\"day\" + 0.007*\"bo\" + 0.006*\"man\" + 0.006*\"years\" + 0.006*\"id\" + 0.006*\"repeat\"'),\n", " (2,\n", " '0.012*\"thing\" + 0.010*\"life\" + 0.009*\"cause\" + 0.009*\"day\" + 0.009*\"guy\" + 0.009*\"shit\" + 0.008*\"gon\" + 0.008*\"hes\" + 0.007*\"way\" + 0.006*\"kind\"'),\n", " (3,\n", " '0.012*\"shit\" + 0.011*\"fuck\" + 0.011*\"man\" + 0.009*\"dad\" + 0.008*\"life\" + 0.006*\"house\" + 0.006*\"hes\" + 0.006*\"way\" + 0.006*\"lot\" + 0.006*\"shes\"')]" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Let's try 4 topics\n", "lda_nouns = models.LdaModel(corpus=corpus_nouns, num_topics=4, id2word=id2word_nouns, passes=10)\n", "lda_nouns.print_topics()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**I still don't see the topics becoming clear, so in attempt 3 I will try both nouns and adjectivs.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Topic Modeling - Attempt #3 (Nouns and Adjectives)" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "# Create a function to pull out nouns and adjectives from a string of text\n", "def nouns_adj(text):\n", " '''Given a string of text, tokenize the text and pull out only the nouns and adjectives.'''\n", " is_noun_adj = lambda pos: pos[:2] == 'NN' or pos[:2] == 'JJ'\n", " tokenized = word_tokenize(text)\n", " nouns_adj = [word for (word, pos) in pos_tag(tokenized) if is_noun_adj(pos)] \n", " return ' '.join(nouns_adj)" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ " transcript\n", "ali ladies gentlemen welcome stage ali wong hi wel...\n", "anthony thank san francisco thank good people surprise...\n", "bill right thank thank pleasure greater atlanta geo...\n", "bo old macdonald farm e i i o farm pig e i i snor...\n", "dave dirty jokes living stare most hard work profou...\n", "hasan whats davis whats im home i netflix special la...\n", "jim ladies gentlemen welcome stage mr jim jefferie...\n", "joe ladies gentlemen joe fuck san francisco thanks...\n", "john right petunia august thats good right hello he...\n", "louis music lets lights lights thank much i i i nice...\n", "mike wow hey thanks hey seattle nice look crazy ins...\n", "ricky hello great thank fuck thank lovely welcome im..." ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Apply the nouns function to the transcripts to filter only on nouns\n", "data_nouns_adj = pd.DataFrame(data_clean.transcript.apply(nouns_adj))\n", "data_nouns_adj" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "# Create a new document-term matrix using only nouns and adjectives, also remove common words with max_df\n", "cv_nouns_adj = CountVectorizer(stop_words=stop_words, max_df=.8) # Remove corpus-specific stop words with max_df, if occurs >80%\n", "data_cv_nouns_adj = cv_nouns_adj.fit_transform(data_nouns_adj.transcript)\n", "data_dtm_nouns_adj = pd.DataFrame(data_cv_nouns_adj.toarray(), columns=cv_nouns_adj.get_feature_names())\n", "data_dtm_nouns_adj.index = data_nouns_adj.index" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "# Create the gensim corpus\n", "corpus_nouns_adj = matutils.Sparse2Corpus(scipy.sparse.csr_matrix(data_dtm_nouns_adj.transpose()))\n", "\n", "# Create the vocabulary dictionary\n", "id2word_nouns_adj = dict((v, k) for k, v in cv_nouns_adj.vocabulary_.items())" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[(0,\n", " '0.004*\"mom\" + 0.004*\"ass\" + 0.003*\"joke\" + 0.003*\"friend\" + 0.003*\"parents\" + 0.003*\"clinton\" + 0.003*\"jenny\" + 0.003*\"guns\" + 0.002*\"dick\" + 0.002*\"anthony\"'),\n", " (1,\n", " '0.003*\"joke\" + 0.003*\"bo\" + 0.003*\"comedy\" + 0.003*\"parents\" + 0.003*\"love\" + 0.003*\"gay\" + 0.003*\"hasan\" + 0.002*\"repeat\" + 0.002*\"nuts\" + 0.002*\"ahah\"')]" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Let's start with 2 topics\n", "lda_nouns_adj = models.LdaModel(corpus=corpus_nouns_adj, num_topics=2, id2word=id2word_nouns_adj, passes=10)\n", "lda_nouns_adj.print_topics()" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[(0,\n", " '0.004*\"hasan\" + 0.004*\"parents\" + 0.004*\"jenny\" + 0.004*\"class\" + 0.004*\"guns\" + 0.003*\"mom\" + 0.003*\"door\" + 0.003*\"ass\" + 0.003*\"girls\" + 0.003*\"girlfriend\"'),\n", " (1,\n", " '0.004*\"joke\" + 0.004*\"wife\" + 0.003*\"mom\" + 0.003*\"clinton\" + 0.003*\"ahah\" + 0.003*\"gay\" + 0.003*\"hell\" + 0.002*\"son\" + 0.002*\"nuts\" + 0.002*\"husband\"'),\n", " (2,\n", " '0.006*\"joke\" + 0.005*\"bo\" + 0.004*\"repeat\" + 0.004*\"jokes\" + 0.004*\"eye\" + 0.004*\"anthony\" + 0.003*\"contact\" + 0.003*\"tit\" + 0.003*\"mom\" + 0.003*\"ok\"')]" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Let's try 3 topics\n", "lda_nouns_adj = 
models.LdaModel(corpus=corpus_nouns_adj, num_topics=3, id2word=id2word_nouns_adj, passes=10)\n", "lda_nouns_adj.print_topics()" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[(0,\n", " '0.004*\"ok\" + 0.004*\"ass\" + 0.003*\"mom\" + 0.003*\"dog\" + 0.003*\"bo\" + 0.003*\"parents\" + 0.003*\"um\" + 0.003*\"friend\" + 0.003*\"clinton\" + 0.003*\"jenny\"'),\n", " (1,\n", " '0.006*\"joke\" + 0.004*\"jenner\" + 0.004*\"nuts\" + 0.003*\"jokes\" + 0.003*\"bruce\" + 0.003*\"stupid\" + 0.003*\"hampstead\" + 0.003*\"chimp\" + 0.003*\"rape\" + 0.003*\"dead\"'),\n", " (2,\n", " '0.007*\"joke\" + 0.005*\"ahah\" + 0.005*\"mad\" + 0.005*\"anthony\" + 0.004*\"gun\" + 0.004*\"gay\" + 0.004*\"son\" + 0.003*\"nigga\" + 0.003*\"wife\" + 0.003*\"grandma\"'),\n", " (3,\n", " '0.009*\"hasan\" + 0.007*\"mom\" + 0.006*\"parents\" + 0.006*\"brown\" + 0.004*\"bike\" + 0.004*\"birthday\" + 0.004*\"york\" + 0.003*\"door\" + 0.003*\"bethany\" + 0.003*\"pizza\"')]" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Let's try 4 topics\n", "lda_nouns_adj = models.LdaModel(corpus=corpus_nouns_adj, num_topics=4, id2word=id2word_nouns_adj, passes=10)\n", "lda_nouns_adj.print_topics()" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/Users/nwams/anaconda3/lib/python3.7/site-packages/gensim/models/ldamodel.py:775: RuntimeWarning: divide by zero encountered in log\n", " diff = np.log(self.expElogbeta)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Topic 0 \n", " 0.008*\"joke\" + 0.007*\"gun\" + 0.006*\"bo\" + 0.005*\"guns\" + 0.005*\"repeat\" + 0.004*\"um\" + 0.004*\"anthony\" + 0.004*\"party\" + 0.004*\"comedy\" + 0.004*\"jokes\" \n", "\n", "Topic 1 \n", " 0.011*\"mom\" + 0.010*\"clinton\" + 0.007*\"husband\" + 0.007*\"cow\" + 0.007*\"wife\" + 0.006*\"ok\" + 0.006*\"office\" + 0.006*\"wan\" + 0.005*\"ass\" + 0.005*\"pregnant\" \n", "\n", "Topic 2 \n", " 0.007*\"parents\" + 0.006*\"hasan\" + 0.006*\"jenny\" + 0.006*\"mom\" + 0.005*\"door\" + 0.004*\"brown\" + 0.004*\"texas\" + 0.004*\"york\" + 0.003*\"high\" + 0.003*\"friend\" \n", "\n", "Topic 3 \n", " 0.007*\"joke\" + 0.006*\"ahah\" + 0.005*\"nuts\" + 0.005*\"gay\" + 0.005*\"tit\" + 0.005*\"young\" + 0.004*\"nigga\" + 0.004*\"dead\" + 0.004*\"jenner\" + 0.004*\"rape\" \n", "\n" ] } ], "source": [ "# Keep it at 4 topics, but experiment with other hyper-parameters:\n", "# Increase the number of passes\n", "# Change alpha to really small value or symmetric or auto\n", "# Change eta to very small values\n", "# Set random_state to persist results on every run. 
By default LDA output varies on each run.\n", "lda_nouns_adj = models.LdaModel(corpus=corpus_nouns_adj, \n", " num_topics=4, \n", " id2word=id2word_nouns_adj, \n", " passes=100, \n", " alpha='symmetric', \n", " eta=0.00001,\n", " random_state=np.random.RandomState(seed=10))\n", "\n", "for topic, topwords in lda_nouns_adj.show_topics():\n", " print(\"Topic\", topic, \"\\n\", topwords, \"\\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Unfortunately tuning the hyper-parameters did not yield any meaningful topics.**" ] } ], "metadata": { "_draft": { "nbviewer_url": "https://gist.github.com/9a44423d8e787e678a7fe233fa051265" }, "gist": { "data": { "description": "4-Topic-Modeling.ipynb", "public": true }, "id": "9a44423d8e787e678a7fe233fa051265" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" }, "toc": { "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "toc_cell": false, "toc_position": {}, "toc_section_display": "block", "toc_window_display": false }, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "window_display": false } }, "nbformat": 4, "nbformat_minor": 2 }