{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "**Best viewed via Jupyter nbviewer**: https://nbviewer.jupyter.org/github/Josh-Been/LSI-Topic-Model/blob/master/Baylor-Libraries-LSI-Topic-Model.ipynb?flush_cache=true\n", "![alt text](https://github.com/Josh-Been/Sentiment-Per-Line/blob/master/Capture.PNG?raw=true \"Baylor University Libraries\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## LSI-Topic-Model\n", "\n", "Generates related keywords from a corpus bundeled together into topic areas. 10 keywords are generated per topic. A single text file is uploaded and each line is treated as a separate document.\n", "\n", "Baylor University Libraries: LSI Topic Model\n", "\n", "Implements the Latent Semantic Index\n", "\n", "From Wikipedia https://en.wikipedia.org/wiki/Latent_semantic_analysis#Latent_semantic_indexing \"Latent semantic indexing (LSI) is an indexing and retrieval method that uses a mathematical technique called singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text. LSI is based on the principle that words that are used in the same contexts tend to have similar meanings.\"\n", "\n", "This Python application was built by the Baylor University Libraries to assist researchers to implement unsupervised topic modelling on 1-line documents, such as Twitter social media." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "

**First**, ensure Anaconda with Python 2.7 is installed on your system (this notebook runs on a Python 2 kernel). If it is not, head to https://www.anaconda.com/download/ and install it. Then continue with the next step.
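
A quick way to confirm the interpreter version once Anaconda is installed (a minimal check; this notebook expects Python 2.7.x):

```python
import sys
print sys.version  # should report 2.7.x for this notebook
```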

" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "**Second**, launch Anaconda Navigator. After installing in the previous step, this will be in the Programs menu on Windows and in the Applications directory on Mac." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "**Third**, launch the Jupyter Notebook application. Anaconda Navigator has a link directly to Jupyter Notebook." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "**Fourth**, download the Jupyter Notebook file https://raw.githubusercontent.com/Josh-Been/LSI-Topic-Model/master/Baylor-Libraries-LSI-Topic-Model.ipynb to your computer. In the Jupyter browser tab that opened in the previous step, click the Upload button and browse for the saved Jupyter Notebook file.\n", "\n", "Up to this point you have been reading an HTML version of this Notebook.\n", "\n", "Now switch to the interactive version in Jupyter.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "**Fifth**, ensure you have the Gensim: Topic Modelling for Humans library installed. If you are confident you already installed Gensim, skip ahead of this step.\n", "\n", "For background on Gensim - https://radimrehurek.com/gensim/\n", "\n", "Steps:\n", "\n", "(1) Open Anaconda Navigator\n", "\n", "(2) On the left-hand menu, click Environments\n", "\n", "(3) In the drop-down menu at the top center of the page, select 'All'\n", "\n", "(4) In the Search Packages to the right, type gensim\n", "\n", "(5) In the result below, check the box to the left of gensim and then click t he green apply button on the bottom right. Click Apply on any popups.\n", "\n", "\n", "#### NOTE: Step 5 above may take a few minutes to complete, depending on the speed of the computer and the network connection. Please be patient before moving on to the next step. Now is a good time to go and grab that coffee." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "**Sixth**, browse for the text file containing lines of documents. Put the cursor in the box below and click the 'run cell, select below' button at the top of this notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import warnings, string\n", "warnings.filterwarnings(action='ignore', category=UserWarning, module='gensim')\n", "from gensim import corpora, models\n", "from Tkinter import *\n", "from tkFileDialog import askopenfilename\n", "\n", "def strip_non_ascii(cleanme):\n", " stripped = cleanme.translate(None, string.punctuation)\n", " stripped = (c for c in stripped if 0 < ord(c) < 127)\n", " return ''.join(stripped)\n", "\n", "root_stop = Tk()\n", "txt_file = askopenfilename()\n", "print txt_file\n", "root_stop.update()\n", "root_stop.destroy()\n", "\n", "documents = []\n", "documents[:] = []\n", "f = open(txt_file, 'r')\n", "for line in f:\n", " documents.append(strip_non_ascii(line))\n", "f.close()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "**Seventh**, browse for a stop word list. This list must be a text file with one word per line. Put the cursor in the box beow and click the 'run cell, select below' button at the top of this notebook.\n", "\n", "There are numerous stopword lists on the internet. One example of lists in numerous languages is https://github.com/Alir3z4/stop-words It is advisable to modify lists as per your corpus." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "root_lines = Tk()\n", "stoptxt = askopenfilename()\n", "print stoptxt\n", "root_lines.update()\n", "root_lines.destroy()\n", "\n", "stoplist = []\n", "stoplist[:] = []\n", "f1 = open(stoptxt, 'r')\n", "for line in f1:\n", " line = line.replace('\\n','')\n", " line = line.replace('\\r','')\n", " stoplist.append(line)\n", "f1.close()\n", "stoplist.append('rt')\n", "stoplist.append('>')\n", "stoplist.append('sho')\n", "stoplist.append('&:)')\n", "stopset = set(stoplist)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "**Eighth**, specify the following options. Then, put the cursor in the box below and click the 'run cell, select below' button at the top of this notebook.\n", "\n", "If you set limit_proper_english_words to 'true', a browse dialog will appear. Browse for a list of English words. There are numerous ways to obtain lists of words, including https://github.com/dwyl/english-words" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "number_of_topics = 5\n", "limit_proper_english_words = 'true'\n", "remove_urls = 'true'\n", "\n", "if limit_proper_english_words == 'true':\n", " root_dict = Tk()\n", " dicttxt = askopenfilename()\n", " print dicttxt\n", " root_dict.update()\n", " root_dict.destroy()\n", "\n", " dictlist = []\n", " dictlist[:] = []\n", " f2 = open(dicttxt, 'r')\n", " for line in f2:\n", " line = line.replace('\\n','')\n", " line = line.replace('\\r','')\n", " dictlist.append(line)\n", " f2.close()\n", " dictset = set(dictlist)\n", "\n", "print 'Options Entered!'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "**Ninth**, calculate the LSI topics for the corpus. Put the cursor in the box beow and click the 'run cell, select below' button at the top of this notebook." 
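, "\n", "The cell below tokenizes and filters the documents, builds a bag-of-words corpus, weights it with TF-IDF, and fits the LSI model. Each topic prints as a weighted combination of terms, along these lines (the words and weights shown are illustrative only and depend entirely on your corpus):\n", "\n", "```\n", "(0, '0.312*\"game\" + 0.280*\"team\" + 0.204*\"season\" + ...')\n", "```"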
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "try:\n", " if limit_proper_english_words.lower() == 'true':\n", " lpe = dictset\n", " else:\n", " lpe = ''\n", " lpe = set()\n", " if remove_urls.lower() == 'true':\n", " ru = 'http'\n", " else:\n", " ru = ' '\n", "except:\n", " print 'Error With Options - Applying Defaults!'\n", " number_of_topics = 5\n", " lpe = ''\n", " ru = 'dict'\n", "\n", "if limit_proper_english_words.lower() == 'true':\n", " texts = [[word for word in document.lower().split() if (not word in stopset and not ru in word and not word.isdigit() and word.islower() and word in lpe and len(word)>1)]\n", " for document in documents]\n", "else:\n", " texts = [[word for word in document.lower().split() if (not word in stopset and not ru in word and not word.isdigit() and word.islower() and len(word)>1)]\n", " for document in documents] \n", "\n", "# remove words that appear only once\n", "from collections import defaultdict\n", "frequency = defaultdict(int)\n", "for text in texts:\n", " for token in text:\n", " frequency[token] += 1\n", "\n", "texts = [[token for token in text if frequency[token] > 1]\n", " for text in texts]\n", "\n", "dictionary = corpora.Dictionary(texts)\n", "# dictionary.save('/tmp/twitter.dict')\n", "\n", "corpus = [dictionary.doc2bow(text) for text in texts]\n", "# corpora.MmCorpus.serialize('/tmp/twitter.mm', corpus)\n", "\n", "tfidf = models.TfidfModel(corpus)\n", "\n", "corpus_tfidf = tfidf[corpus]\n", "\n", "lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=number_of_topics)\n", "corpus_lsi = lsi[corpus_tfidf]\n", "\n", "lsi.print_topics(number_of_topics)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.13" } }, "nbformat": 4, "nbformat_minor": 1 }