{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n# LDA Model\n\nIntroduces Gensim's LDA model and demonstrates its use on the NIPS corpus.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The purpose of this tutorial is to demonstrate how to train and tune an LDA model.\n\nIn this tutorial we will:\n\n* Load input data.\n* Pre-process that data.\n* Transform documents into bag-of-words vectors.\n* Train an LDA model.\n\nThis tutorial will **not**:\n\n* Explain how Latent Dirichlet Allocation works\n* Explain how the LDA model performs inference\n* Teach you all the parameters and options for Gensim's LDA implementation\n\nIf you are not familiar with the LDA model or how to use it in Gensim, I (Olavur Mortensen)\nsuggest you read up on that before continuing with this tutorial. Basic\nunderstanding of the LDA model should suffice. Examples:\n\n* `Introduction to Latent Dirichlet Allocation `_\n* Gensim tutorial: `sphx_glr_auto_examples_core_run_topics_and_transformations.py`\n* Gensim's LDA model API docs: :py:class:`gensim.models.LdaModel`\n\nI would also encourage you to consider each step when applying the model to\nyour data, instead of just blindly applying my solution. The different steps\nwill depend on your data and possibly your goal with the model.\n\n## Data\n\nI have used a corpus of NIPS papers in this tutorial, but if you're following\nthis tutorial just to learn about LDA I encourage you to consider picking a\ncorpus on a subject that you are familiar with. Qualitatively evaluating the\noutput of an LDA model is challenging and can require you to understand the\nsubject matter of your corpus (depending on your goal with the model).\n\nNIPS (Neural Information Processing Systems) is a machine learning conference\nso the subject matter should be well suited for most of the target audience\nof this tutorial. You can download the original data from Sam Roweis'\n`website `_. The code below will\nalso do that for you.\n\n.. 
.. Important::\n    The corpus contains 1740 documents, and not particularly long ones.\n    So keep in mind that this tutorial is not geared towards efficiency, and be\n    careful before applying the code to a large dataset.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import io\nimport os.path\nimport re\nimport tarfile\n\nimport smart_open\n\ndef extract_documents(url='https://cs.nyu.edu/~roweis/data/nips12raw_str602.tgz'):\n    with smart_open.open(url, \"rb\") as file:\n        with tarfile.open(fileobj=file) as tar:\n            for member in tar.getmembers():\n                if member.isfile() and re.search(r'nipstxt/nips\\d+/\\d+\\.txt', member.name):\n                    member_bytes = tar.extractfile(member).read()\n                    yield member_bytes.decode('utf-8', errors='replace')\n\ndocs = list(extract_documents())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So we have a list of 1740 documents, where each document is a Unicode string.\nIf you're thinking about using your own corpus, then you need to make sure\nthat it's in the same format (list of Unicode strings) before proceeding\nwith the rest of this tutorial.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(len(docs))\nprint(docs[0][:500])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Pre-process and vectorize the documents\n\nAs part of preprocessing, we will:\n\n* Tokenize (split the documents into tokens).\n* Lemmatize the tokens.\n* Compute bigrams.\n* Compute a bag-of-words representation of the data.\n\nFirst we tokenize the text using a regular expression tokenizer from NLTK. We\nremove numeric tokens and tokens that are only a single character, as they\ndon't tend to be useful, and the dataset contains a lot of them.\n\n.. Important::\n\n    This tutorial uses the nltk library for preprocessing, although you can\n    replace it with something else if you want.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Tokenize the documents.\nfrom nltk.tokenize import RegexpTokenizer\n\n# Split the documents into tokens.\ntokenizer = RegexpTokenizer(r'\\w+')\nfor idx in range(len(docs)):\n    docs[idx] = docs[idx].lower()  # Convert to lowercase.\n    docs[idx] = tokenizer.tokenize(docs[idx])  # Split into words.\n\n# Remove numbers, but not words that contain numbers.\ndocs = [[token for token in doc if not token.isnumeric()] for doc in docs]\n\n# Remove words that are only one character.\ndocs = [[token for token in doc if len(token) > 1] for doc in docs]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We use the WordNet lemmatizer from NLTK. A lemmatizer is preferred over a\nstemmer in this case because it produces more readable words. Output that is\neasy to read is very desirable in topic modelling.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Lemmatize the documents.\nfrom nltk.stem.wordnet import WordNetLemmatizer\n\nlemmatizer = WordNetLemmatizer()\ndocs = [[lemmatizer.lemmatize(token) for token in doc] for doc in docs]" ] }, 
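{ "cell_type": "markdown", "metadata": {}, "source": [ "If the lemmatization step above raised a ``LookupError``, NLTK's WordNet data has\nnot been downloaded yet. The following is a minimal, optional sketch (assuming a default\nNLTK installation) that fetches the required resources; re-run the lemmatization cell afterwards.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Download the WordNet data used by WordNetLemmatizer (only needed once).\n# Depending on your NLTK version, the 'omw-1.4' resource may also be required.\nimport nltk\n\nnltk.download('wordnet')\nnltk.download('omw-1.4')" ] }, 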
{ "cell_type": "markdown", "metadata": {}, "source": [ "We find bigrams in the documents. Bigrams are pairs of adjacent words.\nUsing bigrams we can get phrases like \"machine_learning\" in our output\n(spaces are replaced with underscores); without bigrams we would only get\n\"machine\" and \"learning\".\n\nNote that in the code below, we find bigrams and then add them to the\noriginal data, because we would like to keep the words \"machine\" and\n\"learning\" as well as the bigram \"machine_learning\".\n\n.. Important::\n    Computing n-grams of a large dataset can be very computationally\n    and memory intensive.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Compute bigrams.\nfrom gensim.models import Phrases\n\n# Add bigrams and trigrams to docs (only ones that appear 20 times or more).\nbigram = Phrases(docs, min_count=20)\nfor idx in range(len(docs)):\n    for token in bigram[docs[idx]]:\n        if '_' in token:\n            # Token is a bigram, add to document.\n            docs[idx].append(token)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We remove rare words and common words based on their *document frequency*.\nBelow we remove words that appear in fewer than 20 documents or in more than\n50% of the documents. Consider trying to remove words only based on their\nfrequency, or maybe combining that with this approach.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Remove rare and common tokens.\nfrom gensim.corpora import Dictionary\n\n# Create a dictionary representation of the documents.\ndictionary = Dictionary(docs)\n\n# Filter out words that occur in fewer than 20 documents, or in more than 50% of the documents.\ndictionary.filter_extremes(no_below=20, no_above=0.5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we transform the documents to a vectorized form. We simply compute\nthe frequency of each word, including the bigrams.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Bag-of-words representation of the documents.\ncorpus = [dictionary.doc2bow(doc) for doc in docs]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see how many tokens and documents we have to train on.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print('Number of unique tokens: %d' % len(dictionary))\nprint('Number of documents: %d' % len(corpus))" ] }, 
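{ "cell_type": "markdown", "metadata": {}, "source": [ "Before training, it can also help to peek at one of the vectorized documents. The cell\nbelow is an optional aside: it prints the first few ``(token_id, frequency)`` pairs of the\nfirst document and maps the ids back to words using the dictionary.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Inspect the bag-of-words representation of the first document.\n# Each entry is a (token_id, frequency) pair.\nprint(corpus[0][:5])\n\n# Map the token ids back to the corresponding words.\nprint([(dictionary[token_id], frequency) for token_id, frequency in corpus[0][:5]])" ] }, 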
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Training\n\nWe are ready to train the LDA model. We will first discuss how to set some of\nthe training parameters.\n\nFirst of all, the elephant in the room: how many topics do I need? There is\nreally no easy answer for this; it will depend on both your data and your\napplication. I have used 10 topics here because I wanted to have a few topics\nthat I could interpret and \"label\", and because that turned out to give me\nreasonably good results. You might not need to interpret all your topics, so\nyou could use a large number of topics, for example 100.\n\n``chunksize`` controls how many documents are processed at a time in the\ntraining algorithm. Increasing chunksize will speed up training, at least as\nlong as the chunk of documents easily fits into memory. I've set ``chunksize =\n2000``, which is more than the number of documents, so I process all the\ndata in one go. Chunksize can, however, influence the quality of the model, as\ndiscussed in Hoffman and co-authors [2], but the difference was not\nsubstantial in this case.\n\n``passes`` controls how often we train the model on the entire corpus.\nAnother word for passes might be \"epochs\". ``iterations`` is somewhat\ntechnical, but essentially it controls how often we repeat a particular loop\nover each document. It is important to set the number of \"passes\" and\n\"iterations\" high enough.\n\nI suggest the following way to choose iterations and passes. First, enable\nlogging (as described in many Gensim tutorials), and set ``eval_every = 1``\nin ``LdaModel``. When training the model, look for a line in the log that\nlooks something like this::\n\n    2016-06-21 15:40:06,753 - gensim.models.ldamodel - DEBUG - 68/1566 documents converged within 400 iterations\n\nIf you set ``passes = 20`` you will see this line 20 times. Make sure that by\nthe final passes, most of the documents have converged. So you want to choose\nboth passes and iterations to be high enough for this to happen.\n\nWe set ``alpha = 'auto'`` and ``eta = 'auto'``. Again this is somewhat\ntechnical, but essentially we are automatically learning two parameters in\nthe model that we usually would have to specify explicitly.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Train LDA model.\nfrom gensim.models import LdaModel\n\n# Set training parameters.\nnum_topics = 10\nchunksize = 2000\npasses = 20\niterations = 400\neval_every = None  # Don't evaluate model perplexity, takes too much time.\n\n# Make an index to word dictionary.\ntemp = dictionary[0]  # This is only to \"load\" the dictionary.\nid2word = dictionary.id2token\n\nmodel = LdaModel(\n    corpus=corpus,\n    id2word=id2word,\n    chunksize=chunksize,\n    alpha='auto',\n    eta='auto',\n    iterations=iterations,\n    num_topics=num_topics,\n    passes=passes,\n    eval_every=eval_every\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can compute the topic coherence of each topic. Below we display the\naverage topic coherence and print the topics in order of topic coherence.\n\nNote that we use the \"Umass\" topic coherence measure here (see\n:py:func:`gensim.models.ldamodel.LdaModel.top_topics`). Gensim has recently\nobtained an implementation of the \"AKSW\" topic coherence measure (see the\naccompanying blog post, http://rare-technologies.com/what-is-topic-coherence/).\n\nIf you are familiar with the subject of the articles in this dataset, you can\nsee that the topics below make a lot of sense. However, they are not without\nflaws. We can see that there is substantial overlap between some topics,\nothers are hard to interpret, and most of them have at least some terms that\nseem out of place. If you were able to do better, feel free to share your\nmethods on the blog at http://rare-technologies.com/lda-training-tips/!\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "top_topics = model.top_topics(corpus)\n\n# Average topic coherence is the sum of topic coherences of all topics, divided by the number of topics.\navg_topic_coherence = sum([t[1] for t in top_topics]) / num_topics\nprint('Average topic coherence: %.4f.' % avg_topic_coherence)\n\nfrom pprint import pprint\npprint(top_topics)" ] }, 
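{ "cell_type": "markdown", "metadata": {}, "source": [ "The ``top_topics`` output above is based on the \"Umass\" coherence measure. As a rough,\noptional sketch, the cell below computes the \"c_v\" coherence using Gensim's\n``CoherenceModel``. It re-uses the ``model``, ``docs`` and ``dictionary`` objects from\nearlier and can take a while to run on the full corpus.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Optional: compute the 'c_v' topic coherence of the trained model.\n# This uses the tokenized documents (docs) rather than the bag-of-words corpus.\nfrom gensim.models import CoherenceModel\n\ncoherence_model = CoherenceModel(\n    model=model,\n    texts=docs,\n    dictionary=dictionary,\n    coherence='c_v',\n)\nprint('c_v coherence: %.4f' % coherence_model.get_coherence())" ] }, 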
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Things to experiment with\n\n* The ``no_above`` and ``no_below`` parameters in the ``filter_extremes`` method.\n* Adding trigrams or even higher order n-grams.\n* Consider whether using a hold-out set or cross-validation is the way to go for you.\n* Try other datasets.\n\n## Where to go from here\n\n* Check out a RaRe blog post on the AKSW topic coherence measure (http://rare-technologies.com/what-is-topic-coherence/).\n* pyLDAvis (https://pyldavis.readthedocs.io/en/latest/index.html).\n* Read some more Gensim tutorials (https://github.com/RaRe-Technologies/gensim/blob/develop/tutorials.md#tutorials).\n* If you haven't already, read [1] and [2] (see references).\n\n## References\n\n1. \"Latent Dirichlet Allocation\", Blei et al. 2003.\n2. \"Online Learning for Latent Dirichlet Allocation\", Hoffman et al. 2010.\n\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.7" } }, "nbformat": 4, "nbformat_minor": 0 }