{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## New Distance Metrics for Probability Distribution and Bag of Words " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A small tutorial to illustrate the new distance functions.\n", "\n", "We would need this mostly when comparing how similar two probability distributions are, and in the case of gensim, usually for LSI or LDA topic distributions after we have a LDA model.\n", "\n", "Gensim already has functionalities for this, in the sense of getting most similar documents - [this](http://radimrehurek.com/topic_modeling_tutorial/3%20-%20Indexing%20and%20Retrieval.html), [this](https://radimrehurek.com/gensim/tut3.html) and [this](https://radimrehurek.com/gensim/similarities/docsim.html) are such examples of documentation and tutorials.\n", "\n", "What this tutorial shows is a building block of these larger methods, which are a small suite of distance metrics.\n", "We'll start by setting up a small corpus and showing off the methods." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from gensim.corpora import Dictionary\n", "from gensim.models import ldamodel\n", "from gensim.matutils import kullback_leibler, jaccard, hellinger, sparse2full\n", "import numpy" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# you can use any corpus, this is just illustratory\n", "\n", "texts = [['bank','river','shore','water'],\n", " ['river','water','flow','fast','tree'],\n", " ['bank','water','fall','flow'],\n", " ['bank','bank','water','rain','river'],\n", " ['river','water','mud','tree'],\n", " ['money','transaction','bank','finance'],\n", " ['bank','borrow','money'], \n", " ['bank','finance'],\n", " ['finance','money','sell','bank'],\n", " ['borrow','sell'],\n", " ['bank','loan','sell']]\n", "\n", "dictionary = Dictionary(texts)\n", "corpus = [dictionary.doc2bow(text) for text in texts]" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[(0,\n", " u'0.164*bank + 0.142*water + 0.108*river + 0.076*flow + 0.067*borrow + 0.063*sell + 0.060*tree + 0.048*money + 0.046*fast + 0.044*rain'),\n", " (1,\n", " u'0.196*bank + 0.120*finance + 0.100*money + 0.082*sell + 0.067*river + 0.065*water + 0.056*transaction + 0.049*loan + 0.046*tree + 0.040*mud')]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "numpy.random.seed(1) # setting random seed to get the same results each time.\n", "model = ldamodel.LdaModel(corpus, id2word=dictionary, num_topics=2)\n", "\n", "model.show_topics()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's take a few sample documents and get them ready to test Similarity. Let's call the 1st topic the water topic and the second topic the finance topic.\n", "\n", "Note: these are all distance metrics. This means that a value between 0 and 1 is returned, where values closer to 0 indicate a smaller 'distance' and therefore a larger similarity." 
] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [], "source": [ "doc_water = ['river', 'water', 'shore']\n", "doc_finance = ['finance', 'money', 'sell']\n", "doc_bank = ['finance', 'bank', 'tree', 'water']\n", "\n", "# now let's make these into a bag of words format\n", "\n", "bow_water = model.id2word.doc2bow(doc_water) \n", "bow_finance = model.id2word.doc2bow(doc_finance) \n", "bow_bank = model.id2word.doc2bow(doc_bank) \n", "\n", "# we can now get the LDA topic distributions for these\n", "lda_bow_water = model[bow_water]\n", "lda_bow_finance = model[bow_finance]\n", "lda_bow_bank = model[bow_bank]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Hellinger and Kullback–Leibler" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We're now ready to apply our distance metrics.\n", "\n", "Let's start with the popular Hellinger distance. \n", "The Hellinger distance metric gives an output in the range [0,1] for two probability distributions, with values closer to 0 meaning they are more similar." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.51251199778753576" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "hellinger(lda_bow_water, lda_bow_finance)" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.23407305272210427" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "hellinger(lda_bow_finance, lda_bow_bank)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Makes sense, right? In the first example, Document 1 and Document 2 are hardly similar, so we get a value of roughly 0.5. \n", "\n", "In the second case, the documents are a lot more similar, semantically. Trained with the model, they give a much less distance value." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's run similar examples down with Kullback Leibler." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.30823547" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "kullback_leibler(lda_bow_water, lda_bow_bank)" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.19881117" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "kullback_leibler(lda_bow_finance, lda_bow_bank)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*NOTE!*\n", "\n", "KL is not a Distance Metric in the mathematical sense, and hence is not symmetrical. \n", "This means that `kullback_leibler(lda_bow_finance, lda_bow_bank)` is not equal to `kullback_leibler(lda_bow_bank, lda_bow_finance)`. " ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.24780412" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# As you can see, the values are not equal. 
{ "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.24780412" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# As you can see, the values are not equal. We'll get into the details of this later on in the notebook.\n", "kullback_leibler(lda_bow_bank, lda_bow_finance)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "In our previous examples we saw that the distance values between bank and finance were smaller than those between bank and water, even if not by a huge margin. What does this mean?\n", "\n", "The `bank` document is a combination of both water- and finance-related terms, but as `bank` in this context is more likely to belong to the finance topic, the distance between the finance and bank bows is smaller." ] },
{ "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[(0, 0.44146764073708339), (1, 0.55853235926291656)]" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# just to confirm our suspicion that the bank bow is more to do with finance:\n", "\n", "model.get_document_topics(bow_bank)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "It's evident that while the distribution isn't too skewed, it leans more towards the finance topic." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Distance metrics (the complements of similarity metrics), as the examples above suggest, are mainly meant for probability distributions, but the methods accept several input formats. You can do some further reading on [Kullback–Leibler](https://en.wikipedia.org/wiki/Kullback–Leibler_divergence) and [Hellinger](https://en.wikipedia.org/wiki/Hellinger_distance) to figure out which suits your needs." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Jaccard " ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Let us now look at the [Jaccard distance](https://en.wikipedia.org/wiki/Jaccard_index) metric for similarity between bags of words (i.e., documents)." ] },
{ "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.8571428571428572" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "jaccard(bow_water, bow_bank)" ] },
{ "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.8333333333333334" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "jaccard(doc_water, doc_bank)" ] },
{ "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.0" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "jaccard(['word'], ['word'])" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The three examples above feature two different input methods.\n", "\n", "In the first case, we pass `jaccard` document vectors that are already in bag-of-words format. The distance is defined as 1 minus the size of the intersection divided by the size of the union; for bag-of-words input the word counts are taken into account, which is why the first result (1 - 1/7) differs slightly from the second (1 - 1/6).\n", "\n", "We can see, on manual inspection as well, that the distance is likely to be high - and it is.\n", "\n", "The last two examples illustrate that `jaccard` also accepts plain token lists (i.e., documents) as inputs.\n", "In the last case, because the two documents are identical, the value returned is 0: the distance is 0, i.e. maximum similarity." ] },
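{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check of the set-based definition (a minimal sketch, independent of gensim), we can recompute the list-input case by hand with plain Python sets:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# recompute jaccard(doc_water, doc_bank) from the definition:\n", "# 1 - |intersection| / |union|, on plain sets of tokens;\n", "# this should match the jaccard(doc_water, doc_bank) value above\n", "set_water, set_bank = set(doc_water), set(doc_bank)\n", "1 - len(set_water & set_bank) / float(len(set_water | set_bank))" ] },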
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Distance Metrics for Topic Distributions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While there are already standard methods to identify similarity of documents, our distance metrics has one more interesting use-case: topic distributions. \n", "\n", "Let's say we want to find out how similar our two topics are, water and finance." ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[(3, 0.196),\n", " (12, 0.12),\n", " (10, 0.1),\n", " (14, 0.082),\n", " (2, 0.067),\n", " (0, 0.065),\n", " (11, 0.056),\n", " (15, 0.049),\n", " (5, 0.046),\n", " (9, 0.04)]" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "topic_water, topic_finance = model.show_topics()\n", "\n", "# some pre processing to get the topics in a format acceptable to our distance metrics\n", "\n", "def make_topics_bow(topic):\n", " # takes the string returned by model.show_topics()\n", " # split on strings to get topics and the probabilities\n", " topic = topic.split('+')\n", " # list to store topic bows\n", " topic_bow = []\n", " for word in topic:\n", " # split probability and word\n", " prob, word = word.split('*')\n", " # get rid of spaces\n", " word = word.replace(\" \",\"\")\n", " # convert to word_type\n", " word = model.id2word.doc2bow([word])[0][0]\n", " topic_bow.append((word, float(prob)))\n", " return topic_bow\n", "\n", "finance_distribution = make_topics_bow(topic_finance[1])\n", "water_distribution = make_topics_bow(topic_water[1])\n", "\n", "# the finance topic in bag of words format looks like this:\n", "finance_distribution" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we've got our topics in a format more acceptable by our functions, let's use a Distance metric to see how similar the word distributions in the topics are." ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.36453028040240248" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "hellinger(water_distribution, finance_distribution)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our value of roughly 0.36 means that the topics are not TOO distant with respect to their word distributions.\n", "This makes sense again, because of overlapping words like `bank` and a small size dictionary." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Some things to take care of " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In our previous example we didn't use Kullback Leibler to test for similarity for a reason - KL is not a Distance 'Metric' in the technical sense (you can see what a metric is [here](https://en.wikipedia.org/wiki/Metric_(mathematics)). The nature of it, mathematically also means we must be a little careful before using it, because since it involves the log function, a zero can mess things up. 
For example:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "inf" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# 16 here is the number of features the probability distribution draws from\n", "kullback_leibler(water_distribution, finance_distribution, 16) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That wasn't very helpful, right? This just means that we have to be a bit careful about our inputs. Our old example didn't work out because they were some missing values for some words (because `show_topics()` only returned the top 10 topics). \n", "\n", "This can be remedied, though." ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.19781515" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# return ALL the words in the dictionary for the topic-word distribution.\n", "topic_water, topic_finance = model.show_topics(num_words=len(model.id2word))\n", "\n", "# do our bag of words transformation again\n", "finance_distribution = make_topics_bow(topic_finance[1])\n", "water_distribution = make_topics_bow(topic_water[1])\n", "\n", "# and voila!\n", "kullback_leibler(water_distribution, finance_distribution)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You may notice that the distance for this is quite less, indicating a high similarity. This may be a bit off because of the small size of the corpus, where all topics are likely to contain a decent overlap of word probabilities. You will likely get a better value for a bigger corpus.\n", "\n", "So, just remember, if you intend to use KL as a metric to measure similarity or distance between two distributions, avoid zeros by returning the ENTIRE distribution. Since it's unlikely any probability distribution will ever have absolute zeros for any feature/word, returning all the values like we did will make you good to go." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## So - what exactly are Distance Metrics? " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Having seen the practical usages of these measures (i.e, to find similarity), let's learn a little about what exactly Distance Measures and Metrics are. \n", "\n", "I mentioned in the previous section that KL was not a distance metric. There are 4 conditons for for a distance measure to be a matric:\n", "\n", "1.\td(x,y) >= 0\n", "2. d(x,y) = 0 <=> x = y\n", "3. d(x,y) = d(y,x)\n", "4. d(x,z) <= d(x,y) + d(y,z)\n", "\n", "That is: it must be non-negative; if x and y are the same, distance must be zero; it must be symmetric; and it must obey the triangle inequality law. \n", "\n", "Simple enough, right? \n", "Let's test these out for our measures." ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.22491784692602151" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# normal Hellinger\n", "hellinger(water_distribution, finance_distribution)" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.22491784692602151" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# we swap finance and water distributions and get the same value. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## So - what exactly are Distance Metrics? " ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Having seen the practical uses of these measures (i.e., to find similarity), let's learn a little about what exactly distance measures and metrics are.\n", "\n", "I mentioned in the previous section that KL was not a distance metric. There are 4 conditions for a distance measure to be a metric:\n", "\n", "1. d(x,y) >= 0\n", "2. d(x,y) = 0 <=> x = y\n", "3. d(x,y) = d(y,x)\n", "4. d(x,z) <= d(x,y) + d(y,z)\n", "\n", "That is: it must be non-negative; the distance must be zero exactly when x and y are the same; it must be symmetric; and it must obey the triangle inequality.\n", "\n", "Simple enough, right?\n", "Let's test these out for our measures." ] },
{ "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.22491784692602151" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# normal Hellinger - the value is non-negative\n", "hellinger(water_distribution, finance_distribution)" ] },
{ "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.22491784692602151" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# we swap the finance and water distributions and get the same value. It is indeed symmetric!\n", "hellinger(finance_distribution, water_distribution)" ] },
{ "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.0" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# if we pass the same distribution twice, the distance is zero.\n", "hellinger(water_distribution, water_distribution)" ] },
{ "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.23407305272210427" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# for the triangle inequality, let's use the LDA document distributions\n", "hellinger(lda_bow_finance, lda_bow_bank)" ] },
{ "cell_type": "code", "execution_count": 22, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.79979376323008911" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# the triangle inequality holds too: this sum is greater than the direct distance above\n", "hellinger(lda_bow_finance, lda_bow_water) + hellinger(lda_bow_water, lda_bow_bank)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "So Hellinger is indeed a metric. Let's check out KL. " ] },
{ "cell_type": "code", "execution_count": 23, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.2149342" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "kullback_leibler(finance_distribution, water_distribution)" ] },
{ "cell_type": "code", "execution_count": 24, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "0.19781515" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "kullback_leibler(water_distribution, finance_distribution)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We immediately notice that swapping the arguments changes the value! Failing even one of the four conditions is enough for a measure not to be a metric.\n", "\n", "However, just because it is not a metric (strictly in the mathematical sense) does not mean it is not useful for figuring out the distance between two probability distributions. KL divergence is widely used for this purpose, and is probably the most 'famous' distance measure in fields like information theory.\n", "\n", "For a nice review of the mathematical differences between Hellinger and KL, [this](http://stats.stackexchange.com/questions/130432/differences-between-bhattacharyya-distance-and-kl-divergence) link does a very good job. " ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Conclusion" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "That brings us to the end of this small tutorial.\n", "The scope for adding new similarity metrics is large, as there exists an even larger suite of metrics and methods that could be added to the matutils.py file. ([This](http://nzcsrsc08.canterbury.ac.nz/site/proceedings/Individual_Papers/pg049_Similarity_Measures_for_Text_Document_Clustering.pdf) is one paper which discusses some of them.)\n", "\n", "Looking forward to more PRs towards this functionality in Gensim! 
:)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.11" } }, "nbformat": 4, "nbformat_minor": 0 }