{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Using FastText via Gensim" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This tutorial is about using the Gensim wrapper for the [FastText](https://github.com/facebookresearch/fastText) library for training FastText models, loading them and performing similarity operations and vector lookups analogous to Word2Vec." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## When to use FastText?\n", "The main principle behind FastText is that the morphological structure of a word carries important information about the meaning of the word, which is not taken into account by traditional word embeddings, which train a unique word embedding for every individual word. This is especially significant for morphologically rich languages (German, Turkish) in which a single word can have a large number of morphological forms, each of which might occur rarely, thus making it hard to train good word embeddings. \n", "FastText attempts to solve this by treating each word as the aggregation of its subwords. For the sake of simplicity and language-independence, subwords are taken to the character ngrams of the word. The vector for a word is simply taken to be the sum of all vectors of its component char-ngrams. \n", "According to a detailed comparison of Word2Vec and FastText in [this notebook](Word2Vec_FastText_Comparison.ipynb), FastText does significantly better on syntactic tasks as compared to the original Word2Vec, especially when the size of the training corpus is small. Word2Vec slightly outperforms FastText on semantic tasks though. The differences grow smaller as the size of training corpus increases. \n", "Training time for FastText is significantly higher than the Gensim version of Word2Vec (`15min 42s` vs `6min 42s` on text8, 17 mil tokens, 5 epochs, and a vector size of 100). \n", "FastText can be used to obtain vectors for out-of-vocabulary (oov) words, by summing up vectors for its component char-ngrams, provided at least one of the char-ngrams was present in the training data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Training models" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim)\n", "\n", "You need to have FastText setup locally to be able to train models. See [installation instructions for FastText](https://github.com/facebookresearch/fastText/#requirements) if you don't have FastText installed." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "FastText(vocab=1762, size=100, alpha=0.025)\n" ] } ], "source": [ "import gensim, os\n", "from gensim.models.wrappers.fasttext import FastText\n", "\n", "# Set FastText home to the path to the FastText executable\n", "ft_home = '/home/jayant/Projects/fastText/fasttext'\n", "\n", "# Set file names for train and test data\n", "data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data']) + os.sep\n", "lee_train_file = data_dir + 'lee_background.cor'\n", "\n", "model = FastText.train(ft_home, lee_train_file)\n", "\n", "print(model)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Hyperparameters for training the model follow the same pattern as Word2Vec. FastText supports the folllowing parameters from the original word2vec - \n", " - model: Training architecture. Allowed values: `cbow`, `skipgram` (Default `cbow`)\n", " - size: Size of embeddings to be learnt (Default 100)\n", " - alpha: Initial learning rate (Default 0.025)\n", " - window: Context window size (Default 5)\n", " - min_count: Ignore words with number of occurrences below this (Default 5)\n", " - loss: Training objective. Allowed values: `ns`, `hs`, `softmax` (Default `ns`)\n", " - sample: Threshold for downsampling higher-frequency words (Default 0.001)\n", " - negative: Number of negative words to sample, for `ns` (Default 5)\n", " - iter: Number of epochs (Default 5)\n", " - sorted_vocab: Sort vocab by descending frequency (Default 1)\n", " - threads: Number of threads to use (Default 12)\n", " \n", "In addition, FastText has two additional parameters - \n", " - min_n: min length of char ngrams to be used (Default 3)\n", " - max_n: max length of char ngrams to be used for (Default 6)\n", "These control the lengths of character ngrams that each word is broken down into while training and looking up embeddings. If `max_n` is set to 0, or to be lesser than `min_n`, no character ngrams are used, and the model effectively reduces to Word2Vec." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "FastText(vocab=816, size=50, alpha=0.025)\n" ] } ], "source": [ "model = FastText.train(ft_home, lee_train_file, size=50, alpha=0.05, min_count=10)\n", "print(model)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Continuation of training with FastText models is not supported." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Saving/loading models" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Models can be saved and loaded via the `load` and `save` methods." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "FastText(vocab=816, size=50, alpha=0.025)\n" ] } ], "source": [ "model.save('saved_fasttext_model')\n", "loaded_model = FastText.load('saved_fasttext_model')\n", "print(loaded_model)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `save_word2vec_method` causes the vectors for ngrams to be lost. As a result, a model loaded in this way will behave as a regular word2vec model. \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Word vector lookup\n", "FastText models support vector lookups for out-of-vocabulary words by summing up character ngrams belonging to the word." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "True\n", "False\n", "[-0.47196999 -0.17528 0.19518 -0.31948 0.42835999 0.083281\n", " -0.15183 0.43415001 0.41251001 -0.10186 -0.54948997 0.12667\n", " 0.14816 -0.065804 -0.21105 -0.42304999 0.011494 0.53068\n", " -0.57410997 -0.53930998 -0.33537999 0.16154 0.12377 -0.23537\n", " -0.14629 -0.34777001 0.27304 0.20597 0.12581 0.36671999\n", " 0.32075 0.27351999 -0.13311 -0.04975 -0.52293003 -0.2766\n", " 0.11863 -0.009231 -0.66074997 0.018031 0.57145 0.35547\n", " 0.21588001 0.14431 -0.31865999 0.32027 0.55005002 0.19374999\n", " 0.36609 -0.54184002]\n", "[-0.4256132 -0.11521876 0.20166218 -0.34812452 0.30932881 0.02802653\n", " -0.18951961 0.4175721 0.41008326 -0.09026544 -0.50756483 0.07746826\n", " 0.09458492 0.01440104 -0.17157355 -0.35189211 0.00103696 0.50923289\n", " -0.49944138 -0.38334864 -0.34287725 0.18023167 0.18014225 -0.22820314\n", " -0.08267317 -0.31241801 0.26023088 0.20673522 0.07008089 0.31678561\n", " 0.31590793 0.16198126 -0.09287339 -0.1722331 -0.43232849 -0.26644917\n", " 0.10019614 0.08444232 -0.57080398 0.07581607 0.50339428 0.28109486\n", " 0.05507131 0.10023506 -0.17840675 0.18620458 0.42583067 0.00790601\n", " 0.2036875 -0.4925791 ]\n" ] } ], "source": [ "print('night' in model.wv.vocab)\n", "print('nights' in model.wv.vocab)\n", "print(model['night'])\n", "print(model['nights'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The word vector lookup operation only works if atleast one of the component character ngrams is present in the training corpus. For example -" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "ename": "KeyError", "evalue": "'all ngrams for word axe absent from model'", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mKeyError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0;31m# Raises a KeyError since none of the character ngrams of the word `axe` are present in the training data\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0mmodel\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m'axe'\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", "\u001b[0;32m/home/jayant/Projects/gensim/gensim/models/word2vec.pyc\u001b[0m in \u001b[0;36m__getitem__\u001b[0;34m(self, words)\u001b[0m\n\u001b[1;32m 1304\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1305\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0m__getitem__\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mwords\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1306\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mwv\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m__getitem__\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mwords\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1307\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1308\u001b[0m \u001b[0;34m@\u001b[0m\u001b[0mstaticmethod\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m/home/jayant/Projects/gensim/gensim/models/keyedvectors.pyc\u001b[0m in \u001b[0;36m__getitem__\u001b[0;34m(self, words)\u001b[0m\n\u001b[1;32m 363\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0misinstance\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mwords\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mstring_types\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 364\u001b[0m \u001b[0;31m# allow calls like trained_model['office'], as a shorthand for trained_model[['office']]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 365\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mword_vec\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mwords\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 366\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 367\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mvstack\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mword_vec\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mword\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mword\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mwords\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m/home/jayant/Projects/gensim/gensim/models/wrappers/fasttext.pyc\u001b[0m in \u001b[0;36mword_vec\u001b[0;34m(self, word, use_norm)\u001b[0m\n\u001b[1;32m 89\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mword_vec\u001b[0m \u001b[0;34m/\u001b[0m \u001b[0mlen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mngrams\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 90\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0;31m# No ngrams of the word are present in self.ngrams\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 91\u001b[0;31m \u001b[0;32mraise\u001b[0m \u001b[0mKeyError\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m'all ngrams for word %s absent from model'\u001b[0m \u001b[0;34m%\u001b[0m \u001b[0mword\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 92\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 93\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0minit_sims\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mreplace\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mFalse\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;31mKeyError\u001b[0m: 'all ngrams for word axe absent from model'" ] } ], "source": [ "# Raises a KeyError since none of the character ngrams of the word `axe` are present in the training data\n", "model['axe']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `in` operation works slightly differently from the original word2vec. It tests whether a vector for the given word exists or not, not whether the word is present in the word vocabulary. To test whether a word is present in the training word vocabulary -" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "False\n", "True\n" ] } ], "source": [ "# Tests if word present in vocab\n", "print(\"word\" in model.wv.vocab)\n", "# Tests if vector present for word\n", "print(\"word\" in model)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Similarity operations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Similarity operations work the same way as word2vec. Out-of-vocabulary words can also be used, provided they have atleast one character ngram present in the training data." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "False\n", "True\n" ] }, { "data": { "text/plain": [ "0.97944545147919504" ] }, "execution_count": 7, "output_type": "execute_result", "metadata": {} } ], "source": [ "print(\"nights\" in model.wv.vocab)\n", "print(\"night\" in model.wv.vocab)\n", "model.similarity(\"night\", \"nights\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Syntactically similar words generally have high similarity in FastText models, since a large number of the component char-ngrams will be the same. As a result, FastText generally does better at syntactic tasks than Word2Vec. A detailed comparison is provided [here](Word2Vec_FastText_Comparison.ipynb).\n", "\n", "Other similarity operations -" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[(u'12', 0.9912641048431396),\n", " (u'across', 0.990070641040802),\n", " (u'few', 0.9840448498725891),\n", " (u'deaths', 0.9840392470359802),\n", " (u'parts', 0.9835165739059448),\n", " (u'One', 0.9833074808120728),\n", " (u'running', 0.9832631349563599),\n", " (u'2', 0.982011079788208),\n", " (u'victory', 0.9806963801383972),\n", " (u'each', 0.9789758920669556)]" ] }, "execution_count": 8, "output_type": "execute_result", "metadata": {} } ], "source": [ "# The example training corpus is a toy corpus, results are not expected to be good, for proof-of-concept only\n", "model.most_similar(\"nights\")" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.97543218704680112" ] }, "execution_count": 9, "output_type": "execute_result", "metadata": {} } ], "source": [ "model.n_similarity(['sushi', 'shop'], ['japanese', 'restaurant'])" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'lunch'" ] }, "execution_count": 10, "output_type": "execute_result", "metadata": {} } ], "source": [ "model.doesnt_match(\"breakfast cereal dinner lunch\".split())" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[(u'against', 0.94775390625),\n", " (u'after', 0.923099935054779),\n", " (u'West', 0.910752534866333),\n", " (u'again', 0.903070867061615),\n", " (u'arrest', 0.8878517150878906),\n", " (u'suicide', 0.8750319480895996),\n", " (u'After', 0.8682445287704468),\n", " (u'innings', 0.859328031539917),\n", " (u'Test', 0.8542338609695435),\n", " (u'during', 0.852535605430603)]" ] }, "execution_count": 11, "output_type": "execute_result", "metadata": {} } ], "source": [ "model.most_similar(positive=['baghdad', 'england'], negative=['london'])" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'correct': [], 'incorrect': [], 'section': u'capital-common-countries'},\n", " {'correct': [], 'incorrect': [], 'section': u'capital-world'},\n", " {'correct': [], 'incorrect': [], 'section': u'currency'},\n", " {'correct': [], 'incorrect': [], 'section': u'city-in-state'},\n", " {'correct': [],\n", " 'incorrect': [(u'HE', u'SHE', u'HIS', u'HER'),\n", " (u'HIS', u'HER', u'HE', u'SHE')],\n", " 'section': u'family'},\n", " {'correct': [], 'incorrect': [], 'section': u'gram1-adjective-to-adverb'},\n", " {'correct': [], 'incorrect': [], 'section': u'gram2-opposite'},\n", " {'correct': [], 'incorrect': [], 'section': u'gram3-comparative'},\n", " {'correct': [(u'BIG', u'BIGGEST', u'GOOD', u'BEST')],\n", " 'incorrect': [(u'GOOD', u'BEST', u'BIG', u'BIGGEST')],\n", " 'section': u'gram4-superlative'},\n", " {'correct': [(u'GO', u'GOING', u'SAY', u'SAYING'),\n", " (u'LOOK', u'LOOKING', u'SAY', u'SAYING'),\n", " (u'RUN', u'RUNNING', u'SAY', u'SAYING'),\n", " (u'SAY', u'SAYING', u'LOOK', u'LOOKING')],\n", " 'incorrect': [(u'GO', u'GOING', u'LOOK', u'LOOKING'),\n", " (u'GO', u'GOING', u'RUN', u'RUNNING'),\n", " (u'LOOK', u'LOOKING', u'RUN', u'RUNNING'),\n", " (u'LOOK', u'LOOKING', u'GO', u'GOING'),\n", " (u'RUN', u'RUNNING', u'GO', u'GOING'),\n", " (u'RUN', u'RUNNING', u'LOOK', u'LOOKING'),\n", " (u'SAY', u'SAYING', u'GO', u'GOING'),\n", " (u'SAY', u'SAYING', u'RUN', u'RUNNING')],\n", " 'section': u'gram5-present-participle'},\n", " {'correct': [(u'AUSTRALIA', u'AUSTRALIAN', u'ISRAEL', u'ISRAELI'),\n", " (u'INDIA', u'INDIAN', u'ISRAEL', u'ISRAELI'),\n", " (u'INDIA', u'INDIAN', u'AUSTRALIA', u'AUSTRALIAN')],\n", " 'incorrect': [(u'AUSTRALIA', u'AUSTRALIAN', u'INDIA', u'INDIAN'),\n", " (u'ISRAEL', u'ISRAELI', u'AUSTRALIA', u'AUSTRALIAN'),\n", " (u'ISRAEL', u'ISRAELI', u'INDIA', u'INDIAN')],\n", " 'section': u'gram6-nationality-adjective'},\n", " {'correct': [],\n", " 'incorrect': [(u'GOING', u'WENT', u'SAYING', u'SAID'),\n", " (u'GOING', u'WENT', u'TAKING', u'TOOK'),\n", " (u'SAYING', u'SAID', u'TAKING', u'TOOK'),\n", " (u'SAYING', u'SAID', u'GOING', u'WENT'),\n", " (u'TAKING', u'TOOK', u'GOING', u'WENT'),\n", " (u'TAKING', u'TOOK', u'SAYING', u'SAID')],\n", " 'section': u'gram7-past-tense'},\n", " {'correct': [],\n", " 'incorrect': [(u'BUILDING', u'BUILDINGS', u'CHILD', u'CHILDREN'),\n", " (u'BUILDING', u'BUILDINGS', u'MAN', u'MEN'),\n", " (u'CHILD', u'CHILDREN', u'MAN', u'MEN'),\n", " (u'CHILD', u'CHILDREN', u'BUILDING', u'BUILDINGS'),\n", " (u'MAN', u'MEN', u'BUILDING', u'BUILDINGS'),\n", " (u'MAN', u'MEN', u'CHILD', u'CHILDREN')],\n", " 'section': u'gram8-plural'},\n", " {'correct': [], 'incorrect': [], 'section': u'gram9-plural-verbs'},\n", " {'correct': [(u'BIG', u'BIGGEST', u'GOOD', u'BEST'),\n", " (u'GO', u'GOING', u'SAY', u'SAYING'),\n", " (u'LOOK', u'LOOKING', u'SAY', u'SAYING'),\n", " (u'RUN', u'RUNNING', u'SAY', u'SAYING'),\n", " (u'SAY', u'SAYING', u'LOOK', u'LOOKING'),\n", " (u'AUSTRALIA', u'AUSTRALIAN', u'ISRAEL', u'ISRAELI'),\n", " (u'INDIA', u'INDIAN', u'ISRAEL', u'ISRAELI'),\n", " (u'INDIA', u'INDIAN', u'AUSTRALIA', u'AUSTRALIAN')],\n", " 'incorrect': [(u'HE', u'SHE', u'HIS', u'HER'),\n", " (u'HIS', u'HER', u'HE', u'SHE'),\n", " (u'GOOD', u'BEST', u'BIG', u'BIGGEST'),\n", " (u'GO', u'GOING', u'LOOK', u'LOOKING'),\n", " (u'GO', u'GOING', u'RUN', u'RUNNING'),\n", " (u'LOOK', u'LOOKING', u'RUN', u'RUNNING'),\n", " (u'LOOK', u'LOOKING', u'GO', u'GOING'),\n", " (u'RUN', u'RUNNING', u'GO', u'GOING'),\n", " (u'RUN', u'RUNNING', u'LOOK', u'LOOKING'),\n", " (u'SAY', u'SAYING', u'GO', u'GOING'),\n", " (u'SAY', u'SAYING', u'RUN', u'RUNNING'),\n", " (u'AUSTRALIA', u'AUSTRALIAN', u'INDIA', u'INDIAN'),\n", " (u'ISRAEL', u'ISRAELI', u'AUSTRALIA', u'AUSTRALIAN'),\n", " (u'ISRAEL', u'ISRAELI', u'INDIA', u'INDIAN'),\n", " (u'GOING', u'WENT', u'SAYING', u'SAID'),\n", " (u'GOING', u'WENT', u'TAKING', u'TOOK'),\n", " (u'SAYING', u'SAID', u'TAKING', u'TOOK'),\n", " (u'SAYING', u'SAID', u'GOING', u'WENT'),\n", " (u'TAKING', u'TOOK', u'GOING', u'WENT'),\n", " (u'TAKING', u'TOOK', u'SAYING', u'SAID'),\n", " (u'BUILDING', u'BUILDINGS', u'CHILD', u'CHILDREN'),\n", " (u'BUILDING', u'BUILDINGS', u'MAN', u'MEN'),\n", " (u'CHILD', u'CHILDREN', u'MAN', u'MEN'),\n", " (u'CHILD', u'CHILDREN', u'BUILDING', u'BUILDINGS'),\n", " (u'MAN', u'MEN', u'BUILDING', u'BUILDINGS'),\n", " (u'MAN', u'MEN', u'CHILD', u'CHILDREN')],\n", " 'section': 'total'}]" ] }, "execution_count": 13, "output_type": "execute_result", "metadata": {} } ], "source": [ "model.accuracy(questions='questions-words.txt')" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.7756597547632444" ] }, "execution_count": 15, "output_type": "execute_result", "metadata": {} } ], "source": [ "# Word Movers distance\n", "sentence_obama = 'Obama speaks to the media in Illinois'.lower().split()\n", "sentence_president = 'The president greets the press in Chicago'.lower().split()\n", "\n", "# Remove their stopwords.\n", "from nltk.corpus import stopwords\n", "stopwords = stopwords.words('english')\n", "sentence_obama = [w for w in sentence_obama if w not in stopwords]\n", "sentence_president = [w for w in sentence_president if w not in stopwords]\n", "\n", "# Compute WMD.\n", "distance = model.wmdistance(sentence_obama, sentence_president)\n", "distance" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2.0 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.6" } }, "nbformat": 4, "nbformat_minor": 0 }