{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Seminar 1: Fun with Word Embeddings (3 points)\n", "\n", "Today we gonna play with word embeddings: train our own little embedding, load one from gensim model zoo and use it to visualize text corpora.\n", "\n", "This whole thing is gonna happen on top of embedding dataset.\n", "\n", "__Requirements:__ `pip install --upgrade nltk gensim bokeh` , but only if you're running locally." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# download the data:\n", "!wget https://www.dropbox.com/s/obaitrix9jyu84r/quora.txt?dl=1 -O ./quora.txt\n", "# alternative download link: https://yadi.sk/i/BPQrUu1NaTduEw" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "import numpy as np\n", "\n", "data = list(open(\"./quora.txt\", encoding=\"utf-8\"))\n", "data[50]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Tokenization:__ a typical first step for an nlp task is to split raw data into words.\n", "The text we're working with is in raw format: with all the punctuation and smiles attached to some words, so a simple str.split won't do.\n", "\n", "Let's use __`nltk`__ - a library that handles many nlp tasks like tokenization, stemming or part-of-speech tagging." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from nltk.tokenize import WordPunctTokenizer\n", "tokenizer = WordPunctTokenizer()\n", "\n", "print(tokenizer.tokenize(data[50]))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# TASK: lowercase everything and extract tokens with tokenizer. \n", "# data_tok should be a list of lists of tokens for each line in data.\n", "\n", "data_tok = # YOUR CODE" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "assert all(isinstance(row, (list, tuple)) for row in data_tok), \"please convert each line into a list of tokens (strings)\"\n", "assert all(all(isinstance(tok, str) for tok in row) for row in data_tok), \"please convert each line into a list of tokens (strings)\"\n", "is_latin = lambda tok: all('a' <= x.lower() <= 'z' for x in tok)\n", "assert all(map(lambda l: not is_latin(l) or l.islower(), map(' '.join, data_tok))), \"please make sure to lowercase the data\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print([' '.join(row) for row in data_tok[:2]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Word vectors:__ as the saying goes, there's more than one way to train word embeddings. There's Word2Vec and GloVe with different objective functions. Then there's fasttext that uses character-level models to train word embeddings. \n", "\n", "The choice is huge, so let's start someplace small: __gensim__ is another nlp library that features many vector-based models incuding word2vec." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from gensim.models import Word2Vec\n", "model = Word2Vec(data_tok, \n", " size=32, # embedding vector size\n", " min_count=5, # consider words that occured at least 5 times\n", " window=5).wv # define context as a 5-word window around the target word" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# now you can get word vectors !\n", "model.get_vector('anything')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# or query similar words directly. Go play with it!\n", "model.most_similar('bread')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Using pre-trained model\n", "\n", "Took it a while, huh? Now imagine training life-sized (100~300D) word embeddings on gigabytes of text: wikipedia articles or twitter posts. \n", "\n", "Thankfully, nowadays you can get a pre-trained word embedding model in 2 lines of code (no sms required, promise)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import gensim.downloader as api\n", "model = api.load('glove-twitter-100')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.most_similar(positive=[\"coder\", \"money\"], negative=[\"brain\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing word vectors\n", "\n", "One way to see if our vectors are any good is to plot them. Thing is, those vectors are in 30D+ space and we humans are more used to 2-3D.\n", "\n", "Luckily, we machine learners know about __dimensionality reduction__ methods.\n", "\n", "Let's use that to plot 1000 most frequent words" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "words = sorted(model.vocab.keys(), \n", " key=lambda word: model.vocab[word].count,\n", " reverse=True)[:1000]\n", "\n", "print(words[::100])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# for each word, compute it's vector with model\n", "word_vectors = # YOUR CODE" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "assert isinstance(word_vectors, np.ndarray)\n", "assert word_vectors.shape == (len(words), 100)\n", "assert np.isfinite(word_vectors).all()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Linear projection: PCA\n", "\n", "The simplest linear dimensionality reduction method is __P__rincipial __C__omponent __A__nalysis.\n", "\n", "In geometric terms, PCA tries to find axes along which most of the variance occurs. 
The \"natural\" axes, if you wish.\n", "\n", "\n", "\n", "\n", "Under the hood, it attempts to decompose object-feature matrix $X$ into two smaller matrices: $W$ and $\\hat W$ minimizing _mean squared error_:\n", "\n", "$$\\|(X W) \\hat{W} - X\\|^2_2 \\to_{W, \\hat{W}} \\min$$\n", "- $X \\in \\mathbb{R}^{n \\times m}$ - object matrix (**centered**);\n", "- $W \\in \\mathbb{R}^{m \\times d}$ - matrix of direct transformation;\n", "- $\\hat{W} \\in \\mathbb{R}^{d \\times m}$ - matrix of reverse transformation;\n", "- $n$ samples, $m$ original dimensions and $d$ target dimensions;\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from sklearn.decomposition import PCA\n", "\n", "# map word vectors onto 2d plane with PCA. Use good old sklearn api (fit, transform)\n", "# after that, normalize vectors to make sure they have zero mean and unit variance\n", "word_vectors_pca = # YOUR CODE\n", "\n", "# and maybe MORE OF YOUR CODE here :)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "assert word_vectors_pca.shape == (len(word_vectors), 2), \"there must be a 2d vector for each word\"\n", "assert max(abs(word_vectors_pca.mean(0))) < 1e-5, \"points must be zero-centered\"\n", "assert max(abs(1.0 - word_vectors_pca.std(0))) < 1e-2, \"points must have unit variance\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Let's draw it!" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import bokeh.models as bm, bokeh.plotting as pl\n", "from bokeh.io import output_notebook\n", "output_notebook()\n", "\n", "def draw_vectors(x, y, radius=10, alpha=0.25, color='blue',\n", " width=600, height=400, show=True, **kwargs):\n", " \"\"\" draws an interactive plot for data points with auxilirary info on hover \"\"\"\n", " if isinstance(color, str): color = [color] * len(x)\n", " data_source = bm.ColumnDataSource({ 'x' : x, 'y' : y, 'color': color, **kwargs })\n", "\n", " fig = pl.figure(active_scroll='wheel_zoom', width=width, height=height)\n", " fig.scatter('x', 'y', size=radius, color='color', alpha=alpha, source=data_source)\n", "\n", " fig.add_tools(bm.HoverTool(tooltips=[(key, \"@\" + key) for key in kwargs.keys()]))\n", " if show: pl.show(fig)\n", " return fig" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "draw_vectors(word_vectors_pca[:, 0], word_vectors_pca[:, 1], token=words)\n", "\n", "# hover a mouse over there and see if you can identify the clusters" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing neighbors with t-SNE\n", "PCA is nice but it's strictly linear and thus only able to capture coarse high-level structure of the data.\n", "\n", "If we instead want to focus on keeping neighboring points near, we could use TSNE, which is itself an embedding method. Here you can read __[more on TSNE](https://distill.pub/2016/misread-tsne/)__." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.manifold import TSNE\n", "\n", "# map word vectors onto 2d plane with TSNE. 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.manifold import TSNE\n", "\n", "# map word vectors onto a 2d plane with TSNE. hint: use verbose=100 to see what it's doing.\n", "# normalize them just like with pca\n", "\n", "\n", "word_tsne = # YOUR CODE" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "scrolled": false }, "outputs": [], "source": [ "draw_vectors(word_tsne[:, 0], word_tsne[:, 1], color='green', token=words)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing phrases\n", "\n", "Word embeddings can also be used to represent short phrases. The simplest way is to take a (possibly weighted) __average__ of the vectors for all tokens in the phrase.\n", "\n", "This trick is useful for getting to know the data you're working with: finding outliers, clusters or other artefacts.\n", "\n", "Let's try this new hammer on our data!\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def get_phrase_embedding(phrase):\n", "    \"\"\"\n", "    Convert a phrase to a vector by aggregating its word embeddings. See description above.\n", "    \"\"\"\n", "    # 1. lowercase phrase\n", "    # 2. tokenize phrase\n", "    # 3. average word vectors for all words in tokenized phrase\n", "    #    skip words that are not in model's vocabulary\n", "    #    if all words are missing from vocabulary, return zeros\n", "\n", "    vector = np.zeros([model.vector_size], dtype='float32')\n", "\n", "    # YOUR CODE\n", "\n", "    return vector" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "vector = get_phrase_embedding(\"I'm very sure. This never happened to me before...\")\n", "\n", "assert np.allclose(vector[::10],\n", "                   np.array([ 0.31807372, -0.02558171, 0.0933293 , -0.1002182 , -1.0278689 ,\n", "                              -0.16621883, 0.05083408, 0.17989802, 1.3701859 , 0.08655966],\n", "                            dtype=np.float32))" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# let's only consider ~1k phrases for a first run\n", "chosen_phrases = data[::len(data) // 1000]\n", "\n", "# compute vectors for chosen phrases\n", "phrase_vectors = # YOUR CODE" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "assert isinstance(phrase_vectors, np.ndarray) and np.isfinite(phrase_vectors).all()\n", "assert phrase_vectors.shape == (len(chosen_phrases), model.vector_size)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# map vectors into 2d space with pca, tsne or another method of your choice\n", "# don't forget to normalize\n", "\n", "phrase_vectors_2d = TSNE(verbose=1000).fit_transform(phrase_vectors)\n", "\n", "phrase_vectors_2d = (phrase_vectors_2d - phrase_vectors_2d.mean(axis=0)) / phrase_vectors_2d.std(axis=0)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "draw_vectors(phrase_vectors_2d[:, 0], phrase_vectors_2d[:, 1],\n", "             phrase=[phrase[:50] for phrase in chosen_phrases],\n", "             radius=20,)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Finally, let's build a simple \"similar question\" engine with the phrase embeddings we've built." ] },
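{ "cell_type": "markdown", "metadata": {}, "source": [ "The `find_nearest` function you'll write below needs cosine similarity between a query vector and every row of `data_vectors`. As a refresher, here's a small self-contained sketch on toy arrays (the names `q`, `M` and `cosine` are made up for illustration)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Cosine similarity refresher on hypothetical toy arrays (illustration only).\n", "# find_nearest below applies the same idea to the query vector and data_vectors.\n", "import numpy as np\n", "\n", "q = np.random.RandomState(0).normal(size=8).astype('float32')       # toy 'query' vector\n", "M = np.random.RandomState(1).normal(size=(5, 8)).astype('float32')  # toy matrix of row vectors\n", "\n", "eps = 1e-9  # guards against division by zero for all-zero embeddings\n", "cosine = M @ q / (np.linalg.norm(M, axis=1) * np.linalg.norm(q) + eps)\n", "print(cosine)  # higher value = more similar" ] },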
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# compute vector embedding for all lines in data\n", "data_vectors = np.array([get_phrase_embedding(l) for l in data])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def find_nearest(query, k=10):\n", " \"\"\"\n", " given text line (query), return k most similar lines from data, sorted from most to least similar\n", " similarity should be measured as cosine between query and line embedding vectors\n", " hint: it's okay to use global variables: data and data_vectors. see also: np.argpartition, np.argsort\n", " \"\"\"\n", " # YOUR CODE\n", " \n", " return " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "results = find_nearest(query=\"How do i enter the matrix?\", k=10)\n", "\n", "print(''.join(results))\n", "\n", "assert len(results) == 10 and isinstance(results[0], str)\n", "assert results[0] == 'How do I get to the dark web?\\n'\n", "assert results[3] == 'What can I do to save the world?\\n'" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "find_nearest(query=\"How does Trump?\", k=10)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "find_nearest(query=\"Why don't i ask a question myself?\", k=10)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "__Now what?__\n", "* Try running TSNE on all data, not just 1000 phrases\n", "* See what other embeddings are there in the model zoo: `gensim.downloader.info()`\n", "* Take a look at [FastText](https://github.com/facebookresearch/fastText) embeddings\n", "* Optimize find_nearest with locality-sensitive hashing: use [nearpy](https://github.com/pixelogik/NearPy) or `sklearn.neighbors`." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" } }, "nbformat": 4, "nbformat_minor": 2 }