{ "cells": [ { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "984959cd4675237d2964d0036770360d", "grade": false, "grade_id": "cell-6f17f4f5348cd26a", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "# Part 2: Word Embeddings" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "09ae37aba80f8a3d948cadbed7991dee", "grade": false, "grade_id": "cell-426d00926ef2a948", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "# Execute this code block to install dependencies when running on colab\n", "try:\n", " import torch\n", "except:\n", " from os.path import exists\n", " from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag\n", " platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())\n", " cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\\.\\([0-9]*\\)\\.\\([0-9]*\\)$/cu\\1\\2/'\n", " accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'\n", "\n", " !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-1.0.0-{platform}-linux_x86_64.whl torchvision\n", "\n", "try: \n", " import torchbearer\n", "except:\n", " !pip install torchbearer\n", " \n", "try:\n", " import torchtext\n", "except:\n", " !pip install torchtext\n", " \n", "try:\n", " import spacy\n", "except:\n", " !pip install spacy\n", "\n", "try:\n", " spacy.load('en')\n", "except:\n", " !python -m spacy download en" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "d61a9db4eae318662e2d525b44722941", "grade": false, "grade_id": "cell-cc5d1e090cea2dd0", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "Word embeddings transform a one-hot encoded vector (a vector that is 0 in elements except one, which is 1) into a much smaller dimension vector of real numbers. The one-hot encoded vector is a *sparse vector*, whilst the real valued vector is a *dense vector*. \n", "\n", "The key concept in these word embeddings is that words that appear in similar _contexts_ appear nearby in the vector space, i.e. the Euclidean distance between these two word vectors is small. By context here, we mean the surrounding words. For example in the sentences \"I purchased some items at the shop\" and \"I purchased some items at the store\" the words 'shop' and 'store' appear in the same context and thus should be close together in vector space.\n", "\n", "We'll talk about some of the well-known algorithms for learning embeddings in the lectures, but you might have already heard of a popular model called *word2vec*, which was first published in a rejected ICLR submission (it has some pretty damning reviews, but also has thousands of citations!). In this lab we'll use pre-trained *GloVe* vectors. *GloVe* is a different algorithm for computing word vectors, although the outcome is similar to *word2vec*. These pre-trained embeddings have been trained on a gigantic corpus. We can use these pre-trained vectors within any of our models, with the idea that as they have already learned the context of each word they will give us a better starting point for our word vectors. This usually leads to faster training time and/or improved accuracy.\n", "\n", "In PyTorch, we use word vectors with the `nn.Embedding` layer, which takes a _**[sentence length, batch size]**_ tensor and transforms it into a _**[sentence length, batch size, embedding dimensions]**_ tensor. `nn.Embedding` layers can be trained from scratch, or they can be initialised (and optionally fixed) with pre-trained embedding data. The key thing to remember about an `nn.Embedding` is that it does not need to explicitly use a one-hot vector representation at any point; it just maps an index to a vector. This is important because it implies massive computational savings; more concretly an Emdedding is essentially a linear map in which the weight matrix of the linear layer is multiplied by a one-hot sparse-vector to produce a lower-dimensional (dense) output. This is exactly equivalent to just selecting the column of the weight matrix corresponding to the index represented by the sparse vector.\n", "\n", "In this part of the lab we won't be training any models; instead we'll be looking at the word embeddings and investigating a few interesting things we can do with them.\n", "\n", "## Loading the GloVe vectors\n", "\n", "First, we'll load the pre-trained GloVe vectors. The `name` field specifies what the vectors have been trained on, here the `6B` means a corpus of 6 billion words. The `dim` argument specifies the dimensionality of the word vectors. GloVe vectors are available in 50, 100, 200 and 300 dimensions. There is also a `42B` and `840B` glove vectors, however they are only available at 300 dimensions. The first time you run this it will take time as the vectors need to be downloaded:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "ad157a2a3b61280e0b6f17bd3ad12faa", "grade": false, "grade_id": "cell-aceafa53d8c3ee9e", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "import torchtext.vocab\n", "\n", "glove = torchtext.vocab.GloVe(name='6B', dim=100)\n", "\n", "print(f'There are {len(glove.itos)} words in the vocabulary')" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "6d6d3a9d7c029ce69fbc5b4b65d8dcf7", "grade": false, "grade_id": "cell-504c748c992c4304", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "As shown above, there are 400,000 unique words in the GloVe vocabulary. These are the most common words found in the corpus the vectors were trained on. **In these set of GloVe vectors, every single word is lower-case only.**\n", "\n", "`glove.vectors` is the actual tensor containing the values of the embeddings." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "5c17fc906f39bcc8ad50be640173a9fa", "grade": false, "grade_id": "cell-d581dee4f722cca5", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "glove.vectors.shape" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "a45800d85ac4b4b0b28b7c5786e7f611", "grade": false, "grade_id": "cell-8b7f0ea5ae20558b", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "We can see what word is associated with each row by checking the `itos` (int to string) list. \n", "\n", "Below implies that row 0 is the vector associated with the word 'the', row 1 for ',' (comma), row 2 for '.' (period), etc." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "056f9034fdc1ea90e5eb7f2aec199e04", "grade": false, "grade_id": "cell-c729cc0c78c40e6c", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "glove.itos[:10]" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "d54851b373cca20e645497dfe6dfc8f0", "grade": false, "grade_id": "cell-f0e2bc91e46d85ab", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "We can also use the `stoi` (string to int) dictionary, in which we input a word and receive the associated integer/index. If you try get the index of a word that is not in the vocabulary, you receive an error." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "4328787e224e941b4dcc53c2315626ee", "grade": false, "grade_id": "cell-a95267b694432ed1", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "glove.stoi['the']" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "b506644789934cd8b5a7afcc4e3ccc39", "grade": false, "grade_id": "cell-ba0310634f767896", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "We can get the vector of a word by first getting the integer associated with it and then indexing into the word embedding tensor with that index." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "bbc1b6a3ec1c87301f8b913034b5798c", "grade": false, "grade_id": "cell-ec58401ce38b8fe9", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "glove.vectors[glove.stoi['the']]" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "d9591f8c7576a4211a24259d5a08f495", "grade": false, "grade_id": "cell-3f6513c100744743", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "We'll be doing this a lot. __Use the following block to create a function that takes in word embeddings and a word and returns the associated vector.__ You should throw an error if the word doesn't exist in the vocabulary:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "checksum": "cc248cb64c4760e65292e3e7a8c4b3cd", "grade": true, "grade_id": "cell-665d7b4d1dd8f339", "locked": false, "points": 4, "schema_version": 1, "solution": true } }, "outputs": [], "source": [ "def get_vector(embeddings, word):\n", " # YOUR CODE HERE\n", " raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "ca9447056d8b427e99b7bd0372902d9c", "grade": false, "grade_id": "cell-9ff707bac4fd2556", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "As before, we use a word to get the associated vector." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "fdf27ce0601aef03ff1efdd73bd779ca", "grade": false, "grade_id": "cell-6311b14c496949f5", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "get_vector(glove, 'the')" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "db7798ec9825a17c0ccede676217ccff", "grade": false, "grade_id": "cell-5ce5018fe3a64aa5", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "## Similar Contexts\n", "\n", "Now to start looking at the context of different words. \n", "\n", "If we want to find the words similar to a certain input word, we first find the vector of this input word, then we scan through our vocabulary finding any vectors similar to this input word vector.\n", "\n", "The function below returns the closest 10 words to an input word vector:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "e0c40de99156af4a0ccbfde1e6c8c73c", "grade": false, "grade_id": "cell-7caf2bf37d7f6acd", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "import torch\n", "\n", "def closest_words(embeddings, vector, n=10):\n", " distances = [(w, torch.dist(vector, get_vector(embeddings, w)).item()) for w in embeddings.itos]\n", " return sorted(distances, key = lambda w: w[1])[:n]" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "67ad88185491caa67edc4c043b2dfffa", "grade": false, "grade_id": "cell-37ae589db1788914", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "Let's try it out with 'korea'. The closest word is the word 'korea' itself (not very interesting), however all of the words are related in some way. Pyongyang is the capital of North Korea, DPRK is the official name of North Korea, etc.\n", "\n", "Interestingly, we also get 'Japan' and 'China', implies that Korea, Japan and China are frequently talked about together in similar contexts. This makes sense as they are geographically situated near each other. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "3c53201c7b9717f670c96e6db2ce1fc0", "grade": false, "grade_id": "cell-f0315fd8a79e03c2", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "closest_words(glove, get_vector(glove, 'korea'))" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "cc290c1df8992513c81e9f6ab7d30812", "grade": false, "grade_id": "cell-aa3ce8cccc903a24", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "Looking at another country, India, we also get nearby countries: Thailand, Malaysia and Sri Lanka (as two separate words). Australia is relatively close to India (geographically), but Thailand and Malaysia are closer. So why is Australia closer to India in vector space? A plausible explaination is that India and Australia appear together in the context of [cricket](https://en.wikipedia.org/wiki/Cricket) matches." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "5b12751bffec3905109d45bdf5503e5f", "grade": false, "grade_id": "cell-e42121c7951db2d3", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "closest_words(glove, get_vector(glove, 'india'))" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "69ad20637fe1ba949c18119468a615e2", "grade": false, "grade_id": "cell-7aa75c705fe39e2b", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "We'll also create another function that will nicely print out the tuples returned by our closest_words function." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "d38ea85b7b473d4405eb968377727e9e", "grade": false, "grade_id": "cell-e0e61a784e7b5c03", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "def print_tuples(tuples):\n", " for w, d in tuples:\n", " print(f'({d:02.04f}) {w}') " ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "c314c093d0931d12e5e56b0473de07b4", "grade": false, "grade_id": "cell-158d48e3c33227e7", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "Using the `print_tuples` function use the code block below to print out the 10 neighbours of 'jaguar':" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "checksum": "d396d6fedd051278308325893bff1a7c", "grade": true, "grade_id": "cell-c883f0e0c2c194b4", "locked": false, "points": 2, "schema_version": 1, "solution": true } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "5efa0cdc77c06d8a495a6832ac7a2bd3", "grade": false, "grade_id": "cell-3964ab858fdb82d8", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "__Use the following block to explain the results.__ (hint: use Google if you don't know what any of the terms are!)" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "nbgrader": { "checksum": "ece0ec94dff4b1e0bef5bacbec0db9f1", "grade": true, "grade_id": "cell-ebf08e96b0987860", "locked": false, "points": 5, "schema_version": 1, "solution": true } }, "source": [ "YOUR ANSWER HERE" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "37e34f44b1798d1f74fb49210ef1da43", "grade": false, "grade_id": "cell-9f417bc0fb8d5287", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "## Analogies\n", "\n", "Another property of word embeddings is that we can apply standard arithmetic operations. This can give interesting results.\n", "\n", "We'll show an example of this first, and then explain it:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "895399907963f0f7a83448dec5165c5d", "grade": false, "grade_id": "cell-a0f613c8739e5a3c", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "def analogy(embeddings, word1, word2, word3, n=5):\n", " \n", " candidate_words = closest_words(embeddings, get_vector(embeddings, word2) - get_vector(embeddings, word1) + get_vector(embeddings, word3), n+3)\n", " \n", " candidate_words = [x for x in candidate_words if x[0] not in [word1, word2, word3]][:n]\n", " \n", " print(f'{word1} is to {word2} as {word3} is to...')\n", " \n", " return candidate_words" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "9994548d295d48b18a1767ebb094a3e6", "grade": false, "grade_id": "cell-c197612b5ec94762", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "print_tuples(analogy(glove, 'man', 'king', 'woman'))" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "5596f944b3ea0977744e61ac0dc9691e", "grade": false, "grade_id": "cell-a388955bf35cd584", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "This is the canonical example which shows off this property of word embeddings. So why does it work? Why does the vector of 'woman' added to the vector of 'king' minus the vector of 'man' give us 'queen'?\n", "\n", "If we think about it, the vector calculated from 'king' minus 'man' gives us a \"royalty vector\". This is the vector associated with traveling from a man to his royal counterpart, a king. If we add this \"royality vector\" to 'woman', this should travel to her royal equivalent, which is a queen!\n", "\n", "We can do this with other analogies too. For example, this gets an \"acting career vector\":" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "e3ba3a850f1feed61e5f60a7f998ddd3", "grade": false, "grade_id": "cell-733267c1033df3c0", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "print_tuples(analogy(glove, 'man', 'actor', 'woman'))" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "a865a1470a970a0636a4d3fae76cab84", "grade": false, "grade_id": "cell-3a873ba9cf3355e4", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "__Use the following block to compute a 'capital city vector' that predicts the capital of England based on the capital and name of another country__:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "checksum": "328c71eb5601837b5202b85934e9bb1c", "grade": true, "grade_id": "cell-9f399ee568e4ae5b", "locked": false, "points": 2, "schema_version": 1, "solution": true } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "33e404a915d22e882399fb954ab2ebef", "grade": false, "grade_id": "cell-3dbff5286a5d4d7e", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "__Use the following block to compute an 'musical genre vector' that predicts the genre of music played by Eminem based on another musician/band and their genre__:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "checksum": "5f8f424d440dd6dcd184b16b8c8be299", "grade": true, "grade_id": "cell-d04e647ab1bc99c8", "locked": false, "points": 2, "schema_version": 1, "solution": true } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.7" } }, "nbformat": 4, "nbformat_minor": 2 }