{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Neural Networks that write like Shakespeare\n", "\n", "In this chapter, we will:\n", "- Solve the challenge of arbitrary-length input\n", "- Discover the surprising power of averaged word vectors\n", "- Uncover the limitations of bag-of-words vectors\n", "- Use identity vectors to sum word embeddings\n", "- Learn the transition matrix\n", "- Learn to create useful sentence vectors\n", "- Implement forward propagation in Python\n", "- Implement forward propagation and backpropagation with arbitrary length\n", "\n", "> [Andrej Karpathy] \"There is something magical about recurrent neural networks\"." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Challenge of Arbitrary Length\n", "### Let's Model Arbitrarily Long Sequences of Data using Neural Networks!\n", "\n", "In this chapter, we'll expand on the intuition of an embedding conveying the meaning of a word by creating embeddings that convey the meaning of variable-length phrases and sentences. How could we create a single vector that holds an entire sequence of words, in the same way a word embedding holds information about a single word?\n", "\n", "If we concatenated or stacked the word embeddings of a sentence, we would get one long vector that represents the whole sentence.\n", "\n", "
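\n", "\n", "As a quick illustration of the stacking idea, here is a minimal sketch using made-up 3-dimensional embeddings (not the vectors we train later in this chapter):\n", "\n", "```python\n", "import numpy as np\n", "\n", "# made-up 3-dimensional word embeddings (illustration only)\n", "the, cat, sat, still = (np.random.randn(3) for _ in range(4))\n", "\n", "# concatenating the word vectors gives one long sentence vector\n", "sent_a = np.concatenate([the, cat, sat])         # shape (9,)\n", "sent_b = np.concatenate([the, cat, sat, still])  # shape (12,)\n", "print(sent_a.shape, sent_b.shape)\n", "```\n", "\n", "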
\n", "\n", "But this approach leaves something to be desired: longer sentences produce longer stacked vectors, so two sentences of different lengths end up with different shapes and cannot be compared directly.\n", "\n", "
\n", "\n", "In theory, these two sentences should be very similar, and comparing their vectors should indicate high similarity." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Do comparisons really matter?\n", "### Why should we care about whether we can compare two sentence vectors?\n", "\n", "Comparing two vectors is useful because it gives us a sense of what the network sees. We can't read the vectors directly, but we can check how similar they are, which lets us analyze the perspective of the neural network.\n", "\n", "We want to create sentence vectors that are useful for predicting things about the sentence. At a minimum, similar sentences should in theory result in similar vector representations.\n", "\n", "
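\n", "\n", "As a preview of the averaging idea discussed next, here is a minimal sketch with made-up 3-dimensional embeddings: averaging gives a fixed-size vector regardless of sentence length, so a similarity score is well defined.\n", "\n", "```python\n", "import numpy as np\n", "\n", "# made-up 3-dimensional word embeddings (illustration only)\n", "the, cat, sat, still = (np.random.randn(3) for _ in range(4))\n", "\n", "avg_a = np.mean([the, cat, sat], axis=0)         # shape (3,)\n", "avg_b = np.mean([the, cat, sat, still], axis=0)  # shape (3,)\n", "\n", "# same shape, so cosine similarity is now well defined\n", "cosine = avg_a.dot(avg_b) / (np.linalg.norm(avg_a) * np.linalg.norm(avg_b))\n", "print(avg_a.shape, avg_b.shape, cosine)\n", "```\n", "\n", "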
\n", "\n", "But how about averaging the word vectors from the sentence? This method has the following characteristics:\n", "- Word order does not matter: this technique loses word-order information.\n", "- Same-size result vector: We don't have to worry about alignment because every word vector has the same length.\n", "- The sentences \"The Cat Sat\" & \"The Cat Sat Still\" will have similar vector representations because most words are the same.\n", "- Even better, it's likely that \"A Dog Walked\" will be similar to \"A Cat Sat\" because **the actual word representations are similar**.\n", "\n", "We can conclude that the technique of averaging words is a strong one, although not perfect." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Surprising Power of Averaged Word Vectors\n", "### It's the amazingly powerful go-to tool for neural prediction\n", "\n", "To play with word embeddings, let's first train them:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import sys, random, math\n", "from collections import Counter\n", "import numpy as np\n", "np.random.seed(1)\n", "random.seed(1)\n", "\n", "IMDB_PATH = '/Users/mohamedakramzaytar/data/2019/Q2/kaggle/IMDB/reviews.txt'\n", "IMDB_LABEL_PATH = '/Users/mohamedakramzaytar/data/2019/Q2/kaggle/IMDB/labels.txt'" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "f = open(IMDB_PATH, 'r')\n", "raw_reviews = f.readlines()\n", "f.close()" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "# tokenize each review and tally tokens (only the set of distinct words is used below)\n", "tokens = list(map(lambda x: x.split(\" \"), raw_reviews))\n", "word_counter = Counter()\n", "for review in tokens:\n", " for token in review:\n", " word_counter[token] -= 1" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "_ = word_counter.most_common()\n", "vocab = list(set(map(lambda x: x[0], word_counter.most_common())))" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "word2index = {}\n", "for i, word in enumerate(vocab):\n", " word2index[word] = i" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "concatenated = list()\n", "input_dataset = list()\n", "for review in tokens:\n", " review_indices = list()\n", " for token in review:\n", " try:\n", " review_indices.append(word2index[token])\n", " concatenated.append(word2index[token])\n", " except:\n", " \"\"\n", " input_dataset.append(review_indices)\n", "concatenated = np.array(concatenated)\n", "random.shuffle(input_dataset)" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "lr, epochs = (.05, 2)\n", "# embedding size, context window size, and number of negative samples\n", "hidden_size, window, negative = 50, 2, 5\n", "W0 = (np.random.rand(len(vocab), hidden_size) - 0.5) * 0.2\n", "W1 = np.random.rand(len(vocab), hidden_size)*0\n", "layer_2_target = np.zeros(negative+1)\n", "layer_2_target[0] = 1" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "def similar(target='beautiful'):\n", " target_index = word2index[target]\n", " \n", " scores = Counter()\n", " for word, index in word2index.items():\n", " raw_difference = W0[index] - W0[target_index]\n", " squared_difference = raw_difference * raw_difference\n", " scores[word] = -math.sqrt(sum(squared_difference))\n", " return scores.most_common(10)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "def sigmoid(x):\n", " 
return 1 / (1 + np.exp(-x))" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Progress:0.0 [('terrible', -0.0), ('genital', -0.39326532865973723), ('midkoff', -0.3938513073457206), ('ethan', -0.3968892076212083), ('streep', -0.3979724210300289), ('bjorlin', -0.3981196278255343), ('injure', -0.3982249442410485), ('munchies', -0.39891945750964025), ('orgolini', -0.40061746244471097), ('vega', -0.4015861557887159)]\n", "Progress:0.1 [('terrible', -0.0), ('unique', -1.927270765664731), ('magnificent', -2.045146852188629), ('student', -2.062339523062212), ('deeply', -2.0687588315141556), ('fantastic', -2.08371892800822), ('rare', -2.0969774764288616), ('tough', -2.128273679772459), ('worthwhile', -2.149016740284797), ('dreadful', -2.1524088693570316)]\n", "Progress:0.2 [('terrible', -0.0), ('horrible', -3.108306431979032), ('brilliant', -3.396949380238907), ('fantastic', -3.4444297632941083), ('compelling', -3.6848868031292823), ('lame', -3.6906613668414274), ('remarkable', -3.7214051894822826), ('great', -3.749394484829831), ('superb', -3.764204679513025), ('tedious', -3.78968255882158)]\n", "Progress:0.3 [('terrible', -0.0), ('horrible', -3.498158178893929), ('tremendous', -3.761329799226001), ('fantastic', -3.7628561218791723), ('mediocre', -3.766116610681203), ('laughable', -3.7805893810816893), ('marvelous', -3.813188768185005), ('dreadful', -3.8134367604842216), ('breathtaking', -3.862370005681857), ('spectacular', -3.8694653810324677)]\n", "Progress:0.4 [('terrible', -0.0), ('horrible', -2.967964656933557), ('brilliant', -3.5420185698927042), ('pathetic', -3.7138289510002624), ('remarkable', -3.770923141465496), ('superb', -3.7941187646097148), ('fantastic', -3.8364360547450773), ('laughable', -3.845696114343685), ('great', -3.906869347583601), ('bad', -3.921186208070328)]\n", "Progress:0.5 [('terrible', -0.0), ('horrible', -3.1989896124441066), ('brilliant', -3.3647957125926413), ('superb', -3.446848086835425), ('pathetic', -3.6557981030906634), ('fantastic', -3.673566586441344), ('lame', -3.8944061428190775), ('remarkable', -4.004842620824482), ('marvelous', -4.053430718224174), ('great', -4.0758362861596575)]\n", "Progress:0.6 [('terrible', -0.0), ('horrible', -2.7739353122626755), ('fantastic', -3.050940700260329), ('brilliant', -3.232619045738981), ('pathetic', -3.5566710306301847), ('superb', -3.713381489932097), ('lame', -3.739970542211976), ('marvelous', -3.841008362820846), ('haunting', -3.846184382564499), ('magnificent', -3.847871555156092)]\n", "Progress:0.7 [('terrible', -0.0), ('horrible', -2.5027876972527996), ('brilliant', -3.387431880847732), ('fantastic', -3.805817922179162), ('superb', -3.8359369916740205), ('magnificent', -3.8404436166584484), ('pathetic', -3.9100961380425856), ('laughable', -3.9562924360256875), ('marvelous', -3.9658570094816135), ('lame', -4.01274210326551)]\n", "Progress:0.8 [('terrible', -0.0), ('horrible', -3.3525614136062547), ('fantastic', -3.787226665684703), ('dire', -3.812683534917237), ('brilliant', -3.824813307264604), ('dreadful', -3.9178990948548855), ('laughable', -3.9454124321935535), ('fabulous', -3.9960864707094257), ('miserable', -4.04534807188886), ('phenomenal', -4.051791600383227)]\n", "Progress:0.9 [('terrible', -0.0), ('horrible', -2.8685809548008545), ('brilliant', -3.2730169748091433), ('horrendous', -3.675608725486543), ('dreadful', -3.7564170172164517), ('phenomenal', -3.7578615711368135), ('bad', -3.7787265252348763), ('marvelous', 
-3.815732576454664), ('fantastic', -3.9545000164096757), ('great', -3.975672000914959)]\n", "[('terrible', -0.0), ('horrible', -2.7898917830688132), ('brilliant', -3.3469631230517645), ('phenomenal', -3.690963614102198), ('pathetic', -3.795400367744756), ('marvelous', -3.9134681822713833), ('superb', -3.920458050440294), ('masterful', -3.979517882443988), ('bad', -3.9993630029279785), ('mediocre', -4.138258710066284)]\n" ] } ], "source": [ "for review_i, review in enumerate(input_dataset * epochs):\n", " for target_i in range(len(review)):\n", " target_samples = [review[target_i]] + list(concatenated[(np.random.rand(negative)*len(concatenated)).astype('int').tolist()])\n", " left_context = review[max(0, target_i-window):target_i]\n", " right_context = review[target_i+1: min(len(review), target_i+window)]\n", " layer_1 = np.mean(W0[left_context+right_context], axis=0)\n", " layer_2 = sigmoid(layer_1.dot(W1[target_samples].T))\n", " layer_2_delta = layer_2 - layer_2_target\n", " layer_1_delta = layer_2_delta.dot(W1[target_samples])\n", " W0[left_context+right_context] -= layer_1_delta*lr\n", " W1[target_samples] -= np.outer(layer_2_delta, layer_1)*lr\n", " if(review_i % 5000 == 0):\n", " print('\\rProgress:'+str(review_i/float(len(input_dataset)*epochs)) + \" \" + str(similar('terrible')))\n", "print(similar('terrible'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's experiment with average sentence embeddings:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "norms = np.sum(W0 * W0, axis=1)\n", "norms.resize(norms.shape[0], 1)\n", "normed_weights = W0 * norms" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "def make_sent_vect(words):\n", " # 1st part get index of each word\n", " # 2nd part filters only words in vocab\n", " indices = list(map(lambda x: word2index[x], filter(lambda x: x in word2index, words)))\n", " return np.mean(normed_weights[indices], axis=0)" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "reviews_to_vectors = list()\n", "for review in tokens:\n", " reviews_to_vectors.append(make_sent_vect(review))\n", "reviews_to_vectors = np.array(reviews_to_vectors)" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "def most_similar_reviews(review):\n", " v = make_sent_vect(review)\n", " scores = Counter()\n", " for i, val in enumerate(reviews_to_vectors.dot(v)):\n", " scores[i] = val\n", " most_similar = list()\n", " for idx, score in scores.most_common(3):\n", " most_similar.append(raw_reviews[idx][0:40])\n", " return most_similar" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['i have never seen such terrible performa',\n", " 'i think this show is definitely the grea',\n", " 'if you havn t seen this movie i highly ']" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "most_similar_reviews(['beautiful', 'amazing'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There appear to be interesting statistical information within these vectors, such as when negative & embeddings cluster together." 
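 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a further probe of this clustering, we can compare two short phrases directly using the `make_sent_vect` helper defined above. This is a minimal sketch (not executed here); it assumes the words appear in the vocabulary, and `make_sent_vect` silently drops any that don't:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def phrase_similarity(phrase_a, phrase_b):\n", "    # cosine similarity between two averaged sentence vectors\n", "    a, b = make_sent_vect(phrase_a), make_sent_vect(phrase_b)\n", "    return a.dot(b) / (np.sqrt(a.dot(a)) * np.sqrt(b.dot(b)))\n", "\n", "print(phrase_similarity(['terrible', 'boring'], ['awful', 'dull']))\n", "print(phrase_similarity(['terrible', 'boring'], ['beautiful', 'amazing']))"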
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## How is information stored in these embeddings?\n", "### When you average word embeddings, average shapes remain\n", "\n", "To understand the meaning of embeddings, let's visualize a word vector as a squiggly line, like this one:\n", "\n", "
\n", "\n", "Instead of thinking of a vector as just a list of numbers, we can visualize it as a line that passes through low and high points corresponding to the low and high numbers in the vector. If we selected several words from the corpus, they might look like this:\n", "\n", "
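\n", "\n", "If you want to reproduce this kind of picture yourself, here is a minimal sketch (assuming `matplotlib` is installed and that these words appear in the vocabulary), plotting a few rows of the trained `W0` embeddings from above:\n", "\n", "```python\n", "import matplotlib.pyplot as plt\n", "\n", "# draw each embedding as a squiggly line over its 50 dimensions\n", "for word in ['terrible', 'boring', 'beautiful']:\n", "    plt.plot(W0[word2index[word]], label=word)\n", "plt.legend()\n", "plt.show()\n", "```\n", "\n", "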
\n", "\n", "By considering the similarities between various words, we may notice that each word's shape is unique. However, \"terrible\" & \"boring\" have certain similarities in their shapes. The most interesting thing is that parts of these squiggles carry meaning in and of themselves.\n", "\n", "Consider the negative words, for example: there is nothing magical about the spike they share. If we retrained the network, that spike would appear somewhere else. What gives the shape its \"negative\" sensation is simply that all negative words have it.\n", "\n", "These shapes were molded by training so that every part of the curve conveys a certain meaning that proves useful for the correlation summarization task. When we average the curves of multiple word embeddings, the meanings they share hold up, while opposite meanings cancel out or get weakened." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## How does a Neural Network use embeddings?\n", "### Neural Networks detect the curves that have correlation with the target label\n", "\n", "A neural network simply looks for bumps and variations in the input layer and compresses that information to predict the target label (missing word, sentiment, ...).\n", "\n", "Through training, the network shapes the word embeddings in such a way that it clusters them, all with the goal of making the target label easier to predict. It compresses the signal in a way that minimizes its loss function (i.e., correctly predicts the target label).\n", "\n", "Now think about longer sentences: the longer the sentence, the more its representation gets averaged toward zero. In other words, averaging word embeddings is a bit mushy; the longer the sentence, the more likely its average is to flatten out into a nearly straight line.\n", "\n", "In short, storing a sentence in a fixed-length averaged vector doesn't degrade gracefully as sentences grow. That being said, typical sentences aren't that long anyway." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Limitations of Bag-of-Words Vectors\n", "### Order becomes irrelevant when you average word embeddings\n", "\n", "The biggest issue with averaged word embeddings is that they lose the order information. By losing order, we also lose grammar and syntax.\n", "\n", "This way of averaging or summing a bunch of words to get a sentence/text representation is called the bag-of-words approach: the representation acts like a bag, because order isn't preserved.\n", "\n", "We want a way to build sentence vectors that preserves order and yields different representations for different word orders. More importantly, the way in which order changes the resulting vector **should be learned**. By learning it, the neural network can find an order representation that minimizes its loss function and captures interesting properties of word order in general.\n", "\n", "One of the most famous and successful ways of generating vectors for sequences is the Recurrent Neural Network (RNN). To build up to it, start with a property of identity matrices: if you multiply any vector by an identity matrix, you get the same vector back." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using Identity Vectors to sum word embeddings\n", "### Let's implement the same logic using a different approach\n", "\n", "
\n", "\n", "This is the standard way of simply summing the word embeddings to form a fixed-length sentence embedding.\n", "\n", "
\n", "\n", "The example above adds another step to the summation: it multiplies each partial sum by the identity matrix before adding the next word. We're not doing anything special here; this identity-based way of summing the elements yields the same resulting vector as the first example. But consider this: **if the matrices used are anything but the identity matrix, changing the order of the word embeddings will change the resulting vector**.\n", "\n", "Let's see this in code:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Matrices that change absolutely nothing\n", "### Let's create sentence embeddings using the Identity matrix in Python " ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "import numpy as np" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "a = np.array([1,2,3])\n", "b = np.array([.1, .2, .3])\n", "c = np.array([-1, -.5, 0])\n", "d = np.array([0, 0, 0])" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[1., 0., 0.],\n", " [0., 1., 0.],\n", " [0., 0., 1.]])" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "identity = np.eye(3)\n", "identity" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(array([1., 2., 3.]),\n", " array([0.1, 0.2, 0.3]),\n", " array([-1. , -0.5, 0. ]),\n", " array([0., 0., 0.]))" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "a.dot(identity), b.dot(identity), c.dot(identity), d.dot(identity)" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "this = np.array([2, 4, 6])\n", "movie = np.array([10, 10, 10])\n", "rocks = np.array([1, 1, 1])" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[13 15 17]\n", "[13. 15. 17.]\n" ] } ], "source": [ "print(this + movie + rocks)\n", "print(((this.dot(identity) + movie).dot(identity) + rocks).dot(identity))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We got the same results because the identity matrix is a very special type of matrix. In fact, the identity matrix is the only matrix guaranteed to return exactly the same vector as the direct sum; no other matrix has this guarantee." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Learning the Transition Matrices\n", "### What if you allowed the identity matrices to change to minimize the loss?\n", "\n", "Let's remember the goal: \"generating sequence embeddings that cluster according to the meaning of the sentence\". These sentence embeddings should care about word order. Our hypothesis is that if we keep dotting the partial sums with matrices but turn those identity matrices into **weight matrices** that are learned, the network can preserve order and extract meaningful sentence representations that make use of word order.\n", "\n", "Whenever we train a neural network, we need to set a task that forces it to learn useful representations. In our case, what task will allow the network to extract useful sentence embeddings? The answer is to **train a network that takes a list of words & learns to predict the next word**. In other words, a **language model**.\n", "\n", "
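\n", "\n", "Before building that network, here is a quick sanity check of the claim that motivates it: with the identity matrix, word order doesn't affect the result, but with any other matrix it does. A minimal sketch, reusing the `this`, `movie`, and `rocks` vectors from the cells above (the random matrix is only for illustration, not a trained transition matrix):\n", "\n", "```python\n", "W = np.random.rand(3, 3)  # any non-identity matrix\n", "\n", "order_1 = ((this.dot(W) + movie).dot(W) + rocks).dot(W)\n", "order_2 = ((rocks.dot(W) + movie).dot(W) + this).dot(W)\n", "print(order_1)\n", "print(order_2)  # differs from order_1: word order now matters\n", "```\n", "\n", "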
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Learning to Create Useful Sentence Vectors\n", "### Create the Sentence Vector, Make a Prediction, & Modify the sentence vector via its parts.\n", "\n", "We think about creating a sentence embedding, and use it to predict the next word, and then modifying the respective parts (word embeddings, weight matrices) to make the prediction a little bit better. Here are the inner components of the neural network:\n", "\n", "
\n", "\n", "RNNs process data in two steps:\n", "1. Create the input sentence embedding using the weight matrices and the word vector representations.\n", "2. Use that sentence embedding to predict, for each token in the vocabulary, the probability that it is the next word.\n", "\n", "We will initialize the weight matrices as identity matrices; however, they will change throughout training, just like the word embeddings & the weights of the dense classification layer. We'll also constrain the transition matrices to be the same, meaning:\n", "- $W_0=W_1=W_2...$\n", "- Whatever logic is learned in one transition is used in every other transition." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Forward Propagation in Python\n", "### Let's take this idea & see how to perform a forward propagation for our network\n", "\n", "Now that we have a conceptual idea of what we're trying to build, let's check out a toy version in Python. First, let's create the weights (we use a limited vocab of `9` words):" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "import numpy as np" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "def softmax(v):\n", " v = np.atleast_2d(v)\n", " return np.exp(v)/np.sum(np.exp(v), axis=1, keepdims=True)" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [], "source": [ "# initial word embeddings\n", "word_vects = {}\n", "word_vects['yankees'] = np.array([[0., 0., 0.]])\n", "word_vects['bears'] = np.array([[0., 0., 0.]])\n", "word_vects['braves'] = np.array([[0., 0., 0.]])\n", "word_vects['red'] = np.array([[0., 0., 0.]])\n", "word_vects['sox'] = np.array([[0., 0., 0.]])\n", "word_vects['lose'] = np.array([[0., 0., 0.]])\n", "word_vects['defeat'] = np.array([[0., 0., 0.]])\n", "word_vects['beat'] = np.array([[0., 0., 0.]])\n", "word_vects['tie'] = np.array([[0., 0., 0.]])" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "# sentence embedding to output classification weights\n", "sent2output = np.random.rand(3, len(word_vects))" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "# initial weight matrices\n", "identity = np.eye(3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The above code creates 3 sets of weights:\n", "- A dictionary of word embeddings.\n", "- The identity or transition matrix.\n", "- A classification layer.\n", "\n", "The classification layer serves to predict the next word over the word vocab, using the sentence vector representation. With these tools, forward propagation is trivial. Let's take an example:" ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [], "source": [ "layer_0 = word_vects['red']\n", "layer_1 = layer_0.dot(identity) + word_vects['sox']\n", "layer_2 = layer_1.dot(identity) + word_vects['defeat']\n", "pred = softmax(layer_2.dot(sent2output))" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[0.11111111, 0.11111111, 0.11111111, 0.11111111, 0.11111111,\n", " 0.11111111, 0.11111111, 0.11111111, 0.11111111]])" ] }, "execution_count": 33, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pred" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## How do you backpropagate into this?\n", "### It might seem trickier, but we use the same steps we already learned\n", "\n", "
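\n", "\n", "Before working through the code, here is the chain rule written out for this three-word example (a sketch that mirrors the code below, writing $W$ for the transition matrix `identity` and $W_{out}$ for `sent2output`):\n", "\n", "$$\\delta_{pred} = \\hat{y} - y, \\qquad \\delta_{2} = \\delta_{pred} W_{out}^\\top, \\qquad \\delta_{1} = \\delta_{2} W^\\top, \\qquad \\delta_{0} = \\delta_{1} W^\\top$$\n", "\n", "Because each word embedding was simply added into its layer, it receives that layer's delta unchanged.\n", "\n", "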
\n", "\n", "To calculate the gradients, we will go back through the identity matrix and further:\n", "- We calculate the gradients over the word embeddings injected in each layer. \n", "- When we add two vectors together during forward propagation, we will backpropagate the same gradient into both sides of addition. \n", "- When we generate `layer_2_delta`, we backpropagate it twice, once across the identity matrix to create `layer_1_delta` and again to `word_vectors[word]`." ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [], "source": [ "y = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0]) # one-hot vector for yankees\n", "pred_delta = pred - y\n", "layer_2_delta = pred_delta.dot(sent2output.T)\n", "defeat_delta = layer_2_delta * 1\n", "layer_1_delta = layer_2_delta.dot(identity.T)\n", "sox_delta = layer_1_delta * 1\n", "layer_0_delta = layer_1_delta.dot(identity.T) # which is `red_delta`" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [], "source": [ "lr = .01\n", "word_vects['red'] -= layer_0_delta*lr\n", "word_vects['sox'] -= sox_delta*lr\n", "word_vects['defeat'] -= defeat_delta*lr\n", "identity -= np.outer(layer_0, layer_1_delta)*lr\n", "identity -= np.outer(layer_1, layer_2_delta)*lr\n", "sent2output -= np.outer(layer_2, pred_delta)*lr" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'yankees': array([[0., 0., 0.]]),\n", " 'bears': array([[0., 0., 0.]]),\n", " 'braves': array([[0., 0., 0.]]),\n", " 'red': array([[ 0.00127242, -0.00064627, 0.00298193]]),\n", " 'sox': array([[ 0.00127242, -0.00064627, 0.00298193]]),\n", " 'lose': array([[0., 0., 0.]]),\n", " 'defeat': array([[ 0.00127242, -0.00064627, 0.00298193]]),\n", " 'beat': array([[0., 0., 0.]]),\n", " 'tie': array([[0., 0., 0.]])}" ] }, "execution_count": 36, "metadata": {}, "output_type": "execute_result" } ], "source": [ "word_vects" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Let's Train it!\n", "### We have all the tools; let's train the network on a toy corpus" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's first train the network on a toy dataset called `Babi dataset`. This dataset is a synthetically generated Q/A corpus to teach machines how to answer questions about an environment.\n", "\n", "First let's download and decompress the Babi dataset using bash commands:" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "--2020-11-13 15:44:18-- http://www.thespermwhale.com/jaseweston/babi/tasks_1-20_v1-1.tar.gz\n", "Resolving www.thespermwhale.com (www.thespermwhale.com)... 69.65.3.211\n", "Connecting to www.thespermwhale.com (www.thespermwhale.com)|69.65.3.211|:80... connected.\n", "HTTP request sent, awaiting response... 
200 OK\n", "Length: 1282454 (1.2M) [application/x-gzip]\n", "Saving to: ‘tasks_1-20_v1-1.tar.gz’\n", "\n", "tasks_1-20_v1-1.tar 100%[===================>] 1.22M 389KB/s in 3.2s \n", "\n", "2020-11-13 15:44:22 (389 KB/s) - ‘tasks_1-20_v1-1.tar.gz’ saved [1282454/1282454]\n", "\n" ] } ], "source": [ "!wget http://www.thespermwhale.com/jaseweston/babi/tasks_1-20_v1-1.tar.gz" ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [], "source": [ "!mv tasks_1-20_v1-1.tar.gz static/data/" ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "x tasksv11/\n", "x tasksv11/en/\n", "x tasksv11/LICENSE\n", "x tasksv11/README\n", "x tasksv11/shuffled/\n", "x tasksv11/shuffled/qa10_indefinite-knowledge_test.txt\n", "x tasksv11/shuffled/qa10_indefinite-knowledge_train.txt\n", "x tasksv11/shuffled/qa11_basic-coreference_test.txt\n", "x tasksv11/shuffled/qa11_basic-coreference_train.txt\n", "x tasksv11/shuffled/qa12_conjunction_test.txt\n", "x tasksv11/shuffled/qa12_conjunction_train.txt\n", "x tasksv11/shuffled/qa13_compound-coreference_test.txt\n", "x tasksv11/shuffled/qa13_compound-coreference_train.txt\n", "x tasksv11/shuffled/qa14_time-reasoning_test.txt\n", "x tasksv11/shuffled/qa14_time-reasoning_train.txt\n", "x tasksv11/shuffled/qa15_basic-deduction_test.txt\n", "x tasksv11/shuffled/qa15_basic-deduction_train.txt\n", "x tasksv11/shuffled/qa16_basic-induction_test.txt\n", "x tasksv11/shuffled/qa16_basic-induction_train.txt\n", "x tasksv11/shuffled/qa17_positional-reasoning_test.txt\n", "x tasksv11/shuffled/qa17_positional-reasoning_train.txt\n", "x tasksv11/shuffled/qa18_size-reasoning_test.txt\n", "x tasksv11/shuffled/qa18_size-reasoning_train.txt\n", "x tasksv11/shuffled/qa19_path-finding_test.txt\n", "x tasksv11/shuffled/qa19_path-finding_train.txt\n", "x tasksv11/shuffled/qa1_single-supporting-fact_test.txt\n", "x tasksv11/shuffled/qa1_single-supporting-fact_train.txt\n", "x tasksv11/shuffled/qa20_agents-motivations_test.txt\n", "x tasksv11/shuffled/qa20_agents-motivations_train.txt\n", "x tasksv11/shuffled/qa2_two-supporting-facts_test.txt\n", "x tasksv11/shuffled/qa2_two-supporting-facts_train.txt\n", "x tasksv11/shuffled/qa3_three-supporting-facts_test.txt\n", "x tasksv11/shuffled/qa3_three-supporting-facts_train.txt\n", "x tasksv11/shuffled/qa4_two-arg-relations_test.txt\n", "x tasksv11/shuffled/qa4_two-arg-relations_train.txt\n", "x tasksv11/shuffled/qa5_three-arg-relations_test.txt\n", "x tasksv11/shuffled/qa5_three-arg-relations_train.txt\n", "x tasksv11/shuffled/qa6_yes-no-questions_test.txt\n", "x tasksv11/shuffled/qa6_yes-no-questions_train.txt\n", "x tasksv11/shuffled/qa7_counting_test.txt\n", "x tasksv11/shuffled/qa7_counting_train.txt\n", "x tasksv11/shuffled/qa8_lists-sets_test.txt\n", "x tasksv11/shuffled/qa8_lists-sets_train.txt\n", "x tasksv11/shuffled/qa9_simple-negation_test.txt\n", "x tasksv11/shuffled/qa9_simple-negation_train.txt\n", "x tasksv11/en/qa10_indefinite-knowledge_test.txt\n", "x tasksv11/en/qa10_indefinite-knowledge_train.txt\n", "x tasksv11/en/qa11_basic-coreference_test.txt\n", "x tasksv11/en/qa11_basic-coreference_train.txt\n", "x tasksv11/en/qa12_conjunction_test.txt\n", "x tasksv11/en/qa12_conjunction_train.txt\n", "x tasksv11/en/qa13_compound-coreference_test.txt\n", "x tasksv11/en/qa13_compound-coreference_train.txt\n", "x tasksv11/en/qa14_time-reasoning_test.txt\n", "x tasksv11/en/qa14_time-reasoning_train.txt\n", "x 
tasksv11/en/qa15_basic-deduction_test.txt\n", "x tasksv11/en/qa15_basic-deduction_train.txt\n", "x tasksv11/en/qa16_basic-induction_test.txt\n", "x tasksv11/en/qa16_basic-induction_train.txt\n", "x tasksv11/en/qa17_positional-reasoning_test.txt\n", "x tasksv11/en/qa17_positional-reasoning_train.txt\n", "x tasksv11/en/qa18_size-reasoning_test.txt\n", "x tasksv11/en/qa18_size-reasoning_train.txt\n", "x tasksv11/en/qa19_path-finding_test.txt\n", "x tasksv11/en/qa19_path-finding_train.txt\n", "x tasksv11/en/qa1_single-supporting-fact_test.txt\n", "x tasksv11/en/qa1_single-supporting-fact_train.txt\n", "x tasksv11/en/qa20_agents-motivations_test.txt\n", "x tasksv11/en/qa20_agents-motivations_train.txt\n", "x tasksv11/en/qa2_two-supporting-facts_test.txt\n", "x tasksv11/en/qa2_two-supporting-facts_train.txt\n", "x tasksv11/en/qa3_three-supporting-facts_test.txt\n", "x tasksv11/en/qa3_three-supporting-facts_train.txt\n", "x tasksv11/en/qa4_two-arg-relations_test.txt\n", "x tasksv11/en/qa4_two-arg-relations_train.txt\n", "x tasksv11/en/qa5_three-arg-relations_test.txt\n", "x tasksv11/en/qa5_three-arg-relations_train.txt\n", "x tasksv11/en/qa6_yes-no-questions_test.txt\n", "x tasksv11/en/qa6_yes-no-questions_train.txt\n", "x tasksv11/en/qa7_counting_test.txt\n", "x tasksv11/en/qa7_counting_train.txt\n", "x tasksv11/en/qa8_lists-sets_test.txt\n", "x tasksv11/en/qa8_lists-sets_train.txt\n", "x tasksv11/en/qa9_simple-negation_test.txt\n", "x tasksv11/en/qa9_simple-negation_train.txt\n" ] } ], "source": [ "!tar -xvf static/data/tasks_1-20_v1-1.tar.gz" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With some simple Python, we can open, clean, & preprocess the data to feed it to the RNN:" ] }, { "cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [], "source": [ "import sys, random, math\n", "from collections import Counter\n", "import numpy as np" ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [], "source": [ "f = open('static/data/tasksv11/en/qa1_single-supporting-fact_train.txt', 'r')\n", "raw = f.readlines()\n", "f.close()" ] }, { "cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [], "source": [ "sentences = list()\n", "for line in raw[:1000]:\n", " sentences.append(line.lower().replace(\"\\n\", \"\").split(\" \")[1:])" ] }, { "cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[['mary', 'moved', 'to', 'the', 'bathroom.'],\n", " ['john', 'went', 'to', 'the', 'hallway.'],\n", " ['where', 'is', 'mary?', '\\tbathroom\\t1']]" ] }, "execution_count": 43, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sentences[:3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This dataset contains a lot of simple statements and questions. Each Question is followed by the correct answer. When used in the context of QA, the neural network reads the statements then answers the question. \n", "\n", "For now, our network will attempt to finish each sentence when given one or more starting words." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setting Things Up\n", "### Before we can create matrices, we need to learn how many parameters we have." 
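, "\n", "\n", "Concretely (a quick accounting based on the cells below): with a vocabulary of $|V|$ words and an embedding size of $d$, the model has $|V| \\times d$ word-embedding parameters, a $d \\times d$ transition matrix `W0`, a $d \\times |V|$ decoder matrix `W1`, and a length-$d$ starting (empty-sentence) embedding. The vocabulary size $|V|$ comes from the dataset itself, which is why we build `vocab` and `word2index` first; the shapes printed a few cells down show $d=10$ and $|V|=82$ for this slice of the data."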
] }, { "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [], "source": [ "vocab = set()\n", "for sent in sentences:\n", " for word in sent:\n", " vocab.add(word)\n", "vocab = list(vocab)" ] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [], "source": [ "word2index = {}\n", "for i, word in enumerate(vocab):\n", " word2index[word] = i" ] }, { "cell_type": "code", "execution_count": 46, "metadata": {}, "outputs": [], "source": [ "def sent2indices(sent):\n", " indices = list()\n", " for word in sent:\n", " indices.append(word2index[word])\n", " return indices" ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [], "source": [ "def softmax(v):\n", " e_v = np.exp(v - np.max(v))\n", " return e_v / e_v.sum(axis=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We set the word embedding size to `10`:" ] }, { "cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [], "source": [ "np.random.seed(1)" ] }, { "cell_type": "code", "execution_count": 49, "metadata": {}, "outputs": [], "source": [ "embed_size = 10\n", "word_embeddings = (np.random.rand(len(vocab), embed_size) - 0.5) * 0.1\n", "W0 = np.eye(embed_size)\n", "empty_sentence_embedding = np.zeros(embed_size)\n", "W1 = (np.random.rand(embed_size, len(vocab)) - 0.5) * 0.1\n", "# target vocab vectors for the loss function\n", "y_hots = np.eye(len(vocab))" ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "((10, 10), (10, 82))" ] }, "execution_count": 50, "metadata": {}, "output_type": "execute_result" } ], "source": [ "W0.shape, W1.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Forward Propagation with Arbitrary Length\n", "### We'll forward propagate using the same logic described earlier\n", "\n", "
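\n", "\n", "In symbols, the forward pass implemented in `predict` below is (a sketch that mirrors the code; note there is no nonlinearity between steps in this toy RNN):\n", "\n", "$$h_0 = h_{empty}, \\qquad \\hat{y}_t = \\text{softmax}(h_{t-1} W_1), \\qquad h_t = h_{t-1} W_0 + E_{w_t}$$\n", "\n", "where $E_{w_t}$ is the embedding of the $t$-th word and $h_{empty}$ is the (learned) `empty_sentence_embedding`.\n", "\n", "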
\n", "\n", "Instead of predicting only the last word, we will make a prediction at every timestep using all previous words. This is more efficient than doing a new forward propagation from the beginning of the phrase every time we want to predict a new word." ] }, { "cell_type": "code", "execution_count": 51, "metadata": {}, "outputs": [], "source": [ "def predict(sent):\n", " layers = list()\n", " layer = {}\n", " layer['hidden'] = empty_sentence_embedding\n", " layers.append(layer)\n", " \n", " loss = 0\n", " \n", " # forward propagate through the sentence\n", " preds = list()\n", " for target_i in range(len(sent)):\n", " layer = {}\n", " \n", " # try to predict the next word from the previous hidden state\n", " layer['pred'] = softmax(layers[-1]['hidden'].dot(W1))\n", " \n", " # `sent[target_i]` is the vocab index of the actual next word; accumulate its negative log probability\n", " loss += -np.log(layer['pred'][sent[target_i]])\n", " \n", " # generate the next hidden state\n", " layer['hidden'] = layers[-1]['hidden'].dot(W0) + word_embeddings[sent[target_i]]\n", " layers.append(layer)\n", " \n", " return layers, loss" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The list called \"layers\" represents a new way of doing forward propagation: the longer the sentence, the more forward-propagation steps we end up doing." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Backpropagation with Arbitrary Length\n", "### You'll backpropagate using the same logic described earlier\n", "\n", "Let's implement backpropagation over arbitrary sequence lengths. The most important object is the **layers** list, where each entry holds two vectors:\n", "- `layer['pred']` (the prediction made at that step)\n", "- `layer['hidden']` (the hidden state at that step)\n", "\n", "(The very first entry holds only the starting hidden state.) In order to backpropagate, we'll take the output's gradient and add new entries to each layer, `layer['output_delta']` and `layer['hidden_delta']`, which represent the gradient at that layer. We're building the same logic in a way that can consume the variable-length sequences produced by the forward-propagation logic." ] }, { "cell_type": "code", "execution_count": 52, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['went', 'to', 'the', 'hallway.']" ] }, "execution_count": 52, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sentences[1][1:]" ] }, { "cell_type": "code", "execution_count": 53, "metadata": {}, "outputs": [], "source": [ "for iter in range(30000):\n", " # forward propagation\n", " lr = .001\n", " # getting sentence indices w/o 1st word\n", " # why it leaves the first word of each sentence? 
-> to use it instead of \"NaN\" as first layer['hidden']\n", " sent = sent2indices(sentences[iter%len(sentences)][1:]) \n", " layers, loss = predict(sent) # predicts, returns hidden layer embeddings + loss\n", " \n", " # backpropagation\n", " for layer_i in reversed(range(len(layers))): # loop over hidden states (layers)\n", " layer = layers[layer_i] # get corresponding layer\n", " target = sent[layer_i - 1] # get target word\n", " \n", " # If not the 1st layer\n", " if (layer_i > 0): # if not first layer\n", " layer['output_delta'] = layer['pred'] - y_hots[target] # delta\n", " new_hidden_delta = layer['output_delta'].dot(W1.transpose()) # gradient\n", " \n", " # If last layer, don't pull from a later one, because it doesn't exist\n", " # seems that for each hidden layer, its hidden delta is depedent upon the last decoding operation + next layer gradient\n", " if(layer_i == len(layers)-1):\n", " layer['hidden_delta'] = new_hidden_delta\n", " else:\n", " layer['hidden_delta'] = new_hidden_delta + layers[layer_i+1]['hidden_delta'].dot(W0.transpose())\n", " else: # if the first layer\n", " layer['hidden_delta'] = layers[layer_i+1]['hidden_delta'].dot(W0.transpose())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following figure simplifies one training loop:\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Weight update with arbitrary length\n", "### We'll update the weights using the same logic described earlier\n", "\n", "After having stored the gradients for each layer, now we need to add the weight update code:" ] }, { "cell_type": "code", "execution_count": 54, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Perplexity :81.95028707568486\n", "Perplexity :81.78060980022715\n", "Perplexity :81.51436435831427\n", "Perplexity :81.01004082432779\n", "Perplexity :79.96135229022907\n", "Perplexity :77.53659835083533\n", "Perplexity :70.45797177871069\n", "Perplexity :41.73186994211299\n", "Perplexity :24.3413252435313\n", "Perplexity :19.820219349001245\n", "Perplexity :18.502932946608617\n", "Perplexity :17.16337351696595\n", "Perplexity :15.200622187379624\n", "Perplexity :12.259293966301598\n", "Perplexity :8.975708726197752\n", "Perplexity :7.120714615818728\n", "Perplexity :6.122010097753815\n", "Perplexity :5.486491492025226\n", "Perplexity :5.079309978044186\n", "Perplexity :4.798546702336612\n", "Perplexity :4.611469221507808\n", "Perplexity :4.5010239902745655\n", "Perplexity :4.440082283729496\n", "Perplexity :4.387575014051199\n", "Perplexity :4.324829616690472\n", "Perplexity :4.251929353507349\n", "Perplexity :4.174533982649985\n", "Perplexity :4.098962793552853\n", "Perplexity :4.03059393832612\n", "Perplexity :3.9706506361787377\n" ] } ], "source": [ "for iter in range(30000):\n", " # forward propagation\n", " lr = .001\n", " # getting sentence indices w/o 1st word\n", " # why it leaves the first word of each sentence? -> to use it instead of \"NaN\" as first layer['hidden']\n", " sent = sent2indices(sentences[iter%len(sentences)][1:]) \n", " layers, loss = predict(sent) # predicts, returns hidden layer embeddings + loss\n", " \n", " # backpropagation\n", " for layer_i in reversed(range(len(layers))): # loop over hidden states (layers)\n", " layer = layers[layer_i] # get corresponding layer\n", " target = sent[layer_i - 1] # get target word\n", " \n", " # If not the 1st layer\n", " if (layer_i > 0): # if not first layer\n", " layer['output_delta'] = layer['pred'] - y_hots[target] # delta\n", " new_hidden_delta = layer['output_delta'].dot(W1.transpose()) # gradient\n", " \n", " # If last layer, don't pull from a later one, because it doesn't exist\n", " # seems that for each hidden layer, its hidden delta is depedent upon the last decoding operation + next layer gradient\n", " if(layer_i == len(layers)-1):\n", " layer['hidden_delta'] = new_hidden_delta\n", " else:\n", " layer['hidden_delta'] = new_hidden_delta + layers[layer_i+1]['hidden_delta'].dot(W0.transpose())\n", " else: # if the first layer\n", " layer['hidden_delta'] = layers[layer_i+1]['hidden_delta'].dot(W0.transpose())\n", " \n", " # Update weights of NaN Embedding\n", " empty_sentence_embedding -= layers[0]['hidden_delta'] * lr / float(len(sent))\n", " for layer_i, layer in enumerate(layers[1:]):\n", " # update decoder\n", " W1 -= np.outer(layers[layer_i][\"hidden\"], layer['output_delta'])*lr / float(len(sent))\n", " embed_i = sent[layer_i]\n", " # update embeddings\n", " word_embeddings[embed_i] -= layers[layer_i]['hidden_delta']*lr/float(len(sent))\n", " # update encoder\n", " W0 -= np.outer(layers[layer_i]['hidden'], layer['hidden_delta']) * lr / float(len(sent))\n", " \n", " if (iter % 1000 == 0):\n", " print(\"Perplexity :\" + str(np.exp(loss/len(sent))))" ] }, { "cell_type": "markdown", "metadata": {}, 
"source": [ "In information theory, preplexity is a measurement that predicts how well a probability distribution or a probability model predicts a sample. It may be used to compare probability models.\n", "\n", "A low Perplexity indicate that the model is good at predicting a sample." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Execution and Output Analysis\n", "\n", "Perplexity is the probability of the current label (word), passed through a log function, negated, and exponentiated (e^x). Perplexity measures the difference between two probability distributions. It is high when two probability distributions doesn't match, and low (approaching 1) when the two distributions are close to each other. However, this metric hardly tells us what's going on in the weights.\n", "\n", "Perplexity has faced some critisicm over the years (particularly in the language modeling community) for being overused as a metric.\n", "\n", "Let's look a little more closely at the predictions:" ] }, { "cell_type": "code", "execution_count": 55, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['sandra', 'moved', 'to', 'the', 'garden.']\n" ] } ], "source": [ "sent_index = 4\n", "l, _ = predict(sent2indices(sentences[sent_index]))\n", "print(sentences[sent_index])" ] }, { "cell_type": "code", "execution_count": 57, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Prev Input:sandra True: moved Pred: is\n", "Prev Input:moved True: to Pred: to\n", "Prev Input:to True: the Pred: the\n", "Prev Input:the True: garden. Pred: bedroom.\n" ] } ], "source": [ "for i, each_layer in enumerate(l[1:-1]):\n", " input = sentences[sent_index][i]\n", " true = sentences[sent_index][i+1]\n", " pred = vocab[each_layer['pred'].argmax()]\n", " print(\"Prev Input:\" + input + (' ' * (12 - len(input))) + \"True: \" + true + (' ' * (15 - len(true))) + \"Pred: \" + pred)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This code takes a sentence and predicts the word the model think is most likely." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Looking at predictions can help us understand what's going on \n", "\n", "We can look at the predictions of the neural network as it trains to understand not only what kind of patterns it picks up, but also in the order in which it does so. Neural Networks tend to start off random, but after some amount of training, the NN picks up the most frequent word and predicts it as a default (this is an extremely common error in RNNs). \n", "\n", "It's important to know that there is almost no way this network can predict the next word perfectly. More context is needed to solve this task, but the fact that it's unsolvable, creates educational analysis for the ways in which it fails." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summary\n", "### RNNs Predict over arbitrary Length Sequences\n", "\n", "In this chapter, we learned how to create vector representations of arbitrary long sentences.\n", "\n", "One question remains: How does a neural network fit a variable length sequence into a fixed size vector? The truth is, sentence vectors don't encode everything in their vector. 
The name of the game in RNNs is not just what these vectors remember, but also what they forget.\n", "\n", "In the case of predicting the next word, RNNs learn that only the last few words matter.\n", "\n", "---" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.6" } }, "nbformat": 4, "nbformat_minor": 4 }