{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Neural Networks that understand Language: King - Man + Woman == ?\n", "\n", "In this chapter, we will:\n", "- Introduce Natural Language Processing (NLP)\n", "- Define supervised NLP\n", "- Capture Word Correlation in input data\n", "- Introduce the Embedding Layer\n", "- Compare Word Embeddings\n", "- Define the task of \"filling\" in the blank in sentences\n", "- We will derive meaning from the loss function\n", "- Analyze Word Analogies\n", "\n", "> [John Pfeiffer] \"Man is a Slow, Sloppy, and Brilliant Thinker; Computers are Fast, Accurate, and Stupid!\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## What does it mean to understand language?\n", "### What kinds of predictions do people make about language?\n", "\n", "
\n", "\n", "Natural Language Processing (NLP), is a much older field that overlaps deep learning. NLP is dedicated exlusively to the automated task of understanding human language." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Natural Language Processing (NLP)\n", "\n", "NLP is divided into a collection of tasks and challenges. We present a few types of classification problems that are common in NLP:\n", "- Use the characters of a document to predict where words start and end.\n", "- Use the words of a document to predict where sentences start and end.\n", "- Use the words in a sentence to predict the part of speech of each word.\n", "- Use words in a sentence to predict where named entities (person, place, thing) references start and end.\n", "- Use sentences in a document to predict which pronouns refer to the same person/place/thing.\n", "- Use words in a sentence to predict the sentiment of a sentence.\n", "\n", "NLP tasks seek to do one of three things:\n", "- label a region of text.\n", "- Link two or more regions of text.\n", "- Try to fill in missing information based on Context.\n", "\n", "Until recently, most of the state-of-the-art (SoTA) NLP Algorithms were advanced, probabilistic, non-parametric models but the recent development and popularization of two major neural algorithms have swept the field of NLP:\n", "- Neural Word Embeddings.\n", "- Recurrent Neural Networks.\n", "\n", "NLP also plays a very special role in AGI (Artificial General Intelligence), because language is the bedrock of logic and communication of humans." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Supervised NLP\n", "### Words go in, & predictions come out\n", "\n", "Up until now, we represented inputs as numbers, but NLP uses text as input, the question is how do we process text? We know that NNs map input numbers to output numbers, for this reason, we need to convert our words into their corresponding numerical representation. As it turns out, the way we transform text into numbers is exteremly important!\n", " \n", "
\n", "\n", "In order to find the optimal numerical representation for text, we need to look at the underlying input-to-output problem, let's take an example:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## IMDB Movie Reviews Dataset\n", "### Problem: Predict if a review (text) is positive or negative\n", "\n", "The IMDB Reviews Dataset is a collection of Review/Rating Pairs that often looks like the following:\n", "\n", "> \"This Movie was terrible, The Plot was Dry, The acting unconvincing, and I spilled popcorn on my shirt!\" — Rating: 1 Stars.\n", "\n", "The entire dataset consists of around 50K reviews. The input reviews are usually a few sentences & the output rating is between 1 and 5 stars. It should be obvious that this sentiment dataset is very different from other sentiment datasets, such as product reviews or hospital patient reviews.\n", "\n", "While preparing the data, we will adjust the range of stars from 1 to 5 into 0 to 1 so we can use binary softmax (Sigmoid). On top of that, the input data is a list of characters, this presents a few problems:\n", "- The input data is text instead of numbers.\n", "- Input is variable-Length Text.\n", "\n", "We will opt to use each \"word\" as a single entity instead of \"characters\" since we would not expect any characters to have correlation with the output (sentiment). On the other hand, words such as \"terrible\", \"unconvincing\", \"bad\" give a strong indication about the sentiment of the reviewer. Several words can have a bit of correlation with the output, by negative, we mean as the frequency of these words increases, ratings tend to decrease in number of stars." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Capturing Word Correlation in Input Data\n", "### Bag of words: Given a review's Vocabulary, predict the sentiment" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import re\n", "import pandas as pd" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We first need to download the [dataset](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews) into a suitable directory:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "IMDB_PATH = '/Users/mohamedakramzaytar/data/2019/Q2/kaggle/IMDB/imdb_master.csv'" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "/Users/mohamedakramzaytar/data/2019/Q2/kaggle/IMDB/imdb_master.csv\n" ] } ], "source": [ "!ls $IMDB_PATH" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
reviewsentiment
0One of the other reviewers has mentioned that ...positive
1A wonderful little production. <br /><br />The...positive
2I thought this was a wonderful way to spend ti...positive
3Basically there's a family where a little boy ...negative
4Petter Mattei's \"Love in the Time of Money\" is...positive
5Probably my all-time favorite movie, a story o...positive
6I sure would like to see a resurrection of a u...positive
\n", "
" ], "text/plain": [ " review sentiment\n", "0 One of the other reviewers has mentioned that ... positive\n", "1 A wonderful little production.
<br /><br />
The... positive\n", "2 I thought this was a wonderful way to spend ti... positive\n", "3 Basically there's a family where a little boy ... negative\n", "4 Petter Mattei's \"Love in the Time of Money\" is... positive\n", "5 Probably my all-time favorite movie, a story o... positive\n", "6 I sure would like to see a resurrection of a u... positive" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = pd.read_csv(IMDB_PATH, encoding=\"ISO-8859-1\") # added encoding to fix error\n", "df.head(7)" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(\"One of the other reviewers has mentioned that after watching just 1 Oz episode you'll be hooked. They are right, as this is exactly what happened with me.
<br /><br />
The first thing that struck me about Oz was its brutality and unflinching scenes of violence, which set in right from the word GO. Trust me, this is not a show for the faint hearted or timid. This show pulls no punches with regards to drugs, sex or violence. Its is hardcore, in the classic use of the word.
<br /><br />
It is called OZ as that is the nickname given to the Oswald Maximum Security State Penitentary. It focuses mainly on Emerald City, an experimental section of the prison where all the cells have glass fronts and face inwards, so privacy is not high on the agenda. Em City is home to many..Aryans, Muslims, gangstas, Latinos, Christians, Italians, Irish and more....so scuffles, death stares, dodgy dealings and shady agreements are never far away.
<br /><br />
I would say the main appeal of the show is due to the fact that it goes where other shows wouldn't dare. Forget pretty pictures painted for mainstream audiences, forget charm, forget romance...OZ doesn't mess around. The first episode I ever saw struck me as so nasty it was surreal, I couldn't say I was ready for it, but as I watched more, I developed a taste for Oz, and got accustomed to the high levels of graphic violence. Not just violence, but injustice (crooked guards who'll be sold out for a nickel, inmates who'll kill on order and get away with it, well mannered, middle class inmates being turned into prison bitches due to their lack of street skills or prison experience) Watching Oz, you may become comfortable with what is uncomfortable viewing....thats if you can get in touch with your darker side.\",\n", " 'positive')" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# let's take a look at one review:\n", "df.loc[0].review, df.loc[0].sentiment" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A common preprocessing step is to create a matrix where each row represents a review and each column represents whether a review contains a particular word in the vocabulary. To create a vector for a review, we just need to loop over the content and put $1$s in places where the corresponding vocabulary words are present in the review. \n", "\n", "The size of the review vectors depends on the global vocabulary of the reviews. If we have 2,000 unique words, you need vectors of length 2,000. This form of storage, called **one-hot encoding**, is the most common way to store binary information, in our case, the presence/absence of particular vocabulary words from the text of a review. \n", "\n", "If our vocabulary have only 4 words, than the one-hot encoding might look like this:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "one_hots = {}\n", "one_hots['cat'] = np.array([1, 0, 0, 0])\n", "one_hots['the'] = np.array([0, 1, 0, 0])\n", "one_hots['dog'] = np.array([0, 0, 1, 0])\n", "one_hots['sat'] = np.array([0, 0, 0, 1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Sent Encoding:[1 1 0 1]\n" ] } ], "source": [ "sentence = ['the', 'cat', 'sat']\n", "x = one_hots[sentence[0]] + one_hots[sentence[1]] + one_hots[sentence[2]]\n", "print('Sent Encoding:' + str(x))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We create a vector for each term in the vocabulary. Then we use vector addition to represent a set of words present in a sentence." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Predicting Movie Reviews\n", "### With the Previous Strategy and network, we can predict the sentiment of any review\n", "\n", "The way to do it is to build a one-hot vector for the review then use the two-layer network to predict sentiment. " ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "import re\n", "import numpy as np\n", "import pandas as pd\n", "from collections import Counter\n", "\n", "IMDB_PATH = '/Users/mohamedakramzaytar/data/2019/Q2/kaggle/IMDB/imdb_master.csv'" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "df = pd.read_csv(IMDB_PATH, encoding=\"ISO-8859-1\")\n", "df = df[df['sentiment'].isin(['negative', 'positive'])]\n", "all_reviews_text = \" \".join(df.review.tolist())" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(11557297, 10000)" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# we get unique tokens\n", "all_tokens = all_reviews_text.split(\" \")\n", "unique_tokens = [v for (v, _) in Counter(all_tokens).most_common(10000)]\n", "len(all_tokens), len(unique_tokens)" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "# create a function out of it\n", "def get_tokens(text):\n", " return list(set(text.split(\" \")))" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "# create one-hot representations of each token\n", "word_to_index, index_to_word = {}, {}\n", "for i, word in enumerate(unique_tokens):\n", " word_to_index[word], index_to_word[i] = i, word" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [], "source": [ "df['words_count'] = df['review'].apply(lambda x: len(x.split(\" \"))) " ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
words_count
count50000.000000
mean231.145940
std171.326419
min4.000000
25%126.000000
50%173.000000
75%280.000000
max2470.000000
\n", "
" ], "text/plain": [ " words_count\n", "count 50000.000000\n", "mean 231.145940\n", "std 171.326419\n", "min 4.000000\n", "25% 126.000000\n", "50% 173.000000\n", "75% 280.000000\n", "max 2470.000000" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will set the size of the one-hot vector to be 10,000 (representing the 10K most frequent words in the corpus). In this case, the review length doesn't matter, we'll just add up each word in the review to get a final representation of the review in a 10,000 vector. \n", "\n", "Let's preprocess the training data:" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "((40000, 3), (10000, 3))" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "test_idx = int(len(df) * (1 - 0.2))\n", "train, test = df.iloc[:test_idx], df.iloc[test_idx:]\n", "train.shape, test.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# we delete columns we're not interested in\n", "train = train.drop(columns=['words_count'])" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ ":2: SettingWithCopyWarning: \n", "A value is trying to be set on a copy of a slice from a DataFrame.\n", "Try using .loc[row_indexer,col_indexer] = value instead\n", "\n", "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n", " train['y'] = train['sentiment'].replace({'negative': 0, 'positive': 1})\n" ] } ], "source": [ "# now we transform label into a number\n", "train['y'] = train['sentiment'].replace({'negative': 0, 'positive': 1})\n", "train = train.drop('sentiment', axis=1)" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [], "source": [ "# shuffle train now ..\n", "train = train.sample(frac=1).reset_index(drop=True)" ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [], "source": [ "x, y = [], []\n", "for _, r in train.iterrows():\n", " review, label = r['review'], r['y']\n", " one_hot = np.zeros(10000)\n", " tokens = get_tokens(review)\n", " for token in tokens:\n", " if token in word_to_index:\n", " one_hot[word_to_index[token]] = int(1)\n", " x.append(one_hot)\n", " y.append(label)" ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "((40000, 10000), (40000,))" ] }, "execution_count": 41, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x, y = np.array(x), np.array(y)\n", "x.shape, y.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we have the representations we need to move forward and create a dense neural network to train." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Intro to an embedding layer\n", "### There is one more trick to make the network faster\n", "\n", "
\n", "\n", "We know that the first layer is the dataset. The first layer will be followed by what's called a linear layer, then an activation `ReLU` layer, then another linear layer, and finally the output, which is the prediction layer.\n", "\n", "As it turns out, we can take a bit of a shortcut to `layer 1` by replacing the 1st linear layer with an embedding layer. An important thing to notice is that taking a vector of 1s and 0s is mathematically equivalent to **summing several rows of a matrix**. So we just have to sum `W_0`'s rows that mark available words to form the unique \"embedding layer\". Thus, it's much more efficient to select the relevant rows of `W_0` and sum them as opposed to doing a big vector-matrix multiplication.\n", "\n", "
\n", "\n", "Because the sentiment vocabulary is on the order of 70k words, most of the vector matrix multiplication is spent multiplying zeros in the input vector by weights before summing them, embeddings are much more efficient. The advantage is that summing a bunch of rows is much **faster**. " ] }, { "cell_type": "code", "execution_count": 78, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import sys\n", "\n", "IMDB_PATH = '/Users/mohamedakramzaytar/data/2019/Q2/kaggle/IMDB/reviews.txt'\n", "IMDB_LABEL_PATH = '/Users/mohamedakramzaytar/data/2019/Q2/kaggle/IMDB/labels.txt'" ] }, { "cell_type": "code", "execution_count": 79, "metadata": {}, "outputs": [], "source": [ "f = open(IMDB_PATH, mode='r')\n", "raw_reviews = f.readlines()\n", "f.close()" ] }, { "cell_type": "code", "execution_count": 80, "metadata": {}, "outputs": [], "source": [ "f = open(IMDB_LABEL_PATH, mode='r')\n", "raw_labels = f.readlines()\n", "f.close()" ] }, { "cell_type": "code", "execution_count": 81, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(25000, 25000)" ] }, "execution_count": 81, "metadata": {}, "output_type": "execute_result" } ], "source": [ "len(raw_reviews), len(raw_labels)" ] }, { "cell_type": "code", "execution_count": 82, "metadata": {}, "outputs": [], "source": [ "# python's map object is an iterator\n", "# we can also convert map objects to lists, tupes, ..\n", "tokens = list(map(lambda x: set(x.split(\" \")), raw_reviews))" ] }, { "cell_type": "code", "execution_count": 83, "metadata": {}, "outputs": [], "source": [ "# let's extract the vocab\n", "vocab = set()\n", "for sent in tokens:\n", " for word in sent:\n", " if (len(word)>0):\n", " vocab.add(word)\n", "vocab = list(vocab)" ] }, { "cell_type": "code", "execution_count": 84, "metadata": {}, "outputs": [], "source": [ "word2index = {}\n", "for i, word in enumerate(vocab):\n", " word2index[word] = i" ] }, { "cell_type": "code", "execution_count": 85, "metadata": {}, "outputs": [], "source": [ "# transform all reviews to vectors\n", "input_dataset = list()\n", "for sent in tokens:\n", " sent_indices = list()\n", " for word in sent:\n", " try:\n", " sent_indices.append(word2index[word])\n", " except:\n", " \"\"\n", " input_dataset.append(list(set(sent_indices)))" ] }, { "cell_type": "code", "execution_count": 86, "metadata": {}, "outputs": [], "source": [ "# same for target data\n", "target_dataset = list()\n", "for label in raw_labels:\n", " if label == \"positive\\n\":\n", " target_dataset.append(1)\n", " else:\n", " target_dataset.append(0)" ] }, { "cell_type": "code", "execution_count": 87, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "np.random.seed(1)" ] }, { "cell_type": "code", "execution_count": 88, "metadata": {}, "outputs": [], "source": [ "def sigmoid(x):\n", " return 1/(1+np.exp(-x))" ] }, { "cell_type": "code", "execution_count": 89, "metadata": {}, "outputs": [], "source": [ "lr, epochs = 0.01, 1\n", "embedding_layer_size = 100" ] }, { "cell_type": "code", "execution_count": 90, "metadata": {}, "outputs": [], "source": [ "W0 = (0.2 * np.random.random((len(vocab), embedding_layer_size))) - 0.1\n", "W1 = (0.2 * np.random.random((embedding_layer_size, 1))) - 0.1" ] }, { "cell_type": "code", "execution_count": 91, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Iter: 0 | Progress: 0.0% | Training Accuracy: 0.0%\n", "Iter: 1000 | Progress: 4.0% | Training Accuracy: 0.5%\n", "Iter: 2000 | Progress: 8.0% | Training Accuracy: 0.62%\n", "Iter: 3000 | 
Progress: 12.0% | Training Accuracy: 0.68%\n", "Iter: 4000 | Progress: 16.0% | Training Accuracy: 0.71%\n", "Iter: 5000 | Progress: 20.0% | Training Accuracy: 0.73%\n", "Iter: 6000 | Progress: 24.0% | Training Accuracy: 0.74%\n", "Iter: 7000 | Progress: 28.0% | Training Accuracy: 0.76%\n", "Iter: 8000 | Progress: 32.0% | Training Accuracy: 0.77%\n", "Iter: 9000 | Progress: 36.0% | Training Accuracy: 0.78%\n", "Iter: 10000 | Progress: 40.0% | Training Accuracy: 0.79%\n", "Iter: 11000 | Progress: 44.0% | Training Accuracy: 0.8%\n", "Iter: 12000 | Progress: 48.0% | Training Accuracy: 0.8%\n", "Iter: 13000 | Progress: 52.0% | Training Accuracy: 0.81%\n", "Iter: 14000 | Progress: 56.0% | Training Accuracy: 0.81%\n", "Iter: 15000 | Progress: 60.0% | Training Accuracy: 0.81%\n", "Iter: 16000 | Progress: 64.0% | Training Accuracy: 0.81%\n", "Iter: 17000 | Progress: 68.0% | Training Accuracy: 0.81%\n", "Iter: 18000 | Progress: 72.0% | Training Accuracy: 0.82%\n", "Iter: 19000 | Progress: 76.0% | Training Accuracy: 0.82%\n", "Iter: 20000 | Progress: 80.0% | Training Accuracy: 0.82%\n", "Iter: 21000 | Progress: 84.0% | Training Accuracy: 0.83%\n", "Iter: 22000 | Progress: 88.0% | Training Accuracy: 0.83%\n", "Iter: 23000 | Progress: 92.0% | Training Accuracy: 0.83%\n", "Test accuracy: 0.847\n" ] } ], "source": [ "# training loop\n", "correct, total = (0, 0)\n", "\n", "for epoch in range(epochs):\n", " \n", " # leave last 1000 for testing\n", " for i in range(len(input_dataset) - 1000):\n", " # Forward Propagation\n", " x, y = input_dataset[i], target_dataset[i]\n", " layer_1 = sigmoid(np.sum(W0[x], axis=0))\n", " layer_2 = sigmoid(layer_1.dot(W1))\n", " \n", " # Gradients Calc\n", " layer_2_delta = (layer_2 - y)\n", " layer_1_delta = layer_2_delta.dot(W1.T)\n", " \n", " # Backpropagation\n", " W0[x] -= layer_1_delta*lr # update only corresponding embeddings (w/o attached input to gradient).\n", " W1 -= np.outer(layer_1, layer_2_delta) * lr\n", " \n", " # training accuracy\n", " if(np.abs(layer_2_delta) < 0.5):\n", " correct += 1\n", " total += 1\n", " \n", " if (i%1000 == 0):\n", " progress = 100 * i / float(len(input_dataset))\n", " print(f\"Iter: {i} | Progress: {round(progress, 2)}% | Training Accuracy: {round(correct / float(total), 2)}%\")\n", " \n", " # test set evaluation\n", " correct, total = (0, 0)\n", " for i in range(len(input_dataset) - 1000, len(input_dataset)):\n", " x, y = input_dataset[i], target_dataset[i]\n", " layer_1 = sigmoid(np.sum(W0[x], axis=0))\n", " layer_2 = sigmoid(layer_1.dot(W1))\n", " if(np.abs(layer_2-y) < 0.5):\n", " correct += 1\n", " total += 1\n", " print(f\"Test accuracy: {correct / float(total)}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Interpreting the Output\n", "### What did the Neural Network learn along the way?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The Network was looking for correlation between the input data points and the output data points. It's extremely beneficial to know what kind of patterns the network detected while training and took as signal for predicting sentiment, just because the network was able to find correlation between the input and the output doesn't mean that it found every pattern of language. 
So understanding the difference between what the network is able to learn from current datasets and what it would need to learn in order to truly understand language is an important & essential step toward artificial general intelligence.\n", "\n", "To answer this question, let's start by considering what was **presented to the network**. We presented a presence/absence binary indicator for each of the top 10,000 most frequent words in the corpus. We'd expect the network to figure out which words correlate with negative opinions and which with positive ones, but this isn't the whole story." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Neural Architecture\n", "### How did the choice of architecture affect what the network was able to learn?\n", "\n", "Hidden layers are about grouping input data points coming from the previous layer into `n` groups. Each hidden neuron takes in a data point and asks, \"Is this data point in my group?\" As the hidden layer learns, it searches for useful groupings. So what are the useful groupings for our task? We know that a grouping is useful if it manages to find hidden and interesting structure in the data. In short:\n", "- Bad groupings just memorize the data.\n", "- Good groupings capture phenomena that are useful linguistically.\n", "\n", "For example, understanding the difference between \"terrible\" and \"not terrible\" is a powerful grouping. However, because the input to the network is a bag of words and not a sequence, a sentence such as \"It is great, not terrible\" will be interpreted exactly like \"It is terrible, not great\".\n", "\n", "If we can construct two examples with identical hidden-layer activations where a pattern is present in the first example but absent in the second, then the network won't be able to detect that pattern.\n", "\n", "### What should we see in the weights connecting words to hidden neurons?\n", "\n", "We'd expect words that have similar predictive power to subscribe to similar groups.\n", "\n", "
\n", "\n", "Words that subscribe to similar groups, having similar weights, will have similar linguistic meaning with regards to the task at hand (sentiment analysis)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Comparing Word Embeddings\n", "### How Can we Visualize Weight Similarity?\n", "\n", "We can get the embedding of each word by simply extracting the corresponding row from the first weight matrix. We can also do word-to-word comparison by simply calculating the euclidian distance between the two vectors." ] }, { "cell_type": "code", "execution_count": 92, "metadata": {}, "outputs": [], "source": [ "from collections import Counter\n", "import math" ] }, { "cell_type": "code", "execution_count": 93, "metadata": {}, "outputs": [], "source": [ "def similar(target='beautiful'):\n", " target_index = word2index[target]\n", " scores = Counter()\n", " for word, index in word2index.items():\n", " raw_difference = W0[index] - W0[target_index]\n", " squared_difference = raw_difference**2\n", " scores[word] = -math.sqrt(sum(squared_difference))\n", " return scores.most_common(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This procedure will allow us to easily find out the similar words to a target word, examples:" ] }, { "cell_type": "code", "execution_count": 94, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[('beautiful', -0.0), ('wonderfully', -0.7347476245943578), ('each', -0.7397281670618566), ('recommended', -0.7700989754926751), ('job', -0.8021862760775765), ('fascinating', -0.803780429603366), ('masterpiece', -0.806440875020042), ('true', -0.8087076223072098), ('especially', -0.8096794093609967), ('sweet', -0.8117635303801121)]\n" ] } ], "source": [ "print(similar('beautiful'))" ] }, { "cell_type": "code", "execution_count": 95, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[('terrible', -0.0), ('annoying', -0.7717906985613661), ('poorly', -0.8084446689686995), ('avoid', -0.8088320312353884), ('worse', -0.8246951041670193), ('stupid', -0.8309245272531632), ('boring', -0.8385096157034106), ('bad', -0.8395554304307457), ('disappointment', -0.8654898536150686), ('unfortunately', -0.8780291885742453)]\n" ] } ], "source": [ "print(similar('terrible'))" ] }, { "cell_type": "code", "execution_count": 96, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[('average', -0.0), ('clearence', -0.6276424599813579), ('bizniss', -0.6339327432204244), ('brock', -0.6346237642100628), ('swordsmanship', -0.6370031733437977), ('sexegenarian', -0.6379006068844714), ('breckinridge', -0.6381035731502563), ('burnside', -0.6421944971499771), ('nudges', -0.6422029970777722), ('floorpan', -0.6482336019908111)]\n" ] } ], "source": [ "print(similar('average'))" ] }, { "cell_type": "code", "execution_count": 97, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[('love', -0.0), ('friendship', -0.7000998912887154), ('believable', -0.7095907323713758), ('nice', -0.716328595780204), ('worth', -0.7191739646420174), ('bit', -0.7275198610668071), ('together', -0.7277400327527178), ('true', -0.7297715834947054), ('also', -0.7372723172510404), ('gives', -0.743701219426435)]\n" ] } ], "source": [ "print(similar('love'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What we see is a standard phenomenon in the correlation summarization. 
It seeks to create similar latent representations within the network to facilitate information compression and arrive at the correct target label." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## What is the meaning of a neuron?\n", "### Meaning is entirely based on the target labels being predicted\n", "\n", "We should notice that `beautiful` & `recommended` are nearly identical, but **only in the context of sentiment prediction**. Outside that context, their meanings are quite different.\n", "\n", "The meaning of a neuron in the network depends entirely on the target labels. The NN is entirely ignorant of any meaning outside the task it was trained on. So how do we make the meaning of a neuron broader? Well, if we give the network a task that requires a broad understanding of language, it will learn new complexities and its neurons will become much more general.\n", "\n", "The task we'll use to learn more interesting word embeddings is the \"fill in the blank\" task. There is a nearly infinite amount of training data for it (the internet), which provides a nearly infinite signal to the network, and learning to fill in the blank requires at least some contextual understanding of language." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Filling in the Blank\n", "### Learn Richer Word Meanings by Having a Richer Signal\n", "\n", "The following example uses almost the same architecture as before, with minor modifications. We'll take small windows of words from the text, remove one word (the focus term), and train the network to predict the focus term from its surrounding words.\n", "\n", "We'll also use a technique called **negative sampling** to make the network train a bit faster. Consider that in order to predict the focus term, we would need one label for each possible word. This would require several thousand output labels, which would cause the network to train slowly. To overcome this, we randomly ignore most of the labels on each forward propagation. Although this seems crude, it's a technique that works well in practice."
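, "\n", "Concretely, the sampling step looks something like this (a minimal sketch of what the training loop below does, with a made-up `concatenated` list of word indices):\n", "\n", "```python\n", "import numpy as np\n", "\n", "negative = 5\n", "concatenated = np.array([3, 17, 42, 8, 3, 99, 17])  # stands in for the flat list of every word index in the corpus\n", "true_target = 42                                     # the focus term we removed\n", "\n", "# draw `negative` random word indices to act as fake targets\n", "rand_idx = (np.random.rand(negative) * len(concatenated)).astype('int')\n", "target_samples = [true_target] + list(concatenated[rand_idx])\n", "\n", "# the network only predicts over these 6 words instead of the whole vocabulary\n", "layer_2_target = np.zeros(negative + 1)\n", "layer_2_target[0] = 1  # only the first (true) sample should be predicted as 1\n", "```"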
] }, { "cell_type": "code", "execution_count": 98, "metadata": {}, "outputs": [], "source": [ "import sys, random, math\n", "from collections import Counter\n", "import numpy as np\n", "\n", "IMDB_PATH = '/Users/mohamedakramzaytar/data/2019/Q2/kaggle/IMDB/reviews.txt'\n", "IMDB_LABEL_PATH = '/Users/mohamedakramzaytar/data/2019/Q2/kaggle/IMDB/labels.txt'" ] }, { "cell_type": "code", "execution_count": 101, "metadata": {}, "outputs": [], "source": [ "np.random.seed(1)\n", "random.seed(1)" ] }, { "cell_type": "code", "execution_count": 102, "metadata": {}, "outputs": [], "source": [ "f = open(IMDB_PATH, 'r')\n", "raw_reviews = f.readlines()\n", "f.close()" ] }, { "cell_type": "code", "execution_count": 103, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "25000" ] }, "execution_count": 103, "metadata": {}, "output_type": "execute_result" } ], "source": [ "len(raw_reviews)" ] }, { "cell_type": "code", "execution_count": 104, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(185, 127, 537)" ] }, "execution_count": 104, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokens = list(map(lambda x: x.split(\" \"), raw_reviews))\n", "len(tokens[0]), len(tokens[1]), len(tokens[2])" ] }, { "cell_type": "code", "execution_count": 105, "metadata": {}, "outputs": [], "source": [ "word_counter = Counter()" ] }, { "cell_type": "code", "execution_count": 106, "metadata": {}, "outputs": [], "source": [ "for review in tokens:\n", " for token in review:\n", " word_counter[token] -= 1" ] }, { "cell_type": "code", "execution_count": 107, "metadata": {}, "outputs": [], "source": [ "_ = word_counter.most_common() # least common in this case." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`most_common()` just sorts out the data, it doesn't take the Top N most common tokens unless you force it to (by giving it an argument)." 
] }, { "cell_type": "code", "execution_count": 108, "metadata": {}, "outputs": [], "source": [ "vocab = list(set(map(lambda x: x[0], word_counter.most_common())))" ] }, { "cell_type": "code", "execution_count": 109, "metadata": {}, "outputs": [], "source": [ "word2index = {}\n", "for i, word in enumerate(vocab):\n", " word2index[word] = i" ] }, { "cell_type": "code", "execution_count": 110, "metadata": {}, "outputs": [], "source": [ "concatenated = list()\n", "input_dataset = list()\n", "for review in tokens:\n", " review_indices = list()\n", " for token in review:\n", " try:\n", " review_indices.append(word2index[token])\n", " concatenated.append(word2index[token])\n", " except:\n", " \"\"\n", " input_dataset.append(review_indices)\n", "concatenated = np.array(concatenated)\n", "random.shuffle(input_dataset)" ] }, { "cell_type": "code", "execution_count": 111, "metadata": {}, "outputs": [], "source": [ "lr, epochs = (.05, 2)\n", "hidden_size, window, negative = 50, 2, 5" ] }, { "cell_type": "code", "execution_count": 119, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "((74075, 50), (74075, 50))" ] }, "execution_count": 119, "metadata": {}, "output_type": "execute_result" } ], "source": [ "W0 = (np.random.rand(len(vocab), hidden_size) - 0.5) * 0.2\n", "W1 = np.zeros((len(vocab), hidden_size))\n", "W0.shape, W1.shape" ] }, { "cell_type": "code", "execution_count": 120, "metadata": {}, "outputs": [], "source": [ "layer_2_target = np.zeros(negative+1)\n", "layer_2_target[0] = 1" ] }, { "cell_type": "code", "execution_count": 123, "metadata": {}, "outputs": [], "source": [ "def similar(target='beautiful', top=7):\n", " target_index = word2index[target]\n", " \n", " scores = Counter()\n", " for word, index in word2index.items():\n", " raw_difference = W0[index] - W0[target_index]\n", " squared_difference = raw_difference * raw_difference\n", " scores[word] = -math.sqrt(sum(squared_difference))\n", " return scores.most_common(top)" ] }, { "cell_type": "code", "execution_count": 124, "metadata": {}, "outputs": [], "source": [ "def sigmoid(x):\n", " return 1 / (1 + np.exp(-x))" ] }, { "cell_type": "code", "execution_count": 125, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Progress: 0.0 | `Terrible` nearest neighbors: [('terrible', -0.0), ('misperceived', -0.37629838414529776), ('origination', -0.38645406817879285), ('bumpuses', -0.3866817101079805), ('recognition', -0.3870682177410917)]\n", "Progress: 0.02 | `Terrible` nearest neighbors: [('terrible', -0.0), ('superb', -0.9395550125335567), ('fantastic', -0.9785445410150915), ('brilliant', -1.0009723994783375), ('excellent', -1.0507492536281997)]\n", "Progress: 0.04 | `Terrible` nearest neighbors: [('terrible', -0.0), ('brilliant', -1.4181573161847985), ('horrible', -1.435351250932622), ('hilarious', -1.4828872270341869), ('fantastic', -1.5224293865473035)]\n", "Progress: 0.06 | `Terrible` nearest neighbors: [('terrible', -0.0), ('fantastic', -1.3645701568801774), ('brilliant', -1.452030183491021), ('horrible', -1.4872861958792385), ('convincing', -1.6451485631159968)]\n", "Progress: 0.08 | `Terrible` nearest neighbors: [('terrible', -0.0), ('fantastic', -1.635188731505891), ('brilliant', -1.6478061610864096), ('convincing', -1.8233281433244541), ('lame', -1.903116578071796)]\n", "Progress: 0.1 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -1.8219440801238125), ('fantastic', -1.8671267127593203), ('lame', -1.9050522776833492), ('fascinating', -1.9732384841951174)]\n", 
"Progress: 0.12 | `Terrible` nearest neighbors: [('terrible', -0.0), ('brilliant', -2.1433165398420138), ('horrible', -2.1638703481599477), ('fascinating', -2.2868941326807817), ('weak', -2.3292512816130797)]\n", "Progress: 0.14 | `Terrible` nearest neighbors: [('terrible', -0.0), ('brilliant', -2.5092534705974803), ('fantastic', -2.6508016767739924), ('fascinating', -2.7452444884676908), ('horrible', -2.865792216162147)]\n", "Progress: 0.16 | `Terrible` nearest neighbors: [('terrible', -0.0), ('brilliant', -2.6959329307597475), ('horrible', -2.988478730789771), ('fascinating', -3.1247256619089545), ('terrific', -3.129899894825866)]\n", "Progress: 0.18 | `Terrible` nearest neighbors: [('terrible', -0.0), ('brilliant', -2.8446078019182597), ('horrible', -2.956595849168358), ('superb', -3.1012022870890967), ('fascinating', -3.262813698847574)]\n", "Progress: 0.2 | `Terrible` nearest neighbors: [('terrible', -0.0), ('brilliant', -2.9033044381747177), ('horrible', -3.0115624244181727), ('fantastic', -3.2470408994109534), ('superb', -3.256887985703758)]\n", "Progress: 0.22 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -3.0915151786846438), ('fantastic', -3.1265211166700086), ('brilliant', -3.1454738767589787), ('superb', -3.183344902892115)]\n", "Progress: 0.24 | `Terrible` nearest neighbors: [('terrible', -0.0), ('brilliant', -3.0127884429926897), ('horrible', -3.15807211093518), ('fantastic', -3.2550240363481096), ('superb', -3.4725565464616666)]\n", "Progress: 0.26 | `Terrible` nearest neighbors: [('terrible', -0.0), ('brilliant', -2.7865435546532855), ('horrible', -3.057266418335675), ('fantastic', -3.6015249499836965), ('remarkable', -3.7331652129773283)]\n", "Progress: 0.28 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.908750351571196), ('brilliant', -3.0256961917363117), ('remarkable', -3.3441415219255313), ('fantastic', -3.346827479733814)]\n", "Progress: 0.3 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -3.1965770870240044), ('brilliant', -3.232387427139472), ('fantastic', -3.3406354622067576), ('laughable', -3.4525431309832544)]\n", "Progress: 0.32 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.7193204217548046), ('fantastic', -3.4924465018991375), ('brilliant', -3.578142963089179), ('pathetic', -3.69027244678774)]\n", "Progress: 0.34 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.7508534380433756), ('laughable', -3.564716387829581), ('brilliant', -3.6256016784964853), ('ridiculous', -3.691640952693665)]\n", "Progress: 0.36 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.6235371223238038), ('brilliant', -3.456400101821386), ('fantastic', -3.591454386466738), ('pathetic', -3.6334012075001842)]\n", "Progress: 0.38 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.683096639333961), ('fantastic', -3.4242003788819737), ('pathetic', -3.4360802291007326), ('brilliant', -3.4872253519455176)]\n", "Progress: 0.4 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -3.1624804317677495), ('fantastic', -3.5925838881609518), ('superb', -3.768105964314105), ('wonderful', -3.862213327361215)]\n", "Progress: 0.42 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -3.437289914569698), ('fantastic', -3.5216332851140426), ('wonderful', -3.650556914213743), ('superb', -3.657396309163536)]\n", "Progress: 0.44 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -3.2723437444549783), ('wonderful', -3.4368907041506995), 
('brilliant', -3.7929820192531545), ('superb', -3.849470415444905)]\n", "Progress: 0.46 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -3.2460430522999157), ('fantastic', -3.701891129630293), ('superb', -3.8686900453638264), ('wonderful', -3.8932476027869205)]\n", "Progress: 0.48 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.9614105229354526), ('fantastic', -3.6383683495777577), ('wonderful', -3.7598207425786945), ('marvelous', -3.8547592780789506)]\n", "Progress: 0.5 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -3.1209779081546825), ('pathetic', -3.9281919379578993), ('bad', -3.9341848616327995), ('ridiculous', -3.973607969275994)]\n", "Progress: 0.52 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -3.3213812570213213), ('superb', -3.6699560346070927), ('pathetic', -3.6779200874922333), ('brilliant', -3.741357710902307)]\n", "Progress: 0.54 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.9042261348652203), ('ridiculous', -3.523791388919498), ('brilliant', -3.553038115161444), ('pathetic', -3.670621870859373)]\n", "Progress: 0.56 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -3.019056443525238), ('ridiculous', -3.491907182197584), ('brilliant', -3.5144127474855265), ('pathetic', -3.637777041244951)]\n", "Progress: 0.58 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -3.047805663412288), ('ridiculous', -3.284331453269096), ('pathetic', -3.416744552532168), ('fantastic', -3.5162930264465393)]\n", "Progress: 0.6 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -3.0350582648337365), ('pathetic', -3.141418301245476), ('brilliant', -3.3284278791026645), ('ridiculous', -3.3681009224333156)]\n", "Progress: 0.62 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.8424670383329955), ('brilliant', -3.190611111009397), ('fantastic', -3.3554779094550637), ('pathetic', -3.613499339625761)]\n", "Progress: 0.64 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.6288033645941358), ('ridiculous', -3.1037980754584007), ('fantastic', -3.4010410214913493), ('brilliant', -3.414474269349876)]\n", "Progress: 0.66 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.8451699935635197), ('ridiculous', -3.360397446507512), ('fantastic', -3.466029776873303), ('brilliant', -3.578601473087416)]\n", "Progress: 0.68 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.66673055814208), ('fantastic', -3.3362097606735692), ('brilliant', -3.373603786961304), ('magnificent', -3.624886774249765)]\n", "Progress: 0.7 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.8630220563840676), ('fantastic', -3.3992359956414386), ('wonderful', -3.566726961067627), ('magnificent', -3.591300947805222)]\n", "Progress: 0.72 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.9368471700599974), ('magnificent', -3.608067713166859), ('brilliant', -3.655464546153343), ('dreadful', -3.7050087712182838)]\n", "Progress: 0.74 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.635848871982613), ('brilliant', -3.6651707790712265), ('fantastic', -3.7062670345209163), ('horrid', -3.7328953795618287)]\n", "Progress: 0.76 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.503166496540591), ('fantastic', -3.629923785322746), ('wonderful', -3.86941026115452), ('brilliant', -3.901485156616881)]\n", "Progress: 0.78 | `Terrible` nearest neighbors: [('terrible', -0.0), 
('horrible', -2.412898394155387), ('horrid', -3.549930811490535), ('dreadful', -3.5937964712495667), ('dire', -3.758413186706391)]\n", "Progress: 0.8 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.7875323979515803), ('brilliant', -3.553032704687312), ('fantastic', -3.5650466133229215), ('horrid', -3.6895680657641807)]\n", "Progress: 0.82 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.505351530365435), ('horrid', -3.5555145292149475), ('dreadful', -3.723125002700873), ('horrendous', -3.7799764137444125)]\n", "Progress: 0.84 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.52225303835475), ('horrid', -3.659388277805759), ('horrendous', -3.6596442985085273), ('brilliant', -3.7064657941555823)]\n", "Progress: 0.86 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.403871102526396), ('brilliant', -3.4543509427136447), ('dreadful', -3.6327538154677845), ('horrendous', -3.638227523309254)]\n", "Progress: 0.88 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.580263267663093), ('brilliant', -3.168048700465928), ('fantastic', -3.47530255959384), ('horrendous', -3.5527378429665823)]\n", "Progress: 0.9 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.5560655161199812), ('brilliant', -3.2765502997525595), ('phenomenal', -3.6192939150249934), ('marvelous', -3.627323403235288)]\n", "Progress: 0.92 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.773181520872384), ('brilliant', -3.393156773756116), ('superb', -3.628130376783132), ('fantastic', -3.6532165585570455)]\n", "Progress: 0.94 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.7740421486011266), ('pathetic', -3.3994302985804645), ('phenomenal', -3.4011259732447283), ('brilliant', -3.4238172462044756)]\n", "Progress: 0.96 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.7104808985858138), ('pathetic', -3.3380165149787935), ('phenomenal', -3.5040384247065166), ('brilliant', -3.6382402279199804)]\n", "Progress: 0.98 | `Terrible` nearest neighbors: [('terrible', -0.0), ('horrible', -2.421661394464606), ('phenomenal', -3.4302258651982), ('pathetic', -3.5310157422128037), ('fantastic', -3.5986885562168713)]\n", "[('terrible', -0.0), ('horrible', -2.9616723194748578), ('pathetic', -3.53126362760028), ('phenomenal', -3.704929909889765), ('brilliant', -3.8445386415610043), ('dreadful', -3.9673549749081625), ('superb', -4.113236415218835)]\n" ] } ], "source": [ "for review_i, review in enumerate(input_dataset * epochs):\n", " for target_i in range(len(review)):\n", " # predict only a random subset, because it's really expensive to predict every vocab\n", " # We can't do a softmax over all possible words, we will predict for the target word + a subset of the total vocab\n", " target_samples = [review[target_i]] + list(concatenated[(np.random.rand(negative)*len(concatenated)).astype('int').tolist()])\n", " \n", " # get tokens on the right & on Left of target word\n", " left_context = review[max(0, target_i-window):target_i]\n", " right_context = review[target_i+1: min(len(review), target_i+window)]\n", " \n", " # feed forward\n", " # context words w/o target word\n", " # mean instead of sum, interesting\n", " layer_1 = np.mean(W0[left_context+right_context], axis=0)\n", " # using sigmoid here is kind of weird because there is only one true target token\n", " layer_2 = sigmoid(layer_1.dot(W1[target_samples].T))\n", " layer_2_delta = layer_2 - layer_2_target\n", " layer_1_delta = 
layer_2_delta.dot(W1[target_samples])\n", "\n", "        # update weights\n", "        W0[left_context+right_context] -= layer_1_delta*lr\n", "        W1[target_samples] -= np.outer(layer_2_delta, layer_1)*lr\n", "\n", "    if(review_i % 1000 == 0):\n", "        print(f\"Progress: {round(review_i/float(len(input_dataset)*epochs), 3)} | `Terrible` nearest neighbors: {similar('terrible', top=5)}\")\n", "print(similar('terrible'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The word embeddings get trained according to the task the neural network is trained on. Let's give a few examples:\n", "- **Sentiment Analysis**: Embeddings are grouped together depending on how positive/negative they are, i.e., depending on how they affect a review being good or bad.\n", "- **Filling in the Blank**: Embeddings are grouped together depending on how interchangeable they are when filling in blanks.\n", "  - Solve: \"*I ___ you so much!*\"\n", "  - Possible solution — \"I hate you so much!\"\n", "  - Possible solution — \"I love you so much!\"\n", "\n", "In this sense, hate & love are pretty close!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Meaning is Derived from Loss\n", "\n",
\n", "\n", "Before, words were clustered according to the likelihood that the review is positive/negative. Now, they are clustered based on the likelihood that they will occur on the same phrase (regardless of the sentiment behind a review).\n", "\n", "Our key takeaway is that even though we are training on the same dataset, using a very similar network architecture, we can influence what the network learns by changing the loss function (task). Even though it's looking at the same information, we can alter its learning behavior by simply changing the output structure.\n", "\n", "Let's call the process of choosing what the network should learn: **Intelligence Targeting**. We can also change how the network measures error, its architetcure, and regularization, this is also a way of performing Intelligence targeting.\n", "\n", "In deep learning research, all of the above techniques fall under the umbrella term: **Loss function construction**." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Neural Networks don't really **LEARN** Data; they minimize Loss Functions\n", "### The Choice of Loss Function Determines the Network's Knowledge\n", "\n", "Considering that Learning is all about minimizing a loss function, this gives a broader understanding of how neural networks work.\n", "\n", "Different kinds of layers, activations, regularization techniques, datasets, aren't really that different. For Example: if the network is overfitting, we can augment the loss fucntion by choosing simpler non-linearities, adding dropout, enforcing regularizations, adding more data and so on. All of these techniques will have a similar effect on the loss function and the learning behavior.\n", "\n", "With learning, everything is contained within the loss function and **If something is going wrong, remember that the solution is in the loss function**." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## King - Man + Woman ~= Queen\n", "\n", "The task of filling in the blank creates an interesting property called \"word analogies\". Analogies are one of the famous properties of word embeddings (or trained vectors). \n", "\n", "We can take different embeddings and perform algebric operations on them to discover these analogies." 
] }, { "cell_type": "code", "execution_count": 126, "metadata": {}, "outputs": [], "source": [ "def analogy(positive=['terrible', 'good'], negative=['bad']):\n", " norms = np.sum(W0*W0, axis=1)\n", " norms.resize((norms.shape[0], 1))\n", " # normalize weights for vector-level operations\n", " normed_weights = W0 * norms\n", " query_vect = np.zeros(W0.shape[1])\n", " for word in positive:\n", " query_vect += normed_weights[word2index[word]]\n", " for word in negative:\n", " query_vect -= normed_weights[word2index[word]]\n", " \n", " scores = Counter()\n", " for word, index in word2index.items():\n", " raw_difference = W0[index] - query_vect\n", " squared_difference = raw_difference * raw_difference\n", " scores[word] = -math.sqrt(sum(squared_difference))\n", " return scores.most_common(10)[1:]" ] }, { "cell_type": "code", "execution_count": 132, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[('lee', -174.27474476700854),\n", " ('been', -174.49725533677292),\n", " ('david', -174.61356143328723),\n", " ('william', -174.6143457630514),\n", " ('walken', -174.70899829891496),\n", " ('st', -174.73844365112527),\n", " ('simon', -174.76909796361042),\n", " ('sean', -174.8055256904309),\n", " ('smith', -174.89679055038982)]" ] }, "execution_count": 132, "metadata": {}, "output_type": "execute_result" } ], "source": [ "analogy(['elizabeth', 'he'], ['she'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Word Analogies\n", "### Linear Compression of an Existing Property in the Data\n", "\n", "Even though \"Word Analogy\" Discovery was initially very exciting, the deep learning NLP paradigm didn't move forward from that to discover new features, instead, current language models rely on ~~Recurrent Neural Networks to do language modeling~~ (This book was released before ELMO, BERT, & GPT-2, that is why the author considers RNNs to be the SoTA in Language modeling).\n", "\n", "Nevertheless, we need to understand why this concept emerged out of the network as a result of us training the network to fill in the blank? If we imagine the word embeddings to have two dimensions, then it would be easier to know why word analogies work:\n", "\n", "
" ] }, { "cell_type": "code", "execution_count": 133, "metadata": {}, "outputs": [], "source": [ "king = [.6, .1]\n", "man = [.5, .0]\n", "woman = [.0, .8]\n", "queen = [.1, 1.0]" ] }, { "cell_type": "code", "execution_count": 134, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[0.09999999999999998, 0.1]" ] }, "execution_count": 134, "metadata": {}, "output_type": "execute_result" } ], "source": [ "[x_i - y_i for (x_i, y_i) in zip(king, man)]" ] }, { "cell_type": "code", "execution_count": 135, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[0.1, 0.19999999999999996]" ] }, "execution_count": 135, "metadata": {}, "output_type": "execute_result" } ], "source": [ "[x_i - y_i for (x_i, y_i) in zip(queen, woman)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The relative usefulness to the final prediction between \"Man\"/\"King\" & \"Woman\"/\"Queen\" is similar because the difference between \"King\" and \"Man\" Leaves a vector that represents **Royalty**. There are a bunch of male/female related words in one grouping, and a bunch of king/queen related words in another grouping. Because the relative distance between the two group is constant, it means that the distances between each group items will be relatively the same. \n", "\n", "This phenomena can be traced back to the chosen loss. What is important is that learning analogies is more about the properties of language than deep learning. Any linear compression of these co-occurent statistics will yield the same results." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summary\n", "### We've learned a lot about Word embeddings & the impact of loss on learning\n", "\n", "We've also unpacked the principles of using neural networks to model language.\n", "\n", "---" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.6" } }, "nbformat": 4, "nbformat_minor": 4 }