{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Homework and bake-off: word-level entailment with neural networks" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "__author__ = \"Christopher Potts\"\n", "__version__ = \"CS224u, Stanford, Spring 2020\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Contents\n", "\n", "1. [Overview](#Overview)\n", "1. [Set-up](#Set-up)\n", "1. [Data](#Data)\n", " 1. [Edge disjoint](#Edge-disjoint)\n", " 1. [Word disjoint](#Word-disjoint)\n", "1. [Baseline](#Baseline)\n", " 1. [Representing words: vector_func](#Representing-words:-vector_func)\n", " 1. [Combining words into inputs: vector_combo_func](#Combining-words-into-inputs:-vector_combo_func)\n", " 1. [Classifier model](#Classifier-model)\n", " 1. [Baseline results](#Baseline-results)\n", "1. [Homework questions](#Homework-questions)\n", " 1. [Hypothesis-only baseline [2 points]](#Hypothesis-only-baseline-[2-points])\n", " 1. [Alternatives to concatenation [2 points]](#Alternatives-to-concatenation-[2-points])\n", " 1. [A deeper network [2 points]](#A-deeper-network-[2-points])\n", " 1. [Your original system [3 points]](#Your-original-system-[3-points])\n", "1. [Bake-off [1 point]](#Bake-off-[1-point])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overview" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The general problem is word-level natural language inference.\n", "\n", "Training examples are pairs of words $(w_{L}, w_{R}), y$ with $y = 1$ if $w_{L}$ entails $w_{R}$, otherwise $0$.\n", "\n", "The homework questions below ask you to define baseline models for this and develop your own system for entry in the bake-off, which will take place on a held-out test-set distributed at the start of the bake-off. (Thus, all the data you have available for development is available for training your final system before the bake-off begins.)\n", "\n", "\"wordentail-diagram.png\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set-up" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "See [the first notebook in this unit](nli_01_task_and_data.ipynb) for set-up instructions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from collections import defaultdict\n", "import json\n", "import numpy as np\n", "import os\n", "import pandas as pd\n", "from torch_shallow_neural_classifier import TorchShallowNeuralClassifier\n", "import nli\n", "import utils" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "DATA_HOME = 'data'\n", "\n", "NLIDATA_HOME = os.path.join(DATA_HOME, 'nlidata')\n", "\n", "wordentail_filename = os.path.join(\n", " NLIDATA_HOME, 'nli_wordentail_bakeoff_data.json')\n", "\n", "GLOVE_HOME = os.path.join(DATA_HOME, 'glove.6B')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data\n", "\n", "I've processed the data into two different train/test splits, in an effort to put some pressure on our models to actually learn these semantic relations, as opposed to exploiting regularities in the sample.\n", "\n", "* `edge_disjoint`: The `train` and `dev` __edge__ sets are disjoint, but many __words__ appear in both `train` and `dev`.\n", "* `word_disjoint`: The `train` and `dev` __vocabularies are disjoint__, and thus the edges are disjoint as well.\n", "\n", "These are very different problems. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "For `word_disjoint`, there is real pressure on the model to learn abstract relationships, as opposed to memorizing properties of individual words." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with open(wordentail_filename) as f:\n", "    wordentail_data = json.load(f)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The outer keys are the splits plus a list giving the vocabulary for the entire dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wordentail_data.keys()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Edge disjoint" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wordentail_data['edge_disjoint'].keys()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is what the `dev` examples look like; all portions of both splits have this same format:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wordentail_data['edge_disjoint']['dev'][: 5]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's test to make sure no edges are shared between `train` and `dev`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "nli.get_edge_overlap_size(wordentail_data, 'edge_disjoint')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we expect, a *lot* of vocabulary items are shared between `train` and `dev`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "nli.get_vocab_overlap_size(wordentail_data, 'edge_disjoint')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is a large percentage of the entire vocab:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "len(wordentail_data['vocab'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's the distribution of labels in the `train` set. It's highly imbalanced, which will pose a challenge for learning. (I'll go ahead and reveal that the `dev` set is similarly distributed.)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def label_distribution(split):\n", "    return pd.DataFrame(wordentail_data[split]['train'])[1].value_counts()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "label_distribution('edge_disjoint')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Word disjoint" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wordentail_data['word_disjoint'].keys()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the `word_disjoint` split, no __words__ are shared between `train` and `dev`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "nli.get_vocab_overlap_size(wordentail_data, 'word_disjoint')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Because no words are shared between `train` and `dev`, no edges are either:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "nli.get_edge_overlap_size(wordentail_data, 'word_disjoint')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The label distribution is similar to that of `edge_disjoint`, though the overall number of examples is a bit smaller:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "label_distribution('word_disjoint')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Baseline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Even in deep learning, __feature representation is vital and requires care!__ For our task, feature representation has two parts: representing the individual words and combining those representations into a single network input." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Representing words: vector_func" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's consider two baseline word representation methods:\n", "\n", "1. Random vectors (as returned by `utils.randvec`).\n", "1. 50-dimensional GloVe representations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def randvec(w, n=50, lower=-1.0, upper=1.0):\n", "    \"\"\"Returns a random vector of length `n`. `w` is ignored.\"\"\"\n", "    return utils.randvec(n=n, lower=lower, upper=upper)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Any of the files in glove.6B will work here:\n", "\n", "glove_dim = 50\n", "\n", "glove_src = os.path.join(GLOVE_HOME, 'glove.6B.{}d.txt'.format(glove_dim))\n", "\n", "# Creates a dict mapping strings (words) to GloVe vectors:\n", "GLOVE = utils.glove2dict(glove_src)\n", "\n", "def glove_vec(w):\n", "    \"\"\"Return `w`'s GloVe representation if available, else return\n", "    a random vector.\"\"\"\n", "    return GLOVE.get(w, randvec(w, n=glove_dim))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Combining words into inputs: vector_combo_func" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Here we decide how to combine the two word vectors into a single representation. In more detail, where `u` is a vector representation of the left word and `v` is a vector representation of the right word, we need a function `vector_combo_func` such that `vector_combo_func(u, v)` returns a new input vector `z` of dimension `m`. A simple example is concatenation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def vec_concatenate(u, v):\n", "    \"\"\"Concatenate np.array instances `u` and `v` into a new np.array.\"\"\"\n", "    return np.concatenate((u, v))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`vector_combo_func` could instead be vector average, vector difference, etc. (even combinations of those) – there's lots of space for experimentation here; [homework question 2](#Alternatives-to-concatenation-[2-points]) below pushes you to do some exploration." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Classifier model\n", "\n", "For a baseline model, I chose `TorchShallowNeuralClassifier`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "net = TorchShallowNeuralClassifier(hidden_dim=50, max_iter=100)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Baseline results\n", "\n", "The following puts the above pieces together, using `vector_func=glove_vec`, since `vector_func=randvec` seems so hopelessly misguided for `word_disjoint`!" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "word_disjoint_experiment = nli.wordentail_experiment(\n", "    train_data=wordentail_data['word_disjoint']['train'],\n", "    assess_data=wordentail_data['word_disjoint']['dev'],\n", "    model=net,\n", "    vector_func=glove_vec,\n", "    vector_combo_func=vec_concatenate)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Homework questions\n", "\n", "Please embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Hypothesis-only baseline [2 points]\n", "\n", "During our discussion of SNLI and MultiNLI, we noted that a number of research teams have shown that hypothesis-only baselines for NLI tasks can be remarkably robust. This question asks you to explore briefly how this baseline affects the 'edge_disjoint' and 'word_disjoint' versions of our task.\n", "\n", "For this problem, submit two functions:\n", "\n", "1. A `vector_combo_func` function called `hypothesis_only` that simply throws away the premise, using the unmodified hypothesis (second) vector as its representation of the example.\n", "\n", "1. A function called `run_hypothesis_only_evaluation` that does the following:\n", "    1. Loops over the two conditions 'word_disjoint' and 'edge_disjoint' and the two `vector_combo_func` values `vec_concatenate` and `hypothesis_only`, calling `nli.wordentail_experiment` to train on the condition's 'train' portion and assess on its 'dev' portion, with `glove_vec` as the `vector_func`. So that the results are consistent, use an `sklearn.linear_model.LogisticRegression` with default parameters as the model.\n", "    1. Returns a `dict` mapping `(condition_name, function_name)` pairs to the 'macro-F1' score for that pair, as returned by the call to `nli.wordentail_experiment`. (Tip: you can get the `str` name of your function `hypothesis_only` with `hypothesis_only.__name__`.)\n",
"\n", "The test functions `test_hypothesis_only` and `test_run_hypothesis_only_evaluation` will help ensure that your functions have the desired logic." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "##### YOUR CODE HERE\n", "\n", "\n", "def hypothesis_only(u, v):\n", "    ##### YOUR CODE HERE\n", "\n", "\n", "\n", "\n", "def run_hypothesis_only_evaluation():\n", "    ##### YOUR CODE HERE\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def test_hypothesis_only(hypothesis_only):\n", "    v = hypothesis_only(1, 2)\n", "    assert v == 2" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_hypothesis_only(hypothesis_only)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def test_run_hypothesis_only_evaluation(run_hypothesis_only_evaluation):\n", "    results = run_hypothesis_only_evaluation()\n", "    assert ('word_disjoint', 'vec_concatenate') in results, \\\n", "        \"The return value of `run_hypothesis_only_evaluation` does not have the intended kind of keys\"\n", "    assert isinstance(results[('word_disjoint', 'vec_concatenate')], float), \\\n", "        \"The values of the `run_hypothesis_only_evaluation` result should be floats\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_run_hypothesis_only_evaluation(run_hypothesis_only_evaluation)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Alternatives to concatenation [2 points]\n", "\n", "We've so far just used vector concatenation to represent the premise and hypothesis words. This question asks you to explore two simple alternatives:\n", "\n", "1. Write a function `vec_diff` that, for a given pair of vector inputs `u` and `v`, returns the element-wise difference between `u` and `v`.\n", "\n", "1. Write a function `vec_max` that, for a given pair of vector inputs `u` and `v`, returns the element-wise max values between `u` and `v`.\n", "\n", "You needn't include your uses of `nli.wordentail_experiment` with these functions, but we assume you'll be curious to see how they do!" ] },
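{ "cell_type": "markdown", "metadata": {}, "source": [ "To illustrate the general pattern such combiners follow, here is a sketch of an element-wise average. It is *not* one of the two required functions, and the name `vec_average` is ours, used only for illustration; it just shows the expected input/output behavior:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def vec_average(u, v):\n", "    \"\"\"Illustrative combiner (not part of the required answer):\n", "    element-wise mean of np.array instances `u` and `v`.\"\"\"\n", "    return (u + v) / 2.0" ] },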
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def vec_diff(u, v):\n", " ##### YOUR CODE HERE\n", "\n", "\n", "\n", " \n", "def vec_max(u, v):\n", " ##### YOUR CODE HERE\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def test_vec_diff(vec_diff):\n", " u = np.array([10.2, 8.1])\n", " v = np.array([1.2, -7.1])\n", " result = vec_diff(u, v)\n", " expected = np.array([9.0, 15.2])\n", " assert np.array_equal(result, expected), \\\n", " \"Expected {}; got {}\".format(expected, result)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_vec_diff(vec_diff)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def test_vec_max(vec_max):\n", " u = np.array([1.2, 8.1])\n", " v = np.array([10.2, -7.1])\n", " result = vec_max(u, v)\n", " expected = np.array([10.2, 8.1])\n", " assert np.array_equal(result, expected), \\\n", " \"Expected {}; got {}\".format(expected, result)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_vec_max(vec_max)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### A deeper network [2 points]\n", "\n", "It is very easy to subclass `TorchShallowNeuralClassifier` if all you want to do is change the network graph: all you have to do is write a new `define_graph`. If your graph has new arguments that the user might want to set, then you should also redefine `__init__` so that these values are accepted and set as attributes.\n", "\n", "For this question, please subclass `TorchShallowNeuralClassifier` so that it defines the following graph:\n", "\n", "$$\\begin{align}\n", "h_{1} &= xW_{1} + b_{1} \\\\\n", "r_{1} &= \\textbf{Bernoulli}(1 - \\textbf{dropout\\_prob}, n) \\\\\n", "d_{1} &= r_1 * h_{1} \\\\\n", "h_{2} &= f(d_{1}) \\\\\n", "h_{3} &= h_{2}W_{2} + b_{2}\n", "\\end{align}$$\n", "\n", "Here, $r_{1}$ and $d_{1}$ define a dropout layer: $r_{1}$ is a random binary vector of dimension $n$, where the probability of a value being $1$ is given by $1 - \\textbf{dropout_prob}$. $r_{1}$ is multiplied element-wise by our first hidden representation, thereby zeroing out some of the values. The result is fed to the user's activation function $f$, and the result of that is fed through another linear layer to produce $h_{3}$. (Inside `TorchShallowNeuralClassifier`, $h_{3}$ is the basis for a softmax classifier, so no activation function is applied to it.)\n", "\n", "For your implementation, please use `nn.Sequential`, `nn.Linear`, and `nn.Dropout` to define the required layers.\n", "\n", "For comparison, using this notation, `TorchShallowNeuralClassifier` defines the following graph:\n", "\n", "$$\\begin{align}\n", "h_{1} &= xW_{1} + b_{1} \\\\\n", "h_{2} &= f(h_{1}) \\\\\n", "h_{3} &= h_{2}W_{2} + b_{2}\n", "\\end{align}$$\n", "\n", "The following code starts this sub-class for you, so that you can concentrate on `define_graph`. Be sure to make use of `self.dropout_prob`\n", "\n", "For this problem, submit just your completed `TorchDeepNeuralClassifier`. You needn't evaluate it, though we assume you will be keen to do that!\n", "\n", "You can use `test_TorchDeepNeuralClassifier` to ensure that your network has the intended structure." 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import torch.nn as nn\n", "\n", "class TorchDeepNeuralClassifier(TorchShallowNeuralClassifier):\n", "    def __init__(self, dropout_prob=0.7, **kwargs):\n", "        self.dropout_prob = dropout_prob\n", "        super().__init__(**kwargs)\n", "\n", "    def define_graph(self):\n", "        \"\"\"Complete this method!\n", "\n", "        Returns\n", "        -------\n", "        an `nn.Module` instance, which can be a free-standing class you\n", "        write yourself, as in `torch_rnn_classifier`, or the output of\n", "        `nn.Sequential`, as in `torch_shallow_neural_classifier`.\n", "\n", "        \"\"\"\n", "        ##### YOUR CODE HERE\n", "\n", "\n", "\n", "\n", "##### YOUR CODE HERE\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def test_TorchDeepNeuralClassifier(TorchDeepNeuralClassifier):\n", "    dropout_prob = 0.55\n", "    assert hasattr(TorchDeepNeuralClassifier(), \"dropout_prob\"), \\\n", "        \"TorchDeepNeuralClassifier must have an attribute `dropout_prob`.\"\n", "    try:\n", "        inst = TorchDeepNeuralClassifier(dropout_prob=dropout_prob)\n", "    except TypeError:\n", "        raise TypeError(\"TorchDeepNeuralClassifier must allow the user \"\n", "                        \"to set `dropout_prob` on initialization\")\n", "    inst.input_dim = 10\n", "    inst.n_classes_ = 5\n", "    graph = inst.define_graph()\n", "    assert len(graph) == 4, \\\n", "        \"The graph should have 4 layers; yours has {}\".format(len(graph))\n", "    expected = {\n", "        0: 'Linear',\n", "        1: 'Dropout',\n", "        2: 'Tanh',\n", "        3: 'Linear'}\n", "    for i, label in expected.items():\n", "        name = graph[i].__class__.__name__\n", "        assert label in name, \\\n", "            \"The {} layer of the graph should be a {} layer; yours is {}\".format(i, label, name)\n", "    assert graph[1].p == dropout_prob, \\\n", "        \"The user's value for `dropout_prob` should be the value of `p` for the Dropout layer.\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_TorchDeepNeuralClassifier(TorchDeepNeuralClassifier)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Your original system [3 points]\n", "\n", "This is a simple dataset, but our focus on the 'word_disjoint' condition ensures that it's a challenging one, and there are lots of modeling strategies one might adopt.\n", "\n", "You are free to do whatever you like. We require only that your system differ in some way from those defined in the preceding questions. They don't have to be completely different, though. For example, you might want to stick with the same model but represent examples differently, or the reverse.\n", "\n", "Keep in mind that, for the bake-off evaluation, the 'edge_disjoint' portions of the data are off limits. You can, though, train on the combination of the 'word_disjoint' 'train' and 'dev' portions. You are free to use different pretrained word vectors and the like. Please do not introduce additional entailment datasets into your training data, though.\n", "\n", "Please embed your code in this notebook so that we can rerun it.\n", "\n", "In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Enter your system description in this cell.\n", "# Please do not remove this comment.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Bake-off [1 point]\n", "\n", "The goal of the bake-off is to achieve the highest macro-average F1 score on __word_disjoint__, on a test set that we will make available at the start of the bake-off. The announcement will go out on the discussion forum. To enter, you'll be asked to run `nli.bake_off_evaluation` on the output of your chosen `nli.wordentail_experiment` run. \n", "\n", "The cells below this one constitute your bake-off entry.\n", "\n", "The rules described in the [Your original system](#Your-original-system-[3-points]) homework question are also in effect for the bake-off.\n", "\n", "Systems that enter will receive the additional homework point, and systems that achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.\n", "\n", "Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.\n", "\n", "The announcement will include the details on where to submit your entry." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Enter your bake-off assessment code into this cell. \n", "# Please do not remove this comment.\n", "##### YOUR CODE HERE\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# On an otherwise blank line in this cell, please enter\n", "# your macro-avg f1 value as reported by the code above. \n", "# Please enter only a number between 0 and 1 inclusive.\n", "# Please do not remove this comment.\n", "\n", "##### YOUR CODE HERE\n", "\n", "\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.5" } }, "nbformat": 4, "nbformat_minor": 1 }