{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Homework 4: Word-level entailment with neural networks" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "__author__ = \"Christopher Potts\"\n", "__version__ = \"CS224u, Stanford, Spring 2019\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Contents\n", "\n", "1. [Overview](#Overview)\n", "1. [Set-up](#Set-up)\n", "1. [Data](#Data)\n", " 1. [Edge disjoint](#Edge-disjoint)\n", " 1. [Word disjoint](#Word-disjoint)\n", "1. [Baseline](#Baseline)\n", " 1. [Representing words: vector_func](#Representing-words:-vector_func)\n", " 1. [Combining words into inputs: vector_combo_func](#Combining-words-into-inputs:-vector_combo_func)\n", " 1. [Classifier model](#Classifier-model)\n", " 1. [Baseline results](#Baseline-results)\n", "1. [Homework questions](#Homework-questions)\n", " 1. [Hypothesis-only baseline [2 points]](#Hypothesis-only-baseline-[2-points])\n", " 1. [Alternatives to concatenation [1 point]](#Alternatives-to-concatenation-[1-point])\n", " 1. [A deeper network [2 points]](#A-deeper-network-[2-points])\n", " 1. [Your original system [4 points]](#Your-original-system-[4-points])\n", "1. [Bake-off [1 point]](#Bake-off-[1-point])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overview" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The general problem is word-level natural language inference.\n", "\n", "Training examples are pairs of words $(w_{L}, w_{R}), y$ with $y = 1$ if $w_{L}$ entails $w_{R}$, otherwise $0$.\n", "\n", "The homework questions below ask you to define baseline models for this and develop your own system for entry in the bake-off, which will take place on a held-out test-set distributed at the start of the bake-off. (Thus, all the data you have available for development is available for training your final system before the bake-off begins.)\n", "\n", "\"wordentail-diagram.png\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set-up" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "See [the first notebook in this unit](nli_01_task_and_data.ipynb) for set-up instructions." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "from collections import defaultdict\n", "import json\n", "import numpy as np\n", "import os\n", "import pandas as pd\n", "from torch_shallow_neural_classifier import TorchShallowNeuralClassifier\n", "import nli\n", "import utils" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "DATA_HOME = 'data'\n", "\n", "NLIDATA_HOME = os.path.join(DATA_HOME, 'nlidata')\n", "\n", "wordentail_filename = os.path.join(\n", " NLIDATA_HOME, 'nli_wordentail_bakeoff_data.json')\n", "\n", "GLOVE_HOME = os.path.join(DATA_HOME, 'glove.6B')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data\n", "\n", "I've processed the data into two different train/test splits, in an effort to put some pressure on our models to actually learn these semantic relations, as opposed to exploiting regularities in the sample.\n", "\n", "* `edge_disjoint`: The `train` and `dev` __edge__ sets are disjoint, but many __words__ appear in both `train` and `dev`.\n", "* `word_disjoint`: The `train` and `dev` __vocabularies are disjoint__, and thus the edges are disjoint as well.\n", "\n", "These are very different problems. 
For `word_disjoint`, there is real pressure on the model to learn abstract relationships, as opposed to memorizing properties of individual words." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "with open(wordentail_filename, encoding='utf8') as f:\n", "    wordentail_data = json.load(f)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The outer keys are the splits plus a list giving the vocabulary for the entire dataset:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "dict_keys(['edge_disjoint', 'vocab', 'word_disjoint'])" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "wordentail_data.keys()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Edge disjoint" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "dict_keys(['dev', 'train'])" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "wordentail_data['edge_disjoint'].keys()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is what the split looks like; all of the splits have this same format:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[[['sweater', 'stroke'], 0],\n", " [['constipation', 'hypovolemia'], 0],\n", " [['disease', 'inflammation'], 0],\n", " [['herring', 'animal'], 1],\n", " [['cauliflower', 'outlook'], 0]]" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "wordentail_data['edge_disjoint']['dev'][: 5]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's test to make sure no edges are shared between `train` and `dev`:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "nli.get_edge_overlap_size(wordentail_data, 'edge_disjoint')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we expect, a *lot* of vocabulary items are shared between `train` and `dev`:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "2916" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "nli.get_vocab_overlap_size(wordentail_data, 'edge_disjoint')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is a large percentage of the entire vocab:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "8470" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "len(wordentail_data['vocab'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's the distribution of labels in the `train` set. It's highly imbalanced, which will pose a challenge for learning. 
(I'll go ahead and reveal that the `dev` set is similarly distributed.)" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "def label_distribution(split):\n", "    return pd.DataFrame(wordentail_data[split]['train'])[1].value_counts()" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0    14650\n", "1     2745\n", "Name: 1, dtype: int64" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "label_distribution('edge_disjoint')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Word disjoint" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "dict_keys(['dev', 'train'])" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "wordentail_data['word_disjoint'].keys()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the `word_disjoint` split, no __words__ are shared between `train` and `dev`:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "nli.get_vocab_overlap_size(wordentail_data, 'word_disjoint')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Because no words are shared between `train` and `dev`, no edges are either:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "nli.get_edge_overlap_size(wordentail_data, 'word_disjoint')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The label distribution is similar to that of `edge_disjoint`, though the overall number of examples is a bit smaller:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0    7199\n", "1    1349\n", "Name: 1, dtype: int64" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "label_distribution('word_disjoint')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Baseline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Even in deep learning, __feature representation is vital and requires care!__ For our task, feature representation has two parts: representing the individual words and combining those representations into a single network input." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Representing words: vector_func" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's consider two baseline word representation methods:\n", "\n", "1. Random vectors (as returned by `utils.randvec`).\n", "1. 50-dimensional GloVe representations." ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "def randvec(w, n=50, lower=-1.0, upper=1.0):\n", "    \"\"\"Returns a random vector of length `n`. 
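It has the same signature as other `vector_func` options, but 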
`w` is ignored.\"\"\"\n", "    return utils.randvec(n=n, lower=lower, upper=upper)" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "# Any of the files in glove.6B will work here:\n", "\n", "glove_dim = 50\n", "\n", "glove_src = os.path.join(GLOVE_HOME, 'glove.6B.{}d.txt'.format(glove_dim))\n", "\n", "# Creates a dict mapping strings (words) to GloVe vectors:\n", "GLOVE = utils.glove2dict(glove_src)\n", "\n", "def glove_vec(w): \n", "    \"\"\"Return `w`'s GloVe representation if available, else return \n", "    a random vector.\"\"\"\n", "    return GLOVE.get(w, randvec(w, n=glove_dim))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Combining words into inputs: vector_combo_func" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we decide how to combine the two word vectors into a single representation. In more detail, where `u` is a vector representation of the left word and `v` is a vector representation of the right word, we need a function `vector_combo_func` such that `vector_combo_func(u, v)` returns a new input vector `z` of dimension `m`. A simple example is concatenation:" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "def vec_concatenate(u, v):\n", "    \"\"\"Concatenate np.array instances `u` and `v` into a new np.array.\"\"\"\n", "    return np.concatenate((u, v))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`vector_combo_func` could instead be vector average, vector difference, etc. (even combinations of those) – there's lots of space for experimentation here; [homework question 2](#Alternatives-to-concatenation-[1-point]) below pushes you to do some exploration." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Classifier model\n", "\n", "For a baseline model, I chose `TorchShallowNeuralClassifier`:" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "net = TorchShallowNeuralClassifier(hidden_dim=50, max_iter=100)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Baseline results\n", "\n", "The following puts the above pieces together, using `vector_func=glove_vec`, since `vector_func=randvec` seems so hopelessly misguided for `word_disjoint`!" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Finished epoch 100 of 100; error is 0.026732386788353324" ] }, { "name": "stdout", "output_type": "stream", "text": [ "              precision    recall  f1-score   support\n", "\n", "           0       0.92      0.94      0.93      1910\n", "           1       0.42      0.36      0.39       239\n", "\n", "   micro avg       0.87      0.87      0.87      2149\n", "   macro avg       0.67      0.65      0.66      2149\n", "weighted avg       0.87      0.87      0.87      2149\n", "\n" ] } ], "source": [ "word_disjoint_experiment = nli.wordentail_experiment(\n", "    train_data=wordentail_data['word_disjoint']['train'],\n", "    assess_data=wordentail_data['word_disjoint']['dev'], \n", "    model=net, \n", "    vector_func=glove_vec,\n", "    vector_combo_func=vec_concatenate)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Homework questions\n", "\n", "Please embed your homework responses in this notebook, and do not delete any cells from the notebook. 
(You are free to add as many cells as you like as part of your responses.)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Hypothesis-only baseline [2 points]\n", "\n", "During our discussion of SNLI and MultiNLI, we noted that a number of research teams have shown that hypothesis-only baselines for NLI tasks can be remarkably robust. This question asks you to explore briefly how this baseline affects the 'edge_disjoint' and 'word_disjoint' versions of our task.\n", "\n", "For this problem, submit code for the following:\n", "\n", "1. A `vector_combo_func` function called `hypothesis_only` that simply throws away the premise, using the unmodified hypothesis (second) vector as its representation of the example.\n", "\n", "1. Code for looping over the two conditions 'word_disjoint' and 'edge_disjoint' and the two `vector_combo_func` values `vec_concatenate` and `hypothesis_only`, calling `nli.wordentail_experiment` to train on the condition's 'train' portion and assess on its 'dev' portion, with `glove_vec` as the `vector_func`. So that the results are consistent, use an `sklearn.linear_model.LogisticRegression` with default parameters as the model.\n", "\n", "1. Print out the percentage-wise increase in macro-F1 over the `hypothesis_only` baseline that `vec_concatenate` delivers for each of the two conditions. For example, if `hypothesis_only` returns 0.52 for condition `C` and `vec_concatenate` delivers 0.75 for `C`, then you'd report a ((0.75 / 0.52) - 1) * 100 = 44.23 percent increase for `C`. The values you need are stored in the dictionary returned by `nli.wordentail_experiment`, with key 'macro-F1'. Please round the percentages to two digits." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Alternatives to concatenation [1 point]\n", "\n", "We've so far just used vector concatenation to represent the premise and hypothesis words. This question asks you to explore a simple alternative. \n", "\n", "For this problem, submit code for the following:\n", "\n", "1. A new potential value for `vector_combo_func` that does something different from concatenation. Options include, but are not limited to, element-wise addition, difference, and multiplication. These can be combined with concatenation if you like.\n", "1. Include a use of `nli.wordentail_experiment` in the same configuration as the one in [Baseline results](#Baseline-results) above, but with your new value of `vector_combo_func`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### A deeper network [2 points]\n", "\n", "It is very easy to subclass `TorchShallowNeuralClassifier` if all you want to do is change the network graph: all you have to do is write a new `define_graph`. 
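For example, here is a minimal sketch of the pattern; it assumes, based on `torch_shallow_neural_classifier`, that the parent class exposes `self.input_dim`, `self.hidden_dim`, `self.hidden_activation`, and `self.n_classes_`, and it simply reproduces the shallow graph:\n", "\n", "```python\n", "import torch.nn as nn\n", "\n", "class TorchShallowCloneClassifier(TorchShallowNeuralClassifier):\n", "    def define_graph(self):\n", "        # Same graph as the parent: linear layer, activation, linear layer.\n", "        return nn.Sequential(\n", "            nn.Linear(self.input_dim, self.hidden_dim),\n", "            self.hidden_activation,\n", "            nn.Linear(self.hidden_dim, self.n_classes_))\n", "```\n", "\n", "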
If your graph has new arguments that the user might want to set, then you should also redefine `__init__` so that these values are accepted and set as attributes.\n", "\n", "For this question, please subclass `TorchShallowNeuralClassifier` so that it defines the following graph:\n", "\n", "$$\\begin{align}\n", "h_{1} &= xW_{1} + b_{1} \\\\\n", "r_{1} &= \\textbf{Bernoulli}(1 - \\textbf{dropout\\_prob}, n) \\\\\n", "d_{1} &= r_{1} * h_{1} \\\\\n", "h_{2} &= f(d_{1}) \\\\\n", "h_{3} &= h_{2}W_{2} + b_{2}\n", "\\end{align}$$\n", "\n", "Here, $r_{1}$ and $d_{1}$ define a dropout layer: $r_{1}$ is a random binary vector of dimension $n$, where the probability of a value being $1$ is given by $1 - \\textbf{dropout\\_prob}$. $r_{1}$ is multiplied element-wise by our first hidden representation, thereby zeroing out some of the values. The result is fed to the user's activation function $f$, and the result of that is fed through another linear layer to produce $h_{3}$. (Inside `TorchShallowNeuralClassifier`, $h_{3}$ is the basis for a softmax classifier, so no activation function is applied to it.)\n", "\n", "For comparison, using this notation, `TorchShallowNeuralClassifier` defines the following graph:\n", "\n", "$$\\begin{align}\n", "h_{1} &= xW_{1} + b_{1} \\\\\n", "h_{2} &= f(h_{1}) \\\\\n", "h_{3} &= h_{2}W_{2} + b_{2}\n", "\\end{align}$$\n", "\n", "The following code starts this subclass for you, so that you can concentrate on `define_graph`. Be sure to make use of `self.dropout_prob`.\n", "\n", "For this problem, submit just your completed `TorchDeepNeuralClassifier`. You needn't evaluate it, though we assume you will be keen to do that!" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "import torch.nn as nn\n", "\n", "class TorchDeepNeuralClassifier(TorchShallowNeuralClassifier):\n", "    def __init__(self, dropout_prob=0.7, **kwargs):\n", "        self.dropout_prob = dropout_prob\n", "        super().__init__(**kwargs)\n", "    \n", "    def define_graph(self):\n", "        \"\"\"Complete this method!\n", "        \n", "        Returns\n", "        -------\n", "        an `nn.Module` instance, which can be a free-standing class you \n", "        write yourself, as in `torch_rnn_classifier`, or the output of \n", "        `nn.Sequential`, as in `torch_shallow_neural_classifier`.\n", "        \n", "        \"\"\"\n", "        \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Your original system [4 points]\n", "\n", "This is a simple dataset, but our focus on the 'word_disjoint' condition ensures that it's a challenging one, and there are lots of modeling strategies one might adopt. \n", "\n", "You are free to do whatever you like. We require only that your system differ in some way from those defined in the preceding questions. It doesn't have to be completely different from them, though. For example, you might want to stick with the model but represent examples differently, or the reverse.\n", "\n", "Keep in mind that, for the bake-off evaluation, the 'edge_disjoint' portions of the data are off limits. You can, though, train on the combination of the 'word_disjoint' 'train' and 'dev' portions. You are free to use different pretrained word vectors and the like. Please do not introduce additional entailment datasets into your training data, though.\n", "\n", "Please embed your code in this notebook so that we can rerun it."
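, "\n", "\n", "If you do train on the combination of the 'word_disjoint' 'train' and 'dev' portions, note that each portion is just a list of examples, so combining them is plain list concatenation; the result can then be passed as `train_data` to `nli.wordentail_experiment`. A sketch (`final_train` is a name introduced here, not something the course code requires):\n", "\n", "```python\n", "final_train = (wordentail_data['word_disjoint']['train'] +\n", "               wordentail_data['word_disjoint']['dev'])\n", "```"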
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Bake-off [1 point]\n", "\n", "The goal of the bake-off is to achieve the highest macro-average F1 score on __word_disjoint__, on a test set that we will make available at the start of the bake-off on May 6. The announcement will go out on Piazza. To enter, you'll be asked to run `nli.bake_off_evaluation` on the output of your chosen `nli.wordentail_experiment` run. \n", "\n", "To enter the bake-off, upload this notebook on Canvas:\n", "\n", "https://canvas.stanford.edu/courses/99711/assignments/187250\n", "\n", "The cells below this one constitute your bake-off entry.\n", "\n", "The rules described in the [Your original system](#Your-original-system-[4-points]) homework question are also in effect for the bake-off.\n", "\n", "Systems that enter will receive the additional homework point, and systems that achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.\n", "\n", "The bake-off will close at 4:30 pm on May 8. Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time." ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "# Enter your bake-off assessment code into this cell. \n", "# Please do not remove this comment.\n" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [], "source": [ "# On an otherwise blank line in this cell, please enter\n", "# your macro-avg f1 value as reported by the code above. \n", "# Please enter only a number between 0 and 1 inclusive.\n", "# Please do not remove this comment.\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.1" } }, "nbformat": 4, "nbformat_minor": 1 }