{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Natural language inference: Models" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "__author__ = \"Christopher Potts\"\n", "__version__ = \"CS224u, Stanford, Spring 2018\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Contents\n", "\n", "0. [Overview](#Overview)\n", "0. [Set-up](#Set-up)\n", "0. [Sparse feature representations](#Sparse-feature-representations)\n", " 0. [Feature representations](#Feature-representations)\n", " 0. [Model wrapper](#Model-wrapper)\n", " 0. [Assessment](#Assessment)\n", "0. [Sentence-encoding models](#Sentence-encoding-models)\n", " 0. [Dense representations with a linear classifier](#Dense-representations-with-a-linear-classifier)\n", " 0. [Dense representations with a shallow neural network](#Dense-representations-with-a-shallow-neural-network)\n", " 0. [Recurrent neural networks](#Recurrent-neural-networks)\n", " 0. [Other sentence-encoding model ideas](#Other-sentence-encoding-model-ideas)\n", "0. [Chained models](#Chained-models)\n", " 0. [Simple RNN](#Simple-RNN)\n", " 0. [Separate premise and hypothesis RNNs](#Separate-premise-and-hypothesis-RNNs)\n", "0. [Attention mechanisms](#Attention-mechanisms)\n", "0. [Other findings](#Other-findings)\n", "0. [Exploratory exercises](#Exploratory-exercises)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overview\n", "\n", "This notebook defines and explores a number of models for NLI. The general plot is familiar from [our work with the Stanford Sentiment Treebank](sst_01_overview.ipynb):\n", "\n", "1. Models based on sparse feature representations\n", "1. Linear classifiers and feed-forward neural classifiers using dense feature representations\n", "1. Recurrent and tree-structured neural networks\n", "\n", "The twist here is that, while NLI is another classification problem, the inputs have important high-level structure: __a premise__ and __a hypothesis__. This invites exploration of a host of neural model designs:\n", "\n", "* In __sentence-encoding__ models, the premise and hypothesis are analyzed separately, combined only for the final classification step.\n", "\n", "* In __chained__ models, the premise is processed first, then the hypotheses, giving a unified representation of the pair.\n", "\n", "NLI resembles sequence-to-sequence problems like __machine translation__ and __language modeling__. The central modeling difference is that NLI doesn't produce an output sequence, but rather consumes two sequences to produce a label. Still, there are enough affinities that many ideas have been shared among these fields." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set-up\n", "\n", "* See [the previous notebook](nli_01_task_and_data.ipynb) for set-up instructions for this unit. \n", "\n", "* Additionally, make sure you still have [the Wikipedia 2014 + Gigaword 5 distribution of the pretrained GloVe vectors](http://nlp.stanford.edu/data/glove.6B.zip). This is probably already in `vsmdata`." 
] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "from collections import Counter\n", "from itertools import product\n", "import numpy as np\n", "from sklearn.linear_model import LogisticRegression\n", "import tensorflow as tf\n", "from tf_rnn_classifier import TfRNNClassifier\n", "from tf_shallow_neural_classifier import TfShallowNeuralClassifier\n", "from tf_rnn_classifier import TfRNNClassifier\n", "import nli\n", "import os\n", "import sst\n", "import utils" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "glove_home = os.path.join('vsmdata', 'glove.6B')\n", "\n", "data_home = \"nlidata\"\n", "\n", "snli_home = os.path.join(data_home, \"snli_1.0\")\n", "\n", "multinli_home = os.path.join(data_home, \"multinl_i.0\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Sparse feature representations\n", "\n", "We begin by looking at models based in sparse, hand-built feature representations. As in earlier units of the course, we will see that __these models are competitive__: easy to design, fast to optimize, and highly effective." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Feature representations\n", "\n", "The guiding idea for NLI sparse features is that one wants to knit together the premise and hypothesis, so that the model can learn about their relationships rather than just about each part separately." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With `word_overlap_phi`, we just get the set of words that occur in both the premise and hypothesis." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "def word_overlap_phi(t1, t2): \n", " \"\"\"Basis for features for the words in both the premise and hypothesis.\n", " This tends to produce very sparse representations.\n", " \n", " Parameters\n", " ----------\n", " t1, t2 : `nltk.tree.Tree`\n", " As given by `str2tree`.\n", " \n", " Returns\n", " -------\n", " defaultdict\n", " Maps each word in both `t1` and `t2` to 1.\n", " \n", " \"\"\"\n", " overlap = set([w1 for w1 in t1.leaves() if w1 in t2.leaves()])\n", " return Counter(overlap)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With `word_cross_product_phi`, we count all the pairs $(w_{1}, w_{1})$ where $w_{1}$ is a word from the premise and $w_{2}$ is a word from the hypothesis. This creates a very large feature space. These models are very strong right out of the box, and they can be supplemented with more fine-grained features." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "def word_cross_product_phi(t1, t2):\n", " \"\"\"Basis for cross-product features. This tends to produce pretty \n", " dense representations.\n", " \n", " Parameters\n", " ----------\n", " t1, t2 : `nltk.tree.Tree`\n", " As given by `str2tree`.\n", " \n", " Returns\n", " -------\n", " defaultdict\n", " Maps each (w1, w2) in the cross-product of `t1.leaves()` and \n", " `t2.leaves()` to its count. This is a multi-set cross-product\n", " (repetitions matter).\n", " \n", " \"\"\"\n", " return Counter([(w1, w2) for w1, w2 in product(t1.leaves(), t2.leaves())])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Model wrapper\n", "\n", "Our experiment framework is basically the same as the one we used for the Stanford Sentiment Treebank. 
Here, I actually use `sst.fit_classifier_with_crossvalidation` (from that unit) to create a wrapper around `LogisticRegression` for cross-validation of hyperparameters. At this point, I am not sure what parameters will be good for our NLI datasets, so this hyperparameter search is vital." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "def fit_maxent_with_crossvalidation(X, y):\n", " \"\"\"A MaxEnt model of the dataset with hyperparameter cross-validation.\n", " \n", " Parameters\n", " ----------\n", " X : 2d np.array\n", " The matrix of features, one example per row.\n", " \n", " y : list\n", " The list of labels for rows in `X`. \n", " \n", " Returns\n", " -------\n", " sklearn.linear_model.LogisticRegression\n", " A trained model instance, the best model found.\n", " \n", " \"\"\" \n", " basemod = LogisticRegression(fit_intercept=True)\n", " cv = 3\n", " param_grid = {'C': [0.4, 0.6, 0.8, 1.0],\n", " 'penalty': ['l1', 'l2']} \n", " return sst.fit_classifier_with_crossvalidation(X, y, basemod, cv, param_grid)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Assessment\n", "\n", "Because SNLI and MultiNLI are huge, we can't afford to do experiments on the full datasets all the time. Thus, we will mainly work within the training sets, using the train readers to sample smaller datasets that can then be divided for training and assessment.\n", "\n", "Here, we sample 10% of the training examples. I set the random seed (`random_state=42`) so that we get consistency across the samples; setting `random_state=None` will give new random samples each time." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "train_reader = nli.SNLITrainReader(\n", " samp_percentage=0.10, random_state=42)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "An experimental dataset can be built directly from the reader and a feature function:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "dataset = nli.build_dataset(train_reader, word_overlap_phi)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "dict_keys(['X', 'y', 'vectorizer', 'raw_examples'])" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dataset.keys()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "However, it's more efficient to use `nli.experiment` to bring all these pieces together. This wrapper will work for all the models we consider." 
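] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before running it, here's a rough sketch of what `nli.experiment` automates when `assess_reader=None`, using the `dataset` built above. This is only a sketch under assumptions (e.g., that `dataset['X']` is a matrix that `train_test_split` accepts, and that a 70/30 split approximates the wrapper's behavior); the real function also handles featurizing a separate assessment reader and other bookkeeping:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import classification_report\n", "from sklearn.model_selection import train_test_split\n", "\n", "# A hand-rolled train/assess split, model fit, and report -- roughly\n", "# the pipeline that `nli.experiment` wraps up (a sketch, not its code):\n", "X_train, X_assess, y_train, y_assess = train_test_split(\n", "    dataset['X'], dataset['y'], test_size=0.30)\n", "mod = fit_maxent_with_crossvalidation(X_train, y_train)\n", "print(classification_report(y_assess, mod.predict(X_assess)))"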
] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best params {'C': 0.6, 'penalty': 'l2'}\n", "Best score: 0.412\n", " precision recall f1-score support\n", "\n", "contradiction 0.436 0.621 0.513 5572\n", " entailment 0.455 0.388 0.419 5498\n", " neutral 0.379 0.272 0.317 5496\n", "\n", " avg / total 0.423 0.428 0.416 16566\n", "\n" ] } ], "source": [ "_ = nli.experiment(\n", " train_reader=nli.SNLITrainReader(samp_percentage=0.10), \n", " phi=word_overlap_phi,\n", " train_func=fit_maxent_with_crossvalidation,\n", " assess_reader=None,\n", " random_state=42)" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best params {'C': 0.4, 'penalty': 'l1'}\n", "Best score: 0.605\n", " precision recall f1-score support\n", "\n", "contradiction 0.673 0.633 0.652 5520\n", " entailment 0.616 0.694 0.653 5517\n", " neutral 0.595 0.554 0.574 5396\n", "\n", " avg / total 0.628 0.628 0.627 16433\n", "\n" ] } ], "source": [ "_ = nli.experiment(\n", " train_reader=nli.SNLITrainReader(samp_percentage=0.10), \n", " phi=word_cross_product_phi,\n", " train_func=fit_maxent_with_crossvalidation,\n", " assess_reader=None,\n", " random_state=42)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As expected `word_cross_product_phi` is very strong. Let's take the hyperparameters chosen there and use them for an experiment in which we train on the entire training set and evaluate on the dev set; this seems like a good way to balance responsible search over hyperparameters with our resource limitations." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "def fit_maxent_classifier_with_preselected_params(X, y): \n", " mod = LogisticRegression(\n", " fit_intercept=True, \n", " penalty='ll', \n", " solver='saga', ## Required for penalty='ll'.\n", " C=0.4)\n", " mod.fit(X, y)\n", " return mod" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " precision recall f1-score support\n", "\n", "contradiction 0.762 0.729 0.745 3278\n", " entailment 0.708 0.795 0.749 3329\n", " neutral 0.716 0.657 0.685 3235\n", "\n", " avg / total 0.729 0.728 0.727 9842\n", "\n", "CPU times: user 19min 17s, sys: 9.56 s, total: 19min 26s\n", "Wall time: 19min 26s\n" ] } ], "source": [ "%%time\n", "_ = nli.experiment(\n", " train_reader=nli.SNLITrainReader(samp_percentage=1.0), \n", " assess_reader=nli.SNLIDevReader(samp_percentage=1.0),\n", " phi=word_cross_product_phi,\n", " train_func=fit_maxent_classifier_with_preselected_params,\n", " random_state=None)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This baseline is very similar to the one established in the original SNLI paper by Bowman et al. for models like this one." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Sentence-encoding models\n", "\n", "We turn now to sentence-encoding models. The hallmark of these is that the premise and hypothesis get their own representation in some sense, and then those representations are combined to predict the label. [Bowman et al. 
2015](http://aclweb.org/anthology/D/D15/D15-1075.pdf) explore models of this form as part of introducing SNLI.\n", "\n", "The feed-forward networks we used in [the word-level bake-off](nli_wordentail_bakeoff.ipynb) are members of this family of models: each word was represented separately, and the concatenation of those representations was used as the input to the model." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Dense representations with a linear classifier\n", "\n", "Perhaps the simplest sentence-encoding model sums (or averages, etc.) the word representations for the premise, does the same for the hypothesis, and concatenates those two representations for use as the input to a linear classifier. \n", "\n", "Here's a diagram that is meant to suggest the full space of models of this form:\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's an implementation of this model where \n", "\n", "* The embedding is GloVe.\n", "* The word representations are summed.\n", "* The premise and hypothesis vectors are concatenated.\n", "* A softmax classifier is used at the top." ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "glove_lookup = utils.glove2dict(\n", " os.path.join(glove_home, 'glove.6B.50d.txt'))" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "def glove_leaves_phi(t1, t2, np_func=np.sum):\n", " \"\"\"Represent `t1` and `t2` as a combination of the vectors of their words.\n", " \n", " Parameters\n", " ----------\n", " t1 : nltk.Tree \n", " t2 : nltk.Tree \n", " np_func : function (default: np.sum)\n", " A numpy matrix operation that can be applied columnwise, \n", " like `np.mean`, `np.sum`, or `np.prod`. The requirement is that \n", " the function take `axis=0` as one of its arguments (to ensure\n", " columnwise combination) and that it return a vector of a \n", " fixed length, no matter what the size of the tree is.\n", " \n", " Returns\n", " -------\n", " np.array\n", " \n", " \"\"\" \n", " prem_vecs = _get_tree_vecs(t1, glove_lookup, np_func) \n", " hyp_vecs = _get_tree_vecs(t2, glove_lookup, np_func) \n", " return np.concatenate((prem_vecs, hyp_vecs))\n", " \n", " \n", "def _get_tree_vecs(tree, lookup, np_func):\n", " allvecs = np.array([lookup[w] for w in tree.leaves() if w in lookup]) \n", " if len(allvecs) == 0:\n", " dim = len(next(iter(lookup.values())))\n", " feats = np.zeros(dim) \n", " else: \n", " feats = np_func(allvecs, axis=0) \n", " return feats" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best params {'C': 0.6, 'penalty': 'l1'}\n", "Best score: 0.508\n", " precision recall f1-score support\n", "\n", "contradiction 0.492 0.471 0.481 5456\n", " entailment 0.499 0.558 0.527 5558\n", " neutral 0.531 0.492 0.511 5543\n", "\n", " avg / total 0.508 0.507 0.506 16557\n", "\n" ] } ], "source": [ "_ = nli.experiment(\n", " train_reader=nli.SNLITrainReader(samp_percentage=0.10), \n", " phi=glove_leaves_phi,\n", " train_func=fit_maxent_with_crossvalidation,\n", " assess_reader=None,\n", " random_state=42,\n", " vectorize=False) # Ask `experiment` not to featurize; we did it already."
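] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `np_func` keyword makes it easy to try other combination functions, as the exercises at the end suggest. Here's a small sketch of a mean-based variant (`glove_leaves_mean_phi` is a name introduced just for this illustration); it can be passed as `phi` in place of `glove_leaves_phi` above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def glove_leaves_mean_phi(t1, t2):\n", "    # Identical to `glove_leaves_phi` above, except that the GloVe\n", "    # vectors are averaged columnwise rather than summed:\n", "    return glove_leaves_phi(t1, t2, np_func=np.mean)"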
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Dense representations with a shallow neural network\n", "\n", "A small tweak to the above is to use a neural network instead of a softmax classifier at the top:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "def fit_shallow_neural_classifier_with_crossvalidation(X, y): \n", " basemod = TfShallowNeuralClassifier(max_iter=1000)\n", " cv = 3\n", " param_grid = {'hidden_dim': [25, 50, 100]}\n", " return sst.fit_classifier_with_crossvalidation(X, y, basemod, cv, param_grid) " ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Iteration 1000: loss: 31.445966720581055" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Best params {'hidden_dim': 100}\n", "Best score: 0.538\n", " precision recall f1-score support\n", "\n", "contradiction 0.623 0.431 0.510 5438\n", " entailment 0.551 0.591 0.570 5432\n", " neutral 0.507 0.624 0.560 5529\n", "\n", " avg / total 0.560 0.549 0.547 16399\n", "\n" ] } ], "source": [ "_ = nli.experiment(\n", " train_reader=nli.SNLITrainReader(samp_percentage=0.10), \n", " phi=glove_leaves_phi,\n", " train_func=fit_shallow_neural_classifier_with_crossvalidation,\n", " assess_reader=None,\n", " random_state=42,\n", " vectorize=False) # Ask `experiment` not to featurize; we did it already." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Recurrent neural networks\n", "\n", "A more sophisticated sentence-encoding model processes the premise and hypothesis with separate RNNs and uses the concatenation of their final states as the basis for the classification decision at the top:\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This model is particularly easy to implement using [the TensorFlow framework for this course](tensorflow_models.ipynb):\n", " \n", "1. Define a subclass of `TfRNNClassifier`.\n", "1. Define `build_graph`.\n", "1. 
Define `train_dict` and `test_dict` so that they featurize the incoming examples `X` as pairs of lists of words, one for the premise and the other for the hypothesis.\n", "\n", "Here is a complete implementation:" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "class TfNLISentenceRepRNN(TfRNNClassifier):\n", " \n", " def build_graph(self):\n", " self._define_embedding()\n", " \n", " # Separate RNN graphs:\n", " self.prem_last = self.build_premise_graph() \n", " self.hyp_last = self.build_hypothesis_graph()\n", " \n", " # The outputs are labels as usual:\n", " self.outputs = tf.placeholder(\n", " tf.float32, shape=[None, self.output_dim])\n", " \n", " # Concatenate the two final RNN states:\n", " self.last = tf.concat((self.prem_last, self.hyp_last), axis=1)\n", " \n", " self.last_dim = self.hidden_dim * 2\n", " \n", " # Output softmax layer:\n", " self.W_ly = self.weight_init(\n", " self.last_dim, self.output_dim, 'W_ly')\n", " self.b_y = self.bias_init(self.output_dim, 'b_y')\n", " self.model = tf.matmul(self.last, self.W_ly) + self.b_y\n", " \n", " def build_premise_graph(self):\n", " self.premises = tf.placeholder(\n", " tf.int32, [None, self.max_length])\n", " self.prem_lengths = tf.placeholder(tf.int32, [None])\n", " self.prem_feats = tf.nn.embedding_lookup(\n", " self.embedding, self.premises)\n", " self.prem_cell = self.cell_class(\n", " self.hidden_dim, activation=self.hidden_activation)\n", " with tf.variable_scope('premise'):\n", " prem_outputs, prem_state = tf.nn.dynamic_rnn(\n", " self.prem_cell,\n", " self.prem_feats,\n", " dtype=tf.float32,\n", " sequence_length=self.prem_lengths)\n", " prem_last = self._get_final_state(self.prem_cell, prem_state)\n", " return prem_last \n", " \n", " def build_hypothesis_graph(self):\n", " self.hypotheses = tf.placeholder(\n", " tf.int32, [None, self.max_length])\n", " self.hyp_lengths = tf.placeholder(tf.int32, [None])\n", " self.hyp_feats = tf.nn.embedding_lookup(\n", " self.embedding, self.hypotheses)\n", " self.hyp_cell = self.cell_class(\n", " self.hidden_dim, activation=self.hidden_activation)\n", " with tf.variable_scope('hypothesis'):\n", " hyp_outputs, hyp_state = tf.nn.dynamic_rnn(\n", " self.hyp_cell,\n", " self.hyp_feats,\n", " dtype=tf.float32,\n", " sequence_length=self.hyp_lengths)\n", " hyp_last = self._get_final_state(self.hyp_cell, hyp_state)\n", " return hyp_last \n", " \n", " def train_dict(self, X, y): \n", " X_prem, X_hyp = zip(*X) \n", " X_prem, prem_lengths = self._convert_X(X_prem)\n", " X_hyp, hyp_lengths = self._convert_X(X_hyp)\n", " return {self.premises: X_prem, \n", " self.hypotheses: X_hyp, \n", " self.prem_lengths: prem_lengths, \n", " self.hyp_lengths: hyp_lengths, \n", " self.outputs: y}\n", " \n", " def test_dict(self, X): \n", " X_prem, X_hyp = zip(*X) \n", " X_prem, prem_lengths = self._convert_X(X_prem)\n", " X_hyp, hyp_lengths = self._convert_X(X_hyp)\n", " return {self.premises: X_prem, \n", " self.hypotheses: X_hyp, \n", " self.prem_lengths: prem_lengths, \n", " self.hyp_lengths: hyp_lengths} " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For evaluation, we define a wrapper for `TfNLISentenceRepRNN`:" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "def fit_sentence_rep_rnn(X, y): \n", " vocab = get_vocab(X, n_words=2000)\n", " # Reduce the network size or `max_iter` for non-GPU usage:\n", " mod = TfNLISentenceRepRNN(vocab, hidden_dim=50, max_iter=1000)\n", " mod.fit(X, y)\n", " return mod" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ 
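"The `train_dict` and `test_dict` methods above rely on the `zip(*X)` idiom to split a batch of examples into premises and hypotheses. Here's a toy illustration (the word lists are made up just for this example):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# How zip(*X) splits (premise, hypothesis) pairs into parallel tuples:\n", "toy_X = [(['a', 'dog', 'runs'], ['an', 'animal', 'moves']),\n", "         (['two', 'kids', 'play'], ['children', 'are', 'playing'])]\n", "toy_prem, toy_hyp = zip(*toy_X)\n", "print(toy_prem)  # (['a', 'dog', 'runs'], ['two', 'kids', 'play'])\n", "print(toy_hyp)   # (['an', 'animal', 'moves'], ['children', 'are', 'playing'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ 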
"Examples are represented as pairs of lists of words:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [], "source": [ "def sentence_rep_rnn_phi(t1, t2):\n", " return [t1.leaves(), t2.leaves()]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We carry over our usual method for getting a vocabulary for the RNN:" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "def get_vocab(X, n_words=None): \n", " wc = Counter([w for pair in X for ex in pair for w in ex])\n", " wc = wc.most_common(n_words) if n_words else wc.items()\n", " vocab = {w for w, c in wc}\n", " vocab.add(\"$UNK\")\n", " return sorted(vocab)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And finally a basic experiment; for a real analysis, we would train for much longer, find the optimal hyperparameters, and then scale this up to a full train/dev evaluation." ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Iteration 1000: loss: 33.303855180740356" ] }, { "name": "stdout", "output_type": "stream", "text": [ " precision recall f1-score support\n", "\n", "contradiction 0.553 0.574 0.563 5700\n", " entailment 0.601 0.582 0.592 5511\n", " neutral 0.544 0.540 0.542 5305\n", "\n", " avg / total 0.566 0.566 0.566 16516\n", "\n" ] } ], "source": [ "_ = nli.experiment(\n", " train_reader=nli.SNLITrainReader(samp_percentage=0.10), \n", " phi=sentence_rep_rnn_phi,\n", " train_func=fit_sentence_rep_rnn,\n", " assess_reader=None,\n", " random_state=42,\n", " vectorize=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Other sentence-encoding model ideas\n", "\n", "Given that [we already explored tree-structured neural networks (TreeNNs)](sst_03_neural_networks.ipynb#Tree-structured-neural-networks), it's natural to consider these as the basis for sentence-encoding NLI models:\n", "\n", "\n", "\n", "And this is just the begnning: any model used to represent sentences is presumably a candidate for use in sentence-encoding NLI!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Chained models\n", "\n", "The final major class of NLI designs we look at are those in which the premise and hypothesis are processed sequentially, as a pair. These don't deliver representations of the premise or hypothesis separately. They bear the strongest resemblance to classic sequence-to-sequence models." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Simple RNN\n", "\n", "In the simplest version of this model, we just concatenate the premise and hypothesis. The model itself is identical to the one we used for the Stanford Sentiment Treebank:\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To implement this, we can use `TfRNNClassifier` out of the box. We just need to concatenate the leaves of the premise and hypothesis trees:" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [], "source": [ "def simple_chained_rep_rnn_phi(t1, t2):\n", " \"\"\"Map `t1` and `t2` to a single list of leaf nodes.\n", " \n", " A slight variant might insert a designated boundary symbol between \n", " the premise leaves and the hypothesis leaves. 
Be sure to add it to \n", " the vocab in that case, else it will be $UNK.\n", " \"\"\"\n", " return t1.leaves() + t2.leaves()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's a quick evaluation, just to get a feel for this model:" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "def fit_simple_chained_rnn(X, y): \n", " vocab = get_vocab(X, n_words=2000)\n", " # Reduce the network size or `max_iter` for non-GPU usage:\n", " mod = TfRNNClassifier(vocab, hidden_dim=50, max_iter=1000)\n", " mod.fit(X, y)\n", " return mod" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Iteration 1000: loss: 40.29029595851898" ] }, { "name": "stdout", "output_type": "stream", "text": [ " precision recall f1-score support\n", "\n", "contradiction 0.380 0.277 0.320 5507\n", " entailment 0.465 0.473 0.469 5407\n", " neutral 0.430 0.540 0.478 5497\n", "\n", " avg / total 0.425 0.429 0.422 16411\n", "\n" ] } ], "source": [ "_ = nli.experiment(\n", " train_reader=nli.SNLITrainReader(samp_percentage=0.10), \n", " phi=simple_chained_rep_rnn_phi,\n", " train_func=fit_simple_chained_rnn,\n", " assess_reader=None,\n", " random_state=42,\n", " vectorize=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Separate premise and hypothesis RNNs\n", "\n", "A natural variation on the above is to give the premise and hypothesis each their own RNN:\n", "\n", "\n", "\n", "This greatly increases the number of parameters, but it gives the model more chances to learn that appearing in the premise is different from appearing in the hypothesis. One could even push this idea further by giving the premise and hypothesis their own embeddings as well.\n", "\n", "Implementing this in our TensorFlow framework is very easy, involving only minor modifications to `TfNLISentenceRepRNN` above. Note that [tf.nn.dynamic_rnn](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) has a keyword parameter `initial_state` that can be a `Tensor`. Thus, the final state of the premise RNN can be passed in here to chain the two RNNs together." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Attention mechanisms\n", "\n", "Many of the best-performing systems in [the SNLI leaderboard](https://nlp.stanford.edu/projects/snli/) use __attention mechanisms__ to help the model learn important associations between words in the premise and words in the hypothesis. I believe [Rocktäschel et al. (2015)](https://arxiv.org/pdf/1509.06664v1.pdf) were the first to explore such models for NLI.\n", "\n", "For instance, if _puppy_ appears in the premise and _dog_ in the hypothesis, then that might be a high-precision indicator that the correct relationship is entailment.\n", "\n", "This diagram is a high-level schematic for adding attention mechanisms to a chained RNN model for NLI:\n", "\n", "\n", "\n", "Since TensorFlow will handle the details of backpropagation, implementing these models is largely reduced to figuring out how to wrangle the states of the model in the desired way." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Other findings\n", "\n", "1. A high-level lesson of [the SNLI leaderboard](https://nlp.stanford.edu/projects/snli/) is that one can do __extremely well__ with simple neural models whose hyperparameters are selected via extensive cross-validation. 
This is methodologically interesting but might be dispiriting to those of us without vast resources to devote to these computations! (On the flip side, cleverly designed linear models or ensembles with sparse feature representations might beat all of these entrants with a fraction of the computational budget.)\n", "\n", "1. In an outstanding project for this course in 2016, [Leonid Keselman](https://leonidk.com) observed that [one can do much better than chance on SNLI by processing only the hypothesis](https://leonidk.com/stanford/cs224u.html). This relates to [observations we made in the word-level bake-off](nli_wordentail_bakeoff.ipynb) about how certain terms will tend to appear more on the right in entailment pairs than on the left.\n", "\n", "1. As we pointed out at the start of this unit, [Dagan et al. (2006) pitched NLI as a general-purpose NLU task](nli_01_task_and_data.ipynb#Overview). We might then hope that the representations we learn on this task will transfer to others. So far, the evidence for this is decidedly mixed. I suspect the core scientific idea is sound, but that __we still lack the needed methods for doing transfer learning__.\n", "\n", "1. For SNLI, we seem to have entered the inevitable phase in machine learning problems where __ensembles do best__." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exploratory exercises\n", "\n", "These are largely meant to give you a feel for the material, but some of them could lead to projects and help you with future work for the course. These are not for credit.\n", "\n", "1. When we [feed dense representations to a simple classifier](#Dense-representations-with-a-linear-classifier), what is the effect of changing the combination functions (e.g., changing `sum` to `mean`; changing `concatenate` to `difference`)? What happens if we swap out `LogisticRegression` for, say, an [sklearn.ensemble.RandomForestClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) instance?\n", "\n", "1. Implement the [Separate premise and hypothesis RNN](#Separate-premise-and-hypothesis-RNNs) and evaluate it, comparing in particular against [the version that simply concatenates the premise and hypothesis](#Simple-RNN). Does having all these additional parameters pay off? Do you need more training examples to start to see the value of this idea?\n", "\n", "1. The illustrations above all use SNLI. It is worth experimenting with MultiNLI as well. It has both __matched__ and __mismatched__ dev sets that are worth exploring. It's also interesting to think about combining SNLI and MultiNLI, to get additional training instances, to push the models to generalize more, and to assess transfer learning hypotheses." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.4" } }, "nbformat": 4, "nbformat_minor": 2 }