{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Natural language inference: task and datasets" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "__author__ = \"Christopher Potts\"\n", "__version__ = \"CS224u, Stanford, Spring 2020\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Contents\n", "\n", "1. [Overview](#Overview)\n", "1. [Our version of the task](#Our-version-of-the-task)\n", "1. [Primary resources](#Primary-resources)\n", "1. [Set-up](#Set-up)\n", "1. [SNLI](#SNLI)\n", " 1. [SNLI properties](#SNLI-properties)\n", " 1. [Working with SNLI](#Working-with-SNLI)\n", "1. [MultiNLI](#MultiNLI)\n", " 1. [MultiNLI properties](#MultiNLI-properties)\n", " 1. [Working with MultiNLI](#Working-with-MultiNLI)\n", " 1. [Annotated MultiNLI subsets](#Annotated-MultiNLI-subsets)\n", "1. [Adversarial NLI](#Adversarial-NLI)\n", " 1. [Adversarial NLI properties](#Adversarial-NLI-properties)\n", " 1. [Working with Adversarial NLI](#Working-with-Adversarial-NLI)\n", "1. [Other NLI datasets](#Other-NLI-datasets)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overview\n", "\n", "Natural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.\n", "\n", "[Dagan et al. (2006)](https://u.cs.biu.ac.il/~nlp/RTE1/Proceedings/dagan_et_al.pdf), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:\n", "\n", "> It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, __a QA system__ has to identify texts that entail a hypothesized answer. [...] Similarly, for certain __Information Retrieval__ queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In __multi-document summarization__ a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in __MT evaluation__ a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition \"engines\" which may provide useful generic modules across applications." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Our version of the task\n", "\n", "Our NLI data will look like this:\n", "\n", "| Premise | Relation | Hypothesis |\n", "|---------|---------------|------------|\n", "| turtle | contradiction | linguist |\n", "| A turtled danced | entails | A turtle moved |\n", "| Every reptile danced | entails | Every turtle moved |\n", "| Some turtles walk | contradicts | No turtles move |\n", "| James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |\n", "\n", "In the [word-entailment bakeoff](hw_wordentail.ipynb), we looked at a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Primary resources\n", "\n", "We're going to focus on three NLI corpora:\n", "\n", "* [The Stanford Natural Language Inference corpus (SNLI)](https://nlp.stanford.edu/projects/snli/)\n", "* [The Multi-Genre NLI Corpus (MultiNLI)](https://www.nyu.edu/projects/bowman/multinli/)\n", "* [The Adversarial NLI Corpus (ANLI)](https://github.com/facebookresearch/anli)\n", "\n", "The first was collected by a group at Stanford, led by [Sam Bowman](https://www.nyu.edu/projects/bowman/), and the second was collected by a group at NYU, also led by [Sam Bowman](https://www.nyu.edu/projects/bowman/). Both have the same format and were crowdsourced using the same basic methods. However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.\n", "\n", "The third corpus was collected by a group at Facebook AI and UNC Chapel Hill. The team's goal was to address the fact that datasets like SNLI and MultiNLI seem to be artificially easy – models trained on them can often surpass stated human performance levels but still fail on examples that are simple and intuitive for people. The dataset is \"Adversarial\" because the annotators were asked to try to construct examples that fooled strong models but still passed muster with other human readers.\n", "\n", "This notebook presents tools for working with these corpora. The [second notebook in the unit](nli_02_models.ipynb) concerns models of NLI." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set-up\n", "\n", "* As usual, you need to be fully set up to work with [the CS224u repository](https://github.com/cgpotts/cs224u/).\n", "\n", "* If you haven't already, download [the course data](http://web.stanford.edu/class/cs224u/data/data.tgz), unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change `DATA_HOME` below.)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "import nli\n", "import os\n", "import pandas as pd\n", "import random" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "DATA_HOME = os.path.join(\"data\", \"nlidata\")\n", "\n", "SNLI_HOME = os.path.join(DATA_HOME, \"snli_1.0\")\n", "\n", "MULTINLI_HOME = os.path.join(DATA_HOME, \"multinli_1.0\")\n", "\n", "ANNOTATIONS_HOME = os.path.join(DATA_HOME, \"multinli_1.0_annotations\")\n", "\n", "ANLI_HOME = os.path.join(DATA_HOME, \"anli_v0.1\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## SNLI" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### SNLI properties" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For SNLI (and MultiNLI), MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* All the premises are captions from the [Flickr30K corpus](http://shannon.cs.illinois.edu/DenotationGraph/).\n", "\n", "\n", "* Some of the sentences rather depressingly reflect stereotypes ([Rudinger et al. 
2017](https://aclanthology.coli.uni-saarland.de/papers/W17-1609/w17-1609)).\n", "\n", "\n", "* 550,152 train examples; 10K dev; 10K test\n", "\n", "\n", "* Mean length in tokens:\n", " * Premise: 14.1\n", " * Hypothesis: 8.3\n", "\n", "* Clause-types\n", " * Premise S-rooted: 74%\n", " * Hypothesis S-rooted: 88.9%\n", "\n", "\n", "* Vocab size: 37,026\n", "\n", "\n", "* 56,951 examples validated by four additional annotators\n", " * 58.3% examples with unanimous gold label\n", " * 91.2% of gold labels match the author's label\n", " * 0.70 overall Fleiss kappa\n", "\n", "\n", "* Top scores currently around 90%. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Working with SNLI" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following readers should make it easy to work with SNLI:\n", " \n", "* `nli.SNLITrainReader`\n", "* `nli.SNLIDevReader`\n", "\n", "Writing a `Test` reader is easy and so left to the user who decides that a test-set evaluation is appropriate. We omit that code as a subtle way of discouraging use of the test set during project development.\n", "\n", "The base class, `nli.NLIReader`, is used by all the readers discussed here.\n", "\n", "Because the datasets are so large, it is often useful to be able to randomly sample from them. All of the reader classes discussed here support this with their keyword argument `samp_percentage`. For example, the following samples approximately 10% of the examples from the SNLI training set:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"NLIReader({'src_filename': 'data/nlidata/snli_1.0/snli_1.0_train.jsonl', 'filter_unlabeled': True, 'samp_percentage': 0.1, 'random_state': 42, 'gold_label_attr_name': 'gold_label'})" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "nli.SNLITrainReader(SNLI_HOME, samp_percentage=0.10, random_state=42)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we trade efficiency for precision in the number of cases we return; see the implementation for details.)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All of the readers have a `read` method that yields `NLIExample` example instances. For SNLI, these have the following attributes:\n", "\n", "* __annotator_labels__: `list of str`\n", "* __captionID__: `str`\n", "* __gold_label__: `str`\n", "* __pairID__: `str`\n", "* __sentence1__: `str`\n", "* __sentence1_binary_parse__: `nltk.tree.Tree`\n", "* __sentence1_parse__: `nltk.tree.Tree`\n", "* __sentence2__: `str`\n", "* __sentence2_binary_parse__: `nltk.tree.Tree`\n", "* __sentence2_parse__: `nltk.tree.Tree`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following creates the label distribution for the training data:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "entailment 183416\n", "contradiction 183187\n", "neutral 182764\n", "- 785\n", "dtype: int64" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "snli_labels = pd.Series(\n", " [ex.gold_label for ex in nli.SNLITrainReader(\n", " SNLI_HOME, filter_unlabeled=False).read()])\n", "\n", "snli_labels.value_counts()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Use `filter_unlabeled=True` (the default) to silently drop the examples for which `gold_label` is `-`." 
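] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check (a sketch that uses only the reader options shown above), we can confirm on a small sample that the default filtering leaves no `-` labels behind:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch: sample about 1% of SNLI train with the default `filter_unlabeled=True`\n", "# and confirm that no unlabeled ('-') cases remain in the sample.\n", "sample_labels = [\n", "    ex.gold_label for ex in nli.SNLITrainReader(\n", "        SNLI_HOME, samp_percentage=0.01, random_state=42).read()]\n", "\n", "assert \"-\" not in sample_labels\n", "\n", "len(sample_labels)"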
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at a specific example in some detail:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "snli_iterator = iter(nli.SNLITrainReader(SNLI_HOME).read())" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "snli_ex = next(snli_iterator)" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\"NLIExample({'annotator_labels': ['neutral'], 'captionID': '3416050480.jpg#4', 'gold_label': 'neutral', 'pairID': '3416050480.jpg#4r1n', 'sentence1': 'A person on a horse jumps over a broken down airplane.', 'sentence1_binary_parse': Tree('X', [Tree('X', [Tree('X', ['A', 'person']), Tree('X', ['on', Tree('X', ['a', 'horse'])])]), Tree('X', [Tree('X', ['jumps', Tree('X', ['over', Tree('X', ['a', Tree('X', ['broken', Tree('X', ['down', 'airplane'])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['horse'])])])]), Tree('VP', [Tree('VBZ', ['jumps']), Tree('PP', [Tree('IN', ['over']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['broken']), Tree('JJ', ['down']), Tree('NN', ['airplane'])])])]), Tree('.', ['.'])])]), 'sentence2': 'A person is training his horse for a competition.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['A', 'person']), Tree('X', [Tree('X', ['is', Tree('X', [Tree('X', ['training', Tree('X', ['his', 'horse'])]), Tree('X', ['for', Tree('X', ['a', 'competition'])])])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('VP', [Tree('VBZ', ['is']), Tree('VP', [Tree('VBG', ['training']), Tree('NP', [Tree('PRP$', ['his']), Tree('NN', ['horse'])]), Tree('PP', [Tree('IN', ['for']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['competition'])])])])]), Tree('.', ['.'])])])})\n" ] } ], "source": [ "print(snli_ex)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"NLIExample({'annotator_labels': ['neutral'], 'captionID': '3416050480.jpg#4', 'gold_label': 'neutral', 'pairID': '3416050480.jpg#4r1n', 'sentence1': 'A person on a horse jumps over a broken down airplane.', 'sentence1_binary_parse': Tree('X', [Tree('X', [Tree('X', ['A', 'person']), Tree('X', ['on', Tree('X', ['a', 'horse'])])]), Tree('X', [Tree('X', ['jumps', Tree('X', ['over', Tree('X', ['a', Tree('X', ['broken', Tree('X', ['down', 'airplane'])])])])]), '.'])]), 'sentence1_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['horse'])])])]), Tree('VP', [Tree('VBZ', ['jumps']), Tree('PP', [Tree('IN', ['over']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['broken']), Tree('JJ', ['down']), Tree('NN', ['airplane'])])])]), Tree('.', ['.'])])]), 'sentence2': 'A person is training his horse for a competition.', 'sentence2_binary_parse': Tree('X', [Tree('X', ['A', 'person']), Tree('X', [Tree('X', ['is', Tree('X', [Tree('X', ['training', Tree('X', ['his', 'horse'])]), Tree('X', ['for', Tree('X', ['a', 'competition'])])])]), '.'])]), 'sentence2_parse': Tree('ROOT', [Tree('S', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('VP', [Tree('VBZ', ['is']), Tree('VP', [Tree('VBG', ['training']), Tree('NP', 
[Tree('PRP$', ['his']), Tree('NN', ['horse'])]), Tree('PP', [Tree('IN', ['for']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['competition'])])])])]), Tree('.', ['.'])])])})" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "snli_ex" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can see from the above attribute list, there are __three versions__ of the premise and hypothesis sentences:\n", "\n", "1. Regular string representations of the data\n", "1. Unlabeled binary parses \n", "1. Labeled parses" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'A person on a horse jumps over a broken down airplane.'" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "snli_ex.sentence1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The binary parses lack node labels; so that we can use `nltk.tree.Tree` with them, the label `X` is added to all of them:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAkMAAAEnCAIAAAAl1tAdAAAACXBIWXMAAA3XAAAN1wFCKJt4AAAAHXRFWHRTb2Z0d2FyZQBHUEwgR2hvc3RzY3JpcHQgOS4xNnO9PXQAACAASURBVHic7d1PaOxafifw04+bptuVhDmZtnmLhGvUIQ02mY1c0EwWroUMg99mFle1mM3tbFRwF1n1Q9qll1I6s8nigs7q3W3pLWbzTKDOorwbUB0YGGzIwD3Y6YFu7MYnDFOV6TSNZ/F7T09dtsvlKlXp3/fD4+H6Jx2rdPXT73d+kr9zf3/PAAAAauuTsgcAAACwFkQyAACoN0QyAACoN0QyAACoN0QyAACot1dlDwCgHFJKxpht25xzxpjWWmttWZZlWWUPDQBeBjkZtJExRggRRZFSip6RUvb7fQpvAFAviGTQRpzz4XDIOaeEjPi+73leiaMCgNV8B1dGQ2sZY/r9/nA41FoLIeI4LntEALAKRDJoNaWUEMIYE8dxPj8DgBpBdRFazbZtavRAGAOoL0QyaLUgCFzXNcZkrR8AUDvowof2ok5Fz/OMMYPBAAVGgJrCPBm0FLV7jEYjeqiUiqJoOByWOyoAWAGqi9BGSqmjoyNjTBRF9IwQQkp5dHRU7sAAYAXIyQAAoN6QkwEAQL0hkgEAQL0hkgEAQL2hCx9aSt/e6ttbM51e/frX+z/4Ae90eKdjv35d9rgA4MUQyaDJ5OUl+yZoMcbU1RVjTF5c5N/z4x/+8L9//Jh/xjk8ZIzZ+/uMMb6zQz84BwdbGTIAvBh6F6HezGxG8UldXZnZzEynlGnRk3mL4xPFPFoIeyLm0cd5p8N3dqy9vezj9v4+39nZzO8HAM9DJIMayJKqr3Osmxszm+nbW31zk38b73QoPtH/rd1da3d3/Zqhur4206mZTtX1NfsmyKmrKzOdLli7/fo1KpYA24FIBlWxZMCw9vas3d0sK6KAQUFr+2POMsJ8iF0hIwSAdSCSwVa9qIhHGVV23K9dEQ8VS4DtQCSDgs2lKUtOXGVJVUvSFFQsAQqESAaryFrY6UC85MQVDsTLQMUS4KUQyeBJlDfMtbA/O3FFR9KyJq4aDxVLgIcQydouf2SkuMWWnrjC6X+loGIJrYVI1nyrVauKamGHKkDFEpoNkawh5iau6Ai15MQVKoEth4ol1B0iWZ0sOXGFIw4UBRVLqAVEssp5OHG1oAqEiSsoCyqWUB2IZCV49BCwtXsvAWwBKpawTYhkm/LoxNVTLewME1fQGqhYQuEQydayzB8NYb/fws5w4gnwBFQsYTWIZM8o6o+GAMCaULGEpyCSMVb2Hw0BgDWhYtly7Y1k8vIyOjur+B8NAYA1vahi6Rwc+KenWx8jrOtV2QMoDcUnqjnU9I+GAMCz+M4OFRgfLfXPVSxpJhtqp705GQAANMMnZQ8AAABgLYhkAABQb4hkAABQb+3q+JBSMsZs2+acM8a01lpry7Isyyp7aABQDhwWGqBFOZkxRggRRZFSip6RUvb7fdqPAaCFcFhohhZFMs75cDjknNOZF/F93/O8EkcFACXCYaEZWteFb4zp9/vD4VBrLYSI47jsEQFAyXBYqLvWRTLGmFJKCGGMieM4fyIGAK2Fw0Kttai6mLFtm2Z0sb8CAMFhodbaGMmCIHBd1xiTzfECQMvhsFBr7erCZ9903HqeZ4wZDAaoJAAADgt11655MprXHY1G9FApFUXRcDgsd1QAUCIcFhqgRdVFpdTR0ZExJooiekYIIaU8Ojoqd2AAUBYcFpqhXTkZAAA0T4tyMgAAaCREMgAAqDdEMgAAqLfWdeEzxtT1dZKmv/ntb//jn/+52+2WPRwAKI2+vdW3t+rqysxm//MXv/jTP/kT3ulYu7vW7q69v893dsoeICylRR0fZjYT43GSpurq6t91Ot//gz/45b/8i7W353a73vGxtbtb9gABYIPk5WX2f3V1ZaZTdXWVvco7nb/49NM/+t735MVF/lPO4SFjjKKavb9PQW6r44YltCKSifNzeXGRpCljzO12ncNDt9vlOztJmsrLyyRNzXRq7++73a7X6+EsDKDWzGymrq4o2TLT6dcp13SavcHe3+edjr2/zxhzDg54p2O/fj23EHl5aaZTdX1NS9C3t/rmJnvV2tujpI2WkP0fytLkSEZRKh+o3G734fmUmc2SNJ0Ldd7xcRlDBoAXoEhDtcGHaRZjzDk85Ds7FHiy/9Zc3YKsjlK3bHWoT25NAyOZvr0V5+dJmuqbG97peL2e2+0+POd69INJmorzc/ogRT6cagFUwfJJkv36Ne90tvYvN58C6pubLKZmb8hSQKpPPpoCwpqaE8kotRLjMZ0leb2ec3CwWkMHtYSI8dhMp5hIA9imfAsGBYbFE1eVDQxZ6GWMfR3qqh
F6G6kJkYxKiFQbpDkwmgYrZMny8lKMx4wxe3+f0juUCwAK8WwLBnVYNKmZcK4cyhh7GKcLLIe2R40jmbq+pl5Eypy84+NHp8HWR9kezaUxxihSon0fYEmFtGA0GLbP+uoXyeamwajhcDvfa4mrBqiFLbdgNNtczjpXn8znrFSfbEDOurLaRLJKJUZbSwcBKquyLRjNtsw8Ip0rtOoCuBpEsipPVm1uig6gIhrTgtFg32bA35xYLK5PssZdAFfdSFajBsIC2yYBStTCFoxmW+YCuGZ8oZWLZLW+qGvlS9kAtgktBm32dSk4dwFcA27QVZVI1rAbbSx5exGATUMLBiyp1hOf5UeyZt/88KlbPpY9LmigWh+JoJoWnwlV5wZdpUUymgajQlzFp8HWl78NP5VMMZEGK0MLBpRr+Rt0sW1Vp7cdyVp+TG9V/Ib1oQUDamSZG3TlL4ArsCqwvUiGOltes2uq8FJowYCmypcQNneDro1HMvQ+LNCwPhdYBlowAAo/ddtsJEvStP/+PfrRn5W/9sD/7LPwzZuyRwQb8Z2//uvsZ7RgAMxZfAHcx7/7u6fO6jaekyVp2p5psPWp62u+s4Nz8KaKzs7QggHwIll90j89feo95XfhAwAArOOTsgcAAACwFkQyAACot1ebWKiUkjFm2zbnnDGmtdZaW5ZlWdYmVtcM2GgNhi8XYKOKz8mMMUKIKIqUUvSMlLLf79M/ZngUNlqD4csF2LTiIxnnfDgccs7p9JP4vu95XuHragxstAbDlwuwaZvqXTTG9Pv94XCotRZCxHG8ibU0DDZag+HLBdicDXbhK6WEEMaYOI7zZ6OwADZag+HLBdiQDfYu2rZN09r4R7s8bLQGw5cLsCEbjGRBELiua4zJJrrhWdhoDYYvF2BDNtKFz75pO/Y8zxgzGAxQTlkGNlqD4csF2KD7Dbi7u3McJ3s4mUxc193EipoEG63B8OUCrOPjx4+WZeX/Ec0pvrqolDo6OjLGRFFEzwghpJRHR0eFr6sxsNEaDF8uwJqMMVrrBWV53EEYAACqTinFOX/qtjiIZAAAUG+4gzAAANQbIhkAANQbIhkAANTbpq4nY4yp6+v/+o//+Eff+17ounxnZ3MrapLgyy//6Ze//C8//rHb7ZY9FihSkqby8vJ//epXf/Hpp2636xwclD0igObYVMeHOD8PkuS7r179v9/+9t//4R8O372zX7/exIoaw8xm/ffv5cXF/g9+cPXrX3u9Hs4AGkDf3orz8yRN9c2Ntbf3o08//adf/Yp+drtd7/jY2t0te4wAtVd8JDOzWZAkYjx2u934Jz/Rt7f99+/NdBq6rnd8XOy6GkNeXvbfv2eMxW/fut1udHYWJIm1t4czgJoys1mSpkmayosL3um43W4+D6P8TIzHjDHn8JBexVkLwMoKjmTq+nrwxRfq6ip0Xf/0lJ40s9ngiy+SNEWe8ajgyy+jr76y9/eH795lZ+jZlvQ/+yx886bcEcLy5OUlxTAzndr7+16v91SUomgnxmN1dfUw2gHA8oqMZFRR5J3Oo5kE5Rlzx+uWyyqKj4YrM5tFZ2fRV185h4fx27fYaFU2V0V8UeVwnc8CACsqks1VFJ/KuuZqaOuvt9aW3BpJmg4+fHj2bVAWcX4uLy6SNGWMUV618teEqiPAagqIZI9WFJ+yOAtpj0crik/JNhrKs9Whrq/FeJxVEd1u1+v1CvlqUHUEeKl1I9niiuJT6DjuHB4O371r23F55ViONpAqMLOZGI/F+bm+ueGdDk2DbejrQNURYEmrR7IlK4pPyYpmw3fv2nO+uWZ9FW0gJaI+jqyK6Bwebq0XF1VHgMVWjGQvqig+hRr011xIjbyoovgUtIFsGSVGYjw206m1t+cdH7vdbimbHVVHgKesEslWqyg+as3Eri4Knx1EG8imPQwbXq9XkaIuqo4Ac14WyTYUeAoMjRW0oY5NtIFsSI1KeTUaKsBGvSCSFVJRXLDwRt4KpJCK4gJoAylKfRMdVB0Blo1kW0ibGnYrkK1db4A2kHXkbyvFGPN6PefgoKYF2/oGY4A1PR/JtjyV1YxbgWz5GnC0gaxg+dtK1Q6qjtA2z0SyjVYUn1L3W4FsuqL4FLSBLEPf3iZpmr8grKmJC6qO0B6LIlmJjRg1vRVI6cNGG8gCBd5Wql5QdYTGezySVaQ5vl63AqlOKok2kDx1fU2pSeG3laodVB2hqR6JZKVUFJ9Sl1uBlFVRfAraQOi2UkmaVvCCsHKh6gjN8+rhU2Y6NbPZ5Gc/q8I/e7fbtff3++/f69vbsseySNVihv369ejzz6OzM3V1VfZYShMkCQWwJl3XsT6+s+MdH3vHx1nVkXc6iGRQa8X/zWiAijCzGUpny8CGgrpDJAMAgHr7pOwBAAAArAWRDCrNGKOUKnsUAFBp33Z85A8Ztm1zzksaEqxLSslyX6LWWmttWZZlWWUP7cWUUlEUjUajsgfSNE3aSQC+zcm01lJKKWUURUEQlDgmWIcxRggRRVF2XiKl7Pf7dOSqHcuyXNctexRN07CdBODbSGbbdhiGlmV5nmeMKXFMsA7O+XA45Jzns2rf9z3PK3FUK6MdsuxRNE3DdhKA+d7FIAjCMBRCMMZW262FEEmShGGYJEl2xuf7vuM49HOSJEmSGGM455Zl+b6f/XPSWg8GA8bYaDSitzHGHMfJRqKUonyRPssYC8MwW7VSSgihtaZ/omEY0pL7/T6VTWiBlmVlL73UU6sodi3rM8b0+/3hcKi1FkLEcVzKMFaWJAnthMR1XdoHaA+hs67sbfQwKyQopSzLsm2bvogwDG3bfna3ZM/tXc1T950E4Fv3OR8/fvR9n372PO9+VaPRyHGc0WhED+/u7jzPm0wm9/f3w+Ewv+TJZOK67tzHHcfxfT8Mw2xp2Uu2bd/d3WWfdRwnvyjHceZezR5yzuM4zl5a7bdbvIqi1lIUGoDruvkR1tFoNMp2S5L/3vMPHceh3YzSi/tv9r1sOU/tlmTB3tVUjdlJoOV+r3cxSZL8nITWeuUAadt2drZLuUsURbSK/KkfvY3OnfMoV6Of82fNlmVlpXzbtvOLEkJQwSR71XXdbMm2bWeJnW3bq/1qi1dR1FqKQgOwLKs9zTu2bdu2TT/Tnsw5z38LT+2WZMHe1VQt3EmgkX7vblVCiOxfsjEmSZIsnLxUdkAhnHOae5NSnpyczL354ZT+U4XNOI6FEEEQUH0v/7a5oj8tNn+cWt8WVlGgIAhc11VKKaXmvo7Wemq3JAv2rqbCTgLN8G0kU0p5npcPXScnJytHMinlXHpHEw+O4wyHw9WWSQedbEjGmJOTk8lkkn81TylV7JnmFlZRFDojoeadwWAQx3E1x7llT+2W7Lm9q5Gwk0BjfFtdFELMnYQ6jrNyV67WOktWaJaejhEPkxi6kGWZZVK3RfZw7l+d4zj5iweMMVEUFdvAvYVVFIIGRt0KnHPf96mPppEelqYXeGq3ZM/tXc3Tqp0EGu/rnCwIArqghKZ/s2eEEKv15vq+r7Wmdj5jTBzHdPJLkYyeZ
9+cCGcdYlEUSSmVUlkFkhrPssUqpQaDQXYtZ35grutSLxYtWSlFFxXQk9SWRisKgiD/cHlPrYJ90wZWyFrWpJTq9/uc8yiK6DBNReOjo6PGZBi2bdNuYIyxLCvrOcyaFRljQRAMh0N6NTtLe2q3JAv2roZpw04CrbKROwhTJpfv1HjqPSvcTCS7F8mjy1/8aiG2sArIowv2504IlFLGmBftP8vslvhyAeqotEgGsCTqSli/HwG7JUBTPfKXNtckhKD5Brq4CrdxgxXkr4zO99avDLslQIPh75MBAEC94a+6ALSamc3MbFb2KADWUnx1ESpCXV//53/4B8bYf/ubv7Ffvy57OFA5+vY2Ojsbpunvfve7//SXf+n1es7BQdmDAlgFqovNlKTp4MOHP+WcMfa/jQld1zs+LntQUBXy8lKMx0ma8k7n7V/91XdfvfpyMtE3N9benn966na7fGen7DECvAAiWQOJ8/PBF1/Y+/ujzz9njPXfv5cXF6Hr+qenZQ8NSibOz8V4rK6urL097/jY6/WyoJWkqTg/lxcXvNPxej3v+Nja3S13tABLQiRrmuDLL6OvvvJ6vdB1s4PU4MMHMR57vV789m25w4NSmNksOjtL0lTf3DiHh263+1SOTiXHJE3NdOp2uyg5Qi0gkjXKgogVnZ0FSeIcHg7fvUPtqD0oMonxmDHm9Xput7tMZDKzmRiPxfk5So5QC4hkDWFmM6oixj/5yVOn2+L8PEgSa3d3+O4dCkeNJy8vo7OzNauFKDlCLSCSNYGZzU5+/nN1dbUgjBF1fX3y858zxkaff46GxqYS5+fR2VmB6RRKjlBxiGS1p66v++/fm+l0yeCkb2/779/r21s0NDaMvr2lhg4znTqHh97xsdvtFrh8lByhshDJ6o1yLN7pDN+9Wz7HykqRaGhsBnV9LcbjbDLM6/U2mnCj5AhVg0hWY/lu+xVOjdHQ2ABzQcU/Pd1akoSSI1QHIlldURhbsxcRDY01ZWazJE3zk2FlFYpRcoQqQCSrpQLTKTQ01svcZJh/elqRTAglRygRIlnNmNksSBIxHvuffRa+eVPIMtHQWAvy8jJJUzEe807H7Xb909MKhgqUHKEUiGR1sny3/UuhobHKxPl5kqby4sLa26MYVvHyHUqOsGWIZLWRBZv47dtiu6sJGhqrJh8P7P19KtmVPaiXQckRtgORrB62VgBEQ2MVNKxG17BfByoIkawG6E+0bK0pAw2NJcr/vZWGJTEoOcLmIJJV3ZoXja28UjQ0btmCv7fSMCg5QuEQySqN0qO5P9GyHVk9c/K3f4sDzUZFZ2eUqSz+eysNM1dyDF0XuxmsDJGs0tT1dZKmRXXbvxQdazBhtmni/FxdXS3591YahkqO8vJy9NOflj0WqDFEMgAAqLdPyh4AAADAWhDJAACg3hDJAACg3l6VPQDYCK21EIIxFoZh2WMBeISUkjFm2zbnnDGmtdZaW5ZlWVbZQ4P6QU7WTJZlhWGolCp7IACPMMYIIaIoynZRKWW/36fwBvBSiGQAsG2c8+FwyDmnhIz4vu95XomjgvpCdbGigiDQWhtjOOeUYK22HCGElNIYQwvJDhxKKSGE1pqOJvmXtNaDwYAxNhqNkiRJkoQx5jhOdpRRSgVBwBijsbHfr2HSR7KR+76fP1o13qMbNkkSKvbatk3bqt/vG2MYY3Ec0zZcsN2e/UZqKo7jfr8/HA611kqpOI7LHhHU1j1U0t3dXfZzGIZxHK+wEM559sHJZOJ5Xvaz4zjZKuYeEsdxfN8Pw5Aejkaj7CXbtuc+m700HA6ztdCrruuuMPKaWrBhP378mN8y9/f3+S2zzHZb8I3UF+2WruvO7X4AL4LqYkVxzo0xUkqttW3bWusVFmLbdnbanl+IEIJqO9lLruvSmX4eZQb0s+M4+eez+QzbtvOn0kmS5B/atu04zsMlN9WCDWtZljGG8jDGmJTStu3sg0tut6e+kfqi3dKyrFYl7lA4VBeryBgzGAyyWQSlVP6ot765+QnGmOu6URTNve2p4lUcx0IIqn9yzvNvk1KenJzMvd913SJGXQOLN6zneVEUUXUxSZJ8SXbJ7Vb3cuJDQRC4rquUKnwnh1ZBJKuiwWDg+372D1tKWWxPV5YZZJRSS54U02ezzMAYc3JyMplM6KHjOMPhsLiR1sziDes4DkW1bBYte1s7txvt1Z7n0albHMfIzGA1qC5WEed8rvRU7PIdx6GWDWKMiaJoycyJOhqyh8/mdnSd0HrjrY1nN6zrutR9PpddtXC70cahxJRz7vs+dbUArAB3EK6iJEmklFlp0XEcIYTrust3MBpj+v2+UsrzPPrUycmJUsp1XZqPoZ5G6ppTSoVhmMXOKIqklPlqT/5VKaUQIksptNZzTXR0kRAtmXKUfGNk4y3YsOTo6MhxnIdf5YLttvgbqSOlVL/f55y7rkv5/WAwSJLEsqwsvwdYHiJZRRljqDC1uQMWrYKt1Dvw7Gfn7uDQKuts2DZvN4CVIZIBAEC9YZ4MAADqDZEMAADqDV34AFAmeXn5f3/zm+Mf/Yjv7JQ9FqgrRDIAKIG+vU3SVJyf65ubne9+d/Zv/+b1es7Bgdvtlj00qB90fADAViVpSv8xxtxu1+127f19cX6epKm+ubH29txu1zs+tnZ3yx4p1AYiGQBsg769FefnYjw206m1t+cdH7vd7ly4StJUXl6K8Zgx5hweUkgrZ7hQK4hkALBBZjZL0lSMx+rqinc6lIQ5BweLPyLG4yRNs494vZ79+vXWxgy1g0gGABshLy+pimimU3t/3+v13G73RW0d6vqaQhotgUIaGkPgIUQyAChSvpWDdzper7f+pBfNosmLC8YYGkPgIUQyACjGw1aOYuMNzbShMQQeQiQDgLUs08pRLDSGwBxEMgBYxQqtHIUPAI0hQBDJAOBl1m/lKBYaQwCRDACWsolWjmKhMaS1EMkA4BmbbuUoFhpDWgiRDAAet/1WjmKhMaQ9EMkA4PeU3spRLDSGtAEiGQB8rWqtHMVCY0iDIZIBtF31WzmKhcaQ5kEkA2iverVyFAuNIU2CSAbQOnVv5SgWGkMaAJEMoF0GHz6I8bgBrRzFmmsMGX3+ObpCagSRDKBdkjQ1s1mTWjmKpa6vkzQN37wpeyDwAohkAABQb5+UPQAAAIC1IJIBAEC9IZIBAEC9vSp7AAAAlSClZIzZts05Z4xprbXWlmVZllX20OAZyMkAAJgxRggRRZFSip6RUvb7fQpvUHGIZAAAjHM+HA4555SQEd/3Pc8rcVSwJHThA7RFEARaa2MM59yyrDAMyx5R5Rhj+v3+cDjUWgsh4jgue0SwFEQygLagGEY/R1HEOUfC8ZBSSghhjInjOJ+fQZWhugjQFpxzY4yUUmtt27bWuuwRVRFtGcuyEMZqBL2LAK1gjBkMBtk8kFLKtu2yB1VFQRC4rquUwiaqEUQygFYYDAa+72eHZikluvIeom3ieR4FfhQY6wLVRYBW4JznM4wkSUocTDUZY6IookYYzrnv+4PBoOxBwVIQyQBawXGc
wWAQBEEQBCcnJ5ZlJUkSBEHZ46oKpdTR0REFM3pGCCGlPDo6KndgsAz0LgK0hTFGKTWXnAE0ACIZAADUG6qLAABQb4hkAABQb+jCBwD4lrq+TtKUMeZ2u/br12UPB5aCeTIAAGZmMzEeJ2mqrq7++PvfZ4z9n3/9V2tvzzs+drtda3e37AHCIohkANBqSZrSf4wxt9t1Dg+942PGmDg/lxcXD5+HCkIkA4A20re34vxcjMdmOl2Qe+nb2yRNxfm5vrnhnY7b7Xq9HqqOVYNIBgAtYmazJE3FeKyurigyud2uc3Dw7AfV9TWVHxdHPigFIhkAtIK8vKQYxhhzDg8phvGdnZcuB1XHCkIkA4Amoypikqb65sba23O7Xe/4eP1cClXHSkEkA4AGoipikqby4oIx5vV6zsGB2+0WviJUHasAkQwAGiUfWuz9fa/XW62K+FKoOpYIkQwAmmCu3EcBbPvlPlQdS4FIBgD1NpcM0X9lDwpVx61CJAOAWqLbSuUvCPN6vS1UEV8KVcctQCQDgDrJ31aqRuU7VB03CpEMAOohSVN5eUkXhNU3uUHVcRMQyQCg0jZ0QVjpUHUsECIZAFTRyreVqhdUHQuBSAYA1UK3ldr+BWHlQtVxHYhkAFAJTa0ivhSqjitAJAOAklEA2/RtpeoFVccXQSQDgDIFX34ZffWVvb9PB+vGVxFfKl91nPzsZwhmj0IkA4Ay6dtbM5vhAP2sJE2Rqj4FkQwAAOrtk7IHAAAAsBZEMgAAqDdEMgB4npQyCAKl1CYWHgRBv9/fxJIbQEoppTTG0EOttZRSa13uqKoGkQwAnuc4juM42fG0WGEYbmjJdWeMEUJEUZSdQ0gp+/2+lLLcgVUNIhkAQEVxzofDIeecc5496fu+53kljqqCXpU9AACok6zGyDn3fd+2bXpeaz0YDBhjo9EoSZIkSRhjjuNkx1yllBBCa03H5TAM80fnTL/fN8bQEZyeoaXRk5Zl+b6ffbDf71uWZVkWrc6yrKcWW2txHPf7/eFwqLVWSsVxXPaIquceAGAJo9HIcZzRaEQP7+7uHMf5+PFj/j2O4/i+H4Zh9hH6YTKZOI5zd3f36EP6IC3Tdd3JZJI9PxwOPc/LHk4mE9d182vknMdxnL2af3OT0K/mum5+o0EG1UUAWJZt247j0M8UQqIomnsPpU30c/ZmIQRVybLluK5LiVRGKdXv9+M4zvI8xliSJPkUhAaQ/6Bt21naZ9t2U1sh6FezLKt5GWchUF0EgGXlYwxjzLKsh5Hj0SmcuZkexpjrunNRUAhh2/bc26SUJycnc0tzXfelI6+7IAhc11VKKaXmvgVgiGQAsDylVD6K0NzVMh982JqolJr7bBzHSqmTk5N89uY4TjZh1lrUqeh5njFmMBjEcYzMbA6qiwCwrPyFTYyxIAiWbKJzHCcIguyhMSaKooeplW3bYRhS0wc98zB101o3tYT4KNpWYRiyb7psqLMG8nDfRQB4XhRFSZL4vp8kiWVZjDGtdb41MYoiKWW+9hWGhwRTNgAABc1JREFUYb4OJoSQUtJnlVL5V+kCqbu7O1oslRNd16XDN11NRR+kCEcNisaYfr+vlPI8j94ZBIEQInvYADR3yDl3XZdmHweDAX0Fk8mk7NFVCCIZALyMUsoY83BO61nGGOrgzzpBlkcVthVWCm2ASAYAAPWGeTIAAKg3RDIAAKg3RDIAAKg3XE8GAFB10dnZ//jnf/7j73/fPz21dnfLHk7loOMDAKC6xPl5dHamb27+w5/92S/u7sx06n/2mX96ynd2yh5ahSCSAQBUkby8jM7O5MWFc3jon546BwdmNovOzqKvvuKdjtfrIZ5lEMkAAKpF394OPnyQFxfW3l745o3b7c69Gp2difGYdzqh63rHx2WNszoQyQAAqmL5KKVvb4MkSdLU2tvzT09bHs8QyQAAykeVQzEeM8aWrxxmFUh7fz90XefgYOMDrSREMgCAkkVnZ9HZmZlOvV4vdN2Xzn7Jy8vBhw/65iabUdvQOCsLkQwAoDRZa6Lb7Yauu06HfbYoSula1ayPSAYAUIKHrYmFLDaf3rUnniGSAQBs1eLWxPWtNuVWa4hkAABbss0GejObBUlC6/J6vfDNm82tq3SIZAAAG1dWnpTFzmY36yOSAQBs1pqtietT19dBkmyunlk6RDIAgE0psDVxfRvqMakCRDIAgOJVNmxkwdU5PIzfvm1GcyMiGQBAkTbdmliIhl18hkgGAFCMet3bN2tCacBfikEkAwBYV30v4WrGX4pBJAMAWEvprYnrq1c2+RAiGQDAWk7+/u/5zk7prYnro78UY2az0U9/WvZYXgaRDAAA6u2TsgcAAACwFkQyAACoN0QyAIDaU0oZY8oeRWkQyQAAHiGEGAwGSqmyB7KUIAiKHaoQIgiCumwBRDIAgEd4nsc5r0ui47quZVkFLtDzvDAM67IFXpU9AAAAWJfneWUPoUyIZAAAX0uSJEkSYwzn3HXduVeVUkIIrTXnnHNOKUuSJEIIxthwOOScM8aklFEUMcbiOLYsq9/vW5ZlWVaSJIwxy7Log8uPKggCrTWNij4+9yoVAMMwtG07/5LWejAYMMZGoxH9aowxx3E8zxNCJEkShmGSJFn90Pd9x3HWHNKzv29+I1uW5fv+i7bG4+4BAOD+fjgcep6XPYzj2Lbt0WhEDyeTieM4d3d3Dx/mP5UtKo7j7CHnPHs4mUwevn+xbKX39/dhGOaXnPF9PxvqHMdxfN8Pw5AeZm8bjUaO42QP7+7uPM+bTCbLLHbxkBb8vnMbeTKZuK776LBfBPNkAACMMZYkSRzH2UOaJ8seCiGyrIsxZtu267pZ2iGlZIz1+33KxpRS+eTGtu2s+mfbttb6RQOjySoppdZ6hY/TCH3fp5/nBpY9pCyTxr/mkBb8vnMbmQZAm3EdqC4CADDG2MPWhnyxjiqK+Vdd16XjvuM4UkrLsjjnVKnTWhfVf2GMGQwG2dqVUnMlxGU8NYs2t6gl+zvWGZKU8uTkZO7Jh4Xcl0IkAwBgjLGHszX5w/rDQ7xSij5i23YW0pRSNJFW1KgGg4Hv+1mokFJS/lcIKWU+iiwZgNcZkuM4w+FwhaEuhuoiAABjjFmWRb0bRCmVr3o5jhMEQfbQGBNFURYGOOcUFVzXDYKgwEjGOc9nPOsX4vK01lk5kdpDsiLkhoaUJbL5MaxQL52DOwgDAHwtCALqqcuekVL6vk8RSwhBVUTGmFIq3ytIH6QZoB/+8IdxHNP8kzGm3+8rpejyLHqnECJ7+KwkSaSUWR3PcRwhhOu69PGsczLrqGTf9EwyxqIoklLmq3/5MVMipbWmX8oY4/s+fXDxYhcMaZnfN4oipRQtijLdlzZzPoRIBgDwLUoRqI/84avGGJoJW7JbvRC00rlMaH0UyVb7RdYfEq3dtu1C8ldEMgCANlonklUNOj4AAFpHCEH1Q7r2q9g7XW0fcjIAAKg39C4CAEC9IZIBAEC9IZIBAEC9IZIBAEC9/X+YuVsOLTeCjAAAAABJRU5ErkJggg==", 
"text/plain": [ "Tree('X', [Tree('X', [Tree('X', ['A', 'person']), Tree('X', ['on', Tree('X', ['a', 'horse'])])]), Tree('X', [Tree('X', ['jumps', Tree('X', ['over', Tree('X', ['a', Tree('X', ['broken', Tree('X', ['down', 'airplane'])])])])]), '.'])])" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "snli_ex.sentence1_binary_parse" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's the full parse tree with syntactic categories:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAmcAAAEACAIAAAB5yUY5AAAACXBIWXMAAA3XAAAN1wFCKJt4AAAAHXRFWHRTb2Z0d2FyZQBHUEwgR2hvc3RzY3JpcHQgOS4xNnO9PXQAACAASURBVHic7d0/jONYnh/wN7vjw7rqcHvcvSqcN3DJLOAAVyWG2XJwSXdA4YCeu8RoKnTPBKaAzgxMD2k46O6M2h7AgQ8FiMl0ZYbYcNYFnPWC7vBc9bItZfVcAhysS7v1bm5RwsG3N+XgN/OGoz8U9Yd/RH0/QUMSVdIj9cgv3x+yP7q/v2cAAACQwo+KLgAAAMDGQGoCAACkhdQEAABIC6kJAACQFlITAAAgrY+LLgAArA3nXD82TdM0zcn3KKWEEIwx27ZTLhVCKKXG3jnr8wGq7SNceQJQDVLKMAyjKHIch14RQgRBYFmWfk8YhkII27aVUpxzz/PSLG21WoZh0AeapkmPGWNBEOS3egAlcQ8AFWLbtn58e3sbf9rtdl3XHVt6dXU1d6nnefSi53m9Xk8/zmwlAMoL45oAlWUYRrwpGUVRp9OJL+10OmEYzl3quu7kh099EaDykJoAlcU5l1LSY6WU7lnVTNOkAcvkpVPHLzGoCdsJs4EAKkUI0Wg0GGNSSsdxut2ufn0yFxljlIvJSwFAQ1sToFIsy+r1er1eb2yqDs3xmXw/hWXyUgDQkJoA1eQ4jmVZvu/rV5RSY9HIOde5mLwUAAhSE6CyHMeRUuqhTdd1x0K03W57npdmKQAQXK8JUBFSylarJYSwLEtfpimlbDabnufRRZxhGHLOaSKPlHLyes1ZS+lKUCmlYRiGYYxdBgqwPZCaAFuHul5nxV7yUoAth9QEAABIC+OaAAAAaSE1AQAA0kJqAgAApIXUBNh2H332mf/2bdGlANgMSE0AAIC0kJoA286q1YouAsDGQGoCbDtjd7foIgBsDKQmAABAWkhNAGDy5qboIgBsBqQmADA1GhVdBIDNgNQEAABIC6kJAACQFlITYNsZOztFFwFgYyA1Abadub9fdBEANgZSEwAAIC2kJgAwdXdXdBEANgNSEwCYuL4uuggAmwGpCQAAkBZSEwAAIC2kJgAAQFpITYBtZx0cFF0EgI2B1ATYdub+vn18XHQpADbDR/f390WXAQAAYDOgrQkAAJAWUhMAACAtpCYAAEBaHxddAAAojJRSSskYsyyLMWYYRtElAig7zAYC2FJhGHLOHcdRSnHOpZQXFxdFFwqg7JCaAFuq0Wj0ej16rJR68ODB1dVVsUUCKD+MawJsIymlaZr6qWEY3W63wPIAbAqkJsA2oshst9s0rsm+G9oEgGTooQXYXpxzGtE0DMN1XQQnwFxITQBgSqlms9ntdjGNFiAZemgBtlEURbpvljFmGIZlWUKIAosEsBGQmgDbSAgRRVH8FSklemgB5sJdDgC2lFKq1WpRlyzn3HVddM8CzIVxTYDtpZSiXlnbtosuC8BmQGoCAACkhXFNAACAtJCaAAAAaSE1AQAA0sIcWoCKU6ORuL6mx7zfpwfy5kaNRoyx3/zud7/++7//9d/9nX18TIvMvT1jd5ce20dH+kVzby/PYgOUE2YDAWwqORzK4ZAxpu7uxGBAL+qAlMOhvLmZ+odWrUa5aO7t/c9f/ermd7/7D3/+5z/d2dFRqu7u9OdMQr7CNkNqApSOGAzU3R2L5SKLxaG4vqalk3SeWbUaPYhnmA42zX/7tv3uXffZM6den1uYeDYjX2FrITUB8qM7SMX1tU6d79uLM+LH2N3VKfj9g4MD3V5cLnt4v994/dr75JPgyZMl/nxSdvlq7O5aBwdrKSTAipCaAKtKHjhM6Ck19/cp8IydHXN/n17UUWHVasbOTkZllsPhg1evzL29ixcvMvqKBKvkq+5ejm80fQ6BfIWsITUBZkoeOEzoKY0PHOpDvG4mTvaU5u/Bq1dyOLz65S+zC+a10PnKYmckuoHOGOOXl7P+FvkKWUBqwjb6vkU4beBw1oF4ak+p7iDdoKMwDWf2nj8vQ36vS/ynRL5CdpCaUB3xntKFBg51Tylb68BhOYUfPrTevFnjcObGWXu+Jk+5gopBasIGmHqYSzNwOPUYl8/AYTmJwaDx+rVVq/U+/7zosmyAVfJ16qkY8rUCkJpQpKmzQha6xKK0A4clpEajxuvXajS6ePFi204Xsja1n4Olq8zI182C1IRMTF5iwbZp4LCcWqen4fv3Fy9fYjMWCPm66ZCasICpl1gsNHA4dbZFxQYOy4mGMwPH8R4/LroskMra8zXeH7OFwxPrgtQExmZcYrHQwOHUi9OxZ5aEGAwevHzpPnrUefq06LLA+s3N14RdeGoHD/I1AVKz4tZ4bzYMHG4oNRo9ePXK2NnpPX+Owx8kX3aFfJ0LqbmpEu7NxhYcOMR1adXWPDnh/X7v+XP8uLAQ5OtUSM1ySb4326IDh7hNNtANDTqffuo+fFh0WaCyls5XtoG3+Edq5mQt/6nT5CUWG33KBlmj+7NjOBPKY2q+btZ/oYPUXFVu/6kTwELUaHT4xRdF3Z8dYEWl/S/qkJpJ6LSoDP+pE8CiGl9+Ka6vL168QJWDassoX2cdrpGaST767DP9uMD/1AlgUWo0ap6cuA8fJvx30wDbZqF8nXWvZqRmEt7vo6cUAGDbiMHA2NlBWxMAAGAlPyq6AAAAABsDqQkAAJDWx0UXoEhSSimlaZqmadIrnHPGmGEYlmXpp8SyLMMwCiknwFRUgVmsco69ggoMsHZbnZpCCM65EKLX6xmGIaWkp6ZpdjodehpFkeM4jDHOuZTS8zwKVIDCcc7DMLRtmzFG/1ItFUJ4nmeaJiowwNpt+2wgzjmdjwdBQK/4vm/bNh2DGGONRqPX69FjpVSz2dRPAQrn+77rurqzhDEWhqFpmqjAABnBuCazbVspRf1ayXTPLUBJuK4
bhmH8Fc65jswxqMAAq0NqMsZYEAS+7899m1JKCJFDeQBSMk0zXieFEAm5iAoMsLqtHtfU6BxcjwDFSSl1oAohdEcuQEk4jqOrbhiGY1UUFRhgvZCa3/I8r9lsTnZtGYZBLxqGgSMOlJDjOL7vO46jlDIMY2yiLCowwHohNb/num673R57UR90AMpJX3MydUQTFRhgvTCu+T2aFqSUKrogAItxHCcMw4R5QACQnpTy8PCw0WhMXbrVqUlDPr7vN5tNeiUIgiiK9NJGoyGEaDQa9KC4kgIksW2bcz42DwgVGGA5dFXFrF1m26/XBAAAGCOEMAwjfiW0htQEAABIa6t7aAEAABaC1AQAAEgLqTmTGo3UaFR0KQCWgdoLkBFcrzlODofR+Xl0fv5/lPq/X3/t1Ov28bFTrxs7O0UXDWAOqr283+eXl3/4k5/s/9EfOfW6fXRkHx0VXTSATSKHw/DDh+DJk8lFmA30LR2W4vqaMebU6//m4ODHH30Ufvggb27oFfv42H34sOCCAkwQg0F0fs4vL6n22sfH9tHRP33zzeC3v43Oz9XdnbG769TrVq2G8z+ANHi/33j9+v6rryYXbXtqToblZMuSDknR+bm8uaGjj3105NTrhRUagDE1GvHLS97v61y0j46m9ovwfp/eRud/lKlOvW7u7RVUdoCyQ2qOU6MRBSG/vGQzwnIS4hMKJ4dD6oCNzs8ZY+b+vlOvWwcHaephvP9W/y36bwEmITW/RWGpjzjUYbXESTfiE3I21gdLVdc+PrYODpb4NNoRxPU1+m8Bptr21FxXWE5CfEKmvm0a9vvxwXX76GiNnavovwWYtKWpmV1YTkJ8wrroesv7fXV3Z+7vUydq1tUJ/bcA2talJu381Ptk7u+7Dx/mdu6M+ITliMGAzvB0HyyNtS/XB7sK9N8CbEtqjoUltSzzP+gQxCekMdZBqvOpJB2k6L+F7VTx1CxVWE6isunTdvfRo1IVD/I3tTFH142UtjGH/lvYKtVMzXgalTAsJ5U83SFr1Qge9N/CNqhUak72fLqPHm1W9iA+t8rUG/dUo5MT/bdQVVVIzUoOEyI+qyr9jXuqoRrNaABtg1OzkmE5CfFZDavcuKca0H8L1bB5qUn3m698WE5CfG6i9d64pzLQfwuba2NSc+q91LfzvxlBfJZfDjfuqQb038LGKXtqpvmPR7YWHW7C9+/Zd4cb9+FDHJeLUtSNe6oB/bewKUqamgjL9PK8OyBMKs+NeyoD/bdQZqVLTd7vt8/OFvpfuoBMxqf76NF2dmLnw3/7trQ37qmGqf23wZMnRZcLtlrpUlMMBq03b9BaWoWOT6tW8x4/Lro4ldU6PWWMlfzGPdWg+28ZY52nT4suDmw1NRqJ6+upQ++lGNcEAADYCD8qugAAAAAbA6kJAACQFlITAAAgrY8z+lwppZTSNE3TNOkVzjljzDAMy7L0U2JZlmEYGZWkGpI3FzbmQqhysti2ir8ihNDv1NU1Lr61Z70H0kiot6jSUFpZzQaKoohzLoTo9XqGYUgpwzAUQpim2el06GkURY7j0PullJ7n4egzVfLmwsZcVBiGYRja36FXpJRCiKdPn/7qV7+Kb0whhOd59DZyeHiolzLGpJTdbjfnVaiAhHqLKg2ldp+ZXq/neZ7nefoVz/N6vZ5+atu2fnx7ext/CpOSNxc25kI8z7u6uoq/0ul0dOUc23qO48Sfxpe6rntxcZFZMasvod6iSkM5ZTuuadu2Uor6vpKhm2shyZsLG3Mu13XDMIy/wjmPNyjjxroH9bYNw9A0TWzqdUmot6jSUB6ZzwYKgsD3/blvU0rFx5MgWfLmwsacyzTN+CYSQkw9KCul2u32WGoGQcAYox5dz/OyLur2SKi3qNJQHlnNBtLoJDE+RKFJKXWgCiHoYASzJG8ubMxFOY6jq2UYhvEtFt+YNKlt8s9brRaGM1eXUG9RpaGcMk9Nxpjnec1mc7L7yzAMetEwDOwScyVvLmzMRTmO4/u+4zhKKcMw4g1K0zTj2zAMQ9/346/4vu95np5/OzVWIY2EeosqDeWUR2oyxlzXbbfbYy/qvQLSSN5c2JiL0pmXMKJJXNdtNBr6KV0Uof9krJ0KC0mot6jSUE453eWApgUppfL5OoA0HMcJw3BuanLO9ainUgoxCVBtUsrDw8P4uXJcVtdrSimbzSZjzDRNGv5RSh0eHna7Xdu2pZStVktPwQiCABPkEiRvLmzMVTx48MBxHD2pZ2xjEuqwpbYp57zVasW7ZKWUV1dXORe7AhLqLao0FEsI8eDBA8Mwbm9vJ5fi/zwBAAD4ASGEYRhTpywgNQEAANLC3dsBAADSQmoCAACkldOVJ3FqNBr89rcHP/+5sbOT/7dXjBgMGGPWwUHRBQFYD97v//rrr//1L36BWg3llHdq8n6/dXqq7u6M3d3O06f20VHOBagMNRq1z87a794xxuzj487Tp+beXtGFqhr/7Vtzb899+LDoglScHA6j83Nxfc37fXV396c//emvv/7a3N+3j47soyP7+Bhn2FAeuc4G8t++bb97Z9Vq/+kv/uK//s3fiOtr75NPgidPcitAZUTn53Ty4X3yiXVwoB97jx/j+LJGjS+/tGo1VNEsqNGIX16KwSA6P5c3N4wxq1azj4/toyOrVuOXl7zf5/1+fJFTr6MBCoXLKTXFYNB682YsJnWIdj79FDtDSnI4bJ2e8svLePtStzvN/f3gyROnXi+6mBWB1Fw7MRh8m4iXl4wxalBatZpTr0894dPN0Oj8nDFm7O469XrC+wGylkdqUjqa+/uTXbK6w9Z7/Nh7/Djrkmw6/+3b8P17xljgOJPdhmIw8KOIX1469XrgOOiwXR1Scy3UaKSTT93dMcaoTWkfHy90uhydn1PoiutrFmubYqAH8pRtauqGUULnoRqNWm/eROfnGJxLwPt9P4rE9bX76FHgOAln2e2zs/bZGWPMffQIh/sVITVXQV2sOuTM/X3qYl29L0QOh/TJNA5q7O5SBttHRziAQNYyTM3wwwc/ihhjnadP5+4nNFCX8s1bJd77mnL+lBqN/CgK37+3arXAcXAmvjSk5qLG5vXkkGfZZTPAVJmkpm4+OvV659NPUw4/qNGoeXJCHYzp/6ra4rN+Fj12p2+ewixIzTQS5vXkecY22Q9MI6CL9gMDJFt/avJ+v3lywhhbbqiSOhhxXcrUWT9L0EOhGDleAlIzwaLzenIu27dN3ljZcBELrMU6U1P3JVq1WvfZs6UP9FMn3G6V5Fk/i5LDoR9FNHIcOA7Ou9NDao5Z17ye3FA7OH4Ri318TNFezgJD+a0tNdceddt5XUp23arR+bn/9q28ucFlnekhNUk1xg71/FtcxAKrWE9q6mtLus+erTHh6LoUeXMTOE7lexeXmPWz9FdQB/hmHfIKsc2pmf+8njzRRSzFDsTChlo1NeVw2Dw5oSZmFi0YPR202telrDLrZ1HrGjHdBtuWmiWZ15MnXMQCi1opNfWlgd1nzzLdqfR1KWsZ5yuVojKMrgvKJ6c315akZpnn9e
RprCNaz79FrwzELZmay11bsor8vzEH6531s6gc+oQ3XYVTc+Pm9eSJGqCTF7E49ToaoLBMahbY8sutdZu18lxMqadx4T58k6qXmtWY15Mn6q+ObzFcxLLlFktN3TopcEgs65HUrJWzhadPR3BZZ1w1UrPa83pyM/UiFrTOt9ACqSkGg+bJCV26UPhxZEOvS8lz1s+idB847sOnbW5qbuG8njyNjQTjIpatkjY1S5hSpUrxuTZl5qq+2qfwruMy2LjUxLye/E29iAV3UaiwVKnZOj0N378vYY+o7u0seXDK4fDBq1dsc+YA62lKt3/910WXpUgblJp0J0vM6ynQZE84roqupFSpKQYDdXdX2o6d6Pzc3N8v+QGifXbmPnpUqnOOZHI4pClCRRcEUpHDYfjhA+b1lAQNf6LFWUl5/K/UAAAA1fCjogsAAACwMZCaAAAAaSE1ofqUUkKIoksBAFXwcfwJ55weGIZhWdbYW4UQSqmxF03TNE0zu/LFSSmllPFvpAIbhmEYxqxFkyuSKb0NiS5AQuFzLmHc3FLFV8eyLMMwCinnioQQ7Xa71+sVXZD12Kw6Vm1bsgdB3PepKaWMoogxRr9rGIaMsSAI9M8chiE9FkKYpqlfD4Ign7IKITjnQoher0cxSU9N07Rte9aiTqeTT/EYY0op2kn0JtI7T0Lh8yzhmORS0dMoihzHYYxxzqWUnudt3CHYNE1ahQrYuDpWbVuyB8EP3Mf0er1er6efXlxcuK6rn3qepx/ot+kX89Hr9TzPi3+pLkzCovxN/epSlVCbWyrbtvXj29vb+FMo0AbVsWrDHrRtksY1qT9BDwi5rjv5nqkvZsq2baWUlHKhRSVRzhKmL9Vm9fVFUdSIoe4TxpiUstFo+L4ffxs99b/TaDRarVYYhvS3tBfQUyEEvYGM9ZcKIej1ZrNJH5XvSpe0jlVbVfcgmGrObCDq+aTHU8cvcxvUjAuCYNbBKGFRSZSzhClLtVnTahzH6X3H8zx9UDNNs9fr6RWht9HTIAiEEPQKnTL2er1ut0uJ67qu53m+79u2TR/b7XajKIpvk1ar1e12aZHjOIVsrnLWsWqr5B4EU82fQzs5A6hwdL5Go7DpF5VEOUuYUCoppW6BNZvN3Iaxi2JZlm4N0HAUjVfF32DbNj02DCMIgna7rZeapqlPNC3LKmRAsZx1rNqwB22P+alZSGtyLs/zoiiamugJi0qinCWcVSrDMGzbtm2bWmDoXxrbAoZhxDcazQGh42Or1SrqVy5nHas27EFb4uPkxVEUeZ6XT1EW5bpu/Bw/5aKSKGcJp5aK9vlCylNOnPP4jFy68IAe0xFT7zJKqUajcXFxkX8hWVnrWLVhD9oGSW1NutSknG1N9t0I/NSz6YRFJVHOEpazVDlYqDNTSqmPjFLKVqulY1IIoeccse8u4irK1v6aBcI23wbf371dStlsNtl3uzrl5VgXfBiGURRJKekqsSAI8uxt0CU0TbPb7TLGlFKHh4fdbtc0zVmL8jzLU0pRMfQm0pfKJRS+wPPQ5FJRJAgh6FfO+edeL8455zxen33fV0pR/6ppmu12myaE09V1NLmDc06162c/+1kQBK7r0pglXYdnmqZSyvO8+BXudK5JO5GU0rbt9c4z37g6Vm3bsweBhv/zBLbCZGqy7253tdAdWyg1k0NIz5NEVgFUz5xxTYBqGBuMJBmd+GMcC6DCkJpQWVEU6VHG+PUkSwvDkD7QMIxOp1PaIX8AyA56aAEAANJCWxOmE4PB/5Ly35mmdXBQdFkANokajfjl5f8Q4n//5jf/6k/+5N9blrm/j/2oMlK1NT/67LPe8+f20VEOBVoC7/cbr1/ff/VV0QWZqfHll1atFjx5UnRB0mqfnflR9Ic/+ck/+/GPO0+fOvV60SWCOfy3b8X1de/zz4suyDZSo5G4vub9vry5EYOBvLmh1//g44//3+9/r99mHx9btZq5t2fu7ZX2cApzoa0JP6BGo9abN9H5ufvo0X/+y7/8j1991Tw58T75ZIMiHyAHvN8X19dyOKSwpBf/ba32zTffMMb+5c9//l/+6q/chw/9t2/b7979C8N4+Gd/9k/ffBOdn+s3W7UahSg9MHZ2ClsZWER1UpP3+zh9W5EYDFpv3ojr686nn7oPHzLGep9/Trs9v7zsPX+OHRu2lhgM+OWlHA7F9bW4vqYXrVrNqdfNvb0//elP/xvn/PLS3N/Xuw9jLHjyxKnXW2/e/Pe//VunXr948YIxRp9AHxW+f0/vpF5cc3/fPjqi9mgBKwkpVCc1YUXhhw9+FBm7uxcvX8bHYIInT6yDg9bp6eEXX3SfPcOpCWwJMRjobOOXl/QitQudet2q1WhfkMNh++wsfP/e2N2N56VmHRxcvHjRPjtrn50dfvEFDXnE9yPe78vhkL6o/e5d+907xpixu0vfZR0cUJM0r/WGOZCawBhjrdPT8P17p17vfPrpZIOSjhHNk5PG69forYWq+rYdORjEY5KagN4nn9hHR2OnjPG89D75xHv8OKEzxnv82KnXW6enzZOTsR1t7GNpZJSKEb5/r+7uvn3bd8OiFKXrXHNYBFJz28nhsHlyIq6vk+PQ3Nu7ePGidXrafvdO3txMDVeAzUKTXSmfxPU15ZOxu2sfHVFMzhpuXCgvNXNvr/f557rRGTjOZMOUMWYdHFgHB3oKHjVDafR01rAoeoDyVIXUNHZ3iy7CporOz1unp4yxlHOkO0+fWrWaH0UPXr3qPnuGE17YLFMnu1JfqPvoUZq+UDUatc/O2u/eLZSXcbrRSdPuOk+fJn/j2JxbvQrq7m7WsCjmFmWqCqmJY/dyaJqPVastNM3HffjQqtVab948ePly6igOQKlMnexqHx/TLB6aepPmcygvqct0ubzUqNFJ56wPXr3yHj/2Hj9O+bfGzs5YXzGtIKWpHhY19/d1Xy6GRderCqkJi1KjUfPkhF9eLjdIaR0c9J4/b715QxNuA8fBiS2UR/Jk1yUGBeN56T565D1+vJYQcup1+/i49eaNH0W835/b6JxlLETjk5j0sKieW4Rh0dVVJzXlcFh0ETaDGAwar18zxrrPni19+wJjZ6f77BndDIGuVMF+CEVJOdl1CRnlpUb7kW50uo8erT7VjoZF9dP4FKf4sKh9fEx9vxgWXRRSc7tQzlm12lpyznv82D4+brx+3Xj9GrcQgtwsOtl1OeGHD+2zM3lzk0VexlGj048iujB6veegFI163/zByG5sWPTbmxZhWDSF6qQmJIvf9GeNfarWwcHVL3/ZPDnBLYQgO0tPdl2Ozkv7+DifWW/Gzg6dd7ZOTx+8fJndrjRrWJRm6saHRWluEYZFJyE1t8LkTX/WyNjZwS2EYL1Wn+y6nHhedp4+zbnr0j46+vZ+CBk0OhO+FMOiC0FqVt+sm/6sF24hBCta12TX5UTn5/7bt0XlpWbs7ARPnthHR7rRucpk3SUkDItO3nLB2NnZwmHRiqSmVasVXYSSSr7pz3rhFkKwkLVPdl0O7/fbZ2f88rLYvIyLNzrpms6iSpUwL
Lq1d6KvSGriRgeTUt70Z71wCyFIkN1k1+XovBy75XoZUKOT7vxOp6E5NzpnlWrWsOj23Im+IqkJYxa96c964RZCQPKZ7LqEMudlHN35nSYNFNvonCVhWHTqnegr8B90Vyc1dQ8PLHfTn/XCLYS2U86TXZcgh8PW6Wn58zIu3ujMZ7RlaWPDonPvRL+J/0F3qtS0j49L3gVa8nHNPKdu836//e5dGYYV9S2E/CjKeioHlGHz8n6fbqCRw2TXpT149Yoxtil5qcX/u7H22Vnhe3dKae5Ebx8f9z7/vNBiLuaj+/v7ossAayaHw1Idp8RgsOl9MpBS+OFDyc+QeL9feHt3FXI4NHZ3N7f8cTS3SN3dbdYNUpCaAAAAaf2o6AIAAABsDKQmAABAWkmzgZRSQgh6bFmWYRi5FAlS4ZzTA8MwLMsaWyqEUEqNvWiapmmaGZVHSimljH8FlZCKl7w0oyJBRspW9xLoopKx41jy0pKowCqwah0fklJTSklFF0KYptnpdPIqFcwhpYyiiDFGe0gYhoyxIAj0DhOGIT2m306/HgRBRkUSQnDOhRC9Xs8wDKo8uuYkL82oSJCFEta9hKLSEUwpJaWk469t22mWlkQFVoFU6vhwP0+n0+l2u47jzH0n5KnX6/V6Pf304uLCdV391PM8/UC/Tb+YXZE8z4t/S/zbk5fCBilh3UtGdW+5pSVRmVWowPFh/vWaUsogCJRSYRi6rrtKQodhGEVREARRFOm+X8/z4idHURRFUaSUMgzDNE3P8/S5qpSy1Woxxnq9Hr2NMWbbti6VEML3fcYY/S2Lnd4KIcIwlFIahmEYRvzUuNlsUs8AfaBpmvGli8rti8ZQz4wQgs43p/5SK/58adi2HUURdbYsuhTiplakKIqobWdZFtXtZrNJ3aGdToe26ip70HJKUveg/CpyfEgO1aurKx3+8dPJpfV6Pdu29RnE7e2t67oXFxf0tNvtxr/l4uJiso1r27bnfFJvugAACYlJREFUeUEQ6A/UiyzLur291X9r23b88dgi/fT+/t4wjE6no5cuvaa5fdH9xPk+vaI3i5bn+RoV6fb2Vv9qY+eSCUshLqEiXV1djVWb+D6y4h6UUgnrXrLKNNSWW1oSlTk+zJlDG0WR4zj6qZRy9Zy2LEs3Lukkut1u66+L92LTO+mMOI7OoOlxvJ1qmqYeG7csS39UGIbdblefcVuW5ThO/GMty9LnwjQ0vdyq5fZFs0zOwsgfDeBP/mpplgJJqEimaSql9A/NOY9Pl1hxD1pFGeoelF8Fjg9zUjMMQ9/3G41Go9EQQqxlZcbmRBmGET8ENH6I+prGPmFWb0+n05FS+r7fbDZbrZb+Q+rjir/TcZy1J1aeXzRLSXo2PM+b+sOlWQpsXkVyXTd+ohnfHVbcg1ZRkrpHms3m0ktLogKrMMumHx+SxjWFEK7r6nNSxlij0Yg/XQ7nfKz9qvc327a73e5yH0tbWRdPKdVoNC4uLti0s2AhRBbzs3P7oqmiKFr911mX+JF90aWQXJFs26atp0c99dtW2YNWUaq6x+Y1fMt8RNYqsAoJNvr4kNTWnJz+Y9v22OVBS5BS6i1C0xP0/uY4ztjGout40nwszZ7QT8cOJTRLiCil2u12PLnXJbcvmkTT/ctzvm/bdrwjcaGlMLciOY4ThmG73R7bQ1fZg5ZWtrrHGKPZSfrp2Jl68tKSqMAqJNjo48PM+9D6vh9FkWmaruvS7+H7PudcKeV53tI9PBS6dDkOjdB4nhff39rtNl2mw747n9ITTdvtNl3Eo/t4gyDQjznntPfSm6WU8cmBYRjSNzLGhBD6D5VSzWaTWtU0KdH3fTpdWO7ysny+SEpJXTS0snTMGvscmrEcn4SZ6fXCukimaVJzRyl1eHjY7XZt205eml2pNtesiqQ9ePDAtu3JyrP0HpRSCeveJKVUq9Wibx87FMxdWhIVWIW4Kh0f8r57O6Xm3A1Bb1viPhf6fkaTX5GwaL1y+yKotlUq0tJ7UJVQO3vWRkheWhIVWIXqKWlqAgAAlFCq/5V6XcIwpKFHumyxVAMhAAAAc+H/1wQAAEgL/1MYAABAWkjNqpHDoRgMii7FD/B+Xw6HRZcCMicGg5L/0Go04v1+0aVYSQVWYQzv99VoVHQpFoDUrJrwwwe/ZPejap6chB8+FF0KyJwfRSX/ocX1deP166JLsZIKrMKYxuvX4vq66FIsAKkJmbNqtaKLAACwHkhNAACAtJCaAAAAaSE1IQ+bNW4BADALUhMAACAtpCYAAEBaSE0AAIC0kJqQOVx5AgCVgdQEAABIC6kJAACQFlIT8oArTwCgGpCakAd1d1d0EQAA1gCpCQAAkBZSEwAAIC2kJmTO2NkpuggAAOuB1ITM4XpNAKgMpCYAAEBaP3758mXRZYA1+8Uf/3HZmnf//A/+wD46KroUkK1/+Md/tGo1c2+v6IIkqUBVrMAqxP3D739vHx0Zu7tFFyStj+7v74suAwAAwGZADy0AAEBaSE0AAIC0kJoAALAYIYRSquhSFAOpCTNJKX3f932/6ILA1uGc+74vhMjiw33fbzabWXzyKsIwbLVaGa3y2mXx64Rh6Pt++TcCUhNmMk0zCIKS12CoJNu2bdvOqDUTBEEJ20mu6xqGUcKCTeU4jmma6/1M13WDICj/Rvi46AIAAMCGcV236CIUBqlZHb7vSymVUoZhUDNxXZ8chiHnXClFH2sYhl4khAjDUEppGIZhGHqplLLVajHGer1eFEVRFDHGbNvWO5sQgvp+qbSMsXiB6U/0unieF/9SKJvs6p7uCTQMw/M8y7Lo9TQVbGrNHNNsNqnY3W6XXkmoe81m0zRN0zTp6yZ3h0XFv8txnLGls1YhiqIwDBlj3W6XXuGct9ttxlin0/F9f/VCJvyg+hcJgkD/HCThRwnDMIqiIAiiKNLdV57n2ba9Ynnm/ijrP5jcQ1Xc3t7qx0EQdDqdtXysYRj6oy4uLlzX1YsuLi5s29bfO/b0/v7etm3P84IgoKe9Xk8vsixr7A/1om63O/YtjuOsZV0gI1nUvV6vZ9u2rjO3t7e2bV9dXcXfM6uCpamZ9JmO41xcXOjX59a9hN1hUWPf1el0LMtKuQqT39vtdnXBVi/k3B/U87z47hw360eZ/EFd141v/IRPTi5PwvpmcTBBalbK7e1tr9e7urrq9Xqe563lM+N5NvbUdd14bb6/v+90OvEKbdv2rAOo4zjdblc/jR8NJ6t1p9OJvxlKaO11b/Jzrq6uxgJgVgVLUzMno/Q+Rd1L2B0WNfld8VBJXoUgCOidjuNQPnmep3eitRQy+QdNTs2pP8rk59BZS8pPTihPwvpmcTBBD21FKKVarRb15DDGhBBjnSdZ0F+nOY5DPUXarPGPTqdDU+aoAyr+Ns55o9EYe/9k/xWURHZ1b+xzTNOUUo69Z2oFS1MzwzC0LGvsbXnWvck5L/H1TV4F27Y556ZpGoZBHZ5SynVNz1n9B5211499TsqJP6uUJ4sfFKlZEa1WKz7qwznnnGf9pZM1XgiRZsyA/tDz
PP200WhcXFzQU9u29SATlF92dU8IET/A0dBUmj9MUzM7nY4QotFo6NFBlm/dm1yXeLGTV8GyLEpQ27aFEHTqua6CZfeDcs7jP2jKpF+lPFn8oLjypCIMw4iff9HAeNZs245fzamUarfbac7jaJqDfjq3WSClnGxkQElkV/doDpp+6vt+yqmbKWumZVlBENCEIHolz7pnmmZ8LxBCxDfd3FUwDINCyHEc3/fXmJrZ/aBSSr15aeqQPnXOqDxZ/KC4e3tFRFHEOdc9GLZth2FIYx5Lf6ZSqtlsCiHoOirGWKPRoNP/TqdD76HptXTCKITQ0+ra7TbnPN6XEp9xxzkPw1B3uUgp47Mf6c+FEPSxdERbcbIiZCeLutdut6Mo8jwviiKqBmOVJLmCsdk1kzHWbDY557e3t/Sx1IOnCzyr7k3uDr7vh2Gony7B9/2xBjTn3PM8SseEVdB/S3vi4eFhp9OhK1xXL2TCD6qn7+qZvYyxTqdDhZy719Mf0koppTzP023NhE9OKE+a9V37wQSpWR1KKerDyWFEc/J7GWMpJ5Gn/0PazSYHn6BsMq17dPO2JarB0jWT5Vj3qOlD105MLl1lFVaRxQ9Km3S5FVm9PGv8QZGaAACQuVVSs1QwGwgAALIVhiF1wNK1lWu/G1+e0NYEAABIC3NoAQAA0kJqAgAApIXUBAAASAupCQAAkNb/B+bpIp6wJ82fAAAAAElFTkSuQmCC", "text/plain": [ "Tree('ROOT', [Tree('S', [Tree('NP', [Tree('NP', [Tree('DT', ['A']), Tree('NN', ['person'])]), Tree('PP', [Tree('IN', ['on']), Tree('NP', [Tree('DT', ['a']), Tree('NN', ['horse'])])])]), Tree('VP', [Tree('VBZ', ['jumps']), Tree('PP', [Tree('IN', ['over']), Tree('NP', [Tree('DT', ['a']), Tree('JJ', ['broken']), Tree('JJ', ['down']), Tree('NN', ['airplane'])])])]), Tree('.', ['.'])])])" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "snli_ex.sentence1_parse" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The leaves of either tree are tokenized versions of them:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['A',\n", " 'person',\n", " 'on',\n", " 'a',\n", " 'horse',\n", " 'jumps',\n", " 'over',\n", " 'a',\n", " 'broken',\n", " 'down',\n", " 'airplane',\n", " '.']" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "snli_ex.sentence1_parse.leaves()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## MultiNLI" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### MultiNLI properties\n", "\n", "\n", "* Train premises drawn from five genres: \n", " 1. Fiction: works from 1912–2010 spanning many genres\n", " 1. Government: reports, letters, speeches, etc., from government websites\n", " 1. The _Slate_ website\n", " 1. Telephone: the Switchboard corpus\n", " 1. Travel: Berlitz travel guides\n", "\n", "\n", "* Additional genres just for dev and test (the __mismatched__ condition): \n", " 1. The 9/11 report\n", " 1. Face-to-face: The Charlotte Narrative and Conversation Collection\n", " 1. Fundraising letters\n", " 1. Non-fiction from Oxford University Press\n", " 1. _Verbatim_ articles about linguistics\n", "\n", "\n", "* 392,702 train examples; 20K dev; 20K test\n", "\n", "\n", "* 19,647 examples validated by four additional annotators\n", " * 58.2% examples with unanimous gold label\n", " * 92.6% of gold labels match the author's label\n", "\n", "\n", "* Test-set labels available as a Kaggle competition. \n", "\n", " * Top matched scores currently around 0.81.\n", " * Top mismatched scores currently around 0.83." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Working with MultiNLI" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For MultiNLI, we have the following readers: \n", "\n", "* `nli.MultiNLITrainReader`\n", "* `nli.MultiNLIMatchedDevReader`\n", "* `nli.MultiNLIMismatchedDevReader`\n", "\n", "The MultiNLI test sets are available on Kaggle ([matched version](https://www.kaggle.com/c/multinli-matched-open-evaluation) and [mismatched version](https://www.kaggle.com/c/multinli-mismatched-open-evaluation))." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The interface to these is the same as for the SNLI readers:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"NLIReader({'src_filename': 'data/nlidata/multinli_1.0/multinli_1.0_train.jsonl', 'filter_unlabeled': True, 'samp_percentage': 0.1, 'random_state': 42, 'gold_label_attr_name': 'gold_label'})" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "nli.MultiNLITrainReader(MULTINLI_HOME, samp_percentage=0.10, random_state=42)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `NLIExample` instances for MultiNLI have the same attributes as those for SNLI. Here is the list repeated from above for convenience:\n", "\n", "* __annotator_labels__: `list of str`\n", "* __captionID__: `str`\n", "* __gold_label__: `str`\n", "* __pairID__: `str`\n", "* __sentence1__: `str`\n", "* __sentence1_binary_parse__: `nltk.tree.Tree`\n", "* __sentence1_parse__: `nltk.tree.Tree`\n", "* __sentence2__: `str`\n", "* __sentence2_binary_parse__: `nltk.tree.Tree`\n", "* __sentence2_parse__: `nltk.tree.Tree`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The full label distribution:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "contradiction 130903\n", "neutral 130900\n", "entailment 130899\n", "dtype: int64" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "multinli_labels = pd.Series(\n", " [ex.gold_label for ex in nli.MultiNLITrainReader(\n", " MULTINLI_HOME, filter_unlabeled=False).read()])\n", "\n", "multinli_labels.value_counts()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "No examples in the MultiNLI train set lack a gold label, so the value of the `filter_unlabeled` parameter has no effect here, but it does have an effect in the `Dev` versions." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Annotated MultiNLI subsets\n", "\n", "MultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena." 
] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "matched_ann_filename = os.path.join(\n", " ANNOTATIONS_HOME,\n", " \"multinli_1.0_matched_annotations.txt\")\n", "\n", "mismatched_ann_filename = os.path.join(\n", " ANNOTATIONS_HOME, \n", " \"multinli_1.0_mismatched_annotations.txt\")" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "def view_random_example(annotations, random_state=42):\n", " random.seed(random_state)\n", " ann_ex = random.choice(list(annotations.items()))\n", " pairid, ann_ex = ann_ex\n", " ex = ann_ex['example'] \n", " print(\"pairID: {}\".format(pairid))\n", " print(ann_ex['annotations'])\n", " print(ex.sentence1)\n", " print(ex.gold_label)\n", " print(ex.sentence2)" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "matched_ann = nli.read_annotated_subset(matched_ann_filename, MULTINLI_HOME)" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "pairID: 63218c\n", "[]\n", "Recently, however, I have settled down and become decidedly less experimental.\n", "contradiction\n", "I am still as experimental as ever, and I am always on the move.\n" ] } ], "source": [ "view_random_example(matched_ann)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Adversarial NLI" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Adversarial NLI properties\n", "\n", "The ANLI dataset was created in response to evidence that datasets like SNLI and MultiNLI are artificially easy for modern machine learning models to solve. The team sought to tackle this weakness head-on, by designing a crowdsourcing task in which annotators were explicitly trying to confuse state-of-the-art models. In broad outline, the task worked like this:\n", "\n", "1. The crowdworker is presented with a premise (context) text and asked to construct a hypothesis sentence that entails, contradicts, or is neutral with respect to that premise. (The precise wording is more informally, along the lines of the SNLI/MultiNLI task).\n", "\n", "1. The crowdworker submits a hypothesis text.\n", "\n", "1. The premise/hypothesis pair is fed to a trained model that makes a prediction about the correct NLI label.\n", "\n", "1. If the model's prediction is correct, then the crowdworker loops back to step 2 to try again. If the model's prediction is incorrect, then the example is validated by different crowdworkers.\n", "\n", "The dataset consists of three rounds, each involving a different model and a different set of sources for the premise texts:\n", "\n", "| Round | Model | Training data | Context sources | \n", "|:------:|:------------|:---------------------------|:-----------------|\n", "| 1 | [BERT-large](https://www.aclweb.org/anthology/N19-1423/) | SNLI + MultiNLI | Wikipedia |\n", "| 2 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 1 | Wikipedia |\n", "| 3 | [ROBERTa](https://arxiv.org/abs/1907.11692) | SNLI + MultiNLI + [NLI-FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) + Round 1 | Various |\n", "\n", "Each round has train/dev/test splits. 
The sizes of these splits and their label distributions are calculated just below.\n", "\n", "The [project README](https://github.com/facebookresearch/anli/blob/master/README.md) seeks to establish some rules for how the rounds can be used for training and evaluation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Working with Adversarial NLI" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For ANLI, we have the following readers: \n", "\n", "* `nli.ANLITrainReader`\n", "* `nli.ANLIDevReader`\n", "\n", "As with SNLI, we leave the writing of a `Test` version to the user, as a way of discouraging inadvertent use of the test set during project development." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Because ANLI is distributed in three rounds, and the rounds can be used independently or pooled, the interface has a `rounds` argument. The default is `rounds=(1,2,3)`, but any subset of them can be specified. Here are some illustrations using the `Train` reader; the `Dev` interface is the same:" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "R(1,): 16,946\n", "R(2,): 45,460\n", "R(3,): 100,459\n", "R(1, 2, 3): 162,865\n" ] } ], "source": [ "for rounds in ((1,), (2,), (3,), (1,2,3)):\n", " count = len(list(nli.ANLITrainReader(ANLI_HOME, rounds=rounds).read()))\n", " print(\"R{0:}: {1:,}\".format(rounds, count))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The above figures correspond to those in Table 2 of the paper. I am not sure what accounts for the differences of 100 examples in round 2 (and, in turn, in the grand total)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "ANLI uses a different set of attributes from SNLI/MultiNLI. Here is a summary of what `NLIExample` instances offer for this corpus:\n", "\n", "* __uid__: a unique identifier; akin to `pairID` in SNLI/MultiNLI \n", "* __context__: the premise; corresponds to `sentence1` in SNLI/MultiNLI\n", "* __hypothesis__: the hypothesis; corresponds to `sentence2` in SNLI/MultiNLI\n", "* __label__: the gold label; corresponds to `gold_label` in SNLI/MultiNLI\n", "* __model_label__: the label predicted by the model used in the current round\n", "* __reason__: a crowdworker's free-text hypothesis about why the model made an incorrect prediction for the current __context__/__hypothesis__ pair\n", "* __emturk__: for dev (and test), this is `True` if the annotator contributed only dev (test) examples, else `False`; in turn, it is `False` for all train examples.\n", "* __genre__: the source for the __context__ text\n", "* __tag__: information about the round and train/dev/test classification\n", "\n", "All these attributes are `str`-valued except for `emturk`, which is `bool`-valued."
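] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make the attribute list concrete, here is a small sketch that pulls the first Round 1 training example and prints a few of these fields:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch: inspect a few attributes of the first Round 1 train example,\n", "# using only the reader interface and attribute names described above.\n", "anli_ex = next(iter(nli.ANLITrainReader(ANLI_HOME, rounds=(1,)).read()))\n", "\n", "print(anli_ex.uid)\n", "print(anli_ex.label, anli_ex.model_label)\n", "print(anli_ex.genre)\n", "print(anli_ex.tag)"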
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The labels in this datset are conceptually the same as for ` SNLI/MultiNLI`, but they are encoded differently:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "n 68789\n", "e 52111\n", "c 41965\n", "dtype: int64" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "anli_labels = pd.Series([ex.label for ex in nli.ANLITrainReader(ANLI_HOME).read()])\n", "\n", "anli_labels.value_counts()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the dev set, the `label` and `model_label` values are always different, suggesting that these evaluations will be very challenging for present-day models:" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "False 3200\n", "dtype: int64" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pd.Series(\n", " [ex.label == ex.model_label for ex in nli.ANLIDevReader(ANLI_HOME).read()]\n", ").value_counts()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the train set, they do sometimes correspond, and you can track the changes in the rate of correct model predictions across the rounds:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "True 0.821197\n", "False 0.178803\n", "Name: Round 1, dtype: float64\n", "\n", "True 0.932028\n", "False 0.067972\n", "Name: Round 2, dtype: float64\n", "\n", "True 0.915916\n", "False 0.084084\n", "Name: Round 3, dtype: float64\n", "\n" ] } ], "source": [ "for r in (1,2,3):\n", " dist = pd.Series(\n", " [ex.label == ex.model_label for ex in nli.ANLITrainReader(ANLI_HOME, rounds=(r,)).read()]\n", " ).value_counts()\n", " dist = dist / dist.sum()\n", " dist.name = \"Round {}\".format(r)\n", " print(dist, end=\"\\n\\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This corresponds to Table 2, \"Model error rate (Verified)\", in the paper. 
(I am not sure what accounts for the slight differences in the percentages.)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Other NLI datasets" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* [The FraCaS textual inference test suite](http://www-nlp.stanford.edu/~wcmac/downloads/) is a smaller, hand-built dataset that is great for evaluating a model's ability to handle complex logical patterns.\n", "\n", "* [SemEval 2013](https://www.cs.york.ac.uk/semeval-2013/) had a wide range of interesting data sets for NLI and related tasks.\n", "\n", "* [The SemEval 2014 semantic relatedness shared task](http://alt.qcri.org/semeval2014/task1/) used an NLI dataset called [Sentences Involving Compositional Knowledge (SICK)](http://alt.qcri.org/semeval2014/task1/index.php?id=data-and-tools).\n", "\n", "* [MedNLI](https://physionet.org/physiotools/mimic-code/mednli/) is specialized to the medical domain, using data derived from [MIMIC III](https://mimic.physionet.org).\n", "\n", "* [XNLI](https://github.com/facebookresearch/XNLI) is a multilingual NLI dataset derived from MultiNLI.\n", "\n", "* [Diverse Natural Language Inference Collection (DNC)](http://decomp.io/projects/diverse-natural-language-inference/) transforms existing annotations from other tasks into NLI problems for a diverse range of reasoning challenges.\n", "\n", "* [SciTail](http://data.allenai.org/scitail/) is an NLI dataset derived from multiple-choice science exam questions and Web text.\n", "\n", "* [NLI Style FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) is a version of [the FEVER dataset](http://fever.ai) put into a standard NLI format. It was used by the Adversarial NLI team to train models for their annotation round 2.\n", "\n", "* Models for NLI might be adapted for use with [the 30M Factoid Question-Answer Corpus](http://agarciaduran.org/).\n", "\n", "* Models for NLI might be adapted for use with [the Penn Paraphrase Database](http://paraphrase.org/)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.4" } }, "nbformat": 4, "nbformat_minor": 2 }