{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Homework and bake-off: Relation extraction using distant supervision" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "__author__ = \"Bill MacCartney and Christopher Potts\"\n", "__version__ = \"CS224u, Stanford, Spring 2020\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Contents\n", "\n", "1. [Overview](#Overview)\n", "1. [Set-up](#Set-up)\n", "1. [Baselines](#Baselines)\n", " 1. [Hand-build feature functions](#Hand-build-feature-functions)\n", " 1. [Distributed representations](#Distributed-representations)\n", "1. [Homework questions](#Homework-questions)\n", " 1. [Different model factory [1 points]](#Different-model-factory-[1-points])\n", " 1. [Directional unigram features [1.5 points]](#Directional-unigram-features-[1.5-points])\n", " 1. [The part-of-speech tags of the \"middle\" words [1.5 points]](#The-part-of-speech-tags-of-the-\"middle\"-words-[1.5-points])\n", " 1. [Bag of Synsets [2 points]](#Bag-of-Synsets-[2-points])\n", " 1. [Your original system [3 points]](#Your-original-system-[3-points])\n", "1. [Bake-off [1 point]](#Bake-off-[1-point])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overview\n", "\n", "This homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. \n", "\n", "As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set-up\n", "\n", "See [the first notebook in this unit](rel_ext_01_task.ipynb#Set-up) for set-up instructions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import os\n", "import rel_ext\n", "from sklearn.linear_model import LogisticRegression\n", "import utils" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rel_ext_data_home = os.path.join('data', 'rel_ext_data')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dataset = rel_ext.Dataset(corpus, kb)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You are not wedded to this set-up for splits. 
The bake-off will be conducted on a previously unseen test set, so all of the data in `dataset` is fair game:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "splits = dataset.build_splits(\n", " split_names=['tiny', 'train', 'dev'],\n", " split_fracs=[0.01, 0.79, 0.20],\n", " seed=1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "splits" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Baselines" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Hand-built feature functions" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):\n", " for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):\n", " for word in ex.middle.split(' '):\n", " feature_counter[word] += 1\n", " for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):\n", " for word in ex.middle.split(' '):\n", " feature_counter[word] += 1\n", " return feature_counter" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "featurizers = [simple_bag_of_words_featurizer]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseline_results = rel_ext.experiment(\n", " splits,\n", " train_split='train',\n", " test_split='dev',\n", " featurizers=featurizers,\n", " model_factory=model_factory,\n", " verbose=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Studying model weights might yield insights:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rel_ext.examine_model_weights(baseline_results)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Distributed representations\n", "\n", "This simple baseline sums the GloVe vector representations for all of the words in the \"middle\" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "GLOVE_HOME = os.path.join('data', 'glove.6B')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "glove_lookup = utils.glove2dict(\n", " os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def glove_middle_featurizer(kbt, corpus, np_func=np.sum):\n", " reps = []\n", " for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):\n", " for word in ex.middle.split():\n", " rep = glove_lookup.get(word)\n", " if rep is not None:\n", " reps.append(rep)\n", " # A random representation of the right dimensionality if the\n", " # example happens not to overlap with GloVe's vocabulary:\n", " if len(reps) == 0:\n", " dim = len(next(iter(glove_lookup.values()))) \n", " return utils.randvec(n=dim)\n", " else:\n", " return np_func(reps, axis=0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "glove_results = rel_ext.experiment(\n", " splits,\n", " train_split='train',\n", " test_split='dev',\n", " featurizers=[glove_middle_featurizer], \n", " vectorize=False, # Crucial for this featurizer!\n", " verbose=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Homework questions\n", "\n", "Please embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Different model factory [1 points]\n", "\n", "The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).\n", "\n", "__To submit:__ A wrapper function `run_svm_model_factory` that does the following: \n", "\n", "1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. \n", "1. Trains on the 'train' part of `splits`.\n", "1. Assesses on the `dev` part of `splits`.\n", "1. Uses `featurizers` as defined above. \n", "1. Returns the return value of `rel_ext.experiment` for this set-up.\n", "\n", "The function `test_run_svm_model_factory` will check that your function conforms to these general specifications." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def run_svm_model_factory():\n", " \n", " ##### YOUR CODE HERE\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def test_run_svm_model_factory(run_svm_model_factory):\n", " results = run_svm_model_factory()\n", " assert 'featurizers' in results, \\\n", " \"The return value of `run_svm_model_factory` seems not to be correct\"\n", " # Check one of the models to make sure it's an SVC:\n", " assert 'SVC' in results['models']['adjoins'].__class__.__name__, \\\n", " \"It looks like the model factor wasn't set to use an SVC.\" " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if 'IS_GRADESCOPE_ENV' not in os.environ:\n", " test_run_svm_model_factory(run_svm_model_factory)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Directional unigram features [1.5 points]\n", "\n", "The current bag-of-words representation makes no distinction between \"forward\" and \"reverse\" examples. But, intuitively, there is big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. \n", "\n", "__To submit:__\n", "\n", "1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes \"forward\" and \"reverse\". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.\n", "\n", "2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)\n", "\n", "3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def directional_bag_of_words_featurizer(kbt, corpus, feature_counter): \n", " # Append these to the end of the keys you add/access in \n", " # `feature_counter` to distinguish the two orders. 
You'll\n", " # need to use exactly these strings in order to pass \n", " # `test_directional_bag_of_words_featurizer`.\n", " subject_object_suffix = \"_SO\"\n", " object_subject_suffix = \"_OS\"\n", " \n", " ##### YOUR CODE HERE\n", "\n", "\n", " return feature_counter\n", "\n", "\n", "# Call to `rel_ext.experiment`:\n", "##### YOUR CODE HERE \n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def test_directional_bag_of_words_featurizer(corpus):\n", " from collections import defaultdict\n", " kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')\n", " feature_counter = defaultdict(int)\n", " # Make sure `feature_counter` is being updated, not reinitialized:\n", " feature_counter['is_OS'] += 5\n", " feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)\n", " expected = defaultdict(\n", " int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})\n", " assert feature_counter == expected, \\\n", " \"Expected:\\n{}\\nGot:\\n{}\".format(expected, feature_counter)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if 'IS_GRADESCOPE_ENV' not in os.environ:\n", " test_directional_bag_of_words_featurizer(corpus)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The part-of-speech tags of the \"middle\" words [1.5 points]\n", "\n", "Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.\n", "\n", "__To submit:__\n", "\n", "1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given \n", "\n", " `The/DT dog/N napped/V`\n", " \n", " we obtain the list of bigram POS sequences\n", " \n", " `b = ['<s> DT', 'DT N', 'N V', 'V </s>']`. \n", " \n", " Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.\n", "\n", "2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):\n", " \n", " ##### YOUR CODE HERE\n", "\n", "\n", " return feature_counter\n", "\n", "\n", "def get_tag_bigrams(s):\n", " \"\"\"Suggested helper method for `middle_bigram_pos_tag_featurizer`.\n", " This should be defined so that it returns a list of str, where each \n", " element is a POS bigram.\"\"\"\n", " # The values of `start_symbol` and `end_symbol` are defined\n", " # here so that you can use `test_middle_bigram_pos_tag_featurizer`.\n", " start_symbol = \"<s>\"\n", " end_symbol = \"</s>\"\n", " \n", " ##### YOUR CODE HERE\n", "\n", "\n", "\n", " \n", "def get_tags(s): \n", " \"\"\"Given a sequence of word/POS elements (lemmas), this function\n", " returns a list containing just the POS elements, in order. 
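For example,\n", " get_tags(\"the/DT dog/N\") should return [\"DT\", \"N\"].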
\n", " \"\"\"\n", " return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]\n", "\n", "\n", "def parse_lem(lem):\n", " \"\"\"Helper method for parsing word/POS elements. It just splits\n", " on the rightmost / and returns (word, POS) as a tuple of str.\"\"\"\n", " return lem.strip().rsplit('/', 1) \n", "\n", "# Call to `rel_ext.experiment`:\n", "##### YOUR CODE HERE\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def test_middle_bigram_pos_tag_featurizer(corpus):\n", " from collections import defaultdict\n", " kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')\n", " feature_counter = defaultdict(int)\n", " # Make sure `feature_counter` is being updated, not reinitialized:\n", " feature_counter[' VBZ'] += 5\n", " feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)\n", " expected = defaultdict(\n", " int, {' VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN ':1})\n", " assert feature_counter == expected, \\\n", " \"Expected:\\n{}\\nGot:\\n{}\".format(expected, feature_counter)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if 'IS_GRADESCOPE_ENV' not in os.environ:\n", " test_middle_bigram_pos_tag_featurizer(corpus)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Bag of Synsets [2 points]\n", "\n", "The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:\n", "\n", "```\n", "from nltk.corpus import wordnet as wn\n", "dog = wn.synsets('dog', pos='n')\n", "dog\n", "[Synset('dog.n.01'),\n", " Synset('frump.n.01'),\n", " Synset('dog.n.03'),\n", " Synset('cad.n.01'),\n", " Synset('frank.n.02'),\n", " Synset('pawl.n.01'),\n", " Synset('andiron.n.01')]\n", "```\n", "\n", "This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.\n", "\n", "__To submit:__\n", "\n", "1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.\n", "\n", "2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from nltk.corpus import wordnet as wn\n", "\n", "def synset_featurizer(kbt, corpus, feature_counter):\n", " \n", " ##### YOUR CODE HERE\n", "\n", "\n", " return feature_counter\n", "\n", "\n", "def get_synsets(s):\n", " \"\"\"Suggested helper method for `synset_featurizer`. This should\n", " be completed so that it returns a list of stringified Synsets \n", " associated with elements of `s`.\n", " \"\"\" \n", " # Use `parse_lem` from the previous question to get a list of\n", " # (word, POS) pairs. 
Remember to convert the POS strings.\n", " wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]\n", " \n", " ##### YOUR CODE HERE\n", "\n", "\n", " \n", " \n", "def convert_tag(t):\n", " \"\"\"Converts tags so that they can be used by WordNet:\n", " \n", " | Tag begins with | WordNet tag |\n", " |-----------------|-------------|\n", " | `N` | `n` |\n", " | `V` | `v` |\n", " | `J` | `a` |\n", " | `R` | `r` |\n", " | Otherwise | `None` |\n", " \"\"\" \n", " if t[0].lower() in {'n', 'v', 'r'}:\n", " return t[0].lower()\n", " elif t[0].lower() == 'j':\n", " return 'a'\n", " else:\n", " return None \n", "\n", "\n", "# Call to `rel_ext.experiment`:\n", "##### YOUR CODE HERE \n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def test_synset_featurizer(corpus):\n", " from collections import defaultdict\n", " kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')\n", " feature_counter = defaultdict(int)\n", " # Make sure `feature_counter` is being updated, not reinitialized:\n", " feature_counter[\"Synset('be.v.01')\"] += 5\n", " feature_counter = synset_featurizer(kbt, corpus, feature_counter)\n", " # The full return values for this tend to be long, so we just\n", " # test a few examples to avoid cluttering up this notebook.\n", " test_cases = {\n", " \"Synset('be.v.01')\": 6,\n", " \"Synset('embody.v.02')\": 1\n", " }\n", " for ss, expected in test_cases.items(): \n", " result = feature_counter[ss]\n", " assert result == expected, \\\n", " \"Incorrect count for {}: Expected {}; Got {}\".format(ss, expected, result)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if 'IS_GRADESCOPE_ENV' not in os.environ:\n", " test_synset_featurizer(corpus)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Your original system [3 points]\n", "\n", "There are many options, and this could easily grow into a project. Here are a few ideas:\n", "\n", "- Try out different classifier models, from `sklearn` and elsewhere.\n", "- Add a feature that indicates the length of the middle (a minimal sketch is given below).\n", "- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).\n", "- Introduce features based on the entity mentions themselves. \n", "- Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.\n", "- Try adding features that capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.\n", "- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?\n", "\n", "In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies." 
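] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By way of illustration (and not as a solution), here is a minimal sketch of the middle-length idea from the list above. It uses the same featurizer interface as `simple_bag_of_words_featurizer`; the function name and feature-key format are made up for this sketch, and an original system should go well beyond it:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def middle_length_featurizer(kbt, corpus, feature_counter):\n", " # Sketch: count the token length of each middle span, in both\n", " # entity orders.\n", " for sbj, obj in ((kbt.sbj, kbt.obj), (kbt.obj, kbt.sbj)):\n", " for ex in corpus.get_examples_for_entities(sbj, obj):\n", " feature_counter['middle_len={}'.format(len(ex.middle.split()))] += 1\n", " return feature_counter"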
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Enter your system description in this cell.\n", "# Please do not remove this comment.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Bake-off [1 point]\n", "\n", "For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:\n", "\n", "1. Only one evaluation is permitted.\n", "1. No additional system tuning is permitted once the bake-off has started.\n", "\n", "The cells below this one constitute your bake-off entry.\n", "\n", "People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.\n", "\n", "Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.\n", "\n", "The announcement will include the details on where to submit your entry." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Enter your bake-off assessment code in this cell. \n", "# Please do not remove this comment.\n", "if 'IS_GRADESCOPE_ENV' not in os.environ:\n", " pass\n", " # Please enter your code in the scope of the above conditional.\n", " ##### YOUR CODE HERE\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# On an otherwise blank line in this cell, please enter\n", "# your macro-average f-score (an F_0.5 score) as reported \n", "# by the code above. Please enter only a number between \n", "# 0 and 1 inclusive. Please do not remove this comment.\n", "if 'IS_GRADESCOPE_ENV' not in os.environ:\n", " pass\n", " # Please enter your score in the scope of the above conditional.\n", " ##### YOUR CODE HERE\n", "\n", "\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.5" }, "widgets": { "state": {}, "version": "1.1.2" } }, "nbformat": 4, "nbformat_minor": 2 }