{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Relation extraction using distant supervision: Experiments" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "__author__ = \"Bill MacCartney\"\n", "__version__ = \"CS224U, Stanford, Spring 2019\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Contents\n", "\n", "1. [Overview](#Overview)\n", "1. [Set-up](#Set-up)\n", "1. [Building a classifier](#Building-a-classifier)\n", " 1. [Featurizers](#Featurizers)\n", " 1. [Experiments](#Experiments)\n", "1. [Analysis](#Analysis)\n", " 1. [Examining the trained models](#Examining-the-trained-models)\n", " 1. [Discovering new relation instances](#Discovering-new-relation-instances)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overview\n", "\n", "OK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on the test data. We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation, and a linear model trained using logistic regression.\n", "\n", "Just like we did in the unit on [supervised sentiment analysis](https://github.com/cgpotts/cs224u/blob/master/sst_02_hand_built_features.ipynb), we'll leverage the `sklearn` library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set-up\n", "\n", "See [the first notebook in this unit](rel_ext_01_task.ipynb#Set-up) for set-up instructions." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "from collections import Counter\n", "import os\n", "import rel_ext" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "rel_ext_data_home = os.path.join('data', 'rel_ext_data')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With the following steps, we build up the dataset we'll use for experiments; it unites a corpus and a knowledge base in the way we described in [the previous notebook](rel_ext_01_task.ipynb)." 
] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "dataset = rel_ext.Dataset(corpus, kb)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following code splits up our data in a way that supports experimentation:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'tiny': Corpus with 3,474 examples; KB with 445 triples,\n", " 'train': Corpus with 249,003 examples; KB with 34,229 triples,\n", " 'dev': Corpus with 79,219 examples; KB with 11,210 triples,\n", " 'all': Corpus with 331,696 examples; KB with 45,884 triples}" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "splits = dataset.build_splits()\n", "\n", "splits" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Building a classifier" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Featurizers\n", "\n", "Featurizers are functions which define the feature representation for our model. The primary input to a featurizer will be the `KBTriple` for which we are generating features. But since our features will be derived from corpus examples containing the entities of the `KBTriple`, we must also pass in a reference to a `Corpus`. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.\n", "\n", "Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the `KBTriple`, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between \"forward\" and \"reverse\" examples." 
] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):\n", " for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):\n", " for word in ex.middle.split(' '):\n", " feature_counter[word] += 1\n", " for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):\n", " for word in ex.middle.split(' '):\n", " feature_counter[word] += 1\n", " return feature_counter" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's how this featurizer works on a single example:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "KBTriple(rel='contains', sbj='Brickfields', obj='Kuala_Lumpur_Sentral_railway_station')" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "kbt = kb.kb_triples[0]\n", "\n", "kbt" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'it was just a quick 10-minute walk to'" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "corpus.get_examples_for_entities(kbt.sbj, kbt.obj)[0].middle" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Counter({'it': 1,\n", " 'was': 1,\n", " 'just': 1,\n", " 'a': 1,\n", " 'quick': 1,\n", " '10-minute': 1,\n", " 'walk': 1,\n", " 'to': 2,\n", " 'the': 1})" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "simple_bag_of_words_featurizer(kb.kb_triples[0], corpus, Counter())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can experiment with adding new kinds of features just by implementing additional featurizers, following `simple_bag_of_words_featurizer` as an example.\n", "\n", "Now, in order to apply machine learning algorithms such as those provided by `sklearn`, we need a way to convert datasets of `KBTriple`s into feature matrices. The following steps achieve that: " ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "kbts_by_rel, labels_by_rel = dataset.build_dataset()\n", "\n", "featurized = dataset.featurize(kbts_by_rel, featurizers=[simple_bag_of_words_featurizer])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Experiments\n", "\n", "Now we need some functions to train models, make predictions, and evaluate the results. We'll start with `train_models()`. This function takes as arguments a dictionary of data splits, a list of featurizers, the name of the split on which to train, and a model factory, which is a function which initializes an `sklearn` classifier. It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation." ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "train_result = rel_ext.train_models(\n", " splits, \n", " featurizers=[simple_bag_of_words_featurizer])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next comes `predict()`. This function takes as arguments a dictionary of data splits, the outputs of `train_models()`, and the name of the split for which to make predictions. It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by prediction)." 
] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "predictions, true_labels = rel_ext.predict(\n", " splits, train_result, split_name='dev')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now `evaluate_predictions()`. This function takes as arguments the parallel dictionaries of predictions and true labels produced by `predict()`. It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "relation precision recall f-score support size\n", "------------------ --------- --------- --------- --------- ---------\n", "adjoins 0.877 0.386 0.699 407 7057\n", "author 0.810 0.519 0.728 657 7307\n", "capital 0.652 0.238 0.484 126 6776\n", "contains 0.778 0.605 0.736 4487 11137\n", "film_performance 0.782 0.597 0.736 984 7634\n", "founders 0.822 0.414 0.686 469 7119\n", "genre 0.517 0.151 0.348 205 6855\n", "has_sibling 0.858 0.251 0.578 625 7275\n", "has_spouse 0.892 0.338 0.672 754 7404\n", "is_a 0.705 0.217 0.486 618 7268\n", "nationality 0.578 0.192 0.412 386 7036\n", "parents 0.827 0.538 0.747 390 7040\n", "place_of_birth 0.558 0.206 0.415 282 6932\n", "place_of_death 0.415 0.105 0.261 209 6859\n", "profession 0.659 0.188 0.439 308 6958\n", "worked_at 0.705 0.261 0.526 303 6953\n", "------------------ --------- --------- --------- --------- ---------\n", "macro-average 0.715 0.325 0.560 11210 117610\n" ] }, { "data": { "text/plain": [ "0.5596964276671111" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "rel_ext.evaluate_predictions(predictions, true_labels)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we introduce `rel_ext.experiment()`, which basically chains together `rel_ext.train_models()`, `rel_ext.predict()`, and `rel_ext.evaluate_predictions()`. For convenience, this function returns the output of `rel_ext.train_models()` as its result." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Running `rel_ext.experiment()` in its default configuration will give us a baseline result for machine-learned models." 
] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "relation precision recall f-score support size\n", "------------------ --------- --------- --------- --------- ---------\n", "adjoins 0.877 0.386 0.699 407 7057\n", "author 0.810 0.519 0.728 657 7307\n", "capital 0.652 0.238 0.484 126 6776\n", "contains 0.778 0.605 0.736 4487 11137\n", "film_performance 0.782 0.597 0.736 984 7634\n", "founders 0.822 0.414 0.686 469 7119\n", "genre 0.517 0.151 0.348 205 6855\n", "has_sibling 0.858 0.251 0.578 625 7275\n", "has_spouse 0.892 0.338 0.672 754 7404\n", "is_a 0.705 0.217 0.486 618 7268\n", "nationality 0.578 0.192 0.412 386 7036\n", "parents 0.827 0.538 0.747 390 7040\n", "place_of_birth 0.558 0.206 0.415 282 6932\n", "place_of_death 0.415 0.105 0.261 209 6859\n", "profession 0.659 0.188 0.439 308 6958\n", "worked_at 0.705 0.261 0.526 303 6953\n", "------------------ --------- --------- --------- --------- ---------\n", "macro-average 0.715 0.325 0.560 11210 117610\n" ] } ], "source": [ "_ = rel_ext.experiment(\n", " splits,\n", " featurizers=[simple_bag_of_words_featurizer])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Considering how vanilla our model is, these results are quite surprisingly good! We see huge gains for every relation over our `top_k_middles_classifier` from [the previous notebook](rel_ext_01_task.ipynb#A-simple-baseline-model). This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.\n", "\n", "But there is still much more we can do. To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Analysis" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Examining the trained models\n", "\n", "One important way to gain understanding of our trained model is to inspect the model weights. What features are strong positive indicators for each relation, and what features are strong negative indicators?" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Highest and lowest feature weights for relation adjoins:\n", "\n", " 2.556 Taluks\n", " 2.483 Córdoba\n", " 2.481 Valais\n", " ..... .....\n", " -1.316 Cook\n", " -1.438 he\n", " -1.459 who\n", "\n", "Highest and lowest feature weights for relation author:\n", "\n", " 2.699 book\n", " 2.566 musical\n", " 2.507 books\n", " ..... .....\n", " -2.791 1945\n", " -2.885 17th\n", " -2.998 1818\n", "\n", "Highest and lowest feature weights for relation capital:\n", "\n", " 3.700 capital\n", " 1.718 km\n", " 1.459 posted\n", " ..... .....\n", " -1.165 southwestern\n", " -1.612 Dehradun\n", " -1.870 state\n", "\n", "Highest and lowest feature weights for relation contains:\n", "\n", " 2.288 bordered\n", " 2.119 Ontario\n", " 2.021 third-largest\n", " ..... .....\n", " -2.347 Midlands\n", " -2.496 who\n", " -2.718 Mile\n", "\n", "Highest and lowest feature weights for relation film_performance:\n", "\n", " 4.404 alongside\n", " 4.049 starring\n", " 3.604 movie\n", " ..... .....\n", " -1.578 poem\n", " -1.718 tragedy\n", " -1.756 or\n", "\n", "Highest and lowest feature weights for relation founders:\n", "\n", " 3.993 founded\n", " 3.865 founder\n", " 3.435 co-founder\n", " ..... 
.....\n", " -1.587 band\n", " -1.673 novel\n", " -1.764 Bauhaus\n", "\n", "Highest and lowest feature weights for relation genre:\n", "\n", " 2.792 series\n", " 2.776 movie\n", " 2.635 album\n", " ..... .....\n", " -1.326 's\n", " -1.410 and\n", " -1.664 at\n", "\n", "Highest and lowest feature weights for relation has_sibling:\n", "\n", " 5.362 brother\n", " 4.208 sister\n", " 2.790 Marlon\n", " ..... .....\n", " -1.350 alongside\n", " -1.414 Her\n", " -1.999 formed\n", "\n", "Highest and lowest feature weights for relation has_spouse:\n", "\n", " 5.038 wife\n", " 4.283 widow\n", " 4.221 married\n", " ..... .....\n", " -1.227 which\n", " -1.265 reported\n", " -1.298 Sir\n", "\n", "Highest and lowest feature weights for relation is_a:\n", "\n", " 2.789 \n", " 2.692 order\n", " 2.467 philosopher\n", " ..... .....\n", " -1.741 birds\n", " -3.094 cat\n", " -4.383 characin\n", "\n", "Highest and lowest feature weights for relation nationality:\n", "\n", " 2.932 born\n", " 1.859 leaving\n", " 1.839 Set\n", " ..... .....\n", " -1.406 or\n", " -1.608 1961\n", " -1.710 American\n", "\n", "Highest and lowest feature weights for relation parents:\n", "\n", " 4.626 daughter\n", " 4.525 father\n", " 4.495 son\n", " ..... .....\n", " -1.487 defeated\n", " -1.524 Sonam\n", " -1.584 filmmaker\n", "\n", "Highest and lowest feature weights for relation place_of_birth:\n", "\n", " 3.997 born\n", " 3.004 birthplace\n", " 2.905 mayor\n", " ..... .....\n", " -1.319 American\n", " -1.412 or\n", " -1.507 and\n", "\n", "Highest and lowest feature weights for relation place_of_death:\n", "\n", " 2.330 died\n", " 1.821 where\n", " 1.660 living\n", " ..... .....\n", " -1.225 as\n", " -1.232 and\n", " -1.283 created\n", "\n", "Highest and lowest feature weights for relation profession:\n", "\n", " 3.338 \n", " 2.538 philosopher\n", " 2.377 American\n", " ..... .....\n", " -1.298 Texas\n", " -1.302 in\n", " -1.972 on\n", "\n", "Highest and lowest feature weights for relation worked_at:\n", "\n", " 3.077 CEO\n", " 2.922 professor\n", " 2.818 employee\n", " ..... .....\n", " -1.406 bassist\n", " -1.684 family\n", " -1.730 or\n", "\n" ] } ], "source": [ "rel_ext.examine_model_weights(train_result)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)\n", "\n", "The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some _other_ relation which is anti-correlated with the target relation. (As an example, \"directed\" is a negative indicator for the `author` relation.)\n", "\n", "__Optional exercise:__ Investigate one of the counter-intuitive high-weight features. Find the training examples which caused the feature to be included. Given the training data, does it make sense that this feature is a good predictor for the target relation?\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Discovering new relation instances\n", "\n", "Another way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. 
Can the models we've trained do this effectively?\n", "\n", "Because the goal is to discover new relation instances which are _true_ but _absent from the KB_, we can't evaluate this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.\n", "\n", "To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as \"negative examples\"). We'll then apply our trained models to each pair of entities, and sort the results by probability assigned by the model, in order to find the most likely new instances for each relation." ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Highest probability examples for relation adjoins:\n", "\n", " 1.000 KBTriple(rel='adjoins', sbj='Canada', obj='Vancouver')\n", " 1.000 KBTriple(rel='adjoins', sbj='Vancouver', obj='Canada')\n", " 1.000 KBTriple(rel='adjoins', sbj='Mexico', obj='Atlantic_Ocean')\n", " 1.000 KBTriple(rel='adjoins', sbj='Atlantic_Ocean', obj='Mexico')\n", " 1.000 KBTriple(rel='adjoins', sbj='Pakistan', obj='Lahore')\n", " 1.000 KBTriple(rel='adjoins', sbj='Lahore', obj='Pakistan')\n", " 1.000 KBTriple(rel='adjoins', sbj='Sicily', obj='Italy')\n", " 1.000 KBTriple(rel='adjoins', sbj='Italy', obj='Sicily')\n", " 1.000 KBTriple(rel='adjoins', sbj='Great_Britain', obj='Europe')\n", " 1.000 KBTriple(rel='adjoins', sbj='Europe', obj='Great_Britain')\n", "\n", "Highest probability examples for relation author:\n", "\n", " 1.000 KBTriple(rel='author', sbj='Charles_Dickens', obj='A_Christmas_Carol')\n", " 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='Brave_New_World')\n", " 1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception')\n", " 1.000 KBTriple(rel='author', sbj='The_Doors_of_Perception', obj='Aldous_Huxley')\n", " 1.000 KBTriple(rel='author', sbj='Pride_and_Prejudice', obj='Jane_Austen')\n", " 1.000 KBTriple(rel='author', sbj='Brave_New_World', obj='Aldous_Huxley')\n", " 1.000 KBTriple(rel='author', sbj='Jane_Austen', obj='Pride_and_Prejudice')\n", " 1.000 KBTriple(rel='author', sbj='A_Christmas_Carol', obj='Charles_Dickens')\n", " 1.000 KBTriple(rel='author', sbj='Oliver_Twist', obj='Charles_Dickens')\n", " 1.000 KBTriple(rel='author', sbj='Charles_Dickens', obj='Oliver_Twist')\n", "\n", "Highest probability examples for relation capital:\n", "\n", " 1.000 KBTriple(rel='capital', sbj='Dhaka', obj='Bangladesh')\n", " 1.000 KBTriple(rel='capital', sbj='Bangladesh', obj='Dhaka')\n", " 1.000 KBTriple(rel='capital', sbj='Chengdu', obj='Sichuan')\n", " 1.000 KBTriple(rel='capital', sbj='Sichuan', obj='Chengdu')\n", " 1.000 KBTriple(rel='capital', sbj='Delhi', obj='India')\n", " 1.000 KBTriple(rel='capital', sbj='India', obj='Delhi')\n", " 1.000 KBTriple(rel='capital', sbj='Lucknow', obj='Uttar_Pradesh')\n", " 1.000 KBTriple(rel='capital', sbj='Uttar_Pradesh', obj='Lucknow')\n", " 1.000 KBTriple(rel='capital', sbj='Pakistan', obj='Lahore')\n", " 1.000 KBTriple(rel='capital', sbj='Lahore', obj='Pakistan')\n", "\n", "Highest probability examples for relation contains:\n", "\n", " 1.000 KBTriple(rel='contains', sbj='Canada', obj='Vancouver')\n", " 1.000 KBTriple(rel='contains', sbj='Sydney', obj='New_South_Wales')\n", " 1.000 KBTriple(rel='contains', sbj='Tenerife', obj='Canary_Islands')\n", " 1.000 KBTriple(rel='contains', sbj='Vancouver', obj='Canada')\n", " 1.000 KBTriple(rel='contains', 
sbj='Melbourne', obj='Australia')\n", " 1.000 KBTriple(rel='contains', sbj='Dhaka', obj='Bangladesh')\n", " 1.000 KBTriple(rel='contains', sbj='Campania', obj='Naples')\n", " 1.000 KBTriple(rel='contains', sbj='Edmonton', obj='Canada')\n", " 1.000 KBTriple(rel='contains', sbj='Pakistan', obj='Lahore')\n", " 1.000 KBTriple(rel='contains', sbj='Australia', obj='Melbourne')\n", "\n", "Highest probability examples for relation film_performance:\n", "\n", " 1.000 KBTriple(rel='film_performance', sbj='Mohabbatein', obj='Amitabh_Bachchan')\n", " 1.000 KBTriple(rel='film_performance', sbj='Amitabh_Bachchan', obj='Mohabbatein')\n", " 1.000 KBTriple(rel='film_performance', sbj='Charles_Dickens', obj='A_Christmas_Carol')\n", " 1.000 KBTriple(rel='film_performance', sbj='A_Christmas_Carol', obj='Charles_Dickens')\n", " 1.000 KBTriple(rel='film_performance', sbj='Akshay_Kumar', obj='Sonakshi_Sinha')\n", " 1.000 KBTriple(rel='film_performance', sbj='Sonakshi_Sinha', obj='Akshay_Kumar')\n", " 1.000 KBTriple(rel='film_performance', sbj='Kevin_Kline', obj='De-Lovely')\n", " 1.000 KBTriple(rel='film_performance', sbj='De-Lovely', obj='Kevin_Kline')\n", " 1.000 KBTriple(rel='film_performance', sbj='Hrithik_Roshan', obj='Kaho_Naa..._Pyaar_Hai')\n", " 1.000 KBTriple(rel='film_performance', sbj='Kaho_Naa..._Pyaar_Hai', obj='Hrithik_Roshan')\n", "\n", "Highest probability examples for relation founders:\n", "\n", " 1.000 KBTriple(rel='founders', sbj='Homer', obj='Iliad')\n", " 1.000 KBTriple(rel='founders', sbj='Iliad', obj='Homer')\n", " 1.000 KBTriple(rel='founders', sbj='William_C._Durant', obj='Louis_Chevrolet')\n", " 1.000 KBTriple(rel='founders', sbj='Louis_Chevrolet', obj='William_C._Durant')\n", " 1.000 KBTriple(rel='founders', sbj='Stan_Lee', obj='Marvel_Comics')\n", " 1.000 KBTriple(rel='founders', sbj='Marvel_Comics', obj='Stan_Lee')\n", " 1.000 KBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk')\n", " 1.000 KBTriple(rel='founders', sbj='Elon_Musk', obj='SpaceX')\n", " 1.000 KBTriple(rel='founders', sbj='Genghis_Khan', obj='Mongol_Empire')\n", " 1.000 KBTriple(rel='founders', sbj='Mongol_Empire', obj='Genghis_Khan')\n", "\n", "Highest probability examples for relation genre:\n", "\n", " 0.999 KBTriple(rel='genre', sbj='Mark_Twain_Tonight', obj='Hal_Holbrook')\n", " 0.999 KBTriple(rel='genre', sbj='Hal_Holbrook', obj='Mark_Twain_Tonight')\n", " 0.997 KBTriple(rel='genre', sbj='Oliver_Twist', obj='Charles_Dickens')\n", " 0.997 KBTriple(rel='genre', sbj='Charles_Dickens', obj='Oliver_Twist')\n", " 0.989 KBTriple(rel='genre', sbj='Pink_Floyd', obj='The_Dark_Side_of_the_Moon')\n", " 0.989 KBTriple(rel='genre', sbj='The_Dark_Side_of_the_Moon', obj='Pink_Floyd')\n", " 0.986 KBTriple(rel='genre', sbj='Sam_Raimi', obj='Andrew_Garfield')\n", " 0.986 KBTriple(rel='genre', sbj='Andrew_Garfield', obj='Sam_Raimi')\n", " 0.953 KBTriple(rel='genre', sbj='Ronald_Reagan', obj='Jurassic_Park_III')\n", " 0.953 KBTriple(rel='genre', sbj='Jurassic_Park_III', obj='Ronald_Reagan')\n", "\n", "Highest probability examples for relation has_sibling:\n", "\n", " 1.000 KBTriple(rel='has_sibling', sbj='Jess_Margera', obj='April_Margera')\n", " 1.000 KBTriple(rel='has_sibling', sbj='April_Margera', obj='Jess_Margera')\n", " 1.000 KBTriple(rel='has_sibling', sbj='Lincoln_Borglum', obj='Gutzon_Borglum')\n", " 1.000 KBTriple(rel='has_sibling', sbj='Gutzon_Borglum', obj='Lincoln_Borglum')\n", " 1.000 KBTriple(rel='has_sibling', sbj='Rufus_Wainwright', obj='Kate_McGarrigle')\n", " 1.000 KBTriple(rel='has_sibling', 
sbj='Kate_McGarrigle', obj='Rufus_Wainwright')\n", " 1.000 KBTriple(rel='has_sibling', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman')\n", " 1.000 KBTriple(rel='has_sibling', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson')\n", " 1.000 KBTriple(rel='has_sibling', sbj='Aretha_Franklin', obj='Dionne_Warwick')\n", " 1.000 KBTriple(rel='has_sibling', sbj='Dionne_Warwick', obj='Aretha_Franklin')\n", "\n", "Highest probability examples for relation has_spouse:\n", "\n", " 1.000 KBTriple(rel='has_spouse', sbj='Akhenaten', obj='Tutankhamun')\n", " 1.000 KBTriple(rel='has_spouse', sbj='Tutankhamun', obj='Akhenaten')\n", " 1.000 KBTriple(rel='has_spouse', sbj='William_C._Durant', obj='Louis_Chevrolet')\n", " 1.000 KBTriple(rel='has_spouse', sbj='Louis_Chevrolet', obj='William_C._Durant')\n", " 1.000 KBTriple(rel='has_spouse', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman')\n", " 1.000 KBTriple(rel='has_spouse', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson')\n", " 1.000 KBTriple(rel='has_spouse', sbj='Douglas_Fairbanks', obj='United_Artists')\n", " 1.000 KBTriple(rel='has_spouse', sbj='United_Artists', obj='Douglas_Fairbanks')\n", " 1.000 KBTriple(rel='has_spouse', sbj='Charles_II_of_England', obj='England')\n", " 1.000 KBTriple(rel='has_spouse', sbj='England', obj='Charles_II_of_England')\n", "\n", "Highest probability examples for relation is_a:\n", "\n", " 1.000 KBTriple(rel='is_a', sbj='Canada', obj='Vancouver')\n", " 1.000 KBTriple(rel='is_a', sbj='Vancouver', obj='Canada')\n", " 1.000 KBTriple(rel='is_a', sbj='Felidae', obj='Panthera')\n", " 1.000 KBTriple(rel='is_a', sbj='Panthera', obj='Felidae')\n", " 1.000 KBTriple(rel='is_a', sbj='Automobile', obj='South_Korea')\n", " 1.000 KBTriple(rel='is_a', sbj='South_Korea', obj='Automobile')\n", " 1.000 KBTriple(rel='is_a', sbj='Hibiscus', obj='Malvaceae')\n", " 1.000 KBTriple(rel='is_a', sbj='Malvaceae', obj='Hibiscus')\n", " 1.000 KBTriple(rel='is_a', sbj='Bird', obj='Phasianidae')\n", " 1.000 KBTriple(rel='is_a', sbj='Phasianidae', obj='Bird')\n", "\n", "Highest probability examples for relation nationality:\n", "\n", " 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Suryavarman_II')\n", " 1.000 KBTriple(rel='nationality', sbj='Suryavarman_II', obj='Cambodia')\n", " 1.000 KBTriple(rel='nationality', sbj='Titus', obj='Roman_Empire')\n", " 1.000 KBTriple(rel='nationality', sbj='Roman_Empire', obj='Titus')\n", " 1.000 KBTriple(rel='nationality', sbj='Norodom_Sihamoni', obj='Cambodia')\n", " 1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Norodom_Sihamoni')\n", " 1.000 KBTriple(rel='nationality', sbj='Jess_Margera', obj='April_Margera')\n", " 1.000 KBTriple(rel='nationality', sbj='April_Margera', obj='Jess_Margera')\n", " 1.000 KBTriple(rel='nationality', sbj='Genghis_Khan', obj='Mongol_Empire')\n", " 1.000 KBTriple(rel='nationality', sbj='Mongol_Empire', obj='Genghis_Khan')\n", "\n", "Highest probability examples for relation parents:\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " 1.000 KBTriple(rel='parents', sbj='Lincoln_Borglum', obj='Gutzon_Borglum')\n", " 1.000 KBTriple(rel='parents', sbj='Gutzon_Borglum', obj='Lincoln_Borglum')\n", " 1.000 KBTriple(rel='parents', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great')\n", " 1.000 KBTriple(rel='parents', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon')\n", " 1.000 KBTriple(rel='parents', sbj='Thomas_Boleyn,_1st_Earl_of_Wiltshire', obj='Anne_Boleyn')\n", " 1.000 KBTriple(rel='parents', sbj='Anne_Boleyn', 
obj='Thomas_Boleyn,_1st_Earl_of_Wiltshire')\n", " 1.000 KBTriple(rel='parents', sbj='Jess_Margera', obj='April_Margera')\n", " 1.000 KBTriple(rel='parents', sbj='April_Margera', obj='Jess_Margera')\n", " 1.000 KBTriple(rel='parents', sbj='Prince_Philip,_Duke_of_Edinburgh', obj='Anne,_Princess_Royal')\n", " 1.000 KBTriple(rel='parents', sbj='Anne,_Princess_Royal', obj='Prince_Philip,_Duke_of_Edinburgh')\n", "\n", "Highest probability examples for relation place_of_birth:\n", "\n", " 1.000 KBTriple(rel='place_of_birth', sbj='Cambodia', obj='Suryavarman_II')\n", " 1.000 KBTriple(rel='place_of_birth', sbj='Suryavarman_II', obj='Cambodia')\n", " 1.000 KBTriple(rel='place_of_birth', sbj='Nepal', obj='Bagmati_Zone')\n", " 1.000 KBTriple(rel='place_of_birth', sbj='Bagmati_Zone', obj='Nepal')\n", " 0.999 KBTriple(rel='place_of_birth', sbj='San_Antonio', obj='Actor')\n", " 0.999 KBTriple(rel='place_of_birth', sbj='Actor', obj='San_Antonio')\n", " 0.999 KBTriple(rel='place_of_birth', sbj='Titus', obj='Roman_Empire')\n", " 0.999 KBTriple(rel='place_of_birth', sbj='Roman_Empire', obj='Titus')\n", " 0.998 KBTriple(rel='place_of_birth', sbj='Roman_Empire', obj='Septimius_Severus')\n", " 0.998 KBTriple(rel='place_of_birth', sbj='Septimius_Severus', obj='Roman_Empire')\n", "\n", "Highest probability examples for relation place_of_death:\n", "\n", " 1.000 KBTriple(rel='place_of_death', sbj='Titus', obj='Roman_Empire')\n", " 1.000 KBTriple(rel='place_of_death', sbj='Roman_Empire', obj='Titus')\n", " 1.000 KBTriple(rel='place_of_death', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great')\n", " 1.000 KBTriple(rel='place_of_death', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon')\n", " 1.000 KBTriple(rel='place_of_death', sbj='England', obj='Elizabeth_I_of_England')\n", " 1.000 KBTriple(rel='place_of_death', sbj='Elizabeth_I_of_England', obj='England')\n", " 1.000 KBTriple(rel='place_of_death', sbj='Uruguay', obj='World_Trade_Organization')\n", " 1.000 KBTriple(rel='place_of_death', sbj='World_Trade_Organization', obj='Uruguay')\n", " 0.999 KBTriple(rel='place_of_death', sbj='Roman_Empire', obj='Tiberius_Julius_Alexander')\n", " 0.999 KBTriple(rel='place_of_death', sbj='Tiberius_Julius_Alexander', obj='Roman_Empire')\n", "\n", "Highest probability examples for relation profession:\n", "\n", " 1.000 KBTriple(rel='profession', sbj='Canada', obj='Vancouver')\n", " 1.000 KBTriple(rel='profession', sbj='Vancouver', obj='Canada')\n", " 1.000 KBTriple(rel='profession', sbj='Little_Women', obj='Louisa_May_Alcott')\n", " 1.000 KBTriple(rel='profession', sbj='Louisa_May_Alcott', obj='Little_Women')\n", " 1.000 KBTriple(rel='profession', sbj='Jess_Margera', obj='April_Margera')\n", " 1.000 KBTriple(rel='profession', sbj='April_Margera', obj='Jess_Margera')\n", " 1.000 KBTriple(rel='profession', sbj='Aldous_Huxley', obj='Eyeless_in_Gaza')\n", " 1.000 KBTriple(rel='profession', sbj='Eyeless_in_Gaza', obj='Aldous_Huxley')\n", " 0.999 KBTriple(rel='profession', sbj='Hrithik_Roshan', obj='Kaho_Naa..._Pyaar_Hai')\n", " 0.999 KBTriple(rel='profession', sbj='Kaho_Naa..._Pyaar_Hai', obj='Hrithik_Roshan')\n", "\n", "Highest probability examples for relation worked_at:\n", "\n", " 1.000 KBTriple(rel='worked_at', sbj='William_C._Durant', obj='Louis_Chevrolet')\n", " 1.000 KBTriple(rel='worked_at', sbj='Louis_Chevrolet', obj='William_C._Durant')\n", " 1.000 KBTriple(rel='worked_at', sbj='SpaceX', obj='Elon_Musk')\n", " 1.000 KBTriple(rel='worked_at', sbj='Elon_Musk', obj='SpaceX')\n", " 1.000 KBTriple(rel='worked_at', 
sbj='Leonard_Chess', obj='Chess_Records')\n", " 1.000 KBTriple(rel='worked_at', sbj='Chess_Records', obj='Leonard_Chess')\n", " 1.000 KBTriple(rel='worked_at', sbj='Genghis_Khan', obj='Mongol_Empire')\n", " 1.000 KBTriple(rel='worked_at', sbj='Mongol_Empire', obj='Genghis_Khan')\n", " 1.000 KBTriple(rel='worked_at', sbj='Marvel_Comics', obj='Comic_book')\n", " 1.000 KBTriple(rel='worked_at', sbj='Comic_book', obj='Marvel_Comics')\n", "\n" ] } ], "source": [ "rel_ext.find_new_relation_instances(\n", " dataset,\n", " featurizers=[simple_bag_of_words_featurizer])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are actually some good discoveries here! The predictions for the `author` relation seem especially good. Of course, there are also plenty of bad results, and a few that are downright comical. We may hope that as we improve our models and optimize performance in our automatic evaluations, the results we observe in this manual evaluation improve as well.\n", "\n", "__Optional exercise:__ Note that every time we predict that a given relation holds between entities `X` and `Y`, we also predict, with equal confidence, that it holds between `Y` and `X`. Why? How could we fix this?\n", "\n", "\\[ [top](#Relation-extraction-using-distant-supervision) \\]" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.8" }, "widgets": { "state": {}, "version": "1.1.2" } }, "nbformat": 4, "nbformat_minor": 2 }