{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Homework 1: Word similarity tasks" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "__author__ = \"Christopher Potts\"\n", "__version__ = \"CS224u, Stanford, Spring 2019\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Contents\n", "\n", "1. [Overview](#Overview)\n", "1. [Set-up](#Set-up)\n", "1. [Dataset readers](#Dataset-readers)\n", "1. [Dataset comparisons](#Dataset-comparisons)\n", " 1. [Vocab overlap](#Vocab-overlap)\n", " 1. [Pair overlap and score correlations](#Pair-overlap-and-score-correlations)\n", "1. [Evaluation](#Evaluation)\n", " 1. [Dataset evaluation](#Dataset-evaluation)\n", " 1. [Dataset error analysis](#Dataset-error-analysis)\n", " 1. [Full evaluation](#Full-evaluation)\n", "1. [Homework questions](#Homework-questions)\n", " 1. [PPMI as a baseline [1 point]](#PPMI-as-a-baseline-[1-point])\n", " 1. [Gigaword with LSA at a few dimensions [1 point]](#Gigaword-with-LSA-at-a-few-dimensions-[1-point])\n", " 1. [Gigaword with GloVe for a small number of iterations [1 point]](#Gigaword-with-GloVe-for-a-small-number-of-iterations-[1-point])\n", " 1. [t-test reweighting [2 points]](#t-test-reweighting-[2-points])\n", " 1. [Your original system [4 points]](#Your-original-system-[4-points])\n", "1. [Bake-off [1 point]](#Bake-off-[1-point])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overview\n", "\n", "Word similarity datasets have long been used to evaluate distributed representations. This notebook provides basic code for conducting such analyses with a number of datasets:\n", "\n", "| Dataset | Pairs | Task-type | Current best Spearman $\\rho$ | Best $\\rho$ paper | |\n", "|---------|-------|-----------|------------------------------|-------------------|---|\n", "| [WordSim-353](http://www.cs.technion.ac.il/~gabr/resources/data/wordsim353/) | 353 | Relatedness | 82.8 | [Speer et al. 2017](https://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14972) |\n", "| [MTurk-771](http://www2.mta.ac.il/~gideon/mturk771.html) | 771 | Relatedness | 81.0 | [Speer et al. 2017](https://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14972) |\n", "| [The MEN Test Collection](http://clic.cimec.unitn.it/~elia.bruni/MEN) | 3,000 | Relatedness | 86.6 | [Speer et al. 2017](https://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14972) | \n", "| [SimVerb-3500-dev](http://people.ds.cam.ac.uk/dsg40/simverb.html) | 500 | Similarity | 61.1 | [Mrkišć et al. 2016](https://arxiv.org/pdf/1603.00892.pdf) |\n", "| [SimVerb-3500-test](http://people.ds.cam.ac.uk/dsg40/simverb.html) | 3,000 | Similarity | 62.4 | [Mrkišć et al. 2016](https://arxiv.org/pdf/1603.00892.pdf) |\n", "\n", "Each of the similarity datasets contains word pairs with an associated human-annotated similarity score. (We convert these to distances to align intuitively with our distance measure functions.) The evaluation code measures the distance between the word pairs in your chosen VSM (which should be a `pd.DataFrame`).\n", "\n", "The evaluation metric for each dataset is the [Spearman correlation coefficient $\\rho$](https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient) between the annotated scores and your distances, as is standard in the literature. We also macro-average these correlations across the datasets for an overall summary. 
(In using the macro-average, we are saying that we care about all the datasets equally, even though they vary in size.)\n", "\n", "This homework ([questions at the bottom of this notebook](#Homework-questions)) asks you to write code that uses the count matrices in `data/vsmdata` to create and evaluate some baseline models as well as an original model $M$ that you design. This accounts for 9 of the 10 points for this assignment.\n", "\n", "For the associated bake-off, we will distribute two new word similarity or relatedness datasets and associated reader code, and you will evaluate $M$ (no additional training or tuning allowed!) on those new datasets. Systems that enter will receive the additional homework point, and systems that achieve the top score will receive an additional 0.5 points." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set-up" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "from collections import defaultdict\n", "import csv\n", "import itertools\n", "import numpy as np\n", "import os\n", "import pandas as pd\n", "from scipy.stats import spearmanr\n", "import vsm" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "VSM_HOME = os.path.join('data', 'vsmdata')\n", "\n", "WORDSIM_HOME = os.path.join('data', 'wordsim')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Dataset readers" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "def wordsim_dataset_reader(\n", " src_filename, \n", " header=False, \n", " delimiter=',', \n", " score_col_index=2):\n", " \"\"\"Basic reader that works for all similarity datasets. They are \n", " all tabular-style releases where the first two columns give the \n", " word and a later column (`score_col_index`) gives the score.\n", "\n", " Parameters\n", " ----------\n", " src_filename : str\n", " Full path to the source file.\n", " header : bool\n", " Whether `src_filename` has a header. Default: False\n", " delimiter : str\n", " Field delimiter in `src_filename`. Default: ','\n", " score_col_index : int\n", " Column containing the similarity scores Default: 2\n", "\n", " Yields\n", " ------\n", " (str, str, float)\n", " (w1, w2, score) where `score` is the negative of the similarity\n", " score in the file so that we are intuitively aligned with our\n", " distance-based code. 
To align with our VSMs, all the words are \n", " downcased.\n", "\n", " \"\"\"\n", " with open(src_filename, encoding='utf8') as f:\n", " reader = csv.reader(f, delimiter=delimiter)\n", " if header:\n", " next(reader)\n", " for row in reader:\n", " w1 = row[0].strip().lower()\n", " w2 = row[1].strip().lower()\n", " score = row[score_col_index]\n", " # Negative of scores to align intuitively with distance functions:\n", " score = -float(score)\n", " yield (w1, w2, score)\n", "\n", "def wordsim353_reader():\n", " \"\"\"WordSim-353: http://www.cs.technion.ac.il/~gabr/resources/data/wordsim353/\"\"\"\n", " src_filename = os.path.join(\n", " WORDSIM_HOME, 'wordsim353', 'combined.csv')\n", " return wordsim_dataset_reader(\n", " src_filename, header=True)\n", "\n", "def mturk771_reader():\n", " \"\"\"MTURK-771: http://www2.mta.ac.il/~gideon/mturk771.html\"\"\"\n", " src_filename = os.path.join(\n", " WORDSIM_HOME, 'MTURK-771.csv')\n", " return wordsim_dataset_reader(\n", " src_filename, header=False)\n", "\n", "def simverb3500dev_reader():\n", " \"\"\"SimVerb-3500: http://people.ds.cam.ac.uk/dsg40/simverb.html\"\"\"\n", " src_filename = os.path.join(\n", " WORDSIM_HOME, 'SimVerb-3500', 'SimVerb-500-dev.txt')\n", " return wordsim_dataset_reader(\n", " src_filename, delimiter=\"\\t\", header=True, score_col_index=3)\n", "\n", "def simverb3500test_reader():\n", " \"\"\"SimVerb-3500: http://people.ds.cam.ac.uk/dsg40/simverb.html\"\"\"\n", " src_filename = os.path.join(\n", " WORDSIM_HOME, 'SimVerb-3500', 'SimVerb-3000-test.txt')\n", " return wordsim_dataset_reader(\n", " src_filename, delimiter=\"\\t\", header=True, score_col_index=3)\n", "\n", "def men_reader():\n", " \"\"\"MEN: http://clic.cimec.unitn.it/~elia.bruni/MEN\"\"\"\n", " src_filename = os.path.join(\n", " WORDSIM_HOME, 'MEN', 'MEN_dataset_natural_form_full')\n", " return wordsim_dataset_reader(\n", " src_filename, header=False, delimiter=' ') " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This collection of readers will be useful for flexible evaluations:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "READERS = (wordsim353_reader, mturk771_reader, simverb3500dev_reader, \n", " simverb3500test_reader, men_reader)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Dataset comparisons\n", "\n", "This section does some basic analysis of the datasets. The goal is to obtain a deeper understanding of what problem we're solving – what strengths and weaknesses the datasets have and how they relate to each other. For a full-fledged project, we would want to continue work like this and report on it in the paper, to provide context for the results." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "def get_reader_name(reader):\n", " \"\"\"Return a cleaned-up name for the similarity dataset \n", " iterator `reader`\n", " \"\"\"\n", " return reader.__name__.replace(\"_reader\", \"\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Vocab overlap\n", "\n", "How many vocabulary items are shared across the datasets?" 
] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "def get_reader_vocab(reader):\n", " \"\"\"Return the set of words (str) in `reader`.\"\"\"\n", " vocab = set()\n", " for w1, w2, _ in reader():\n", " vocab.add(w1)\n", " vocab.add(w2)\n", " return vocab" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "def get_reader_vocab_overlap(readers=READERS):\n", " \"\"\"Get data on the vocab-level relationships between pairs of \n", " readers. Returns a a pd.DataFrame containing this information.\n", " \"\"\"\n", " data = []\n", " for r1, r2 in itertools.product(readers, repeat=2): \n", " v1 = get_reader_vocab(r1)\n", " v2 = get_reader_vocab(r2)\n", " d = {\n", " 'd1': get_reader_name(r1),\n", " 'd2': get_reader_name(r2),\n", " 'overlap': len(v1 & v2), \n", " 'union': len(v1 | v2),\n", " 'd1_size': len(v1),\n", " 'd2_size': len(v2)}\n", " data.append(d)\n", " return pd.DataFrame(data)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "vocab_overlap = get_reader_vocab_overlap()" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "def vocab_overlap_crosstab(vocab_overlap):\n", " \"\"\"Return an intuitively formatted `pd.DataFrame` giving \n", " vocab-overlap counts for all the datasets represented in \n", " `vocab_overlap`, the output of `get_reader_vocab_overlap`.\n", " \"\"\" \n", " xtab = pd.crosstab(\n", " vocab_overlap['d1'], \n", " vocab_overlap['d2'], \n", " values=vocab_overlap['overlap'], \n", " aggfunc=np.mean)\n", " # Blank out the upper left to reduce visual clutter:\n", " for i in range(0, xtab.shape[0]):\n", " for j in range(i+1, xtab.shape[1]):\n", " xtab.iloc[i, j] = '' \n", " return xtab " ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
d2menmturk771simverb3500devsimverb3500testwordsim353
d1
men751
mturk7712301113
simverb3500dev2367536
simverb3500test3094532823
wordsim353861581317437
\n", "
" ], "text/plain": [ "d2 men mturk771 simverb3500dev simverb3500test wordsim353\n", "d1 \n", "men 751 \n", "mturk771 230 1113 \n", "simverb3500dev 23 67 536 \n", "simverb3500test 30 94 532 823 \n", "wordsim353 86 158 13 17 437" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "vocab_overlap_crosstab(vocab_overlap)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This looks reasonable. By design, the SimVerb dev and test sets have a lot of overlap. The other overlap numbers are pretty small, even adjusting for dataset size." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Pair overlap and score correlations\n", "\n", "How many word pairs are shared across datasets and, for shared pairs, what is the correlation between their scores? That is, do the datasets agree?" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "def get_reader_pairs(reader):\n", " \"\"\"Return the set of alphabetically-sorted word (str) tuples \n", " in `reader`\n", " \"\"\"\n", " return {tuple(sorted([w1, w2])): score for w1, w2, score in reader()}" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "def get_reader_pair_overlap(readers=READERS):\n", " \"\"\"Return a `pd.DataFrame` giving the number of overlapping \n", " word-pairs in pairs of readers, along with the Spearman \n", " correlations.\n", " \"\"\" \n", " data = []\n", " for r1, r2 in itertools.product(READERS, repeat=2):\n", " if r1.__name__ != r2.__name__:\n", " d1 = get_reader_pairs(r1)\n", " d2 = get_reader_pairs(r2)\n", " overlap = []\n", " for p, s in d1.items():\n", " if p in d2:\n", " overlap.append([s, d2[p]])\n", " if overlap:\n", " s1, s2 = zip(*overlap)\n", " rho = spearmanr(s1, s2)[0]\n", " else:\n", " rho = None \n", " d = {\n", " 'd1': get_reader_name(r1),\n", " 'd2': get_reader_name(r2), \n", " 'pair_overlap': len(overlap),\n", " 'rho': rho}\n", " data.append(d)\n", " df = pd.DataFrame(data)\n", " df = df.sort_values('pair_overlap', ascending=False)\n", " # Return only every other row to avoid repeats:\n", " return df[::2].reset_index(drop=True)" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
d1d2pair_overlaprho
0menmturk771110.592191
1wordsim353men50.700000
2simverb3500testmturk77140.400000
3mensimverb3500test21.000000
4simverb3500testsimverb3500dev1NaN
5wordsim353simverb3500dev0NaN
6simverb3500testwordsim3530NaN
7simverb3500devwordsim3530NaN
8mturk771wordsim3530NaN
9mensimverb3500dev0NaN
\n", "
" ], "text/plain": [ " d1 d2 pair_overlap rho\n", "0 men mturk771 11 0.592191\n", "1 wordsim353 men 5 0.700000\n", "2 simverb3500test mturk771 4 0.400000\n", "3 men simverb3500test 2 1.000000\n", "4 simverb3500test simverb3500dev 1 NaN\n", "5 wordsim353 simverb3500dev 0 NaN\n", "6 simverb3500test wordsim353 0 NaN\n", "7 simverb3500dev wordsim353 0 NaN\n", "8 mturk771 wordsim353 0 NaN\n", "9 men simverb3500dev 0 NaN" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "get_reader_pair_overlap()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This looks reasonable: none of the datasets have a lot of overlapping pairs, so we don't have to worry too much about places where they give conflicting scores." ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## Evaluation\n", "\n", "This section builds up the evaluation code that you'll use for the homework and bake-off. For illustrations, I'll read in a VSM created from `data/vsmdata/giga_window5-scaled.csv.gz`:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "giga5 = pd.read_csv(\n", " os.path.join(VSM_HOME, \"giga_window5-scaled.csv.gz\"), index_col=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Dataset evaluation" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "def word_similarity_evaluation(reader, df, distfunc=vsm.cosine):\n", " \"\"\"Word-similarity evalution framework.\n", " \n", " Parameters\n", " ----------\n", " reader : iterator\n", " A reader for a word-similarity dataset. Just has to yield\n", " tuples (word1, word2, score). \n", " df : pd.DataFrame\n", " The VSM being evaluated. \n", " distfunc : function mapping vector pairs to floats.\n", " The measure of distance between vectors. Can also be \n", " `vsm.euclidean`, `vsm.matching`, `vsm.jaccard`, as well as \n", " any other distance measure between 1d vectors. \n", " \n", " Raises\n", " ------\n", " ValueError\n", " If `df.index` is not a subset of the words in `reader`.\n", " \n", " Returns\n", " -------\n", " float, data\n", " `float` is the Spearman rank correlation coefficient between \n", " the dataset scores and the similarity values obtained from \n", " `df` using `distfunc`. This evaluation is sensitive only to \n", " rankings, not to absolute values. `data` is a `pd.DataFrame` \n", " with columns['word1', 'word2', 'score', 'distance'].\n", " \n", " \"\"\"\n", " data = []\n", " for w1, w2, score in reader():\n", " d = {'word1': w1, 'word2': w2, 'score': score}\n", " for w in [w1, w2]:\n", " if w not in df.index:\n", " raise ValueError(\n", " \"Word '{}' is in the similarity dataset {} but not in the \"\n", " \"DataFrame, making this evaluation ill-defined. 
Please \"\n", " \"switch to a DataFrame with an appropriate vocabulary.\".\n", " format(w, get_reader_name(reader))) \n", " d['distance'] = distfunc(df.loc[w1], df.loc[w2])\n", " data.append(d)\n", " data = pd.DataFrame(data)\n", " rho, pvalue = spearmanr(data['score'].values, data['distance'].values)\n", " return rho, data" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "rho, eval_df = word_similarity_evaluation(men_reader, giga5)" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.40375964105441753" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "rho" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
distancescoreword1word2
00.956828-50.0sunsunlight
10.979143-50.0automobilecar
20.970105-49.0riverwater
30.980475-49.0stairsstaircase
40.963624-49.0morningsunrise
\n", "
" ], "text/plain": [ " distance score word1 word2\n", "0 0.956828 -50.0 sun sunlight\n", "1 0.979143 -50.0 automobile car\n", "2 0.970105 -49.0 river water\n", "3 0.980475 -49.0 stairs staircase\n", "4 0.963624 -49.0 morning sunrise" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "eval_df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Dataset error analysis\n", "\n", "For error analysis, we can look at the words with the largest delta between the gold score and the distance value in our VSM. We do these comparisons based on ranks, just as with our primary metric (Spearman $\\rho$), and we normalize both rankings so that they have a comparable number of levels." ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "def word_similarity_error_analysis(eval_df): \n", " eval_df['distance_rank'] = _normalized_ranking(eval_df['distance'])\n", " eval_df['score_rank'] = _normalized_ranking(eval_df['score'])\n", " eval_df['error'] = abs(eval_df['distance_rank'] - eval_df['score_rank'])\n", " return eval_df.sort_values('error')\n", " \n", " \n", "def _normalized_ranking(series):\n", " ranks = series.rank(method='dense')\n", " return ranks / ranks.sum() " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Best predictions:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
distancescoreword1word2distance_rankscore_rankerror
10410.975007-32.0hummingbirdpelican0.0002430.0002442.434543e-07
23150.980834-13.0lilypigs0.0004880.0004874.016842e-07
29510.983473-4.0bucketgirls0.0006020.0006034.151568e-07
1500.968690-43.0nightsunset0.0001020.0001036.520315e-07
20620.979721-17.0oakpetals0.0004350.0004367.162632e-07
\n", "
" ], "text/plain": [ " distance score word1 word2 distance_rank score_rank \\\n", "1041 0.975007 -32.0 hummingbird pelican 0.000243 0.000244 \n", "2315 0.980834 -13.0 lily pigs 0.000488 0.000487 \n", "2951 0.983473 -4.0 bucket girls 0.000602 0.000603 \n", "150 0.968690 -43.0 night sunset 0.000102 0.000103 \n", "2062 0.979721 -17.0 oak petals 0.000435 0.000436 \n", "\n", " error \n", "1041 2.434543e-07 \n", "2315 4.016842e-07 \n", "2951 4.151568e-07 \n", "150 6.520315e-07 \n", "2062 7.162632e-07 " ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "word_similarity_error_analysis(eval_df).head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Worst predictions:" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
distancescoreword1word2distance_rankscore_rankerror
670.984622-45.0branchtwigs0.0006300.0000770.000553
1900.987704-43.0birdsstork0.0006570.0001030.000554
1850.990993-43.0bloomtulip0.0006630.0001030.000561
1670.991760-43.0bloomblossom0.0006640.0001030.000561
1980.992406-43.0bloomrose0.0006640.0001030.000561
\n", "
" ], "text/plain": [ " distance score word1 word2 distance_rank score_rank error\n", "67 0.984622 -45.0 branch twigs 0.000630 0.000077 0.000553\n", "190 0.987704 -43.0 birds stork 0.000657 0.000103 0.000554\n", "185 0.990993 -43.0 bloom tulip 0.000663 0.000103 0.000561\n", "167 0.991760 -43.0 bloom blossom 0.000664 0.000103 0.000561\n", "198 0.992406 -43.0 bloom rose 0.000664 0.000103 0.000561" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "word_similarity_error_analysis(eval_df).tail()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Full evaluation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A full evaluation is just a loop over all the readers on which one want to evaluate, with a macro-average at the end:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "def full_word_similarity_evaluation(df, readers=READERS, distfunc=vsm.cosine):\n", " \"\"\"Evaluate a VSM against all datasets in `readers`.\n", " \n", " Parameters\n", " ----------\n", " df : pd.DataFrame\n", " readers : tuple \n", " The similarity dataset readers on which to evaluate.\n", " distfunc : function mapping vector pairs to floats.\n", " The measure of distance between vectors. Can also be \n", " `vsm.euclidean`, `vsm.matching`, `vsm.jaccard`, as well as \n", " any other distance measure between 1d vectors. \n", " \n", " Returns\n", " -------\n", " pd.Series\n", " Mapping dataset names to Spearman r values.\n", " \n", " \"\"\" \n", " scores = {} \n", " for reader in readers:\n", " score, data_df = word_similarity_evaluation(reader, df, distfunc=distfunc)\n", " scores[get_reader_name(reader)] = score\n", " series = pd.Series(scores, name='Spearman r')\n", " series['Macro-average'] = series.mean()\n", " return series" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "wordsim353 0.327831\n", "mturk771 0.143146\n", "simverb3500dev -0.068038\n", "simverb3500test -0.066348\n", "men 0.403760\n", "Macro-average 0.148070\n", "Name: Spearman r, dtype: float64" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "full_word_similarity_evaluation(giga5, distfunc=vsm.cosine)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Homework questions\n", "\n", "Please embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### PPMI as a baseline [1 point]\n", "\n", "The insight behind PPMI is a recurring theme in word representation learning, so it is a natural baseline for our task. For this question, submit code to do the following:\n", "\n", "1. Read the two count matrices created with a window of 20 and a flat scaling function into `pd.DataFrame`s, as is done in the VSM notebooks. The files are `data/vsmdata/giga_window20-flat.csv.gz` and `data/vsmdata/imdb_window20-flat.csv.gz`, and the VSM notebooks provide examples of the needed code.\n", "\n", "1. Reweight these count matries with PPMI.\n", "\n", "1. Evaluate these reweighted matrices using `full_word_similarity_evaluation`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Gigaword with LSA at a few dimensions [1 point]\n", "\n", "We might expect PPMI and LSA to form a solid pipeline that combines the strengths of PPMI with those of dimensionality reduction. However, LSA has a hyper-parameter $k$ – the dimensionality of the final representations – that will impact performance. For this problem, submit code to do the following:\n", "\n", "1. Apply LSA with $k \\in \\{100, 500, 1000\\}$ to the PPMI reweighted version of `data/vsmdata/giga_window20-flat.csv.gz` that you created in the previous question.\n", "\n", "2. Print out each $k$ and its associated score. (For concise, clear code, you can use `results.loc['Macro-average']` where `results` is a `pd.DataFrame` returned by `full_word_similarity_evaluation`.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Gigaword with GloVe for a small number of iterations [1 point]\n", "\n", "Ideally, we would run GloVe for a very large number of iterations on a GPU machine to compare it against its close cousin PMI. However, we don't want this homework to cost you a lot of money or monopolize a lot of your available computing resources, so let's instead just probe GloVe a little bit to see if it has promise for our task. For this problem, submit code to do the following:\n", "\n", "1. Run GloVe for 10, 100, and 200 iterations on `data/vsmdata/giga_window20-flat.csv.gz`, using the `mittens` implementation of `GloVe`. \n", " * For all the other parameters to `mittens.GloVe` besides `max_iter`, use the package's defaults.\n", " * Because of the way that implementation is designed, these will have to be separate runs, but they should be relatively quick. \n", "1. Print out each value of `max_iter` and its associated score according to `full_word_similarity_evaluation`. The trend should give you a sense for whether it is worth running GloVe for more iterations.\n", " * Note: your trained GloVe matrix `X` needs to be wrapped in a `pd.DataFrame` to work with `full_word_similarity_evaluation`. `pd.DataFrame(X, index=giga20.index)` will do the trick.\n", " * Note: if `glv` is your GloVe model, then running `glv.sess.close()` after each model is trained will silence warnings from TensorFlow about interactive sessions being active." ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "from mittens import GloVe\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### t-test reweighting [2 points]\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The t-test statistic can be thought of as a reweighting scheme. For a count matrix $X$, row index $i$, and column index $j$:\n", "\n", "$$\\textbf{ttest}(X, i, j) = \n", "\\frac{\n", " P(X, i, j) - \\big(P(X, i, *)P(X, *, j)\\big)\n", "}{\n", "\\sqrt{(P(X, i, *)P(X, *, j))}\n", "}$$\n", "\n", "where $P(X, i, j)$ is $X_{ij}$ divided by the total values in $X$, $P(X, i, *)$ is the sum of the values in row $i$ of $X$ divided by the total values in $X$, and $P(X, *, j)$ is the sum of the values in column $j$ of $X$ divided by the total values in $X$.\n", "\n", "For this problem, implement this reweighting scheme. You can use `test_ttest_implementation` below to check that your implementation is correct. 
You do not need to use this for any evaluations, though we hope you will be curious enough to do so!" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "def test_ttest_implementation(func):\n", " \"\"\"`func` should be an implementation of t-test reweighting as \n", " defined above.\n", " \"\"\"\n", " X = pd.DataFrame(np.array([\n", " [ 4., 4., 2., 0.],\n", " [ 4., 61., 8., 18.],\n", " [ 2., 8., 10., 0.],\n", " [ 0., 18., 0., 5.]])) \n", " actual = np.array([\n", " [ 0.33056, -0.07689, 0.04321, -0.10532],\n", " [-0.07689, 0.03839, -0.10874, 0.07574],\n", " [ 0.04321, -0.10874, 0.36111, -0.14894],\n", " [-0.10532, 0.07574, -0.14894, 0.05767]]) \n", " predicted = func(X)\n", " assert np.array_equal(predicted.round(5), actual)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Your original system [4 points]\n", "\n", "This question asks you to design your own model. You can of course include steps made above (ideally, the above questions informed your system design!), but your model should not be literally identical to any of the above models. Other ideas: retrofitting, autoencoders, GloVe, subword modeling, ... \n", "\n", "Your code needs to be able to \n", "\n", "1. operate on all of the count matrices in `data/vsmdata` (except `gigawordnyt-advmod-matrix.csv.gz`, which doesn't have the right vocab for this task and is included just for fun); __other pretrained vectors cannot be introduced__; and \n", "1. be self-contained, so that we can work with your model directly in your homework submission notebook. (If your model depends on external data or other resources, please upload to Canvas a zip archive containing those resources and your submission notebook.)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Bake-off [1 point]\n", "\n", "For the bake-off, we will release two additional datasets right after class on April 15. The announcement will go out on Piazza. We will also release reader code for these datasets that you can paste into this notebook. You will evaluate your custom model $M$ (from the previous question) on these new datasets using `full_word_similarity_evaluation`. Rules:\n", "\n", "1. Only one evaluation is permitted.\n", "1. No additional system tuning is permitted once the bake-off has started.\n", "\n", "To enter the bake-off, upload this notebook on Canvas:\n", "\n", "https://canvas.stanford.edu/courses/99711/assignments/187240\n", "\n", "The cells below this one constitute your bake-off entry.\n", "\n", "People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.\n", "\n", "The bake-off will close at 4:30 pm on April 17. Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time." ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "# Enter your bake-off assessment code into this cell. 
\n", "# Please do not remove this comment.\n", "\n" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [], "source": [ "# On an otherwise blank line in this cell, please enter\n", "# your \"Macro-average\" value as reported by the code above. \n", "# Please enter only a number between 0 and 1 inclusive.\n", "# Please do not remove this comment.\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" }, "widgets": { "state": {}, "version": "1.1.2" } }, "nbformat": 4, "nbformat_minor": 2 }