{ "cells": [ { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Evaluation methods in NLP" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "__author__ = \"Christopher Potts\"\n", "__version__ = \"CS224u, Stanford, Spring 2019\"" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Contents\n", "\n", "1. [Overview](#Overview)\n", "1. [Your projects](#Your-projects)\n", "1. [Set-up](#Set-up)\n", "1. [Data organization](#Data-organization)\n", " 1. [Train/dev/test](#Train/dev/test)\n", " 1. [No fixed splits](#No-fixed-splits)\n", "1. [Cross-validation](#Cross-validation)\n", " 1. [Random splits](#Random-splits)\n", " 1. [K-folds](#K-folds)\n", "1. [Baselines](#Baselines)\n", " 1. [Baselines are crucial for strong experiments](#Baselines-are-crucial-for-strong-experiments)\n", " 1. [Random baselines](#Random-baselines)\n", " 1. [Task-specific baselines](#Task-specific-baselines)\n", "1. [Hyperparameter optimization](#Hyperparameter-optimization)\n", " 1. [Rationale](#Rationale)\n", " 1. [The ideal hyperparameter optimization setting](#The-ideal-hyperparameter-optimization-setting)\n", " 1. [Practical considerations, and some compromises](#Practical-considerations,-and-some-compromises)\n", " 1. [Hyperparameter optimization tools](#Hyperparameter-optimization-tools)\n", "1. [Classifier comparison](#Classifier-comparison)\n", " 1. [Practical differences](#Practical-differences)\n", " 1. [Confidence intervals](#Confidence-intervals)\n", " 1. [Wilcoxon signed-rank test](#Wilcoxon-signed-rank-test)\n", " 1. [McNemar's test](#McNemar's-test)\n", "1. [Assessing models without convergence](#Assessing-models-without-convergence)\n", " 1. [Incremental dev set testing](#Incremental-dev-set-testing)\n", " 1. [Early stopping](#Early-stopping)\n", " 1. [Learning curves with confidence intervals](#Learning-curves-with-confidence-intervals)\n", "1. [The role of random parameter initialization](#The-role-of-random-parameter-initialization)\n", "1. [Closing remarks](#Closing-remarks)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Overview\n", "\n", "This notebook is an overview of experimental methods for NLU. My primary goal is to help you with the experiments you'll be doing for your projects. It is a companion to [the evaluation metrics notebook](evaluation_metrics.ipynb), which I suggest studying first.\n", "\n", "The teaching team will be paying special attention to how you conduct your evaluations, so this notebook should create common ground around what our values are.\n", "\n", "This notebook is far from comprehensive. I hope it covers the most common tools, techniques, and challenges in the field. Beyond that, I'm hoping the examples here suggest a perspective on experiments and evaluations that generalizes to other topics and techniques." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Your projects\n", "\n", "1. We will never evaluate a project based on how \"good\" the results are.\n", " 1. Publication venues do this, because they have additional constraints on space that lead them to favor positive evidence for new developments over negative results.\n", " 1. In CS224u, we are not subject to this constraint, so we can do the right and good thing of valuing positive results, negative results, and everything in between.\n", "\n", "1. 
We __will__ evaluate your project on: \n", " 1. The appropriateness of the metrics\n", " 1. The strength of the methods\n", " 1. The extent to which the paper is open and clear-sighted about the limits of its findings. " ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Set-up" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "from collections import defaultdict\n", "import numpy as np\n", "import pandas as pd\n", "from scipy import stats\n", "from sklearn.datasets import make_classification\n", "from sklearn.model_selection import train_test_split\n", "from torch_shallow_neural_classifier import TorchShallowNeuralClassifier\n", "import utils" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Data organization" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Train/dev/test\n", "\n", "Many publicly available datasets are released with a train/dev/test structure. __We're all on the honor system to do test-set runs only when development is complete.__\n", "\n", "Splits like this basically presuppose a fairly large dataset.\n", "\n", "If there is no dev set as part of the distribution, then you might create one to simulate what a test run will be like, though you have to weigh this against the reduction in train-set size.\n", "\n", "Having a fixed test set ensures that all systems are assessed against the same gold data. This is generally good, but it is problematic where the test set turns out to have unusual properties that distort progress on the task." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### No fixed splits\n", "\n", "Many datasets are released without predefined splits. This poses challenges for assessment, especially comparative assessment: __for robust comparisons with prior work, you really have to rerun the models using your assessment regime on your splits__. For example, if you're doing [5-fold cross-validation](#K-folds), then all the systems should be trained and assessed using exactly the same folds, to control for variation in how difficult the splits are.\n", "\n", "If the dataset is large enough, you might create a train/test or train/dev/test split right at the start of your project and use it for all your experiments. This means putting the test portion in a locked box until the very end, when you assess all the relevant systems against it. For large datasets, this will certainly simplify your experimental set-up, for reasons that will become clear when we discuss [hyperparameter optimization](#Hyperparameter-optimization) below.\n", "\n", "For small datasets, carving out dev and test sets might leave you with too little data. The most problematic symptom of this is that performance is highly variable because there isn't enough data to optimize reliably. In such situations, you might give up on having fixed splits, opting instead for some form of cross-validation, which allows you to average over multiple runs." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Cross-validation\n", "\n", "In cross-validation, we take a set of examples $X$ and partition them into two or more train/test splits, and then we average over the results in some way." 
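, "\n\n", "To make this partition-and-average pattern concrete, here is a minimal sketch. The synthetic dataset and the `LogisticRegression` model are stand-ins chosen purely for illustration; any sklearn-style classifier would work the same way:\n", "\n", "```python\n", "import numpy as np\n", "from sklearn.datasets import make_classification\n", "from sklearn.linear_model import LogisticRegression\n", "from sklearn.metrics import f1_score\n", "from sklearn.model_selection import StratifiedShuffleSplit\n", "\n", "X, y = make_classification(n_samples=1000, n_features=20, random_state=42)\n", "\n", "# Five random stratified train/test partitions of the same dataset:\n", "splitter = StratifiedShuffleSplit(n_splits=5, test_size=0.30, random_state=42)\n", "\n", "scores = []\n", "for train_idx, test_idx in splitter.split(X, y):\n", "    model = LogisticRegression(solver='liblinear')\n", "    model.fit(X[train_idx], y[train_idx])\n", "    scores.append(f1_score(y[test_idx], model.predict(X[test_idx]), average='macro'))\n", "\n", "# The cross-validation estimate is the average over these runs:\n", "print(np.mean(scores), np.std(scores))\n", "```\n", "\n", "The subsections below look at the two main ways of creating these splits."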
] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Random splits\n", "\n", "When creating random train/test splits, we shuffle the examples and split them, with a pre-specified percentage $t$ used for training and another pre-specified percentage (usually $1-t$) used for testing.\n", "\n", "In general, we want these splits to be __stratified__ in the sense that the train and test splits have approximately the same distribution over the classes.\n", "\n", "#### The good and the bad of random splits\n", "\n", "A nice thing about random splits is that you can create as many as you want without having this impact the ratio of training to testing examples. \n", "\n", "This can also be a liability, though, as there's no guarantee that every example will be used the same number of times for training and testing. In principle, one might even evaluate on the same split more than once (though this will be fantastically unlikely for large datasets).\n", "\n", "#### Random splits in scikit-learn\n", "\n", "In scikit-learn, the function [train_test_split](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) will do random splits. It is a wrapper around [ShuffleSplit](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html#sklearn.model_selection.ShuffleSplit) or [StratifiedShuffleSplit](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html#sklearn.model_selection.StratifiedShuffleSplit), depending on how the keyword argument `stratify` is used. A potential gotcha for classification problems: `train_test_split` does not stratify its splits by default, whereas stratified splits are desired in most situations." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### K-folds\n", "\n", "In K-fold cross-validation, one divides the data into $k$ folds of equal size and then conducts $k$ experiments. In each, fold $i$ is used for assessment, and all the other folds are merged together for training:\n", "\n", "$$\n", "\\begin{array}{c c c }\n", "\\textbf{Splits} & \\textbf{Experiment 1} & \\textbf{Experiment 2} & \\textbf{Experiment 3} \\\\\n", "\\begin{array}{|c|}\n", "\\hline\n", "\\textrm{fold } 1 \\\\\\hline\n", "\\textrm{fold } 2 \\\\\\hline\n", "\\textrm{fold } 3 \\\\\\hline\n", "\\end{array}\n", "& \n", "\\begin{array}{|c c|}\n", "\\hline\n", "\\textbf{Test} & \\textrm{fold } 1 \\\\\\hline\n", "\\textbf{Train} & \\textrm{fold } 2 \\\\\n", "& \\textrm{fold } 3 \\\\\\hline\n", "\\end{array}\n", "&\n", "\\begin{array}{|c c|}\n", "\\hline\n", "\\textbf{Test} & \\textrm{fold } 2 \\\\\\hline\n", "\\textbf{Train} & \\textrm{fold } 1 \\\\\n", "& \\textrm{fold } 3 \\\\\\hline\n", "\\end{array}\n", "&\n", "\\begin{array}{|c c|}\n", "\\hline\n", "\\textbf{Test} & \\textrm{fold } 3 \\\\\\hline\n", "\\textbf{Train} & \\textrm{fold } 1 \\\\\n", "& \\textrm{fold } 2 \\\\\\hline\n", "\\end{array}\n", "\\end{array}\n", "$$\n", "\n", "#### The good and the bad of k-folds\n", "\n", "* With k-folds, every example appears in a train set exactly $k-1$ times and in a test set exactly once. We noted above that random splits do not guarantee this.\n", "\n", "* A major drawback of k-folds is that the size of $k$ determines the size of the train/test splits. With 3-fold cross validation, one trains on 67% of the data and tests on 33%. With 10-fold cross-validation, one trains on 90% and tests on 10%. 
These are likely to be __very__ different experimental scenarios. This is a consideration one should have in mind when [comparing models](#Classifier-comparison) using statistical tests that depend on repeated runs." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### K-folds in scikit-learn\n", "\n", "* In scikit-learn, [KFold](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html#sklearn.model_selection.KFold) and [StratifiedKFold](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html#sklearn.model_selection.StratifiedKFold) are the primary classes for creating k-folds from a dataset. As with random splits, the stratified option is recommended for most classification problems, as one generally want to train and assess with the same label distribution.\n", "\n", "* The methods [cross_validate](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html#sklearn.model_selection.cross_validate) and [cross_val_score](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html#sklearn.model_selection.cross_val_score) are convenience methods that let you pass in a model (`estimator`), a dataset (`X` and `y`), and some cross-validation parameters, and they handle the repeated assessments. These are great. Two tips:\n", " * I strongly recommend passing in a `KFold` or `StratifiedKFold` instance as the value of `cv` to ensure that you get the split behavior that you desire.\n", " * Check that `scoring` has the value that you desire. For example, if you are going to report F1-scores, it's a mistake to leave `scoring=None`, as this will default to whatever your model reports with its `score` method, which is probably accuracy.\n", " " ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Variants \n", " \n", "K-folds has a number of variants and special cases. Two that frequently arise in NLU:\n", "\n", "1. [LeaveOneOut](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.LeaveOneOut.html#sklearn.model_selection.LeaveOneOut) is the special case where the number of folds equals the number of examples. This is especially useful for very small datasets.\n", "\n", "1. [LeavePGroupsOut](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.LeavePGroupsOut.html#sklearn.model_selection.LeavePGroupsOut) creates folds based on criteria that you define. This is useful in situations where the datasets have important structure that the splits need to respect – e.g., you want to assess against a graph sub-network that is never seen on training." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Baselines\n", "\n", "Evaluation numbers in NLP (and throughout AI) __can never be understood properly in isolation__:\n", "\n", "* If your system gets 0.95 F1, that might seem great in absolute terms, but your readers will suspect the task is too easy and want to know what simple models achieve.\n", "\n", "* If your system gets 0.60 F1, you might despair, but it could turn out that humans achieve only 0.80, indicating that you got traction on a very challenging but basically coherent problem." 
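, "\n\n", "Both observations argue for always reporting simple reference systems next to your own numbers. Here is a minimal sketch of doing that; the synthetic data and the `LogisticRegression` system are arbitrary choices for illustration, and the `DummyClassifier` baseline is discussed just below:\n", "\n", "```python\n", "from sklearn.datasets import make_classification\n", "from sklearn.dummy import DummyClassifier\n", "from sklearn.linear_model import LogisticRegression\n", "from sklearn.metrics import f1_score\n", "from sklearn.model_selection import train_test_split\n", "\n", "X, y = make_classification(n_samples=1000, weights=[0.8, 0.2], random_state=0)\n", "X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)\n", "\n", "# A trivial baseline that guesses according to the training label distribution:\n", "baseline = DummyClassifier(strategy='stratified', random_state=0).fit(X_train, y_train)\n", "\n", "# The system we actually care about:\n", "system = LogisticRegression(solver='liblinear').fit(X_train, y_train)\n", "\n", "for name, clf in (('random baseline', baseline), ('logistic regression', system)):\n", "    print(name, f1_score(y_test, clf.predict(X_test), average='macro'))\n", "```"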
] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Baselines are crucial for strong experiments\n", "\n", "Defining baselines should not be an afterthought, but rather central to how you define your overall hypotheses. __Baselines are essential to building a persuasive case__, and they can also be used to illuminate specific aspects of the problem and specific virtues of your proposed system." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Random baselines\n", "\n", "Random baselines are almost always useful to include. scikit-learn has classes [DummyClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.dummy.DummyClassifier.html#sklearn.dummy.DummyClassifier) and [DummyRegressor](http://scikit-learn.org/stable/modules/generated/sklearn.dummy.DummyRegressor.html#sklearn.dummy.DummyRegressor) that make it easy to include these baselines in your workflow. Each of them has a keyword argument `strategy` that allows you to specify a range of different styles of random guessing." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Task-specific baselines\n", "\n", "It is worth considering whether your problem suggests a baseline that will reveal something about the problem or the ways it is modeled. Two recent examples from NLU:\n", "\n", "1. As disussed briefly in [the NLI models notebook](nli_02_models.ipynb#Other-findings), [Leonid Keselman](https://leonidk.com/) observed [in his 2016 NLU course project](https://leonidk.com/stanford/cs224u.html) that one can do much better than chance on SNLI by processing only the hypothesis, ignoring the premise entirely. The exact interpretation of this is complex (we'll explore this a bit [in our NLI bake-off](http://nbviewer.jupyter.org/github/cgpotts/cs224u/blob/master/nli_wordentail_bakeoff.ipynb)), but it's certainly relevant for understanding how much a system has actually learned about reasoning from a premise to a conclusion.\n", " \n", "1. [Schwartz et al. (2017)](https://aclanthology.coli.uni-saarland.de/papers/W17-0907/w17-0907) develop a system for choosing between a coherent and incoherent ending for a story. Their best system achieves 75% accuracy by processing the story and the ending, but they achieve 72% using only stylistic features of the ending, ignoring the preceding story entirely. This puts the 75% – and the extent to which the system understands story completion – in a new light." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Hyperparameter optimization\n", "\n", "In machine learning, the __parameters__ of a model are those whose values are learned as part of optimizing the model itself. \n", "\n", "The __hyperparameters__ of a model are any settings that are set by a process that is outside of this optimization process. The boundary between a true setting of the model and a broader design choice will likely be blurry conceptually. For example: \n", "\n", "* The regularization term for a classifier is a clear hyperparameter – it appears in the model's objective function. \n", "* What about the method one uses for normalizing the feature values? This is probably not a setting of the model per se, but rather a choice point in your experimental framework.\n", " \n", "For the purposes of this discussion, we'll construe hyperparameters very broadly." 
] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Rationale\n", "\n", "Hyperparameter optimization is one of the most important parts machine learning, and a crucial part of building a persuasive argument. To see why, it's helpful to imagine that you're in an ongoing debate with a very skeptical referee:\n", "\n", "1. You ran experiments with models A, B, and C. For each, you used the default hyperparameters as given by the implementations you're using. You found that C performed the best, and so you reported that in your paper.\n", "1. Your reviewer doesn't have visibility into your process, and maybe doesn't fully trust you. Did you try any other values for the hyperparameters without reporting that? If not, would you have done that if C hadn't outperformed the others? There is no way for the reviewer (or perhaps anyone) to answer these questions.\n", "1. So, from the reviewer's perspective, all we learned from your experiments is that there is some set of hyperparameters on which C wins this competition. But, strictly speaking, this conveys no new information; we knew before you did your experiments that we could find settings that would deliver this and all other outcomes. (They might not be __sensible__ settings, but remember you're dealing with a hard-bitten, unwavering skeptic.)\n", "\n", "Our best response to this situation is to allow these models to explore a wide range of hyperparameters, choose the best ones according to performance on training or development data, and then report how they do with those settings at test time. __This gives every model its best chance to succeed.__\n", "\n", "If you do this, the strongest argument that your skeptical reviewer can muster is that you didn't pick the right space of hyperparameters to explore for one or more of the models. Alas, there is no satisfying the skeptic, but we can at least feel happy that the outcome of these experiments will have a lot more scientific value than the ones described above with fixed hyperparameters." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### The ideal hyperparameter optimization setting\n", "\n", "When evaluating a model, the ideal regime for hyperparameter optimization is as follows:\n", "\n", "1. For each hyperparameter, identify a large set of values for it. \n", "2. Create a list of all the combinations of all the hyperparameter values. This will be the [cross-product](https://en.wikipedia.org/wiki/Cartesian_product) of all the values for all the features identified at step 1.\n", "3. For each of the settings, cross-validate it on the available training data.\n", "4. Choose the settings that did best in step 3, train on all the training data using those settings, and then evaluate that model on the test set.\n", "\n", "This is very demanding. First, The number of settings grows quickly with the number of hyperparameters and values. If hyperparameter $h_{1}$ has $5$ values and hyperparameter $h_{2}$ has $10$, then the number of settings is $5 \\cdot 10 = 50$. If we add a third hyperparameter $h_{3}$ with just $2$ values, then the number jumps to $100$. Second, if you're doing 5-fold cross-validation, then each model is trained 5 times. You're thus committed to training $500$ models.\n", "\n", "And it could get worse. Suppose you don't have a fixed train/test split, and you're instead reporting, say, the result of 10 random train/test splits. 
Strictly speaking, the optimal hyperparameters could be different for different splits. Thus, for each split, the above cross-validation should be conducted. Now you're committed to training $5,000$ systems!" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Practical considerations, and some compromises\n", "\n", "The above is untenable as a set of laws for the scientific community. If we adopted it, then complex models trained on large datasets would end up disfavored, and only the very wealthy would be able to participate. Here are some pragmatic steps you can take to alleviate this problem, in descending order of attractiveness. (That is, the lower you go on this list, the more likely the skeptic is to complain!)\n", "\n", "1. [Bergstra and Bengio (2012)](http://www.jmlr.org/papers/v13/bergstra12a.html) argue that __randomly sampling__ from the space of hyperparameters delivers results like the full \"grid search\" described above with a relatively few number of samples. __Hyperparameter optimization algorithms__ like those implemented in [Hyperopt](http://hyperopt.github.io/hyperopt/) and [scikit-optimize](https://github.com/scikit-optimize/scikit-optimize) allow guided sampling from the full space. All these methods control the exponential growth in settings that comes from any serious look at one's hyperparameters. \n", "\n", "1. In large deep learning systems, __the hyperparameter search could be done on the basis of just a few iterations__. The systems likely won't have converged, but it's a solid working assumption that early performance is highly predictive of final performance. You might even be able to justify this with learning curves over these initial iterations.\n", "\n", "1. Not all hyperparameters will contribute equally to outcomes. Via heuristic exploration, it is typically possible to __identify the less informative ones and set them by hand__. As long as this is justified in the paper, it shouldn't rile the skeptic too much.\n", "\n", "1. Where repeated train/test splits are being run, one might __find optimal hyperparameters via a single split__ and use them for all the subsequent splits. This is justified if the splits are very similar.\n", "\n", "1. In the worst case, one might have to adopt hyperparameters that were optimal for other experiments that have been published. The skeptic will complain that these findings don't translate to your new data sets. That's true, but it could be the only option. For example, how would one compare against [Rajkomar et al. (2018)](https://arxiv.org/abs/1801.07860) who report that \"the performance of all above neural networks were [sic] tuned automatically using Google Vizier [35] with a total of >201,000 GPU hours\"?" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Hyperparameter optimization tools\n", "\n", "* scikit-learn's [model_selection](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.model_selection) package has classes `GridSearchCV` and `RandomizedSearchCV`. These are very easy to use. (We used `GridSearchCV` in our sentiment unit.)\n", "\n", "* [scikit-optimize](https://github.com/scikit-optimize/scikit-optimize) offers a variety of methods for guided search through the grid of hyperparameters. 
[This post](https://roamanalytics.com/2016/09/15/optimizing-the-hyperparameter-of-which-hyperparameter-optimizer-to-use/) assesses these methods against grid search and fully randomized search, and it also provides [starter code](https://github.com/roamanalytics/roamresearch/tree/master/BlogPosts/Hyperparameter_tuning_comparison) for using these implementations with sklearn-style classifiers." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Classifier comparison\n", "\n", "Suppose you've assessed two classifier models. Their performance is probably different to some degree. What can be done to establish whether these models are different in any meaningful sense?" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Practical differences\n", "\n", "One very simple step one can take is to simply count up how many examples the models actually differ on. \n", "\n", "* If the test set has 1,000 examples, then a difference of 1% in accuracy or F1 will correspond to roughly 10 examples. We'll likely have intuitions about whether that difference has any practical import. \n", "\n", "* If the test set has 1M examples, then 1% will correspond to 10,000 examples, which seems sure to matter. Unless other considerations (e.g., cost, understandability) favor the less accurate model, the choice seems clear." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Confidence intervals\n", "\n", "If you can afford to run the model multiple times, then reporting confidence intervals based on the resulting scores could suffice to build an argument about whether the models are meaningfully different.\n", "\n", "The following will calculate a simple 95% confidence interval for a vector of scores `vals`:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "def get_ci(vals):\n", " if len(set(vals)) == 1:\n", " return (vals[0], vals[0])\n", " loc = np.mean(vals)\n", " scale = np.std(vals) / np.sqrt(len(vals))\n", " return stats.t.interval(0.95, len(vals)-1, loc=loc, scale=scale)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It's very likely that these confidence intervals will look very large relative to the variation that you actually observe. You probably can afford to do no more than 10–20 runs. Even if your model is performing very predictably over these runs (which it will, assuming your method for creating the splits is sound), the above intervals will be large in this situation. This might justify bootstrapping the confidence intervals. I recommend [scikits-bootstrap](https://github.com/cgevans/scikits-bootstrap) for this.\n", "\n", "__Important__: when evaluating multiple systems via repeated train/test splits or cross-validation, all the systems have to be run on the same splits. This is the only way to ensure that all the systems face the same challenges." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Wilcoxon signed-rank test\n", "\n", "NLPers always choose tables over plots for some reason, and confidence intervals are hard to display in tables. This might mean that you want to calculate a p-value. 
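\n", "\n", "For tests based on repeated runs, the raw material is a vector of scores for each system, collected on identical splits, per the note above. Here is a minimal sketch of gathering such paired scores; the two models are arbitrary stand-ins chosen for illustration, and `get_ci` is the helper defined earlier:\n", "\n", "```python\n", "from sklearn.datasets import make_classification\n", "from sklearn.linear_model import LogisticRegression\n", "from sklearn.metrics import f1_score\n", "from sklearn.model_selection import StratifiedKFold\n", "from sklearn.neighbors import KNeighborsClassifier\n", "\n", "X, y = make_classification(n_samples=1000, n_features=20, random_state=0)\n", "\n", "scores_a, scores_b = [], []\n", "\n", "# Both systems are evaluated on exactly the same folds:\n", "for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):\n", "    for model, scores in ((LogisticRegression(solver='liblinear'), scores_a),\n", "                          (KNeighborsClassifier(), scores_b)):\n", "        model.fit(X[train_idx], y[train_idx])\n", "        scores.append(f1_score(y[test_idx], model.predict(X[test_idx]), average='macro'))\n", "\n", "# get_ci is the confidence-interval helper defined above:\n", "print(get_ci(scores_a), get_ci(scores_b))\n", "```\n", "\n", "The signed-rank test described next operates on exactly these paired per-split scores. 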
\n", "\n", "Where you can afford to run the models at least 10 times with different splits (and preferably more like 20), [Demšar (2006)](http://www.jmlr.org/papers/v7/demsar06a.html) recommends the [Wilcoxon signed-rank test](https://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test). This is implemented in scipy as [scipy.stats.wilcoxon](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.wilcoxon.html). This test relies only on the absolute differences between scores for each split and makes no assumptions about how the scores are distributed.\n", "\n", "Take care not to confuse this with [scipy.stats.ranksums](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ranksums.html), which does the Wilcoxon rank-sums test. This is also known as the [Mann–Whitney U test](https://en.wikipedia.org/wiki/Mann–Whitney_U_test), though SciPy distinguishes this as a separate test ([scipy.stats.mannwhitneyu](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mannwhitneyu.html#scipy.stats.mannwhitneyu)). In any case, the heart of this is that the signed-rank variant is more appropriate for classifier assessments, where we are always comparing systems trained and assessed on the same underlying pool of data.\n", "\n", "Like all tests of this form, we should be aware of what they can tell us and what they can't: \n", "\n", "* The test says __nothing__ about the practical importance of any differences observed. \n", "\n", "* __Small p-values do not reliably indicate large effect sizes__. (A small p-value will more strongly reflect the number of samples you have.)\n", "\n", "* Large p-values simply mean that the available evidence doesn't support a conclusion that the systems are different, not that there is no difference in fact. And even that limited conclusion is only relative to this particular, quite conservative test. \n", "\n", "All this is to say that these values should not be asked to stand on their own, but rather presented as part of a larger, evidence-driven argument." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### McNemar's test\n", "\n", "[McNemar's test](https://en.wikipedia.org/wiki/McNemar%27s_test) operates directly on the vectors of predictions for the two models being compared. As such, it doesn't require repeated runs, which is good where optimization is expensive.\n", "\n", "The basis for the test is a contingency table with the following form, for two models A and B:\n", "\n", "$$\\begin{array}{|c | c |}\n", "\\hline\n", "\\textrm{number of examples} & \\textrm{number of examples} \\\\\n", "\\textrm{where A and B are correct} & \\textrm{where A is correct, B incorrect} \n", "\\\\\\hline\n", "\\textrm{number of examples} & \\textrm{number of examples} \\\\\n", "\\textrm{where A is correct, B incorrect} & \\textrm{where both A and B are incorrect} \\\\\\hline\n", "\\end{array}$$\n", "\n", "Following [Dietterich (1998)](http://sci2s.ugr.es/keel/pdf/algorithm/articulo/dietterich1998.pdf), let the above be abbreviated to\n", "\n", "$$\\begin{array}{|c | c |}\n", "\\hline\n", "n_{11} & n_{10}\n", "\\\\\\hline\n", "n_{01} & n_{00} \\\\\n", "\\hline\n", "\\end{array}$$\n", "\n", "The null hypothesis tested is that the two models have the same error rate, i.e., that $n_{01} = n_{10}$. 
The test statistic is\n", "\n", "$$\n", "\\frac{\n", "  \\left(|n_{01} - n_{10}| - 1\\right)^{2}\n", "}{\n", "  n_{01} + n_{10}\n", "}$$\n", "\n", "which has an approximately chi-squared distribution with 1 degree of freedom. \n", "\n", "An implementation is available in this repository: `utils.mcnemar`." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Assessing models without convergence\n", "\n", "When working with linear models, convergence issues rarely arise. Typically, the implementation has a fixed number of iterations it performs, or a threshold on the error, and the model stops when it reaches one of these points. We mostly don't reflect on this because of the speed and stability of these models.\n", "\n", "With neural networks, convergence takes center stage. The models rarely converge, or they converge at different rates between runs, and their performance on the test data is often heavily dependent on these differences. Sometimes a model with a low final error turns out to be great, and sometimes it turns out to be worse than one that finished with a higher error. Who knows?!" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Incremental dev set testing\n", "\n", "The key to addressing this uncertainty is to __regularly collect information about dev set performance as part of training__. For example, at every 100th iteration, one could make predictions on the dev set and store that vector of predictions, or just whatever assessment metric one is using. These assessments can provide direct information about how the model is doing on the actual task we care about, which will be a better indicator than the errors.\n", "\n", "All the PyTorch models for this course accept keyword arguments `X_dev` and `dev_iter`. If these are specified, then the model is tested every `dev_iter` iterations and the resulting predictions are stored in the class attribute `dev_predictions`. 
Here's an example:" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "First, an artificial classification dataset with a train/dev/test structure:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "X, y = make_classification(class_sep=0.5, n_samples=5000, n_features=200)\n", "\n", "X_train, X_test, y_train, y_test = train_test_split(X, y)\n", "\n", "X_train, X_dev, y_train, y_dev = train_test_split(X_train, y_train)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "Second, a shallow neural classifier trained with the requisite keyword arguments provided to `fit`:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Finished epoch 100 of 100; error is 0.04936201777309179" ] } ], "source": [ "dev_iter = 5 # Test increments.\n", "\n", "model = TorchShallowNeuralClassifier(max_iter=100, hidden_dim=10)\n", "\n", "_ = model.fit(X_train, y_train, X_dev=X_dev, dev_iter=dev_iter)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "Third, we can calculate our chosen evaluation metric for each of the incremental predictions:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "dev_preds = sorted(model.dev_predictions.items())\n", "\n", "scores = [utils.safe_macro_f1(y_dev, p) for i, p in dev_preds]\n", "\n", "scores = pd.Series(scores)\n", "\n", "scores.index *= dev_iter" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "Finally, we have a neat plot that tells us a lot about how training affects the model's performance:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAYsAAAEKCAYAAADjDHn2AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAAIABJREFUeJzt3XmYVNWd//H3t/cF6G7oRqFBaGQLLkAkRqNxS5wYk4CJMwaymcn8NDNRk5hlRmcymcQk82SSjM5Mxpn5OUkm60j8GWNIQiQkLtk0AgIqIIgg2oBS1dDQC3T18v39cW81RdN71+1qqj6v56mn6946997T9VTXp889955j7o6IiEh/8jJdARERGfsUFiIiMiCFhYiIDEhhISIiA1JYiIjIgBQWIiIyIIWFiIgMSGEhIiIDUliIiMiACjJdgXSprq72mTNnZroaIiKnlA0bNsTdvWagclkTFjNnzmT9+vWZroaIyCnFzPYMppxOQ4mIyIAUFiIiMiCFhYiIDEhhISIiA1JYiIjIgBQWIiIyIIWFiIgMKGvusxhNbR2dNB3rCB/t3T+PHO3gyLF22juda8+rZfL4kkxXVUQkLRQWKZ575QgPPftKjxAIAqA7EI51kOjoGnBfv9r2Kvd9+ELy82wUai4iEi2FRYqvrdnOr7YdoKwon/ElBUwoKWR8SQFVZUWcMbGM8SWFTCgpYEJpsH58SQHji5PPC5lQGvx8+LlXufWHm/mvx17gpstnZ/rXEhEZMYVFigNNbVw6t4bvfOj8Ee3nmkW1/GrbAe5au4NL59Zwdm1FmmooIpIZ6uBOEW9qo3pc8Yj3Y2Z86ZqzmVhexK0/3MSx9s401E5EJHMUFiF3J96coHpcUVr2V1lWxFf/bCHPH2jma2u2p2WfIiKZorAIHTnWQaKzKy0ti6RL59bw/gtm8M3f7+YPL8TTtl8RkdGmsAjFm9sAqB6fnpZF0u1Xz2fmpHI+dd9mjhxrT+u+RURGi8IiFG8KwyKNLQuAsqIC7rxuIa82tfG5VVvSum8RkdGisAjFmxNA+sMCYPEZVdx0+WweeGovDz27P+37FxGJWqRhYWZXmdl2M9tpZrf18voZZvaImW00s6fN7Opw/ZVmtsHMngl/XhFlPSHlNFQEYQFwyxWzOXdaBbc/8AwHmo5FcgwRkahEFhZmlg/cDbwVWACsMLMFPYp9BrjP3RcDy4H/CNfHgXe4+znA9cD3oqpnUry5jTyDieXp7bNIKszP487rFtGa6ORv7n8ad4/kOCIiUYiyZXE+sNPdd7l7AlgJLOtRxoEJ4fMKYB+Au290933h+i1AiZlF8y9/KN7cxsTyokiH55g9eRy3vXU+j2yPce+TL0d2HBGRdIsyLGqB1G/E+nBdqs8B7zOzemA1cEsv+7kW2OjubVFUMinWlIjsFFSq6y+cycWzq/niz7eyp6El8uOJiKRDlGHR27/oPc+9rAC+7e7TgKuB75lZd53M7Czgn4AP93oAsxvNbL2ZrY/FYiOqbLw5PXdvDyQvz/jqn51LQZ7xifs209ml01EiMvZFGRb1wPSU5WmEp5lS/AVwH4C7Pw6UANUAZjYN+DHwAXd/obcDuPs97r7E3ZfU1NSMqLJBWETTX9HTlIpSvnDN2WzYc4j/eqzXX01EZEyJMizWAXPMrM7Migg6sFf1KPMS8CYAM3sNQVjEzKwS+Dlwu7v/PsI6AsmhPkanZZG0dOFU3nbuFO5au4Nn9x4eteOKiAxHZGHh7h3AzcAaYBvBVU9bzOwOM1saFvskcIOZbQbuBT7owWVCNwOzgb83s03hY3JUdW1JdHKsvYvq8aMXFhpsUEROJZEOUe7uqwk6rlPXfTbl+Vbgol62+yLwxSjrlqoh4nss+pIcbPD6bz3J19Zs5zNv73llsYjI2KA7uEm9IW90+ixSabBBETkVKCwILpuF0W9ZJGmwQREZ6xQWHG9Z1Ixin0UqDTYoImOdwoLjYRHVUB+DocEGRWQsU1gQhEVlWSGF+Zl9OzTYoIiMVQoLID5KQ30MRIMNishYpbBgdO/eHsjsyeO4XYMNisgYo7Bg9MaFGqwPhIMNfuFnW9l5oCnT1RERUVhAMEveWAqLvDzjn69bSHlxPn/5/adoaevIdJVEJMflfFgca++kua0jY5fN9uW0CSX82/LF7Io1c/sDz6j/QkQyKufDItaUubu3B/KG2dV88k/msWrzPr7/xJ5MV0dEcljOh0XUc2+P1F9deiZXzJ/MHT/byqaXGzNdHRHJUQqL5swO9TGQvDzjzusWctqEEm76wVMcaklkukoikoMUFsmWxRjrs0hVWVbEf7z3tcSa2vj4DzfRpdn1RGSUKSzCPotJGRzqYzDOnVbJPyxdwGM7Ynz94Z2Zro6I5BiFRXMb40sKKCnMz3RVBvSe88/gXYtr+Zdf7+C3z49sznERkaGINCzM7Coz225mO83stl5eP8PMHjGzjWb2tJldnfLa7eF2283sLVHVMd6SoGaM9lf0ZGZ88Z1nM2fyOD62chP7Go9mukoikiMiCwszywfuBt4KLABWmFnPqeA+QzDd6mKCObr/I9x2Qbh8FnAV8B/h/tIu3jS27t4eSFlRAf/5vvNIdHRx0/8+RaKjK9NVEpEcEGXL4nxgp7vvcvcEsBJY1qOMAxPC5xXAvvD5MmClu7e5+25gZ7i/tIs3t1E9fmz3V/R0Zs04vvKn57LxpUb+cfW2TFdHRHJAlGFRC6SOhFcfrkv1OeB9ZlZPMFf3LUPYFjO70czWm9n6WGx45/DH2lAfg3X1OVP40EV1fPsPL/LTzfsG3kBEZASiDAvrZV3Paz5XAN9292nA1cD3zCxvkNvi7ve4+xJ3X1JTUzPkCiY6ujh8tP2UDAsIpmM9b0YVt/3oaXYeaM50dUQki0UZFvXA9JTlaRw/zZT0F8B9AO7+OFACVA9y2xFraBnbd28PpDA/j39/z2KKC/P5yA820JrQgIMiEo0ow2IdMMfM6sysiKDDelWPMi8BbwIws9cQhEUsLLfczIrNrA6YAzyZ7grGm4K7oSeNwXGhBmtKRSn/tnwxzx9o5m814KCIRCSysHD3DuBmYA2wjeCqpy1mdoeZLQ2LfRK4wcw2A/cCH/TAFoIWx1bgIeAmd+9Mdx3H+rhQg3XxnGo+8ea5PLhpH9//40uZro6IZKGCKHfu7qsJOq5T13025flW4KI+tv0S8KUo6xcLw+JUuc+iPzddPpsNLx3iCz/dyrm1FSycXpnpKolIFsnpO7iPjwt16p6GSsrLM+66bhE144v5iAYcFJE0y+2waEpQVpRPWVGkDaxRU1V+fMDBW+/TgIMikj65HRZjbO7tdFg4vZK/f8cCHt0e4+5HNOCgiKSHwuIUvhKqL+97/Rlcs2gqd/5qB797Pp7p6ohIFlBYZFnLAoIBB//xXecwu2YcH125kf2HNeCgiIxMjodFYkxPejQSyQEH29o7ufLO3/CJH27i0e0HaO/UwIMiMnTZ0bM7DB2dXRxqPTXHhRqs2ZPH8cMPX8j3n9
jD6mf288DGvUwsL+Jt50xh6aKpnHdGFXl5vY2sIiJyopwNi4MtCdyhJgv7LFKdXVvBl689l88vO4vf7Ijzk017+X8bXuZ7T+yhtrKUty+cwtKFU1kwZQJmCg4R6V3OhkUsS+7eHqzignyuXHAaVy44jZa2DtZufZVVm/fxzd/u5v8+tovZk8exdOFUli6cyszq8kxXV0TGmJwNi4bm4Ka1bO2z6E95cQHXLK7lmsW1HGxJ8Itn9/OTTfu4c+0O7ly7g4XTKli6qJZ3nDuFyRNKMl1dERkDcjYssmVcqJGaWF7Ee18/g/e+fgb7Dx/lZ5v385PNe/nCz7byxZ9v5cJZk1i6cCpvPXsKFWWFma6uiGSIwiLL+yyGYkpFKTdcMosbLpnFC7FmVm3ax0837+O2B57h7x58ljMmljFzUhkzq8upqy5n5qTg59TKUvLVUS6S1XI4LBIUF+Qxrjhn34J+nVkzjluvnMvH3zyHZ/ceYe3WV9gZa2Z3vJU/7j5Ia+L4IMBF+XlMn1jaHSDJMJkxqYypFaW64kokC+TsN2W8KbghT1cA9c/MOGdaBedMq+he5+4caGpjd7yFPQ0t7I638mK8hRcbWvjdzjjH2o/fy1FUkMeMicdbI4umV/KWs05XS0TkFJOzYRFrbsvJzu10MDNOm1DCaRNKuGDWpBNe6+pyXm06xu54Cy/GW3mxoSV83sJjO2IkOro4s6acj715Lm8/Z4paHSKniEjDwsyuAv4VyAe+4e5f7vH6XcDl4WIZMNndK8PXvgK8jeAu87XAxzyN08DFmxNMrdCVPumWl2dMqShlSkUpbzjzxNc6u5y1W1/hrrXP89F7N/LvDz/PrW+ey1vOOl2hITLGRTbch5nlA3cDbwUWACvMbEFqGXe/1d0Xufsi4OvAA+G2byCYFOlc4GzgdcCl6axfto4LNZbl5xlXnT2FX3zsjXx9xWI6u5y/+sFTvP3rv2Pt1lc1JazIGBbl2FDnAzvdfZe7J4CVwLJ+yq8gmFoVwAnm4y4CioFC4NV0VayryznYksiKSY9ORXl5xjsWTuWXt17KXe9eSGuigxu+u55ld/+eR7YfUGiIjEFRhkUt8HLKcn247iRmNgOoAx4GcPfHgUeA/eFjjbtv62W7G81svZmtj8Vig67YodYEnV2ulkWG5ecZ71w8jV994lK+8qfncrAlwZ//zzqu/c8/8Lvn4woNkTEkyrDo7SR0X3/9y4H73b0TwMxmA68BphEEzBVmdslJO3O/x92XuPuSmpqaQVcsnrx7W2ExJhTk53Hdkuk8/MnL+Md3nsMrh4/xvm/+kXf/3yd4YldDpqsnIkQbFvXA9JTlacC+Psou5/gpKIB3Ak+4e7O7NwO/AC5IV8V09/bYVFSQx3tefwaPfPoy7lh2FnsOtrD8nid4z38/wfoXD2a6eiI5LcqwWAfMMbM6MysiCIRVPQuZ2TygCng8ZfVLwKVmVmBmhQSd2yedhhquZFjUqM9iTCouyOcDF87ksU9fzmffvoAdrzbzp//1OB/41pNserkx09UTyUmRhYW7dwA3A2sIvujvc/ctZnaHmS1NKboCWNnjstj7gReAZ4DNwGZ3/2m66hZrUsviVFBSmM+HLq7jN399GX979Xye3XuYa+7+PX/x7XU8/kIDXV3q0xAZLZYtnYhLlizx9evXD6rsl3/xHN/83S52fPGtuoP7FNLc1sF3/vAi9/xmF4ePtlNbWcq7XlvLu147jToNqy4yLGa2wd2XDFQuJ+/gjje3MalcQ32casYVF3DT5bP50EV1/HLrK/zoqb3c/chOvv7wTl57RiXXnjeNt58zVaPjikQgZ8NC91icukqL8lm2qJZli2p59cgxHty4lx89Vc/f/fhZPv/TrVz5mtO49rxa3jinhsL8nJ5m/pSU6Ohi/YsHefi5Azyy/QCvHD7GjHCE47rq4wNV1lWXU1VWqH/6RklOhkVDc3bPvZ1LTptQwocvPZMbL5nFln1HuH9DPas27+Pnz+ynelwRyxbV8q7X1nLW1IqBdyYZc6DpGI9uj/HIcwf47fNxmts6KMrP4/WzJnLx7Gr2HGxly77DPLTlFTpT+qoqSguZWV3OrOSQ+TXl1E0qZ2Z1GeNL1MJMp5wMi3hzG/NOH5/pakgamRln11Zwdm0Ff/e21/Do9hgPPFXP9x7fwzd/t5v5p4/n2tdOY9niqUwerzHBMq2ry3l672Eefu4Aj24/wNP1hwE4fUIJ71g4hcvnTeai2dWU95hCoL2zi5cPBgNU7oq1dA9U+eTug/x4494TylaPK6auuqy7NTIr/DlzUjklhfmj9rtmi5zr4HZ35n3mIT50cR23vXX+KNRMMqmxNcFPn97PjzbUs+nlRvIMLplbw7WvncaVC07Tl8YoOnKsnd/uiPPwcwd4bMcB4s0J8gwWn1HF5fNquHz+ZBZMmTDs00rH2jvZ09DK7ngw78rueDMvxlvZ3dDSfQVk0tSKEupqjk/glQyU6VVlFBXkzqnLzi6nID9PHdy9OXK0g0Rnl2bIyxGVZUW8/4IZvP+CGbwQa+aBp+r58VN7ueXejRQX5FE9rpjKssLwUURlaSFVZUUnLpcXUlEaristpED9IIPi7rwQa+bh5w7w8HMHWP/iITq6nIrSQi6dW8MV8ydzydwaJpan52+xpDCfeaeP7/WsQdOxdvY0tLIrHC5/d/j42dP7OXy0vbtcfp4xrer4RF6zarJvRsiG5jZ+83yMR7fHeGzH4IdJyrmwiHXfkKc+i1xzZs04Pv2W+Xzyynk8sbuBR547QENLgsOt7RxqTbD/8BEaW9tpbE3Q3y0c44sLqCwvpDIMkNLC/O5xbI431L17+fhrnvLKia8ZwSmYGdVlzJwUzDI4Y1L5mJrJ0d1pTXRyqDURvk/tNB5NcKi1ncOtwc/k+9d4tJ39jUfZd/gYAPNPH88Nl8ziivmTWTy9ctQDd3xJYfdpyp4OtSRODJGG4PmTvcwIOW1iKRPDfyZS/4GoLCukIvznoiJcriwtYnxJQcaH3+/qcp7Ze5hHth/g0e0xNtc34g6Tyou4Yt5kNg9yP2PnkzhKNNSH5OUZbzizmjecWd3r611dTlNbR3eINB4NvwDDL8NDrQkOH23v/tJMnuJInj5JfjUkz6aYgYVru9dxvJABXe5s2Xek+/OZVDO+mJlhcBz/Wc6M6jImjKADt6Ozi8aj7RxqCb7kg98lwcGW4HftPRDaSXR29bnPsqL88Isz+BI9b+ZEPlI3kcvnT6a2snTYdY1aVXkR55UXcd6MqhPWuzuxprYTguTlQ600trazr/EY2/Y30diaoCUlUHrKM5hQGgRKMkwqywqZPL6YuupxzAz7VE4bX5LWUGlsTfCb5+M8+twBHtsRo6ElgRksnFbJx980l8vm1XBObQV5ecZdywe3T4WFSA95eUZF+B/iGZPKRvXYzW0d7GloYU9D0Im7J5xt8HfPx7l/w7ETyk4sL2LGpOMtkZmTyqkeV9wdZMkgaGxNcLA15XlLgqZjHX3Woaggj6qy4HRcRWkhs6rHHT8tV1ZIV
fhfdVW4rqqskIqyQooLsqv/x8yYPKGEyb3MCJkq0dHF4aPtHD4aBOzho8mQDVpcjSnLja0JdsdbeOXIMRIdx4O3pDAvmL/+hCu6gqu6agYx/XNXl7N1/xEe3X6AR7bH2PjSIbocqsoKuWRuDZfPm8wb51QzaQTfe7kXFuF/gZPUZyFj0LjiAs6aWtHrpb5HE53sORhMV7unoYUXG4KfT+4+yIOb9tLbtSrlRfnBF3p58OU/Y2IZE8uTX/pFVJUXdQdD8nlpYb7uXRiCooI8asYXD+nUdleXs+/w0e4O+BfD1suOA038aturdKScBx1XXMCMSWXHO+LDIDm9ooRNLzXy6PYDPLoj1t3CPXdaBTdfPpvL5k9m4bTKtPWz5F5YhFdgVJUpLOTUUlqUz/zTJzD/9AknvXasvZP6Q63EmxPdQVCZhf/tZ4u8PGNaVRnTqsq4eM6Jp0M7OrvY23i0e+76Fxta2R1v4en6w6x+Zv9J/WkTSgq4ZG4Nl82bzKVzayLrjx1WWJjZPe5+Y7orMxrizW1MLC/OiqsaRJJKCvOZPXk8sydnuiYyUgX5ecyYVM6MSeUw78TXEh1dvHyolRfjLextPMprpkwYtQsG+gwLM5vY10vA1dFUJ3rB3NtqVYjIqaeoII8za8ZxZs24UT92fy2LGLCHE2e883D5lP3/Jdac0GWzIiJD1F9Y7ALe5O4v9XzBzF7upfwpId7UxiwNZy0iMiT9nej6F4IZ7HrzlcHs3MyuMrPtZrbTzG7r5fW7zGxT+NhhZo0pr51hZr80s21mttXMZg7mmP1xd52GEhEZhj5bFu5+dz+vfX2gHZtZPnA3cCXBfNzrzGyVu29N2c+tKeVvARan7OK7wJfcfa2ZjQP6vhtokJrbOmjr6NI9FiIiQ9Rny8LM/jHl+ZXD2Pf5wE533+XuCWAlsKyf8iuAe8PjLQAK3H0tgLs3u3vrMOpwgnhzAtANeSIiQ9XfaairUp7/0zD2XQuk9m3Uh+tOYmYzgDrg4XDVXKDRzB4ws41m9tWwpTIi3Xdvq4NbRGRIorw4t7cbGfoanm05cL+7JwdZKQDeCHwKeB0wC/jgSQcwu9HM1pvZ+lhs4NETG7qH+lCfhYjIUPR3NdRkM/sE4aWy4fNu7n7nAPuuB6anLE8D9vVRdjlwU49tN7r7LgAzexC4APhmjzrcA9wDwXwWA9SHWHgaqkanoUREhqS/lsV/A+OBcSnPUx8DWQfMMbM6MysiCIRVPQuZ2TyCq64e77FtlZnVhMtXAFt7bjtU8aY2zEjb+PkiIrmiv6uhPj+SHbt7h5ndDKwB8oFvufsWM7sDWO/uyeBYAaz0lCn73L3TzD4F/NqCEc02EATWiMSb26gqK9LkNSIiQxTpQILuvhpY3WPdZ3ssf66PbdcC56azPrrHQkRkeHLqX+x4c0KXzYqIDEOOhUWbwkJEZBgGDAszqwiH5VgfPv7ZzE6emeUUEG9SWIiIDMdgWhbfAo4A14WPI8D/RFmpKBxNdNKS6NQMeSIiwzCYDu4z3f3alOXPm9mmqCoUleTd27rHQkRk6AbTsjhqZhcnF8zsIuBodFWKRqx7qA+1LEREhmowLYu/BL6b0k9xCLg+uipFI96UHOpDLQsRkaHqNyzMLA+Y5+4LzWwCgLsfGZWapZlGnBURGb5+T0O5exdwc/j8yKkaFHC8z0Id3CIiQzeYPou1ZvYpM5tuZhOTj8hrlmbx5jYmlBRQXDDikc5FRHLOYPosPhT+TB0V1gmGDT9lxJvbNI+FiMgwDRgW7l43GhWJWrxJQ32IiAzXYO7gvsnMKlOWq8zsI9FWK/3iLW26x0JEZJgG02dxg7s3Jhfc/RBwQ3RVikYw1Ic6t0VEhmMwYZEXzikBQDgX9in1rdvW0cmRYx06DSUiMkyDCYs1wH1m9iYzuwK4F3hoMDs3s6vMbLuZ7TSz23p5/S4z2xQ+dphZY4/XJ5jZXjP798Ecry8NyXss1MEtIjIsg7ka6m+ADwN/RTAf9y+Bbwy0UdgCuRu4kmBO7XVmtsrdu6dHdfdbU8rfAizusZsvAI8Noo79St5joZaFiMjwDOZqqC7gP8PHUJwP7HT3XQBmthJYRt9zaa8A/iG5YGbnAacRtGKWDPHYJzgeFqfU2TMRkTFjMFdDzTGz+81sq5ntSj4Gse9a4OWU5fpwXW/HmAHUAQ+Hy3nAPwOfHsRxBhRv0lAfIiIjMZg+i/8haFV0AJcD3wW+N4jtrJd13kfZ5cD97t4ZLn8EWO3uL/dRPjiA2Y3JSZlisVif5ZIjztaoz0JEZFgGExal7v5rwNx9j7t/DrhiENvVA9NTlqcB+/oou5yg4zzpQuBmM3sR+BrwATP7cs+N3P0ed1/i7ktqamr6rEi8uY1xxQWUFGqoDxGR4RhMB/ex8LTQ82Z2M7AXmDyI7dYBc8ysLtxmOfCenoXMbB5QBTyeXOfu7015/YPAEnc/6WqqwYo3J9RfISIyAoNpWXwcKAM+CpwHvJ9BzGfh7h0EI9auAbYB97n7FjO7w8yWphRdAax0975OUY1YvKmNSeqvEBEZtsFcDbUufNoM/PlQdu7uq4HVPdZ9tsfy5wbYx7eBbw/luD3Fm9uYVVM+kl2IiOS0PsPCzFb1t6G7L+3v9bEk3tzG+XWn3KjqIiJjRn8tiwsJLn29F/gjvV/dNOa1d3ZxqLVdl82KiIxAf2FxOsHd1ysIOqZ/Dtzr7ltGo2LpcrBFQ32IiIxUnx3c7t7p7g+5+/XABcBO4NFwWI5TRqwpvMdCV0OJiAxbvx3cZlYMvI2gdTET+DfggeirlT4aF0pEZOT66+D+DnA28Avg8+7+7KjVKo26R5xVWIiIDFt/LYv3Ay3AXOCjqVNaAO7uEyKuW1p0tyzUZyEiMmx9hoW7D+aGvTEv3txGSWEe5UUa6kNEZLiyIhD6Ewz1UUxKy0hERIYoB8KiTf0VIiIjlPVhEWtSWIiIjFTWh0W8OUHNeN1jISIyElkdFp1dzsEWtSxEREYqq8PiUGuCLtc9FiIiI5XVYaG7t0VE0iPSsDCzq8xsu5ntNLOTZrozs7vMbFP42GFmjeH6RWb2uJltMbOnzezdwzl+vCl597b6LERERmIw06oOi5nlA3cTjFxbD6wzs1XuvjVZxt1vTSl/C7A4XGwFPuDuz5vZVGCDma1x98ah1CHZstAseSIiIxNly+J8YKe773L3BLASWNZP+RUEc2fg7jvc/fnw+T7gAFAz1Aokw6JGYSEiMiJRhkUtweRJSfXhupOY2QygDni4l9fOB4qAF4ZagVhzG0X5eUwojawBJSKSE6IMi97G1/A+yi4H7nf3zhN2YDYF+B7w5+7eddIBzG40s/Vmtj4Wi52003hTgknjijTUh4jICEUZFvXA9JTlacC+PsouJzwFlWRmEwhm5/uMuz/R20bufo+7L3H3JTU1J5+l0lAfIiLpEWVYrAPmmFmdmRURBMKqnoXMbB5QBTyesq4I+DHwXXf/f8OtQBAWuhJKRGSkIgsLd+8AbgbWANuA+9x9i5ndYWZLU4quAFa6
e+opquuAS4APplxau2iodWgIR5wVEZGRibTn191XA6t7rPtsj+XP9bLd94Hvj/DYNLS0adIjEZE0yNo7uA8fbae909WyEBFJg6wNi+NDfajPQkRkpLI2LGLhUB+6IU9EZOSyNiy6WxbqsxARGbHsDwu1LERERiyrwyI/z6gsLcx0VURETnnZGxZNCSaVF5GXp6E+RERGKnvDQkN9iIikTXaHhTq3RUTSIovDIqF7LERE0iQrw8Ldiek0lIhI2mRlWDS1dZDo6FLLQkQkTbIyLOJNusdCRCSdsjMsmoOhPhQWIiLpkaVhoZaFiEg6RRoWZnaVmW03s51mdlsvr9+VMrnRDjNrTHntejN7PnxcP5TjNnSPC6U+CxGRdIhs8iMzywfuBq4kmI97nZmtcvetyTLufmtK+VuAxeHzicA/AEsABzaE2x4azLFjzQnMYGKZwkJEJB2VgnGFAAALfUlEQVSibFmcD+x0913ungBWAsv6Kb8CuDd8/hZgrbsfDANiLXDVYA8cb25jYlkRBflZeZZNRGTURfltWgu8nLJcH647iZnNAOqAh4e6bW/iTbrHQkQknaIMi95G8PM+yi4H7nf3zqFsa2Y3mtl6M1sfi8W61wdDfegUlIhIukQZFvXA9JTlacC+Psou5/gpqEFv6+73uPsSd19SU1PTvT4Y6kMtCxGRdIkyLNYBc8yszsyKCAJhVc9CZjYPqAIeT1m9BvgTM6sysyrgT8J1g6IRZ0VE0iuyq6HcvcPMbib4ks8HvuXuW8zsDmC9uyeDYwWw0t09ZduDZvYFgsABuMPdDw7muK2JDloTnQoLEZE0iiwsANx9NbC6x7rP9lj+XB/bfgv41lCPGW9K3r2tPgsRkXTJumtLY9035KllISKSLlkXFsmhPmp0GkpEJG2yNizUZyEikj7ZFxZhn8XEcvVZiIikS/aFRXMbFaWFFBVk3a8mIpIxWfeNGtxjoVaFiEg6ZWlYqL9CRCSdsjAsErpsVkQkzbIvLJradNmsiEiaZVVYHGvvpKmtQ30WIiJpllVh0dCSHOpDLQsRkXTKqrCIN+mGPBGRKGRXWGhcKBGRSGRnWKjPQkQkrbIsLNRnISIShawKi1hTG+OLCygpzM90VUREskqkYWFmV5nZdjPbaWa39VHmOjPbamZbzOx/U9Z/JVy3zcz+zcxsoOPFm9vUXyEiEoHIZsozs3zgbuBKoB5YZ2ar3H1rSpk5wO3ARe5+yMwmh+vfAFwEnBsW/R1wKfBof8fUuFAiItGIsmVxPrDT3Xe5ewJYCSzrUeYG4G53PwTg7gfC9Q6UAEVAMVAIvDrQAePNCfVXiIhEIMqwqAVeTlmuD9elmgvMNbPfm9kTZnYVgLs/DjwC7A8fa9x9W88DmNmNZrbezNbHYjENIigiEpEow6K3PgbvsVwAzAEuA1YA3zCzSjObDbwGmEYQMFeY2SUn7cz9Hndf4u5LqmtqaGxtV1iIiEQgyrCoB6anLE8D9vVS5ifu3u7uu4HtBOHxTuAJd29292bgF8AF/R2sozPIoerx6rMQEUm3KMNiHTDHzOrMrAhYDqzqUeZB4HIAM6smOC21C3gJuNTMCsyskKBz+6TTUKk6uroAmFSuloWISLpFFhbu3gHcDKwh+KK/z923mNkdZrY0LLYGaDCzrQR9FJ929wbgfuAF4BlgM7DZ3X/a3/GSLYsatSxERNIusktnAdx9NbC6x7rPpjx34BPhI7VMJ/DhoRwr2bJQn4WISPplzR3c3X0WCgsRkbTLnrDockoL8ykvjrSxJCKSk7InLDq7dCWUiEhEsicsulynoEREIpI9YdGpsBARiUr2hEVXl8JCRCQiWRQWTo1GnBURiUTWhAVo7m0RkahkV1joNJSISCQUFiIiMqCsCYsJJYXUVpVmuhoiIlkpa8JixqQyaisVFiIiUciasBARkegoLEREZEAKCxERGVCkYWFmV5nZdjPbaWa39VHmOjPbamZbzOx/U9afYWa/NLNt4eszo6yriIj0LbLxvM0sH7gbuJJgru11ZrbK3bemlJkD3A5c5O6HzGxyyi6+C3zJ3dea2TigK6q6iohI/6JsWZwP7HT3Xe6eAFYCy3qUuQG4290PAbj7AQAzWwAUuPvacH2zu7dGWFcREelHlGFRC7ycslwfrks1F5hrZr83syfM7KqU9Y1m9oCZbTSzr4YtlROY2Y1mtt7M1sdisUh+CRERiTYsrJd13mO5AJgDXAasAL5hZpXh+jcCnwJeB8wCPnjSztzvcfcl7r6kpqYmfTUXEZETRDkHaT0wPWV5GrCvlzJPuHs7sNvMthOERz2w0d13AZjZg8AFwDf7OtiGDRuaw+3lZNVAPNOVGIP0vvRN703vsvF9mTGYQlGGxTpgjpnVAXuB5cB7epR5kKBF8W0zqyY4/bQLaASqzKzG3WPAFcD6AY633d2XpPMXyBZmtl7vzcn0vvRN703vcvl9iew0lLt3ADcDa4BtwH3uvsXM7jCzpWGxNUCDmW0FHgE+7e4N7t5JcArq12b2DMEprf+Oqq4iItI/c+/ZjXBqyuXEH4jem97pfemb3pve5fL7kk13cN+T6QqMYXpveqf3pW96b3qXs+9L1rQsREQkOtnUshARkYhkRVgMZgyqXGBm083skXA8rS1m9rFw/UQzW2tmz4c/qzJd10wxs/zwRs+fhct1ZvbH8L35oZkVZbqOo83MKs3sfjN7LvzsXKjPTMDMbg3/lp41s3vNrCRXPzOnfFikjEH1VmABsCIcLiQXdQCfdPfXENyXclP4XtwG/Nrd5wC/Dpdz1ccIrs5L+ifgrvC9OQT8RUZqlVn/Cjzk7vOBhQTvT85/ZsysFvgosMTdzwbyCW4ByMnPzCkfFgxuDKqc4O773f2p8HkTwR99LcH78Z2w2HeAazJTw8wys2nA24BvhMtGcA/P/WGRnHtvzGwCcAnhDa/unnD3RvSZSSoASs2sACgD9pOjn5lsCIvBjEGVc8Ih3RcDfwROc/f9EAQKMLnvLbPavwB/zfERjCcBjeE9QZCbn51ZQAz4n/D03DfMrBx9ZnD3vcDXgJcIQuIwsIEc/cxkQ1gMZgyqnBIO6f4j4OPufiTT9RkLzOztwAF335C6upeiufbZKQBeC/ynuy8GWsjBU069CftplgF1wFSgnOB0d0858ZnJhrAYzBhUOcPMCgmC4gfu/kC4+lUzmxK+PgU4kKn6ZdBFwFIze5HgVOUVBC2NyvAUA+TmZ6ceqHf3P4bL9xOEhz4z8GZgt7vHwvHrHgDeQI5+ZrIhLLrHoAqvSlgOrMpwnTIiPAf/TWCbu9+Z8tIq4Prw+fXAT0a7bpnm7re7+zR3n0nwGXnY3d9LMMzMn4bFcu69cfdXgJfNbF646k3AVvSZgeD00wVmVhb+bSXfm5z8zGTFTXlmdjXBf4n5wLfc/UsZrlJGmNnFwG+BZzh+Xv5vCfot7gPOIPgD+DN3P5iRSo4BZnYZ8Cl3f7uZzSJoaUwENgLvc/e2TNZvtJnZIoJO/yKCgTz/nOAfyZz/zJjZ54F3E1xpuBH4PwR
9FDn3mcmKsBARkWhlw2koERGJmMJCREQGpLAQEZEBKSxERGRACgsRERmQwkIkZGbN4c+ZZtZzvviR7vtveyz/IZ37F4mawkLkZDOBIYVFOPpxf04IC3d/wxDrJJJRCguRk30ZeKOZbQrnM8g3s6+a2Toze9rMPgzBzX3h/CH/S3AjJGb2oJltCOdAuDFc92WCkUs3mdkPwnXJVoyF+37WzJ4xs3en7PvRlHkmfhDeRYyZfdnMtoZ1+dqovzuSkwoGLiKSc24jvMMbIPzSP+zurzOzYuD3ZvbLsOz5wNnuvjtc/pC7HzSzUmCdmf3I3W8zs5vdfVEvx3oXsIhgHonqcJvfhK8tBs4iGHvo98BFZrYVeCcw393dzCrT/tuL9EItC5GB/QnwATPbRDB0yiRgTvjakylBAfBRM9sMPEEwwOUc+ncxcK+7d7r7q8BjwOtS9l3v7l3AJoLTY0eAY8A3zOxdQOuIfzuRQVBYiAzMgFvcfVH4qHP3ZMuipbtQMObUm4EL3X0hwbhBJYPYd19SxxvqBArCeRTOJxhZ+BrgoSH9JiLDpLAQOVkTMD5leQ3wV+Hw75jZ3HCCoJ4qgEPu3mpm8wmmtk1qT27fw2+Ad4f9IjUEs9Y92VfFwrlKKtx9NfBxglNYIpFTn4XIyZ4GOsLTSd8mmKN6JvBU2Mkco/epNB8C/tLMnga2E5yKSroHeNrMngqHRk/6MXAhsJlgEp2/dvdXwrDpzXjgJ2ZWQtAquXV4v6LI0GjUWRERGZBOQ4mIyIAUFiIiMiCFhYiIDEhhISIiA1JYiIjIgBQWIiIyIIWFiIgMSGEhIiID+v+fan/I/u4RuwAAAABJRU5ErkJggg==\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "ax = scores.plot()\n", "ax.set_xlabel(\"Iterations\")\n", "_ = ax.set_ylabel(\"Macro F1\")" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "It's a different picture than we get from the error term:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYUAAAEKCAYAAAD9xUlFAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAAIABJREFUeJzt3Xl4XGd59/HvPYt2WbYWS94tO96dzVYSJyEkb+JsBGIKBQJNCAkQAgVKXyhvoAtteNvSCwothVKykwJpIYRgaHZw9s1yEjte4uDdijfZkmVr18zc/WPGQnZkW7I1OpqZ3+e65vKcmTMzt8bH/uk8z3Oex9wdERERgFDQBYiIyMihUBARkV4KBRER6aVQEBGRXgoFERHppVAQEZFeCgUREemlUBARkV4KBRER6RUJuoDBqqys9KlTpwZdhohIRlmxYsVed6863n4ZFwpTp06lvr4+6DJERDKKmW0dyH5qPhIRkV4KBRER6aVQEBGRXgoFERHppVAQEZFeCgUREemlUBARkV4ZFwp7DnYFXYKISNbKvFA40ElPPBF0GSIiWSnjQsGBbU3tQZchIpKVMi4UADY1tgVdgohIVsrQUGgNugQRkayUcaEQCRkbFQoiImmRcaGQHwmr+UhEJE0yLxSiITbtVSiIiKRD5oVCJERTWzfNbd1BlyIiknUyMhQANu1Vv4KIyFBLWyiY2SQzW2Zm68xsjZn9WT/7mJl918w2mNkqM1twvPfNi4QB2Kh+BRGRIZfO5ThjwBfd/RUzKwVWmNnj7r62zz5XAjNSt3OAH6T+PKq8SIho2NTZLCKSBmk7U3D3ne7+Sur+QWAdMOGI3ZYA93rSi8BoMxt3rPc1YEpFsa5VEBFJg2HpUzCzqcCZwEtHPDUB2N5nu4G3B8fbTKss1rUKIiJpkPZQMLMS4BfAF9z9wJFP9/MS7+c9bjKzejOrb2xsZPrYErY1tRPTxHgiIkMqraFgZlGSgfATd3+gn10agEl9ticCO47cyd1vc/c6d6+rqqpiWmUxPXFne3NHegoXEclR6Rx9ZMCdwDp3//ZRdlsKfDQ1CmkR0OLuO4/33tOqSgDNgSQiMtTSOfrofOA64HUzey312FeByQDu/h/AQ8C7gA1AO3DDQN54elUxABsbW7lkTvXQVi0iksPSFgru/iz99xn03ceBPx3se48uyqOiOE/DUkVEhljGXdF8yLSqYoWCiMgQy9xQqCzRsFQRkSGWuaFQVcy+tm5a2nuCLkVEJGtkbChMT41A2qiJ8UREhkzGhsKsmlIAXm9oCbgSEZHskbGhMKm8iCkVRTy5fk/QpYiIZI2MDQWAi2ZW8cKmfXT2xIMuRUQkK2R2KMwaS2dPgpc3NwVdiohIVsjoUFg0rYK8SIgn1zcGXYqISFbI6FAozAtzTm05T76pfgURkaGQ0aEAySakTY1tbG9qD7oUEZGMlwWhUAWgUUgiIkMg40NhWmUxk8oLeepN9SuIiJysjA8FM+OimWN5fuM+umIamioicjIyPhQg2YTU3h1n+ebmoEsREcloWREK506vIC8cUr+CiMhJyopQKMqLsGh6BY+u3UVy3R4RETkRWREKAO8+bRzbmzpYqQnyREROWNaEwuXzasgLh/j1yh1BlyIikrGyJhTKCqNcOKuK36zaQTyhJiQRkRORNaEA8J7Tx7P7QBfLt2iCPBGRE5FVobB4zlgKo2E1IYmInKCsCoWivAiL51bz8Opd9MQTQZcjIpJxsioUAN5z2jia2rp5bsPeoEsREck4WRcKF86qorQgwq9X7gy6FBGRjJN1oZAfCXPFvBoeW7NLy3SKiAxS1oUCJEchHeyKaeZUEZFByspQOG96BeXFefxmlZqQREQGIytDIRIOccX8Gp5Yu5v27ljQ5YiIZIysDAVIzoXU0RPnd29o5lQRkYHK2lA4p7aCqtJ8fqNRSCIiA5a1oRAOGVedOo5l6/dwsLMn6HJERDJC1oYCJJuQumIJnli3O+hSREQyQlaHwoLJYxhXVqAmJBGRAcrqUAilmpCe/n0jLe1qQhIROZ6sDgVIXsjWE3ceXbMr6FJEREa8rA+F0yaWMWF0IY+tVb+CiMjxZH0omBmL54zl2Q2NdHRrLiQRkWNJWyiY2V1mtsfMVh/l+YvMrMXMXkvd/iZdtSyeW01nT0LTaYuIHEc6zxTuAa44zj7PuPsZqdut6SrknNoKSvMjPK4mJBGRY0pbKLj708CIWCw5LxLiwllV/PaN3SQSHnQ5IiIjVtB9Cuea2Uoze9jM5qXzgy6dW83e1m5ea9ifzo8REcloQYbCK8AUdz8d+DfgwaPtaGY3mVm9mdU3Np7YGgkXzRxLOGQ8oSYkEZGjCiwU3P2Au7em7j8ERM2s8ij73ubude5eV1VVdUKfV1YU5Zzack15ISJyDIGFgpnVmJml7p+dqmVfOj9z8Zxq3tzdytZ9ben8GBGRjJXOIan3AS8As8yswcw+bmY3m9nNqV3+GFhtZiuB7wLXuHtae4EXz6kG0CgkEZGjiKTrjd39w8d5/nvA99L1+f2ZXFHErOpSHl+7m09cMG04P1pEJCMEPfpo2F02r5rlW5poausOuhQRkREn50Lh8nk1JByNQhIR6UfOhcK88aOYOKaQRzRrqojI2+RcKJgZV8yr4dnf79UynSIiR8i5UAC4fH4N3fEEy9af2IVwIiLZKidDYcHkMVSW5PPoajUhiYj0lZOhEA4Zl82rZtn6PXT2aI0FEZFDcjIUAK6YV0N7d5xnfq81FkREDsnZUFg0rYJRBREeUROSiEivnA2FvEiIxXOqeWLdbnriiaDLEREZEXI2FCA5Cqmlo4cXN6V1Hj4RkYyR06Fw4cwqivPCPPT6zqBLEREZEXI6FAqiYS6ZU80jq3epCUlEhBwPBYCrThtHc7uakEREQKHQ24T0P6vUhCQikvOhUBANs3huNY+uUROSiEjOhwLAVacmm5Be2KgmJBHJbQoF4J0zqyjJj2gUkojkPIUCqSakOWN5RE1IIpLjFAopV502nv3tPTyvJiQRyWEKhZQLZlRSmh/hf1btCLoUEZHAKBRSCqJhLp
tXw8Ord9EV03TaIpKbFAp9LDljPAc7YzypFdlEJEcpFPo4b3oFlSV5LH1NTUgikpuOGwpmFjazbw5HMUGLhENcdeo4nli3m4OdPUGXIyIy7I4bCu4eBxaamQ1DPYG7+owJdMUSPLZmd9CliIgMu8gA93sV+JWZ/RxoO/Sguz+QlqoCtGDyaCaOKeRXK3fw/oUTgy5HRGRYDTQUyoF9wMV9HnMg60LBzFhyxnj+46lNNB7soqo0P+iSRESGzYBCwd1vSHchI8mSMybw/WUbeej1nVx/3tSgyxERGTYDGn1kZhPN7JdmtsfMdpvZL8wsa9tWZlaXMrumlAdfeyvoUkREhtVAh6TeDSwFxgMTgF+nHstaf3TmBF7dtp91Ow8EXYqIyLAZaChUufvd7h5L3e4BqtJYV+A+dNYkCqNh7nhmc9CliIgMm4GGwl4zuzZ1zULYzK4l2fGctUYX5fHBuoksXfkWuw90Bl2OiMiwGGgo3Ah8ENgF7AT+OPVYVrvxHbXEEs6Pnt8SdCkiIsNiQFc0A+9396vdvcrdx7r7e9196zDUF6gpFcVcPreGn7y0jfbuWNDliIik3UCvaF4yDLWMSJ98Zy0tHT3cv6Ih6FJERNJuoM1Hz5nZ98zsAjNbcOiW1spGiIVTyjlz8mjufHYz8YQHXY6ISFoNNBTOA+YBtwL/nLp9K11FjTSfvGAaW/e18+iaXUGXIiKSVse9otnMQsAP3P1nw1DPiHT5vBpOGVvCPz+2nsvmVhMJa8ZxEclOA+lTSACfHewbm9ldqSugVx/leTOz75rZBjNbNZKbo8Ih4y8un8XGxjb1LYhIVhvor7yPm9mXzGySmZUfuh3nNfcAVxzj+SuBGanbTcAPBlhLIC6bW82CyaP5zhNv0tGt5TpFJDsN5jqFPwWeBlakbvXHeoG7Pw00HWOXJcC9nvQiMNrMxg2wnmFnZtxy5Rx2H+jiHl23ICJZakCh4O61/dymneRnTwC299luSD02Yp1dW84ls8fy709uYH97d9DliIgMuWOGgpl9uc/9Dxzx3D+c5Gf3t5Jbv2M+zewmM6s3s/rGxsaT/NiT8xdXzKK1K8a/P7kx0DpERNLheGcK1/S5/5UjnjtWf8FANACT+mxPBHb0t6O73+bude5eV1UV7Dx8s2tG8b4zJ3LP81vY3tQeaC0iIkPteKFgR7nf3/ZgLQU+mhqFtAhocfedJ/mew+JLl8/EgG89tj7oUkREhtTxQsGPcr+/7cOY2X3AC8AsM2sws4+b2c1mdnNql4eATcAG4HbgMwMvO1jjygr55AXT+NVrO1i5fX/Q5YiIDBlzP/r/7WYWB9pInhUUAofaSwwocPdo2is8Ql1dndfXH3Pg07Bo7Ypx0TeXMa2qhP++aRFmJ3viJCKSPma2wt3rjrffMc8U3D3s7qPcvdTdI6n7h7aHPRBGkpL8CF9YPJOXNzfx+NrdQZcjIjIkNF/DSbjmrElMryrmGw+/QU88EXQ5IiInTaFwEiLhEH951Rw27W3jrme1bKeIZD6Fwkm6eHY1i+dU86+//T079ncEXY6IyElRKAyBr71nLgl3vv6btUGXIiJyUhQKQ2BSeRGfu3gGD6/exZPr9wRdjojICVMoDJFPXFDLtKpivrZ0DZ09mkVVRDKTQmGI5EfCfH3JfLbua+evHlxNTKORRCQDKRSG0PmnVPL5i0/h/hUN3PSfK2jvjgVdkojIoCgUhtj/vWwW//+983ly/R6uue1FGg92BV2SiMiAKRTS4NpFU7jtujre3H2QD/7wBZratPaCiGQGhUKaLJ5bzY8/fg479ndw07316nwWkYygUEijuqnlfPuDZ1C/tZkv37+KY00+KCIyEigU0uyq08bx/66YzdKVO/j2428GXY6IyDFFgi4gF9x84TS27mvj3363gdMmjubSudVBlyQi0i+dKQwDM+Pr753PrOpS/nbpGjq61b8gIiOTQmGYRMMhbl0yj7f2d/D9ZRuCLkdEpF8KhWF0zrQK3nfmBG57ehObGluDLkdE5G0UCsPslnfNJj8S4mtL12g0koiMOAqFYTa2tIAvXjaTZ36/l4de3xV0OSIih1EoBODaRVOYP2EUX3lgFRvVjCQiI4hCIQCRcIgf/MlCouEQH79nOfvbNQ2GiIwMCoWATCov4ofXLWTH/k4+/eNX6NFU2yIyAigUAlQ3tZxvvP9UXti0j79+cLU6nkUkcLqiOWDvWzCRjY2tfH/ZRorzI/zVVXMws6DLEpEcpVAYAb502Szau+Pc+exmEu78zbvnKhhEJBAKhRHAzJJBgHHXc5txh6+9R8EgIsNPoTBCmBl//e45hAzueHYzxflh/uLy2UGXJSI5RqEwgpgZf3nVHNq643x/2UamVZbw/oUTgy5LRHKIRh+NMGbGrUvmcf4pFdzywCpe3twUdEkikkMUCiNQNBzi3z+ykEnlRXzqP+vZuq8t6JJEJEcoFEaosqIod11/Fg7cqKueRWSYKBRGsKmVxfzw2oVsb+rgpntX0NmjxXlEJL0UCiPcOdMq+NYHT+flLU186ecrSSR01bOIpI9GH2WAq08fz879Hfzjw28wYXQhX3nXnKBLEpEspVDIEDe9cxoNzR388OlNTBxTyHXnTg26JBHJQgqFDGFm/O3V89jZ0sHXlq5hwphCLp5dHXRZIpJl1KeQQcIh47sfPpN548v47E9fZfVbLUGXJCJZJq2hYGZXmNl6M9tgZrf08/zHzKzRzF5L3T6RznqyQVFehDuvr2NMUR433rOct/Z3BF2SiGSRtIWCmYWB7wNXAnOBD5vZ3H52/W93PyN1uyNd9WSTsaMKuPuGs+joiXPtHS+x50Bn0CWJSJZI55nC2cAGd9/k7t3AfwFL0vh5OWVmdSn33HAWew508uHbX6TxYFfQJYlIFkhnKEwAtvfZbkg9dqT3m9kqM7vfzCalsZ6ss3BKOXffcDY79nfykdtfZF+rgkFETk46Q6G/xQCOvPLq18BUdz8NeAL4Ub9vZHaTmdWbWX1jY+MQl5nZzq4t566PncX25nY+cvtL6mMQkZOSzlBoAPr+5j8R2NF3B3ff5+6Hfr29HVjY3xu5+23uXufudVVVVWkpNpOdO72Cu64/ix37O1jyved4dVtz0CWJSIZKZygsB2aYWa2Z5QHXAEv77mBm4/psXg2sS2M9We28Uyp54DPnUZgX4prbXmTpyh3Hf5GIyBHSFgruHgM+CzxK8j/7n7n7GjO71cyuTu32eTNbY2Yrgc8DH0tXPblgRnUpD37mfE6bWMbn73uVW3+9VpPoicigmHtmTbBWV1fn9fX1QZcxonXF4vzD/6zjRy9sZXZNKf9yzRnMrhkVdFkiEiAzW+HudcfbT1c0Z6H8SJi/WzKfu284i72t3Vz9vee47+VtQZclIhlAoZDF/s+ssTzyhQs4d1oFX3ngdQWDiByXQiHLVZbkc/tH67hoVhVf/eXr/PLVhqBLEpERTKGQA/IiIf7j2oUsqq3gSz9fxcOv7wy6JBEZoRQKOaIgGuaO6+s4fWIZn7vvVX5Wv/34LxKRn
KNQyCHF+RHuufFszp1ewZfvX8U/PrxOy3uKyGEUCjlmVEGUuz92FtcumswPn9rEp368gtauWNBlicgIoVDIQZFwiK8vmc/fvmcuv123m4u++SQ/fnErPfFE0KWJSMAUCjnKzPjY+bX84tPnUVtZxF89uJrLv/M0j6/dHXRpIhIghUKOO3PyGH72qXO57bqFmMEn763nsz99haa27qBLE5EAKBQEM+OyeTU88oV38sVLZ/Loml1c+u2nNHRVJAcpFKRXNBzic5fM4NefewfjRhfw6Z+8wifvrdcaDSI5RKEgbzO7ZhS//Mz53HLlbJ75fSOXfvspbn96kzqiRXKAQkH6FQ2HuPnC6Tz+5xdy7rQK/v6hdVzyz0/xo+e30N6tIawi2UpTZ8txuTu/e2MP31u2gVe37aesMMp1i6bw0fOmMLa0IOjyRGQABjp1tkJBBmXF1iZ++NQmHl+3m2goxPsWTOATF0zjlLElQZcmIsegUJC02ry3jTuf3cTP6xvoiiVYPKeamy+cRt3U8qBLE5F+KBRkWOxr7eLeF7Zy7wtbaG7vYcHk0XzygmksnltNNKwuK5GRQqEgw6qjO87PV2zn9mc2sb2pg8qSfD5QN5FrzprElIrioMsTyXkKBQlEPOE89eYefvrSdpat30M84Zw6oYwr5tdw+bwa9T2IBEShIIHb1dLJg6+9xSOrd/Ha9v0AjCsr4MzJo1kweQznTa9k7vhRAVcpkhsUCjKi7Gzp4Im1u3l5SzOvbG3uvUr67Knl3PiOWi6dW004ZAFXKZK9FAoyou050MnSlTu45/ktNDR3MKm8kPOnVzJ/QhmnTihjzrhR5EXUUS0yVBQKkhFi8QSPr93Nfy3fzsqG/exv7wFgVEGEK+eP4+ozxrNoWoXOIkROkkJBMo6709DcwaqGFp5Yt5vH1uyirTvOmKIo81NnD3PHjWLB5DFMKi/ETEEhMlADDYXIcBQjMhBmxqTyIiaVF3HVaePo7Inzuzf2sOyNPazbdYB7nt9Cdyw5KV/NqALOri3nrNpyzp5azoyxJYR0NiFy0hQKMmIVRMO869RxvOvUcQD0xBNsbGxl+ZZmXt7cxIub9rF05Q4g2dy0YMoYppQXMX50IeNGFzKrulRhITJICgXJGNFwiNk1o5hdM4rrFk3B3dne1MHyLU3Ub23i1W37WbG1mYOdf5jFtbQgwpmTxzC7ppS8cIhQyIiGjMrSfGrKChhfVsiUiiIKouEAfzKRkUOhIBnLzJhcUcTkiiLev3Bi7+MHO3vYsb+T1W+1sGJbcgjsi5v2EU848cTb+9DywiFOn1TGObUVnDl5NIV5YSKhENGwUVtZzOiivOH8sUQCpY5mySnuTk/caWztYuf+Dna0dLLmrRZe3NzE6rda+g2NGWNLqJtazszqEiIhIxwKkRcJUVtZzOyaUorz9buVjHzqaBbph5mRFzEmjC5kwuhCAK4+fTwArV0x1u86SHcsQTzhdMXirNt5gPqtzfxm1Y7DmqX6mlJRRFVJPiEzQiEoyY+yYMpozp5azqkTy8iPqGlKModCQSSlJD/CwiljDnvskjnVACQSzv6Ont4mqI6eOBv2tLJu5wHe2HWAltRzCYfNe1t5Yt1uINk0NaY4SmE0TEE0THF+hNKCCKUFUUryw4RDlgwTM6pK85leVUxtZYn6OSQwCgWRAQiFjPLiw/sWaiuLuXRudb/7N7V1s3xLE69sa6alvYeOnjgd3XHaumM0tXWzdV87rV0xEgkn7k487hzsOvxMZExRlJqyQqpH5VMQCRONhIiGjIqSPGZUlzKzupTaimKK8sOaplyGjEJBJA3Ki/O4fF5yZtiBau2KsWVvG5v2trFtXxu7DnSyq6WT3Qe66IrFicWdnkSC3Qe6eq/XOCQcMgoiIQqiYfIjIfKjYQySYdSTfO340QVMqShmSnkR+dEQ7d3JoDKD8WWFTEoN5w2HjHjCiSUSFOdFmDimkPLiPMyMrlicrfva2bK3jfxomPFlBYwbXUiJ+lWyhv4mRUaIkvwI8yeUMX9C2TH3iyecbU3tvLn7INv2tdPZE6czFqezJ0FXLE5XT4LOWAJ3pzAapjAvTMiMt/Z3sHVfG0+/2UhPPEFRXoTCvDDuzt7W7mN+ZmE0zJiiKLsOdNJPXzylBRHGlRVQU1ZIZUkeXbEEBzp6ONAZozgvzJSKYqZWFDFudCF9LxsJmxENh4iEk01osUSCWDz5AVWpYcNVJflEdCY0bBQKIhkmHEoOla2tPLHFiw6NOOw7TUhHd5y39newY38HDkRSfR2tXTEamttpaO6gua2biWMKmVZVwtTKYrpjCXa2dLCrpZOdLZ299zfsPkhhXphRhVHKCqMc7Ozh0TW7aGo7dvAcTcjobR5zwIC8SCh5RhQJkx8NUZQXpjAaJj8SJhpOBs2h+g909tDaGSM/GqasMMLowjxKCiLkp86sCqIhxhTlUV6c19tE2NoVo7UzRtydcWUFjE8NTCgtiJ7Qz5BJFAoiOaa/OaMK88KcMrYkrYsgtXT0sOdAZ++2kzzricWd7njyzCYSDhEJGe7Q2JoMm10tncnmMvvDC7tiCbrjidRZUbIZrL07Rnt3jJ640xNPjiArLYgwpiiPSeVFdPUkz1427W2ltTNGVyxBZ6p5rb+zn/6UFUaZOKaQiWMKqSzJ7w2V/EiYo104b2bkR0JEw6Hes6JDwRUNJ4c354dDYNDc1sO+ti72tXYTTzhmyR87FDIiISMSTobhlIoipleVMHFM0ZBPFqlQEJFhUZY6cxjEK9JWS1/uzoHO5ACAprYuwBhVEKGkIIJh7GhJnkG91dxBQ3MHDc3tbGxsY/mW5mTT3SBCZTBClgzOY11KlhcJURgN0x1L0BNP4EBRXpjivAhF+WFwiLv3NskNhEJBRHKamfUGVn9NcjVlBSyYPKafVya5O7FjpEKy097pSZ3d9MQTvWcz3X3OeBynojif8uI8xhRFD+tHSaTeI55w2rtjbNnXxsY9bWxsbKUrlug98wBoT501tXXHMVJNgSHj+QF+H2kNBTO7AvhXIAzc4e7fOOL5fOBeYCGwD/iQu29JZ00iIkPJLNkcdDS9l5vkn/hnhEJGXqqZqDAvTEVJPgunlA/qPb7zoQF+1mCLGygzCwPfB64E5gIfNrO5R+z2caDZ3U8BvgP8U7rqERGR40vnOK+zgQ3uvsndu4H/ApYcsc8S4Eep+/cDl5hWThERCUw6Q2ECsL3PdkPqsX73cfcY0AJUpLEmERE5hnSGQn+/8R/ZGzOQfTCzm8ys3szqGxsbh6Q4ERF5u3SGQgMwqc/2RGDH0fYxswjJMWhNR76Ru9/m7nXuXldVVZWmckVEJJ2hsByYYWa1ZpYHXAMsPWKfpcD1qft/DPzOM22BBxGRLJK2IanuHjOzzwKPkhySepe7rzGzW4F6d18K3An8p5ltIHmGcE266hERkeNL63UK7v4Q8NARj/1Nn/udwAfSWYOIiAxcxi3HaWYHgfVB1zGCVAJ7gy5ihNB3cTh9H4fL9e9jirsft1M2E6e5WD+QdUZzhZnV
6/tI0ndxOH0fh9P3MTCapFxERHopFEREpFcmhsJtQRcwwuj7+AN9F4fT93E4fR8DkHEdzSIikj6ZeKYgIiJpklGhYGZXmNl6M9tgZrcEXc9wMrNJZrbMzNaZ2Roz+7PU4+Vm9riZ/T7159FXA8lCZhY2s1fN7Dep7Vozeyn1ffx36mr6nGBmo83sfjN7I3WcnJurx4eZ/Xnq38lqM7vPzApy+dgYjIwJhQGuz5DNYsAX3X0OsAj409TPfwvwW3efAfw2tZ1L/gxY12f7n4DvpL6PZpJrduSKfwUecffZwOkkv5ecOz7MbALweaDO3eeTnFHhGnL72BiwjAkFBrY+Q9Zy953u/krq/kGS/+AncPiaFD8C3htMhcPPzCYCVwF3pLYNuJjk2hyQQ9+HmY0C3kly6hjcvdvd95O7x0cEKExNtFkE7CRHj43ByqRQGMj6DDnBzKYCZwIvAdXuvhOSwQGMDa6yYfcvwJeBRGq7AtifWpsDcusYmQY0AnenmtPuMLNicvD4cPe3gG8B20iGQQuwgtw9NgYlk0JhQGsvZDszKwF+AXzB3Q8EXU9QzOzdwB53X9H34X52zZVjJAIsAH7g7mcCbeRAU1F/Uv0mS4BaYDxQTLLZ+Ui5cmwMSiaFwkDWZ8hqZhYlGQg/cfcHUg/vNrNxqefHAXuCqm+YnQ9cbWZbSDYlXkzyzGF0qskAcusYaQAa3P2l1Pb9JEMiF4+PxcBmd2909x7gAeA8cvfYGJRMCoWBrM+QtVLt5XcC69z9232e6rsmxfXAr4a7tiC4+1fcfaK7TyV5LPzO3f8EWEZybQ7Ire9jF7DdzGalHroEWEtuHh/bgEVmVpT6d3Pou8jJY2OwMuriNTN7F8nfBg+tz/D3AZc0bMzsHcAzwOv8oQ39qyT7FX6KWsQzAAACvUlEQVQGTCb5j+ED7v621euymZldBHzJ3d9tZtNInjmUA68C17p7V5D1DRczO4Nkp3sesAm4geQvfjl3fJjZ3wEfIjlq71XgEyT7EHLy2BiMjAoFERFJr0xqPhIRkTRTKIiISC+FgoiI9FIoiIhIL4WCiIj0UihIzjGz1tSfU83sI0P83l89Yvv5oXx/kXRTKEgumwoMKhRSs/Uey2Gh4O7nDbImkUApFCSXfQO4wMxeS82/Hzazb5rZcjNbZWafguTFcam1LH5K8uJBzOxBM1uRmrP/ptRj3yA5M+drZvaT1GOHzkos9d6rzex1M/tQn/d+ss86CD9JXYWLmX3DzNamavnWsH87kpMix99FJGvdQupKaIDUf+4t7n6WmeUDz5nZY6l9zwbmu/vm1PaN7t5kZoXAcjP7hbvfYmafdfcz+vms9wFnkFznoDL1mqdTz50JzCM5F89zwPlmthb4I2C2u7uZjR7yn16kHzpTEPmDy4CPmtlrJKcPqQBmpJ57uU8gAHzezFYCL5KcqHEGx/YO4D53j7v7buAp4Kw+793g7gngNZLNWgeATuAOM3sf0H7SP53IACgURP7AgM+5+xmpW627HzpTaOvdKTnX0mLgXHc/neQ8OgUDeO+j6Tv/ThyIpOb9P5vkrLjvBR4Z1E8icoIUCpLLDgKlfbYfBT6dmqIcM5uZWqjmSGVAs7u3m9lsksujHtJz6PVHeBr4UKrfoorkKmkvH62w1LoZZe7+EPAFkk1PImmnPgXJZauAWKoZ6B6SaxxPBV5JdfY20v+SjY8AN5vZKmA9ySakQ24DVpnZK6mpvA/5JXAusJLk4i5fdvddqVDpTynwKzMrIHmW8ecn9iOKDI5mSRURkV5qPhIRkV4KBRER6aVQEBGRXgoFERHppVAQEZFeCgUREemlUBARkV4KBRER6fW/p6nEC7GROwsAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "err_ax = pd.Series(model.errors).plot()\n", "err_ax.set_xlabel(\"Iterations\")\n", "_ = err_ax.set_ylabel(\"Error\")" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Early stopping\n", "\n", "The above plot of dev-set performance suggests a simple strategy of __early stopping__: identify the iteration $i$ at which dev-set performance peaked and train our models for exactly $i$ iterations when doing our final test-set run. This value $i$ can be set differently for different models; selecting this point could even be done automatically during [hyperparameter](#Hyperparameter-optimization).\n", "\n", "If it is important to test the same model that is being used to create the dev-set performance curve, then one needs to store all the model parameters for the currently best model and then \"rewind\" to that stage once one decides that further training isn't helping. This is arguably the safest thing to do, since it keeps the actual parameters that maximized dev-set performance; see below on [the impact of random initializations](#The-role-of-random-parameter-initialization).\n", "\n", "For more on early stopping schemes, see [Prechelt 1997](http://page.mi.fu-berlin.de/prechelt/Biblio/stop_tricks1997.pdf)." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Learning curves with confidence intervals\n", "\n", "I frankly think the best response to all this is to accept that incremental performance plots like the above are how we should be assessing our models. This exposes all of the variation that we actually observe. \n", "\n", "In addition, in deep learning, we're often dealing with classes of models that are in principle capable of learning anything. The real question is implicitly how efficiently they can learn given the available data and other resources. Learning curves bring this our very clearly.\n", "\n", "We can improve the curves by adding confidence intervals to them derived from repeated runs. Here's a plot from a paper I recently wrote with Nick Dingwall ([Dingwall and Potts 2018](https://arxiv.org/abs/1803.09901)):\n", "\n", "\n", "\n", "I think this shows very clearly that, once all is said and done, the Mittens model (red) learns faster than the others, but is indistinguishable from the Clinical text GloVe model (blue) after enough training time. Furthermore, it's clear that the other two models are never going to catch up in the current experimental setting. A lot of this information would be lost if, for example, we decided to stop training when dev set performance reached its peak and report only a single F1 score per class." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## The role of random parameter initialization\n", "\n", "Most deep learning models have their parameters initialized randomly, perhaps according to some heuristics related to the number of parameters ([Glorot and Bengio 2010](http://proceedings.mlr.press/v9/glorot10a.html)) or their internal structure ([Saxe et al. 2014](https://arxiv.org/abs/1312.6120)). This is meaningful largely because of the non-convex optimization problems that these models define, but it can impact simpler models that have multiple optimal solutions that still differ at test time. \n", "\n", "There is growing awareness that these random choices have serious consequences. 
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## The role of random parameter initialization\n", "\n", "Most deep learning models have their parameters initialized randomly, perhaps according to some heuristics related to the number of parameters ([Glorot and Bengio 2010](http://proceedings.mlr.press/v9/glorot10a.html)) or their internal structure ([Saxe et al. 2014](https://arxiv.org/abs/1312.6120)). This is meaningful largely because of the non-convex optimization problems that these models define, but it can impact simpler models that have multiple optimal solutions that still differ at test time. \n", "\n", "There is growing awareness that these random choices have serious consequences. For instance, [Reimers and Gurevych (2017)](https://aclanthology.coli.uni-saarland.de/papers/D17-1035/d17-1035) report that different initializations for neural sequence models can lead to statistically significant differences in performance, and they show that a number of recent systems are indistinguishable in terms of raw performance once this source of variation is taken into account.\n", "\n", "This shouldn't surprise practitioners, who have long struggled with the question of what to do when a system experiences a catastrophic failure as a result of unlucky initialization. (I think the answer is to report this failure rate.)\n", "\n", "The code snippet below lets you experience this phenomenon for yourself. The XOR logic operator, which is true just in case its two arguments have different values (the dataset below encodes its complement, iff, which poses exactly the same learning challenge), is famously not learnable by a linear classifier but within reach of a neural network with a single hidden layer and a non-linear activation function ([Rumelhart et al. 1986](https://www.nature.com/articles/323533a0)). But how consistently do such models actually learn XOR? No matter what settings you choose, you rarely if ever see perfect performance across multiple runs." ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "slideshow": { "slide_type": "slide" } }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Finished epoch 500 of 500; error is 0.0151921808719635015" ] }, { "data": { "text/plain": [ "defaultdict(int, {'correct': 8, 'incorrect': 2})" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def xor_eval(n_trials=10):\n", "    # Dataset for the complement of XOR (iff): label 1 just in case the two inputs match.\n", "    xor = [\n", "        ([1.,1.], 1),\n", "        ([1.,0.], 0),\n", "        ([0.,1.], 0),\n", "        ([0.,0.], 1)]\n", "    X, y = zip(*xor)\n", "    results = defaultdict(int)\n", "    # Each trial trains a fresh model, so each trial gets a new random initialization.\n", "    for trial in range(n_trials):\n", "        model = TorchShallowNeuralClassifier(\n", "            hidden_dim=2,\n", "            max_iter=500,\n", "            eta=0.01)\n", "        model.fit(X, y)\n", "        preds = tuple(model.predict(X))\n", "        # A trial counts as 'correct' only if all four cases are classified correctly.\n", "        result = 'correct' if preds == y else 'incorrect'\n", "        results[result] += 1\n", "    return results\n", "\n", "xor_eval(n_trials=10)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "For better or worse, the only response we have to this situation is to __report scores for multiple complete runs of a model with different randomly chosen initializations__. [Confidence intervals](#Confidence-intervals) and [statistical tests](#Wilcoxon-signed-rank-test) can be used to summarize the variation observed. If the evaluation regime already involves comparing the results of multiple train/test splits, then ensuring a new random initialization for each of those would seem sufficient.\n", "\n", "Arguably, these observations are incompatible with evaluation regimes involving only a single train/test split, as in [McNemar's test](#McNemar's-test). However, [as discussed above](#Practical-considerations,-and-some-compromises), we have to be realistic. If multiple runs aren't feasible, then a more heuristic argument will be needed to try to convince skeptics that the differences observed are larger than we would expect from just different random initializations." ] },
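 { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "Here is a hedged sketch of that recommendation, again relying on the tools imported in the set-up above: train the same model several times, so that only the random initialization (and other stochastic aspects of optimization) varies across runs, and then summarize the scores with a simple normal-approximation confidence interval. The dataset and the `TorchShallowNeuralClassifier` hyperparameters are illustrative, not tuned." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import accuracy_score\n", "\n", "def multi_run_scores(n_runs=5):\n", "    # Fixed data and fixed split, so the only thing that varies across runs is\n", "    # the model's random initialization (and optimization noise).\n", "    X, y = make_classification(n_samples=500, n_features=20, random_state=42)\n", "    X_train, X_test, y_train, y_test = train_test_split(\n", "        X, y, test_size=0.3, random_state=42)\n", "    scores = []\n", "    for _ in range(n_runs):\n", "        mod = TorchShallowNeuralClassifier(hidden_dim=50, max_iter=100)\n", "        mod.fit(X_train, y_train)\n", "        scores.append(accuracy_score(y_test, mod.predict(X_test)))\n", "    return np.array(scores)\n", "\n", "run_scores = multi_run_scores(n_runs=5)\n", "ci = 1.96 * stats.sem(run_scores)  # normal-approximation 95% interval\n", "print(\"Mean accuracy: {:.3f} +/- {:.3f} over {} runs\".format(run_scores.mean(), ci, len(run_scores)))" ] },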
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Closing remarks\n", "\n", "We can summarize most of the above with a few key ideas:\n", " \n", "1. Your evaluation should be based around a few systems that are related in ways that illuminate your hypotheses and help to convey what the best models are learning.\n", "\n", "1. Every model you assess should be given its best chance to shine (but we need to be realistic about how many experiments this entails!).\n", "\n", "1. The test set should play no role whatsoever in optimization or model selection. The best way to ensure this is to have the test set locked away until the final batch of experiments that will be reported in the paper, but this separation is simulated adequately by careful cross-validation set-ups.\n", "\n", "1. Strive to base your model comparisons on multiple runs over the same splits. This is especially important for deep learning, where a single model can perform in very different ways on the same data, depending on the vagaries of optimization." ] } ], "metadata": { "celltoolbar": "Slideshow", "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" } }, "nbformat": 4, "nbformat_minor": 2 }