{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/fonnesbeck/Bios8366/blob/master/notebooks/Section4_9-Model-Comparison.ipynb)\n", "\n", "# Model comparison\n", "\n", "To demonstrate the use of model comparison criteria, we implement the radon contamination example from the previous lecture. \n", "\n", "Below, we fit a **pooled model**, which assumes a single fixed effect across all counties, and a **hierarchical model** that allows for a random effect that partially pools the data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "import numpy as np\n", "import pandas as pd\n", "import pymc as pm\n", "import arviz as az\n", "import matplotlib.pyplot as plt\n", "import seaborn as sns\n", "import warnings\n", "warnings.filterwarnings(\"ignore\", module=\"mkl_fft\")\n", "sns.set_context('notebook')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The data include the observed radon levels and associated covariates for 85 counties in Minnesota." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "DATA_URL = 'https://raw.githubusercontent.com/fonnesbeck/Bios8366/master/data/'\n", "\n", "try:\n", " radon_data = pd.read_csv('../data/radon.csv', index_col=0)\n", "except FileNotFoundError:\n", " radon_data = pd.read_csv(DATA_URL + 'radon.csv', index_col=0)\n", "radon_data.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "county_ind, mn_counties = radon_data.county.factorize()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Pooled model" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with pm.Model() as pooled_model:\n", " floor = pm.Data(\"floor\", radon_data.floor.values, mutable=True)\n", " log_radon = pm.Data(\"log_radon\", radon_data.log_radon.values, mutable=True)\n", " \n", " beta = pm.Normal('beta', 0, sd=1e5, shape=2)\n", " sigma = pm.HalfCauchy('sigma', 5)\n", " \n", " theta = beta[0] + beta[1]*floor\n", " \n", " y = pm.Normal('y', theta, sigma=sigma, observed=log_radon)\n", " \n", " trace_p = pm.sample(1000, tune=1000)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "az.plot_trace(trace_p, var_names=['beta']);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Hierarchical model" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ " \n", "with pm.Model(coords={'county': mn_counties}) as hierarchical_model:\n", "\n", " floor = pm.Data(\"floor\", radon_data.floor.values, mutable=True)\n", " log_radon = pm.Data(\"log_radon\", radon_data.log_radon.values, mutable=True)\n", " county_idx = pm.Data('county_idx', county_ind, mutable=True)\n", "\n", " # Priors\n", " mu_a = pm.Normal('mu_a', mu=0., sigma=1e5)\n", " sigma_a = pm.HalfCauchy('sigma_a', 5)\n", " \n", " \n", " # Random intercepts\n", " a = pm.Normal('a', mu=mu_a, sigma=sigma_a, dims='county')\n", " # Common slope\n", " b = pm.Normal('b', mu=0., sigma=1e5)\n", " \n", " # Model error\n", " sd_y = pm.HalfCauchy('sd_y', 5)\n", " \n", " # Expected value\n", " y_hat = a[county_idx] + b * floor\n", " \n", " # Data likelihood\n", " y_like = pm.Normal('y_like', mu=y_hat, sigma=sd_y, observed=log_radon)\n", " \n", " trace_h = pm.sample(1000, tune=1000)" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "az.plot_trace(trace_h, var_names=['mu_a', 'b']);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "az.plot_forest(trace_h, var_names=['a'], r_hat=True, figsize=(5,16), combined=True);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Predictive Information Criteria" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Measures of predictive accuracy are called information criteria, and are comprised of the log-predictive density of the data given a point estimate of the fitted model multiplied by −2 (i.e. the deviance):\n", "\n", "$$−2 \\log[p(y | \\hat{\\theta})]$$\n", "\n", "Clearly, the expected accuracy of a fitted model’s predictions of future data will generally be lower than the accuracy of the model’s predictions for observed data, even though the parameters in the model happen to be sampled from the specified prior distribution.\n", "\n", "Why are we interested in prediction accuracy?\n", "\n", "1. to quantify the performance of a model\n", "2. to perform model selection \n", "\n", "By model selection, we may not necessarily want to choose one model over another, but we might want to put different models on the same scale. The advantage if information-theoretic measures is that candidate models do not need to be nested; even models with completely different parameterizations can be used to predict the same measurements.\n", "\n", "Note that when candidate models have the same number of parameters, one can compare their best-fit log predictive densities directly, but when model dimensions differ, one has to make an adjustment for the tendency of a larger model to fit data better." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One advantage of using predictive information criteria for model comparison is that they allow us to estimate out-of-sample predictive accuracy using the data in our sample. All such methods are approximations of predictive accuracy, so they are not perfect, but they perform reasonably well.\n", "\n", "One can naively use the log predictive density for the sample data (within-sample predictive accuracy) as an approximation of the out-of-sample predictive accuracy, but this will almost always result in an overestimate of performance.\n", "\n", "As is popular in machine learning, cross-validation can be used to evaluate predictive accuracy, whereby the dataset is partitioned and each partition is allowed to be used to fit the model and evaluate the fit. However, this method is computationally expensive because it reqiures the same model to be fit to multiple subsets of the data.\n", "\n", "We will focus here on adjusted within-sample predictive accuracy, using a variety of information criteria. The goal here is to get an approximately unbiased estimate of predictive accuracy which are correct in expectation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### AIC and DIC\n", "\n", "One approach to model selection is to use an information-theoretic criterion to identify the most appropriate model. Akaike (1973) found a formal relationship between Kullback-Leibler information (a dominant paradigm in information and coding theory) and likelihood theory. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "### AIC and DIC\n", "\n", "One approach to model selection is to use an information-theoretic criterion to identify the most appropriate model. Akaike (1973) found a formal relationship between Kullback-Leibler (K-L) information (a dominant paradigm in information and coding theory) and likelihood theory. Akaike's Information Criterion (AIC) is an estimator of expected relative K-L information based on the maximized log-likelihood function, corrected for asymptotic bias.\n", "\n", "$$\\text{AIC} = −2 \\log(L(\\hat{\\theta}|\\text{data})) + 2k$$\n", "\n", "AIC balances the fit of the model (in terms of the likelihood) with the number of parameters required to achieve that fit. For a normal linear model we can calculate AIC (up to an additive constant) from the residual sum of squares as:\n", "\n", "$$\\text{AIC} = n \\log(\\text{RSS}/n) + 2k$$\n", "\n", "where $k$ is the number of parameters in the model. Notice that as the number of parameters increases, the residual sum of squares goes down, but the second term (a penalty) increases.\n", "\n", "A limitation of AIC for Bayesian models is that it cannot be applied to hierarchical models (or any models with random effects), as counting the number of parameters in such models is problematic. A more Bayesian version of AIC is called the deviance information criterion (DIC), and replaces the fixed parameter penalty with an estimate of the effective number of parameters:\n", "\n", "$$p_{DIC} = 2\\left(\\log p(y | E[\\theta | y]) - E_{post}[\\log p(y|\\theta)] \\right)$$\n", "\n", "where the second term is the log likelihood averaged over the posterior distribution of $\\theta$, which is estimated from posterior samples:\n", "\n", "$$\\hat{p}_{DIC} = 2\\left(\\log p(y | E[\\theta | y]) - \\frac{1}{M} \\sum_{j=1}^{M}\\log p(y|\\theta^{(j)}) \\right)$$\n", "\n", "DIC is computed as:\n", "\n", "$$\\text{DIC} = -2 \\log p(y | E[\\theta | y]) + 2p_{DIC}$$\n", "\n", "Though this is an improvement over AIC, DIC is still not fully Bayesian, as it relies on a point estimate of the parameters rather than the full posterior. As a result, it can be unstable for hierarchical models, sometimes producing estimates of the effective number of parameters that are negative." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Widely-applicable Information Criterion (WAIC)\n", "\n", "WAIC (Watanabe 2010) is a fully Bayesian criterion for estimating out-of-sample expectation, using the log pointwise posterior predictive density (lppd) and correcting for the effective number of parameters to adjust for overfitting.\n", "\n", "The computed log pointwise predictive density is:\n", "\n", "$$\\text{lppd} = \\sum_{i=1}^N \\log \\left(\\frac{1}{M} \\sum_{j=1}^M p(y_i | \\theta^{(j)}) \\right)$$\n", "\n", "The complexity adjustment is:\n", "\n", "$$p_{WAIC} = 2\\sum_{i=1}^N \\left[ \\log \\left(\\frac{1}{M} \\sum_{j=1}^M p(y_i | \\theta^{(j)})\\right) - \\frac{1}{M} \\sum_{j=1}^M \\log p(y_i | \\theta^{(j)}) \\right]$$\n", "\n", "so WAIC is then:\n", "\n", "$$\\text{WAIC} = -2\\,\\text{lppd} + 2p_{WAIC}$$\n", "\n", "The adjustment approximates the effective number of unconstrained parameters in the model, with each parameter contributing a value between 0 (fully constrained) and 1 (no constraints). In this sense, WAIC treats the effective number of parameters as a quantity estimated from the posterior rather than counted.\n", "\n", "Because WAIC averages over the posterior distribution rather than conditioning on a point estimate, it is reliable for a wider range of models than AIC or DIC." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pooled_waic = az.waic(trace_p)\n", "\n", "pooled_waic" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "hierarchical_waic = az.waic(trace_h)\n", "\n", "hierarchical_waic" ] },
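{ "cell_type": "markdown", "metadata": {}, "source": [ "To connect the formulas above to the ArviZ output, the sketch below recomputes lppd and the $p_{WAIC}$ penalty for the pooled model directly from the pointwise log-likelihood stored in the trace. This assumes `pm.sample` saved a `log_likelihood` group with a variable named `y`; depending on the PyMC version you may need to request this explicitly. Note also that ArviZ uses a variance-based form of the penalty and, in recent versions, reports the result on the elpd (log) scale, so the numbers will differ somewhat from the deviance-scale WAIC computed here." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# A rough sketch of WAIC \"by hand\" for the pooled model, following the formulas\n", "# above. Assumes the trace contains a log_likelihood group for the observed\n", "# variable 'y'.\n", "from scipy.special import logsumexp\n", "\n", "log_lik = trace_p.log_likelihood['y'].stack(sample=('chain', 'draw')).values  # shape (N, M)\n", "M = log_lik.shape[1]\n", "\n", "# lppd_i = log( (1/M) * sum_j p(y_i | theta_j) ), computed stably in log space\n", "lppd_i = logsumexp(log_lik, axis=1) - np.log(M)\n", "\n", "# p_WAIC penalty (mean-based form from the formula above; ArviZ uses a\n", "# variance-based form, so expect small differences)\n", "p_waic = 2 * (lppd_i - log_lik.mean(axis=1)).sum()\n", "\n", "waic = -2 * lppd_i.sum() + 2 * p_waic\n", "print('lppd:', lppd_i.sum().round(1), ' p_waic:', p_waic.round(1), ' WAIC:', waic.round(1))" ] },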
{ "cell_type": "markdown", "metadata": {}, "source": [ "ArviZ includes two convenience functions to help compare WAIC for different models. The first of these is `compare`, which computes WAIC (or LOO) for a set of models and returns a DataFrame." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_comp_WAIC = az.compare({'hierarchical':trace_h, 'pooled':trace_p}, ic='waic').round(2)\n", "df_comp_WAIC" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The DataFrame has several columns, so let's go through their meanings one by one:\n", "\n", "\n", "1. The **waic** column contains the values of WAIC. The DataFrame is always sorted from lowest to highest WAIC, and the index lists the names given to the models in the dictionary passed to `compare`.\n", "\n", "2. The **p_waic** column is the estimated effective number of parameters. In general, models with more parameters are more flexible in fitting the data, but are also more prone to overfitting. Thus we can interpret p_waic as a penalization term; intuitively, it is also a measure of how flexible each model is in fitting the data.\n", "\n", "3. The **d_waic** column is the relative difference between the value of WAIC for the top-ranked model and the value of WAIC for each model. For this reason we will always get a value of 0 for the top-ranked model.\n", "\n", "4. The **weight** column addresses the situation where we do not want to select the \"best\" model, but instead want to make predictions by averaging over all the models (or at least several of them). Ideally we would like a weighted average, giving more weight to the model that seems to explain/predict the data better. There are many approaches to this task; one of them is to use Akaike-type weights based on the values of WAIC for each model (see the sketch after this list). These weights can be loosely interpreted as the probability of each model (among the compared models) given the data. One caveat of this approach is that the weights are based on point estimates of WAIC (i.e. the uncertainty is ignored).\n", "\n", "5. The **se** column records the standard error of the WAIC estimate, which can be useful for assessing its uncertainty. Nevertheless, caution is needed because the estimation of the standard error assumes normality, and hence could be problematic when the sample size is low.\n", "\n", "6. In the same way that we can compute the standard error for each value of WAIC, we can compute the standard error of the difference between two values of WAIC; this is the **dse** column. The two quantities are not necessarily the same, because the uncertainty about WAIC is correlated between models. This quantity is always 0 for the top-ranked model.\n", "\n", "7. The **warning** column signals whether any warnings were raised when WAIC was estimated. A warning indicates that the computation of WAIC may not be reliable; it is based on an empirically determined cutoff value and needs to be interpreted with caution. For more details you can read this [paper](https://arxiv.org/abs/1507.04544)." ] },
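{ "cell_type": "markdown", "metadata": {}, "source": [ "As referenced in item 4 above, here is a minimal sketch of Akaike-type weights computed from deviance-scale WAIC values. The WAIC numbers below are made up for illustration, and the **weight** column returned by `compare` is not necessarily computed this way (ArviZ defaults to a stacking-based weighting)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Minimal sketch of Akaike-type weights from (deviance-scale) WAIC values.\n", "# The WAIC values below are hypothetical, purely to illustrate the formula.\n", "waic_values = np.array([2080.3, 2083.7])    # hypothetical WAIC for two models\n", "delta = waic_values - waic_values.min()     # difference from the best (lowest) WAIC\n", "raw_weights = np.exp(-0.5 * delta)\n", "raw_weights / raw_weights.sum()" ] },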
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "az.plot_compare(df_comp_WAIC);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- The empty circle represents the values of WAIC and the black error bars associated with them are the values of the standard deviation of WAIC. \n", "- The value of the lowest WAIC is also indicated with a vertical dashed grey line to ease comparison with other WAIC values.\n", "- The filled black dots are the in-sample deviance of each model, which for WAIC is 2 pWAIC from the corresponding WAIC value.\n", "\n", "For all models except the top-ranked one we also get a triangle indicating the value of the difference of WAIC between that model and the top model and a grey errobar indicating the standard error of the differences between the top-ranked WAIC and WAIC for each model." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Leave-one-out Cross-validation (LOO)\n", "\n", "LOO cross-validation is an estimate of the out-of-sample predictive fit. In cross-validation, the data are repeatedly partitioned into training and holdout sets, iteratively fitting the model with the former and evaluating the fit with the holdout data. \n", "\n", "The estimate of out-of-sample predictive fit from applying LOO cross-validation to a Bayesian model is:\n", "\n", "$$lppd_{loo} = \\sum_{i=1}^N \\log p_{post(-i)}(y_i) = \\sum_{i=1}^N \\log \\left(\\frac{1}{S} \\sum_{s=1}^S p(y_i| \\theta^{(is)})\\right)$$\n", "\n", "so, each prediction is conditioned on $N-1$ data points, which induces an underestimation of the predictive fit for smaller $N$. The resulting estimate of effective samples size is:\n", "\n", "$$p_{loo} = lppd - lppd_{loo}$$\n", "\n", "As mentioned, using cross-validation for a Bayesian model, fitting $N$ copies of the model under different subsets of the data is computationally expensive. However, Vehtari et al. (2016) introduced an efficient computation of LOO from MCMC samples, which are corrected using Pareto-smoothed importance sampling (PSIS) to provide an estimate of point-wise out-of-sample prediction accuracy.\n", "\n", "This involves estimating the importance sampling LOO predictive distribution\n", "\n", "$$p(\\tilde{y}_i | y_{-i}) \\approx \\frac{\\sum_{s=1}^S w_i(\\theta^{(s)}) p(\\tilde{y}_i|\\theta^{(s)})}{\\sum_{s=1}^S w_i(\\theta^{(s)})}$$\n", "\n", "where the importance weights are:\n", "\n", "$$w_i(\\theta^{(s)}) = \\frac{1}{p(y_i | \\theta^{(s)})} \\propto \\frac{p(\\theta^{(s)}|y_{-i})}{p(\\theta^{(s)}|y)}$$\n", "\n", "The predictive distribution evaluated at the held-out point is then:\n", "\n", "$$p(y_i | y_{-i}) \\approx \\frac{1}{\\frac{1}{S} \\sum_{s=1}^S \\frac{1}{p(y_i | \\theta^{(s)})}}$$\n", "\n", "However, the posterior is likely to have a smaller variance and thinner tails than the LOO posteriors, so this approximation induces instability due to the fact that the importance ratios can have high or infinite variance.\n", "To deal with this instability, a generalized Pareto distribution fit to the upper tail of the distribution of the importance ratios can be used to construct a test for a finite importance ratio variance. If the test suggests the variance is infinite then importance sampling is halted." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "LOO using using Pareto-smoothed importance sampling is implemented in ArviZ in the `loo` function." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pooled_loo = az.loo(trace_p, pooled_model)\n", " \n", "pooled_loo" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "hierarchical_loo = az.loo(trace_h, hierarchical_model)\n", " \n", "hierarchical_loo" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also use `compare` with LOO." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_comp_LOO = az.compare({'hierarchical':trace_h, 'pooled':trace_p}, ic='loo')\n", "df_comp_LOO.round(2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also plot the results" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "az.plot_compare(df_comp_LOO);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Interpretation\n", "\n", "Though we might expect the hierarchical model to outperform a complete pooling model, there is little to choose between the models in this case, giving that both models gives very similar values of the information criteria. This is more clearly appreciated when we take into account the uncertainty (in terms of standard errors) of WAIC and LOO.\n", "\n", "---\n", "\n", "## Reference\n", "\n", "[Gelman, A., Hwang, J., & Vehtari, A. (2014). Understanding predictive information criteria for Bayesian models. Statistics and Computing, 24(6), 997–1016.](http://doi.org/10.1007/s11222-013-9416-2)\n", "\n", "[Vehtari, A, Gelman, A, Gabry, J. (2016). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing](http://link.springer.com/article/10.1007/s11222-016-9696-4)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.9" } }, "nbformat": 4, "nbformat_minor": 2 }