{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/fonnesbeck/Bios8366/blob/master/notebooks/Section4_7-Multilevel-Modeling.ipynb)\n", "\n", "# A Primer on Bayesian Methods for Multilevel Modeling" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Hierarchical or multilevel modeling is a generalization of regression modeling.\n", "\n", "*Multilevel models* are regression models in which the constituent model parameters are given **probability models**. This implies that model parameters are allowed to **vary by group**.\n", "\n", "Observational units are often naturally **clustered**. Clustering induces dependence between observations, despite random sampling of clusters and random sampling within clusters.\n", "\n", "A *hierarchical model* is a particular multilevel model where parameters are nested within one another.\n", "\n", "Some multilevel structures are not hierarchical. \n", "\n", "* e.g. \"country\" and \"year\" are not nested, but may represent separate, but overlapping, clusters of parameters\n", "\n", "We will motivate this topic using an environmental epidemiology example." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example: Radon contamination (Gelman and Hill 2006)\n", "\n", "Radon is a radioactive gas that enters homes through contact points with the ground. It is a carcinogen that is the primary cause of lung cancer in non-smokers. Radon levels vary greatly from household to household.\n", "\n", "![radon](images/how_radon_enters.jpg)\n", "\n", "The EPA did a study of radon levels in 80,000 houses. Two important predictors:\n", "\n", "* measurement in basement or first floor (radon higher in basements)\n", "* county uranium level (positive correlation with radon levels)\n", "\n", "We will focus on modeling radon levels in Minnesota.\n", "\n", "The hierarchy in this example is households within county. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Data organization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, we import the data from a local file, and extract Minnesota's data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "import numpy as np\n", "import pandas as pd\n", "import pymc as pm\n", "import arviz as az\n", "import matplotlib.pyplot as plt\n", "import seaborn as sns; sns.set_context('notebook')\n", "import warnings\n", "warnings.filterwarnings(\"ignore\", module=\"mkl_fft\")\n", "warnings.filterwarnings(\"ignore\", module=\"matplotlib\")\n", "\n", "DATA_URL = 'https://raw.githubusercontent.com/fonnesbeck/Bios8366/master/data/'\n", "\n", "try:\n", " srrs2 = pd.read_csv('../data/srrs2.dat')\n", "except FileNotFoundError:\n", " srrs2 = pd.read_csv(DATA_URL + 'srrs2.dat')\n", "\n", "# Import radon data\n", "\n", "srrs2.columns = srrs2.columns.map(str.strip)\n", "srrs_mn = srrs2[srrs2.state=='MN'].copy()\n", "\n", "RANDOM_SEED = 20090425" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, obtain the county-level predictor, uranium, by combining two variables." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "try:\n", " cty = pd.read_csv('../data/cty.dat')\n", "except FileNotFoundError:\n", " cty = pd.read_csv(DATA_URL + 'cty.dat')\n", "\n", "srrs_mn['fips'] = srrs_mn.stfips*1000 + srrs_mn.cntyfips\n", "cty_mn = cty[cty.st=='MN'].copy()\n", "cty_mn['fips'] = 1000*cty_mn.stfips + cty_mn.ctfips" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Use the `merge` method to combine home- and county-level information in a single DataFrame." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "srrs_mn = srrs_mn.merge(cty_mn[['fips', 'Uppm']], on='fips')\n", "srrs_mn = srrs_mn.drop_duplicates(subset='idnum')\n", "u = np.log(srrs_mn.Uppm).unique()\n", "\n", "n = len(srrs_mn)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also need a lookup table (`dict`) for each unique county, for indexing." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "srrs_mn.county = srrs_mn.county.map(str.strip)\n", "county, mn_counties = srrs_mn.county.factorize()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, create local copies of variables." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "radon = srrs_mn.activity\n", "srrs_mn['log_radon'] = log_radon = np.log(radon + 0.1).values\n", "floor_measure = srrs_mn.floor.values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Distribution of radon levels in MN (log scale):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "srrs_mn.activity.apply(lambda x: np.log(x+0.1)).hist(bins=25, grid=False);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Conventional approaches\n", "\n", "The two conventional alternatives to modeling radon exposure represent the two extremes of the bias-variance tradeoff:\n", "\n", "***Complete pooling***: \n", "\n", "Treat all counties the same, and estimate a single radon level.\n", "\n", "$$y_i = \\alpha + \\beta x_i + \\epsilon_i$$\n", "\n", "***No pooling***:\n", "\n", "Model radon in each county independently.\n", "\n", "$$y_i = \\alpha_{j[i]} + \\beta x_i + \\epsilon_i$$\n", "\n", "where $j = 1,\\ldots,85$\n", "\n", "The errors $\\epsilon_i$ may represent measurement error, temporal within-house variation, or variation among houses." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here are the point estimates of the slope and intercept for the complete pooling model:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "floor = srrs_mn.floor.values\n", "log_radon = srrs_mn.log_radon.values\n", "\n", "with pm.Model(rng_seeder=RANDOM_SEED) as pooled_model:\n", " \n", " mu = pm.Normal('mu', 0, sd=1e5)\n", " beta = pm.Normal('beta', mu=0, sd=1e5)\n", " sigma = pm.HalfCauchy('sigma', 5)\n", " \n", " theta = mu + beta*floor\n", " \n", " y = pm.Normal('y', theta, sd=sigma, observed=log_radon)\n", " " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pm.model_to_graphviz(pooled_model)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "with pooled_model:\n", " pooled_trace = pm.sample()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mu_mean = pooled_trace.posterior.mean(dim=(\"chain\", \"draw\")).mu.values\n", "beta_mean = pooled_trace.posterior.mean(dim=(\"chain\", \"draw\")).beta.values\n", "\n", "\n", "plt.scatter(srrs_mn.floor, np.log(srrs_mn.activity+0.1))\n", "xvals = np.linspace(-0.2, 1.2)\n", "plt.plot(xvals, beta_mean*xvals + mu_mean, 'r--');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Estimates of county radon levels for the unpooled model:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "coords={'county': mn_counties}\n", "\n", "with pm.Model(coords=coords, rng_seeder=RANDOM_SEED) as unpooled_model:\n", " \n", " mu = pm.Normal('mu', 0, sd=1e5, dims='county')\n", " beta = pm.Normal('beta', 0, sd=1e5)\n", " sigma = pm.HalfCauchy('sigma', 5)\n", " \n", " theta = mu[county] + beta*floor\n", " \n", " y = pm.Normal('y', theta, sd=sigma, observed=log_radon)\n", " " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pm.model_to_graphviz(unpooled_model)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with unpooled_model:\n", " unpooled_trace = pm.sample()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "az.plot_forest(\n", " unpooled_trace, \n", " var_names=['mu'], \n", " ess=True, r_hat=True, \n", " combined=True,\n", " figsize=(6,18)\n", ");" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "unpooled_estimates = unpooled_trace.posterior.mean(dim=('chain', 'draw')).mu\n", "unpooled_se = unpooled_trace.posterior.std(dim=('chain', 'draw')).mu" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can plot the ordered estimates to identify counties with high radon levels:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "unpooled_means = unpooled_trace.posterior.mean(dim=(\"chain\", \"draw\"))\n", "unpooled_hdi = az.hdi(unpooled_trace)\n", "\n", "unpooled_means_iter = unpooled_means.sortby(\"mu\")\n", "unpooled_hdi_iter = unpooled_hdi.sortby(unpooled_means_iter.mu)\n", "\n", "_, ax = plt.subplots(figsize=(10,6))\n", "xticks = np.arange(0, 86, 6)\n", "unpooled_means_iter.plot.scatter(x=\"county\", y=\"mu\", ax=ax, alpha=0.8)\n", "ax.vlines(\n", " np.arange(mn_counties.size),\n", " unpooled_hdi_iter.mu.sel(hdi=\"lower\"),\n", " unpooled_hdi_iter.mu.sel(hdi=\"higher\"),\n", " color=\"orange\",\n", " 
alpha=0.6,\n", ")\n", "ax.set(ylabel=\"Radon estimate\", ylim=(-2, 4.5))\n", "ax.set_xticks(xticks)\n", "ax.set_xticklabels(unpooled_means_iter.county.values[xticks])\n", "ax.tick_params(rotation=45)\n", "sns.despine(trim=True);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here are visual comparisons between the pooled and unpooled estimates for a subset of counties representing a range of sample sizes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sample_counties = ('LAC QUI PARLE', 'AITKIN', 'KOOCHICHING', \n", " 'DOUGLAS', 'CLAY', 'STEARNS', 'RAMSEY', 'ST LOUIS')\n", "\n", "fig, axes = plt.subplots(2, 4, figsize=(12, 6), sharey=True, sharex=True)\n", "axes = axes.ravel()\n", "m = unpooled_trace.posterior.mean(dim=(\"chain\", \"draw\")).beta\n", "for i,c in enumerate(sample_counties):\n", " y = srrs_mn.log_radon[srrs_mn.county==c]\n", " x = srrs_mn.floor[srrs_mn.county==c]\n", " axes[i].scatter(x + np.random.randn(len(x))*0.01, y, alpha=0.4)\n", " \n", " # No pooling model\n", " b = unpooled_estimates.sel(county=c)\n", " \n", " # Plot both models and data\n", " xvals = np.linspace(0, 1)\n", " axes[i].plot(xvals, m.values*xvals+b.values)\n", " axes[i].plot(xvals, beta_mean*xvals+mu_mean, 'r--')\n", " axes[i].set_xticks([0,1])\n", " axes[i].set_xticklabels(['basement', 'floor'])\n", " axes[i].set_ylim(-1, 3)\n", " axes[i].set_title(c)\n", " if not i%2:\n", " axes[i].set_ylabel('log radon level')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Neither of these models is satisfactory:\n", "\n", "* if we are trying to identify high-radon counties, pooling is useless\n", "* we do not trust extreme unpooled estimates produced by models using few observations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Multilevel and hierarchical models\n", "\n", "When we pool our data, we imply that they are sampled from the same model. This ignores any variation among sampling units (other than sampling variance):\n", "\n", "![pooled](images/pooled_model.png)\n", "\n", "When we analyze data unpooled, we imply that they are sampled independently from separate models. At the opposite extreme from the pooled case, this approach claims that differences between sampling units are too large to combine them:\n", "\n", "![unpooled](images/unpooled_model.png)\n", "\n", "In a hierarchical model, parameters are viewed as a sample from a population distribution of parameters. Thus, we view them as being neither entirely different nor exactly the same. This is ***partial pooling***.\n", "\n", "![hierarchical](images/partial_pooled_model.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use PyMC to specify multilevel models easily and fit them using Markov chain Monte Carlo." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Partial pooling model\n", "\n", "The simplest partial pooling model for the household radon dataset is one which simply estimates radon levels, without any predictors at any level. 
A partial pooling model represents a compromise between the pooled and unpooled extremes, approximately a weighted average (based on sample size) of the unpooled county estimates and the pooled estimates.\n", "\n", "$$\\hat{\\alpha} \\approx \\frac{(n_j/\\sigma_y^2)\\bar{y}_j + (1/\\sigma_{\\alpha}^2)\\bar{y}}{(n_j/\\sigma_y^2) + (1/\\sigma_{\\alpha}^2)}$$\n", "\n", "Estimates for counties with smaller sample sizes will shrink towards the state-wide average.\n", "\n", "Estimates for counties with larger sample sizes will be closer to the unpooled county estimates." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with pm.Model(coords=coords, rng_seeder=RANDOM_SEED) as partial_pooling:\n", " \n", " # Priors\n", " mu_a = pm.Normal('mu_a', mu=0., sd=1e5)\n", " sigma_a = pm.HalfCauchy('sigma_a', 5)\n", " \n", " # Random intercepts\n", " mu = pm.Normal('mu', mu=mu_a, sd=sigma_a, dims='county')\n", " \n", " # Model error\n", " sigma_y = pm.HalfCauchy('sigma_y',5)\n", " \n", " # Expected value\n", " y_hat = mu[county]\n", " \n", " # Data likelihood\n", " y_like = pm.Normal('y_like', mu=y_hat, sd=sigma_y, observed=log_radon)\n", " " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pm.model_to_graphviz(partial_pooling)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with partial_pooling:\n", " partial_pooling_trace = pm.sample(tune=2000)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "N_county = srrs_mn.groupby(\"county\")[\"idnum\"].count().values\n", "\n", "fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharex=True, sharey=True)\n", "for ax, trace, level in zip(\n", " axes,\n", " (unpooled_trace, partial_pooling_trace),\n", " (\"no pooling\", \"partial pooling\"),\n", "):\n", "\n", " # add variable with x values to xarray dataset\n", " trace.posterior = trace.posterior.assign_coords({\"N_county\": (\"county\", N_county)})\n", " # plot means\n", " trace.posterior.mean(dim=(\"chain\", \"draw\")).plot.scatter(\n", " x=\"N_county\", y=\"mu\", ax=ax, alpha=0.9\n", " )\n", " ax.hlines(\n", " partial_pooling_trace.posterior.mu.mean(),\n", " 0.9,\n", " max(N_county) + 1,\n", " alpha=0.4,\n", " ls=\"--\",\n", " label=\"Est. population mean\",\n", " )\n", "\n", " # plot hdi\n", " hdi = az.hdi(trace).mu\n", " ax.vlines(N_county, hdi.sel(hdi=\"lower\"), hdi.sel(hdi=\"higher\"), color=\"orange\", alpha=0.5)\n", "\n", " ax.set(\n", " title=f\"{level.title()} Estimates\",\n", " xlabel=\"Nbr obs in county (log scale)\",\n", " xscale=\"log\",\n", " ylabel=\"Log radon\",\n", " )\n", " ax.legend(fontsize=10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice the difference between the unpooled and partially-pooled estimates, particularly at smaller sample sizes. The former are both more extreme and more imprecise." 
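] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The weighted-average formula above can be checked numerically. The sketch below plugs posterior means of `sigma_y` and `sigma_a` into the formula for one small county and compares the result with the partially pooled estimate (an approximation, since it uses point estimates rather than the full posterior)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Approximate precision-weighted estimate for a small county\n", "post = partial_pooling_trace.posterior\n", "sigma_y_hat = post['sigma_y'].mean().item()\n", "sigma_a_hat = post['sigma_a'].mean().item()\n", "ybar_state = srrs_mn.log_radon.mean()\n", "\n", "c = 'LAC QUI PARLE'\n", "y_c = srrs_mn.log_radon[srrs_mn.county == c]\n", "w = len(y_c) / sigma_y_hat**2\n", "alpha_approx = (w * y_c.mean() + ybar_state / sigma_a_hat**2) / (w + 1 / sigma_a_hat**2)\n", "\n", "print(f\"raw county mean = {y_c.mean():.2f}\")\n", "print(f\"approximate weighted-average estimate = {alpha_approx:.2f}\")\n", "print(f\"partial-pooling posterior mean = {post['mu'].sel(county=c).mean().item():.2f}\")"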
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Varying intercept model\n", "\n", "This model allows intercepts to vary across county, according to a random effect.\n", "\n", "$$y_i = \\alpha_{j[i]} + \\beta x_{i} + \\epsilon_i$$\n", "\n", "where\n", "\n", "$$\\epsilon_i \\sim N(0, \\sigma_y^2)$$\n", "\n", "and the intercept random effect:\n", "\n", "$$\\alpha_{j[i]} \\sim N(\\mu_{\\alpha}, \\sigma_{\\alpha}^2)$$\n", "\n", "As with the the “no-pooling” model, we set a separate intercept for each county, but rather than fitting separate least squares regression models for each county, multilevel modeling **shares strength** among counties, allowing for more reasonable inference in counties with little data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with pm.Model(coords=coords, rng_seeder=RANDOM_SEED) as varying_intercept:\n", " \n", " # Priors\n", " mu_a = pm.Normal('mu_a', mu=0., tau=0.0001)\n", " sigma_a = pm.HalfCauchy('sigma_a', 5)\n", " \n", " \n", " # Random intercepts\n", " mu = pm.Normal('mu', mu=mu_a, sd=sigma_a, dims='county')\n", " # Common slope\n", " beta = pm.Normal('beta', mu=0., sd=1e5)\n", " \n", " # Model error\n", " sd_y = pm.HalfCauchy('sd_y', 5)\n", " \n", " # Expected value\n", " y_hat = mu[county] + beta * floor_measure\n", " \n", " # Data likelihood\n", " y_like = pm.Normal('y_like', mu=y_hat, sd=sd_y, observed=log_radon)\n", " " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pm.model_to_graphviz(varying_intercept)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with varying_intercept:\n", " varying_intercept_trace = pm.sample(tune=2000)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pm.plot_forest(varying_intercept_trace, var_names=['mu'], figsize=(6,18), combined=True, ess=True, r_hat=True);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pm.plot_posterior(varying_intercept_trace, var_names=['sigma_a', 'beta']);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The estimate for the `floor` coefficient is approximately -0.66, which can be interpreted as houses without basements having about half ($\\exp(-0.66) = 0.52$) the radon levels of those with basements, after accounting for county." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "az.summary(varying_intercept_trace, var_names=['beta'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import xarray as xr\n", "\n", "xvals = xr.DataArray([0, 1], dims=\"Level\", coords={\"Level\": [\"Basement\", \"Floor\"]})\n", "post = varying_intercept_trace.posterior # alias for readability\n", "theta = (\n", " (post.mu + post.beta * xvals).mean(dim=(\"chain\", \"draw\")).to_dataset(name=\"Mean log radon\")\n", ")\n", "\n", "_, ax = plt.subplots()\n", "theta.plot.scatter(x=\"Level\", y=\"Mean log radon\", alpha=0.2, color=\"k\", ax=ax) # scatter\n", "ax.plot(xvals, theta[\"Mean log radon\"].T, \"k-\", alpha=0.2)\n", "# add lines too\n", "ax.set_title(\"MEAN LOG RADON BY COUNTY\");" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is easy to show that the partial pooling model provides more objectively reasonable estimates than either the pooled or unpooled models, at least for counties with small sample sizes." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sample_counties = ('LAC QUI PARLE', 'AITKIN', 'KOOCHICHING', \n", " 'DOUGLAS', 'CLAY', 'STEARNS', 'RAMSEY', 'ST LOUIS')\n", "\n", "fig, axes = plt.subplots(2, 4, figsize=(12, 6), sharey=True, sharex=True)\n", "axes = axes.ravel()\n", "m = unpooled_trace.posterior.mean(dim=(\"chain\", \"draw\")).beta\n", "for i,c in enumerate(sample_counties):\n", " y = srrs_mn.log_radon[srrs_mn.county==c]\n", " x = srrs_mn.floor[srrs_mn.county==c]\n", " axes[i].scatter(x + np.random.randn(len(x))*0.01, y, alpha=0.4)\n", " \n", " # No pooling model\n", " b = unpooled_estimates.sel(county=c)\n", " \n", " # Plot both models and data\n", " xvals = np.linspace(0, 1)\n", " axes[i].plot(xvals, m.values*xvals+b.values)\n", " axes[i].plot(xvals, beta_mean*xvals+mu_mean, 'r--')\n", " varying_intercept_trace.posterior.sel(county=c).beta\n", " post = varying_intercept_trace.posterior.sel(county='DOUGLAS').mean(dim=(\"chain\", \"draw\"))\n", " theta = (\n", " post.mu.values + post.beta.values * xvals\n", " )\n", " axes[i].plot(xvals, theta, 'k:')\n", " axes[i].set_xticks([0,1])\n", " axes[i].set_xticklabels(['basement', 'floor'])\n", " axes[i].set_ylim(-1, 3)\n", " axes[i].set_title(c)\n", " if not i%2:\n", " axes[i].set_ylabel('log radon level')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Varying slope model\n", "\n", "Alternatively, we can posit a model that allows the counties to vary according to how the location of measurement (basement or floor) influences the radon reading.\n", "\n", "$$y_i = \\alpha + \\beta_{j[i]} x_{i} + \\epsilon_i$$\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with pm.Model(coords=coords, rng_seeder=RANDOM_SEED) as varying_slope:\n", " \n", " # Priors\n", " mu_b = pm.Normal('mu_b', mu=0., sd=1e5)\n", " sigma_b = pm.HalfCauchy('sigma_b', 5)\n", " \n", " # Common intercepts\n", " mu = pm.Normal('mu', mu=0., sd=1e5)\n", " # Random slopes\n", " beta = pm.Normal('beta', mu=mu_b, sd=sigma_b, dims='county')\n", " \n", " # Model error\n", " sigma_y = pm.HalfCauchy('sigma_y',5)\n", " \n", " # Expected value\n", " y_hat = mu + beta[county] * floor_measure\n", " \n", " # Data likelihood\n", " y_like = pm.Normal('y_like', mu=y_hat, sd=sigma_y, observed=log_radon)\n", " " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with varying_slope:\n", " varying_slope_trace = pm.sample()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "az.plot_forest(varying_slope_trace, var_names=['beta'], figsize=(6,18), combined=True, ess=True, r_hat=True);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "xvals = xr.DataArray([0, 1], dims=\"Level\", coords={\"Level\": [\"Basement\", \"Floor\"]})\n", "post = varying_slope_trace.posterior # alias for readability\n", "theta = (\n", " (post.mu + post.beta * xvals).mean(dim=(\"chain\", \"draw\")).to_dataset(name=\"Mean log radon\")\n", ")\n", "\n", "_, ax = plt.subplots()\n", "theta.plot.scatter(x=\"Level\", y=\"Mean log radon\", alpha=0.2, color=\"k\", ax=ax) # scatter\n", "ax.plot(xvals, theta[\"Mean log radon\"].T, \"k-\", alpha=0.2)\n", "# add lines too\n", "ax.set_title(\"MEAN LOG RADON BY COUNTY\");" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Non-centered Parameterization\n", "\n", "The partial pooling models specified above uses a 
**centered** parameterization of the slope random effect. That is, the individual county slopes are distributed around a common mean, with a spread controlled by the hierarchical standard deviation parameter. As the preceding plot reveals, this constraint serves to **shrink** county estimates toward the overall mean, to a degree proportional to the county sample size. This is exactly what we want, and the model appears to fit well--the Gelman-Rubin statistics are essentially 1.\n", "\n", "But, on closer inspection, there are signs of trouble. Specifically, let's look at the trace of the random effects and their corresponding standard deviation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, axs = plt.subplots(nrows=2)\n", "axs[0].plot(varying_slope_trace.posterior.sel(chain=0)['sigma_b'], alpha=.5);\n", "axs[0].set(ylabel='sigma_b');\n", "axs[1].plot(varying_slope_trace.posterior.sel(chain=0)['beta'], alpha=.05);\n", "axs[1].set(ylabel='beta');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that when the chain reaches the lower end of the parameter space for $\\sigma_b$, it appears to get \"stuck\" and the entire sampler, including the random slopes `beta`, mixes poorly. \n", "\n", "Jointly plotting the random effect variance and one of the individual random slopes demonstrates what is going on." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "x = varying_slope_trace.posterior['beta'].sel(chain=0, county='AITKIN').to_series() \n", "x.name='slope'\n", "y = varying_slope_trace.posterior['sigma_b'].sel(chain=0).to_series()\n", "y.name='slope group variance'\n", "\n", "jp = sns.jointplot(x=x, y=y, ylim=(0, .7));" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When the group variance is small, this implies that the individual random slopes are themselves close to the group mean. This results in a *funnel*-shaped relationship between the samples of group variance and any of the slopes (particularly those with a smaller sample size). \n", "\n", "In itself, this is not a problem, since this is the behavior we expect. However, if the sampler is tuned for the wider (unconstrained) part of the parameter space, it has trouble in the areas of higher curvature. The consequence of this is that the neighborhood close to the lower bound of $\\sigma_b$ is sampled poorly; indeed, in our chain it is not sampled at all below 0.1. The result of this will be biased inference.\n", "\n", "Now that we've spotted the problem, what can we do about it? The best way to deal with this issue is to reparameterize our model. 
Notice the random slopes in this version:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with pm.Model(coords=coords, rng_seeder=RANDOM_SEED) as varying_slope_noncentered:\n", " \n", " # Priors\n", " mu_b = pm.Normal('mu_b', mu=0., sd=1e5)\n", " sigma_b = pm.HalfCauchy('sigma_b', 5)\n", " \n", " # Common intercepts\n", " mu = pm.Normal('mu', mu=0., sd=1e5)\n", " \n", " # Non-centered random slopes\n", " # Centered equivalent: beta = pm.Normal('beta', mu_b, sd=sigma_b, dims='county')\n", " z = pm.Normal('z', mu=0, sd=1, dims='county')\n", " beta = pm.Deterministic(\"beta\", mu_b + z * sigma_b, dims='county')\n", " \n", " # Model error\n", " sigma_y = pm.HalfCauchy('sigma_y',5)\n", " \n", " # Expected value\n", " y_hat = mu + beta[county] * floor_measure\n", " \n", " # Data likelihood\n", " y_like = pm.Normal('y_like', mu=y_hat, sd=sigma_y, observed=log_radon)\n", " " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pm.model_to_graphviz(varying_slope_noncentered)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is a **non-centered** parameterization. By this, we mean that the random deviates are no longer explicitly modeled as being centered on $\\mu_b$. Instead, they are independent standard normals $z$, which are then scaled by the appropriate value of $\\sigma_b$, before being location-transformed by the mean.\n", "\n", "This model samples much better." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with varying_slope_noncentered:\n", " noncentered_trace = pm.sample(tune=2000, target_accept=.9)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that the bottlenecks in the traces are gone." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, axs = plt.subplots(nrows=2)\n", "axs[0].plot(noncentered_trace.posterior.sel(chain=0)['sigma_b'], alpha=.5);\n", "axs[0].set(ylabel='sigma_b');\n", "axs[1].plot(noncentered_trace.posterior.sel(chain=0)['beta'], alpha=.05);\n", "axs[1].set(ylabel='beta');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And, we are now fully exploring the support of the posterior."
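] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The improvement can also be quantified with sampling diagnostics. Below is a quick comparison of divergence counts and the effective sample size for the group standard deviation (a sketch; the exact numbers will vary from run to run)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Compare sampling diagnostics for the centered and non-centered parameterizations\n", "for name, trace in [('centered', varying_slope_trace), ('non-centered', noncentered_trace)]:\n", "    n_div = int(trace.sample_stats['diverging'].sum())\n", "    ess = az.ess(trace, var_names=['sigma_b'])['sigma_b'].item()\n", "    print(f\"{name}: {n_div} divergences, ESS(sigma_b) = {ess:.0f}\")"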
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x = noncentered_trace.posterior['beta'].sel(chain=0, county='AITKIN').to_series() \n", "x.name='slope'\n", "y = noncentered_trace.posterior['sigma_b'].sel(chain=0).to_series()\n", "y.name='slope group variance'\n", "\n", "jp = sns.jointplot(x=x, y=y, ylim=(0, .7));" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, constrained_layout=True)\n", "az.plot_posterior(varying_slope_trace, var_names=['sigma_b'], ax=ax1)\n", "az.plot_posterior(noncentered_trace, var_names=['sigma_b'], ax=ax2)\n", "ax1.set_title('Centered (top) and non-centered (bottom)');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Varying intercept and slope model\n", "\n", "The most general model allows both the intercept and slope to vary by county:\n", "\n", "$$y_i = \\alpha_{j[i]} + \\beta_{j[i]} x_{i} + \\epsilon_i$$\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with pm.Model(coords=coords, rng_seeder=RANDOM_SEED) as varying_intercept_slope:\n", " \n", " # Priors\n", " mu_a = pm.Normal('mu_a', mu=0., sd=1e5)\n", " sigma_a = pm.HalfCauchy('sigma_a', 5)\n", " \n", " mu_b = pm.Normal('mu_b', mu=0., sd=1e5)\n", " sigma_b = pm.HalfCauchy('sigma_b', 5)\n", " \n", " # Random intercepts\n", " mu = pm.Normal('mu', mu=mu_a, sd=sigma_a, dims='county')\n", " # Random slopes\n", " beta = pm.Normal('beta', mu=mu_b, sd=sigma_b, dims='county')\n", " \n", " # Model error\n", " sigma_y = pm.Uniform('sigma_y', lower=0, upper=100)\n", " \n", " # Expected value\n", " y_hat = mu[county] + beta[county] * floor_measure\n", " \n", " # Data likelihood\n", " y_like = pm.Normal('y_like', mu=y_hat, sd=sigma_y, observed=log_radon)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with varying_intercept_slope:\n", " varying_intercept_slope_trace = pm.sample(tune=2000)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "az.plot_energy(varying_intercept_slope_trace)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "az.plot_forest(varying_intercept_slope_trace, var_names=['mu','beta'], figsize=(6,24), combined=True, ess=True, r_hat=True);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import xarray as xr\n", "\n", "xvals = xr.DataArray([0, 1], dims=\"Level\", coords={\"Level\": [\"Basement\", \"Floor\"]})\n", "post = varying_intercept_slope_trace.posterior # alias for readability\n", "theta = (\n", " (post.mu + post.beta * xvals).mean(dim=(\"chain\", \"draw\")).to_dataset(name=\"Mean log radon\")\n", ")\n", "\n", "_, ax = plt.subplots()\n", "theta.plot.scatter(x=\"Level\", y=\"Mean log radon\", alpha=0.2, color=\"k\", ax=ax) # scatter\n", "ax.plot(xvals, theta[\"Mean log radon\"].T, \"k-\", alpha=0.2)\n", "# add lines too\n", "ax.set_title(\"MEAN LOG RADON BY COUNTY\");" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise\n", "\n", "Reparameterize the `varying_intercept_slope` model to be non-centered, and compare the resulting parameter estimates." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with pm.Model(coords=coords, rng_seeder=RANDOM_SEED) as varying_intercept_slope_noncentered:\n", " \n", " # Priors\n", " mu_a = pm.Normal('mu_a', mu=0., sd=1e5)\n", " sigma_a = pm.HalfCauchy('sigma_a', 5)\n", " \n", " mu_b = pm.Normal('mu_b', mu=0., sd=1e5)\n", " sigma_b = pm.HalfCauchy('sigma_b', 5)\n", " \n", " # Random intercepts\n", " z_mu = pm.Normal('z_mu', mu=0, sd=1, dims='county')\n", " mu = pm.Deterministic(\"mu\", mu_a + z_mu * sigma_a, dims='county')\n", " # Random slopes\n", " z_beta = pm.Normal('z_beta', mu=0, sd=1, dims='county')\n", " beta = pm.Deterministic(\"beta\", mu_b + z_beta * sigma_b, dims='county')\n", " \n", " # Model error\n", " sigma_y = pm.Uniform('sigma_y', lower=0, upper=100)\n", " \n", " # Expected value\n", " y_hat = mu[county] + beta[county] * floor_measure\n", " \n", " # Data likelihood\n", " y_like = pm.Normal('y_like', mu=y_hat, sd=sigma_y, observed=log_radon)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with varying_intercept_slope_noncentered:\n", " varying_intercept_slope_noncentered_trace = pm.sample(tune=2000, target_accept=.9)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "az.plot_forest(varying_intercept_slope_noncentered_trace, var_names=['mu','beta'], figsize=(6,24), combined=True, ess=True, r_hat=True);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Adding group-level predictors\n", "\n", "A primary strength of multilevel models is the ability to handle predictors on multiple levels simultaneously. If we consider the varying-intercepts model above:\n", "\n", "$$y_i = \\alpha_{j[i]} + \\beta x_{i} + \\epsilon_i$$\n", "\n", "we may, instead of a simple random effect to describe variation in the expected radon value, specify another regression model with a county-level covariate. Here, we use the county uranium reading $u_j$, which is thought to be related to radon levels:\n", "\n", "$$\\alpha_j = \\gamma_0 + \\gamma_1 u_j + \\zeta_j$$\n", "\n", "$$\\zeta_j \\sim N(0, \\sigma_{\\alpha}^2)$$\n", "\n", "Thus, we are now incorporating a house-level predictor (floor or basement) as well as a county-level predictor (uranium).\n", "\n", "Note that the model has both indicator variables for each county, plus a county-level covariate. In classical regression, this would result in collinearity. In a multilevel model, the partial pooling of the intercepts towards the expected value of the group-level linear model avoids this.\n", "\n", "Group-level predictors also serve to reduce group-level variation $\\sigma_{\\alpha}$. An important implication of this is that the group-level estimate induces stronger pooling." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with pm.Model(coords=coords, rng_seeder=RANDOM_SEED) as hierarchical_intercept:\n", " \n", " # Priors\n", " sigma_a = pm.HalfCauchy('sigma_a', 5)\n", " \n", " # County uranium model\n", " gamma_0 = pm.Normal('gamma_0', mu=0., sd=1e5)\n", " gamma_1 = pm.Normal('gamma_1', mu=0., sd=1e5)\n", " \n", " \n", " # Uranium model for intercept\n", " mu_a = pm.Deterministic('mu_a', gamma_0 + gamma_1*u)\n", " # County variation not explained by uranium\n", " epsilon_a = pm.Normal('epsilon_a', mu=0, sd=1, dims='county')\n", " mu = pm.Deterministic('mu', mu_a + sigma_a*epsilon_a)\n", " \n", " # Common slope\n", " beta = pm.Normal('beta', mu=0., sd=1e5)\n", " \n", " # Model error\n", " sigma_y = pm.Uniform('sigma_y', lower=0, upper=100)\n", " \n", " # Expected value\n", " y_hat = mu[county] + beta * floor_measure\n", " \n", " # Data likelihood\n", " y_like = pm.Normal('y_like', mu=y_hat, sd=sigma_y, observed=log_radon)\n", " " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with hierarchical_intercept:\n", " hierarchical_intercept_trace = pm.sample(tune=2000)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "uranium = u\n", "post = hierarchical_intercept_trace.posterior.assign_coords(uranium=uranium)\n", "avg_a = post[\"mu_a\"].mean(dim=(\"chain\", \"draw\")).values[np.argsort(uranium)]\n", "avg_a_county = post[\"mu\"].mean(dim=(\"chain\", \"draw\"))\n", "avg_a_county_hdi = az.hdi(post, var_names=\"mu\")[\"mu\"]\n", "\n", "_, ax = plt.subplots()\n", "ax.plot(uranium[np.argsort(uranium)], avg_a, \"k--\", alpha=0.6, label=\"Mean intercept\")\n", "az.plot_hdi(\n", " uranium,\n", " post[\"mu\"],\n", " fill_kwargs={\"alpha\": 0.1, \"color\": \"k\", \"label\": \"Mean intercept HPD\"},\n", " ax=ax,\n", ")\n", "ax.scatter(uranium, avg_a_county, alpha=0.8, label=\"Mean county-intercept\")\n", "ax.vlines(\n", " uranium,\n", " avg_a_county_hdi.sel(hdi=\"lower\"),\n", " avg_a_county_hdi.sel(hdi=\"higher\"),\n", " alpha=0.5,\n", " color=\"orange\",\n", ")\n", "plt.xlabel(\"County-level uranium\")\n", "plt.ylabel(\"Intercept estimate\")\n", "plt.legend(fontsize=9);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The standard errors on the intercepts are narrower than for the partial-pooling model without a county-level covariate." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Correlations among levels\n", "\n", "In some instances, having predictors at multiple levels can reveal correlation between individual-level variables and group residuals. We can account for this by including the average of the individual predictors as a covariate in the model for the group intercept.\n", "\n", "$$\\alpha_j = \\gamma_0 + \\gamma_1 u_j + \\gamma_2 \\bar{x} + \\zeta_j$$\n", "\n", "These are broadly referred to as ***contextual effects***." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create new variable for mean of floor across counties\n", "xbar = srrs_mn.groupby('county')['floor'].mean().values" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "xbar.shape, u.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "type(floor_idx)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with pm.Model(coords=coords, rng_seeder=RANDOM_SEED) as contextual_effect:\n", " floor_idx = pm.Data(\"floor_idx\", floor, mutable=True)\n", " county_idx = pm.Data(\"county_idx\", county, mutable=True)\n", " \n", " # Priors\n", " sigma_a = pm.HalfCauchy('sigma_a', 5)\n", " \n", " # County uranium model for slope\n", " gamma = pm.Normal('gamma', mu=0., sd=1e5, shape=3)\n", " \n", " # Uranium model for intercept\n", " mu_a = pm.Deterministic('mu_a', gamma[0] + gamma[1]*u + gamma[2]*xbar)\n", "\n", " # County variation not explained by uranium\n", " epsilon_a = pm.Normal('epsilon_a', mu=0, sd=1, dims='county')\n", " mu = pm.Deterministic('mu', mu_a + sigma_a*epsilon_a)\n", "\n", " # Common slope\n", " beta = pm.Normal('beta', mu=0., sd=1e15)\n", " \n", " # Model error\n", " sigma_y = pm.Uniform('sigma_y', lower=0, upper=100)\n", " \n", " # Expected value\n", " y_hat = mu[county_idx] + beta * floor_idx\n", " \n", " # Data likelihood\n", " y_like = pm.Normal('y_like', mu=y_hat, sd=sigma_y, observed=log_radon)\n", " " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pm.model_to_graphviz(contextual_effect)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with contextual_effect:\n", " contextual_effect_trace = pm.sample(tune=2000)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "az.plot_forest(contextual_effect_trace, var_names=['gamma'], combined=True, ess=True, r_hat=True);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "az.summary(contextual_effect_trace, var_names=['gamma'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So, we might infer from this that counties with higher proportions of houses without basements tend to have higher baseline levels of radon. Perhaps this is related to the soil type, which in turn might influence what type of structures are built." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Prediction\n", "\n", "Gelman (2006) used cross-validation tests to check the prediction error of the unpooled, pooled, and partially-pooled models\n", "\n", "**root mean squared cross-validation prediction errors**:\n", "\n", "* unpooled = 0.86\n", "* pooled = 0.84\n", "* multilevel = 0.79\n", "\n", "There are two types of prediction that can be made in a multilevel model:\n", "\n", "1. a new individual within an existing group\n", "2. a new individual within a new group\n", "\n", "For example, if we wanted to make a prediction for a new house with no basement in St. Louis and Kanabec counties, we just need to sample from the radon model with the appropriate intercept." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That is, \n", "\n", "$$\\tilde{y}_i \\sim N(\\alpha_{69} + \\beta (x_i=1), \\sigma_y^2)$$\n", "\n", "Because we judiciously set the county index and floor values as shared variables earlier, we can modify them directly to the desired values (69 and 1 respectively) and sample corresponding posterior predictions, without having to redefine and recompile our model. Using the model just above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prediction_coords = {\"obs_id\": [\"ST LOUIS\", \"KANABEC\"]}\n", "with contextual_effect:\n", " pm.set_data(\n", " {\"county_idx\": np.array([69, 31]), \n", " \"floor_idx\": np.array([1, 1])}\n", " )\n", " stl_pred = pm.sample_posterior_predictive(\n", " contextual_effect_trace.posterior\n", " )\n", "\n", "contextual_effect_trace.extend(stl_pred)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "contextual_effect_trace" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "az.plot_posterior(contextual_effect_trace, group='posterior_predictive');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise\n", "\n", "How would we make a prediction from a new county (*e.g.* one not included in this dataset)?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Write your answer here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Benefits of Multilevel Models\n", "\n", "- Accounting for natural hierarchical structure of observational data\n", "- Estimation of coefficients for (under-represented) groups\n", "- Incorporating individual- and group-level information when estimating group-level coefficients\n", "- Allowing for variation among individual-level coefficients across groups\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## References\n", "\n", "Gelman, A., & Hill, J. (2006). Data Analysis Using Regression and Multilevel/Hierarchical Models (1st ed.). Cambridge University Press.\n", "\n", "Betancourt, M. J., & Girolami, M. (2013). Hamiltonian Monte Carlo for Hierarchical Models.\n", "\n", "Gelman, A. (2006). Multilevel (Hierarchical) modeling: what it can and cannot do. Technometrics, 48(3), 432–435." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.9" }, "latex_envs": { "bibliofile": "biblio.bib", "cite_by": "apalike", "current_citInitial": 1, "eqLabelWithNumbers": true, "eqNumInitial": 0 } }, "nbformat": 4, "nbformat_minor": 2 }