{ "cells": [ { "cell_type": "markdown", "metadata": { "ExecuteTime": { "end_time": "2018-06-18T15:27:28.707896Z", "start_time": "2018-06-18T15:27:28.702766Z" } }, "source": [ "# Compound Steps in Sampling\n", "This notebook explains how the compound steps work in `pymc3.sample` function when sampling multiple random variables. We are going to answer the following questions associated with compound steps:\n", "\n", "- How does compound steps work?\n", "- What will happen when PyMC3 assign step methods by default?\n", "- How to specify the step methods? What is the order to apply the step methods at each iteration? Is there a way to specify the order of the step methods? \n", "- What are the issues with mixing discrete and continuous samplers, especially with HMC/NUTS?" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "ExecuteTime": { "end_time": "2018-07-28T21:53:54.050085Z", "start_time": "2018-07-28T21:53:52.045480Z" } }, "outputs": [], "source": [ "import pymc3 as pm\n", "import numpy as np\n", "import theano" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Compound steps" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When sampling a model with multiple free random variables, the compound steps will be needed in the `pm.sample` function. When the compound steps are involved, the function takes a list of `step` to generate a list of `methods` for different random variables. So for example in the following code,\n", "```python\n", "with pm.Model() as m:\n", " rv1 = ... # random variable 1 (continuous)\n", " rv2 = ... # random variable 2 (continuous)\n", " rv3 = ... # random variable 3 (categorical)\n", " ...\n", " step1 = pm.Metropolis([rv1, rv2])\n", " step2 = pm.CategoricalGibbsMetropolis([rv3])\n", " trace = pm.sample(..., step=[step1, step2]...)\n", "```\n", "the compound step is now contain a list of `methods`. And at each sampling step, it iterates each method, which takes a `point` as input, and generates a new `point` as output. The new `point` is proposed within each step via a stochastic kernel, and if the proposal was rejected by the Metropolis-Hastings criteria it just outputs the original input `point`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Compound steps by default\n", "When we call `pm.sample()`, `PyMC3` assigns the best step method to each of the free random variables. Take the following example:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "ExecuteTime": { "end_time": "2018-07-28T21:54:23.727052Z", "start_time": "2018-07-28T21:53:56.768369Z" } }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Multiprocess sampling (4 chains in 4 jobs)\n", "CompoundStep\n", ">NUTS: [p]\n", ">BinaryGibbsMetropolis: [ni]\n" ] }, { "data": { "text/html": [ "\n", "
\n", " \n", " \n", " 100.00% [8000/8000 00:06<00:00 Sampling 4 chains, 0 divergences]\n", "
\n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 8 seconds.\n" ] } ], "source": [ "n_ = theano.shared(np.asarray([10, 15]))\n", "with pm.Model() as m:\n", " p = pm.Beta('p', 1., 1.)\n", " ni = pm.Bernoulli('ni', .5)\n", " k = pm.Binomial('k', p=p, n=n_[ni], observed=4)\n", " trace = pm.sample()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are two free parameters in the model we would like to sample from, a continuous variable `p_logodds__` and a binary variable `ni`." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "ExecuteTime": { "end_time": "2018-07-28T21:56:05.311321Z", "start_time": "2018-07-28T21:56:05.302743Z" } }, "outputs": [ { "data": { "text/plain": [ "[p_logodds__, ni]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "m.free_RVs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When we call `pm.sample()`, `PyMC3` assigns the best step method to each of them. For example, `NUTS` was assigned to `p_logodds__` and `BinaryGibbsMetropolis` was assigned to `ni`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Specify compound steps\n", "But we can also specify the steps manually:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "ExecuteTime": { "end_time": "2018-07-28T21:56:07.867622Z", "start_time": "2018-07-28T21:56:07.051095Z" } }, "outputs": [], "source": [ "with m:\n", " step1 = pm.Metropolis([p])\n", " step2 = pm.BinaryGibbsMetropolis([ni])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And now we can pass a point to each step, and see what happens. First, let us generate a test `point` as the input:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "ExecuteTime": { "end_time": "2018-07-28T21:56:09.259368Z", "start_time": "2018-07-28T21:56:09.253057Z" } }, "outputs": [ { "data": { "text/plain": [ "{'p_logodds__': array(0.), 'ni': array(0)}" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "point = m.test_point\n", "point" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then pass the `point` to the first step method `pm.Metropolis` for random variable `p`." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "ExecuteTime": { "end_time": "2018-07-28T21:56:10.554828Z", "start_time": "2018-07-28T21:56:10.547243Z" } }, "outputs": [ { "data": { "text/plain": [ "({'p_logodds__': array(0.20397629), 'ni': array(0)},\n", " [{'tune': True,\n", " 'scaling': array([1.]),\n", " 'accept': 0.7662261757775519,\n", " 'accepted': True}])" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "point, state = step1.step(point=point)\n", "point, state" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, the value of `ni` does not change, but `p_logodds__` is updated.\n", "\n", "And similarly, you can pass the updated `point` to `step2` and get a sample for `ni`:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "ExecuteTime": { "end_time": "2018-07-28T21:56:11.698858Z", "start_time": "2018-07-28T21:56:11.691170Z" } }, "outputs": [ { "data": { "text/plain": [ "{'p_logodds__': array(0.20397629), 'ni': array(0)}" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "point = step2.step(point=point)\n", "point" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Compound step works exactly like this by iterating all the steps within the list. In effect, it is a metropolis hastings within gibbs sampling. \n", "\n", "Moreover, `pm.CompoundStep` is called internally by `pm.sample()`. We can make them explicit as below:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "ExecuteTime": { "end_time": "2018-07-28T21:56:12.982233Z", "start_time": "2018-07-28T21:56:12.976999Z" } }, "outputs": [ { "data": { "text/plain": [ "[,\n", " ]" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "with m:\n", " comp_step1 = pm.CompoundStep([step1, step2])\n", "comp_step1.methods" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Order of step methods" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When in the default setting, the parameter update order follows the same order of the random variables, and it is assigned automatically. But if you specify the steps, you can change the order of the methods in the list:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "ExecuteTime": { "end_time": "2018-07-28T21:56:14.969080Z", "start_time": "2018-07-28T21:56:14.963094Z" } }, "outputs": [ { "data": { "text/plain": [ "[,\n", " ]" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "with m:\n", " comp_step2 = pm.CompoundStep([step2, step1])\n", "comp_step2.methods" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the sampling process, it always follows the same step order in each sample in the Gibbs-like fashion. More precisely, at each update, it iterates over the list of `methods` where the accept/reject is based on comparing the acceptance rate with $p \\sim \\text{Uniform}(0, 1)$ (by checking whether $\\log p < \\log p_{\\text {updated}} - \\log p_{\\text {current}}$)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Issues with mixing discrete and continuous sampling" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A recurrent issue/concern is the validity of mixing discrete and continuous sampling, especially mixing other samplers with NUTS. While in the book [Bayesian Data Analysis 3rd edition](http://www.stat.columbia.edu/~gelman/book/) Chapter 12.4, there is a small paragraph on \"Combining Hamiltonian Monte Carlo with Gibbs sampling\", which suggests that this could be a valid way to do, the Stan developers are always skeptical about how practical it is. (Here are more discussions about this issue [1](http://discourse.mc-stan.org/t/mcmc-sampling-does-not-work-when-execute/1918/47), [2](http://discourse.mc-stan.org/t/constraining-latent-factor-model-baysian-probabalisic-matrix-factorization-to-remove-multimodality/2152/21)). \n", "\n", "The concern with mixing discrete and continuous sampling is that the change in discrete parameters will affect the continuous distribution's geometry so that the adaptation (i.e., the tuned mass matrix and step size) may be inappropriate for the Hamiltonian Monte Carlo sampling. HMC/NUTS is hypersensitive to its tuning parameters (mass matrix and step size). Another issue is that we also don't know how many iterations we have to run to get a decent sample when the discrete parameters change. Though it hasn't been fully evaluated, it seems that if the discrete parameter is in low dimensions (e.g., 2-class mixture models, outlier detection with explicit discrete labeling), the mixing of discrete sampling with HMC/NUTS works OK. However, it is much less efficient than marginalizing out the discrete parameters. And sometimes it can be observed that the Markov chains get stuck quite often. In order to evaluate this more properly, one can use a simulation-based method to look at the posterior coverage and establish the computational correctness, as explained in [Cook, Gelman, and Rubin 2006](https://amstat.tandfonline.com/doi/abs/10.1198/106186006x136976)." ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "numpy 1.18.5\n", "pymc3 3.9.0\n", "theano 1.0.4\n", "last updated: Mon Jun 15 2020 \n", "\n", "CPython 3.7.7\n", "IPython 7.15.0\n", "watermark 2.0.2\n" ] } ], "source": [ "%load_ext watermark\n", "%watermark -n -u -v -iv -w" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.7" }, "latex_envs": { "LaTeX_envs_menu_present": true, "autoclose": false, "autocomplete": true, "bibliofile": "biblio.bib", "cite_by": "apalike", "current_citInitial": 1, "eqLabelWithNumbers": true, "eqNumInitial": 1, "hotkeys": { "equation": "Ctrl-E", "itemize": "Ctrl-I" }, "labels_anchors": false, "latex_user_defs": false, "report_style_numbering": false, "user_envs_cfg": false }, "toc": { "nav_menu": {}, "number_sections": false, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": true, "toc_window_display": true } }, "nbformat": 4, "nbformat_minor": 4 }