{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Solutions-04\n", "\n", "As we have seen, the standard Bayesian approach to model fitting involves sampling the posterior, usually via a variant of Markov Chain Monte Carlo (MCMC). Though there are many very sophisticated MCMC samplers out there, the most simple algorithm (Metropolis-Hastings) is rather straightforward to code.\n", "\n", "Here we'll walk through creating our own Metropolis-Hastings sampler from scratch, in order to better understand exactly what is going on under the hood.\n", "\n", "If you'd like to view one possible solution, take a look at the [Solutions-04](Solutions-04.ipynb) notebook (but again, try to make an honest effort at this before you peek!)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Preliminaries\n", "\n", "As usual, we start with some imports:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "from scipy import stats\n", "\n", "# use seaborn plotting defaults\n", "# If this causes an error, you can comment it out.\n", "import seaborn; seaborn.set()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Metropolis-Hastings Procedure\n", "\n", "Recall the Metropolis-Hastings procedure:\n", "\n", "1. Define a posterior $p(\\theta~|~D, I)$\n", "2. Define a *proposal density* $p(\\theta_{i + 1}~|~\\theta_i)$, which must be a symmetric function, but otherwise is unconstrained (a Gaussian is the usual choice).\n", "3. Choose a starting point $\\theta_0$\n", "4. Repeat the following:\n", "\n", " 1. Given $\\theta_i$, draw a new $\\theta_{i + 1}$ from the proposal distribution\n", " \n", " 2. Compute the *acceptance ratio*\n", " $$\n", " a = \\frac{p(\\theta_{i + 1}~|~D,I)}{p(\\theta_i~|~D,I)}\n", " $$\n", " \n", " 3. If $a \\ge 1$, the proposal is more likely: accept the draw and add $\\theta_{i + 1}$ to the chain.\n", " \n", " 4. If $a < 1$, then accept the point with probability $a$: this can be done by drawing a uniform random number $r$ and checking if $a < r$. If the point is accepted, add $\\theta_{i + 1}$ to the chain. If not, then add $\\theta_i$ to the chain *again*.\n", " \n", "The goal is to produce a \"chain\", i.e. a list of $\\theta$ values, where each $\\theta$ is a vector of parameters for your model.\n", "Here we'll write a simple Metropolis-Hastings sampler in Python.\n", "\n", "Note that the ``np.random.randn()`` function will be useful: it returns a pseudorandom value drawn from a standard normal distribution (i.e. mean of zero and variance of 1)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Data\n", "\n", "We'll use data drawn from a straight line model" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def make_data(intercept, slope, N=20, dy=2, rseed=42):\n", " rand = np.random.RandomState(rseed)\n", " x = 100 * rand.rand(20)\n", " y = intercept + slope * x\n", " y += dy * rand.randn(20)\n", " return x, y, dy * np.ones_like(x)\n", "\n", "theta_true = (2, 0.5)\n", "x, y, dy = make_data(*theta_true)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise\n", "\n", "Walk through all the following steps, filling-in the code along the way." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First plot the data to see what we're looking at (Use a ``plt.errorbar()`` plot with the provided data)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "plt.errorbar(x, y, dy, fmt='o');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We're going to fit a line to the data, as we've done through the lecture:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def model(theta, x):\n", " # the `theta` argument is a list of parameter values, e.g., theta = [m, b] for a line\n", " return theta[0] + theta[1] * x" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "We'll start with the assumption that the data are independent and identically distributed so that the likelihood is simply a product of Gaussians (one big Gaussian). We'll also assume that the uncertainties reported are correct, and that there are no uncertainties on the `x` data. We need to define a function that will evaluate the (ln)likelihood of the data, given a particular choice of your model parameters. A good way to structure this function is as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def ln_likelihood(theta, x, y, dy):\n", " # we will pass the parameters (theta) to the model function\n", " # the other arguments are the data\n", " return -0.5 * np.sum(np.log(2 * np.pi * dy ** 2)\n", " + ((y - model(theta, x)) / dy) ** 2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What about priors? Remember your prior only depends on the model parameters, but be careful about what kind of prior you are specifying for each parameter. Do we need to properly normalize the probabilities?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def ln_prior(theta):\n", " # flat prior: log(1) = 0\n", " return 0" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can define a function that evaluates the log-posterior probability, which is just the sum of the log-prior and log-likelihood:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def ln_posterior(theta, x, y, dy):\n", " return ln_prior(theta) + ln_likelihood(theta, x, y, dy)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now write a function to actually run a Metropolis-Hastings MCMC sampler. Ford (2005) includes a great step-by-step walkthrough of the Metropolis-Hastings algorithm, and we'll base our code on that. Fill-in the steps mentioned in the comments below:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def run_mcmc(ln_posterior, nsteps, ndim, theta0, stepsize, args=()):\n", " \"\"\"\n", " Run a Markov Chain Monte Carlo\n", " \n", " Parameters\n", " ----------\n", " ln_posterior: callable\n", " our function to compute the posterior\n", " nsteps: int\n", " the number of steps in the chain\n", " theta0: list\n", " the starting guess for theta\n", " stepsize: float\n", " a parameter controlling the size of the random step\n", " e.g. 
it could be the width of the Gaussian distribution\n", " args: tuple (optional)\n", " additional arguments passed to ln_posterior\n", " \"\"\"\n", " # Create the array of size (nsteps, ndims) to hold the chain\n", " # Initialize the first row of this with theta0\n", " chain = np.zeros((nsteps, ndim))\n", " chain[0] = theta0\n", " \n", " # Create the array of size nsteps to hold the log-likelihoods for each point\n", " # Initialize the first entry of this with the log likelihood at theta0\n", " log_likes = np.zeros(nsteps)\n", " log_likes[0] = ln_posterior(chain[0], *args)\n", " \n", " # Loop for nsteps\n", " for i in range(1, nsteps):\n", " # Randomly draw a new theta from the proposal distribution.\n", " # for example, you can do a normally-distributed step by utilizing\n", " # the np.random.randn() function\n", " theta_new = chain[i - 1] + stepsize * np.random.randn(ndim)\n", " \n", " # Calculate the probability for the new state\n", " log_like_new = ln_likelihood(theta_new, *args)\n", " \n", " # Compare it to the probability of the old state\n", " # Using the acceptance probability function\n", " # (remember that you've computed the log probability, not the probability!)\n", " log_p_accept = log_like_new - log_likes[i - 1]\n", " \n", " # Chose a random number r between 0 and 1 to compare with p_accept\n", " r = np.random.rand()\n", " \n", " # If p_accept>1 or p_accept>r, accept the step\n", " # Else, do not accept the step\n", " if log_p_accept > np.log(r):\n", " chain[i] = theta_new\n", " log_likes[i] = log_like_new\n", " else:\n", " chain[i] = chain[i - 1]\n", " log_likes[i] = log_likes[i - 1]\n", " \n", " return chain" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now run the MCMC code on the data provided." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "chain = run_mcmc(ln_posterior, 10000, 2, [0, 1], 0.1, (x, y, dy))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Plot the position of the walker as a function of step number for each of the parameters. Are the chains converged? " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig, ax = plt.subplots(2)\n", "ax[0].plot(chain[:, 0])\n", "ax[1].plot(chain[:, 1]);" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Now that we've burned-in, let's get a fresh chain\n", "chain = run_mcmc(ln_posterior, 20000, 2, chain[-1], 0.1, (x, y, dy))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig, ax = plt.subplots(2)\n", "ax[0].plot(chain[:, 0])\n", "ax[1].plot(chain[:, 1]);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Make histograms of the samples for each parameter. Should you include all of the samples? 
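" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this solution the chain was restarted from the endpoint of an already burned-in run, so all of its samples are usable. With a single cold-started chain, you would typically discard an initial burn-in segment before making histograms. Here is a hypothetical sketch of that trimming; the cutoff `nburn` is an assumed tuning choice (normally eyeballed from the trace plots), not something prescribed above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# hypothetical burn-in removal: drop the first nburn samples\n", "# (not needed for our chain, which was restarted from a converged point;\n", "# nburn = 1000 is an assumed value, chosen by inspecting the trace plots)\n", "nburn = 1000\n", "trimmed_chain = chain[nburn:]\n", "print(trimmed_chain.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now the one-dimensional histograms of the samples: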
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig, ax = plt.subplots(2)\n", "ax[0].hist(chain[:, 0], alpha=0.5)\n", "ax[1].hist(chain[:, 1], alpha=0.5);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It's also sometimes useful to view a two-dimensional histogram:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "plt.hist2d(chain[:, 0], chain[:, 1], bins=30,\n", " cmap='Blues')\n", "plt.xlabel('intercept')\n", "plt.ylabel('slope')\n", "plt.grid(False);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Report to us your constraints on the model parameters.\n", "This is the number for the abstract – the challenge is to figure out how to accurately summarize a multi-dimensional posterior (which is **the result** in Bayesianism) with a few numbers (which is what readers want to see as they skim the arXiv).\n", "\n", "What numbers should you use?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "theta_best = chain.mean(0)\n", "theta_std = chain.std(0)\n", "\n", "print(\"true intercept:\", theta_true[0])\n", "print(\"true slope:\", theta_true[1])\n", "print()\n", "print(\"intercept = {0:.1f} +/- {1:.1f}\".format(theta_best[0], theta_std[0]))\n", "print(\"slope = {0:.2f} +/- {1:.2f}\".format(theta_best[1], theta_std[1]))" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.1" } }, "nbformat": 4, "nbformat_minor": 0 }