{ "metadata": { "name": "", "signature": "sha256:0c1879fd6f722611f3bc510c3a7b0af0b7fd397c84d17ff39768bb0755958f04" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Bayesian Regression\n", "\n", "## Data Science School, Nyeri, Kenya\n", "\n", "### 16th June 2015 Neil Lawrence\n", "\n", "$$\\newcommand{\\inputScalar}{x}\n", "\\newcommand{\\lengthScale}{\\ell}\n", "\\newcommand{\\mappingVector}{\\mathbf{w}}\n", "\\newcommand{\\gaussianDist}[3]{\\mathcal{N}\\left(#1|#2,#3\\right)}\n", "\\newcommand{\\gaussianSamp}[2]{\\mathcal{N}\\left(#1,#2\\right)}\n", "\\newcommand{\\zerosVector}{\\mathbf{0}}\n", "\\newcommand{\\eye}{\\mathbf{I}}\n", "\\newcommand{\\dataStd}{\\sigma}\n", "\\newcommand{\\dataScalar}{y}\n", "\\newcommand{\\dataVector}{\\mathbf{y}}\n", "\\newcommand{\\dataMatrix}{\\mathbf{Y}}\n", "\\newcommand{\\noiseScalar}{\\epsilon}\n", "\\newcommand{\\noiseVector}{\\mathbf{\\epsilon}}\n", "\\newcommand{\\noiseMatrix}{\\mathbf{\\Epsilon}}\n", "\\newcommand{\\inputVector}{\\mathbf{x}}\n", "\\newcommand{\\kernelMatrix}{\\mathbf{K}}\n", "\\newcommand{\\basisMatrix}{\\mathbf{\\Phi}}\n", "\\newcommand{\\basisVector}{\\mathbf{\\phi}}\n", "\\newcommand{\\basisScalar}{\\phi}\n", "\\newcommand{\\expSamp}[1]{\\left<#1\\right>}\n", "\\newcommand{\\expDist}[2]{\\left<#1\\right>_{#2}}\n", "\\newcommand{\\covarianceMatrix}{\\mathbf{C}}\n", "\\newcommand{\\numData}{n}\n", "\\newcommand{\\mappingScalar}{w}\n", "\\newcommand{\\mappingFunctionScalar}{f}\n", "\\newcommand{\\mappingFunctionVector}{\\mathbf{f}}\n", "\\newcommand{\\meanVector}{\\boldsymbol{\\mu}}\n", "\\newcommand{\\meanScalar}{\\mu}$$\n", "\n", "### Overdetermined Systems\n", "\n", "At the beginning of this course, we motivated the introduction of probability by considering systems where there were more observations than unknowns. In particular we though about the simple fitting of the gradient and an offset of a line,\n", "\n", "$$ y= mx +c $$\n", "\n", "and what happens if we have three pairs of observations of $x$ and $y$, $\\{x_i, y_i\\}_{i=1}^3$. We solved this issue by introducing a type of [slack variable](http://en.wikipedia.org/wiki/Slack_variable), $\\epsilon_i$, known as noise, such that for each observation we had the equation,\n", "\n", "$$y_i = mx_i + c + \\epsilon_i.$$\n", "\n", "### Underdetermined System\n", "\n", "In contrast, today we'd like to consider the situation where you have more parameters than data in your simultaneous equation. So we have an *underdetermined* system. In fact this set up is in some sense *easier* to solve, because we don't need to think about introducing a slack variable (although it might make a lot of sense from a *modelling* perspective to do so).\n", "\n", "In the overdetermined system, we resolved the problem by introducing slack variables, $\\epsilon_i$, which needed to be estimated for each point. The slack variable represented the difference between our actual prediction and the true observation. This is known as the *residual*. By introducing the slack variable we now have an additional $n$ variables to estimate, one for each data point, $\\{\\epsilon_i\\}$. This actually turns the overdetermined system into an underdetermined system. Introduction of $n$ variables, plus the original $m$ and $c$ gives us $n+2$ parameters to be estimated from $n$ observations, which actually makes the system *underdetermined*. 
However, we then made a probabilistic assumption about the slack variables: we assumed that they were distributed according to a probability density. And for the moment we have been assuming that density is the Gaussian,\n", "\n", "$$\epsilon_i \sim \mathcal{N}(0, \sigma^2),$$\n", "\n", "with zero mean and variance $\sigma^2$. \n", "\n", "#### Sum of Squares and Probability\n", "\n", "In the overdetermined system we introduced a new set of slack variables, $\{\epsilon_i\}_{i=1}^n$, on top of our parameters $m$ and $c$. We dealt with the variables by placing a probability distribution over them. This gives rise to the likelihood, and for the case of Gaussian distributed variables it gives rise to the sum of squares error. It was Gauss who first made this connection in his volume \"Theoria Motus Corporum Coelestium\" (written in Latin)" ] }, { "cell_type": "code", "collapsed": false, "input": [ "import pods\n", "pods.notebook.display_google_book(id='ORUOAAAAQAAJ', page='213')" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The relevant section roughly translates as\n", "\n", "... It is clear, that for the product $\Omega = h^\mu \pi^{-\frac{1}{2}\mu} e^{-hh(vv + v^\prime v^\prime + v^{\prime\prime} v^{\prime\prime} + \dots)}$ to be maximised the sum $vv + v^\prime v^\prime + v^{\prime\prime} v^{\prime\prime} + \text{etc}.$ ought to be minimized. *Therefore, the most probable values of the unknown quantities $p, q, r, s \text{etc}.$, should be that in which the sum of the squares of the differences between the functions $V, V^\prime, V^{\prime\prime} \text{etc}$, and the observed values is minimized*, since all observations are presumed to be of the same degree of precision.\n", "\n", "It's on the strength of this paragraph that the density is known as the Gaussian, despite the fact that four pages later Gauss credits the necessary integral for the density to Laplace, and it was also Laplace who did a lot of the original work on dealing with these errors through probability. [Stephen Stigler's book on the measurement of uncertainty before 1900](http://www.hup.harvard.edu/catalog.php?isbn=9780674403413) has a nice chapter on this." ] }, { "cell_type": "code", "collapsed": false, "input": [ "pods.notebook.display_google_book(id='ORUOAAAAQAAJ', page='217')" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "where the crediting to Laplace is about halfway through the last paragraph. This book was published in 1809, four years after [Legendre presented least squares](./week3.ipynb) in an appendix to one of his chapters on the orbit of comets. Gauss goes on to make a claim for priority on the method on page 221 (towards the end of the first paragraph ...)." ] }, { "cell_type": "code", "collapsed": false, "input": [ "pods.notebook.display_google_book(id='ORUOAAAAQAAJ', page='221')" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## A Philosophical Dispute: Probabilistic Treatment of Parameters?\n", "\n", "The follow-up question is whether we can do the same thing with the parameters. If we have two parameters and only one observation, can we place a probability distribution over the parameters, as we did with the slack variables?
The answer is yes, and from a philosophical perspective placing a probability distribution over the *parameters* is known as the *Bayesian* approach. This is because Thomas Bayes, in a [1763 essay](http://en.wikipedia.org/wiki/An_Essay_towards_solving_a_Problem_in_the_Doctrine_of_Chances) published by the Royal Society, introduced the [Bernoulli distribution](http://en.wikipedia.org/wiki/Bernoulli_distribution) with a probabilistic interpretation for the *parameters*. Later statisticians such as [Ronald Fisher](http://en.wikipedia.org/wiki/Ronald_Fisher) objected to the use of probability distributions for *parameters*, and so in an effort to discredit the approach they referred to it as Bayesian. However, the earliest practitioners of modelling, such as Laplace, applied the approach as the most natural thing to do for dealing with unknowns (whether they were parameters or variables). Unfortunately, this dispute led to a split in the modelling community that still has echoes today. It is known as the Bayesian vs Frequentist controversy. From my own perspective, I think that it is a false dichotomy, and that the two approaches are actually complementary. My own research focus is on *modelling*, and in that context the use of probability is vital. For frequentist statisticians, such as Fisher, the emphasis was on the value of the evidence in the data for a particular hypothesis. This is known as hypothesis testing. The two approaches can be unified because one of the most important approaches to hypothesis testing is to [compute the ratio of the likelihoods](http://en.wikipedia.org/wiki/Likelihood-ratio_test), and the result of applying a probability distribution to the parameters is merely to arrive at a different form of the likelihood." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Bayesian Approach\n", "\n", "The aim of this notebook is to study Bayesian approaches to regression. In the Bayesian approach we define a *prior* density over our parameters, $m$ and $c$, or more generally $\mathbf{w}$. This prior distribution gives us a range of expected values for our parameters *before* we have seen the data. The objective in Bayesian inference is then to compute the *posterior* density, which describes how that density changes once the data have been observed. In standard probability notation we write the prior distribution as,\n", "$$p(\mathbf{w}),$$\n", "so it is the *marginal* distribution for the parameters, i.e. the distribution we have for the parameters without any knowledge about the data. The posterior distribution is written as,\n", "$$p(\mathbf{w}|\mathbf{y}, \mathbf{X}).$$\n", "So the posterior distribution is the *conditional* distribution for the parameters given the data (which in this case consists of pairs of observations: response variables (or targets), $y_i$, and covariates (or inputs), $\mathbf{x}_i$). Here we are allowing the inputs to be multivariate. \n", "\n", "The posterior is recovered from the prior using *Bayes' rule*, which is simply a rewriting of the product rule. We can recover Bayes' rule as follows. The product rule of probability tells us that the joint distribution is given as the product of the conditional and the marginal.
Dropping the inputs from our conditioning for the moment we have,\n", "$$p(\mathbf{w}, \mathbf{y})=p(\mathbf{y}|\mathbf{w})p(\mathbf{w}),$$\n", "where we see we have related the joint density to the prior density and the *likelihood* from our previous investigation of regression,\n", "$$p(\mathbf{y}|\mathbf{w}) = \prod_{i=1}^n\mathcal{N}(y_i | \mathbf{w}^\top \mathbf{x}_i, \sigma^2)$$\n", "which arises from the assumption that our observation is given by\n", "$$y_i = \mathbf{w}^\top \mathbf{x}_i + \epsilon_i.$$\n", "In other words this is the Gaussian likelihood we have been fitting by minimizing the sum of squares. Have a look at [the session on multivariate regression](./week3.ipynb) as a reminder.\n", "\n", "We've introduced the likelihood, but we don't yet have a relationship with the posterior. However, the product rule can also be written in the following way,\n", "$$p(\mathbf{w}, \mathbf{y}) = p(\mathbf{w}|\mathbf{y})p(\mathbf{y}),$$\n", "where here we have simply used the opposite conditioning. We've already introduced the *posterior* density above. This is the density that represents our belief about the parameters *after* observing the data. This is combined with the *marginal likelihood*, sometimes also known as the evidence. It is the marginal likelihood because it is the original likelihood of the data with the parameters marginalised, $p(\mathbf{y})$. Here it's conditioned on nothing, but in practice you should always remember that everything here is conditioned on things like model choice: which set of basis functions. Because it's a regression problem, it's also conditioned on the inputs. Using the equality between the two different forms of the joint density we recover\n", "$$p(\mathbf{w}|\mathbf{y}) = \frac{p(\mathbf{y}|\mathbf{w})p(\mathbf{w})}{p(\mathbf{y})}$$\n", "where we divided both sides by $p(\mathbf{y})$ to recover this result. Let's re-introduce the conditioning on the input locations (or covariates), $\mathbf{X}$, to write the full form of Bayes' rule for the regression problem. \n", "$$p(\mathbf{w}|\mathbf{y}, \mathbf{X}) = \frac{p(\mathbf{y}|\mathbf{w}, \mathbf{X})p(\mathbf{w})}{p(\mathbf{y}|\mathbf{X})}$$\n", "where the posterior density for the parameters given the data is $p(\mathbf{w}|\mathbf{y}, \mathbf{X})$, the marginal likelihood is $p(\mathbf{y}|\mathbf{X})$, the prior density is $p(\mathbf{w})$ and our original regression likelihood is given by $p(\mathbf{y}|\mathbf{w}, \mathbf{X})$. It turns out that, to compute the posterior, the only things we need to define are the prior and the likelihood. The other term on the right hand side can be computed by *the sum rule*. It is one of the key equations of Bayesian inference: the expectation of the likelihood under the prior. This process is known as marginalisation,\n", "$$\n", "p(\mathbf{y}|\mathbf{X}) = \int p(\mathbf{y}|\mathbf{w},\mathbf{X})p(\mathbf{w}) \text{d}\mathbf{w}\n", "$$\n", "I like the term marginalisation, and the description of the probability as the *marginal likelihood*, because (for me) it somewhat has the implication that the variable name has been removed, and (perhaps) written in the margin. Marginalisation of a variable goes from a likelihood where the variable is in place, to a new likelihood where all possible values of that variable (under the prior) have been considered and weighted in the integral. \n", "\n", "This implies that all we need for specifying our model is to define the likelihood and the prior.
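\n", "\n", "As a small numerical illustration of the marginalisation integral above, the sketch below uses a toy one-parameter model, $y = w x + \epsilon$, with a single made-up observation (all of the numbers are illustrative only), and checks a grid approximation of $p(y)$ against the closed form Gaussian marginal.\n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "import numpy as np\n", "# toy one-parameter model y = w*x + epsilon (all numbers made up for illustration)\n", "alpha_toy = 1. # prior variance for w\n", "sigma2_toy = 0.1 # noise variance\n", "x_toy = 2. # a single made-up input\n", "y_toy = 1.5 # a single made-up observation\n", "w_grid = np.linspace(-5, 5, 2001) # grid over the parameter\n", "dw = w_grid[1] - w_grid[0]\n", "prior = np.exp(-0.5*w_grid**2/alpha_toy)/np.sqrt(2*np.pi*alpha_toy)\n", "likelihood = np.exp(-0.5*(y_toy - w_grid*x_toy)**2/sigma2_toy)/np.sqrt(2*np.pi*sigma2_toy)\n", "# marginal likelihood by numerically integrating likelihood*prior over w (the sum rule)\n", "marginal = (likelihood*prior).sum()*dw\n", "# in this Gaussian case the integral also has a closed form: y ~ N(0, alpha*x^2 + sigma2)\n", "closed_form = np.exp(-0.5*y_toy**2/(alpha_toy*x_toy**2 + sigma2_toy))/np.sqrt(2*np.pi*(alpha_toy*x_toy**2 + sigma2_toy))\n", "print 'numerical marginal likelihood  ', marginal\n", "print 'closed form marginal likelihood', closed_form" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "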
We already have our likelihood from our earlier discussion, so our focus now turns to the prior density.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The Bayesian Controversy: Philosophical Underpinnings\n", "\n", "A segment from the lecture in 2012 on philosophical underpinnings." ] }, { "cell_type": "code", "collapsed": false, "input": [ "from datetime import timedelta\n", "from IPython.display import YouTubeVideo\n", "start=int(timedelta(hours=0, minutes=20, seconds=15).total_seconds())\n", "YouTubeVideo('AvlnFnvFw_0',start=start)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Prior Density\n", "\n", "Let's assume that the prior density is given by a zero mean Gaussian, which is independent across each of the parameters, \n", "$$\mappingVector \sim \gaussianSamp{\zerosVector}{\alpha \eye}$$\n", "In other words, we are assuming, for the prior, that each element of the parameter vector, $\mappingScalar_i$, is drawn from a Gaussian density as follows,\n", "$$\mappingScalar_i \sim \gaussianSamp{0}{\alpha}$$\n", "\n", "Let's start by assigning the parameter of the prior distribution, its variance $\alpha$." ] }, { "cell_type": "code", "collapsed": false, "input": [ "# set prior variance on w\n", "alpha = 4.\n", "# set the order of the polynomial basis set\n", "order = 5\n", "# set the noise variance\n", "sigma2 = 0.01" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Generating from the Model\n", "\n", "A very important aspect of probabilistic modelling is to *sample* from your model to see what type of assumptions you are making about your data. In this case that involves a two stage process:\n", "\n", "1. Sample a candidate parameter vector from the prior.\n", "2. Place the candidate parameter vector in the likelihood and sample functions conditioned on that candidate vector.\n", "3. Repeat to try and characterise the type of functions you are generating.\n", "\n", "Given a prior variance (as defined above) we can now sample from the prior distribution and combine with a basis set to see what assumptions we are making about the functions *a priori* (i.e. before we've seen the data). \n", "\n", "Firstly we compute the basis function matrix. We will do it both for our training data, and for a range of prediction locations (`x_pred`). " ] }, { "cell_type": "code", "collapsed": false, "input": [ "import numpy as np\n", "data = pods.datasets.olympic_marathon_men()\n", "x = data['X']\n", "y = data['Y']\n", "num_data = x.shape[0]\n", "num_pred_data = 100 # how many points to use for plotting predictions\n", "x_pred = np.linspace(1890, 2016, num_pred_data)[:, None] # input locations for predictions" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's build the basis matrices. We define the polynomial basis as follows." ] }, { "cell_type": "code", "collapsed": false, "input": [ "def polynomial(x, degree, loc, scale):\n", "    degrees = np.arange(degree+1)\n", "    return ((x-loc)/scale)**degrees" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "code", "collapsed": false, "input": [ "loc = 1950.\n", "scale = 1.\n", "degree = 5
\n", "Phi_pred = polynomial(x_pred, degree=degree, loc=loc, scale=scale)\n", "Phi = polynomial(x, degree=degree, loc=loc, scale=scale)\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Sampling from the Prior\n", "\n", "Now we will sample from the prior to produce a vector $\\mappingVector$ and use it to plot a function which is representative of our belief *before* we fit the data. To do this we are going to use the properties of the Gaussian density and a sample from a *standard normal* using the function `np.random.normal`.\n", "\n", "### Scaling Gaussian-distributed Variables\n", "\n", "First, let's consider the case where we have one data point and one feature in our basis set. In otherwords $\\mappingFunctionVector$ would be a scalar, $\\mappingVector$ would be a scalar and $\\basisMatrix$ would be a scalar. In this case we have \n", "\n", "$$\\mappingFunctionScalar = \\basisScalar \\mappingScalar$$\n", "\n", "If $\\mappingScalar$ is drawn from a normal density, \n", "\n", "$$\\mappingScalar \\sim \\gaussianSamp{\\meanScalar_\\mappingScalar}{c_\\mappingScalar}$$\n", "\n", "and $\\basisScalar$ is a scalar value which we are given, then properties of the Gaussian density tell us that \n", "\n", "$$\\basisScalar \\mappingScalar \\sim \\gaussianSamp{\\basisScalar\\meanScalar_\\mappingScalar}{\\basisScalar^2c_\\mappingScalar}$$\n", "\n", "Let's test this out numerically. First we will draw 200 samples from a standard normal," ] }, { "cell_type": "code", "collapsed": false, "input": [ "w_vec = np.random.normal(size=200)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can compute the mean of these samples and their variance" ] }, { "cell_type": "code", "collapsed": false, "input": [ "print 'w sample mean is ', w_vec.mean()\n", "print 'w sample variance is ', w_vec.var()" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These are close to zero (the mean) and one (the variance) as you'd expect. Now compute the mean and variance of the scaled version," ] }, { "cell_type": "code", "collapsed": false, "input": [ "phi = 7\n", "f_vec = phi*w_vec\n", "print 'True mean should be phi*0 = 0.'\n", "print 'True variance should be phi*phi*1 = ', phi*phi\n", "print 'f sample mean is ', f_vec.mean()\n", "print 'f sample variance is ', f_vec.var()" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you increase the number of samples then you will see that the sample mean and the sample variance begin to converge towards the true mean and the true variance. Obviously adding an offset to a sample from `np.random.normal` will change the mean. So if you want to sample from a Gaussian with mean `mu` and standard deviation `sigma` one way of doing it is to sample from the standard normal and scale and shift the result, so to sample a set of $\\mappingScalar$ from a Gaussian with mean $\\meanScalar$ and variance $\\alpha$,\n", "\n", "$$w \\sim \\gaussianSamp{\\meanScalar}{\\alpha}$$\n", "\n", "We can simply scale and offset samples from the *standard normal*." 
] }, { "cell_type": "code", "collapsed": false, "input": [ "mu = 4 # mean of the distribution\n", "alpha = 2 # variance of the distribution\n", "w_vec = np.random.normal(size=200)*np.sqrt(alpha) + mu\n", "print 'w sample mean is ', w_vec.mean()\n", "print 'w sample variance is ', w_vec.var()" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here the `np.sqrt` is necesssary because we need to multiply by the standard deviation and we specified the variance as `alpha`. So scaling and offsetting a Gaussian distributed variable keeps the variable Gaussian, but it effects the mean and variance of the resulting variable. \n", "\n", "To get an idea of the overal shape of the resulting distribution, let's do the same thing with a histogram of the results. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "# First the standard normal\n", "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "z_vec = np.random.normal(size=1000) # by convention, in statistics, z is often used to denote samples from the standard normal\n", "w_vec = z_vec*np.sqrt(alpha) + mu\n", "# plot normalized histogram of w, and then normalized histogram of z on top\n", "plt.hist(w_vec, bins=30, normed=True)\n", "plt.hist(z_vec, bins=30, normed=True)\n", "plt.legend(('$w$', '$z$'))" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now re-run this histogram with 100,000 samples and check that the both histograms look qualitatively Gaussian.\n", "\n", "## Sampling from the Prior\n", "\n", "Let's use this way of constructing samples from a Gaussian to check what functions look like *a priori*. The process will be as follows. First, we sample a random vector $K$ dimensional from `np.random.normal`. Then we scale it by $\\sqrt{\\alpha}$ to obtain a prior sample of $\\mappingVector$. " ] }, { "cell_type": "code", "collapsed": false, "input": [ "K = degree + 1\n", "z_vec = np.random.normal(size=K)\n", "w_sample = z_vec*np.sqrt(alpha)\n", "print w_sample" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can combine our sample from the prior with the basis functions to create a function," ] }, { "cell_type": "code", "collapsed": false, "input": [ "f_sample = np.dot(Phi_pred,w_sample)\n", "plt.plot(x_pred.flatten(), f_sample.flatten(), 'r-')" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This shows the recurring problem with the polynomial basis. Our prior allows relatively large coefficients for the basis associated with high polynomial degrees. Because we are operating with input values of around 2000, this leads to output functions of very high values. The fix we have used for this before is to rescale our data before we apply the polynomial basis to it. Above, we set the scale of the basis to 1. Here let's set it to 100 and try again." 
] }, { "cell_type": "code", "collapsed": false, "input": [ "scale = 100.\n", "Phi_pred = polynomial(x_pred, degree=degree, loc=loc, scale=scale)\n", "Phi = polynomial(x, degree=degree, loc=loc, scale=scale)\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we need to recompute the basis functions from above, " ] }, { "cell_type": "code", "collapsed": false, "input": [ "f_sample = np.dot(Phi_pred,w_sample)\n", "plt.plot(x_pred.flatten(), f_sample.flatten(), 'r-')" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's loop through some samples and plot various functions as samples from this system," ] }, { "cell_type": "code", "collapsed": false, "input": [ "num_samples = 10\n", "K = degree+1\n", "for i in xrange(num_samples):\n", " z_vec = np.random.normal(size=K)\n", " w_sample = z_vec*np.sqrt(alpha)\n", " f_sample = np.dot(Phi_pred,w_sample)\n", " plt.plot(x_pred.flatten(), f_sample.flatten())\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The predictions for the mean output can now be computed. We want the expected value of the predictions under the posterior distribution. In matrix form, the predictions can be computed as\n", "\n", "$$\\mathbf{f} = \\basisMatrix \\mappingVector.$$\n", "\n", "This involves a matrix multiplication between a fixed matrix $\\basisMatrix$ and a vector that is drawn from a distribution $\\mappingVector$. Because $\\mappingVector$ is drawn from a distribution, this imples that $\\mappingFunctionVector$ should also be drawn from a distribution. There are two distributions we are interested in though. We have just been sampling from the *prior* distribution to see what sort of functions we get *before* looking at the data. In Bayesian inference, we need to computer the *posterior* distribution and sample from that density.\n", "\n", "## Computing the Posterior\n", "\n", "We will now attampt to compute the *posterior distribution*. In the lecture we went through the maths that allows us to compute the posterior distribution for $\\mappingVector$. This distribution is also Gaussian,\n", "\n", "$$p(\\mappingVector | \\dataVector, \\inputVector, \\dataStd^2) = \\mathcal{N}\\left(\\mappingVector|\\meanVector_\\mappingScalar, \\covarianceMatrix_\\mappingScalar\\right)$$\n", "\n", "with covariance, $\\covarianceMatrix_\\mappingScalar$, given by\n", "\n", "$$\\covarianceMatrix_\\mappingScalar = \\left(\\dataStd^{-2}\\basisMatrix^\\top \\basisMatrix + \\alpha^{-1} \\eye\\right)^{-1}$$ \n", "\n", "whilst the mean is given by\n", "\n", "$$\\meanVector_\\mappingScalar = \\covarianceMatrix_\\mappingScalar \\dataStd^{-2}\\basisMatrix^\\top \\dataVector$$\n", "\n", "Let's compute the posterior covariance and mean, then we'll sample from these densities to have a look at the posterior belief about $\\mappingVector$ once the data has been accounted for. 
Remember, the process of Bayesian inference involves combining the prior, $p(\mappingVector)$, with the likelihood, $p(\dataVector|\inputVector, \mappingVector)$, to form the posterior, $p(\mappingVector | \dataVector, \inputVector)$, through Bayes' rule,\n", "\n", "$$p(\mappingVector|\dataVector, \inputVector) = \frac{p(\dataVector|\inputVector, \mappingVector)p(\mappingVector)}{p(\dataVector)}$$\n", "\n", "We've looked at the samples for our function $\mappingFunctionVector = \basisMatrix\mappingVector$, which forms the mean of the Gaussian likelihood, under the prior distribution. I.e. we've sampled from $p(\mappingVector)$ and multiplied the result by the basis matrix. Now we will sample from the posterior density, $p(\mappingVector|\dataVector, \inputVector)$, and check that the new samples do correspond to the data, i.e. we want to check that the updated distribution includes information from the data set. First we need to compute the posterior mean and *covariance*." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Bayesian Inference in the Univariate Case\n", "\n", "This video talks about Bayesian inference across the single parameter, the offset $c$, illustrating how the prior and the likelihood combine in one dimension to form a posterior." ] }, { "cell_type": "code", "collapsed": false, "input": [ "from datetime import timedelta\n", "start=int(timedelta(hours=0, minutes=0, seconds=15).total_seconds())\n", "YouTubeVideo('AvlnFnvFw_0',start=start)\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Multivariate Bayesian Inference\n", "\n", "This section of the lecture talks about how we extend the idea of Bayesian inference to the multivariate case. It goes through the multivariate Gaussian and how to complete the square in the linear algebra, giving the result summarised below." ] }, { "cell_type": "code", "collapsed": false, "input": [ "start=int(timedelta(hours=0, minutes=22, seconds=42).total_seconds())\n", "YouTubeVideo('Os1iqgpelPw', start=start)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The lecture informs us that the posterior density for $\mathbf{w}$ is given by a Gaussian density with covariance\n", "$$\n", "\mathbf{C}_w = \left(\sigma^{-2}\boldsymbol{\Phi}^\top \boldsymbol{\Phi} + \alpha^{-1} \mathbf{I}\right)^{-1}\n", "$$\n", "and mean \n", "$$\n", "\boldsymbol{\mu}_w = \mathbf{C}_w\sigma^{-2}\boldsymbol{\Phi}^\top \mathbf{y}.\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Assignment Question 1\n", "\n", "Compute the covariance for $\mathbf{w}$ given the training data and call the resulting variable `w_cov`. Compute the mean for $\mathbf{w}$ given the training data. Call the resulting variable `w_mean`.
Assume that $\\sigma^2 = 0.01$\n", "\n", "*10 marks*" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Question 1 Answer Code\n", "# Write code for you answer to this question in this box\n", "# Do not delete these comments, otherwise you will get zero for this answer.\n", "# Make sure your code has run and the answer is correct *before* submitting your notebook for marking.\n", "sigma2 = \n", "w_cov = \n", "w_mean = " ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Sampling from the Posterior\n", "\n", "Before we were able to sample the prior values for the mean *independently* from a Gaussian using `np.random.normal` and scaling the result. However, observing the data *correlates* the parameters. Recall this from the first lab where we had a correlation between the offset, $c$ and the slope $m$ which caused such problems with the coordinate ascent algorithm. We need to sample from a *correlated* Gaussian. For this we can use `np.random.multivariate_normal`." ] }, { "cell_type": "code", "collapsed": false, "input": [ "w_sample = np.random.multivariate_normal(w_mean.flatten(), w_cov)\n", "f_sample = np.dot(Phi_pred,w_sample)\n", "plt.plot(x_pred.flatten(), f_sample.flatten(), 'r-')\n", "plt.plot(x, y, 'rx') # plot data to show fit." ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's sample several functions and plot them all to see how the predictions fluctuate." ] }, { "cell_type": "code", "collapsed": false, "input": [ "for i in xrange(num_samples):\n", " w_sample = np.random.multivariate_normal(w_mean.flatten(), w_cov)\n", " f_sample = np.dot(Phi_pred,w_sample)\n", " plt.plot(x_pred.flatten(), f_sample.flatten())\n", "plt.plot(x, y, 'rx') # plot data to show fit." ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This gives us an idea of what our predictions are. These are the predictions that are consistent with data and our prior. Try plotting different numbers of predictions. You can also try plotting beyond the range of where the data is and see what the functions do there. \n", "\n", "Rather than sampling from the posterior each time to compute our predictions, it might be better if we just summarised the predictions by the expected value of the output funciton, $f(x)$, for any particular input. If we can get formulae for this we don't need to sample the values of $f(x)$ we might be able to compute the distribution directly. Fortunately, in the Gaussian case, we can use properties of multivariate Gaussians to compute both the mean and the variance of these samples.\n", "\n", "## Properties of Gaussian Variables\n", "\n", "Gaussian variables have very particular properties, that many other densities don't exhibit. Perhaps foremost amoungst them is that the sum of any Gaussian distributed set of random variables also turns out to be Gaussian distributed. 
This property is much rarer than you might expect.\n", "\n", "### Sum of Gaussian-distributed Variables\n", "\n", "The sum of Gaussian random variables is also Gaussian. So, if we have a random variable $y_i$ drawn from a Gaussian density with mean $\meanScalar_i$ and variance $\dataStd^2_i$, \n", "\n", "$$y_i \sim \gaussianSamp{\meanScalar_i}{\dataStd^2_i}$$\n", "\n", "then the sum of $K$ independently sampled values of $y_i$ will be drawn from a Gaussian with mean $\sum_{i=1}^K \mu_i$ and variance $\sum_{i=1}^K \dataStd_i^2$,\n", "\n", "\n", "$$\sum_{i=1}^K y_i \sim \gaussianSamp{\sum_{i=1}^K \meanScalar_i}{\sum_{i=1}^K \dataStd_i^2}.$$\n", "\n", "Let's try that experimentally. First let's generate a vector of samples from a standard normal distribution, $z \sim \gaussianSamp{0}{1}$, then we will scale and offset them, and keep adding them into a vector `y_vec`.\n", "\n", "#### Sampling from Gaussians and Summing Up" ] }, { "cell_type": "code", "collapsed": false, "input": [ "K = 10 # how many Gaussians to add.\n", "num_samples = 1000 # how many samples to have in y_vec\n", "mus = np.linspace(0, 5, K) # mean values generated linearly spaced between 0 and 5\n", "sigmas = np.linspace(0.5, 2, K) # sigmas generated linearly spaced between 0.5 and 2\n", "y_vec = np.zeros(num_samples)\n", "for mu, sigma in zip(mus, sigmas):\n", "    z_vec = np.random.normal(size=num_samples) # z is from standard normal\n", "    y_vec += z_vec*sigma + mu # add z*sigma + mu to y\n", "\n", "# now y_vec is the sum of each scaled and offset z.\n", "print 'Sample mean is ', y_vec.mean(), ' and sample variance is ', y_vec.var()\n", "print 'True mean should be ', mus.sum()\n", "print 'True variance should be ', (sigmas**2).sum(), ' standard deviation ', np.sqrt((sigmas**2).sum()) " ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Of course, we can histogram `y_vec` as well." ] }, { "cell_type": "code", "collapsed": false, "input": [ "plt.hist(y_vec, bins=30, normed=True)\n", "plt.legend(['$y$'])" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Matrix Multiplication of Gaussian Variables\n", "\n", "We are interested in what our model is saying about the sort of functions we are observing. The fact that summing Gaussian variables leads to new Gaussian variables, and that scaling Gaussian variables *also* leads to Gaussian variables, means that matrix multiplication (which is just a series of sums and scales) also leads to Gaussian densities. Matrix multiplication is just adding and scaling together. In the formula $\mappingFunctionVector = \basisMatrix \mappingVector$ we can extract the $i$th element of $\mappingFunctionVector$ as\n", "\n", "$$\mappingFunctionScalar_i = \basisVector_i^\top \mappingVector$$\n", "\n", "where $\basisVector_i$ is a column vector formed from the $i$th row of $\basisMatrix$ and $\mappingFunctionScalar_i$ is the $i$th element of $\mappingFunctionVector$. This vector inner product itself merely implies that \n", "\n", "$$\mappingFunctionScalar_i = \sum_{j=1}^K \mappingScalar_j \basisScalar_{i, j}$$\n", "\n", "and if we now say that each $\mappingScalar_j$ is Gaussian distributed, then because a scaled Gaussian is also Gaussian, and because a sum of Gaussians is also Gaussian, we know that $\mappingFunctionScalar_i$ is also Gaussian distributed. It merely remains to work out its mean and covariance.
We can do this by looking at expectations under the Gaussian distribution for $\mappingVector$. The mean vector of $\mappingFunctionVector$ is given by\n", "\n", "$$\expDist{\mappingFunctionVector}{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}} = \int \mappingFunctionVector \gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix} \text{d}\mappingVector = \int \basisMatrix\mappingVector \gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix} \text{d}\mappingVector = \basisMatrix \int \mappingVector \gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix} \text{d}\mappingVector = \basisMatrix \meanVector$$\n", "\n", "This is straightforward. The expectation of $\mappingFunctionVector=\basisMatrix\mappingVector$ under the Gaussian distribution for $\mappingVector$ is simply $\basisMatrix\meanVector$, where $\meanVector$ is the *mean* of the Gaussian density for $\mappingVector$. Because our prior distribution was Gaussian with zero mean, the expectation under the prior is given by\n", "\n", "$$\expDist{\mappingFunctionVector}{\gaussianDist{\mappingVector}{\zerosVector}{\alpha\eye}} = \zerosVector$$\n", "\n", "The covariance is a little more complicated. A covariance matrix is defined as\n", "\n", "$$\text{cov}\left(\mappingFunctionVector\right)_{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}} = \expDist{\mappingFunctionVector\mappingFunctionVector^\top}{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}} - \expDist{\mappingFunctionVector}{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}}\expDist{\mappingFunctionVector}{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}}^\top$$\n", "\n", "We've already computed $\expDist{\mappingFunctionVector}{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}}=\basisMatrix \meanVector$ so we can substitute that in to recover\n", "\n", "$$\text{cov}\left(\mappingFunctionVector\right)_{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}} = \expDist{\mappingFunctionVector\mappingFunctionVector^\top}{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}} - \basisMatrix \meanVector \meanVector^\top \basisMatrix^\top$$\n", "\n", "So we need the expectation of $\mappingFunctionVector\mappingFunctionVector^\top$.
Substituting in $\\mappingFunctionVector = \\basisMatrix \\mappingVector$ we have\n", "\n", "$$\\text{cov}\\left(\\mappingFunctionVector\\right)_{\\gaussianDist{\\mappingVector}{\\meanVector}{\\covarianceMatrix}} = \\expDist{\\basisMatrix\\mappingVector\\mappingVector^\\top \\basisMatrix^\\top}{\\gaussianDist{\\mappingVector}{\\meanVector}{\\covarianceMatrix}} - \\basisMatrix \\meanVector \\meanVector^\\top \\basisMatrix^\\top$$\n", "\n", "$$\\text{cov}\\left(\\mappingFunctionVector\\right)_{\\gaussianDist{\\mappingVector}{\\meanVector}{\\covarianceMatrix}} = \\basisMatrix\\expDist{\\mappingVector\\mappingVector^\\top}{\\gaussianDist{\\mappingVector}{\\meanVector}{\\covarianceMatrix}} \\basisMatrix^\\top - \\basisMatrix \\meanVector \\meanVector^\\top \\basisMatrix^\\top$$\n", "\n", "Which is dependent on the second moment of the Gaussian,\n", "\n", "$$\\expDist{\\mappingVector\\mappingVector^\\top}{\\gaussianDist{\\mappingVector}{\\meanVector}{\\covarianceMatrix}} = \\covarianceMatrix + \\meanVector\\meanVector^\\top$$\n", "\n", "that can be substituted in to recover,\n", "\n", "$$\\text{cov}\\left(\\mappingFunctionVector\\right)_{\\gaussianDist{\\mappingVector}{\\meanVector}{\\covarianceMatrix}} = \\basisMatrix\\covarianceMatrix \\basisMatrix^\\top$$\n", "\n", "so in the case of the prior distribution, where we have $\\covarianceMatrix = \\alpha \\eye$ we can write\n", "\n", "$$\\text{cov}\\left(\\mappingFunctionVector\\right)_{\\gaussianDist{\\mappingVector}{\\zerosVector}{\\alpha \\eye}} = \\alpha \\basisMatrix \\basisMatrix^\\top$$\n", "\n", "This implies that the prior we have suggested for $\\mappingVector$, which is Gaussian with a mean of zero and covariance of $\\alpha \\eye$ suggests that the distribution for $\\mappingVector$ is also Gaussian with a mean of zero and covariance of $\\alpha \\basisMatrix\\basisMatrix^\\top$. Since our observed output, $\\dataVector$, is given by a noise corrupted variation of $\\mappingFunctionVector$, the final distribution for $\\dataVector$ is given as \n", "\n", "$$\\dataVector = \\mappingFunctionVector + \\noiseVector$$\n", "\n", "where the noise, $\\noiseVector$, is sampled from a Gaussian density: $\\noiseVector \\sim \\gaussianSamp{\\zerosVector}{\\dataStd^2\\eye}$. So, in other words, we are taking a Gaussian distributed random value $\\mappingFunctionVector$,\n", "\n", "$$\\mappingFunctionVector \\sim \\gaussianSamp{\\zerosVector}{\\alpha\\basisMatrix\\basisMatrix^\\top}$$\n", "\n", "and adding to it another Gaussian distributed value, $\\noiseVector \\sim \\gaussianSamp{\\zerosVector}{\\dataStd^2\\eye}$, to form our data observations, $\\dataVector$. Once again the sum of two (multivariate) Gaussian distributed variables is also Gaussian, with a mean given by the sum of the means (both zero in this case) and the covariance given by the sum of the covariances. So we now have that the marginal likelihood for the data, $p(\\dataVector)$ is given by\n", "\n", "$$p(\\dataVector) = \\gaussianDist{\\dataVector}{\\zerosVector}{\\alpha \\basisMatrix \\basisMatrix^\\top + \\dataStd^2\\eye}$$\n", "\n", "This is our *implicit* assumption for $\\dataVector$ given our prior assumption for $\\mappingVector$.\n", "\n", "### Computing the Mean and Error Bars of the Functions\n", "\n", "These ideas together, now allow us to compute the mean and error bars of the predictions. The mean prediction, before corrupting by noise is given by,\n", "$$\n", "\\mathbf{f} = \\boldsymbol{\\Phi}\\mathbf{w}\n", "$$\n", "in matrix form. 
The formula $\mathbf{f} = \boldsymbol{\Phi}\mathbf{w}$, combined with the posterior density for $\mathbf{w}$ computed above, gives you enough information to compute the predictive mean. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Assignment Question 2\n", "\n", "Compute the predictive mean for the function at all the basis function values given by `Phi_pred`. Call the vector of predictions `f_pred_mean`. Plot the predictions alongside the data. We can also compute what the training error was. Use the output from your model to compute the predictive mean, and then compute the sum of squares error of that predictive mean.\n", "$$\n", "E = \sum_{i=1}^n (y_i - \langle f_i\rangle)^2\n", "$$\n", "where $\langle f_i\rangle$ is the expected output of the model at point $x_i$.\n", "\n", "*15 marks*" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Question 2 Answer Code\n", "# Write code for your answer to this question in this box\n", "# Do not delete these comments, otherwise you will get zero for this answer.\n", "# Make sure your code has run and the answer is correct *before* submitting your notebook for marking.\n", "\n", "# compute mean under posterior density\n", "f_pred_mean = \n", "\n", "# plot the predictions\n", "\n", "# compute mean at the training data and sum of squares error\n", "f_mean = \n", "sum_squares = \n", "print 'The error is: ', sum_squares" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Computing Error Bars\n", "\n", "Finally, we can compute error bars for the predictions. The error bars are the standard deviations of the predictions for $\mappingFunctionVector=\basisMatrix\mappingVector$ under the posterior density for $\mappingVector$. The standard deviations of these predictions can be found from the variance of the prediction at each point. Those variances are the diagonal entries of the covariance matrix. We've already computed the form of the covariance under Gaussian expectations, \n", "\n", "$$\text{cov}\left(\mappingFunctionVector\right)_{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}} = \basisMatrix\covarianceMatrix \basisMatrix^\top$$\n", "\n", "which under the posterior density is given by\n", "\n", "$$\text{cov}\left(\mappingFunctionVector\right)_{\gaussianDist{\mappingVector}{\meanVector_w}{\covarianceMatrix_w}} = \basisMatrix\covarianceMatrix_w \basisMatrix^\top$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Assignment Question 3\n", "\n", "The error bars are given by computing the standard deviation of the predictions, $f$. For a given prediction $f_i$ the variance is $\text{var}(f_i) = \langle f_i^2\rangle - \langle f_i \rangle^2$.
This is given by the $i$th diagonal element of the covariance of $\mathbf{f}$,\n", "$$\n", "\text{var}(f_i) = \boldsymbol{\phi}_{i, :}^\top \mathbf{C}_w \boldsymbol{\phi}_{i, :}\n", "$$\n", "where $\boldsymbol{\phi}_{i, :}$ is the basis vector associated with the input location, $\mathbf{x}_i$.\n", "\n", "Plot the mean function and the error bars for your basis.\n", "\n", "*20 marks*\n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Question 3 Answer Code\n", "# Write code for your answer to this question in this box\n", "# Do not delete these comments, otherwise you will get zero for this answer.\n", "# Make sure your code has run and the answer is correct *before* submitting your notebook for marking.\n", "\n", "# Compute variance at function values\n", "f_pred_var = \n", "f_pred_std = \n", "\n", "# plot the mean and error bars at 2 standard deviations above and below the mean\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Validation\n", "\n", "Now we will test the generalisation ability of these models. Firstly we are going to use hold out validation to attempt to see which model is best for extrapolating." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Assignment Question 4\n", "\n", "Now split the data into training and *hold out* validation sets. Hold out the data for years after 1980. Compute the predictions for different model orders between 0 and 8. Find the model order which fits best according to *hold out* validation. Is it the same as the maximum likelihood result from last week?\n", "\n", "*25 marks*" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Question 4 Answer Code\n", "# Write code for your answer to this question in this box\n", "# Do not delete these comments, otherwise you will get zero for this answer.\n", "# Make sure your code has run and the answer is correct *before* submitting your notebook for marking.\n", "\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 61 }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Assignment Question 5\n", "\n", "Now we will use leave one out cross validation to attempt to see which model is best at interpolating. Do you get the same result as for hold out validation? Compare plots of the hold out validation error for different degrees and the cross validation error for different degrees. Why are they so different? Select a suitable polynomial for characterising the differences in the predictions. Plot the mean function and the error bars for the full data set (to represent the leave one out solution) and the training data from the hold out experiment. Discuss your answer. \n", "\n", "*30 marks*" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Question 5 Answer Code\n", "# Write code for your answer to this question in this box\n", "# Do not delete these comments, otherwise you will get zero for this answer.\n", "# Make sure your code has run and the answer is correct *before* submitting your notebook for marking.\n", "\n", "\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 62 }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Question 5 Answer\n", "\n" ] }, { "cell_type": "code", "collapsed": false, "input": [], "language": "python", "metadata": {}, "outputs": [] } ], "metadata": {} } ] }