{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Bayesian Machine Learning"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Preliminaries\n",
"\n",
"- Goals\n",
" - Introduction to Bayesian (i.e., probabilistic) modeling\n",
"- Materials\n",
" - Mandatory\n",
" - These lecture notes\n",
" - Optional\n",
" - Bishop pp. 68-74 (on the coin toss example)\n",
" - [Ariel Caticha - 2012 - Entropic Inference and the Foundations of Physics](https://github.com/bertdv/BMLIP/blob/master/lessons/notebooks/files/Caticha-2012-Entropic-Inference-and-the-Foundations-of-Physics.pdf), pp.35-44 (section 2.9, on deriving Bayes rule for updating probabilities)\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Challenge: Predicting a Coin Toss\n",
"\n",
"- **Problem**: We observe a the following sequence of heads (outcome $=1$) and tails (outcome $=0$) when tossing the same coin repeatedly $$D=\\{1011001\\}\\,.$$\n",
"\n",
"- What is the probability that heads comes up next?"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- **Solution**: later in this lecture. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### The Bayesian Machine Learning Framework\n",
"\n",
"- Suppose that your application is to predict a future observation $x$, based on $N$ past observations $D=\\{x_1,\\dotsc,x_N\\}$."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- The Bayesian design approach to solving this task involves four stages: \n",
"\n",
"
\n",
"REPEAT\n",
" 1- Model specification\n",
" 2- Parameter estimation\n",
" 3- Model evaluation\n",
"UNTIL model performance is satisfactory\n",
" 4- Apply model\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- In principle, based on the model evaluation results, you may want to re-specify your model and _repeat_ the design process (a few times), until model performance is acceptable. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
" \n",
"- Next, we discuss these four stages in a bit more detail."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### (1) Model specification\n",
"\n",
"- Your first task is to propose a probabilistic model ($m$) for generating the observations $x$."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- A probabilistic model $m$ consists of a joint distribution $p(x,\\theta|m)$ that relates observations $x$ to model parameters $\\theta$. Usually, the model is proposed in the form of a data generating distribution $p(x|\\theta,m)$ and a prior $p(\\theta|m)$. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- _You_ are responsible to choose the data generating distribution $p(x|\\theta)$ based on your physical understanding of the data generating process. (For brevity, if we are working on one given model $m$ with no alternative models, we usually drop the given dependency on $m$ from the notation)."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- _You_ must also choose the prior $p(\\theta)$ to reflect what you know about the parameter values before you see the data $D$.\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### (2) Parameter estimation\n",
"\n",
"- Note that, for a given data set $D=\\{x_1,x_2,\\dots,x_N\\}$ with _independent_ observations $x_n$, the likelihood factorizes as \n",
"$$ p(D|\\theta) = \\prod_{n=1}^N p(x_n|\\theta)\\,,$$\n",
"so usually you select a model for generating one observation $x_n$ and then use (in-)dependence assumptions to combine these models into a likelihood function for the model parameters."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- The likelihood and prior both contain information about the model parameters. Next, you use Bayes rule to fuse these two information sources into a posterior distribution for the parameters:\n",
"$$\n",
"\\underbrace{p(\\theta|D) }_{\\text{posterior}} = \\frac{p(D|\\theta) p(\\theta)}{p(D)} \\propto \\underbrace{p(D|\\theta)}_{\\text{likelihood}} \\cdot \\underbrace{p(\\theta)}_{\\text{prior}}\n",
"$$ "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- Note that there's **no need for you to design some clever parameter estimation algorithm**. Bayes rule _is_ the parameter estimation algorithm. The only complexity lies in the computational issues! "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- This \"recipe\" works only if the right-hand side (RHS) factors can be evaluated; the computational details can be quite challenging and this is what machine learning is about. \n",
" \n",
"- $\\Rightarrow$ **Machine learning is EASY, apart from computational details :)**\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### (3) Model Evaluation \n",
"\n",
"- In the framework above, parameter estimation was executed by \"perfect\" Bayesian reasoning. So is everything settled now? "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- No, there appears to be one remaining problem: how good really were our model assumptions $p(x|\\theta)$ and $p(\\theta)$? We want to \"score\" the model performance."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- Note that this question is only interesting in practice if we have alternative models to choose from. After all, if you don't have an alternative model, any value for the model evidence would still not lead you to switch to another model. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"- Let's assume that we have more candidate models, say $\\mathcal{M} = \\{m_1,\\ldots,m_K\\}$ where each model relates to specific prior $p(\\theta|m_k)$ and likelihood $p(D|\\theta,m_k)$? Can we evaluate the relative performance of a model against another model from the set?"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- Start again with **model specification**. _You_ must now specify a prior $p(m_k)$ (next to the likelihood $p(D|\\theta,m_k)$ and prior $p(\\theta|m_k)$) for each of the models and then solve the desired inference problem: \n",
"$$\\begin{align*} \n",
"\\underbrace{p(m_k|D)}_{\\substack{\\text{model}\\\\\\text{posterior}}} &= \\frac{p(D|m_k) p(m_k)}{p(D)} \\\\\n",
" &\\propto p(m_k) \\cdot p(D|m_k) \\\\\n",
" &= p(m_k)\\cdot \\int_\\theta p(D,\\theta|m_k) \\,\\mathrm{d}\\theta\\\\\n",
" &= \\underbrace{p(m_k)}_{\\substack{\\text{model}\\\\\\text{prior}}}\\cdot \\underbrace{\\int_\\theta \\underbrace{p(D|\\theta,m_k)}_{\\text{likelihood}} \\,\\underbrace{p(\\theta|m_k)}_{\\text{prior}}\\, \\mathrm{d}\\theta }_{\\substack{\\text{evidence }p(D|m_k)\\\\\\text{= model likelihood}}}\\\\\n",
"\\end{align*}$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- You _can_ evaluate the RHS of this equation since you selected the model priors $p(m_k)$, the parameter priors $p(\\theta|m_k)$ and the likelihoods $p(D|\\theta,m_k)$."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- You can now compare posterior distributions $p(m_k|D)$ for a set of models $\\{m_k\\}$ and decide on the merits of each model relative to alternative models. This procedure is called **Bayesian model comparison**."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"- Note that, to evaluate the model posterior, you must calculate the \"model evidence\" $p(D|M_k)$, which can be interpreted as a likelihood function for the model. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- $\\Rightarrow$ In a Bayesian framework, **model estimation** follows the same recipe as parameter estimation; it just works at one higher hierarchical level. Compare the required calulations:\n",
"\n",
"$$\\begin{align*}\n",
"p(\\theta|D) &\\propto p(D|\\theta) p(\\theta) \\; &&\\text{(parameter estimation)} \\\\\n",
"p(m_k|D) &\\propto p(D|m_k) p(m_k) \\; &&\\text{(model comparison)}\n",
"\\end{align*}$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- Again, **no need to invent a special algorithm for estimating the performance of your model**. Straightforward application of probability theory takes care of all that. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- In principle, you could proceed with asking how good your choice for the candidate model set $\\mathcal{M}$ was. You would have to provide a set of alternative model sets $\\{\\mathcal{M}_1,\\mathcal{M}_2,\\ldots,\\mathcal{M}_M\\}$ with priors $p(\\mathcal{M}_m)$ for each set and compute posteriors $p(\\mathcal{M}_m|D)$. And so forth ... "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- With the (relative) performance evaluation scores of your model in hand, you could now re-specify your model (hopefully an improved model) and _repeat_ the design process until the model performance score is acceptable. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Bayes Factors"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"- As an aside, in the (statistics and machine learning) literature, performance comparison between two models is often reported by the [Bayes Factor](https://en.wikipedia.org/wiki/Bayes_factor), which is defined as the ratio of model evidences: \n",
"$$\\begin{align*}\n",
"\\underbrace{\\frac{p(D|m_1)}{p(D|m_2)}}_{\\text{Bayes Factor}} &= \\frac{\\frac{p(D,m_1)}{p(m_1)}}{\\frac{p(D,m_2)}{p(m_2)}} \\\\\n",
"&= \\frac{p(D,m_1)}{p(m_1)} \\cdot \\frac{p(m_2)}{p(D,m_2)} \\\\\n",
"&= \\frac{p(m_1|D) p(D)}{p(m_1)} \\cdot \\frac{p(m_2)}{p(m_2|D) p(D)} \\\\\n",
"&= \\underbrace{\\frac{p(m_1|D)}{p(m_2|D)}}_{\\substack{\\text{posterior} \\\\ \\text{ratio}}} \\cdot \\underbrace{\\frac{p(m_2)}{p(m_1)}}_{\\substack{\\text{prior} \\\\ \\text{ratio}}}\n",
"\\end{align*}$$\n",
" - Hence, for equal model priors ($p(m_1)=p(m_2)=0.5$), the Bayes Factor reports the posterior probability ratio for the two models. \n",
"\n",
"- In principle, any hard decision on which is the better model has to accept some _ad hoc_ arguments, but [Jeffreys (1961)](https://www.amazon.com/Theory-Probability-Classic-Physical-Sciences/dp/0198503687/ref=sr_1_1?qid=1663516628&refinements=p_27%3Athe+late+Harold+Jeffreys&s=books&sr=1-1&text=the+late+Harold+Jeffreys) advises the following interpretation of the log-Bayes factor \n",
"\n",
"$$\\log_{10} B_{12} =\\log_{10}\\frac{p(D|m_1)}{p(D|m_2)}$$\n",
"\n",
"
\n",
"
$\\log_{10} B_{12}$
Evidence for $m_1$
\n",
"
0 to 0.5
not worth mentioning
\n",
"
0.5 to 1
substantial
\n",
"
1 to 2
strong
\n",
"
>2
decisive
\n",
"
"
]
},
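{
"cell_type": "markdown",
"metadata": {},
"source": [
"- As a small illustration (a sketch, not part of the original example set), we can compute a Bayes factor in Julia for two hypothetical beta priors on the coin data $D=\\{1011001\\}$ ($N=7$, $n=4$), using the beta-binomial evidence formula that is derived later in this lecture:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"using SpecialFunctions # for gamma()\n",
"\n",
"# evidence p(D|m) of a beta-binomial model with Beta(α,β) prior, n heads out of N tosses\n",
"evidence(α, β, N, n) = gamma(α+β)/(gamma(α)*gamma(β)) * gamma(n+α)*gamma(N-n+β)/gamma(N+α+β)\n",
"\n",
"N, n = 7, 4 # D = {1011001}\n",
"B₁₂ = evidence(1., 1., N, n) / evidence(8., 13., N, n) # m₁: uniform prior, m₂: Beta(8,13) prior\n",
"println(\"log10 Bayes factor: \", log10(B₁₂))"
]
},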
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### (4) Prediction\n",
"\n",
"- Once we are satisfied with the evidence for a (trained) model, we can apply the model to our prediction/classification/etc task."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- Given the data $D$, our knowledge about the yet unobserved datum $x$ is captured by (everything is conditioned on the selected model)\n",
"$$\\begin{align*}\n",
"p(x|D) &\\stackrel{s}{=} \\int p(x,\\theta|D) \\,\\mathrm{d}\\theta\\\\\n",
" &\\stackrel{p}{=} \\int p(x|\\theta,D) p(\\theta|D) \\,\\mathrm{d}\\theta\\\\\n",
" &\\stackrel{m}{=} \\int \\underbrace{p(x|\\theta)}_{\\text{data generation dist.}} \\cdot \\underbrace{p(\\theta|D)}_{\\text{posterior}} \\,\\mathrm{d}\\theta\\\\\n",
"\\end{align*}$$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- In the last equation, the simplification $p(x|\\theta,D) = p(x|\\theta)$ follows from our model specification. We assumed a _parametric_ data generating distribution $p(x|\\theta)$ with no explicit dependency on the data set $D$. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- Again, **no need to invent a special prediction algorithm**. Probability theory takes care of all that. The complexity of prediction is just computational: how to carry out the marginalization over $\\theta$."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- Note that the application of the learned posterior $p(\\theta|D)$ not necessarily has to be a prediction task. We use it here as an example, but other applications (e.g., classification, regression etc.) are of course also possible. \n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"##### Prediction with multiple models\n",
"\n",
"- When you have a posterior $p(m_k|D)$ for the models, you don't *need* to choose one model for the prediction task. You can do prediction by **Bayesian model averaging**, which combines the predictive power from all models:\n",
"$$\\begin{align*}\n",
"p(x|D) &= \\sum_k \\int p(x,\\theta,m_k|D)\\,\\mathrm{d}\\theta \\\\\n",
" &= \\sum_k \\int p(x|\\theta,m_k) \\,p(\\theta|m_k,D)\\, p(m_k|D) \\,\\mathrm{d}\\theta \\\\\n",
" &= \\sum_k \\underbrace{p(m_k|D)}_{\\substack{\\text{model}\\\\\\text{posterior}}} \\cdot \\int \\underbrace{p(\\theta|m_k,D)}_{\\substack{\\text{parameter}\\\\\\text{posterior}}} \\, \\underbrace{p(x|\\theta,m_k)}_{\\substack{\\text{data generating}\\\\\\text{distribution}}} \\,\\mathrm{d}\\theta\n",
"\\end{align*}$$ "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- Alternatively, if you do need to work with one model (e.g. due to computational resource constraints), you can for instance select the model with largest posterior $p(m_k|D)$ and use that model for prediction. This is called **Bayesian model selection**."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"- Bayesian model averaging is the principal way to apply PT to machine learning. You don't throw away information by discarding lesser performant models, but rather use PT (marginalization of models) to compute $$p(\\text{what-I-am-interested-in} \\,|\\, \\text{all available information})$$ exactly. "
]
},
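{
"cell_type": "markdown",
"metadata": {},
"source": [
"- To make this concrete, below is a sketch of Bayesian model averaging for the coin-toss predictive probability, with hypothetical beta priors and equal model priors; it relies on the beta-binomial evidence and predictive formulas derived later in this lecture:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"using SpecialFunctions # provides loggamma()\n",
"\n",
"priors = [(100., 500.), (5., 1.), (8., 13.)] # hypothetical (α,β) choices for three models\n",
"N, n = 7, 4 # data counts for D = {1011001}\n",
"\n",
"# log-evidence log p(D|m_k) of a beta-binomial model (log-domain avoids gamma() overflow)\n",
"logevidence(α, β) = loggamma(α+β) - loggamma(α) - loggamma(β) + loggamma(n+α) + loggamma(N-n+β) - loggamma(N+α+β)\n",
"\n",
"lev = [logevidence(α, β) for (α, β) in priors]\n",
"post_m = exp.(lev .- maximum(lev)); post_m ./= sum(post_m) # model posteriors p(m_k|D), equal model priors\n",
"pred = [(n+α)/(N+α+β) for (α, β) in priors] # per-model predictives p(x=1|D,m_k)\n",
"println(\"p(x=1|D) by model averaging: \", sum(post_m .* pred))"
]
},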
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### We're Done!\n",
"\n",
"- In principle, you now have the recipe in your hands now to solve all your prediction/classification/regression etc problems by the same method:\n",
"\n",
" 1. specify a model\n",
" 2. train the model (by PT)\n",
" 3. evaluate the model (by PT); if not satisfied, goto 1\n",
" 4. apply the model (by PT)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- Crucially, there is no need to invent clever machine learning algorithms, and there is no need to invent a clever prediction algorithm nor a need to invent a model performance criterion. Instead, you propose a model and, from there on, you let PT reason about everything that you care about. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- Your problems are only of computational nature. Perhaps the integral to compute the evidence may not be analytically tractable, etc."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Bayesian Evidence as a Model Performance Criterion\n",
"\n",
"- I'd like to convince you that Bayesian model evidence is an excellent criterion for assessing your model's performance. To do so, let us consider a decomposition that relates model evidence to other highly-valued criteria such as **accuracy** and **model complexity**."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- Consider a model $p(x,\\theta|m)$ and a data set $D = \\{x_1,x_2, \\ldots,x_N\\}$."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- Given the data set $D$, the log-evidence for model $m$ decomposes as follows (please check the derivation):\n",
"\n",
"$$\\begin{align*}\n",
"\\underbrace{\\log p(D|m)}_{\\text{log-evidence}} &= \\log p(D|m) \\cdot \\underbrace{\\int p(\\theta|D,m)\\mathrm{d}\\theta}_{\\text{evaluates to }1} \\\\\n",
" &= \\int p(\\theta|D,m) \\log p(D|m) \\mathrm{d}\\theta \\qquad \\text{(move $\\log p(D|m)$ into the integral)} \\\\\n",
" &= \\int p(\\theta|D,m) \\log \\underbrace{\\frac{p(D|\\theta,m) p(\\theta|m)}{p(\\theta|D,m)}}_{\\text{by Bayes rule}} \\mathrm{d}\\theta \\\\\n",
" &= \\underbrace{\\int p(\\theta|D,m) \\log p(D|\\theta,m) \\mathrm{d}\\theta}_{\\text{accuracy (a.k.a. data fit)}} - \\underbrace{\\int p(\\theta|D,m) \\log \\frac{p(\\theta|D,m)}{p(\\theta|m)} \\mathrm{d}\\theta}_{\\text{complexity}}\n",
"\\end{align*}$$"
]
},
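{
"cell_type": "markdown",
"metadata": {},
"source": [
"- This decomposition can be checked numerically. The sketch below (for a hypothetical conjugate beta-binomial model, which will reappear in the coin-toss example later in this lecture) approximates the accuracy and complexity integrals on a grid and compares their difference to the analytical log-evidence:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"using Distributions, SpecialFunctions\n",
"\n",
"α, β, N, n = 2.0, 2.0, 7, 4 # hypothetical prior hyperparameters and data counts\n",
"prior = Beta(α, β); post = Beta(n+α, N-n+β)\n",
"loglik(μ) = n*log(μ) + (N-n)*log(1-μ) # log p(D|μ)\n",
"\n",
"# Riemann-sum approximations of the accuracy and complexity integrals\n",
"Δ = 1e-4; μs = Δ/2:Δ:1-Δ/2\n",
"accuracy = sum(pdf(post, μ) * loglik(μ) * Δ for μ in μs)\n",
"complexity = sum(pdf(post, μ) * log(pdf(post, μ) / pdf(prior, μ)) * Δ for μ in μs)\n",
"\n",
"log_evidence = log(gamma(α+β)*gamma(n+α)*gamma(N-n+β) / (gamma(α)*gamma(β)*gamma(N+α+β)))\n",
"println((accuracy - complexity, log_evidence)) # the two values should (approximately) agree"
]
},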
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- The first term (data fit, also known as accuracy) reflects that by exectuting Bayes rule, the posterior $p(\\theta|D)$ (which is proportional to prior times likelihood) moves toward the likelihood function $p(D|\\theta)$. This term measures how well the model predicts the data set $D$. We want this term to be large. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- The second term (complexity) reflects that executing Bayes rule will also try to keep the posterior $p(\\theta|D)$ close to the prior $p(\\theta)$. After all, Bayes rule just fuses two sources of information about $\\theta$, namely the likelihood function and prior. Technically, the complexity term is a [Kullback-Leibler divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) between posterior and prior. The KL divergence is always $\\geq 0$ and only equals $0$ if the posterior is equal to the prior, see [OPTIONAL SLIDE](#KLD). We want this term to be small. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- The complexity term regularizes the Bayesian learning process automatically. If you prefer models with high Bayesian evidence, then you prefer models that get a good data fit without need to learn much from the data set. These types of models are said to *generalize* well, since they can be applied to different data sets without specific adaptations for each data set. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- Focussing only on accuracy maximization could lead to *overfitting*. Focussing only on complexity minimization could lead to *underfitting* of the model. Bayesian ML attends to both terms and avoids both underfitting and overfitting. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- $\\Rightarrow$ Bayesian learning automatically leads to models that generalize well. **There is no need for early stopping or validation data sets**. Just learn on the full data set and all behaves well. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- This latter point accentuates that the common practice in machine learning to divide a data set into a training, test and validation set is just an ad hoc mechanism that compensates for failing to frame the learning task as a Bayesian inference task. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Bayesian Machine Learning and the Scientific Method Revisited\n",
"\n",
"- The Bayesian design process provides a unified framework for the Scientific Inquiry method. We can now add equations to the design loop. (Trial design to be discussed in [Intelligent Agent lesson](https://nbviewer.jupyter.org/github/bertdv/BMLIP/blob/master/lessons/notebooks/Intelligent-Agents-and-Active-Inference.ipynb).) \n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Now Solve the Example Problem: Predicting a Coin Toss\n",
"\n",
"- We observe a the following sequence of heads (outcome = $1$) and tails (outcome = $0$) when tossing the same coin repeatedly $$D=\\{1011001\\}\\,.$$\n",
"\n",
"- What is the probability that heads comes up next? We solve this in the next slides ..."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Coin toss example (1): Model Specification\n",
"\n",
"- We observe a sequence of $N$ coin tosses $D=\\{x_1,\\ldots,x_N\\}$ with $n$ heads. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- Let us denote outcomes by \n",
"$$x_k = \\begin{cases} 1 & \\text{if heads comes up} \\\\\n",
" 0 & \\text{otherwise (tails)} \\end{cases}\n",
" $$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"##### Likelihood\n",
"\n",
"- Assume a [**Bernoulli** distributed](https://en.wikipedia.org/wiki/Bernoulli_distribution) variable $p(x_k=1|\\mu)=\\mu$ for a single coin toss, leading to \n",
"$$p(x_k|\\mu)=\\mu^{x_k} (1-\\mu)^{1-x_k} \\,.$$\n",
"\n",
"- Assume $n$ times heads were thrown out of a total of $N$ throws. The likelihood function then follows a [**binomial** distribution](https://en.wikipedia.org/wiki/Binomial_distribution) :\n",
"$$ \n",
"p(D|\\mu) = \\prod_{k=1}^N p(x_k|\\mu) = \\mu^n (1-\\mu)^{N-n}\n",
"$$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"##### Prior\n",
"\n",
"- Assume the prior beliefs for $\\mu$ are governed by a [**beta distribution**](https://en.wikipedia.org/wiki/Beta_distribution)\n",
"\n",
"$$\n",
"p(\\mu) = \\mathrm{Beta}(\\mu|\\alpha,\\beta) = \\frac{\\Gamma(\\alpha+\\beta)}{\\Gamma(\\alpha)\\Gamma(\\beta)} \\mu^{\\alpha-1}(1-\\mu)^{\\beta-1}\n",
"$$\n",
"where the Gamma function is sort-of a generalized factorial function. In particular, if $\\alpha,\\beta$ are integers, then $$\\frac{\\Gamma(\\alpha+\\beta)}{\\Gamma(\\alpha)\\Gamma(\\beta)} = \\frac{(\\alpha+\\beta-1)!}{(\\alpha-1)!\\,(\\beta-1)!}$$\n"
]
},
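{
"cell_type": "markdown",
"metadata": {},
"source": [
"- For hypothetical integer values of $\\alpha$ and $\\beta$, this factorial identity is easy to check in Julia:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"using SpecialFunctions # for gamma()\n",
"\n",
"α, β = 3, 5 # hypothetical integer hyperparameters\n",
"lhs = gamma(α + β) / (gamma(α) * gamma(β))\n",
"rhs = factorial(α + β - 1) / (factorial(α - 1) * factorial(β - 1))\n",
"println((lhs, rhs)) # both equal 105.0"
]
},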
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- A _what_ distribution? Yes, the **beta distribution** is a **conjugate prior** for the binomial distribution, which means that \n",
"$$\n",
"\\underbrace{\\text{beta}}_{\\text{posterior}} \\propto \\underbrace{\\text{binomial}}_{\\text{likelihood}} \\times \\underbrace{\\text{beta}}_{\\text{prior}}\n",
"$$\n",
"so we get a closed-form posterior."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- $\\alpha$ and $\\beta$ are called **hyperparameters**, since they parameterize the distribution for another parameter ($\\mu$). E.g., $\\alpha=\\beta=1$ leads to a uniform prior for $\\mu$. Some other choices for $\\alpha, \\beta$ and the resulting prior for $\\mu$ are shown in the following graph:\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"\n",
"\n",
"- (Bishop Fig.2.2). Plots of the beta distribution $\\mathrm{Beta}(\\mu|a, b)$ as a function of $\\mu$ for various values of the hyperparameters $a$ and $b$."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Coin toss example (2): Parameter estimation\n",
"\n",
"- Infer posterior PDF over $\\mu$ (and evidence) through Bayes rule\n",
"\n",
"$$\\begin{align*}\n",
"p(\\mu&|D) \\cdot p(D) = p(D|\\mu)\\cdot p(\\mu) \\\\\n",
" &= \\underbrace{\\left[ \\mu^n (1-\\mu)^{N-n}\\right]}_{\\text{likelihood}} \\cdot \\underbrace{\\left[ \\frac{\\Gamma(\\alpha+\\beta)}{\\Gamma(\\alpha)\\Gamma(\\beta)} \\mu^{\\alpha-1}(1-\\mu)^{\\beta-1} \\right]}_{\\text{prior}} \\\\\n",
" &= \\frac{\\Gamma(\\alpha+\\beta)}{\\Gamma(\\alpha)\\Gamma(\\beta)} \\mu^{n+\\alpha-1} (1-\\mu)^{N-n+\\beta-1} \\\\\n",
" &= \\underbrace{\\frac{\\Gamma(\\alpha+\\beta)}{\\Gamma(\\alpha)\\Gamma(\\beta)} \\frac{\\Gamma(n+\\alpha) \\Gamma(N-n+\\beta)}{\\Gamma(N+\\alpha+\\beta)}}_{\\text{evidence }p(D)} \\cdot \\underbrace{\\left[ \\frac{\\Gamma(N+\\alpha+\\beta)}{\\Gamma(n+\\alpha)\\Gamma(N-n+\\beta)} \\mu^{n+\\alpha-1} (1-\\mu)^{N-n+\\beta-1}\\right]}_{\\text{posterior }p(\\mu|D)=\\mathrm{Beta}(\\mu|n+\\alpha, N-n+\\beta)}\n",
"\\end{align*}$$\n",
"\n",
"\n",
"hence the posterior is also beta-distributed as\n",
"\n",
"$$\n",
"p(\\mu|D) = \\mathrm{Beta}(\\mu|\\,n+\\alpha, N-n+\\beta)\n",
"$$\n",
"\n"
]
},
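{
"cell_type": "markdown",
"metadata": {},
"source": [
"- As a sanity check on this conjugacy result, the sketch below (with a hypothetical prior) verifies on a grid of $\\mu$ values that likelihood $\\times$ prior is proportional to $\\mathrm{Beta}(\\mu|\\,n+\\alpha, N-n+\\beta)$, with a constant ratio equal to the evidence $p(D)$:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"using Distributions\n",
"\n",
"α, β, N, n = 2.0, 3.0, 7, 4 # hypothetical prior hyperparameters and the data counts\n",
"prior_pdf(μ) = pdf(Beta(α, β), μ)\n",
"lik(μ) = μ^n * (1-μ)^(N-n) # likelihood of n heads out of N tosses\n",
"post_pdf(μ) = pdf(Beta(n+α, N-n+β), μ) # the claimed posterior\n",
"\n",
"# lik(μ)*prior_pdf(μ)/post_pdf(μ) should be constant in μ (it equals the evidence p(D))\n",
"println([lik(μ) * prior_pdf(μ) / post_pdf(μ) for μ in 0.1:0.2:0.9])"
]
},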
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Coin toss example (3): Model Evaluation\n",
"\n",
"- It follow from the above calculation that the evidence for model $m$ can be analytically expressed as\n",
"\n",
"$$\n",
"p(D|m) = \\frac{\\Gamma(\\alpha+\\beta)}{\\Gamma(\\alpha)\\Gamma(\\beta)} \\frac{\\Gamma(n+\\alpha) \\Gamma(N-n+\\beta)}{\\Gamma(N+\\alpha+\\beta)}\n",
"$$\n",
"\n",
"- The model evidence is a scalar. The absolute value is not important. However, you may want to compare the model evidence of this model to the evidence for another model on the same data set. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Coin Toss Example (4): Prediction\n",
"\n",
"- Once we have accepted a model, let's apply it to the application, in this case, predicting future observations. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- Marginalize over the parameter posterior to get the predictive PDF for a new coin toss $x_\\bullet$, given the data $D$,\n",
"\n",
"$$\\begin{align*}\n",
"p(x_\\bullet=1|D) &= \\int_0^1 p(x_\\bullet=1|\\mu)\\,p(\\mu|D) \\,\\mathrm{d}\\mu \\\\\n",
" &= \\int_0^1 \\mu \\times \\mathrm{Beta}(\\mu|\\,n+\\alpha, N-n+\\beta) \\,\\mathrm{d}\\mu \\\\\n",
" &= \\frac{n+\\alpha}{N+\\alpha+\\beta}\n",
"\\end{align*}$$\n",
"\n",
"- This result is known as [**Laplace's rule of succession**](https://en.wikipedia.org/wiki/Rule_of_succession)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- The above integral computes the mean of a beta distribution, which is given by $\\mathbb{E}[x] = \\frac{a}{a+b}$ for $x \\sim \\mathrm{Beta}(a,b)$, see [wikipedia](https://en.wikipedia.org/wiki/Beta_distribution)."
]
},
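{
"cell_type": "markdown",
"metadata": {},
"source": [
"- In code, this predictive probability is simply the mean of the posterior beta distribution. A minimal sketch for a uniform prior:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"using Distributions\n",
"\n",
"α, β = 1.0, 1.0 # uniform prior\n",
"N, n = 7, 4 # D = {1011001}\n",
"posterior = Beta(n+α, N-n+β)\n",
"println(mean(posterior)) # (n+α)/(N+α+β) = 5/9 ≈ 0.5556"
]
},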
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- Finally, we're ready to solve our example problem: for $D=\\{1011001\\}$ and uniform prior ($\\alpha=\\beta=1$), we get\n",
"\n",
"$$ p(x_\\bullet=1|D)=\\frac{n+1}{N+2} = \\frac{4+1}{7+2} = \\frac{5}{9}$$\n",
"\n",
"- In other words, given the model assumptions (the Bernoulli data-generating distribution and Beta prior as specified above), and the observations $D=\\{1011001\\}$, the probability for observing heads (outcome=$1$) on the next toss is $\\frac{5}{9}$.\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- Be aware that there is no such thing as an \"objective\" or \"correct\" prediction. Every prediction is conditional on the selected model and the used data set. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Coin Toss Example: What did we learn?\n",
"\n",
"- What did we learn from the data? Before seeing any data, we think that $$p(x_\\bullet=1)=\\left. p(x_\\bullet=1|D) \\right|_{n=N=0} = \\frac{\\alpha}{\\alpha + \\beta}\\,.$$ "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- Hence, $\\alpha$ and $\\beta$ are prior pseudo-counts for heads and tails respectively. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- After the $N$ coin tosses, we think that $p(x_\\bullet=1|D) = \\frac{n+\\alpha}{N+\\alpha+\\beta}$."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"- Note the following decomposition\n",
"\n",
"$$\\begin{align*}\n",
" p(x_\\bullet=1|\\,D) &= \\frac{n+\\alpha}{N+\\alpha+\\beta} = \\frac{n}{N+\\alpha+\\beta} + \\frac{\\alpha}{N+\\alpha+\\beta} \\\\\n",
" &= \\frac{N}{N+\\alpha+\\beta}\\cdot \\frac{n}{N} + \\frac{\\alpha+\\beta}{N+\\alpha+\\beta} \\cdot \\frac{\\alpha}{\\alpha+\\beta} \\\\\n",
" &= \\underbrace{\\frac{\\alpha}{\\alpha+\\beta}}_{\\substack{\\text{prior}\\\\\\text{prediction}}} + \\underbrace{\\underbrace{\\frac{N}{N+\\alpha+\\beta}}_{\\text{gain}}\\cdot \\underbrace{\\big( \\underbrace{\\frac{n}{N}}_{\\substack{\\text{data-based}\\\\\\text{prediction}}} - \\underbrace{\\frac{\\alpha}{\\alpha+\\beta}}_{\\substack{\\text{prior}\\\\\\text{prediction}}} \\big)}_{\\text{prediction error}}}_{\\text{correction}}\n",
"\\end{align*}$$"
]
},
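{
"cell_type": "markdown",
"metadata": {},
"source": [
"- This decomposition is easy to verify numerically; below is a sketch with hypothetical prior pseudo-counts:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"α, β, N, n = 8.0, 13.0, 7, 4 # hypothetical prior pseudo-counts and data counts\n",
"\n",
"direct = (n + α) / (N + α + β) # p(x=1|D) computed directly\n",
"prior_pred = α / (α + β) # prior prediction\n",
"gain = N / (N + α + β) # gain factor\n",
"corrected = prior_pred + gain * (n/N - prior_pred) # prior prediction + correction\n",
"println((direct, corrected)) # the two expressions agree"
]
},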
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"- Note that, since $0\\leq \\underbrace{\\frac{N}{N+\\alpha+\\beta}}_{\\text{gain}} \\lt 1$, the Bayesian prediction lies between (fuses) the prior and data-based predictions. The data plays the role of \"correcting\" the prior prediction."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- For large $N$, the gain goes to $1$ and $\\left. p(x_\\bullet=1|D)\\right|_{N\\rightarrow \\infty} \\rightarrow \\frac{n}{N}$ goes to the data-based prediction (the observed relative frequency)."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Code Example: Bayesian evolution for the coin toss"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"- Next, we code an example for a sequence of coin tosses, where we assume that the true coin generates data $x_n \\in \\{0,1\\}$ by a Bernoulli distribution:\n",
"$$\n",
"p(x_n|\\mu=0.4)=0.4^{x_n} \\cdot 0.6^{1-x_n}\n",
"$$\n",
"\n",
"- So, this coin is biased!\n",
"\n",
"- In order predict the outcomes of future coin tosses, we'll compare three models.\n",
"\n",
"- All models have the same data generating distribution (also Bernoulli)\n",
"$$\n",
"p(x_n|\\mu,m_k) = \\mu^{x_n} (1-\\mu)^{1-x_n} \\quad \\text{for }k=1,2,3\n",
"$$\n",
"but they have different priors:\n",
"$$\\begin{aligned}\n",
"p(\\mu|m_1) &= \\mathrm{Beta}(\\mu|\\alpha=100,\\beta=500) \\\\\n",
"p(\\mu|m_2) &= \\mathrm{Beta}(\\mu|\\alpha=5,\\beta=1) \\\\\n",
"p(\\mu|m_3) &= \\mathrm{Beta}(\\mu|\\alpha=8,\\beta=13)\n",
"\\end{aligned}$$\n",
"\n",
"- You can verify that model $m_3$ has the best prior \n",
"$$p(x_n=1) = \\frac{\\alpha}{\\alpha+\\beta} = 8/21 \\approx 0.38 \\,,\n",
"$$\n",
"( but you are not supposed to know that the real coin has probability for heads $p(x_n=1|\\mu) = 0.4$ ). \n",
"\n",
"- Let's run $200$ tosses:\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[32m\u001b[1m Activating\u001b[22m\u001b[39m project at `~/github/bertdv/BMLIP/lessons`\n"
]
}
],
"source": [
"using Pkg; Pkg.activate(\"../.\"); Pkg.instantiate();\n",
"using IJulia; try IJulia.clear_output(); catch _ end"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"\n",
"using Distributions, StatsPlots, SpecialFunctions\n",
"\n",
"# computes log10 of the Gamma function in arbitrary precision, to avoid overflow for large arguments\n",
"function log10gamma(num)\n",
" num = convert(BigInt, num)\n",
" return log10(gamma(num))\n",
"end\n",
"\n",
"\n",
"μ = 0.4; # specify model parameter\n",
"n_tosses = 200 # specify number of coin tosses\n",
"samples = rand(n_tosses) .<= μ # Flip 200 coins\n",
"\n",
"function handle_coin_toss(prior :: Beta, observation :: Bool)\n",
" posterior = Beta(prior.α + observation, prior.β + (1 - observation))\n",
" return posterior\n",
"end\n",
"\n",
"function log_evidence_prior(prior :: Beta, N :: Int64, n :: Int64)\n",
"    log_evidence = log10gamma(prior.α + prior.β) - log10gamma(prior.α) - log10gamma(prior.β) +\n",
"                   log10gamma(n + prior.α) + log10gamma((N - n) + prior.β) - log10gamma(N + prior.α + prior.β)\n",
"    return log_evidence\n",
"end\n",
"\n",
"priors = [Beta(100., 500.), Beta(5., 1.), Beta(8., 13.)] # specify the three prior distributions (you can add additional Beta distributions here); model 3 has the \"best\" prior. Can you see why?\n",
"\n",
"n_models = length(priors)\n",
"\n",
"posterior_distributions = [[d] for d in priors] #save a sequence of posterior distributions for every prior, starting with the prior itself\n",
"log_evidences = [[] for _ in priors] #maintain a vector of log evidences to plot later\n",
"\n",
"for (N, sample) in enumerate(samples) #for every sample we want to update our posterior\n",
" for (i, prior) in enumerate(priors) #at every sample we want to update all distributions\n",
" posterior = handle_coin_toss(prior, sample) #do bayesian updating\n",
" push!(posterior_distributions[i], posterior) #add posterior to vector of posterior distributions\n",
"        log_evidence = log_evidence_prior(posterior_distributions[i][1], N, sum(samples[1:N])) #compute the log-evidence of the full data under the original prior (stored at index 1)\n",
" push!(log_evidences[i], log_evidence)\n",
" priors[i] = posterior #the prior for the next sample is the posterior from the current sample\n",
" end\n",
"end\n",
"\n"
]
},
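{
"cell_type": "markdown",
"metadata": {},
"source": [
"- Note that `handle_coin_toss` above is just the Beta-Bernoulli conjugate update rule; written out for a single observation (a minimal sketch with the $m_3$ prior):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# one conjugate update: Beta(a, b) with observation x ∈ {0,1} → Beta(a + x, b + 1 - x)\n",
"a0, b0 = 8, 13    # the m_3 prior\n",
"x = 1             # observe heads\n",
"a1, b1 = a0 + x, b0 + (1 - x)    # posterior parameters: (9, 13)"
]
},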
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- For each model, as a function of the number of coin tosses, we plot the evolution of the posteriors \n",
"$$p(\\mu|D,m_\\bullet)\n",
"$$"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"┌ Info: Saved animation to /Users/bert/github/bertdv/BMLIP/lessons/notebooks/anim_bay_ct.gif\n",
"└ @ Plots /Users/bert/.julia/packages/Plots/sxUvK/src/animation.jl:156\n"
]
},
{
"data": {
"text/html": [
""
],
"text/plain": [
"Plots.AnimatedGif(\"/Users/bert/github/bertdv/BMLIP/lessons/notebooks/anim_bay_ct.gif\")"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"#animate posterior distributions over time in a gif\n",
"\n",
"anim = @animate for i in 1:n_tosses\n",
" p = plot(title=string(\"n = \", i))\n",
" for j in 1:n_models\n",
" plot!(posterior_distributions[j][i+1], xlims = (0, 1), fill=(0, .2,), label=string(\"Posterior \", j), linewidth=2, ylims=(0,28), xlabel=\"μ\")\n",
" end\n",
"end\n",
"\n",
"gif(anim, \"anim_bay_ct.gif\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- (This is an animation. If you are viewing these notes in PDF format you can see the animation at https://nbviewer.org/github/bertdv/BMLIP/blob/master/lessons/notebooks/Bayesian-Machine-Learning.ipynb .)\n",
"- Note that the posterior for $m_2$ (red) grows toward the posterior for $m_3$ as the number of tosses increases. \n",
"$\\Rightarrow$ With more data, the relevance of the prior diminishes!\n",
"- The prior for model $m_1$ is very peaked, since we assumed a lot of prior pseudo-observations. It would take many more tosses to let $m_1$ converge to the red and green posteriors. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- Let's now assess the performance of the models. Over time, we plot the relative Bayesian evidence for each model:\n",
"$$\\frac{p(D|m_i)}{\\sum_{j=1}^3 p(D|m_j)}$$"
]
},
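{
"cell_type": "markdown",
"metadata": {},
"source": [
"- The normalization itself is a one-liner; here is a minimal sketch with hypothetical log-evidence values (the actual values come from the simulation above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# relative evidences from log10-evidences (hypothetical numbers, for illustration only)\n",
"log10_ev = [-2.1, -2.4, -1.9]    # hypothetical log10 p(D|m_i) for i = 1, 2, 3\n",
"ev = exp10.(log10_ev)            # back to the linear domain\n",
"relative_ev = ev ./ sum(ev)      # normalized to sum to 1"
]
},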
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"┌ Info: Saved animation to /Users/bert/github/bertdv/BMLIP/lessons/notebooks/anim_bay_ct_log_evidence.gif\n",
"└ @ Plots /Users/bert/.julia/packages/Plots/sxUvK/src/animation.jl:156\n"
]
},
{
"data": {
"text/html": [
""
],
"text/plain": [
"Plots.AnimatedGif(\"/Users/bert/github/bertdv/BMLIP/lessons/notebooks/anim_bay_ct_log_evidence.gif\")"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"using LaTeXStrings\n",
"\n",
"evidences = map(model -> exp10.(model), log_evidences)  # the log-evidences are base-10 logarithms, so exponentiate base 10\n",
"\n",
"anim = @animate for i in 1:n_tosses\n",
" p = plot(title=string(L\"\\frac{p_i(\\mathbf{x}_{1:n})}{\\sum_i p_i(\\mathbf{x}_{1:n})}\",\" n = \", i), ylims=(0, 1))\n",
" total = sum([evidences[j][i] for j in 1:n_models])\n",
"    bar!([(evidences[j][i] / total) for j in 1:n_models], group=[\"Model $k\" for k in 1:n_models])\n",
"end\n",
"\n",
"gif(anim, \"anim_bay_ct_log_evidence.gif\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- Over time, the relative evidence of one of our models converges to 0. Can you explain this behavior?"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### From Posterior to Point-Estimate\n",
"\n",
"- In the example above, Bayesian parameter estimation and prediction were tractable in closed-form. This is often not the case. We will need to approximate some of the computations. \n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- Recall Bayesian prediction\n",
"\n",
"$$\n",
"p(x|D) = \\int p(x|\\theta)p(\\theta|D)\\,\\mathrm{d}{\\theta}\n",
"$$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- If we approximate posterior $p(\\theta|D)$ by a delta function for one 'best' value $\\hat\\theta$, then the predictive distribution collapses to\n",
"\n",
"$$\n",
"p(x|D)= \\int p(x|\\theta)\\,\\delta(\\theta-\\hat\\theta)\\,\\mathrm{d}{\\theta} = p(x|\\hat\\theta)\n",
"$$"
]
},
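{
"cell_type": "markdown",
"metadata": {},
"source": [
"- For the Beta-Bernoulli model we can compare this collapsed (plug-in) prediction with the exact predictive distribution (a sketch; the numbers assume the $\\mathrm{Beta}(5,4)$ posterior obtained from $D=\\{1011001\\}$ with a uniform prior):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# plug-in prediction vs the exact Bayesian predictive (sketch)\n",
"a, b = 5, 4                         # posterior Beta parameters for D = {1011001}, uniform prior\n",
"full_pred = a / (a + b)             # exact predictive p(x=1|D) = 5/9\n",
"plugin_map = (a - 1) / (a + b - 2)  # plug-in at the MAP estimate: 4/7, close but not identical\n",
"plugin_mean = a / (a + b)           # plug-in at the posterior mean coincides with the exact predictive here"
]
},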
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- This is just the data generating distribution $p(x|\\theta)$ evaluated at $\\theta=\\hat\\theta$, which is easy to evaluate.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- The next question is how to obtain the parameter estimate $\\hat{\\theta}$ (see the next slide)."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Some Well-known Point-Estimates\n",
"\n",
"- **Bayes estimate** (the mean of the posterior)\n",
"\n",
"$$\n",
"\\hat \\theta_{bayes} = \\int \\theta \\, p\\left( \\theta |D \\right)\n",
"\\,\\mathrm{d}{\\theta}\n",
"$$\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- **Maximum A Posteriori** (MAP) estimate \n",
"$$\n",
"\\hat \\theta_{\\text{map}}= \\arg\\max _{\\theta} p\\left( \\theta |D \\right) =\n",
"\\arg \\max_{\\theta} p\\left(D |\\theta \\right) \\, p\\left(\\theta \\right)\n",
"$$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- **Maximum Likelihood** (ML) estimate\n",
"$$\n",
"\\hat \\theta_{ml} = \\arg \\max_{\\theta} p\\left(D |\\theta\\right)\n",
"$$\n",
" - Note that Maximum Likelihood is MAP with uniform prior\n",
" - ML is the most common approximation to the full Bayesian posterior."
]
},
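{
"cell_type": "markdown",
"metadata": {},
"source": [
"- For the coin toss data $D=\\{1011001\\}$ ($n=4$ heads in $N=7$ tosses) with a uniform $\\mathrm{Beta}(\\alpha=1,\\beta=1)$ prior, all three point estimates are available in closed form (a sketch, assuming the Beta-Bernoulli model from earlier in this lecture):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# point estimates for a Beta(a, b) posterior with a = 1 + n, b = 1 + (N - n)\n",
"n, N = 4, 7\n",
"a, b = 1 + n, 1 + (N - n)\n",
"bayes_est = a / (a + b)          # posterior mean: 5/9 ≈ 0.556\n",
"map_est = (a - 1) / (a + b - 2)  # posterior mode: 4/7 ≈ 0.571\n",
"ml_est = n / N                   # relative frequency: 4/7 (equals MAP, since the prior is uniform)"
]
},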
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Bayesian vs Maximum Likelihood Learning\n",
"\n",
"Consider the task: predict a datum $x$ from an observed data set $D$.\n",
"\n",
"<table border=\"1\">\n",
"<tr>\n",
"<td></td>\n",
"<td style=\"text-align:center\"><b>Bayesian</b></td>\n",
"<td style=\"text-align:center\"><b>Maximum Likelihood</b></td>\n",
"</tr>\n",
"<tr>\n",
"<td>1. <b>Model Specification</b></td>\n",
"<td>Choose a model $m$ with data generating distribution $p(x|\\theta,m)$ and parameter prior $p(\\theta|m)$</td>\n",
"<td>Choose a model $m$ with same data generating distribution $p(x|\\theta,m)$. No need for priors.</td>\n",
"</tr>\n",
"<tr>\n",
"<td>2. <b>Learning</b></td>\n",
"<td>Use Bayes rule to find the parameter posterior,\n",
"$$\n",
"p(\\theta|D) \\propto p(D|\\theta) p(\\theta)\n",
"$$</td>\n",
"<td>By Maximum Likelihood (ML) optimization,\n",
"$$ \n",
"\\hat \\theta = \\arg \\max_{\\theta} p(D |\\theta)\n",
"$$</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Report Card on Maximum Likelihood Estimation\n",
"\n",
"- Maximum Likelihood (ML) is MAP with uniform prior. MAP is sometimes called a 'penalized' ML procedure:\n",
"\n",
"$$\n",
"\\hat \\theta_{map} = \\arg \\max _\\theta \\{ \\underbrace{\\log\n",
"p\\left( D|\\theta \\right)}_{\\text{log-likelihood}} + \\underbrace{\\log\n",
"p\\left( \\theta \\right)}_{\\text{penalty}} \\}\n",
"$$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- (good!). ML works rather well if we have a lot of data because the influence of the prior diminishes with more data."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- (good!). Computationally often do-able. A useful fact that makes the optimization easier (since $\\log$ is a monotonically increasing function):\n",
"\n",
"$$\\arg\\max_\\theta \\log p(D|\\theta) = \\arg\\max_\\theta p(D|\\theta)$$"
]
},
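{
"cell_type": "markdown",
"metadata": {},
"source": [
"- A tiny numerical check of this invariance (a sketch; the likelihood below is that of the coin data $D=\\{1011001\\}$):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# argmax is invariant under log: both searches pick the same θ on a grid\n",
"lik(θ) = θ^4 * (1 - θ)^3                   # likelihood of 4 heads, 3 tails\n",
"grid = 0.01:0.01:0.99\n",
"θ_ml = grid[argmax(lik.(grid))]            # ≈ 4/7\n",
"θ_ml_log = grid[argmax(log.(lik.(grid)))]  # same maximizer"
]
},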
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- (bad). Cannot be used for model comparison! ML implicitly assumes a uniform prior over an unbounded domain, which puts vanishing probability mass under the likelihood function, so the Bayesian model evidence always evaluates to zero. Therefore, when doing ML estimation, Bayesian model evidence cannot be used to evaluate model performance: \n",
"$$\\begin{align*}\n",
"\\underbrace{p(D|m)}_{\\substack{\\text{Bayesian}\\\\ \\text{evidence}}} &= \\int p(D|\\theta) \\cdot p(\\theta|m)\\,\\mathrm{d}\\theta \\\\\n",
" &= \\lim_{(b-a)\\rightarrow \\infty} \\int p(D|\\theta)\\cdot \\text{Uniform}(\\theta|a,b)\\,\\mathrm{d}\\theta \\\\\n",
" &= \\lim_{(b-a)\\rightarrow \\infty} \\frac{1}{b-a}\\underbrace{\\int_a^b p(D|\\theta)\\,\\mathrm{d}\\theta}_{<\\infty} \\\\\n",
" &= 0\n",
"\\end{align*}$$\n",
"  - In fact, this is a serious problem because Bayesian evidence is fundamentally the correct performance assessment criterion that follows from straightforward probability theory. In practice, when estimating parameters by maximum likelihood, we often evaluate model performance by an _ad hoc_ performance measure such as mean-squared-error on a testing data set."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"$\\Rightarrow$ **Maximum Likelihood estimation is at best an approximation to Bayesian learning**, but for good reason a very popular learning method when faced with lots of available data."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## OPTIONAL SLIDES"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### The Kullback-Leibler Divergence\n",
"\n",
"- The Kullback-Leibler Divergence (a.k.a. relative entropy) between two distributions $q$ and $p$ is defined as\n",
"$$\n",
"D_{\\text{KL}}[q,p] \\equiv \\sum_z q(z) \\log \\frac{q(z)}{p(z)}\n",
"$$\n",
"\n",
"- The following [Gibbs Inequality](https://en.wikipedia.org/wiki/Gibbs%27_inequality) holds (see [wikipedia](https://en.wikipedia.org/wiki/Gibbs%27_inequality) for proof): \n",
" $$D_{\\text{KL}}[q,p] \\geq 0 \\quad \\text{with equality only if } p=q $$\n",
"\n",
"- The KL divergence can be interpreted as a measure of the 'distance' between two probability distributions, although it is not a true metric.\n",
"\n",
"- As an aside, note that $D_{\\text{KL}}[q,p] \\neq D_{\\text{KL}}[p,q]$. Both divergences are relevant. \n"
]
},
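{
"cell_type": "markdown",
"metadata": {},
"source": [
"- These properties are easy to check numerically; a minimal sketch for two small discrete distributions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# KL divergence for discrete distributions, directly from the definition above\n",
"kl(q, p) = sum(qz * log(qz / pz) for (qz, pz) in zip(q, p) if qz > 0)\n",
"q = [0.5, 0.3, 0.2]\n",
"p = [0.4, 0.4, 0.2]\n",
"kl(q, q)              # 0: Gibbs inequality holds with equality for p = q\n",
"(kl(q, p), kl(p, q))  # both > 0, but unequal: the divergence is asymmetric"
]
},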
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- Here is an animation that shows the KL divergence between two Gaussian distributions:\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"┌ Info: Saved animation to /Users/bert/github/bertdv/BMLIP/lessons/notebooks/anim_lat_kl.gif\n",
"└ @ Plots /Users/bert/.julia/packages/Plots/sxUvK/src/animation.jl:156\n"
]
},
{
"data": {
"text/html": [
""
],
"text/plain": [
"Plots.AnimatedGif(\"/Users/bert/github/bertdv/BMLIP/lessons/notebooks/anim_lat_kl.gif\")"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"using Distributions, StatsPlots, Plots.PlotMeasures, LaTeXStrings\n",
"\n",
"function kullback_leibler(q :: Normal, p :: Normal) #Calculates the KL Divergence between two gaussians (see https://stats.stackexchange.com/questions/7440/kl-divergence-between-two-univariate-gaussians for calculations)\n",
" return log((p.σ / q.σ)) + ((q.σ)^2 + (p.μ - q.μ)^2) / (2*p.σ^2) - (1. / 2.)\n",
"end\n",
"\n",
"μ_p = 0 #statistics of distributions we'll keep constant (we'll vary the mean of q). Feel free to change these and see what happens\n",
"σ_p = 1\n",
"σ_q = 1\n",
"\n",
"p = Normal(μ_p, σ_p)\n",
"\n",
"anim = @animate for i in 1:100\n",
" μ_seq = [(j / 10.) - 5. + μ_p for j in 1:i] #compute the sequence of means tested so far (to compute sequence of KL divergences)\n",
" kl = [kullback_leibler(Normal(μ, σ_q), p) for μ in μ_seq] #compute KL divergence data\n",
" viz = plot(right_margin=8mm, title=string(L\"D_{KL}(Q || P) = \", round(100 * kl[i]) / 100.), legend=:topleft) #build plot and format title and margins\n",
" μ_q = μ_seq[i] #extract mean of current frame from mean sequence\n",
" q = Normal(μ_q, σ_q)\n",
" plot!(p, xlims = (μ_p - 8, μ_p + 8), fill=(0, .2,), label=string(\"P\"), linewidth=2, ylims=(0,0.5)) #plot p\n",
" plot!(q, fill=(0, .2,), label=string(\"Q\"), linewidth=2, ylims=(0,0.5)) #plot q\n",
" plot!(twinx(), μ_seq, kl, xticks=:none, ylims=(0, maximum(kl) + 3), linewidth = 3, #plot KL divergence data with different y-axis scale and different legend\n",
" legend=:topright,xlims = (μ_p - 8, μ_p + 8), color=\"green\", label=L\"D_{KL}(Q || P)\")\n",
"end\n",
"gif(anim, \"anim_lat_kl.gif\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- Note that this is an animation. If you are viewing these notes in PDF format you can see the animation at https://nbviewer.org/github/bertdv/BMLIP/blob/master/lessons/notebooks/Bayesian-Machine-Learning.ipynb"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"outputs": [
{
"data": {
"text/html": [
"\n"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"open(\"../../styles/aipstyle.html\") do f display(\"text/html\", read(f, String)) end"
]
}
],
"metadata": {
"@webio": {
"lastCommId": null,
"lastKernelId": null
},
"anaconda-cloud": {},
"celltoolbar": "Slideshow",
"kernelspec": {
"display_name": "Julia 1.9.3",
"language": "julia",
"name": "julia-1.9"
},
"language_info": {
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
"version": "1.9.3"
}
},
"nbformat": 4,
"nbformat_minor": 4
}