{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Probabilistic Programming 1: Introduction to Bayesian inference\n",
"\n",
"#### Goal \n",
" - Familiarize yourself with basic concepts from Bayesian inference such as prior and posterior distributions.\n",
" - Familiarize yourself with Jupyter notebooks and the basics of the Julia programming language.\n",
"\n",
"#### Materials \n",
" - Mandatory\n",
"   - This notebook\n",
"   - Lecture notes on Probability Theory\n",
"   - Lecture notes on Bayesian Machine Learning\n",
" - Optional\n",
"   - [Course installation guide](https://github.com/bertdv/BMLIP/blob/master/lessons/notebooks/files/WKouw-Mar2020-JuliaJupyterInstallGuide.pdf)\n",
"   - [Jupyter notebook tutorial](https://www.datacamp.com/community/tutorials/tutorial-jupyter-notebook)\n",
"   - [Intro to programming in Julia](https://youtu.be/8h8rQyEpiZA?t=233)\n",
"   - [Differences between Julia and Matlab / Python](https://docs.julialang.org/en/v1/manual/noteworthy-differences/index.html)\n",
"   - [Beer Tasting Experiment](https://journals.sagepub.com/doi/pdf/10.1177/1475725719848574)\n",
"   - [Savage-Dickey ratios](https://www.sciencedirect.com/science/article/pii/S0010028509000826?casa_token=AhA2bAAbOygAAAAA:3quBBzBv5PqTl0zdFo-_AKh2SmH_pH68FdXHMGGw0328wA1h0YGTdsOYkKwWBrwx84WVhselJA)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n",
"\n",
"In 1937, one of the founders of the field of statistics, [Ronald Fisher](https://en.wikipedia.org/wiki/Ronald_Fisher), published a story of how he explained _inference_ to a friend. This story, called the \"Lady Tasting Tea\", has been re-told many times in different forms. In this notebook, we will re-tell one of its modern variants and introduce you to some important concepts along the way. Note that none of the material used below is new; you have all heard this in the theory lectures. The point of the Probabilistic Programming sessions is to solve practical problems so that concepts from theory become less abstract and you develop an intuition for them." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "---\n",
"\n",
"First, let's activate a Julia workspace and import some modules." ] },
{ "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "using Pkg\n",
"Pkg.activate(\"../../../lessons/\")\n",
"Pkg.instantiate();\n",
"IJulia.clear_output();" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "---\n",
"Every once in a while, a cell with \"code notes\" is added to explain Julia-specific commands, symbols or procedures. We expect students to be proficient in at least one programming language, preferably Python, C(++) or MATLAB (most closely related to Julia), and we will not explain generic programming constructs such as control flow and data structures.\n",
"\n",
"Code notes:\n",
"- The code cell above activates a specific Julia workspace (a virtual environment) that lists all packages you will need for the Probabilistic Programming sessions. The first time you run this cell, it will download and install all packages automatically." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "---" ] },
{ "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "using Distributions\n",
"using Plots" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "---\n",
"Code notes:\n",
"- `using` is how you import libraries and modules in Julia.
Here we have imported a library of probability distributions called [Distributions.jl](https://github.com/JuliaStats/Distributions.jl) and a library of plotting utilities called [Plots.jl](https://github.com/JuliaPlots/Plots.jl)." ] },
{ "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "search: \u001b[0m\u001b[1mu\u001b[22m\u001b[0m\u001b[1ms\u001b[22m\u001b[0m\u001b[1mi\u001b[22m\u001b[0m\u001b[1mn\u001b[22m\u001b[0m\u001b[1mg\u001b[22m S\u001b[0m\u001b[1mu\u001b[22mb\u001b[0m\u001b[1mS\u001b[22mtr\u001b[0m\u001b[1mi\u001b[22m\u001b[0m\u001b[1mn\u001b[22m\u001b[0m\u001b[1mg\u001b[22m incl\u001b[0m\u001b[1mu\u001b[22mde_\u001b[0m\u001b[1ms\u001b[22mtr\u001b[0m\u001b[1mi\u001b[22m\u001b[0m\u001b[1mn\u001b[22m\u001b[0m\u001b[1mg\u001b[22m \u001b[0m\u001b[1mu\u001b[22mn\u001b[0m\u001b[1ms\u001b[22mafe_str\u001b[0m\u001b[1mi\u001b[22m\u001b[0m\u001b[1mn\u001b[22m\u001b[0m\u001b[1mg\u001b[22m \u001b[0m\u001b[1mu\u001b[22mne\u001b[0m\u001b[1ms\u001b[22mcape_str\u001b[0m\u001b[1mi\u001b[22m\u001b[0m\u001b[1mn\u001b[22m\u001b[0m\u001b[1mg\u001b[22m\n", "\n" ] },
{ "data": { "text/latex": [ "\\begin{verbatim}\n", "using\n", "\\end{verbatim}\n", "\\texttt{using Foo} will load the module or package \\texttt{Foo} and make its \\href{@ref}{\\texttt{export}}ed names available for direct use. Names can also be used via dot syntax (e.g. \\texttt{Foo.foo} to access the name \\texttt{foo}), whether they are \\texttt{export}ed or not. See the \\href{@ref modules}{manual section about modules} for details.\n", "\n" ], "text/markdown": [ "```\n", "using\n", "```\n", "\n", "`using Foo` will load the module or package `Foo` and make its [`export`](@ref)ed names available for direct use. Names can also be used via dot syntax (e.g. `Foo.foo` to access the name `foo`), whether they are `export`ed or not. See the [manual section about modules](@ref modules) for details.\n" ], "text/plain": [ "\u001b[36m using\u001b[39m\n", "\n", " \u001b[36musing Foo\u001b[39m will load the module or package \u001b[36mFoo\u001b[39m and make its \u001b[36mexport\u001b[39med names\n", " available for direct use. Names can also be used via dot syntax (e.g.\n", " \u001b[36mFoo.foo\u001b[39m to access the name \u001b[36mfoo\u001b[39m), whether they are \u001b[36mexport\u001b[39med or not. See the\n", " manual section about modules for details." ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "?using" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "---" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Beer Tasting Experiment\n",
"\n",
"In the summer of 2017, students of the University of Amsterdam participated in a \"Beer Tasting Experiment\" ([van Doorn et al., 2019](https://journals.sagepub.com/doi/pdf/10.1177/1475725719848574)). Each participant was given two cups and was told that the cups contained [Hefeweissbier](https://www.bierenco.nl/product/weihenstephaner-hefeweissbier/), one with alcohol and one without. The participants had to taste each beer and guess which of the two contained alcohol.\n",
"\n",
"We are going to do a statistical analysis of the tasting experiment. We want to know to what degree participants are able to discriminate between the alcoholic and alcohol-free beers. The Bayesian approach revolves around three core steps: (1) specifying a model, (2) absorbing the data through inference (parameter estimation), and (3) evaluating the model.
We are going to walk through these steps in detail below." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### 1. Model Specification\n",
"\n",
"Model specification consists of two parts: a likelihood function and a prior distribution.\n",
"\n",
"#### Likelihood\n",
"\n",
"A [likelihood function](https://en.wikipedia.org/wiki/Likelihood_function) expresses, as a function of the parameters, how probable the observed data is. \n",
"\n",
"Here, we have an event variable $X$ that indicates whether the choice was \"correct\", which we will assign the number $1$, or \"incorrect\", which we will assign the number $0$. We can model this choice with what's known as a [_Bernoulli_ distribution](https://en.wikipedia.org/wiki/Bernoulli_distribution). The Bernoulli distribution is a formula to compute the probability of a binary event. It has a \"rate parameter\" $\\theta$, a number between $0$ and $1$, which governs the probability of the two events. If $\\theta = 1$, then the participant will always choose the right cup (\"always\" = \"with probability $1$\") and if $\\theta = 0$, then the participant will never choose the right cup (\"never\" = \"with probability $0$\"). Choosing at random, i.e. getting as many correct choices as incorrect choices, corresponds to $\\theta = 0.5$.\n",
"\n",
"As stated above, we are using the Bernoulli distribution in our tasting experiment. As the Bernoulli distribution's rate parameter $\\theta$ increases, the event $X=1$, i.e. the participant correctly guesses the alcoholic beverage, becomes more probable. The formula for the Bernoulli distribution is:\n",
"\n",
"$$\\begin{aligned} p(X = x \\mid \\theta) =&\\ \\text{Bernoulli}(x \\mid \\theta) \\\\ =&\\ \\theta^x (1-\\theta)^{1-x} \\end{aligned}$$\n",
"\n",
"If $X=1$, then the formula simplifies to $p(X = 1 \\mid \\theta) = \\theta^1 (1-\\theta)^{1-1} = \\theta$. For $X=0$, it simplifies to $p(X = 0 \\mid \\theta) = \\theta^0 (1-\\theta)^{1-0} = 1-\\theta$. If you have multiple _independent_ observations, e.g. a data set $\\mathcal{D} = \\{X_1, X_2, X_3\\}$, you can get the probability of all observations by taking the product of individual probabilities:\n",
"\n",
"$$p(\\mathcal{D} \\mid \\theta) = \\prod_{i=1}^{N} p(X_i \\mid \\theta)$$\n",
"\n",
"As an example, suppose the first two participants have correctly guessed the beverage and a third one incorrectly guessed it. Then, the probability under $\\theta = 0.8$ is $$p(\\mathcal{D} = \\{1,1,0\\} \\mid \\theta = 0.8) = 0.8 \\cdot 0.8 \\cdot 0.2 = 0.128 \\, .$$ \n",
"\n",
"That is larger than the probability under $\\theta = 0.4$, which is \n",
"\n",
"$$p(\\mathcal{D} = \\{1,1,0\\} \\mid \\theta = 0.4) = 0.4 \\cdot 0.4 \\cdot 0.6 = 0.096 \\, .$$ \n",
"\n",
"But it is not as large as the probability under $\\theta = 0.6$, which is \n",
"\n",
"$$p(\\mathcal{D} = \\{1,1,0\\} \\mid \\theta = 0.6) = 0.6 \\cdot 0.6 \\cdot 0.4 = 0.144 \\, .$$ \n",
"\n",
"As you can see, the likelihood function tells us how well each value of the parameter fits the observed data. In short, how \"likely\" each parameter value is."
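] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check of the arithmetic above, the cell below recomputes the likelihood of $\\mathcal{D} = \\{1,1,0\\}$ for the three values of $\\theta$ from the example. This is a minimal sketch in base Julia; the helper function `bernoulli_likelihood` is defined here for illustration only and is not part of any library." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Likelihood of a data set under a Bernoulli model: the product of θ^x * (1-θ)^(1-x)\n",
"bernoulli_likelihood(D, θ) = prod(θ^x * (1 - θ)^(1 - x) for x in D)\n",
"\n",
"D_example = [1, 1, 0]\n",
"for θ_val in (0.8, 0.4, 0.6)\n",
"    println(\"p(D = {1,1,0} | θ = \", θ_val, \") = \", bernoulli_likelihood(D_example, θ_val))\n",
"end"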
] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Prior Distribution\n",
"\n",
"In Bayesian inference, it is important to think about what kind of _prior knowledge_ you have about your problem. In our tasting experiment, this corresponds to what you think the probability is that a participant will correctly choose the cup. In other words, you have some thoughts about what value $\\theta$ takes in this scenario. You might think that the participants' choices are all going to be roughly random. Or, given that you have tasted other types of alcohol-free beers before, you might think that the participants are going to choose the right cup most of the time. This intuition, this \"prior knowledge\", needs to be quantified. We do that by specifying another probability distribution for it, in this case the [_Beta_ distribution](https://en.wikipedia.org/wiki/Beta_distribution):\n",
"\n",
"$$\\begin{aligned} p(\\theta) =&\\ \\text{Beta}(\\theta \\mid \\alpha, \\beta) \\\\ =&\\ \\frac{\\Gamma(\\alpha + \\beta)}{\\Gamma(\\alpha) \\Gamma(\\beta)} \\theta^{\\alpha-1} (1-\\theta)^{\\beta-1} \\, . \\end{aligned}$$\n",
"\n",
"We use a Beta distribution to describe our state of knowledge about appropriate values for $\\theta$. \n",
"\n",
"The Beta distribution computes the probability of an outcome in the interval $[0,1]$. Like any other distribution, it has parameters: $\\alpha$ and $\\beta$. Both are \"shape parameters\", meaning the distribution has a different shape for each value of the parameters. Let's visualise this!" ] },
{ "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "2×3 Matrix{Int64}:\n", " 1 2 3\n", " 4 5 6" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x = [1 2 3;\n",
"     4 5 6]" ] },
{ "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(Figure omitted: Beta(2,2) prior density over θ)" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Define shape parameters\n",
"α = 2.0\n",
"β = 2.0\n",
"\n",
"# Define probability distribution\n",
"pθ = Beta(α, β)\n",
"\n",
"# Define range of values for θ\n",
"θ = range(0.0, step=0.01, stop=1.0)\n",
"\n",
"# Visualize probability distribution function\n",
"plot(θ, pdf.(pθ, θ), \n",
"    linewidth=3, \n",
"    color=\"red\", \n",
"    label=\"α = \"*string(α)*\", β = \"*string(β), \n",
"    xlabel=\"θ\", \n",
"    ylabel=\"p(θ)\",\n",
"    size=(800,300))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Code notes:\n",
"- You can use greek letters as variables (write them like in LaTeX, e.g. `\\alpha`, and press `tab`).\n",
"- Ranges of numbers can be written like in Matlab (e.g. `0.0:0.1:1.0`) or with the `range` function (e.g. `range(0.0, stop=100., length=100)`), which resembles Python's `range`. Note that Julia is strict about types, e.g. using integers vs floats.\n",
"- There is a `.` after the command `pdf`. This refers to [\"broadcasting\"](https://julia.guide/broadcasting): the function is applied to each element of a list or array. Here we use the `pdf` command to compute the probability for each value of $\\theta$ in the array.\n",
"- Many of the keyword arguments in the `plot` command should be familiar to you if you've worked with [Matplotlib](https://matplotlib.org/) (Python's plotting library).\n",
"- In the `label=` argument to plots, we have performed \"string concatenation\". In Julia, you write a string with double-quote characters and concatenate two strings by \"multiplying\", i.e. using `*` (see the short demo below)."
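] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "A tiny demo of the last two notes (the values are arbitrary): `*` concatenates strings, and the dot broadcasts `pdf` over the elements of an array." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# `*` concatenates strings in Julia\n",
"println(\"α = \" * string(2.0) * \", β = \" * string(2.0))\n",
"\n",
"# the dot after `pdf` applies it to each element (broadcasting)\n",
"println(pdf.(Beta(2.0, 2.0), [0.25, 0.5, 0.75]))"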
] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "---\n",
"\n",
"Note for the keen observers among you: since this is a continuous distribution, we are not actually plotting \"probability\", but rather [\"probability density\"](https://en.wikipedia.org/wiki/Probability_density_function#Link_between_discrete_and_continuous_distributions) (probability densities can be larger than $1$)." ] },
{ "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(Figure omitted: Beta densities for several (α, β) pairs)" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Define shape parameters\n",
"α = [2.0, 5.0]\n",
"β = [1.0, 2.0]\n",
"\n",
"# Define initial distribution\n",
"pθ = Beta(1.0, 1.0)\n",
"\n",
"# Start initial plot\n",
"plot(θ, pdf.(pθ, θ), linewidth=3, label=\"α = 1.0, β = 1.0\", xlabel=\"θ\", ylabel=\"p(θ)\", legend=:topleft)\n",
"\n",
"# Loop over shape parameters\n",
"for a in α\n",
"    for b in β\n",
"        plot!(θ, pdf.(Beta(a, b), θ), linewidth=3, label=\"α = \"*string(a)*\", β = \"*string(b))\n",
"    end\n",
"end\n",
"plot!(size=(800,300))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Code notes:\n",
"- Square brackets around numbers automatically create an `Array` (in Python, they create lists).\n",
"- The `:` in `:topleft` indicates a `Symbol` type. It has many uses, but here it is used synonymously with a string option (e.g. legend=\"topleft\").\n",
"- `for` loops can be written using a range, such as `for i = 1:10` (like in Matlab), or using a variable that iteratively takes each value in an array, such as `for i in [1,2,3]` (like in Python). More [here](https://www.tutorialkart.com/julia/julia-for-loop/).\n",
"- The `!` at the end of `plot!` is a naming convention signalling that the function modifies its arguments [\"in-place\"](https://docs.julialang.org/en/v1/manual/style-guide/#bang-convention-1). Here, it adds lines to the existing plot.\n",
"- The final `plot!` is there to ensure Jupyter actually displays the figure. If you end a cell on an `end` command, Jupyter will remain silent.\n",
"\n",
"---" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, the Beta distribution is quite flexible and can capture your belief about how often participants will correctly detect the alcoholic beverage. For example, the purple line indicates that you believe that it is very probable that participants will always get it right (the peak lies at $\\theta=1.0$), but you still think there is some probability that the participants will guess at random ($p(\\theta = 1/2) \\approx 0.3$). The yellow-brown line indicates you believe that it is nearly impossible that the participants will always get it right ($p(\\theta = 1) \\approx 0.0$), but you still believe that they will get it right more often than not (the peak lies around $\\theta \\approx 0.8$).\n",
"\n",
"In summary: a prior distribution $p(\\theta)$ reflects our beliefs about good values for parameter $\\theta$ _before_ data is observed." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "---\n",
"\n",
"#### Exercise\n",
"\n",
"I want you to pick values for the shape parameters $\\alpha$ and $\\beta$ that reflect how often you think the participants will get it right. You can use the snippet below to inspect a candidate prior.\n",
"\n",
"---"
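] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The helper below is a small sketch for this exercise. The shape parameters α = 3.0, β = 2.0 are placeholders, not a suggested answer; substitute your own values and check whether the summary statistics match your intuition." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Placeholder shape parameters (replace these with your own choices)\n",
"my_α, my_β = 3.0, 2.0\n",
"my_prior = Beta(my_α, my_β)\n",
"\n",
"# Summary statistics of this prior belief about θ\n",
"println(\"prior mean = \", mean(my_prior))\n",
"println(\"prior std  = \", std(my_prior))\n",
"\n",
"# Visualize the candidate prior\n",
"plot(θ, pdf.(my_prior, θ), linewidth=3, label=\"my prior\", xlabel=\"θ\", ylabel=\"p(θ)\", size=(800,300))"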
] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### 2. Parameter estimation\n",
"\n",
"Now that we have specified our generative model, it is time to estimate its unknown variables. We'll first look at the data and then turn to inference.\n",
"\n",
"#### Data\n",
"\n",
"The data from the participants in Amsterdam is available online at the [Open Science Framework](https://osf.io/428pb/?view_only=e3dc67dab9c54d23a92fb2e88465f428). We'll start by reading it in." ] },
{ "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "using DataFrames\n",
"using CSV" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Code notes:\n",
"- [CSV.jl](https://github.com/JuliaData/CSV.jl) is a library for reading in data stored in tables.\n",
"- [DataFrames.jl](https://github.com/JuliaData/DataFrames.jl) manipulates table data (like `pandas` in Python)." ] },
{ "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "D = [1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1]\n" ] } ], "source": [ "# Read data from CSV file\n",
"data = DataFrame(CSV.File(\"../datasets/TastingBeerResults.csv\"))\n",
"\n",
"# Extract variable indicating correctness of guess\n",
"D = data[!, :CorrectIdentify];\n",
"println(\"D = \", D)" ] },
{ "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(Figure omitted: histogram of incorrect (0) and correct (1) identifications)" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Number of successes and failures\n",
"S = sum(D .== 1)\n",
"F = sum(D .== 0)\n",
"\n",
"# Visualize frequencies\n",
"histogram(D, bins=[0,1,2], label=\"S = \"*string(S)*\", F = \"*string(F), xlabel=\"D\", xticks=[0,1], ylabel=\"Number\", legend=:topleft, size=(800,300))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Code notes:\n",
"- The `!` in `data[!, :CorrectIdentify]` is DataFrames syntax: it selects a column from the table without copying it.\n",
"- The `.==` compares each element of the array `D` to the value on the right-hand side and returns an array of Booleans."
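] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "An optional aside: if you cannot access the CSV file, you can generate stand-in data to keep following along. The success rate of $0.7$ below is an arbitrary assumption; this produces simulated data, not the Amsterdam results." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Hypothetical stand-in data: 57 simulated guesses with an assumed success rate of 0.7\n",
"D_sim = rand(Bernoulli(0.7), 57)\n",
"println(\"simulated: S = \", sum(D_sim .== 1), \", F = \", sum(D_sim .== 0))"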
] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Let's visualize the likelihood of these observations." ] },
{ "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(Figure omitted: likelihood p(D|θ) as a function of θ)" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Define the Bernoulli likelihood function\n",
"likelihood(θ) = prod([θ^X_i * (1-θ)^(1-X_i) for X_i in D])\n",
"\n",
"# Plot likelihood\n",
"plot(θ, likelihood.(θ), linewidth=3, color=\"black\", label=\"\", xlabel=\"θ\", ylabel=\"p(D|θ)\", size=(800,300))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The likelihood has somewhat of a bell shape, peaking just below $\\theta = 0.75$. Note that the values on the y-axis are very small. Indeed, the likelihood is not a proper probability distribution over $\\theta$, because it does not integrate to $1$." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Inference\n",
"\n",
"Using our generative model, we can now estimate the unknown parameter. Remember Bayes' rule:\n",
"\n",
"$$ p(\\theta \\mid \\mathcal{D}) = \\frac{p(\\mathcal{D} \\mid \\theta) p(\\theta)}{p(\\mathcal{D})} \\, .$$\n",
"\n",
"The posterior $p(\\theta \\mid \\mathcal{D})$ equals the likelihood $p(\\mathcal{D} \\mid \\theta)$ times the prior $p(\\theta)$ divided by the evidence $p(\\mathcal{D})$. In our tasting experiment, we have a special thing going on: [conjugacy](https://en.wikipedia.org/wiki/Conjugate_prior). The Beta distribution is \"conjugate\" to the Bernoulli likelihood, meaning that the posterior distribution is also going to be a Beta distribution. Specifically with the Beta-Bernoulli combination, it is easy to see what conjugacy actually means. Recall the formula for the Beta distribution:\n",
"\n",
"$$\\begin{aligned} p(\\theta) =&\\ \\frac{\\Gamma(\\alpha + \\beta)}{\\Gamma(\\alpha) \\Gamma(\\beta)} \\theta^{\\alpha-1} (1-\\theta)^{\\beta-1} \\, . \\end{aligned}$$\n",
"\n",
"The term $\\Gamma(\\alpha + \\beta) / \\left( \\Gamma(\\alpha) \\Gamma(\\beta) \\right)$ normalises this distribution. If you ignore that term and multiply the rest with the likelihood, you get something that simplifies beautifully:\n",
"\n",
"$$\\begin{aligned} \n",
"p(\\mathcal{D} \\mid \\theta) p(\\theta) \\ \\propto&\\ \\ \\prod_{i=1}^{N} \\big[ \\theta^{X_i} (1-\\theta)^{1-X_i} \\big] \\cdot \\theta^{\\alpha-1} (1-\\theta)^{\\beta-1} \\\\ =&\\ \\ \\theta^{\\sum_{i=1}^{N} X_i} (1-\\theta)^{\\sum_{i=1}^{N} 1-X_i} \\cdot \\theta^{\\alpha-1} (1-\\theta)^{\\beta-1} \\\\ =&\\ \\ \\theta^{S} (1-\\theta)^{F} \\cdot \\theta^{\\alpha-1} (1-\\theta)^{\\beta-1} \\\\ =&\\ \\ \\theta^{S + \\alpha-1} (1-\\theta)^{F + \\beta-1} \\, , \\end{aligned}$$\n",
"\n",
"where $S = \\sum_{i=1}^{N} X_i$ is the number of successes (correct guesses) and $F = \\sum_{i=1}^{N} 1 - X_i$ is the number of failures (incorrect guesses). \n",
"\n",
"This last line is again the formula for the Beta distribution (except for a proper normalisation) but with different parameters ($S+\\alpha$ instead of $\\alpha$ and $F+\\beta$ instead of $\\beta$). This is what we mean by conjugacy: applying Bayes' rule to a conjugate prior and likelihood pair yields a posterior distribution of the same family as the prior, which in this case is a Beta distribution. Let's first check this numerically."
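] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The sketch below normalises likelihood $\\times$ prior on a grid of $\\theta$ values and compares the result with the closed-form $\\text{Beta}(S+\\alpha, F+\\beta)$ density. The prior parameters match the ones used in the next cell (α0 = 4.0, β0 = 2.0); since the grid normalisation is a simple Riemann sum, expect a small numerical deviation." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Numerical check of conjugacy: normalise likelihood(θ) * prior(θ) on a grid\n",
"θ_grid = range(0.01, stop=0.99, length=99)\n",
"prior_check = Beta(4.0, 2.0)\n",
"\n",
"unnorm = likelihood.(θ_grid) .* pdf.(prior_check, θ_grid)\n",
"numeric_post = unnorm ./ (sum(unnorm) * step(θ_grid))\n",
"\n",
"# Closed-form posterior implied by conjugacy\n",
"closed_form = pdf.(Beta(4.0 + S, 2.0 + F), θ_grid)\n",
"println(\"max abs deviation: \", maximum(abs.(numeric_post .- closed_form)))"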
] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Let's now visualise the posterior after observing the data from Amsterdam." ] },
{ "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(Figure omitted: prior and posterior densities with posterior mean and mode marked)" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Define shape parameters of prior distribution\n",
"α0 = 4.0\n",
"β0 = 2.0\n",
"\n",
"# Define prior distribution\n",
"pθ = Beta(α0, β0)\n",
"\n",
"# Update parameters for the posterior\n",
"αN = α0 + sum(D .== 1)\n",
"βN = β0 + sum(D .== 0)\n",
"\n",
"# Define posterior distribution\n",
"pθD = Beta(αN, βN)\n",
"\n",
"# Mean and mode of posterior\n",
"mean_post = αN / (αN + βN)\n",
"mode_post = (αN - 1) / (αN + βN - 2)\n",
"\n",
"# Visualize probability distribution function\n",
"plot(θ, pdf.(pθ, θ), linewidth=3, color=\"red\", label=\"prior\", xlabel=\"θ\", ylabel=\"p(θ)\")\n",
"plot!(θ, pdf.(pθD, θ), linewidth=3, color=\"blue\", label=\"posterior\", size=(800,300))\n",
"vline!([mean_post], color=\"black\", linewidth=3, label=\"mean of posterior\", legend=:topleft)\n",
"vline!([mode_post], color=\"black\", linewidth=3, linestyle=:dash, label=\"maximum a posteriori\", legend=:topleft)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Code notes:\n",
"- `vline!` draws a vertical line in the plot at the specified point on the x-axis.\n",
"- The posterior mean and mode are computed here from the closed-form expressions for a Beta distribution. Distributions.jl also provides `mean()` and `mode()` (the [mode](https://en.wikipedia.org/wiki/Mode_(statistics)) of a distribution is the point with the largest probability density); the cell below cross-checks the two." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "That looks great! We have updated our belief from a very broad prior to a much sharper posterior. \n",
"\n",
"The posterior contains a lot of information: it tells us something about every value for $\\theta$. Sometimes, we are interested in a point estimate, i.e. a single representative value for $\\theta$ under the posterior. Two well-known point estimators are the mean of the posterior and the mode (the value of $\\theta$ with the highest posterior density). I have plotted both point estimates in the figure above. In this case, they are nearly equal."
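] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick cross-check of the code note above, `mean` and `mode` from Distributions.jl should reproduce the manually computed point estimates." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Cross-check the closed-form point estimates against Distributions.jl\n",
"println(\"mean: \", mean(pθD), \" (manual: \", mean_post, \")\")\n",
"println(\"mode: \", mode(pθD), \" (manual: \", mode_post, \")\")"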
] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "---\n",
"\n",
"#### Exercise\n",
"\n",
"Plug the shape parameters of your prior into a copy of the cell above and see how your posterior differs.\n",
"\n",
"---" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### 3. Model Evaluation\n",
"\n",
"Given our model assumptions and a posterior for $\\theta$, we can now make quantitative predictions about how well we think people can discriminate alcoholic from non-alcoholic Hefeweissbier. But suppose you meet someone else who is absolutely sure that people can't tell the difference. Can you say something about the probability of their belief, given the experiment?\n",
"\n",
"Technically, this is a question about comparing the performance of different models. Model comparison is also known in the statistical literature as \"hypothesis testing\".\n",
"\n",
"In hypothesis testing, you start with a null hypothesis $\\mathcal{H}_0$, which is a particular choice for the detection parameter $\\theta$. In the question above, the other person's belief corresponds to $\\theta = 0.5$. We then have an alternative hypothesis $\\mathcal{H}_1$, namely that this belief is wrong, i.e. $\\theta \\neq 0.5$. From a Bayesian perspective, hypothesis testing is just about comparing the posterior beliefs about these two hypotheses:\n",
"\n",
"$$\\begin{aligned} \\underbrace{\\frac{p(\\mathcal{H}_1 | \\mathcal{D})}{p(\\mathcal{H}_0 | \\mathcal{D})}}_{\\text{Posterior belief over hypotheses}} = \\underbrace{\\frac{p(\\mathcal{H}_1)}{p(\\mathcal{H}_0)}}_{\\text{Prior belief over hypotheses}} \\times \\underbrace{\\frac{p(\\mathcal{D} | \\mathcal{H}_1)}{p(\\mathcal{D} | \\mathcal{H}_0)}}_{\\text{Likelihood of hypotheses}} \\, . \\end{aligned}$$\n",
"\n",
"Note that the evidence term $p(\\mathcal{D})$ is missing, because it appears in the posterior for both hypotheses and therefore cancels out. The hypothesis likelihood ratio is also called the **Bayes factor**. Bayes factors can be hard to compute, but in some cases we can simplify them: if the null hypothesis is a specific value of interest, for instance $\\theta = 0.5$, and the alternative hypothesis is not that specific value, e.g. $\\theta \\neq 0.5$, then the factor reduces to what's known as a Savage-Dickey ratio (see Appendix A of [Wagenmakers et al., 2010](https://www.sciencedirect.com/science/article/pii/S0010028509000826?casa_token=AhA2bAAbOygAAAAA:3quBBzBv5PqTl0zdFo-_AKh2SmH_pH68FdXHMGGw0328wA1h0YGTdsOYkKwWBrwx84WVhselJA)):\n",
"\n",
"$$ \\frac{p(\\mathcal{D} | \\mathcal{H}_1)}{p(\\mathcal{D} | \\mathcal{H}_0)} = \\frac{p(\\theta = 0.5)}{p(\\theta = 0.5 \\mid \\mathcal{D})} \\, .$$\n",
"\n",
"This compares the density of $\\theta = 0.5$ under the prior with its density under the posterior. It effectively tells you how much your belief in that specific value has changed after observing the data. Let's compute the Savage-Dickey ratio for our experiment:" ] },
{ "cell_type": "code", "execution_count": 11, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The Bayes factor for H1 versus H0 = 229.23178145896927\n" ] } ], "source": [ "BF_10 = pdf(pθ, 0.5) / pdf(pθD, 0.5)\n",
"println(\"The Bayes factor for H1 versus H0 = \"*string(BF_10))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "So, in the experiment, the data are more than $200$ times more likely under the alternative hypothesis _\"students can discriminate alcoholic from non-alcoholic Hefeweissbier\"_ than under the null hypothesis _\"students cannot discriminate alcoholic from non-alcoholic Hefeweissbier\"_. If both hypotheses are equally probable a priori, the alternative is therefore also more than $200$ times more probable a posteriori (posterior odds = prior odds × Bayes factor)." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n",
"#### Exercise\n",
"\n",
"Compute the Bayes factor for your prior and posterior distribution. How many times is the alternative hypothesis more probable than the null hypothesis?\n",
"\n",
"---" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ],
"metadata": { "@webio": { "lastCommId": "e01e903ce62a43f081de0d927024457a", "lastKernelId": "d19f1b82-d2af-4ee2-8c82-2be824ae11d2" }, "kernelspec": { "display_name": "Julia 1.7.2", "language": "julia", "name": "julia-1.7" }, "language_info": { "file_extension": ".jl", "mimetype": "application/julia", "name": "julia", "version": "1.7.2" } }, "nbformat": 4, "nbformat_minor": 4 }