{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# The Cookie Problem" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "This notebook is part of [Bite Size Bayes](https://allendowney.github.io/BiteSizeBayes/), an introduction to probability and Bayesian statistics using Python.\n", "\n", "Copyright 2020 Allen B. Downey\n", "\n", "License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "The following cell downloads `utils.py`, which contains some utility function we'll need." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "tags": [] }, "outputs": [], "source": [ "from os.path import basename, exists\n", "\n", "def download(url):\n", " filename = basename(url)\n", " if not exists(filename):\n", " from urllib.request import urlretrieve\n", " local, _ = urlretrieve(url, filename)\n", " print('Downloaded ' + local)\n", "\n", "download('https://github.com/AllenDowney/BiteSizeBayes/raw/master/utils.py')" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "If everything we need is installed, the following cell should run with no error messages." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np\n", "\n", "from utils import values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Review\n", "\n", "[In the previous notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/02_bayes.ipynb) I presented and proved (sort of) three theorems of probability:\n", "\n", "**Theorem 1** gives us a way to compute a conditional probability using a conjunction:\n", "\n", "$P(A|B) = \\frac{P(A~\\mathrm{and}~B)}{P(B)}$ \n", "\n", "**Theorem 2** gives us a way to compute a conjunction using a conditional probability:\n", "\n", "$P(A~\\mathrm{and}~B) = P(B) P(A|B)$\n", "\n", "**Theorem 3** gives us a way to get from $P(A|B)$ to $P(B|A)$, or the other way around:\n", "\n", "$P(A|B) = \\frac{P(A) P(B|A)}{P(B)}$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the examples we've seen so far, we didn't really need these theorems, because when you have all of the data, you can compute any probability you want, any conjunction, or any conditional probability, just by counting. \n", "\n", "Starting with this notebook, we will look at examples where we don't have all of the data, and we'll see that these theorems are useful, expecially Theorem 3, which is also known as Bayes's Theorem." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Bayes's Theorem\n", "\n", "There are two ways to think about Bayes's Theorem:\n", "\n", "* It is a divide-and conquer strategy for computing conditional probabilities. If it's hard to compute $P(A|B)$ directly, sometimes it is easier to compute the terms on the other side of the equation: $P(A)$, $P(B|A)$, and $P(B)$.\n", "\n", "* It is also a recipe for updating beliefs in the light of new data.\n", "\n", "When we are working with the second interpretation, we often write Bayes's Theorem with different variables. 
Instead of $A$ and $B$, we use $H$ and $D$, where\n", "\n", "* $H$ stands for \"hypothesis\", and\n", "\n", "* $D$ stands for \"data\".\n", "\n", "So we write Bayes's Theorem like this:\n", "\n", "$P(H|D) = P(H) ~ P(D|H) ~/~ P(D)$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this context, each term has a name:\n", "\n", "* $P(H)$ is the \"prior probability\" of the hypothesis, which represents how confident you are that $H$ is true prior to seeing the data,\n", "\n", "* $P(D|H)$ is the \"likelihood\" of the data, which is the probability of seeing $D$ if the hypothesis is true,\n", "\n", "* $P(D)$ is the \"total probability of the data\", that is, the chance of seeing $D$ regardless of whether $H$ is true or not,\n", "\n", "* $P(H|D)$ is the \"posterior probability\" of the hypothesis, which indicates how confident you should be that $H$ is true after taking the data into account.\n", "\n", "An example will make all of this clearer." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The cookie problem\n", "\n", "Here's a problem I got from Wikipedia a long time ago; it has since been edited away.\n", "\n", "> Suppose you have two bowls of cookies. Bowl 1 contains 30 vanilla and 10 chocolate cookies. Bowl 2 contains 20 of each kind.\n", ">\n", "> You choose one of the bowls at random and, without looking into the bowl, choose one of the cookies at random. It turns out to be a vanilla cookie.\n", ">\n", "> What is the chance that you chose Bowl 1?\n", "\n", "We'll assume that there was an equal chance of choosing either bowl and an equal chance of choosing any cookie in the bowl." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can solve this problem using Bayes's Theorem. First, I'll define $H$ and $D$:\n", "\n", "* $H$ is the hypothesis that the bowl you chose is Bowl 1.\n", "\n", "* $D$ is the datum that the cookie is vanilla (\"datum\" is the rarely-used singular form of \"data\").\n", "\n", "What we want is the posterior probability of $H$, which is $P(H|D)$. It is not obvious how to compute it directly, but if we can figure out the terms on the right-hand side of Bayes's Theorem, we can get to it indirectly." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "1. $P(H)$ is the prior probability of $H$, which is the probability of choosing Bowl 1 before we see the data. If there was an equal chance of choosing either bowl, $P(H)$ is $1/2$.\n", "\n", "2. $P(D|H)$ is the likelihood of the data, which is the chance of getting a vanilla cookie if $H$ is true, in other words, the chance of getting a vanilla cookie from Bowl 1, which is $30/40$ or $3/4$.\n", "\n", "3. $P(D)$ is the total probability of the data, which is the chance of getting a vanilla cookie whether $H$ is true or not. In this example, we can figure out $P(D)$ directly: because the bowls are equally likely and they contain the same number of cookies, you were equally likely to choose any cookie. Combining the two bowls, there are 50 vanilla and 30 chocolate cookies, so the probability of choosing a vanilla cookie is $50/80$ or $5/8$.\n", "\n", "Now that we have the terms on the right-hand side, we can use Bayes's Theorem to combine them."
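] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But first, here's a quick sanity check on $P(D)$, computed as a weighted sum: each bowl contributes its prior probability times its chance of yielding a vanilla cookie. This is a minimal sketch of the bookkeeping the Bayes table automates later in this notebook; the fractions come straight from the problem statement, and the name `prob_data_check` is my own." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Each bowl contributes prior * likelihood of vanilla:\n", "# Bowl 1: 1/2 * 3/4, Bowl 2: 1/2 * 1/2\n", "prob_data_check = 1/2 * 3/4 + 1/2 * 1/2\n", "prob_data_check"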
] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "prior = 1/2\n", "prior" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "likelihood = 3/4\n", "likelihood" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "prob_data = 5/8\n", "prob_data" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "posterior = prior * likelihood / prob_data\n", "posterior" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The posterior probability is $0.6$, a little higher than the prior, which was $0.5$. \n", "\n", "So the vanilla cookie makes us a little more certain that we chose Bowl 1." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Exercise:** What if we had chosen a chocolate cookie instead; what would be the posterior probability of Bowl 1?" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Evidence\n", "\n", "In the previous example and exercise, notice a pattern:\n", "\n", "* A vanilla cookie is more likely if we chose Bowl 1, so getting a vanilla cookie makes Bowl 1 more likely.\n", "\n", "* A chocolate cookie is less likely if we chose Bowl 1, so getting a chocolate cookie makes Bowl 1 less likely.\n", "\n", "If data makes the probability of a hypothesis go up, we say that it is \"evidence in favor\" of the hypothesis.\n", "\n", "If data makes the probability of a hypothesis go down, it is \"evidence against\" the hypothesis." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's do another example:\n", "\n", "> Suppose you have two coins in a box. One is a normal coin with heads on one side and tails on the other, and one is a trick coin with heads on both sides.\n", ">\n", "> You choose a coin at random and see that one of the sides is heads. Is this data evidence in favor of, or against, the hypothesis that you chose the trick coin?\n", "\n", "See if you can figure out the answer before you read my solution. I suggest these steps:\n", "\n", "1. First, state clearly what is the hypothesis and what is the data.\n", "\n", "2. Then think about the prior, the likelihood of the data, and the total probability of the data.\n", "\n", "3. Apply Bayes's Theorem to compute the posterior probability of the hypothesis.\n", "\n", "4. Use the result to answer the question as posed." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this example:\n", "\n", "* $H$ is the hypothesis that you chose the trick coin with two heads.\n", "\n", "* $D$ is the observation that one side of the coin is heads.\n", "\n", "Now let's think about the right-hand terms:\n", "\n", "* The prior is 1/2 because we were equally likely to choose either coin.\n", "\n", "* The likelihood is 1 because if we chose the the trick coin, we would necessarily see heads.\n", "\n", "* The total probability of the data is 3/4 because 3 of the 4 sides are heads, and we were equally likely to see any of them.\n", "\n", "Here's what we get when we apply Bayes's theorem:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "prior = 1/2\n", "likelihood = 1\n", "prob_data = 3/4\n", "\n", "posterior = prior * likelihood / prob_data\n", "posterior" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The posterior is greater than the prior, so this data is evidence *in favor of* the hypothesis that you chose the trick coin.\n", "\n", "And that makes sense, because getting heads is more likely if you choose the trick coin rather than the normal coin." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Bayes table\n", "\n", "In the cookie problem and the coin problem we were able to compute the probability of the data directly, but that's not always the case. In fact, computing the total probability of the data is often the hardest part of the problem.\n", "\n", "Fortunately, there is another way to solve problems like this that makes it easier: the Bayes table.\n", "\n", "You can write a Bayes table on paper or use a spreadsheet, but in this notebook I'll use a Pandas DataFrame.\n", "\n", "I'll do the cookie problem first. Here's an empty DataFrame with one row for each hypothesis:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "table = pd.DataFrame(index=['Bowl 1', 'Bowl 2'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now I'll add a column to represent the priors:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "table['prior'] = 1/2, 1/2\n", "table" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And a column for the likelihoods:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "table['likelihood'] = 3/4, 1/2\n", "table" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we see a difference from the previous method: we compute likelihoods for both hypotheses, not just Bowl 1:\n", "\n", "* The chance of getting a vanilla cookie from Bowl 1 is 3/4.\n", "\n", "* The chance of getting a vanilla cookie from Bowl 2 is 1/2.\n", "\n", "The next step is similar to what we did with Bayes's Theorem; we multiply the priors by the likelihoods:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "table['unnorm'] = table['prior'] * table['likelihood']\n", "table" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "I called the result `unnorm` because it is an \"unnormalized posterior\". To see what that means, let's compare the right-hand side of Bayes's Theorem:\n", "\n", "$P(H) P(D|H)~/~P(D)$\n", "\n", "To what we have computed so far:\n", "\n", "$P(H) P(D|H)$\n", "\n", "The difference is that we have not divided through by $P(D)$, the total probability of the data. 
So let's do that." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are two ways to compute $P(D)$:\n", "\n", "1. Sometimes we can figure it out directly.\n", "\n", "2. Otherwise, we can compute it by adding up the unnormalized posteriors.\n", "\n", "I'll show the second method computationally, then explain how it works.\n", "\n", "Here's the total of the unnormalized posteriors:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "prob_data = table['unnorm'].sum()\n", "prob_data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that we get 5/8, which is what we got by computing $P(D)$ directly.\n", "\n", "Now we divide by $P(D)$ to get the posteriors:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "table['posterior'] = table['unnorm'] / prob_data\n", "table" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The posterior probability for Bowl 1 is 0.6, which is what we got using Bayes's Theorem explicitly.\n", "\n", "As a bonus, we also get the posterior probability of Bowl 2, which is 0.4.\n", "\n", "The posterior probabilities add up to 1, which they should, because the hypotheses are \"complementary\"; that is, exactly one of them must be true. So their probabilities have to add up to 1.\n", "\n", "When we add up the unnormalized posteriors and divide through, we force the posteriors to add up to 1. This process is called \"normalization\", which is why the total probability of the data is also called the \"[normalizing constant](https://en.wikipedia.org/wiki/Normalizing_constant#Bayes'_theorem)\".\n", "\n", "It might not be clear yet why the unnormalized posteriors add up to $P(D)$. I'll come back to that in the next notebook." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Exercise:** Solve the trick coin problem using a Bayes table:\n", "\n", "> Suppose you have two coins in a box. One is a normal coin with heads on one side and tails on the other, and one is a trick coin with heads on both sides.\n", ">\n", "> You choose a coin at random and see that one of the sides is heads. What is the posterior probability that you chose the trick coin?\n", "\n", "Hint: The answer should still be 2/3." ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summary\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this notebook I introduced two example problems: the cookie problem and the trick coin problem.\n", "\n", "We solved both problems using Bayes's Theorem; then I presented the Bayes table, a method for solving problems where it is hard to compute the total probability of the data directly.\n", "\n", "[In the next notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/04_dice.ipynb), we'll see examples with more than two hypotheses, and I'll explain more carefully how the Bayes table works."
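] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a recap, here is a minimal sketch that bundles the Bayes table steps into a reusable function and re-runs the cookie problem with it. The name `update` is my own choice for this sketch, not something defined in `utils.py`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def update(table):\n", "    \"\"\"Compute posterior probabilities from priors and likelihoods.\"\"\"\n", "    # Multiply the priors by the likelihoods\n", "    table['unnorm'] = table['prior'] * table['likelihood']\n", "    # The sum of the unnormalized posteriors is the total probability of the data\n", "    prob_data = table['unnorm'].sum()\n", "    # Divide through to normalize, so the posteriors add up to 1\n", "    table['posterior'] = table['unnorm'] / prob_data\n", "    return prob_data\n", "\n", "# The cookie problem again; Bowl 1 should come out to 0.6\n", "table2 = pd.DataFrame(index=['Bowl 1', 'Bowl 2'])\n", "table2['prior'] = 1/2, 1/2\n", "table2['likelihood'] = 3/4, 1/2\n", "update(table2)\n", "table2"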
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" } }, "nbformat": 4, "nbformat_minor": 2 }