{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Bayesian Decision Analysis\n", "\n", "Allen Downey\n", "\n", "[Bayesian Decision Analysis](https://allendowney.github.io/BayesianDecisionAnalysis/)" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Five Urn Problem\n", "\n", "Let's start by solving the Five Urn problem using a pandas `Series` to represent a [Probability Mass Function](https://en.wikipedia.org/wiki/Probability_mass_function) (PMF).\n", "\n", "The key idea is that the **index** of the `Series` represents a set of hypotheses and the **values** of the `Series` represent the corresponding probabilities.\n", "\n", "You have five urns that contain blue and yellow marbles:\n", "\n", "* Urn 0 contains 0% blue marbles.\n", "* Urn 1 contains 25% blue marbles.\n", "* Urn 2 contains 50% blue marbles.\n", "* Urn 3 contains 75% blue marbles.\n", "* Urn 4 contains 100% blue marbles.\n", "\n", "You choose an urn, choose a marble, and it's blue.\n", "\n", "What is the posterior probability that you chose each urn?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "I'll use integers to represent the hypotheses, so `0` represents the hypothesis that we chose from Urn 0.\n", "The prior probabilities are equal for the five hypotheses." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "hypos = [0, 1, 2, 3, 4]\n", "prior = pd.Series(1/5, hypos)\n", "prior" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we chose from Urn $i$, the probability of getting a blue marble is $i/4$.\n", "So that's the likelihood of the data." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "likelihood = np.array(hypos) / 4\n", "likelihood" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's the fundamental step of all Bayesian updates, multiplying the prior by the likelihood." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "posterior = prior * likelihood\n", "posterior" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The sum of these products is the total probability of the data." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "prob_data = posterior.sum()\n", "prob_data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The last step is to normalize the posterior probabilities by dividing through by the probability of the data." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "posterior /= prob_data\n", "posterior" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The results are the same as what we worked through in the slides." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Bayesian Bandit Problem\n", "\n", "Now let's get to the Bayesian Bandit problem, which is a model for many kinds of A/B testing, including medical trials.\n", "\n", "Suppose you have several \"one-armed bandit\" slot machines, and there's reason to think that they have different probabilities of paying off.\n", "\n", "Each time you play a machine, you either win or lose, and you can use the outcome to update your belief about the probability of winning.\n", "\n", "Then, to decide which machine to play next, you can use the \"Bayesian bandit\" strategy, explained below.\n", "\n", "First, let's see how to do the update." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The prior\n", "\n", "If we know nothing about the probability of winning, we can start with a uniform prior.\n", "We'll use integers from 0 to 100 to represent hypothetical probabilities of winning (as percentages)." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "tags": [] }, "outputs": [], "source": [ "# hypothetical win probabilities, as percentages from 0 to 100\n", "xs = np.arange(101)\n", "\n", "# uniform prior over the hypotheses\n", "prior = pd.Series(1/101, xs)\n", "prior.head()" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "tags": [] }, "outputs": [], "source": [ "# confirm that the prior adds up to 1\n", "prior.sum()" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "def decorate(title=\"\"):\n", "    \"\"\"Labels the axes.\n", "\n", "    title: string\n", "    \"\"\"\n", "    plt.xlabel(\"Probability of winning\")\n", "    plt.ylabel(\"PMF\")\n", "    plt.title(title)" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "tags": [] }, "outputs": [], "source": [ "prior.plot()\n", "decorate(\"Prior\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Bayesian Update\n", "\n", "The prior represents what we believe about possible values of `x` before we have any data.\n", "Now suppose we play a machine once and win.\n", "What should we believe about `x` now?\n", "\n", "We can answer that question by computing the likelihood of the data, a win, for each value of `x`.\n", "If `x` is 50, the probability of winning is 0.5.\n", "If `x` is 75, the probability is 0.75.\n", "In general, we can compute the probabilities by dividing the values of `x` by 100." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "tags": [] }, "outputs": [], "source": [ "# likelihood of a win for each hypothetical value of x\n", "likelihood_win = xs / 100\n", "likelihood_win" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "tags": [] }, "outputs": [], "source": [ "# multiply the prior by the likelihood\n", "posterior = prior * likelihood_win" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "tags": [] }, "outputs": [], "source": [ "# the total probability of the data\n", "prob_data = posterior.sum()\n", "prob_data" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "tags": [] }, "outputs": [], "source": [ "# normalize the posterior\n", "posterior /= prob_data" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "posterior.plot()\n", "decorate(\"Posterior, one win\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Suppose we play the same machine and win again. We can do a second update, using the posterior from the first update as the prior for the second." ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "posterior2 = posterior * likelihood_win\n", "posterior2 /= posterior2.sum()\n", "posterior2.plot()\n", "decorate(\"Posterior, two wins\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And suppose we play one more time and lose. Now we need the likelihood of losing for each value of `x`.\n" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "tags": [] }, "outputs": [], "source": [ "# the likelihood of losing is the complement of the likelihood of winning\n", "likelihood_loss = 1 - xs / 100\n", "likelihood_loss" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here's the update."
] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "posterior3 = posterior2 * likelihood_loss\n", "posterior3 /= posterior3.sum()\n", "posterior3.plot()\n", "decorate(\"Posterior, two wins, one loss\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The update function\n", "\n", "The following function takes as parameters a pandas `Series` that represents the prior distribution and a string that represents the data: either `W` if we won or `L` if we lost." ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "def update(pmf, data):\n", "    \"\"\"Update the PMF based on one outcome.\n", "\n", "    pmf: Series that maps hypotheses to probabilities\n", "    data: string, either 'W' or 'L'\n", "    \"\"\"\n", "    if data == \"W\":\n", "        likelihood = likelihood_win\n", "    else:\n", "        likelihood = likelihood_loss\n", "\n", "    pmf *= likelihood\n", "    pmf /= pmf.sum()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It selects the likelihood that corresponds to the outcome, then updates `pmf` in place, multiplying by the likelihood and dividing through by the probability of the data.\n", "\n", "Here's an example, starting from a copy of the uniform prior." ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "tags": [] }, "outputs": [], "source": [ "pmf = prior.copy()\n", "update(pmf, \"W\")\n", "pmf.plot()\n", "decorate(\"Posterior, one win\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise\n", "\n", "Suppose you play a machine 10 times and win once. What is the posterior distribution of $x$?" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [], "source": [ "outcomes = \"WLLLLLLLLL\"" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "# solution: start with the uniform prior and update once per outcome\n", "pmf = prior.copy()\n", "for outcome in outcomes:\n", "    update(pmf, outcome)\n", "\n", "pmf.plot()\n", "decorate(\"Posterior, one win out of ten plays\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Multiple bandits\n", "\n", "Now suppose we have several bandits and we want to decide which one to play.\n", "\n", "For this example, suppose we have 4 machines with these probabilities:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "actual_probs = [0.0, 0.1, 0.2, 0.3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For purposes of the example, we should **assume that we do not know these probabilities**.\n", "\n", "The function `play` simulates playing one machine once and returns `W` or `L`." ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [], "source": [ "from random import random\n", "\n", "\n", "def flip(p):\n", "    \"\"\"Return True with probability p.\"\"\"\n", "    return random() < p" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "from collections import Counter\n", "\n", "# count how many times we've played each machine\n", "counter = Counter()\n", "\n", "\n", "def play(i):\n", "    \"\"\"Play machine i.\n", "\n", "    returns: string 'W' or 'L'\n", "    \"\"\"\n", "    counter[i] += 1\n", "    p = actual_probs[i]\n", "    if flip(p):\n", "        return \"W\"\n", "    else:\n", "        return \"L\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's a test, playing machine 3 ten times:" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "for i in range(10):\n", "    outcome = play(3)\n", "    print(outcome, end=\" \")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now I'll make four copies of the prior to represent our beliefs about the four machines."
] }, { "cell_type": "code", "execution_count": 27, "metadata": { "tags": [] }, "outputs": [], "source": [ "# one copy of the prior for each machine\n", "beliefs = [prior.copy() for i in range(4)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This function displays four distributions in a grid." ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [], "source": [ "def plot(beliefs, **options):\n", "    for i, b in enumerate(beliefs):\n", "        plt.subplot(2, 2, i + 1)\n", "        b.plot(label=\"Machine %s\" % i)\n", "        plt.gca().set_yticklabels([])\n", "        plt.legend()\n", "\n", "    plt.tight_layout()" ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "scrolled": true }, "outputs": [], "source": [ "plot(beliefs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As an example, let's play each machine 10 times, then plot the posterior distributions." ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "tags": [] }, "outputs": [], "source": [ "# play each machine 10 times and update the corresponding beliefs\n", "for i in range(4):\n", "    for _ in range(10):\n", "        outcome = play(i)\n", "        update(beliefs[i], outcome)" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [], "source": [ "plot(beliefs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Bayesian Bandits\n", "\n", "To get more information, we could play each machine 100 times, but while we are gathering data, we are not making good use of it. The kernel of the Bayesian Bandits algorithm is that it collects and uses data at the same time. In other words, it balances exploration and exploitation.\n", "\n", "To do that, it draws a random value from each distribution and chooses the machine that generates the largest value.\n", "\n", "The following function takes a PMF and chooses a random value from it, using the probabilities as weights." ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [], "source": [ "def pmf_choice(pmf):\n", "    \"\"\"Draw a random sample from a PMF.\n", "\n", "    pmf: Series representing a PMF\n", "\n", "    returns: quantity from PMF\n", "    \"\"\"\n", "    return np.random.choice(a=pmf.index, p=pmf.values)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's an example." ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [], "source": [ "pmf_choice(beliefs[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following function uses `pmf_choice` to choose one value from the posterior distribution of each machine, then uses `argmax` to find the index of the machine that yielded the highest value." ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [], "source": [ "def choose(beliefs):\n", "    \"\"\"Use the Bayesian bandit strategy to choose a machine.\n", "\n", "    Draws a sample from each distribution.\n", "\n", "    returns: index of the machine that yielded the highest value\n", "    \"\"\"\n", "    ps = [pmf_choice(b) for b in beliefs]\n", "    return np.argmax(ps)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's an example." ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [], "source": [ "choose(beliefs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`choose` has the property that the probability of choosing each machine is equal to its \"probability of superiority\"."
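] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick empirical check (a small addition, not part of the original), we can call `choose` many times and count how often each machine comes out on top; the counts should approximate the probabilities of superiority." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# tally which machine wins the sampled draw in 1000 trials\n", "Counter(choose(beliefs) for i in range(1000))"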
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Exercise 3:** Putting it all together, fill in the following function to choose a machine, play once, and update `beliefs`. A solution follows." ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [], "source": [ "def choose_play_update(beliefs, verbose=False):\n", "    \"\"\"Choose a machine, play it, and update beliefs.\n", "\n", "    beliefs: list of Series that represent PMFs\n", "    verbose: Boolean, whether to print results\n", "    \"\"\"\n", "    # choose a machine\n", "    machine = ____\n", "\n", "    # play it\n", "    outcome = ____\n", "\n", "    # update beliefs\n", "    update(____)\n", "\n", "    if verbose:\n", "        print(machine, outcome)" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [], "source": [ "# solution\n", "\n", "def choose_play_update(beliefs, verbose=False):\n", "    \"\"\"Choose a machine, play it, and update beliefs.\n", "\n", "    beliefs: list of Series that represent PMFs\n", "    verbose: Boolean, whether to print results\n", "    \"\"\"\n", "    # choose a machine\n", "    machine = choose(beliefs)\n", "\n", "    # play it\n", "    outcome = play(machine)\n", "\n", "    # update beliefs about that machine\n", "    update(beliefs[machine], outcome)\n", "\n", "    if verbose:\n", "        print(machine, outcome)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's an example:" ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [], "source": [ "choose_play_update(beliefs, verbose=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Trying it out\n", "\n", "Let's start again with a fresh set of machines and an empty `Counter`." ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [], "source": [ "beliefs = [prior.copy() for i in range(4)]\n", "counter = Counter()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we run the bandit algorithm 100 times, we can see how `beliefs` gets updated:" ] }, { "cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [], "source": [ "num_plays = 100\n", "\n", "for i in range(num_plays):\n", "    choose_play_update(beliefs)\n", "\n", "plot(beliefs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The estimates are still rough, especially for the lower-probability machines. But that's a feature, not a bug: the goal is to play the high-probability machines most often. Making the estimates more precise is a means to that end, but not an end in itself.\n", "\n", "Let's see how many times each machine got played. If things go according to plan, the machines with higher probabilities should get played more often." ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [], "source": [ "for machine, count in sorted(counter.items()):\n", "    print(machine, count)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "**Exercise 4:** Go back and run this section again with a different value of `num_plays` and see how it does." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summary\n", "\n", "The algorithm I presented in this notebook is called [Thompson sampling](https://en.wikipedia.org/wiki/Thompson_sampling). It is an example of a general strategy called [Bayesian decision theory](https://wiki.lesswrong.com/wiki/Bayesian_decision_theory), which is the idea of using a posterior distribution as part of a decision-making process, usually by choosing an action that minimizes the costs we expect on average (or maximizes a benefit).\n", "\n", "In my opinion, this strategy is the biggest advantage of Bayesian methods over classical statistics. When we represent knowledge in the form of probability distributions, Bayes's theorem tells us how to change our beliefs as we get more data, and Bayesian decision theory tells us how to make that knowledge actionable." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Copyright 2022 Allen B. 
Downey\n", "\n", "License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 1 }