{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Minimum, Maximum, and Mixture" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "Think Bayes, Second Edition\n", "\n", "Copyright 2020 Allen B. Downey\n", "\n", "License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:30.749769Z", "iopub.status.busy": "2021-04-16T19:35:30.748995Z", "iopub.status.idle": "2021-04-16T19:35:30.752221Z", "shell.execute_reply": "2021-04-16T19:35:30.751509Z" }, "tags": [] }, "outputs": [], "source": [ "# If we're running on Colab, install empiricaldist\n", "# https://pypi.org/project/empiricaldist/\n", "\n", "import sys\n", "IN_COLAB = 'google.colab' in sys.modules\n", "\n", "if IN_COLAB:\n", " !pip install empiricaldist" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:30.756174Z", "iopub.status.busy": "2021-04-16T19:35:30.755765Z", "iopub.status.idle": "2021-04-16T19:35:30.757511Z", "shell.execute_reply": "2021-04-16T19:35:30.757889Z" }, "tags": [] }, "outputs": [], "source": [ "# Get utils.py\n", "\n", "from os.path import basename, exists\n", "\n", "def download(url):\n", " filename = basename(url)\n", " if not exists(filename):\n", " from urllib.request import urlretrieve\n", " local, _ = urlretrieve(url, filename)\n", " print('Downloaded ' + local)\n", " \n", "download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:30.760611Z", "iopub.status.busy": "2021-04-16T19:35:30.760188Z", "iopub.status.idle": "2021-04-16T19:35:31.437686Z", "shell.execute_reply": "2021-04-16T19:35:31.438042Z" }, "tags": [] }, "outputs": [], "source": [ "from utils import set_pyplot_params\n", "set_pyplot_params()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the previous chapter we computed distributions of sums.\n", "In this chapter, we'll compute distributions of minimums and maximums, and use them to solve both forward and inverse problems.\n", "\n", "Then we'll look at distributions that are mixtures of other distributions, which will turn out to be particularly useful for making predictions.\n", "\n", "But we'll start with a powerful tool for working with distributions, the cumulative distribution function." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Cumulative Distribution Functions\n", "\n", "So far we have been using probability mass functions to represent distributions.\n", "A useful alternative is the **cumulative distribution function**, or CDF.\n", "\n", "As an example, I'll use the posterior distribution from the Euro problem, which we computed in <<_BayesianEstimation>>.\n", "\n", "Here's the uniform prior we started with." 
] }, { "cell_type": "code", "execution_count": 4, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.442337Z", "iopub.status.busy": "2021-04-16T19:35:31.441797Z", "iopub.status.idle": "2021-04-16T19:35:31.443480Z", "shell.execute_reply": "2021-04-16T19:35:31.443832Z" } }, "outputs": [], "source": [ "import numpy as np\n", "from empiricaldist import Pmf\n", "\n", "hypos = np.linspace(0, 1, 101)\n", "pmf = Pmf(1, hypos)\n", "data = 140, 250" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here's the update." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.447686Z", "iopub.status.busy": "2021-04-16T19:35:31.447185Z", "iopub.status.idle": "2021-04-16T19:35:31.449283Z", "shell.execute_reply": "2021-04-16T19:35:31.448866Z" } }, "outputs": [], "source": [ "from scipy.stats import binom\n", "\n", "def update_binomial(pmf, data):\n", " \"\"\"Update pmf using the binomial distribution.\"\"\"\n", " k, n = data\n", " xs = pmf.qs\n", " likelihood = binom.pmf(k, n, xs)\n", " pmf *= likelihood\n", " pmf.normalize()" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.454274Z", "iopub.status.busy": "2021-04-16T19:35:31.453576Z", "iopub.status.idle": "2021-04-16T19:35:31.459248Z", "shell.execute_reply": "2021-04-16T19:35:31.458812Z" } }, "outputs": [], "source": [ "update_binomial(pmf, data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The CDF is the cumulative sum of the PMF, so we can compute it like this:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.462631Z", "iopub.status.busy": "2021-04-16T19:35:31.462163Z", "iopub.status.idle": "2021-04-16T19:35:31.463845Z", "shell.execute_reply": "2021-04-16T19:35:31.464199Z" } }, "outputs": [], "source": [ "cumulative = pmf.cumsum()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's what it looks like, along with the PMF." 
] }, { "cell_type": "code", "execution_count": 8, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.467579Z", "iopub.status.busy": "2021-04-16T19:35:31.467137Z", "iopub.status.idle": "2021-04-16T19:35:31.469147Z", "shell.execute_reply": "2021-04-16T19:35:31.468672Z" }, "tags": [] }, "outputs": [], "source": [ "from utils import decorate\n", "\n", "def decorate_euro(title):\n", " decorate(xlabel='Proportion of heads (x)',\n", " ylabel='Probability',\n", " title=title)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.474021Z", "iopub.status.busy": "2021-04-16T19:35:31.472944Z", "iopub.status.idle": "2021-04-16T19:35:31.677455Z", "shell.execute_reply": "2021-04-16T19:35:31.677848Z" }, "tags": [] }, "outputs": [], "source": [ "cumulative.plot(label='CDF')\n", "pmf.plot(label='PMF')\n", "decorate_euro(title='Posterior distribution for the Euro problem')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The range of the CDF is always from 0 to 1, in contrast with the PMF, where the maximum can be any probability.\n", "\n", "The result from `cumsum` is a Pandas `Series`, so we can use the bracket operator to select an element:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.681378Z", "iopub.status.busy": "2021-04-16T19:35:31.680792Z", "iopub.status.idle": "2021-04-16T19:35:31.684176Z", "shell.execute_reply": "2021-04-16T19:35:31.683818Z" } }, "outputs": [], "source": [ "cumulative[0.61]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The result is about 0.96, which means that the total probability of all quantities less than or equal to 0.61 is 96%.\n", "\n", "To go the other way --- to look up a probability and get the corresponding quantile --- we can use interpolation:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.688718Z", "iopub.status.busy": "2021-04-16T19:35:31.688069Z", "iopub.status.idle": "2021-04-16T19:35:31.691063Z", "shell.execute_reply": "2021-04-16T19:35:31.690624Z" } }, "outputs": [], "source": [ "from scipy.interpolate import interp1d\n", "\n", "ps = cumulative.values\n", "qs = cumulative.index\n", "\n", "interp = interp1d(ps, qs)\n", "interp(0.96)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The result is about 0.61, so that confirms that the 96th percentile of this distribution is 0.61.\n", "\n", "`empiricaldist` provides a class called `Cdf` that represents a cumulative distribution function.\n", "Given a `Pmf`, you can compute a `Cdf` like this:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.694831Z", "iopub.status.busy": "2021-04-16T19:35:31.694113Z", "iopub.status.idle": "2021-04-16T19:35:31.696012Z", "shell.execute_reply": "2021-04-16T19:35:31.696439Z" } }, "outputs": [], "source": [ "cdf = pmf.make_cdf()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`make_cdf` uses `np.cumsum` to compute the cumulative sum of the probabilities.\n", "\n", "You can use brackets to select an element from a `Cdf`:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.700286Z", "iopub.status.busy": "2021-04-16T19:35:31.699659Z", "iopub.status.idle": "2021-04-16T19:35:31.702626Z", "shell.execute_reply": "2021-04-16T19:35:31.703050Z" } }, 
"outputs": [], "source": [ "cdf[0.61]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But if you look up a quantity that's not in the distribution, you get a `KeyError`.\n" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.707236Z", "iopub.status.busy": "2021-04-16T19:35:31.706612Z", "iopub.status.idle": "2021-04-16T19:35:31.709290Z", "shell.execute_reply": "2021-04-16T19:35:31.709801Z" }, "tags": [] }, "outputs": [], "source": [ "try:\n", " cdf[0.615]\n", "except KeyError as e:\n", " print(repr(e))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To avoid this problem, you can call a `Cdf` as a function, using parentheses.\n", "If the argument does not appear in the `Cdf`, it interpolates between quantities." ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.714823Z", "iopub.status.busy": "2021-04-16T19:35:31.714082Z", "iopub.status.idle": "2021-04-16T19:35:31.718078Z", "shell.execute_reply": "2021-04-16T19:35:31.717545Z" } }, "outputs": [], "source": [ "cdf(0.615)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Going the other way, you can use `quantile` to look up a cumulative probability and get the corresponding quantity:\n" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.722903Z", "iopub.status.busy": "2021-04-16T19:35:31.722203Z", "iopub.status.idle": "2021-04-16T19:35:31.725892Z", "shell.execute_reply": "2021-04-16T19:35:31.725379Z" } }, "outputs": [], "source": [ "cdf.quantile(0.9638303)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`Cdf` also provides `credible_interval`, which computes a credible interval that contains the given probability:\n" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.730655Z", "iopub.status.busy": "2021-04-16T19:35:31.730044Z", "iopub.status.idle": "2021-04-16T19:35:31.733376Z", "shell.execute_reply": "2021-04-16T19:35:31.733908Z" } }, "outputs": [], "source": [ "cdf.credible_interval(0.9)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "CDFs and PMFs are equivalent in the sense that they contain the\n", "same information about the distribution, and you can always convert\n", "from one to the other.\n", "Given a `Cdf`, you can get the equivalent `Pmf` like this:" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.738651Z", "iopub.status.busy": "2021-04-16T19:35:31.738024Z", "iopub.status.idle": "2021-04-16T19:35:31.740320Z", "shell.execute_reply": "2021-04-16T19:35:31.740851Z" } }, "outputs": [], "source": [ "pmf = cdf.make_pmf()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`make_pmf` uses `np.diff` to compute differences between consecutive cumulative probabilities.\n", "\n", "One reason `Cdf` objects are useful is that they compute quantiles efficiently.\n", "Another is that they make it easy to compute the distribution of a maximum or minimum, as we'll see in the next section." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Best Three of Four\n", "\n", "In *Dungeons & Dragons*, each character has six attributes: strength, intelligence, wisdom, dexterity, constitution, and charisma.\n", "\n", "To generate a new character, players roll four 6-sided dice for each attribute and add up the best three.\n", "For example, if I roll for strength and get 1, 2, 3, 4 on the dice, my character's strength would be the sum of 2, 3, and 4, which is 9.\n", "\n", "As an exercise, let's figure out the distribution of these attributes.\n", "Then, for each character, we'll figure out the distribution of their best attribute.\n", "\n", "I'll import two functions from the previous chapter: `make_die`, which makes a `Pmf` that represents the outcome of rolling a die, and `add_dist_seq`, which takes a sequence of `Pmf` objects and computes the distribution of their sum.\n", "\n", "Here's a `Pmf` that represents a six-sided die and a sequence with three references to it." ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.746067Z", "iopub.status.busy": "2021-04-16T19:35:31.745270Z", "iopub.status.idle": "2021-04-16T19:35:31.748598Z", "shell.execute_reply": "2021-04-16T19:35:31.747789Z" } }, "outputs": [], "source": [ "from utils import make_die\n", "\n", "die = make_die(6)\n", "dice = [die] * 3" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here's the distribution of the sum of three dice." ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.755067Z", "iopub.status.busy": "2021-04-16T19:35:31.753615Z", "iopub.status.idle": "2021-04-16T19:35:31.757421Z", "shell.execute_reply": "2021-04-16T19:35:31.757933Z" } }, "outputs": [], "source": [ "from utils import add_dist_seq\n", "\n", "pmf_3d6 = add_dist_seq(dice)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's what it looks like:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.762372Z", "iopub.status.busy": "2021-04-16T19:35:31.761881Z", "iopub.status.idle": "2021-04-16T19:35:31.764211Z", "shell.execute_reply": "2021-04-16T19:35:31.763786Z" }, "tags": [] }, "outputs": [], "source": [ "def decorate_dice(title=''):\n", " decorate(xlabel='Outcome',\n", " ylabel='PMF',\n", " title=title)" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.799186Z", "iopub.status.busy": "2021-04-16T19:35:31.784859Z", "iopub.status.idle": "2021-04-16T19:35:31.931262Z", "shell.execute_reply": "2021-04-16T19:35:31.930892Z" }, "tags": [] }, "outputs": [], "source": [ "pmf_3d6.plot()\n", "decorate_dice('Distribution of attributes')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we roll four dice and add up the best three, computing the distribution of the sum is a bit more complicated.\n", "I'll estimate the distribution by simulating 10,000 rolls.\n", "\n", "First I'll create an array of random values from 1 to 6, with 10,000 rows and 4 columns:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.934466Z", "iopub.status.busy": "2021-04-16T19:35:31.934017Z", "iopub.status.idle": "2021-04-16T19:35:31.936034Z", "shell.execute_reply": "2021-04-16T19:35:31.936379Z" } }, "outputs": [], "source": [ "n = 10000\n", "a = np.random.randint(1, 7, 
size=(n, 4))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To find the best three outcomes in each row, I'll use `sort` with `axis=1`, which sorts the rows in ascending order." ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.939312Z", "iopub.status.busy": "2021-04-16T19:35:31.938870Z", "iopub.status.idle": "2021-04-16T19:35:31.940846Z", "shell.execute_reply": "2021-04-16T19:35:31.941280Z" } }, "outputs": [], "source": [ "a.sort(axis=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, I'll select the last three columns and add them up." ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.944292Z", "iopub.status.busy": "2021-04-16T19:35:31.943874Z", "iopub.status.idle": "2021-04-16T19:35:31.945874Z", "shell.execute_reply": "2021-04-16T19:35:31.946275Z" } }, "outputs": [], "source": [ "t = a[:, 1:].sum(axis=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now `t` is an array with a single column and 10,000 rows.\n", "We can compute the PMF of the values in `t` like this:" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.950862Z", "iopub.status.busy": "2021-04-16T19:35:31.949475Z", "iopub.status.idle": "2021-04-16T19:35:31.953994Z", "shell.execute_reply": "2021-04-16T19:35:31.953526Z" } }, "outputs": [], "source": [ "pmf_best3 = Pmf.from_seq(t)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following figure shows the distribution of the sum of three dice, `pmf_3d6`, and the distribution of the best three out of four, `pmf_best3`." ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:31.997678Z", "iopub.status.busy": "2021-04-16T19:35:31.996921Z", "iopub.status.idle": "2021-04-16T19:35:32.130553Z", "shell.execute_reply": "2021-04-16T19:35:32.130960Z" }, "tags": [] }, "outputs": [], "source": [ "pmf_3d6.plot(label='sum of 3 dice')\n", "pmf_best3.plot(label='best 3 of 4', ls='--')\n", "\n", "decorate_dice('Distribution of attributes')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you might expect, choosing the best three out of four tends to yield higher values.\n", "\n", "Next we'll find the distribution for the maximum of six attributes, each the sum of the best three of four dice." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Maximum\n", "\n", "To compute the distribution of a maximum or minimum, we can make good use of the cumulative distribution function.\n", "First, I'll compute the `Cdf` of the best three of four distribution:" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.134886Z", "iopub.status.busy": "2021-04-16T19:35:32.134220Z", "iopub.status.idle": "2021-04-16T19:35:32.135956Z", "shell.execute_reply": "2021-04-16T19:35:32.136306Z" } }, "outputs": [], "source": [ "cdf_best3 = pmf_best3.make_cdf()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Recall that `Cdf(x)` is the sum of probabilities for quantities less than or equal to `x`.\n", "Equivalently, it is the probability that a random value chosen from the distribution is less than or equal to `x`.\n", "\n", "Now suppose I draw 6 values from this distribution.\n", "The probability that all 6 of them are less than or equal to `x` is `Cdf(x)` raised to the 6th power, which we can compute like this:" ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.141260Z", "iopub.status.busy": "2021-04-16T19:35:32.140745Z", "iopub.status.idle": "2021-04-16T19:35:32.143280Z", "shell.execute_reply": "2021-04-16T19:35:32.143621Z" }, "tags": [] }, "outputs": [], "source": [ "cdf_best3**6" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If all 6 values are less than or equal to `x`, that means that their maximum is less than or equal to `x`.\n", "So the result is the CDF of their maximum.\n", "We can convert it to a `Cdf` object, like this:" ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.147141Z", "iopub.status.busy": "2021-04-16T19:35:32.146607Z", "iopub.status.idle": "2021-04-16T19:35:32.148575Z", "shell.execute_reply": "2021-04-16T19:35:32.148928Z" } }, "outputs": [], "source": [ "from empiricaldist import Cdf\n", "\n", "cdf_max6 = Cdf(cdf_best3**6)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "And compute the equivalent `Pmf` like this:" ] }, { "cell_type": "code", "execution_count": 31, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.152335Z", "iopub.status.busy": "2021-04-16T19:35:32.151777Z", "iopub.status.idle": "2021-04-16T19:35:32.153560Z", "shell.execute_reply": "2021-04-16T19:35:32.154071Z" }, "tags": [] }, "outputs": [], "source": [ "pmf_max6 = cdf_max6.make_pmf()" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "The following figure shows the result." ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.174609Z", "iopub.status.busy": "2021-04-16T19:35:32.169559Z", "iopub.status.idle": "2021-04-16T19:35:32.328490Z", "shell.execute_reply": "2021-04-16T19:35:32.328844Z" }, "tags": [] }, "outputs": [], "source": [ "pmf_max6.plot(label='max of 6 attributes')\n", "\n", "decorate_dice('Distribution of attributes')" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "Most characters have at least one attribute greater than 12; almost 10% of them have an 18." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following figure shows the CDFs for the three distributions we have computed." 
] }, { "cell_type": "code", "execution_count": 33, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.376448Z", "iopub.status.busy": "2021-04-16T19:35:32.371540Z", "iopub.status.idle": "2021-04-16T19:35:32.535663Z", "shell.execute_reply": "2021-04-16T19:35:32.536011Z" }, "tags": [] }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "cdf_3d6 = pmf_3d6.make_cdf()\n", "cdf_3d6.plot(label='sum of 3 dice')\n", "\n", "cdf_best3 = pmf_best3.make_cdf()\n", "cdf_best3.plot(label='best 3 of 4 dice', ls='--')\n", "\n", "cdf_max6.plot(label='max of 6 attributes', ls=':')\n", "\n", "decorate_dice('Distribution of attributes')\n", "plt.ylabel('CDF');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`Cdf` provides `max_dist`, which does the same computation, so we can also compute the `Cdf` of the maximum like this:" ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.539561Z", "iopub.status.busy": "2021-04-16T19:35:32.539069Z", "iopub.status.idle": "2021-04-16T19:35:32.541075Z", "shell.execute_reply": "2021-04-16T19:35:32.540695Z" } }, "outputs": [], "source": [ "cdf_max_dist6 = cdf_best3.max_dist(6)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "And we can confirm that the differences are small." ] }, { "cell_type": "code", "execution_count": 35, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.544620Z", "iopub.status.busy": "2021-04-16T19:35:32.544171Z", "iopub.status.idle": "2021-04-16T19:35:32.546496Z", "shell.execute_reply": "2021-04-16T19:35:32.546875Z" }, "tags": [] }, "outputs": [], "source": [ "np.allclose(cdf_max_dist6, cdf_max6)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the next section we'll find the distribution of the minimum.\n", "The process is similar, but a little more complicated.\n", "See if you can figure it out before you go on." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Minimum\n", "\n", "In the previous section we computed the distribution of a character's best attribute.\n", "Now let's compute the distribution of the worst.\n", "\n", "To compute the distribution of the minimum, we'll use the **complementary CDF**, which we can compute like this:" ] }, { "cell_type": "code", "execution_count": 36, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.550107Z", "iopub.status.busy": "2021-04-16T19:35:32.549668Z", "iopub.status.idle": "2021-04-16T19:35:32.551795Z", "shell.execute_reply": "2021-04-16T19:35:32.552222Z" } }, "outputs": [], "source": [ "prob_gt = 1 - cdf_best3" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As the variable name suggests, the complementary CDF is the probability that a value from the distribution is greater than `x`.\n", "If we draw 6 values from the distribution, the probability that all 6 exceed `x` is:" ] }, { "cell_type": "code", "execution_count": 37, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.556052Z", "iopub.status.busy": "2021-04-16T19:35:32.555455Z", "iopub.status.idle": "2021-04-16T19:35:32.557352Z", "shell.execute_reply": "2021-04-16T19:35:32.557778Z" } }, "outputs": [], "source": [ "prob_gt6 = prob_gt**6" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If all 6 exceed `x`, that means their minimum exceeds `x`, so `prob_gt6` is the complementary CDF of the minimum.\n", "And that means we can compute the CDF of the minimum like this:" ] }, { "cell_type": "code", "execution_count": 38, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.561848Z", "iopub.status.busy": "2021-04-16T19:35:32.561049Z", "iopub.status.idle": "2021-04-16T19:35:32.562922Z", "shell.execute_reply": "2021-04-16T19:35:32.563376Z" } }, "outputs": [], "source": [ "prob_le6 = 1 - prob_gt6" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The result is a Pandas `Series` that represents the CDF of the minimum of six attributes. We can put those values in a `Cdf` object like this:" ] }, { "cell_type": "code", "execution_count": 39, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.566892Z", "iopub.status.busy": "2021-04-16T19:35:32.566369Z", "iopub.status.idle": "2021-04-16T19:35:32.568328Z", "shell.execute_reply": "2021-04-16T19:35:32.568768Z" } }, "outputs": [], "source": [ "cdf_min6 = Cdf(prob_le6)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's what it looks like, along with the distribution of the maximum." 
] }, { "cell_type": "code", "execution_count": 40, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.613126Z", "iopub.status.busy": "2021-04-16T19:35:32.604907Z", "iopub.status.idle": "2021-04-16T19:35:32.762095Z", "shell.execute_reply": "2021-04-16T19:35:32.761655Z" }, "tags": [] }, "outputs": [], "source": [ "cdf_min6.plot(color='C4', label='minimum of 6')\n", "cdf_max6.plot(color='C2', label='maximum of 6', ls=':')\n", "decorate_dice('Minimum and maximum of six attributes')\n", "plt.ylabel('CDF');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`Cdf` provides `min_dist`, which does the same computation, so we can also compute the `Cdf` of the minimum like this:" ] }, { "cell_type": "code", "execution_count": 41, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.766536Z", "iopub.status.busy": "2021-04-16T19:35:32.766111Z", "iopub.status.idle": "2021-04-16T19:35:32.767892Z", "shell.execute_reply": "2021-04-16T19:35:32.768318Z" } }, "outputs": [], "source": [ "cdf_min_dist6 = cdf_best3.min_dist(6)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And we can confirm that the differences are small." ] }, { "cell_type": "code", "execution_count": 42, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.772657Z", "iopub.status.busy": "2021-04-16T19:35:32.772097Z", "iopub.status.idle": "2021-04-16T19:35:32.774638Z", "shell.execute_reply": "2021-04-16T19:35:32.775069Z" } }, "outputs": [], "source": [ "np.allclose(cdf_min_dist6, cdf_min6)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the exercises at the end of this chapter, you'll use distributions of the minimum and maximum to do Bayesian inference.\n", "But first we'll see what happens when we mix distributions." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Mixture\n", "\n", "In this section I'll show how we can compute a distribution which is a mixture of other distributions.\n", "I'll explain what that means with some simple examples;\n", "then, more usefully, we'll see how these mixtures are used to make predictions.\n", "\n", "Here's another example inspired by *Dungeons & Dragons*:\n", "\n", "* Suppose your character is armed with a dagger in one hand and a short sword in the other.\n", "\n", "* During each round, you attack a monster with one of your two weapons, chosen at random.\n", "\n", "* The dagger causes one 4-sided die of damage; the short sword causes one 6-sided die of damage.\n", "\n", "What is the distribution of damage you inflict in each round?\n", "\n", "To answer this question, I'll make a `Pmf` to represent the 4-sided and 6-sided dice:" ] }, { "cell_type": "code", "execution_count": 43, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.779457Z", "iopub.status.busy": "2021-04-16T19:35:32.778936Z", "iopub.status.idle": "2021-04-16T19:35:32.781650Z", "shell.execute_reply": "2021-04-16T19:35:32.781084Z" } }, "outputs": [], "source": [ "d4 = make_die(4)\n", "d6 = make_die(6)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, let's compute the probability you inflict 1 point of damage.\n", "\n", "* If you attacked with the dagger, it's 1/4.\n", "\n", "* If you attacked with the short sword, it's 1/6.\n", "\n", "Because the probability of choosing either weapon is 1/2, the total probability is the average:" ] }, { "cell_type": "code", "execution_count": 44, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.786557Z", "iopub.status.busy": "2021-04-16T19:35:32.785642Z", "iopub.status.idle": "2021-04-16T19:35:32.788922Z", "shell.execute_reply": "2021-04-16T19:35:32.789450Z" } }, "outputs": [], "source": [ "prob_1 = (d4(1) + d6(1)) / 2\n", "prob_1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the outcomes 2, 3, and 4, the probability is the same, but for 5 and 6 it's different, because those outcomes are impossible with the 4-sided die." ] }, { "cell_type": "code", "execution_count": 45, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.794121Z", "iopub.status.busy": "2021-04-16T19:35:32.793553Z", "iopub.status.idle": "2021-04-16T19:35:32.796490Z", "shell.execute_reply": "2021-04-16T19:35:32.796042Z" } }, "outputs": [], "source": [ "prob_6 = (d4(6) + d6(6)) / 2\n", "prob_6" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To compute the distribution of the mixture, we could loop through the possible outcomes and compute their probabilities.\n", "\n", "But we can do the same computation using the `+` operator:" ] }, { "cell_type": "code", "execution_count": 46, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.802178Z", "iopub.status.busy": "2021-04-16T19:35:32.801710Z", "iopub.status.idle": "2021-04-16T19:35:32.803562Z", "shell.execute_reply": "2021-04-16T19:35:32.803939Z" } }, "outputs": [], "source": [ "mix1 = (d4 + d6) / 2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's what the mixture of these distributions looks like." 
] }, { "cell_type": "code", "execution_count": 47, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.828505Z", "iopub.status.busy": "2021-04-16T19:35:32.820367Z", "iopub.status.idle": "2021-04-16T19:35:32.966618Z", "shell.execute_reply": "2021-04-16T19:35:32.966177Z" }, "tags": [] }, "outputs": [], "source": [ "mix1.bar(alpha=0.7)\n", "decorate_dice('Mixture of one 4-sided and one 6-sided die')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now suppose you are fighting three monsters:\n", "\n", "* One has a club, which causes one 4-sided die of damage.\n", "\n", "* One has a mace, which causes one 6-sided die.\n", "\n", "* And one has a quarterstaff, which also causes one 6-sided die. \n", "\n", "Because the melee is disorganized, you are attacked by one of these monsters each round, chosen at random.\n", "To find the distribution of the damage they inflict, we can compute a weighted average of the distributions, like this:" ] }, { "cell_type": "code", "execution_count": 48, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.971304Z", "iopub.status.busy": "2021-04-16T19:35:32.970806Z", "iopub.status.idle": "2021-04-16T19:35:32.972845Z", "shell.execute_reply": "2021-04-16T19:35:32.972416Z" } }, "outputs": [], "source": [ "mix2 = (d4 + 2*d6) / 3" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This distribution is a mixture of one 4-sided die and two 6-sided dice.\n", "Here's what it looks like." ] }, { "cell_type": "code", "execution_count": 49, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:32.995932Z", "iopub.status.busy": "2021-04-16T19:35:32.995150Z", "iopub.status.idle": "2021-04-16T19:35:33.117724Z", "shell.execute_reply": "2021-04-16T19:35:33.117207Z" }, "tags": [] }, "outputs": [], "source": [ "mix2.bar(alpha=0.7)\n", "decorate_dice('Mixture of one 4-sided and two 6-sided die')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this section we used the `+` operator, which adds the probabilities in the distributions, not to be confused with `Pmf.add_dist`, which computes the distribution of the sum of the distributions.\n", "\n", "To demonstrate the difference, I'll use `Pmf.add_dist` to compute the distribution of the total damage done per round, which is the sum of the two mixtures:" ] }, { "cell_type": "code", "execution_count": 50, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.122025Z", "iopub.status.busy": "2021-04-16T19:35:33.121481Z", "iopub.status.idle": "2021-04-16T19:35:33.123747Z", "shell.execute_reply": "2021-04-16T19:35:33.123236Z" } }, "outputs": [], "source": [ "total_damage = Pmf.add_dist(mix1, mix2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here's what it looks like." 
] }, { "cell_type": "code", "execution_count": 51, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.139228Z", "iopub.status.busy": "2021-04-16T19:35:33.136581Z", "iopub.status.idle": "2021-04-16T19:35:33.278941Z", "shell.execute_reply": "2021-04-16T19:35:33.279427Z" } }, "outputs": [], "source": [ "total_damage.bar(alpha=0.7)\n", "decorate_dice('Total damage inflicted by both parties')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## General Mixtures\n", "\n", "In the previous section we computed mixtures in an *ad hoc* way.\n", "Now we'll see a more general solution.\n", "In future chapters, we'll use this solution to generate predictions for real-world problems, not just role-playing games.\n", "But if you'll bear with me, we'll continue the previous example for one more section.\n", "\n", "Suppose three more monsters join the combat, each of them with a battle axe that causes one 8-sided die of damage.\n", "Still, only one monster attacks per round, chosen at random, so the damage they inflict is a mixture of:\n", "\n", "* One 4-sided die,\n", "* Two 6-sided dice, and\n", "* Three 8-sided dice.\n", "\n", "I'll use a `Pmf` to represent a randomly chosen monster:" ] }, { "cell_type": "code", "execution_count": 52, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.286725Z", "iopub.status.busy": "2021-04-16T19:35:33.284730Z", "iopub.status.idle": "2021-04-16T19:35:33.290910Z", "shell.execute_reply": "2021-04-16T19:35:33.290555Z" } }, "outputs": [], "source": [ "hypos = [4,6,8]\n", "counts = [1,2,3]\n", "pmf_dice = Pmf(counts, hypos)\n", "pmf_dice.normalize()\n", "pmf_dice" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This distribution represents the number of sides on the die we'll roll and the probability of rolling each one.\n", "For example, one of the six monsters has a dagger, so the probability is $1/6$ that we roll a 4-sided die.\n", "\n", "Next I'll make a sequence of `Pmf` objects to represent the dice:" ] }, { "cell_type": "code", "execution_count": 53, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.294783Z", "iopub.status.busy": "2021-04-16T19:35:33.294222Z", "iopub.status.idle": "2021-04-16T19:35:33.295894Z", "shell.execute_reply": "2021-04-16T19:35:33.296242Z" } }, "outputs": [], "source": [ "dice = [make_die(sides) for sides in hypos]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To compute the distribution of the mixture, I'll compute the weighted average of the dice, using the probabilities in `pmf_dice` as the weights.\n", "\n", "To express this computation concisely, it is convenient to put the distributions into a Pandas `DataFrame`:" ] }, { "cell_type": "code", "execution_count": 54, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.300969Z", "iopub.status.busy": "2021-04-16T19:35:33.300520Z", "iopub.status.idle": "2021-04-16T19:35:33.309423Z", "shell.execute_reply": "2021-04-16T19:35:33.309778Z" } }, "outputs": [], "source": [ "import pandas as pd\n", "\n", "pd.DataFrame(dice)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The result is a `DataFrame` with one row for each distribution and one column for each possible outcome.\n", "Not all rows are the same length, so Pandas fills the extra spaces with the special value `NaN`, which stands for \"not a number\".\n", "We can use `fillna` to replace the `NaN` values with 0." 
] }, { "cell_type": "raw", "metadata": { "execution": { "iopub.execute_input": "2021-04-12T15:01:40.666810Z", "iopub.status.busy": "2021-04-12T15:01:40.666262Z", "iopub.status.idle": "2021-04-12T15:01:40.669604Z", "shell.execute_reply": "2021-04-12T15:01:40.669178Z" } }, "source": [ "pd.DataFrame(dice).fillna(0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next step is to multiply each row by the probabilities in `pmf_dice`, which turns out to be easier if we transpose the matrix so the distributions run down the columns rather than across the rows:" ] }, { "cell_type": "code", "execution_count": 55, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.314949Z", "iopub.status.busy": "2021-04-16T19:35:33.314409Z", "iopub.status.idle": "2021-04-16T19:35:33.316129Z", "shell.execute_reply": "2021-04-16T19:35:33.316499Z" } }, "outputs": [], "source": [ "df = pd.DataFrame(dice).fillna(0).transpose()" ] }, { "cell_type": "code", "execution_count": 56, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.324161Z", "iopub.status.busy": "2021-04-16T19:35:33.323375Z", "iopub.status.idle": "2021-04-16T19:35:33.326601Z", "shell.execute_reply": "2021-04-16T19:35:33.326987Z" }, "tags": [] }, "outputs": [], "source": [ "df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can multiply by the probabilities in `pmf_dice`:\n" ] }, { "cell_type": "code", "execution_count": 57, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.330625Z", "iopub.status.busy": "2021-04-16T19:35:33.328502Z", "iopub.status.idle": "2021-04-16T19:35:33.332816Z", "shell.execute_reply": "2021-04-16T19:35:33.332380Z" } }, "outputs": [], "source": [ "df *= pmf_dice.ps" ] }, { "cell_type": "code", "execution_count": 58, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.340823Z", "iopub.status.busy": "2021-04-16T19:35:33.340224Z", "iopub.status.idle": "2021-04-16T19:35:33.343890Z", "shell.execute_reply": "2021-04-16T19:35:33.343386Z" } }, "outputs": [], "source": [ "df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And add up the weighted distributions:" ] }, { "cell_type": "code", "execution_count": 59, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.349701Z", "iopub.status.busy": "2021-04-16T19:35:33.348913Z", "iopub.status.idle": "2021-04-16T19:35:33.352453Z", "shell.execute_reply": "2021-04-16T19:35:33.351964Z" }, "tags": [] }, "outputs": [], "source": [ "df.sum(axis=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The argument `axis=1` means we want to sum across the rows.\n", "The result is a Pandas `Series`.\n", "\n", "Putting it all together, here's a function that makes a weighted mixture of distributions." 
] }, { "cell_type": "code", "execution_count": 60, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.356915Z", "iopub.status.busy": "2021-04-16T19:35:33.356141Z", "iopub.status.idle": "2021-04-16T19:35:33.358351Z", "shell.execute_reply": "2021-04-16T19:35:33.358991Z" } }, "outputs": [], "source": [ "def make_mixture(pmf, pmf_seq):\n", " \"\"\"Make a mixture of distributions.\"\"\"\n", " df = pd.DataFrame(pmf_seq).fillna(0).transpose()\n", " df *= np.array(pmf)\n", " total = df.sum(axis=1)\n", " return Pmf(total)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The first parameter is a `Pmf` that maps from each hypothesis to a probability.\n", "The second parameter is a sequence of `Pmf` objects, one for each hypothesis.\n", "We can call it like this:" ] }, { "cell_type": "code", "execution_count": 61, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.365638Z", "iopub.status.busy": "2021-04-16T19:35:33.363553Z", "iopub.status.idle": "2021-04-16T19:35:33.367689Z", "shell.execute_reply": "2021-04-16T19:35:33.368168Z" } }, "outputs": [], "source": [ "mix = make_mixture(pmf_dice, dice)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here's what it looks like." ] }, { "cell_type": "code", "execution_count": 62, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.399930Z", "iopub.status.busy": "2021-04-16T19:35:33.390116Z", "iopub.status.idle": "2021-04-16T19:35:33.566607Z", "shell.execute_reply": "2021-04-16T19:35:33.566961Z" }, "tags": [] }, "outputs": [], "source": [ "mix.bar(label='mixture', alpha=0.6)\n", "decorate_dice('Distribution of damage with three different weapons')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this section I used Pandas so that `make_mixture` is concise, efficient, and hopefully not too hard to understand.\n", "In the exercises at the end of the chapter, you'll have a chance to practice with mixtures, and we will use `make_mixture` again in the next chapter." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summary\n", "\n", "This chapter introduces the `Cdf` object, which represents the cumulative distribution function (CDF).\n", "\n", "A `Pmf` and the corresponding `Cdf` are equivalent in the sense that they contain the same information, so you can convert from one to the other. \n", "The primary difference between them is performance: some operations are faster and easier with a `Pmf`; others are faster with a `Cdf`.\n", "\n", "In this chapter we used `Cdf` objects to compute distributions of maximums and minimums; these distributions are useful for inference if we are given a maximum or minimum as data.\n", "You will see some examples in the exercises, and in future chapters.\n", "We also computed mixtures of distributions, which we will use in the next chapter to make predictions.\n", "\n", "But first you might want to work on these exercises." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercises" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Exercise:** When you generate a D&D character, instead of rolling dice, you can use the \"standard array\" of attributes, which is 15, 14, 13, 12, 10, and 8.\n", "Do you think you are better off using the standard array or (literally) rolling the dice?\n", "\n", "Compare the distribution of the values in the standard array to the distribution we computed for the best three out of four:\n", "\n", "* Which distribution has higher mean? 
Use the `mean` method.\n", "\n", "* Which distribution has higher standard deviation? Use the `std` method.\n", "\n", "* The lowest value in the standard array is 8. For each attribute, what is the probability of getting a value less than 8? If you roll the dice six times, what's the probability that at least one of your attributes is less than 8?\n", "\n", "* The highest value in the standard array is 15. For each attribute, what is the probability of getting a value greater than 15? If you roll the dice six times, what's the probability that at least one of your attributes is greater than 15?" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "To get you started, here's a `Cdf` that represents the distribution of attributes in the standard array:" ] }, { "cell_type": "code", "execution_count": 63, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.571476Z", "iopub.status.busy": "2021-04-16T19:35:33.571037Z", "iopub.status.idle": "2021-04-16T19:35:33.573281Z", "shell.execute_reply": "2021-04-16T19:35:33.572815Z" }, "tags": [] }, "outputs": [], "source": [ "standard = [15,14,13,12,10,8]\n", "cdf_standard = Cdf.from_seq(standard)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "We can compare it to the distribution of attributes you get by rolling four dice and adding up the best three." ] }, { "cell_type": "code", "execution_count": 64, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.667038Z", "iopub.status.busy": "2021-04-16T19:35:33.649741Z", "iopub.status.idle": "2021-04-16T19:35:33.820988Z", "shell.execute_reply": "2021-04-16T19:35:33.820486Z" }, "tags": [] }, "outputs": [], "source": [ "cdf_best3.plot(label='best 3 of 4', color='C1', ls='--')\n", "cdf_standard.step(label='standard set', color='C7')\n", "\n", "decorate_dice('Distribution of attributes')\n", "plt.ylabel('CDF');" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "I plotted `cdf_standard` as a step function to show more clearly that it contains only a few quantities."
] }, { "cell_type": "code", "execution_count": 65, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.825146Z", "iopub.status.busy": "2021-04-16T19:35:33.824423Z", "iopub.status.idle": "2021-04-16T19:35:33.827188Z", "shell.execute_reply": "2021-04-16T19:35:33.826753Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 66, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.831439Z", "iopub.status.busy": "2021-04-16T19:35:33.830792Z", "iopub.status.idle": "2021-04-16T19:35:33.833271Z", "shell.execute_reply": "2021-04-16T19:35:33.833623Z" }, "scrolled": true }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 67, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.837780Z", "iopub.status.busy": "2021-04-16T19:35:33.837233Z", "iopub.status.idle": "2021-04-16T19:35:33.839569Z", "shell.execute_reply": "2021-04-16T19:35:33.839916Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 68, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.843912Z", "iopub.status.busy": "2021-04-16T19:35:33.843367Z", "iopub.status.idle": "2021-04-16T19:35:33.845820Z", "shell.execute_reply": "2021-04-16T19:35:33.846168Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 69, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.850671Z", "iopub.status.busy": "2021-04-16T19:35:33.850253Z", "iopub.status.idle": "2021-04-16T19:35:33.854102Z", "shell.execute_reply": "2021-04-16T19:35:33.853687Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 70, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.858526Z", "iopub.status.busy": "2021-04-16T19:35:33.858063Z", "iopub.status.idle": "2021-04-16T19:35:33.860312Z", "shell.execute_reply": "2021-04-16T19:35:33.860666Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Exercise:** Suppose you are fighting three monsters:\n", "\n", "* One is armed with a short sword that causes one 6-sided die of damage,\n", "\n", "* One is armed with a battle axe that causes one 8-sided die of damage, and\n", "\n", "* One is armed with a bastard sword that causes one 10-sided die of damage.\n", "\n", "One of the monsters, chosen at random, attacks you and does 1 point of damage.\n", "\n", "Which monster do you think it was? Compute the posterior probability that each monster was the attacker.\n", "\n", "If the same monster attacks you again, what is the probability that you suffer 6 points of damage?\n", "\n", "Hint: Compute a posterior distribution as we have done before and pass it as one of the arguments to `make_mixture`." 
] }, { "cell_type": "code", "execution_count": 71, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.869825Z", "iopub.status.busy": "2021-04-16T19:35:33.869229Z", "iopub.status.idle": "2021-04-16T19:35:33.871979Z", "shell.execute_reply": "2021-04-16T19:35:33.872359Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 72, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.876419Z", "iopub.status.busy": "2021-04-16T19:35:33.875994Z", "iopub.status.idle": "2021-04-16T19:35:33.877630Z", "shell.execute_reply": "2021-04-16T19:35:33.878036Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 73, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:33.903730Z", "iopub.status.busy": "2021-04-16T19:35:33.901601Z", "iopub.status.idle": "2021-04-16T19:35:34.009597Z", "shell.execute_reply": "2021-04-16T19:35:34.009942Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 74, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:34.013646Z", "iopub.status.busy": "2021-04-16T19:35:34.013169Z", "iopub.status.idle": "2021-04-16T19:35:34.015385Z", "shell.execute_reply": "2021-04-16T19:35:34.015763Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Exercise:** Henri Poincaré was a French mathematician who taught at the Sorbonne around 1900. The following anecdote about him is probably fiction, but it makes an interesting probability problem.\n", "\n", "Supposedly Poincaré suspected that his local bakery was selling loaves of bread that were lighter than the advertised weight of 1 kg, so every day for a year he bought a loaf of bread, brought it home and weighed it. At the end of the year, he plotted the distribution of his measurements and showed that it fit a normal distribution with mean 950 g and standard deviation 50 g. He brought this evidence to the bread police, who gave the baker a warning.\n", "\n", "For the next year, Poincaré continued to weigh his bread every day. At the end of the year, he found that the average weight was 1000 g, just as it should be, but again he complained to the bread police, and this time they fined the baker.\n", "\n", "Why? Because the shape of the new distribution was asymmetric. Unlike the normal distribution, it was skewed to the right, which is consistent with the hypothesis that the baker was still making 950 g loaves, but deliberately giving Poincaré the heavier ones.\n", "\n", "To see whether this anecdote is plausible, let's suppose that when the baker sees Poincaré coming, he hefts `n` loaves of bread and gives Poincaré the heaviest one. How many loaves would the baker have to heft to make the average of the maximum 1000 g?" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "To get you started, I'll generate a year's worth of data from a normal distribution with the given parameters." 
] }, { "cell_type": "code", "execution_count": 75, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:34.019380Z", "iopub.status.busy": "2021-04-16T19:35:34.018836Z", "iopub.status.idle": "2021-04-16T19:35:34.020500Z", "shell.execute_reply": "2021-04-16T19:35:34.020849Z" }, "tags": [] }, "outputs": [], "source": [ "mean = 950\n", "std = 50\n", "\n", "np.random.seed(17)\n", "sample = np.random.normal(mean, std, size=365)" ] }, { "cell_type": "code", "execution_count": 76, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:34.027571Z", "iopub.status.busy": "2021-04-16T19:35:34.024739Z", "iopub.status.idle": "2021-04-16T19:35:34.031491Z", "shell.execute_reply": "2021-04-16T19:35:34.031066Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 77, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:34.071771Z", "iopub.status.busy": "2021-04-16T19:35:34.052696Z", "iopub.status.idle": "2021-04-16T19:35:34.214328Z", "shell.execute_reply": "2021-04-16T19:35:34.213871Z" } }, "outputs": [], "source": [ "# Solution goes here" ] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.12" } }, "nbformat": 4, "nbformat_minor": 4 }