{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Estimating Counts" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "Think Bayes, Second Edition\n", "\n", "Copyright 2020 Allen B. Downey\n", "\n", "License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:19.989007Z", "iopub.status.busy": "2021-04-16T19:35:19.988518Z", "iopub.status.idle": "2021-04-16T19:35:19.991079Z", "shell.execute_reply": "2021-04-16T19:35:19.990545Z" }, "tags": [] }, "outputs": [], "source": [ "# If we're running on Colab, install empiricaldist\n", "# https://pypi.org/project/empiricaldist/\n", "\n", "import sys\n", "IN_COLAB = 'google.colab' in sys.modules\n", "\n", "if IN_COLAB:\n", " !pip install empiricaldist" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:19.994541Z", "iopub.status.busy": "2021-04-16T19:35:19.994130Z", "iopub.status.idle": "2021-04-16T19:35:19.995904Z", "shell.execute_reply": "2021-04-16T19:35:19.996257Z" }, "tags": [] }, "outputs": [], "source": [ "# Get utils.py\n", "\n", "from os.path import basename, exists\n", "\n", "def download(url):\n", " filename = basename(url)\n", " if not exists(filename):\n", " from urllib.request import urlretrieve\n", " local, _ = urlretrieve(url, filename)\n", " print('Downloaded ' + local)\n", " \n", "download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:19.998961Z", "iopub.status.busy": "2021-04-16T19:35:19.998546Z", "iopub.status.idle": "2021-04-16T19:35:20.677236Z", "shell.execute_reply": "2021-04-16T19:35:20.677597Z" }, "tags": [] }, "outputs": [], "source": [ "from utils import set_pyplot_params\n", "set_pyplot_params()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the previous chapter we solved problems that involve estimating proportions.\n", "In the Euro problem, we estimated the probability that a coin lands heads up, and in the exercises, you estimated a batting average, the fraction of people who cheat on their taxes, and the chance of shooting down an invading alien.\n", "\n", "Clearly, some of these problems are more realistic than others, and some are more useful than others.\n", "\n", "In this chapter, we'll work on problems related to counting, or estimating the size of a population.\n", "Again, some of the examples will seem silly, but some of them, like the German Tank problem, have real applications, sometimes in life and death situations." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Train Problem\n", "\n", "I found the train problem \n", "in Frederick Mosteller's, [*Fifty Challenging Problems in\n", " Probability with Solutions*](https://store.doverpublications.com/0486653552.html):\n", "\n", "> \"A railroad numbers its locomotives in order 1..N. One day you see a locomotive with the number 60. Estimate how many locomotives the railroad has.\"\n", "\n", "Based on this observation, we know the railroad has 60 or more\n", "locomotives. But how many more? 
To apply Bayesian reasoning, we\n", "can break this problem into two steps:\n", "\n", "* What did we know about $N$ before we saw the data?\n", "\n", "* For any given value of $N$, what is the likelihood of seeing the data (a locomotive with number 60)?\n", "\n", "The answer to the first question is the prior. The answer to the\n", "second is the likelihood.\n", "\n", "We don't have much basis to choose a prior, so we'll start with\n", "something simple and then consider alternatives.\n", "Let's assume that $N$ is equally likely to be any value from 1 to 1000.\n", "\n", "Here's the prior distribution:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:20.681706Z", "iopub.status.busy": "2021-04-16T19:35:20.681246Z", "iopub.status.idle": "2021-04-16T19:35:20.682954Z", "shell.execute_reply": "2021-04-16T19:35:20.683296Z" } }, "outputs": [], "source": [ "import numpy as np\n", "from empiricaldist import Pmf\n", "\n", "hypos = np.arange(1, 1001)\n", "prior = Pmf(1, hypos)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "Now let's figure out the likelihood of the data.\n", "In a hypothetical fleet of $N$ locomotives, what is the probability that we would see number 60?\n", "If we assume that we are equally likely to see any locomotive, the chance of seeing any particular one is $1/N$.\n", "\n", "Here's the function that does the update:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:20.687062Z", "iopub.status.busy": "2021-04-16T19:35:20.686637Z", "iopub.status.idle": "2021-04-16T19:35:20.688814Z", "shell.execute_reply": "2021-04-16T19:35:20.688368Z" } }, "outputs": [], "source": [ "def update_train(pmf, data):\n", " \"\"\"Update pmf based on new data.\"\"\"\n", " hypos = pmf.qs\n", " likelihood = 1 / hypos\n", " impossible = (data > hypos)\n", " likelihood[impossible] = 0\n", " pmf *= likelihood\n", " pmf.normalize()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This function might look familiar; it is the same as the update function for the dice problem in the previous chapter.\n", "In terms of likelihood, the train problem is the same as the dice problem.\n", "\n", "Here's the update:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:20.692485Z", "iopub.status.busy": "2021-04-16T19:35:20.692048Z", "iopub.status.idle": "2021-04-16T19:35:20.698353Z", "shell.execute_reply": "2021-04-16T19:35:20.698751Z" } }, "outputs": [], "source": [ "data = 60\n", "posterior = prior.copy()\n", "update_train(posterior, data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's what the posterior looks like:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:20.702138Z", "iopub.status.busy": "2021-04-16T19:35:20.701696Z", "iopub.status.idle": "2021-04-16T19:35:20.889486Z", "shell.execute_reply": "2021-04-16T19:35:20.889838Z" }, "tags": [] }, "outputs": [], "source": [ "from utils import decorate\n", "\n", "posterior.plot(label='Posterior after train 60', color='C4')\n", "decorate(xlabel='Number of trains',\n", " ylabel='PMF',\n", " title='Posterior distribution')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Not surprisingly, all values of $N$ below 60 have been eliminated.\n", "\n", "The most likely value, if you had to guess, is 60." 
] }, { "cell_type": "code", "execution_count": 8, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:20.894441Z", "iopub.status.busy": "2021-04-16T19:35:20.893628Z", "iopub.status.idle": "2021-04-16T19:35:20.896691Z", "shell.execute_reply": "2021-04-16T19:35:20.896316Z" } }, "outputs": [], "source": [ "posterior.max_prob()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That might not seem like a very good guess; after all, what are the chances that you just happened to see the train with the highest number?\n", "Nevertheless, if you want to maximize the chance of getting\n", "the answer exactly right, you should guess 60.\n", "\n", "But maybe that's not the right goal.\n", "An alternative is to compute the mean of the posterior distribution.\n", "Given a set of possible quantities, $q_i$, and their probabilities, $p_i$, the mean of the distribution is:\n", "\n", "$$\\mathrm{mean} = \\sum_i p_i q_i$$\n", "\n", "Which we can compute like this:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:20.900289Z", "iopub.status.busy": "2021-04-16T19:35:20.899837Z", "iopub.status.idle": "2021-04-16T19:35:20.902096Z", "shell.execute_reply": "2021-04-16T19:35:20.902445Z" } }, "outputs": [], "source": [ "np.sum(posterior.ps * posterior.qs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or we can use the method provided by `Pmf`:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:20.906600Z", "iopub.status.busy": "2021-04-16T19:35:20.905906Z", "iopub.status.idle": "2021-04-16T19:35:20.909638Z", "shell.execute_reply": "2021-04-16T19:35:20.909046Z" } }, "outputs": [], "source": [ "posterior.mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The mean of the posterior is 333, so that might be a good guess if you want to minimize error.\n", "If you played this guessing game over and over, using the mean of the posterior as your estimate would minimize the [mean squared error](http://en.wikipedia.org/wiki/Minimum_mean_square_error) over the long run." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Sensitivity to the Prior\n", "\n", "The prior I used in the previous section is uniform from 1 to 1000, but I offered no justification for choosing a uniform distribution or that particular upper bound.\n", "We might wonder whether the posterior distribution is sensitive to the prior.\n", "With so little data---only one observation---it is.\n", "\n", "This table shows what happens as we vary the upper bound:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:20.921293Z", "iopub.status.busy": "2021-04-16T19:35:20.920199Z", "iopub.status.idle": "2021-04-16T19:35:20.935921Z", "shell.execute_reply": "2021-04-16T19:35:20.935395Z" }, "tags": [] }, "outputs": [], "source": [ "import pandas as pd\n", "\n", "df = pd.DataFrame(columns=['Posterior mean'])\n", "df.index.name = 'Upper bound'\n", "\n", "for high in [500, 1000, 2000]:\n", " hypos = np.arange(1, high+1)\n", " pmf = Pmf(1, hypos)\n", " update_train(pmf, data=60)\n", " df.loc[high] = pmf.mean()\n", " \n", "df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we vary the upper bound, the posterior mean changes substantially.\n", "So that's bad. 
\n", "\n", "When the posterior is sensitive to the prior, there are two ways to proceed:\n", "\n", "* Get more data.\n", "\n", "* Get more background information and choose a better prior.\n", "\n", "With more data, posterior distributions based on different priors tend to converge. \n", "For example, suppose that in addition to train 60 we also see trains 30 and 90.\n", "\n", "Here's how the posterior means depend on the upper bound of the prior, when we observe three trains:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:20.946439Z", "iopub.status.busy": "2021-04-16T19:35:20.945991Z", "iopub.status.idle": "2021-04-16T19:35:20.958263Z", "shell.execute_reply": "2021-04-16T19:35:20.958659Z" }, "tags": [] }, "outputs": [], "source": [ "df = pd.DataFrame(columns=['Posterior mean'])\n", "df.index.name = 'Upper bound'\n", "\n", "dataset = [30, 60, 90]\n", "\n", "for high in [500, 1000, 2000]:\n", " hypos = np.arange(1, high+1)\n", " pmf = Pmf(1, hypos)\n", " for data in dataset:\n", " update_train(pmf, data)\n", " df.loc[high] = pmf.mean()\n", " \n", "df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The differences are smaller, but apparently three trains are not enough for the posteriors to converge." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Power Law Prior\n", "\n", "If more data are not available, another option is to improve the\n", "priors by gathering more background information.\n", "It is probably not reasonable to assume that a train-operating company with 1000 locomotives is just as likely as a company with only 1.\n", "\n", "With some effort, we could probably find a list of companies that\n", "operate locomotives in the area of observation.\n", "Or we could interview an expert in rail shipping to gather information about the typical size of companies.\n", "\n", "But even without getting into the specifics of railroad economics, we\n", "can make some educated guesses.\n", "In most fields, there are many small companies, fewer medium-sized companies, and only one or two very large companies.\n", "\n", "In fact, the distribution of company sizes tends to follow a power law, as Robert Axtell reports in *Science* ().\n", "\n", "This law suggests that if there are 1000 companies with fewer than\n", "10 locomotives, there might be 100 companies with 100 locomotives,\n", "10 companies with 1000, and possibly one company with 10,000 locomotives.\n", "\n", "Mathematically, a power law means that the number of companies with a given size, $N$, is proportional to $(1/N)^{\\alpha}$, where $\\alpha$ is a parameter that is often near 1.\n", "\n", "We can construct a power law prior like this:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:20.963600Z", "iopub.status.busy": "2021-04-16T19:35:20.963012Z", "iopub.status.idle": "2021-04-16T19:35:20.965471Z", "shell.execute_reply": "2021-04-16T19:35:20.965831Z" }, "tags": [] }, "outputs": [], "source": [ "alpha = 1.0\n", "ps = hypos**(-alpha)\n", "power = Pmf(ps, hypos, name='power law')\n", "power.normalize()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For comparison, here's the uniform prior again." 
] }, { "cell_type": "code", "execution_count": 14, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:20.970595Z", "iopub.status.busy": "2021-04-16T19:35:20.969882Z", "iopub.status.idle": "2021-04-16T19:35:20.972668Z", "shell.execute_reply": "2021-04-16T19:35:20.972223Z" } }, "outputs": [], "source": [ "hypos = np.arange(1, 1001)\n", "uniform = Pmf(1, hypos, name='uniform')\n", "uniform.normalize()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's what a power law prior looks like, compared to the uniform prior:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:20.993580Z", "iopub.status.busy": "2021-04-16T19:35:20.987921Z", "iopub.status.idle": "2021-04-16T19:35:21.145573Z", "shell.execute_reply": "2021-04-16T19:35:21.145133Z" }, "tags": [] }, "outputs": [], "source": [ "uniform.plot(color='C4')\n", "power.plot(color='C1')\n", "\n", "decorate(xlabel='Number of trains',\n", " ylabel='PMF',\n", " title='Prior distributions')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's the update for both priors." ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.149556Z", "iopub.status.busy": "2021-04-16T19:35:21.148612Z", "iopub.status.idle": "2021-04-16T19:35:21.152684Z", "shell.execute_reply": "2021-04-16T19:35:21.152290Z" } }, "outputs": [], "source": [ "dataset = [60]\n", "update_train(uniform, dataset)\n", "update_train(power, dataset)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here are the posterior distributions." ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.168591Z", "iopub.status.busy": "2021-04-16T19:35:21.168151Z", "iopub.status.idle": "2021-04-16T19:35:21.299407Z", "shell.execute_reply": "2021-04-16T19:35:21.299044Z" }, "tags": [] }, "outputs": [], "source": [ "uniform.plot(color='C4')\n", "power.plot(color='C1')\n", "\n", "decorate(xlabel='Number of trains',\n", " ylabel='PMF',\n", " title='Posterior distributions')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The power law gives less prior probability to high values, which yields lower posterior means, and less sensitivity to the upper bound.\n", "\n", "Here's how the posterior means depend on the upper bound when we use a power law prior and observe three trains:" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.310454Z", "iopub.status.busy": "2021-04-16T19:35:21.308708Z", "iopub.status.idle": "2021-04-16T19:35:21.322722Z", "shell.execute_reply": "2021-04-16T19:35:21.323129Z" }, "tags": [] }, "outputs": [], "source": [ "df = pd.DataFrame(columns=['Posterior mean'])\n", "df.index.name = 'Upper bound'\n", "\n", "alpha = 1.0\n", "dataset = [30, 60, 90]\n", "\n", "for high in [500, 1000, 2000]:\n", " hypos = np.arange(1, high+1)\n", " ps = hypos**(-alpha)\n", " power = Pmf(ps, hypos)\n", " for data in dataset:\n", " update_train(power, data)\n", " df.loc[high] = power.mean()\n", " \n", "df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now the differences are much smaller. In fact,\n", "with an arbitrarily large upper bound, the mean converges on 134.\n", "\n", "So the power law prior is more realistic, because it is based on\n", "general information about the size of companies, and it behaves better in practice." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Credible Intervals\n", "\n", "So far we have seen two ways to summarize a posterior distribution: the value with the highest posterior probability (the MAP) and the posterior mean.\n", "These are both **point estimates**, that is, single values that estimate the quantity we are interested in.\n", "\n", "Another way to summarize a posterior distribution is with percentiles.\n", "If you have taken a standardized test, you might be familiar with percentiles.\n", "For example, if your score is the 90th percentile, that means you did as well as or better than 90\\% of the people who took the test.\n", "\n", "If we are given a value, `x`, we can compute its **percentile rank** by finding all values less than or equal to `x` and adding up their probabilities.\n", "\n", "`Pmf` provides a method that does this computation.\n", "So, for example, we can compute the probability that the company has less than or equal to 100 trains:" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.326884Z", "iopub.status.busy": "2021-04-16T19:35:21.326371Z", "iopub.status.idle": "2021-04-16T19:35:21.329117Z", "shell.execute_reply": "2021-04-16T19:35:21.328729Z" } }, "outputs": [], "source": [ "power.prob_le(100)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With a power law prior and a dataset of three trains, the result is about 29%.\n", "So 100 trains is the 29th percentile.\n", "\n", "Going the other way, suppose we want to compute a particular percentile; for example, the median of a distribution is the 50th percentile.\n", "We can compute it by adding up probabilities until the total exceeds 0.5.\n", "Here's a function that does it:" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.332553Z", "iopub.status.busy": "2021-04-16T19:35:21.332041Z", "iopub.status.idle": "2021-04-16T19:35:21.334402Z", "shell.execute_reply": "2021-04-16T19:35:21.333955Z" } }, "outputs": [], "source": [ "def quantile(pmf, prob):\n", " \"\"\"Compute a quantile with the given prob.\"\"\"\n", " total = 0\n", " for q, p in pmf.items():\n", " total += p\n", " if total >= prob:\n", " return q\n", " return np.nan" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The loop uses `items`, which iterates the quantities and probabilities in the distribution.\n", "Inside the loop we add up the probabilities of the quantities in order.\n", "When the total equals or exceeds `prob`, we return the corresponding quantity.\n", "\n", "This function is called `quantile` because it computes a quantile rather than a percentile.\n", "The difference is the way we specify `prob`.\n", "If `prob` is a percentage between 0 and 100, we call the corresponding quantity a percentile.\n", "If `prob` is a probability between 0 and 1, we call the corresponding quantity a **quantile**.\n", "\n", "Here's how we can use this function to compute the 50th percentile of the posterior distribution:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.337871Z", "iopub.status.busy": "2021-04-16T19:35:21.337255Z", "iopub.status.idle": "2021-04-16T19:35:21.340128Z", "shell.execute_reply": "2021-04-16T19:35:21.339696Z" } }, "outputs": [], "source": [ "quantile(power, 0.5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The result, 113 trains, is the median of the 
posterior distribution.\n", "\n", "`Pmf` provides a method called `quantile` that does the same thing.\n", "We can call it like this to compute the 5th and 95th percentiles:" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.344566Z", "iopub.status.busy": "2021-04-16T19:35:21.343907Z", "iopub.status.idle": "2021-04-16T19:35:21.346263Z", "shell.execute_reply": "2021-04-16T19:35:21.346610Z" } }, "outputs": [], "source": [ "power.quantile([0.05, 0.95])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The result is the interval from 91 to 243 trains, which implies:\n", "\n", "* The probability is 5% that the number of trains is less than or equal to 91.\n", "\n", "* The probability is 5% that the number of trains is greater than 243.\n", "\n", "Therefore the probability is 90% that the number of trains falls between 91 and 243 (excluding 91 and including 243).\n", "For this reason, this interval is called a 90% **credible interval**.\n", "\n", "`Pmf` also provides `credible_interval`, which computes an interval that contains the given probability." ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.350591Z", "iopub.status.busy": "2021-04-16T19:35:21.349787Z", "iopub.status.idle": "2021-04-16T19:35:21.353876Z", "shell.execute_reply": "2021-04-16T19:35:21.353340Z" } }, "outputs": [], "source": [ "power.credible_interval(0.9)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The German Tank Problem\n", "\n", "During World War II, the Economic Warfare Division of the American\n", "Embassy in London used statistical analysis to estimate German\n", "production of tanks and other equipment.\n", "\n", "The Western Allies had captured log books, inventories, and repair\n", "records that included chassis and engine serial numbers for individual\n", "tanks.\n", "\n", "Analysis of these records indicated that serial numbers were allocated\n", "by manufacturer and tank type in blocks of 100 numbers, that numbers\n", "in each block were used sequentially, and that not all numbers in each\n", "block were used. So the problem of estimating German tank production\n", "could be reduced, within each block of 100 numbers, to a form of the\n", "train problem.\n", "\n", "Based on this insight, American and British analysts produced\n", "estimates substantially lower than estimates from other forms\n", "of intelligence. And after the war, records indicated that they were\n", "substantially more accurate.\n", "\n", "They performed similar analyses for tires, trucks, rockets, and other\n", "equipment, yielding accurate and actionable economic intelligence.\n", "\n", "The German tank problem is historically interesting; it is also a nice\n", "example of real-world application of statistical estimation.\n", "\n", "For more on this problem, see [this Wikipedia page](https://en.wikipedia.org/wiki/German_tank_problem) and Ruggles and Brodie, \"An Empirical Approach to Economic Intelligence in World War II\", *Journal of the American Statistical Association*, March 1947, [available here](https://web.archive.org/web/20170123132042/https://www.cia.gov/library/readingroom/docs/CIA-RDP79R01001A001300010013-3.pdf)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Informative Priors\n", "\n", "Among Bayesians, there are two approaches to choosing prior\n", "distributions. 
Some recommend choosing the prior that best represents\n", "background information about the problem; in that case the prior\n", "is said to be **informative**. The problem with using an informative\n", "prior is that people might have different information or\n", "interpret it differently. So informative priors might seem arbitrary.\n", "\n", "The alternative is a so-called **uninformative prior**, which is\n", "intended to be as unrestricted as possible, in order to let the data\n", "speak for itself. In some cases you can identify a unique prior\n", "that has some desirable property, like representing minimal prior\n", "information about the estimated quantity.\n", "\n", "Uninformative priors are appealing because they seem more\n", "objective. But I am generally in favor of using informative priors.\n", "Why? First, Bayesian analysis is always based on\n", "modeling decisions. Choosing the prior is one of those decisions, but\n", "it is not the only one, and it might not even be the most subjective.\n", "So even if an uninformative prior is more objective, the entire analysis is still subjective.\n", "\n", "Also, for most practical problems, you are likely to be in one of two\n", "situations: either you have a lot of data or not very much. If you have a lot of data, the choice of the prior doesn't matter;\n", "informative and uninformative priors yield almost the same results.\n", "If you don't have much data, using relevant background information (like the power law distribution) makes a big difference.\n", "\n", "And if, as in the German tank problem, you have to make life and death\n", "decisions based on your results, you should probably use all of the\n", "information at your disposal, rather than maintaining the illusion of\n", "objectivity by pretending to know less than you do." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summary\n", "\n", "This chapter introduces the train problem, which turns out to have the same likelihood function as the dice problem, and which can be applied to the German tank problem.\n", "In all of these examples, the goal is to estimate a count, or the size of a population.\n", "\n", "In the next chapter, I'll introduce \"odds\" as an alternative to probabilities, and Bayes's Rule as an alternative form of Bayes's Theorem.\n", "We'll compute distributions of sums and products, and use them to estimate the number of Members of Congress who are corrupt, among other problems.\n", "\n", "But first, you might want to work on these exercises." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercises" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Exercise:** Suppose you are giving a talk in a large lecture hall and the fire marshal interrupts because they think the audience exceeds 1200 people, which is the safe capacity of the room. \n", "\n", "You think there are fewer than 1200 people, and you offer to prove it.\n", "It would take too long to count, so you try an experiment:\n", "\n", "* You ask how many people were born on May 11 and two people raise their hands. \n", "\n", "* You ask how many were born on May 23 and one person raises their hand.\n", "\n", "* Finally, you ask how many were born on August 1, and no one raises their hand.\n", "\n", "How many people are in the audience? What is the probability that there are more than 1200 people?\n", "Hint: Remember the binomial distribution."
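, "\n", "\n", "Before the solution cells below, here is a hint-level sketch (not the solution): it uses the binomial distribution from `scipy.stats` to compute the likelihood of the observed counts for a single hypothetical audience size. The value `n = 1000` is an arbitrary choice for illustration." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch only (not the solution): likelihood of the observed counts\n", "# (2, 1, and 0 people born on three specific days) for one hypothetical\n", "# audience size. The audience size n = 1000 is an arbitrary choice.\n", "\n", "from scipy.stats import binom\n", "\n", "n = 1000\n", "p = 1 / 365\n", "binom.pmf(2, n, p) * binom.pmf(1, n, p) * binom.pmf(0, n, p)"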
] }, { "cell_type": "code", "execution_count": 24, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.359329Z", "iopub.status.busy": "2021-04-16T19:35:21.358751Z", "iopub.status.idle": "2021-04-16T19:35:21.363183Z", "shell.execute_reply": "2021-04-16T19:35:21.363696Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.370055Z", "iopub.status.busy": "2021-04-16T19:35:21.369426Z", "iopub.status.idle": "2021-04-16T19:35:21.371847Z", "shell.execute_reply": "2021-04-16T19:35:21.372439Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.380246Z", "iopub.status.busy": "2021-04-16T19:35:21.379424Z", "iopub.status.idle": "2021-04-16T19:35:21.382846Z", "shell.execute_reply": "2021-04-16T19:35:21.383434Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.406019Z", "iopub.status.busy": "2021-04-16T19:35:21.400248Z", "iopub.status.idle": "2021-04-16T19:35:21.545515Z", "shell.execute_reply": "2021-04-16T19:35:21.545140Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.548960Z", "iopub.status.busy": "2021-04-16T19:35:21.548544Z", "iopub.status.idle": "2021-04-16T19:35:21.553168Z", "shell.execute_reply": "2021-04-16T19:35:21.552768Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.557382Z", "iopub.status.busy": "2021-04-16T19:35:21.556868Z", "iopub.status.idle": "2021-04-16T19:35:21.560089Z", "shell.execute_reply": "2021-04-16T19:35:21.560441Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Exercise:** I often see [rabbits](https://en.wikipedia.org/wiki/Eastern_cottontail) in the garden behind my house, but it's not easy to tell them apart, so I don't really know how many there are.\n", "\n", "Suppose I deploy a motion-sensing [camera trap](https://en.wikipedia.org/wiki/Camera_trap) that takes a picture of the first rabbit it sees each day. After three days, I compare the pictures and conclude that two of them are the same rabbit and the other is different.\n", "\n", "How many rabbits visit my garden?\n", "\n", "To answer this question, we have to think about the prior distribution and the likelihood of the data:\n", "\n", "* I have sometimes seen four rabbits at the same time, so I know there are at least that many. I would be surprised if there were more than 10. So, at least as a starting place, I think a uniform prior from 4 to 10 is reasonable.\n", "\n", "* To keep things simple, let's assume that all rabbits who visit my garden are equally likely to be caught by the camera trap in a given day. Let's also assume it is guaranteed that the camera trap gets a picture every day." 
] }, { "cell_type": "code", "execution_count": 30, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.563963Z", "iopub.status.busy": "2021-04-16T19:35:21.563510Z", "iopub.status.idle": "2021-04-16T19:35:21.565271Z", "shell.execute_reply": "2021-04-16T19:35:21.565631Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 31, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.568713Z", "iopub.status.busy": "2021-04-16T19:35:21.568291Z", "iopub.status.idle": "2021-04-16T19:35:21.570022Z", "shell.execute_reply": "2021-04-16T19:35:21.570340Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.594482Z", "iopub.status.busy": "2021-04-16T19:35:21.590147Z", "iopub.status.idle": "2021-04-16T19:35:21.735651Z", "shell.execute_reply": "2021-04-16T19:35:21.735296Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Exercise:** Suppose that in the criminal justice system, all prison sentences are either 1, 2, or 3 years, with an equal number of each. One day, you visit a prison and choose a prisoner at random. What is the probability that they are serving a 3-year sentence? What is the average remaining sentence of the prisoners you observe?" ] }, { "cell_type": "code", "execution_count": 33, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.741573Z", "iopub.status.busy": "2021-04-16T19:35:21.741016Z", "iopub.status.idle": "2021-04-16T19:35:21.743433Z", "shell.execute_reply": "2021-04-16T19:35:21.743783Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.750272Z", "iopub.status.busy": "2021-04-16T19:35:21.749630Z", "iopub.status.idle": "2021-04-16T19:35:21.752134Z", "shell.execute_reply": "2021-04-16T19:35:21.752482Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 35, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.756049Z", "iopub.status.busy": "2021-04-16T19:35:21.755543Z", "iopub.status.idle": "2021-04-16T19:35:21.758197Z", "shell.execute_reply": "2021-04-16T19:35:21.757838Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Exercise:** If I chose a random adult in the U.S., what is the probability that they have a sibling? To be precise, what is the probability that their mother has had at least one other child.\n", "\n", "[This article from the Pew Research Center](https://www.pewsocialtrends.org/2015/05/07/family-size-among-mothers/) provides some relevant data. " ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "From it, I extracted the following distribution of family size for mothers in the U.S. 
who were 40-44 years old in 2014:" ] }, { "cell_type": "code", "execution_count": 36, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.774819Z", "iopub.status.busy": "2021-04-16T19:35:21.773579Z", "iopub.status.idle": "2021-04-16T19:35:21.902764Z", "shell.execute_reply": "2021-04-16T19:35:21.903248Z" }, "tags": [] }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "qs = [1, 2, 3, 4]\n", "ps = [22, 41, 24, 14]\n", "prior = Pmf(ps, qs)\n", "prior.bar(alpha=0.7)\n", "\n", "plt.xticks(qs, ['1 child', '2 children', '3 children', '4+ children'])\n", "decorate(ylabel='PMF',\n", " title='Distribution of family size')" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "For simplicity, let's assume that all families in the 4+ category have exactly 4 children." ] }, { "cell_type": "code", "execution_count": 37, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.911850Z", "iopub.status.busy": "2021-04-16T19:35:21.911271Z", "iopub.status.idle": "2021-04-16T19:35:21.916004Z", "shell.execute_reply": "2021-04-16T19:35:21.916486Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 38, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.920242Z", "iopub.status.busy": "2021-04-16T19:35:21.919748Z", "iopub.status.idle": "2021-04-16T19:35:21.922344Z", "shell.execute_reply": "2021-04-16T19:35:21.921990Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 39, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.926250Z", "iopub.status.busy": "2021-04-16T19:35:21.925631Z", "iopub.status.idle": "2021-04-16T19:35:21.928050Z", "shell.execute_reply": "2021-04-16T19:35:21.928401Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Exercise:** The [Doomsday argument](https://en.wikipedia.org/wiki/Doomsday_argument) is \"a probabilistic argument that claims to predict the number of future members of the human species given an estimate of the total number of humans born so far.\"\n", "\n", "Suppose there are only two kinds of intelligent civilizations that can happen in the universe. The \"short-lived\" kind go extinct after only 200 billion individuals are born. The \"long-lived\" kind survive until 2,000 billion individuals are born.\n", "And suppose that the two kinds of civilization are equally likely.\n", "Which kind of civilization do you think we live in? \n", "\n", "The Doomsday argument says we can use the total number of humans born so far as data.\n", "According to the [Population Reference Bureau](https://www.prb.org/howmanypeoplehaveeverlivedonearth/), the total number of people who have ever lived is about 108 billion.\n", "\n", "Since you were born quite recently, let's assume that you are, in fact, human being number 108 billion.\n", "If $N$ is the total number who will ever live and we consider you to be a randomly-chosen person, it is equally likely that you could have been person 1, or $N$, or any number in between.\n", "So what is the probability that you would be number 108 billion?\n", "\n", "Given this data and this dubious prior, what is the probability that our civilization will be short-lived?"
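, "\n", "\n", "As a sketch of the setup (not the solution), the next cell expresses the two hypotheses in billions of people, an equal prior, and the likelihood of being person number 108 billion under each hypothesis, which is $1/N$ as in the train problem. The names `hypos_doom`, `prior_doom`, and `likelihood_doom` are illustrative, and the update is left for the solution cells." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch only (not the solution): two hypotheses for the total number\n", "# of humans who will ever live (in billions), an equal prior, and the\n", "# likelihood of being person number 108 billion under each hypothesis\n", "# (1/N, as in the train problem). The update is left for the solution\n", "# cells below.\n", "\n", "hypos_doom = np.array([200, 2000])\n", "prior_doom = Pmf(1/2, hypos_doom)\n", "\n", "likelihood_doom = 1 / hypos_doom\n", "likelihood_doom"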
] }, { "cell_type": "code", "execution_count": 40, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.931981Z", "iopub.status.busy": "2021-04-16T19:35:21.931530Z", "iopub.status.idle": "2021-04-16T19:35:21.933115Z", "shell.execute_reply": "2021-04-16T19:35:21.933463Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 41, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.940200Z", "iopub.status.busy": "2021-04-16T19:35:21.939749Z", "iopub.status.idle": "2021-04-16T19:35:21.942386Z", "shell.execute_reply": "2021-04-16T19:35:21.942831Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 42, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:35:21.946847Z", "iopub.status.busy": "2021-04-16T19:35:21.946057Z", "iopub.status.idle": "2021-04-16T19:35:21.948588Z", "shell.execute_reply": "2021-04-16T19:35:21.948054Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 2 }