{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Resampling" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "*Elements of Data Science*\n", "\n", "Copyright 2021 [Allen B. Downey](https://allendowney.com)\n", "\n", "License: [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Click here to run this notebook on Colab](https://colab.research.google.com/github/AllenDowney/ElementsOfDataScience/blob/master/11_inference.ipynb) or\n", "[click here to download it](https://github.com/AllenDowney/ElementsOfDataScience/raw/master/11_inference.ipynb)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This chapter introduces [resampling methods](https://en.wikipedia.org/wiki/Resampling_(statistics)), which are used to quantify the precision of an estimate.\n", "\n", "As examples, we'll use results a vaccine trial to estimate the efficacy of the vaccine, data from the BRFSS to estimate the average height of men in the U.S., and data from the General Social Survey to see how support for gun control has changed over time. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Vaccine Testing\n", "\n", "Suppose you read a report about a new vaccine and the manufacturer says it is 67% effective at preventing disease.\n", "You might wonder where that number comes from, what it means, and how confident we should be that it is correct.\n", "\n", "Results like this often come from a [randomized controlled trial](https://en.wikipedia.org/wiki/Randomized_controlled_trial) (RCT), which works like this:\n", "\n", "* You recruit a large group of volunteers and divide them into two groups at random: the \"treatment group\" receives the vaccine; the \"control group\" does not.\n", "\n", "* Then you follow both groups for a period of time and record the number of people in each group who are diagnosed with the disease.\n", "\n", "As an example, suppose you recruit 43,783 participants and they are assigned to groups with approximately the same size." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "n_control = 21885\n", "n_treatment = 21911" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "During the observation period, 468 people are diagnosed with the disease: 352 in the control group and 116 in the treatment group." 
] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "k_control = 352\n", "k_treatment = 116" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use these results to compute the risk of getting the disease for each group, in cases per 1000 people" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "risk_control = k_control / n_control * 1000\n", "risk_control" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "risk_treatment = k_treatment / n_treatment * 1000\n", "risk_treatment" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The risk is substantially lower in the treatment group -- about 5.2 per 1000, compared to 16 -- which suggests that the vaccine is effective.\n", "We can summarize these results by computing [relative risk](https://en.wikipedia.org/wiki/Relative_risk), which is the ratio of the two risks:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "relative_risk = risk_treatment / risk_control\n", "relative_risk" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The relative risk in this example is about 0.33, which means that the risk of disease in the treatment group is 33% of the risk in the control group.\n", "Equivalently, we could report the complement of relative risk, which is **efficacy**:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "efficacy = 1 - relative_risk\n", "efficacy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this example the efficacy is `0.67`, which means that the vaccine reduces the risk of disease by 67%.\n", "\n", "That's good news, but as skeptical data scientists, we should not assume that it is perfectly accurate.\n", "There are any number of things that might have gone wrong.\n", "\n", "For example, if people in the treatment group know they have been vaccinated, they might take fewer precautions to prevent disease, and people in the control group might be more careful.\n", "That would affect the estimated efficacy, which is why a lot of trials are \"blinded\", meaning that the subjects don't know which group they are in.\n", "\n", "The estimate would also be less accurate if people in either group don't follow the protocol.\n", "For example, someone in the treatment group might not complete treatment, or someone in the control group might receive treatment from another source.\n", "\n", "And there are many other possible sources of error, including honest mistakes and deliberate fraud." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In general it is hard to know whether estimates like this are accurate; nevertheless, there are things we can do to assess their quality.\n", "\n", "When estimates are reported in scientific journals, they almost always include one of two measurements of uncertainty: a standard error or a confidence interval.\n", "In the next section, I'll explain what they mean and show how to compute them." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Simulating One Group\n", "\n", "In our hypothetical example, there are 21 911 people in the treatment group and 116 of them got the disease, so the estimated risk is small." 
] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "n_treatment, k_treatment, risk_treatment" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But it's easy to imagine that there might have been a few more cases, or fewer, just by chance.\n", "For example, if there had been 10 more cases, the estimated risk would be 5.8 per 1000, and if there had been 10 fewer, it would be 4.8." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "126 / n_treatment * 1000, 106 / n_treatment * 1000" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That's a big enough difference that we should wonder how much variability there is in the estimate due to random variation.\n", "We'll answer that question in three steps:\n", "\n", "* We'll write a function that uses a random number generator to simulate the trial, then\n", "\n", "* We'll run the function 1000 times to see how much the estimate varies.\n", "\n", "* And we'll summarize the results.\n", "\n", "The following function takes two parameters: `n` is the number of people in the group (treatment or control) and `p` is the probability that any of them gets the disease." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "def simulate_group(n, p):\n", " xs = np.random.random(size=n)\n", " k = np.sum(xs < p)\n", " return k / n * 1000" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The first line generates an array of `n` random values between 0 and 1.\n", "The values are distributed uniformly in this range, so the probability that each one is less than `p` is... `p`.\n", "\n", "The second line counts how many of the values are less than `p`, that is, how many people in the simulated group get the disease.\n", "Then the function returns the estimated risk.\n", "\n", "Here's how we call this function, passing as arguments the size of the treatment group and the estimated risk:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "p = k_treatment / n_treatment\n", "simulate_group(n_treatment, p)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The result is the estimated risk from a simulated trial.\n", "If we run this function 1000 times, it's like running the trial over and over." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "tags": [] }, "outputs": [], "source": [ "np.random.seed(17)" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "t = [simulate_group(n_treatment, p)\n", " for i in range(1000)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The result is a list of estimated risks that shows how much we expect the results of the trial to vary due to randomness.\n", "We can use a KDE plot to visualize the distribution of these estimates" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import seaborn as sns\n", "\n", "sns.kdeplot(t, label='control')\n", "\n", "plt.xlabel('Risk of disease (cases per 1000)')\n", "plt.ylabel('Probability density')\n", "plt.title('Estimated Risks from Simulation')\n", "plt.legend();" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The mean of this distribution is about 5.3, which is close to the observed risk, as we should expect." 
] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "np.mean(t), risk_treatment" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The width of this distribution indicates how much variation there is in the estimate due to randomness.\n", "One way to quantify the width of the distribution is the standard deviation." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "standard_error = np.std(t)\n", "standard_error" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This result is called the [**standard error**](https://en.wikipedia.org/wiki/Standard_error).\n", "\n", "Another way to quantify the width of the distribution is an interval between two percentiles.\n", "For example, if we compute the 5th and 95th percentiles, the interval we get contains 90% of the simulated estimates." ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "confidence_interval = np.percentile(t, [5, 95])\n", "confidence_interval" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This result is called a [**confidence interval**](https://en.wikipedia.org/wiki/Confidence_interval); specifically, this one is a \"90% confidence interval\", or 90% CI.\n", "If we assume that the observed risk is correct, and we run the same trial many times, we expect 90% of the estimates to fall in this interval.\n", "\n", "Standard errors and confidence intervals quantify our uncertainty about the estimate due to random variation from one trial to another." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Simulating the Trial\n", "\n", "If that's not making sense yet, let's try another example. In the previous section we simulated one group and estimated their risk.\n", "Now we'll simulate both groups and estimate the efficacy of the vaccine.\n", "\n", "The following function takes as parameters the size of the two groups and their actual risks." ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "def simulate_trial(n1, p1, n2, p2):\n", " risk1 = simulate_group(n1, p1)\n", " risk2 = simulate_group(n2, p2)\n", " efficacy = 1 - risk2 / risk1\n", " return efficacy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we call this function once, it simulates both groups, computes their risks in each group, and uses the results to estimate the efficacy of the treatment (assuming that the first group is the control)." ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "p1 = k_control / n_control\n", "p2 = k_treatment / n_treatment\n", "simulate_trial(n_control, p1, n_treatment, p2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we call it 1000 times, the result is estimated efficacy from 1000 simulated trials." ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "t2 = [simulate_trial(n_control, p1, n_treatment, p2)\n", " for i in range(1000)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Again, we can use a KDE plot to visualize the distribution of these estimates." 
] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "sns.kdeplot(t2)\n", "\n", "plt.xlabel('Efficacy')\n", "plt.ylabel('Probability density')\n", "plt.title('Estimated Efficacy from Simulation');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Again, the mean of this distribution is close to the efficacy we computed with the results of the actual trial." ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [], "source": [ "np.mean(t2), efficacy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The standard deviation of this distribution is the standard error of the estimate." ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "np.std(t2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In a scientific paper, we could report the estimated efficacy and standard error as 0.67 (SE 0.035).\n", "As an alternative, we can use percentiles to compute a 90% confidence interval." ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "np.percentile(t2, [5, 95])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In a scientific paper, we could report these results as 0.67, 90% CI [0.61, 0.72]\".\n", "\n", "The standard error and confidence interval represent nearly the same information.\n", "In general, I prefer to report a confidence interval because it is easier to interpret.\n", "\n", "Formally, it means that if we run the same experiment again, we expect 90% of the results to fall between 61% and 72% (assuming that the estimated risks are correct).\n", "\n", "More casually, it means that it is plausible that the actually efficacy is as low as 61%, or as high as 72% (assuming there are no sources of error other than random variation)." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Estimating Means\n", "\n", "In the previous examples, we've estimated risk, which is a proportion, and efficacy, which is a ratio of two proportions.\n", "As a third example, let's estimate a mean.\n", "\n", "Suppose we want to estimate the average height of men in the United States.\n", "It would be impractical to measure everyone in the country, but if we choose a random sample of the population and measure the people in the sample, we can use the mean of the measurements to estimate the mean of the population.\n", "\n", "Ideally, the sample should be **representative**, which means that everyone in the population has an equal chance of appearing in the sample.\n", "In general, that's not easy to do.\n", "Depending on how you recruit people, your sample might have too many tall people or too many short people.\n", "\n", "But let's suppose we have a representative sample of 103 adult males in the United States, the average height in the sample is 177 cm and the standard deviation is 8.4 cm.\n", "\n", "If someone asks for your best guess about the height of mean in the U.S., you would report 177 cm.\n", "But how accurate do you think this estimate is?\n", "If you only measure 100 people from a population of about 100 million adult males, it seems like the average in the population might be substantially higher or lower.\n", "\n", "Again, we can use random simulation to quantify the uncertainty of this estimate.\n", "As we did in the previous examples, we will assume for purposes of simulation that the estimates are correct, and simulate the sampling process 1000 times.\n", "\n", "The following function takes as parameters the size of the sample, `n`, the presumed average height in the population, `mu`, and the presumed standard deviation, `std`. " ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [], "source": [ "def simulate_sample_mean(n, mu, sigma):\n", " sample = np.random.normal(mu, sigma, size=n)\n", " return sample.mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This function generates `n` random values from a normal distribution with the given mean and standard deviation, and returns their mean.\n", "\n", "We can run it like this, using the observed mean and standard deviation from the sample as the presumed mean and standard deviation of the population." ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "n_height = 103\n", "mean_height = 177\n", "std_height = 8.4\n", "\n", "simulate_sample_mean(n_height, mean_height, std_height)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we run it 1000 times, it simulates the sampling and measurement process and returns a list of results from 1000 simulated experiments." ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "t3 = [simulate_sample_mean(n_height, mean_height, std_height)\n", " for i in range(1000)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Again, we can use a KDE plot to visualize the distribution of these values." 
] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "sns.kdeplot(t3)\n", "\n", "plt.xlabel('Average height (cm)')\n", "plt.ylabel('Probability density')\n", "plt.title('Sampling Distribution of the Mean');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This distribution is called a [**sampling distribution**]() because it represents the variation in the results due to the random sampling process.\n", "If we recruit 100 people and compute the mean of their heights, the result might be as low as 175 cm, or as high as 179 cm, due to chance.\n", "\n", "The average of the sampling distribution is close to the presumed mean of the population." ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [], "source": [ "np.mean(t3), mean_height" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The standard deviation of the sampling distribution is the standard error of the estimate." ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [], "source": [ "np.std(t3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And we can use `percentile` to compute a 90% confidence interval." ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [], "source": [ "np.percentile(t3, [5, 95])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If I reported this result in a paper, I would say that the estimated height of adult male residents of the U.S. is 177 cm, 90% CI [176, 178] cm.\n", "\n", "Informally, that means that the estimate could plausibly be off by about a centimeter either way, just due to random sampling.\n", "But we should remember that there are other possible sources of error, so we might be off by more than that.\n", "\n", "The confidence interval puts an upper bound on the precision of the estimate; in this example, the precision of the estimate is 1 cm at best, and might worse." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Resampling Framework\n", "\n", "The examples we've done so far fit into the framework in this diagram:\n", "\n", "![](https://github.com/AllenDowney/ElementsOfDataScience/raw/master/figs/resampling.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using data from an experiment, we compute a sample statistic. In the vaccine example, we computed risks for each group and efficacy. In the height example, we computed the average height in the sample.\n", "\n", "Then we build a model of the sampling process.\n", "In the vaccine example, the model assumes that everyone in each group has the same probability of getting sick, and we use the data to choose the probability.\n", "In the height example, the model assumes that heights are drawn from a normal distribution, and we use the data to choose the parameters `mu` and `sigma`.\n", "\n", "We use the model to simulate the experiment many times. Each simulation generates a dataset that's similar to the original, which we use to compute the sample statistic.\n", "\n", "Finally, we collect the sample statistics from the simulations and use them to plot the sampling distribution and compute standard errors and confidence intervals." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "I emphasize the role of the model in this framework because for a given experiment there might be several possible models, each including some elements of the real world and ignoring others.\n", "\n", "For example, our model of the vaccine experiment assumes that everyone in each group has the same risk, but that's probably not true.\n", "Here's another version of `simulate_group` that includes variation in risk within each group." ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [], "source": [ "def simulate_variable_group(n, p):\n", " ps = np.random.uniform(0, 2*p, size=n)\n", " xs = np.random.random(size=n)\n", " k = np.sum(xs < ps)\n", " return k / n * 1000" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This version of the function generates `ps`, which is an array of probabilities uniformly distributed between `0` and `2*n`.\n", "Of course, that's just a guess about how the probabilities might be distributed in the group, but we can use it to get a sense of what effect this distribution has on the results.\n", "\n", "The rest of the function is the same a the previous version: it generates `xs`, which is an array of random values between `0` and `1`.\n", "Then it compares `xs` and `ps`, counting the number of times `p` exceeds `x`.\n", "\n", "Here's how we call this function, simulating the treatment group." ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [], "source": [ "p = k_treatment / n_treatment\n", "simulate_variable_group(n_treatment, p)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The return value is the number of cases per 1000." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Exercise:** Using this function to run 1000 simulations of the treatment group. Compute the mean of the results and confirm that it is close to the observed `risk_treatment`. To quantify the spread of the sampling distribution, compute the standard error. How does it compare to the standard error we computed with the original model, where everyone in the group has the same risk? " ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Exercise:** The following is a version of `simulate_trial` that uses `simulate_variable_group`, from the previous exercise, to simulate the vaccine trial using the modified model, with variation in risk within the groups.\n", "\n", "Use this function to simulate 1000 trials. Compute the mean of the sampling distribution and confirm that it is close to the observed `efficacy`. 
Compute the standard error and compare it to the standard error we computed for the original model" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [], "source": [ "def simulate_variable_trial(n1, p1, n2, p2):\n", " risk1 = simulate_variable_group(n1, p1)\n", " risk2 = simulate_variable_group(n2, p2)\n", " efficacy = 1 - risk2 / risk1\n", " return efficacy" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Exercise:** One nice thing about the resampling framework is that it is easy to compute the sampling distribution for other statistics.\n", "\n", "For example, suppose we want to estimate the coefficient of variation (standard deviation as a fraction of the mean) for adult male height. Here's how we can compute it." ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [], "source": [ "cv = std_height / mean_height\n", "cv" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this example, the standard deviation is about 5% of the mean. \n", "The following is a version of `simulate_sample` that generates a random sample of heights and returns the coefficient of variation, rather than the mean. " ] }, { "cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [], "source": [ "def simulate_sample_cv(n, mu, sigma):\n", " sample = np.random.normal(mu, sigma, size=n)\n", " return sample.std() / sample.mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Use this function to simulate 1000 samples with size `n=103`, using `mean_height` for `mu` and `std_height` for `sigma`. Plot the sampling distribution of the coefficient of variation, and compute a 90% confidence interval." ] }, { "cell_type": "code", "execution_count": 43, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Exercise:** In Chapter 10 we used data from the General Social Survey, specifically a variable called `GUNLAW`, to describe support for a gun control law as a function of age, sex, and years of education.\n", "Now let's come back to that dataset and see how the responses have changed over time.\n", "\n", "The following cell reloads the data." 
] }, { "cell_type": "code", "execution_count": 46, "metadata": { "tags": [] }, "outputs": [], "source": [ "from os.path import basename, exists\n", "\n", "def download(url):\n", " filename = basename(url)\n", " if not exists(filename):\n", " from urllib.request import urlretrieve\n", " local, _ = urlretrieve(url, filename)\n", " print('Downloaded ' + local)\n", " \n", "download('https://github.com/AllenDowney/' +\n", " 'ElementsOfDataScience/raw/master/data/gss_eda.hdf')" ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "gss = pd.read_hdf('gss_eda.hdf', 'gss')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The column named `GUNLAW` records responses to the question \"Would you favor or oppose a law which would require a person to obtain a police permit before he or she could buy a gun?\"\n", "\n", "The response code `1` means yes; `2` means no. It will be easier to work with this variable if we recode it so `0` means no." ] }, { "cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [], "source": [ "gss['GUNLAW'].replace(2, 0, inplace=True)\n", "gss['GUNLAW'].value_counts()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For each year of the survey, I would like to compute the number of respondents and the number who said they favor this law.\n", "I'll use `groupby` to group the respondents by year of interview and `agg` to compute two aggregation functions, `sum` and `count`." ] }, { "cell_type": "code", "execution_count": 49, "metadata": {}, "outputs": [], "source": [ "grouped = gss.groupby('YEAR')['GUNLAW']\n", "agg = grouped.agg(['sum', 'count'])\n", "agg.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The result is a `DataFrame` with two columns: `sum` is the number of respondents who said \"yes\"; `count` is the number of respondents who were asked the question.\n", "\n", "In some years the question was not asked, so I'll use `drop` to remove those rows." ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [], "source": [ "zero = (agg['count'] == 0)\n", "labels = agg.index[zero]\n", "agg.drop(labels, inplace=True)" ] }, { "cell_type": "code", "execution_count": 51, "metadata": { "tags": [] }, "outputs": [], "source": [ "assert (gss['GUNLAW'].value_counts().sum()\n", " == agg['count'].sum())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can plot the percentage of respondents who favor gun control (at least for this wording of the question) during each year." ] }, { "cell_type": "code", "execution_count": 52, "metadata": {}, "outputs": [], "source": [ "percent = agg['sum'] / agg['count'] * 100\n", "percent.plot(style='o')\n", "\n", "plt.xlabel('Year of survey')\n", "plt.ylabel('Percent in favor')\n", "plt.title('Support for gun control over time');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The results vary from year to year. It is hard to tell how much of this variation is due to real changes in opinion, and how much is due to random sampling.\n", "We can answer that question by computing confidence intervals for each of these data points.\n", "\n", "Here is a version of `simulate_group` that returns results as a percentage, rather than per 1000." 
] }, { "cell_type": "code", "execution_count": 53, "metadata": {}, "outputs": [], "source": [ "def simulate_group_percent(n, p):\n", " xs = np.random.random(size=n)\n", " k = np.sum(xs < p)\n", " return k / n * 100" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Write a loop that goes through the rows in `agg` and computes a confidence interval for each year.\n", "You can use `itertuples` to iterate the rows, like this:\n", "\n", "```\n", "for year, k, n in agg.itertuples():\n", " print(year, k, n)\n", "```\n", "\n", "For each row, compute a 90% confidence interval and plot it as a vertical line.\n", "Then plot the data points and label the axes.\n", "The result should give you a sense of how much variation we expect to see from year to year due to random sampling." ] }, { "cell_type": "code", "execution_count": 54, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.7" } }, "nbformat": 4, "nbformat_minor": 2 }