{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Testing a difference in means\n", "\n", "Elements of Data Science\n", "\n", "by [Allen Downey](https://allendowney.com)\n", "\n", "[MIT License](https://opensource.org/licenses/MIT)" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "# If we're running on Colab, install empiricaldist\n", "# https://pypi.org/project/empiricaldist/\n", "\n", "import sys\n", "IN_COLAB = 'google.colab' in sys.modules\n", "\n", "if IN_COLAB:\n", " !pip install empiricaldist" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Difference in means\n", "\n", "A few days ago the [following question appeared on the statistics forum of Reddit](https://www.reddit.com/r/statistics/comments/fmfm0i/q_testing_equivalence_of_means_from_two_gamma/):\n", "\n", "> This may be a simple question for most of you, but I’m still learning statistics and I can’t find any resource that explains how to test two sample means from gamma distributions. Both samples have n > 30, so does that mean I can use a ttest due to central limit theorem? If so, can someone please ELI15?\n", "\n", "One of the first responses suggests:\n", "\n", "> Look up Permutation Tests\n", "\n", "I think this is a good suggestion, so let's do it. Then I'll come back to the questions the original poster (OP) asked about the central limit theorem." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Fake data\n", "\n", "First, let's generate some fake data by sampling from two gamma distributions with different means. \n", "\n", "I'll set the seed for the random number generator so we get the same samples every time." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "np.random.seed(2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As an example, I'll assume that the actual means in the population are 2 and 3, and that the sample sizes are 30 and 40." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "mean1, mean2 = 2, 3\n", "size1, size2 = 30, 40" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "sample1 = np.random.gamma(mean1, size=size1)\n", "np.mean(sample1)" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "sample2 = np.random.gamma(mean2, size=size2)\n", "np.mean(sample2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Warm-up Exercise:** Compute the difference in means between the two samples." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Test statistic\n", "\n", "From the statement of the problem it sounds like we have no prior reason to expect the difference in means to be positive or negative, so it is appropriate to run a two-sided test. One way to do that is to use the absolute value of the difference in means as a test statistic.\n", "\n", "The following function takes the two samples and returns the absolute value of the difference in their means." 
] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "def test_statistic(sample1, sample2):\n", "    \"\"\"Compute the test statistic.\n", "    \n", "    sample1: sequence of values\n", "    sample2: sequence of values\n", "    \n", "    returns: float\n", "    \"\"\"\n", "    diff = np.mean(sample1) - np.mean(sample2)\n", "    return np.abs(diff)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Here's the observed difference in the means." ] },
{ "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "actual_diff = test_statistic(sample1, sample2)\n", "actual_diff" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "This value is the \"apparent effect\"; based on these samples, it seems like there is a difference between the means of the two populations.\n", "\n", "But we would like to check whether it is plausible that the two populations actually have the same mean, and the apparent difference is solely due to random sampling." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Permutation\n", "\n", "One way to do that is by resampling or, more specifically, by permutation. Here's the process:\n", "\n", "1. The \"null hypothesis\" is that the two populations have the same mean. Under this hypothesis, we could consider both groups to be samples from the same population, so we can merge them into a single sample.\n", "\n", "2. To simulate the data-generating process under the null hypothesis, we can shuffle (permute) the merged sample and then split it into samples with the given sizes.\n", "\n", "**Exercise:** Compute the probability of seeing a difference in means as big as `actual_diff` under the null hypothesis.\n", "\n", "Use `np.concatenate` to merge the samples. Use `np.random.shuffle` to shuffle the merged sample. Use `np.split` to split the shuffled sample into groups with sizes `size1` and `size2`. Then compute the difference in means between the shuffled groups.\n", "\n", "Repeat that process 1000 times and count the number of times the difference between the shuffled groups is at least as big as `actual_diff`." ] },
{ "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] },
{ "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] },
{ "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] },
{ "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] },
{ "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] },
{ "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## There is only one test\n", "\n", "The computation in the previous section is an example of a framework I call \"There is Only One Test\". The idea is that all hypothesis tests have the same logical structure, shown in the following diagram:\n", "\n", "\n", "\n", "The key steps in this framework are:\n", "\n", "1. Choose a test statistic that quantifies the observed effect.\n", "\n", "2. Define a model of the null hypothesis and use it to generate simulated data.\n", "\n", "3. Compute the distribution of the test statistic under the null hypothesis.\n", "\n", "4. Compute a p-value, which is the probability, under the null hypothesis, of seeing an effect as extreme as what you saw."
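] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make these steps concrete, here is a minimal sketch of the permutation test from the previous section, written to follow the framework. It reuses `sample1`, `sample2`, `test_statistic`, and `actual_diff` from earlier, and runs 1000 iterations as in the exercise above; the result will vary a little from run to run. If you haven't done the exercise yet, treat this as one possible solution rather than the only one." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def simulate_null(sample1, sample2):\n", "    \"\"\"Simulate one experiment under the null hypothesis.\n", "    \n", "    Merge the samples, shuffle, split them back into two groups,\n", "    and return the test statistic for the shuffled groups.\n", "    \"\"\"\n", "    merged = np.concatenate([sample1, sample2])\n", "    np.random.shuffle(merged)\n", "    group1, group2 = np.split(merged, [len(sample1)])\n", "    return test_statistic(group1, group2)\n", "\n", "# The distribution of the test statistic under the null hypothesis\n", "simulated_diffs = np.array([simulate_null(sample1, sample2)\n", "                            for i in range(1000)])\n", "\n", "# The p-value is the fraction of simulated differences at least\n", "# as big as the observed difference\n", "p_value = np.mean(simulated_diffs >= actual_diff)\n", "p_value"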
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Estimation\n", "\n", "So far we've computed a p-value, which is the probability of seeing a difference in means as big as `actual_diff` under the null hypothesis, which is that both samples are actually drawn from the same population.\n", "\n", "As I wrote in 2015, [p-values are not completely useless](http://allendowney.blogspot.com/2015/03/statistical-inference-is-only-mostly.html). If you think an apparent effect might be due to chance, the p-value can tell you whether that's plausible.\n", "\n", "However, p-values don't provide a lot of guidance for decision making. For that, it is often more useful to estimate the size of the effect. In this example, estimating the size of the difference is easy; in fact, we've already done it.\n", "\n", "The estimated size of the difference in means is:" ] },
{ "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "np.mean(sample1) - np.mean(sample2)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "This result is a \"point estimate\", that is, a single value that quantifies the size of the effect.\n", "\n", "But we might wonder how precise the estimate is. We can answer that question by computing a standard error and/or a confidence interval.\n", "\n", "That's what we'll do in this section, and once again we'll do it by resampling. Here are the key steps:\n", "\n", "1. We'll use the data to build a model of the population.\n", "\n", "2. We'll use the model to generate simulated data.\n", "\n", "3. We'll use the simulated data to compute a sample of possible differences in the mean.\n", "\n", "To build a model of the population, we will make an assumption that sounds strange, but actually works:\n", "\n", "* We assume that the values in the sample represent the entire population.\n", "\n", "Under this assumption, we can generate a new sample by choosing values from the old sample with replacement. Here's a function that does it." ] },
{ "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "def resample(sample):\n", "    \"\"\"Draw a new sample from the data with replacement.\n", "    \n", "    sample: sequence of values\n", "    \n", "    returns: NumPy array\n", "    \"\"\"\n", "    return np.random.choice(sample, len(sample), replace=True)"
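] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For example, we can call it once on `sample1` and compute the mean of the resampled values; the exact result depends on the state of the random number generator, so it will vary from run to run." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The mean of one resampled version of sample1 (varies from run to run)\n", "np.mean(resample(sample1))"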
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that the size of the resampled data is the same as the size of the original sample. As a result, the same value might appear more than once, and other values might not appear at all.\n", "\n", "If we run `resample` many times and compute the mean of each batch of resampled data, we get a sample from the \"sampling distribution of the mean\", which is the distribution of means we would get if we ran the same experiment over and over.\n", "\n", "The following function simulates the experiment 1001 times and returns a sample of possible means:" ] },
{ "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "def sampling_distribution_of_mean(sample):\n", "    \"\"\"Simulate the sampling process and compute sample means.\n", "    \n", "    sample: sequence of values\n", "    \n", "    returns: NumPy array of means\n", "    \"\"\"\n", "    means = [resample(sample).mean() for i in range(1001)]\n", "    return np.array(means)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Here's what it looks like for `sample1`:" ] },
{ "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "sample_of_means1 = sampling_distribution_of_mean(sample1)\n", "np.mean(sample_of_means1)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The mean of possible means is pretty close to the mean of the original sample:" ] },
{ "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "np.mean(sample1)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "That's as expected, and it doesn't tell us much we didn't know. However, the standard deviation of the sampling distribution tells us how much we expect the sample mean to vary from one simulated experiment to the next." ] },
{ "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [], "source": [ "standard_error = np.std(sample_of_means1)\n", "standard_error" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "This value is the \"standard error\", which quantifies the precision of the estimate. I'm using the word \"precision\" carefully here, because standard error doesn't tell us whether the estimate is accurate; it only tells us how precise it is. You can [read more about the difference between precision and accuracy here](https://en.wikipedia.org/wiki/Accuracy_and_precision).\n", "\n", "Another way to quantify precision is to compute a \"confidence interval\"; for example, a 90% confidence interval contains 90% of the sampling distribution.\n", "\n", "We can use `np.percentile` to compute the interval between the 5th and 95th percentiles." ] },
{ "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "CI = np.percentile(sample_of_means1, [5, 95])\n", "CI" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "In this example the 90% confidence interval contains the actual population mean, which is 2. Of course, in the real world, we usually don't know the population mean. If we did, we wouldn't be estimating it."
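] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since we generated this dataset ourselves, we can check that directly: `mean1` is the population mean we used to create `sample1`, so the following comparison should be `True` for this seed." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Check whether the 90% CI covers the population mean used to generate sample1\n", "CI[0] <= mean1 <= CI[1]"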
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Exercise:** The following cell computes the sampling distribution of the mean for `sample2`. Compute the standard error and a 90% confidence interval. Does the CI contain the population mean?" ] },
{ "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "sample_of_means2 = sampling_distribution_of_mean(sample2)" ] },
{ "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] },
{ "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Exercise:** Estimate the sampling distribution of the difference in means by subtracting `sample_of_means2` from `sample_of_means1` elementwise. Then compute the SE and 90% CI of the difference." ] },
{ "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] },
{ "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] },
{ "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [], "source": [ "# Solution goes here" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Estimating sampling distributions\n", "\n", "The process we used in the previous section is an example of a framework for computing sampling distributions, standard errors, and confidence intervals, as shown in the following figure:\n", "\n", "\n", "\n", "The essential steps are:\n", "\n", "1. Choose a sample statistic that quantifies the thing you want to estimate.\n", "\n", "2. Use the data to make a model of the population.\n", "\n", "3. Use the model to simulate the sampling process and generate simulated data.\n", "\n", "4. Compute the sampling distribution of the estimate.\n", "\n", "5. Use the sampling distribution to compute the standard error, confidence interval, or both.\n", "\n", "But remember that this process only accounts for variability due to random sampling. It doesn't tell us anything about systematic errors in the sampling process, measurement error, or other sources of error." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## The central limit theorem\n", "\n", "Now let's get back to the original question:\n", "\n", "> This may be a simple question for most of you, but I’m still learning statistics and I can’t find any resource that explains how to test two sample means from gamma distributions. Both samples have n > 30, so does that mean I can use a ttest due to central limit theorem? If so, can someone please ELI15?\n", "\n", "The question is about [Student's t-test](https://en.wikipedia.org/wiki/Student%27s_t-test), which is an analytic method for estimating p-values under certain assumptions about the distribution of the data.\n", "\n", "Without getting into the details, the t-test is accurate enough for practical purposes as long as the sampling distribution of the mean is approximately normal (Gaussian).\n", "\n", "If the distribution of the data is approximately normal, so is the sampling distribution of the means, so in that case there's no problem: the t-test is fine."
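] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For reference, here is how we might run the t-test itself, using `scipy.stats.ttest_ind`. Choosing Welch's version of the test (`equal_var=False`) is my assumption, since the two populations here have different variances; the original question doesn't say which variant the poster had in mind." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Welch's two-sample t-test, as an analytic point of comparison\n", "from scipy.stats import ttest_ind\n", "\n", "ttest_ind(sample1, sample2, equal_var=False)"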
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But what if the distribution of the data is not normal, as in this example, where we are told that the data are closer to a gamma distribution?\n", "\n", "In that case, the t-test is usually fine anyway because of the central limit theorem, which implies that the sampling distribution of the mean is often (but not always) approximately normal even if the distribution of the data is not.\n", "\n", "Now you might wonder: how do we know when this approximation is good enough? Well, one way is to use resampling to estimate the sampling distribution. So let's do that.\n", "\n", "First, here's the distribution of the data in the two samples, represented as a CDF." ] },
{ "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [], "source": [ "from empiricaldist import Cdf\n", "\n", "cdf1 = Cdf.from_seq(sample1)\n", "cdf1.plot(label='Sample 1')\n", "\n", "cdf2 = Cdf.from_seq(sample2)\n", "cdf2.plot(label='Sample 2')\n", "\n", "plt.xlabel('Values from the samples')\n", "plt.ylabel('CDF')\n", "plt.title('Distribution of the samples')\n", "\n", "plt.legend();" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "In this example, the data were literally drawn from a gamma distribution, so it is no surprise that these CDFs do not look like normal distributions.\n", "\n", "To see that more clearly, I'll compute normal distributions with the same mean and standard deviation, and we'll see how different they look." ] },
{ "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [], "source": [ "from scipy.stats import norm\n", "\n", "def plot_normal_model(sample, **options):\n", "    \"\"\"Plot the CDF of a normal distribution with the\n", "    same mean and std as the sample.\n", "    \n", "    sample: sequence of values\n", "    options: passed to plt.plot\n", "    \"\"\"\n", "    mean, std = np.mean(sample), np.std(sample)\n", "    xs = np.linspace(np.min(sample), np.max(sample))\n", "    ys = norm.cdf(xs, mean, std)\n", "    plt.plot(xs, ys, alpha=0.4, **options)" ] },
{ "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [], "source": [ "from empiricaldist import Cdf\n", "\n", "plot_normal_model(sample1, color='C0', label='Normal model')\n", "cdf1 = Cdf.from_seq(sample1)\n", "cdf1.plot(label='Sample 1')\n", "\n", "plot_normal_model(sample2, color='C1', label='Normal model')\n", "cdf2 = Cdf.from_seq(sample2)\n", "cdf2.plot(label='Sample 2')\n", "\n", "plt.xlabel('Values from the samples')\n", "plt.ylabel('CDF')\n", "plt.title('Distribution of the samples')\n", "\n", "plt.legend();" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The normal model does not fit the data particularly well; in particular, the normal model implies that negative values have non-negligible probability. But the values from a gamma distribution cannot be negative (like most quantities that we measure in the world).\n", "\n", "Now let's do the same thing with the sampling distributions of the means:" ] },
{ "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [], "source": [ "plot_normal_model(sample_of_means1, color='C0', label='Normal model')\n", "cdf1 = Cdf.from_seq(sample_of_means1)\n", "cdf1.plot(label='Sample of means 1')\n", "\n", "plot_normal_model(sample_of_means2, color='C1', label='Normal model')\n", "cdf2 = Cdf.from_seq(sample_of_means2)\n", "cdf2.plot(label='Sample of means 2')\n", "\n", "plt.xlabel('Sample mean')\n", "plt.ylabel('CDF')\n", "plt.title('Sampling distributions')\n", "\n", "plt.legend();" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The models fit the data so well you can barely see them. So the answer to the original question is a clear yes: with a gamma distribution and sample sizes near 30, the sampling distribution of the mean is approximately normal, the assumptions of the t-test are satisfied, and the results are accurate enough for practical purposes.\n", "\n", "At least, for *these* gamma distributions that's true. For other gamma distributions it might not be. 
In particular, when the mean of a gamma distribution is small, its skewness is large, and we need a larger sample size to ensure that the sampling distribution is normal enough.\n", "\n", "So how are we supposed to know? Well, one option is to use resampling to estimate the sampling distribution. But if we have to use resampling to know whether or not the t-test is reliable, we might as well use resampling to compute the p-value, standard error, and/or confidence interval, in my opinion." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" } }, "nbformat": 4, "nbformat_minor": 2 }