{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Bayesian interpretation of medical tests\n", "-----------------------------------------\n", "\n", "This notebooks explores several problems related to interpreting the results of medical tests.\n", "\n", "Copyright 2016 Allen Downey\n", "\n", "MIT License: http://opensource.org/licenses/MIT" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from __future__ import print_function, division\n", "\n", "from thinkbayes2 import Pmf, Suite\n", "\n", "from fractions import Fraction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Medical tests\n", "\n", "Suppose we test a patient to see if they have a disease, and the test comes back positive. What is the probability that the patient is actually sick (that is, has the disease)?\n", "\n", "To answer this question, we need to know:\n", "\n", "* The prevalence of the disease in the population the patient is from. Let's assume the patient is identified as a member of a population where the known prevalence is `p`.\n", "\n", "* The sensitivity of the test, `s`, which is the probability of a positive test if the patient is sick.\n", "\n", "* The false positive rate of the test, `t`, which is the probability of a positive test if the patient is not sick.\n", "\n", "Given these parameters, we can compute the probability that the patient is sick, given a positive test.\n", "\n", "### Test class\n", "\n", "To do that, I'll define a `Test` class that extends `Suite`, so it inherits `Update` and provides `Likelihood`.\n", "\n", "The instance variables of `Test` are:\n", "\n", "* `p`, `s`, and `t`: Copies of the parameters.\n", "* `d`: a dictionary that maps from hypotheses to their probabilities. 
The hypotheses are the strings `sick` and `notsick`.\n", "* `likelihood`: a dictionary that encodes the likelihood of the possible data values `pos` and `neg` under the hypotheses.\n", "\n" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [], "source": [ "class Test(Suite):\n", " \"\"\"Represents beliefs about a patient based on a medical test.\"\"\"\n", " \n", " def __init__(self, p, s, t, label='Test'):\n", " # initialize the prior probabilities\n", " d = dict(sick=p, notsick=1-p)\n", " super(Test, self).__init__(d, label)\n", " \n", " # store the parameters\n", " self.p = p\n", " self.s = s\n", " self.t = t\n", " \n", " # make a nested dictionary to compute likelihoods\n", " self.likelihood = dict(pos=dict(sick=s, notsick=t),\n", " neg=dict(sick=1-s, notsick=1-t))\n", " \n", " def Likelihood(self, data, hypo):\n", " \"\"\"\n", " data: 'pos' or 'neg'\n", " hypo: 'sick' or 'notsick'\n", " \"\"\"\n", " return self.likelihood[data][hypo]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can create a `Test` object with parameters chosen for demonstration purposes (most medical tests are better than this!):" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsick 9/10\n", "sick 1/10\n" ] } ], "source": [ "p = Fraction(1, 10) # prevalence\n", "s = Fraction(9, 10) # sensitivity\n", "t = Fraction(3, 10) # false positive rate\n", "\n", "test = Test(p, s, t)\n", "test.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you are curious, here's the nested dictionary that computes the likelihoods:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "{'neg': {'notsick': Fraction(7, 10), 'sick': Fraction(1, 10)},\n", " 'pos': {'notsick': Fraction(3, 10), 'sick': Fraction(9, 10)}}" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "test.likelihood" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here's how we update the `Test` object with a positive outcome:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsick 3/4\n", "sick 1/4\n" ] } ], "source": [ "test.Update('pos')\n", "test.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The positive test provides evidence that the patient is sick, increasing the probability from 0.1 to 0.25." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Uncertainty about `t`\n", "\n", "So far, this is basic Bayesian inference. Now let's add a wrinkle. Suppose that we don't know the value of `t` with certainty, but we have reason to believe that `t` is either 0.2 or 0.4 with equal probability.\n", "\n", "Again, we would like to know the probability that a patient who tests positive actually has the disease. 
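\n", "\n", "For a single, known value of `t`, this is just Bayes's theorem: with `t=0.3`, the positive update above works out to `p*s / (p*s + (1-p)*t) = 0.09/0.36 = 1/4`, which is the 0.25 we computed. The question is how uncertainty about `t` changes the answer.\n", "\n", "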
As we did with the Red Die problem, we will consider several scenarios:\n", "\n", "**Scenario A**: The patients are drawn at random from the relevant population, and the reason we are uncertain about `t` is that either (1) there are two versions of the test, with different false positive rates, and we don't know which test was used, or (2) there are two groups of people, the false positive rate is different for different groups, and we don't know which group the patient is in.\n", "\n", "**Scenario B**: As in Scenario A, the patients are drawn at random from the relevant population, but the reason we are uncertain about `t` is that previous studies of the test have been contradictory. That is, there is only one version of the test, and we have reason to believe that `t` is the same for all groups, but we are not sure what the correct value of `t` is.\n", "\n", "**Scenario C**: As in Scenario A, there are two versions of the test or two groups of people. But now the patients are being filtered so we only see the patients who tested positive and we don't know how many patients tested negative. For example, suppose you are a specialist and patients are only referred to you after they test positive.\n", "\n", "**Scenario D**: As in Scenario B, we have reason to think that `t` is the same for all patients, and as in Scenario C, we only see patients who test positive and don't know how many tested negative." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Scenario A\n", "\n", "We can represent this scenario with a hierarchical model, where the levels of the hierarchy are:\n", "\n", "* At the top level, the possible values of `t` and their probabilities.\n", "* At the bottom level, the probability that the patient is sick or not, conditioned on `t`.\n", "\n", "To represent the hierarchy, I'll define a `MetaTest`, which is a `Suite` that contains `Test` objects with different values of `t` as hypotheses." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [], "source": [ "class MetaTest(Suite):\n", " \"\"\"Represents a set of tests with different values of `t`.\"\"\"\n", " \n", " def Likelihood(self, data, hypo):\n", " \"\"\"\n", " data: 'pos' or 'neg'\n", " hypo: Test object\n", " \"\"\"\n", " # the return value from `Update` is the total probability of the\n", " # data for a hypothetical value of `t`\n", " return hypo.Update(data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To update a `MetaTest`, we update each of the hypothetical `Test` objects. The return value from `Update` is the normalizing constant, which is the total probability of the data under the hypothesis.\n", "\n", "We use the normalizing constants from the bottom level of the hierarchy as the likelihoods at the top level.\n", "\n", "Here's how we create the `MetaTest` for the scenario we described:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Test(t=0.2) 1/2\n", "Test(t=0.4) 1/2\n" ] } ], "source": [ "q = Fraction(1, 2)\n", "t1 = Fraction(2, 10)\n", "t2 = Fraction(4, 10)\n", "\n", "test1 = Test(p, s, t1, 'Test(t=0.2)')\n", "test2 = Test(p, s, t2, 'Test(t=0.4)')\n", "\n", "metatest = MetaTest({test1:q, test2:1-q})\n", "metatest.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "At the top level, there are two tests, with different values of `t`. 
Initially, they are equally likely.\n", "\n", "When we update the `MetaTest`, it updates the embedded `Test` objects and then the `MetaTest` itself." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "Fraction(9, 25)" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "metatest.Update('pos')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here are the results." ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Test(t=0.4) 5/8\n", "Test(t=0.2) 3/8\n" ] } ], "source": [ "metatest.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Because a positive test is more likely if `t=0.4`, the positive test is evidence in favor of the hypothesis that `t=0.4`.\n", "\n", "This `MetaTest` object represents what we should believe about `t` after seeing the test, as well as what we should believe about the probability that the patient is sick." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Marginal distributions\n", "\n", "To compute the probability that the patient is sick, we have to compute the marginal probabilities of `sick` and `notsick`, averaging over the possible values of `t`. The following function computes this distribution:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def MakeMixture(metapmf, label='mix'):\n", " \"\"\"Make a mixture distribution.\n", "\n", " Args:\n", " metapmf: Pmf that maps from Pmfs to probs.\n", " label: string label for the new Pmf.\n", "\n", " Returns: Pmf object.\n", " \"\"\"\n", " mix = Pmf(label=label)\n", " for pmf, p1 in metapmf.Items():\n", " for x, p2 in pmf.Items():\n", " mix.Incr(x, p1 * p2)\n", " return mix" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's the posterior predictive distribution:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsick 3/4\n", "sick 1/4\n" ] } ], "source": [ "predictive = MakeMixture(metatest)\n", "predictive.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After seeing the test, the probability that the patient is sick is 0.25, which is the same result we got with `t=0.3`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Two patients\n", "\n", "Now suppose you test two patients and they both test positive. 
What is the probability that they are both sick?\n", "\n", "To answer that, I define a few more functions to work with Metatests:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def MakeMetaTest(p, s, pmf_t):\n", " \"\"\"Makes a MetaTest object with the given parameters.\n", " \n", " p: prevalence\n", " s: sensitivity\n", " pmf_t: Pmf of possible values for `t`\n", " \"\"\"\n", " tests = {}\n", " for t, q in pmf_t.Items():\n", " label = 'Test(t=%s)' % str(t)\n", " tests[Test(p, s, t, label)] = q\n", " return MetaTest(tests)\n", "\n", "def Marginal(metatest):\n", " \"\"\"Extracts the marginal distribution of t.\n", " \"\"\"\n", " marginal = Pmf()\n", " for test, prob in metatest.Items():\n", " marginal[test.t] = prob\n", " return marginal\n", "\n", "def Conditional(metatest, t):\n", " \"\"\"Extracts the distribution of sick/notsick conditioned on t.\"\"\"\n", " for test, prob in metatest.Items():\n", " if test.t == t:\n", " return test" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`MakeMetaTest` makes a `MetaTest` object starting with a given PMF of `t`.\n", "\n", "`Marginal` extracts the PMF of `t` from a `MetaTest`.\n", "\n", "`Conditional` takes a specified value for `t` and returns the PMF of `sick` and `notsick` conditioned on `t`.\n", "\n", "I'll test these functions using the same parameters from above:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Test(t=1/5) 1/2\n", "Test(t=2/5) 1/2\n" ] } ], "source": [ "pmf_t = Pmf({t1:q, t2:1-q})\n", "metatest = MakeMetaTest(p, s, pmf_t)\n", "metatest.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here are the results" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Test(t=2/5) 5/8\n", "Test(t=1/5) 3/8\n" ] } ], "source": [ "metatest = MakeMetaTest(p, s, pmf_t)\n", "metatest.Update('pos')\n", "metatest.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Same as before. Now we can extract the posterior distribution of `t`." 
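, "\n", "As a quick check, we can also compute this posterior by hand (a sketch using the parameters defined above). For a given value of `t`, the total probability of a positive test is `p*s + (1-p)*t`, which is `27/100` for `t=0.2` and `45/100` for `t=0.4`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# hand check of the top-level update; p, s, q, t1, t2 are the Fractions defined above\n", "like_t1 = p*s + (1-p)*t1 # total probability of a positive test if t = 0.2\n", "like_t2 = p*s + (1-p)*t2 # total probability of a positive test if t = 0.4\n", "total = q*like_t1 + (1-q)*like_t2\n", "q*like_t1/total, (1-q)*like_t2/total" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Normalizing gives `3/8` and `5/8`, which should match the posterior distribution of `t` extracted by `Marginal`:"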
] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1/5 3/8\n", "2/5 5/8\n" ] } ], "source": [ "Marginal(metatest).Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Having seen one positive test, we are a little more inclined to believe that `t=0.4`; that is, that the false positive rate for this patient/test is high.\n", "\n", "And we can extract the conditional distributions for the patient:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsick 2/3\n", "sick 1/3\n" ] } ], "source": [ "cond1 = Conditional(metatest, t1)\n", "cond1.Print()" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsick 4/5\n", "sick 1/5\n" ] } ], "source": [ "cond2 = Conditional(metatest, t2)\n", "cond2.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we can make the posterior marginal distribution of sick/notsick, which is a weighted mixture of the conditional distributions:" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsick 3/4\n", "sick 1/4\n" ] } ], "source": [ "MakeMixture(metatest).Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "At this point we have a `MetaTest` that contains our updated information about the test (the distribution of `t`) and about the patient that tested positive.\n", "\n", "Now, to compute the probability that both patients are sick, we have to know the distribution of `t` for both patients. And that depends on details of the scenario.\n", "\n", "In Scenario A, the reason we are uncertain about `t` is either (1) there are two versions of the test, with different false positive rates, and we don't know which test was used, or (2) there are two groups of people, the false positive rate is different for different groups, and we don't know which group the patient is in.\n", "\n", "So the value of `t` for each patient is an independent choice from `pmf_t`; that is, if we learn something about `t` for one patient, that tells us nothing about `t` for other patients.\n", "\n", "So if we consider two patients who have tested positive, the MetaTest we just computed represents our belief about each of the two patients independently.\n", "\n", "To compute the probability that both patients are sick, we can convolve the two distributions." 
] }, { "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Pmf({'notsicknotsick': Fraction(8, 15), 'sicksick': Fraction(1, 15), 'sicknotsick': Fraction(2, 15), 'notsicksick': Fraction(4, 15)}) 15/64\n", "Pmf({'notsicknotsick': Fraction(16, 25), 'sicksick': Fraction(1, 25), 'sicknotsick': Fraction(4, 25), 'notsicksick': Fraction(4, 25)}) 25/64\n", "Pmf({'notsicknotsick': Fraction(8, 15), 'sicksick': Fraction(1, 15), 'sicknotsick': Fraction(4, 15), 'notsicksick': Fraction(2, 15)}) 15/64\n", "Pmf({'notsicknotsick': Fraction(4, 9), 'sicksick': Fraction(1, 9), 'sicknotsick': Fraction(2, 9), 'notsicksick': Fraction(2, 9)}) 9/64\n" ] } ], "source": [ "convolution = metatest + metatest\n", "convolution.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we can compute the posterior marginal distribution of sick/notsick for the two patients:" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsicknotsick 9/16\n", "notsicksick 3/16\n", "sicknotsick 3/16\n", "sicksick 1/16\n" ] } ], "source": [ "marginal = MakeMixture(metatest+metatest)\n", "marginal.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So in Scenario A the probability that both patients are sick is 1/16.\n", "\n", "As an aside, we could have computed the marginal distributions first and then convolved them, which is computationally more efficient:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsicknotsick 9/16\n", "notsicksick 3/16\n", "sicknotsick 3/16\n", "sicksick 1/16\n" ] } ], "source": [ "marginal = MakeMixture(metatest) + MakeMixture(metatest)\n", "marginal.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can confirm that this result is correct by simulation. 
Here's a generator that generates random pairs of patients:" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from random import random\n", "\n", "def flip(p):\n", " return random() < p\n", "\n", "def generate_pair_A(p, s, pmf_t):\n", " while True:\n", " sick1, sick2 = flip(p), flip(p)\n", " \n", " t = pmf_t.Random()\n", " test1 = flip(s) if sick1 else flip(t)\n", "\n", " t = pmf_t.Random()\n", " test2 = flip(s) if sick2 else flip(t)\n", "\n", " yield test1, test2, sick1, sick2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here's a function that runs the simulation for a given number of iterations:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def run_simulation(generator, iters=100000):\n", " pmf_t = Pmf([0.2, 0.4])\n", " pair_iterator = generator(0.1, 0.9, pmf_t)\n", "\n", " outcomes = Pmf()\n", " for i in range(iters):\n", " test1, test2, sick1, sick2 = next(pair_iterator)\n", " if test1 and test2:\n", " outcomes[sick1, sick2] += 1\n", "\n", " outcomes.Normalize()\n", " return outcomes" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(False, False) 0.5635400907715582\n", "(False, True) 0.18267776096822994\n", "(True, False) 0.19130105900151284\n", "(True, True) 0.062481089258698934\n" ] } ], "source": [ "outcomes = run_simulation(generate_pair_A)\n", "outcomes.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we increase `iters`, the probability of (True, True) converges on 1/16, which is what we got from the analysis.\n", "\n", "Good so far!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Scenario B\n", "\n", "In Scenario B, we have reason to believe that `t` is the same for all patients, but we are not sure what it is. So each time we see a positive test, we get some information about `t` for all patients.\n", "\n", "The first time we see a positive test, we do the same update as in Scenario A:" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Test(t=2/5) 5/8\n", "Test(t=1/5) 3/8\n" ] } ], "source": [ "metatest1 = MakeMetaTest(p, s, pmf_t)\n", "metatest1.Update('pos')\n", "metatest1.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And the marginal distribution of sick/notsick is the same:" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsick 3/4\n", "sick 1/4\n" ] } ], "source": [ "marginal = MakeMixture(metatest1)\n", "marginal.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now suppose the second patient arrives. 
We need a new `MetaTest` that contains the updated information about the test, but no information about the patient other than the prior probability of being sick, `p`:" ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Test(t=1/5) 3/8\n", "Test(t=2/5) 5/8\n" ] } ], "source": [ "metatest2 = MakeMetaTest(p, s, Marginal(metatest1))\n", "metatest2.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can update this `MetaTest` with the result from the second test:" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Test(t=2/5) 25/34\n", "Test(t=1/5) 9/34\n" ] } ], "source": [ "metatest2.Update('pos')\n", "metatest2.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This distribution contains updated information about the test, based on two positive outcomes, and updated information about a patient who has tested positive (once).\n", "\n", "After seeing two patients with positive tests, the probability that `t=0.4` has increased to 25/34, around 74%.\n", "\n", "For either patient, the probability of being sick is given by the marginal distribution from `metatest2`:" ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsick 13/17\n", "sick 4/17\n" ] } ], "source": [ "predictive = MakeMixture(metatest2)\n", "predictive.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After two tests, the probability that the patient is sick is slightly lower than after one (4/17 is about 23.5%, compared to 25%). That's because the second positive test increases our belief that the false positive rate is high (t=0.4), which decreases our belief that either patient is sick.\n", "\n", "Now, to compute the probability that both are sick, we can't just convolve the posterior marginal distribution with itself, as we did in Scenario A, because the selection of `t` is not independent for the two patients. Instead, we have to make a weighted mixture of conditional distributions.\n", "\n", "If we know `t=t1`, we can compute the joint distribution for the two patients:" ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsicknotsick 4/9\n", "notsicksick 2/9\n", "sicknotsick 2/9\n", "sicksick 1/9\n" ] } ], "source": [ "cond_t1 = Conditional(metatest2, t1)\n", "conjunction_t1 = cond_t1 + cond_t1\n", "conjunction_t1.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we know that `t=t1`, the probability of `sicksick` is 0.111. 
And for `t=t2`:" ] }, { "cell_type": "code", "execution_count": 31, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsicknotsick 16/25\n", "notsicksick 4/25\n", "sicknotsick 4/25\n", "sicksick 1/25\n" ] } ], "source": [ "cond_t2 = Conditional(metatest2, t2)\n", "conjunction_t2 = cond_t2 + cond_t2\n", "conjunction_t2.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we know that `t=t2`, the probability of `sicksick` is `0.04`.\n", "\n", "The overall probability of `sicksick` is the weighted average of these probabilities:" ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "Fraction(1, 17)" ] }, "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [ "posterior_t = Marginal(metatest2)\n", "posterior_t[t1] * conjunction_t1['sicksick'] + posterior_t[t2] * conjunction_t2['sicksick']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`1/17` is about `0.0588`, slightly smaller than in Scenario A (`1/16`, which is about `0.0625`).\n", "\n", "To compute the probabilities for all four outcomes, I'll make a meta-Pmf that contains the two conjunction distributions, weighted by the posterior probabilities of `t`." ] }, { "cell_type": "code", "execution_count": 33, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Pmf({'notsicknotsick': Fraction(16, 25), 'sicksick': Fraction(1, 25), 'sicknotsick': Fraction(4, 25), 'notsicksick': Fraction(4, 25)}) 25/34\n", "Pmf({'notsicknotsick': Fraction(4, 9), 'sicksick': Fraction(1, 9), 'sicknotsick': Fraction(2, 9), 'notsicksick': Fraction(2, 9)}) 9/34\n" ] } ], "source": [ "metapmf = Pmf()\n", "for t, prob in Marginal(metatest2).Items():\n", " cond = Conditional(metatest2, t)\n", " conjunction = cond + cond\n", " metapmf[conjunction] = prob\n", " \n", "metapmf.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And finally we can use `MakeMixture` to compute the weighted averages of the posterior probabilities:" ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsicknotsick 10/17\n", "notsicksick 3/17\n", "sicknotsick 3/17\n", "sicksick 1/17\n" ] } ], "source": [ "predictive = MakeMixture(metapmf)\n", "predictive.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To confirm that this result is correct, I'll use the simulation again with a different generator:" ] }, { "cell_type": "code", "execution_count": 35, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def generate_pair_B(p, s, pmf_t):\n", " while True:\n", " sick1, sick2 = flip(p), flip(p)\n", " \n", " t = pmf_t.Random()\n", " test1 = flip(s) if sick1 else flip(t)\n", "\n", " # Here's the difference\n", " # t = pmf_t.Random()\n", " test2 = flip(s) if sick2 else flip(t)\n", "\n", " yield test1, test2, sick1, sick2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The difference between Scenario A and Scenario B is the line I commented out. In Scenario B, we generate `t` once and it applies to both patients." 
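, "\n", "Before running the simulation, here's a closed-form check of the `1/17` result, a hand calculation using the parameters defined above. Because both patients share a single value of `t`, we average the squared probability of a positive test over the prior on `t`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# hand check for Scenario B (sketch)\n", "num = (p*s)**2 # both sick and both positive; does not depend on t\n", "den = q*(p*s + (1-p)*t1)**2 + (1-q)*(p*s + (1-p)*t2)**2 # both positive, averaged over t\n", "num / den" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This should work out to `1/17`, and the simulation should converge on the same value:"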
] }, { "cell_type": "code", "execution_count": 36, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(False, False) 0.5922922348213004\n", "(False, True) 0.17568537390555478\n", "(True, False) 0.17145112674034738\n", "(True, True) 0.06057126453279748\n" ] } ], "source": [ "outcomes = run_simulation(generate_pair_B)\n", "outcomes.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As `iters` increases, the results from the simulation converge on `1/17`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Summary so far\n", "\n", "In summary:\n", "\n", " P(sick|pos) P(sicksick|pospos)\n", " Scenario A 1/4 = 25% 1/16 = 6.25%\n", " Scenario B 1/4 = 25% 1/17 ~= 5.88%\n", "\n", "If we are only interested in one patient at a time, Scenarios A and B are the same. But for collections of patients, they yield different probabilities.\n", "\n", "A real scenario might combine elements of A and B; that is, the false positive rate might be different for different people, and we might have some uncertainty about what it is. In that case, the most accurate probability for two patients might be anywhere between `1/16` and `1/17`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Scenario C\n", "\n", "Scenario C is similar to Scenario A: we believe that the false positive rate `t` might be different for different people, or for different versions of the test. The difference is that in Scenario A we see all patients, sick or not, positive test or not.\n", "\n", "In Scenario C, we only see patients after they have tested positive, and we don't know how many tested negative. For example, if you are a specialist and patients are referred to you only if they test positive, Scenario C might be a good model of your situation.\n", "\n", "Before I analyze this scenario, I'll start with a simulation. As a reminder, here's a generator that generates pairs of patients in Scenario A:" ] }, { "cell_type": "code", "execution_count": 37, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def generate_pair_A(p, s, pmf_t):\n", " while True:\n", " sick1, sick2 = flip(p), flip(p)\n", " \n", " t = pmf_t.Random()\n", " test1 = flip(s) if sick1 else flip(t)\n", "\n", " t = pmf_t.Random()\n", " test2 = flip(s) if sick2 else flip(t)\n", "\n", " yield test1, test2, sick1, sick2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here's the simulator that uses the generator to estimate the probability that two patients who test positive are both sick." ] }, { "cell_type": "code", "execution_count": 38, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def run_simulation(generator, iters=100000):\n", " pmf_t = Pmf([0.2, 0.4])\n", " pair_iterator = generator(0.1, 0.9, pmf_t)\n", "\n", " outcomes = Pmf()\n", " for i in range(iters):\n", " test1, test2, sick1, sick2 = next(pair_iterator)\n", " if test1 and test2:\n", " outcomes[sick1, sick2] += 1\n", "\n", " outcomes.Normalize()\n", " return outcomes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we saw before, this probability converges on $1/16$." 
] }, { "cell_type": "code", "execution_count": 39, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(False, False) 0.5723947125730094\n", "(False, True) 0.18567476175837688\n", "(True, False) 0.1796802951122041\n", "(True, True) 0.062250230556409464\n" ] } ], "source": [ "outcomes = run_simulation(generate_pair_A)\n", "outcomes.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now here's a generator that generates pairs of patients in Scenario C. The difference is that for each pair we check the outcome of the tests; if they are not both positive, we loop back and try again:" ] }, { "cell_type": "code", "execution_count": 40, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def generate_pair_C(p, s, pmf_t):\n", " while True:\n", " sick1, sick2 = flip(p), flip(p)\n", " \n", " t = pmf_t.Random()\n", " test1 = flip(s) if sick1 else flip(t)\n", "\n", " t = pmf_t.Random()\n", " test2 = flip(s) if sick2 else flip(t)\n", "\n", " # here is the difference\n", " if test1 and test2:\n", " yield test1, test2, sick1, sick2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When we run it, it seems like the probability is still `1/16`:" ] }, { "cell_type": "code", "execution_count": 41, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(False, False) 0.56137\n", "(False, True) 0.18819000000000002\n", "(True, False) 0.18706\n", "(True, True) 0.06338\n" ] } ], "source": [ "outcomes = run_simulation(generate_pair_C)\n", "outcomes.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you examine the code, you see that the conditional in `generate_pair_C` makes no difference because it is redundant with the conditional in `run_simulation`. In Scenarios A and C, we filter out pairs if they are not both positive; it doesn't matter whether the filtering happens in the generator or the simulator.\n", "\n", "In fact, Scenarios A and C are identical. In both scenarios, when we see a patient with a positive test, we learn something about the patient (more likely to be sick) and something about the particular test applied to the patient (more likely to generate false positives).\n", "\n", "This is similar to what we saw in the Red Die problem. In Scenario C, the reddish die is more likely to produce a red outcome, so a red outcome provides evidence that we rolled the reddish die.\n", "\n", "However, that is not the case with Scenario D." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Scenario D" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "As a reminder, Scenario D is similar to B: we have reason to think that `t` is either `0.2` or `0.4` for everyone. 
The difference in Scenario D is that we only see patients if they test positive.\n", "\n", "Here's a generator that generates single patients:" ] }, { "cell_type": "code", "execution_count": 42, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def generate_patient_D(p, s, pmf_t):\n", " while True:\n", " # choose t\n", " t = pmf_t.Random()\n", " \n", " # generate patients until positive test\n", " while True:\n", " sick = flip(p)\n", " test = flip(s) if sick else flip(t)\n", " if test:\n", " yield test, sick\n", " break" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here's a simulator that counts the fraction of positive tests that turn out to be sick:" ] }, { "cell_type": "code", "execution_count": 43, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def run_single_simulation(generator, iters=100000):\n", " pmf_t = Pmf([0.2, 0.4])\n", " iterator = generator(0.1, 0.9, pmf_t)\n", "\n", " outcomes = Pmf()\n", " for i in range(iters):\n", " test, sick = next(iterator)\n", " if test:\n", " outcomes[sick] += 1\n", "\n", " outcomes.Normalize()\n", " return outcomes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When we run the simulation, it doesn't look like it converges to `1/4` as it does in the other three scenarios." ] }, { "cell_type": "code", "execution_count": 44, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "False 0.7319500000000001\n", "True 0.26805\n" ] } ], "source": [ "outcomes = run_single_simulation(generate_patient_D)\n", "outcomes.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So how can we analyze this scenario?\n", "\n", "The key is to realize that, as in Scenario D of the Red Die problem, where rolling until we get red tells us nothing about which die we rolled, generating patients until we get a positive test tells us nothing about `t`. The likelihood of the data (a positive test) is 1, regardless of `t`.\n", "\n", "We can compute the probability that the patient is sick by creating a `MetaTest` and updating only the lower level (the `Test` objects) but not the upper level (the distribution of `t`)." ] }, { "cell_type": "code", "execution_count": 45, "metadata": { "collapsed": false }, "outputs": [], "source": [ "metatest = MakeMetaTest(p, s, pmf_t)\n", "for hypo in metatest:\n", " hypo.Update('pos')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After the update, the marginal distribution of `t` is unchanged:" ] }, { "cell_type": "code", "execution_count": 46, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1/5 1/2\n", "2/5 1/2\n" ] } ], "source": [ "Marginal(metatest).Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But the conditional probabilities have been updated:" ] }, { "cell_type": "code", "execution_count": 47, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsick 2/3\n", "sick 1/3\n" ] } ], "source": [ "Conditional(metatest, t1).Print()" ] }, { "cell_type": "code", "execution_count": 48, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsick 4/5\n", "sick 1/5\n" ] } ], "source": [ "Conditional(metatest, t2).Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use `MakeMixture` to compute the weighted average of the conditional distributions. 
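\n", "\n", "Since the marginal distribution of `t` is still `1/2` for each value, the result should be `(1/2)(1/3) + (1/2)(1/5) = 4/15`: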
" ] }, { "cell_type": "code", "execution_count": 49, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsick 11/15\n", "sick 4/15\n" ] } ], "source": [ "MakeMixture(metatest).Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So in Scenario D, a patient who tests positive has a probability of `4/15` of being sick, which is about 26.7%, and consistent with the simulation.\n", "\n", "That's a little higher than in the other three Scenarios, because we have less reason to think that `t` is high." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Scenario D, two patients\n", "\n", "Now let's see what happens with two patients. Here's a generator that generates pairs of patients:" ] }, { "cell_type": "code", "execution_count": 50, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def generate_pair_D(p, s, pmf_t):\n", " while True:\n", " t = pmf_t.Random()\n", " while True:\n", " sick1, sick2 = flip(p), flip(p)\n", " \n", " test1 = flip(s) if sick1 else flip(t)\n", " test2 = flip(s) if sick2 else flip(t)\n", "\n", " if test1 and test2:\n", " yield test1, test2, sick1, sick2\n", " break" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here's what we get when we run the simulation:" ] }, { "cell_type": "code", "execution_count": 51, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(False, False) 0.541945\n", "(False, True) 0.191442\n", "(True, False) 0.191182\n", "(True, True) 0.075431\n" ] } ], "source": [ "outcomes = run_simulation(generate_pair_D, iters=1000000)\n", "outcomes.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It looks like the probability that both patients are sick is higher than `1/16`.\n", "\n", "We can compute the result exactly using the posterior distribution and the same method we used in Scenario B, computing the mixture of two conjunctions:" ] }, { "cell_type": "code", "execution_count": 52, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def MixConjunctions(metatest):\n", " metapmf = Pmf()\n", " for t, prob in Marginal(metatest).Items():\n", " cond = Conditional(metatest, t)\n", " conjunction = cond + cond\n", " metapmf[conjunction] = prob\n", " \n", " return MakeMixture(metapmf)" ] }, { "cell_type": "code", "execution_count": 53, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsicknotsick 122/225\n", "notsicksick 43/225\n", "sicknotsick 43/225\n", "sicksick 17/225\n" ] } ], "source": [ "MixConjunctions(metatest).Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In Scenario D, the probability that both patients are sick is `17/225`, or about 0.0755, which is consistent with the simulation and, again, a little higher than in the other scenarios." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In summary:\n", "\n", " P(sick|pos) P(sicksick|pospos)\n", " Scenario A 1/4 = 25% 1/16 = 6.25%\n", " Scenario B 1/4 = 25% 1/17 ~= 5.88%\n", " Scenario C 1/4 = 25% 1/16 = 6.25%\n", " Scenario D 4/15 ~= 26.7% 17/225 ~= 7.55%\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Symbolic solutions\n", "\n", "One nice thing about Python is that the same code can work with floating-point numbers, rational numbers (`Fraction` objects), and SymPy symbols.\n", "\n", "The following functions solve the various scenarios:" ] }, { "cell_type": "code", "execution_count": 54, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def scenario_a(p, s, pmf_t):\n", " metatest = MakeMetaTest(p, s, pmf_t)\n", " metatest.Update('pos')\n", " single = MakeMixture(metatest)\n", " pair = single + single\n", " return single, pair" ] }, { "cell_type": "code", "execution_count": 55, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsick 3/4\n", "sick 1/4\n", "notsicknotsick 9/16\n", "notsicksick 3/16\n", "sicknotsick 3/16\n", "sicksick 1/16\n" ] } ], "source": [ "single, pair = scenario_a(p, s, pmf_t)\n", "single.Print()\n", "pair.Print()" ] }, { "cell_type": "code", "execution_count": 56, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def scenario_b(p, s, pmf_t):\n", " metatest1 = MakeMetaTest(p, s, pmf_t)\n", " metatest1.Update('pos')\n", " single = MakeMixture(metatest1)\n", " \n", " metatest2 = MakeMetaTest(p, s, Marginal(metatest1))\n", " metatest2.Update('pos')\n", " pair = MixConjunctions(metatest2)\n", " return single, pair" ] }, { "cell_type": "code", "execution_count": 57, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsick 3/4\n", "sick 1/4\n", "notsicknotsick 10/17\n", "notsicksick 3/17\n", "sicknotsick 3/17\n", "sicksick 1/17\n" ] } ], "source": [ "single, pair = scenario_b(p, s, pmf_t)\n", "single.Print()\n", "pair.Print()" ] }, { "cell_type": "code", "execution_count": 58, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def scenario_d(p, s, pmf_t):\n", " metatest = MakeMetaTest(p, s, pmf_t)\n", " for hypo in metatest:\n", " hypo.Update('pos')\n", " single = MakeMixture(metatest)\n", " pair = MixConjunctions(metatest)\n", " return single, pair" ] }, { "cell_type": "code", "execution_count": 59, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsick 11/15\n", "sick 4/15\n", "notsicknotsick 122/225\n", "notsicksick 43/225\n", "sicknotsick 43/225\n", "sicksick 17/225\n" ] } ], "source": [ "single, pair = scenario_d(p, s, pmf_t)\n", "single.Print()\n", "pair.Print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here's the symbolic version:" ] }, { "cell_type": "code", "execution_count": 60, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sympy import symbols\n", "\n", "p, s, q, t1, t2 = symbols(['p', 's', 'q', 't1', 't2'])\n", "pmf_t = Pmf({t1:q, t2:1-q})" ] }, { "cell_type": "code", "execution_count": 61, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def PrintSymSuite(suite):\n", " for hypo, prob in suite.Items():\n", " print(hypo, prob.simplify())" ] }, { "cell_type": "code", "execution_count": 62, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsick (p - 1)*(-q*t1 + t2*(q - 1))/(q*(p*s - t1*(p - 
1)) - (q - 1)*(p*s - t2*(p - 1)))\n", "sick p*s/(q*(p*s - t1*(p - 1)) - (q - 1)*(p*s - t2*(p - 1)))\n", "notsicknotsick (p - 1)**2*(q*t1 - t2*(q - 1))**2/(q*(p*s - t1*(p - 1)) - (q - 1)*(p*s - t2*(p - 1)))**2\n", "sicksick p**2*s**2/(q*(p*s - t1*(p - 1)) - (q - 1)*(p*s - t2*(p - 1)))**2\n", "sicknotsick -p*s*(p - 1)*(q*t1 - t2*(q - 1))/(q*(p*s - t1*(p - 1)) - (q - 1)*(p*s - t2*(p - 1)))**2\n", "notsicksick -p*s*(p - 1)*(q*t1 - t2*(q - 1))/(q*(p*s - t1*(p - 1)) - (q - 1)*(p*s - t2*(p - 1)))**2\n" ] } ], "source": [ "single, pair = scenario_a(p, s, pmf_t)\n", "PrintSymSuite(single)\n", "PrintSymSuite(pair)" ] }, { "cell_type": "code", "execution_count": 63, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsick (p - 1)*(-q*t1 + t2*(q - 1))/(q*(p*s - t1*(p - 1)) - (q - 1)*(p*s - t2*(p - 1)))\n", "sick p*s/(q*(p*s - t1*(p - 1)) - (q - 1)*(p*s - t2*(p - 1)))\n", "notsicknotsick (p - 1)**2*(q*t1**2 + t2**2*(-q + 1))/(q*(p*s - t1*(p - 1))**2 - (q - 1)*(p*s - t2*(p - 1))**2)\n", "sicksick p**2*s**2/(q*(p*s - t1*(p - 1))**2 - (q - 1)*(p*s - t2*(p - 1))**2)\n", "sicknotsick p*s*(p - 1)*(-q*t1 + t2*(q - 1))/(q*(p*s - t1*(p - 1))**2 - (q - 1)*(p*s - t2*(p - 1))**2)\n", "notsicksick p*s*(p - 1)*(-q*t1 + t2*(q - 1))/(q*(p*s - t1*(p - 1))**2 - (q - 1)*(p*s - t2*(p - 1))**2)\n" ] } ], "source": [ "single, pair = scenario_b(p, s, pmf_t)\n", "PrintSymSuite(single)\n", "PrintSymSuite(pair)" ] }, { "cell_type": "code", "execution_count": 64, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "notsick (p - 1)*(-q*t1*(p*s - t2*(p - 1)) + t2*(q - 1)*(p*s - t1*(p - 1)))/((p*s - t1*(p - 1))*(p*s - t2*(p - 1)))\n", "sick p*s*(q*(p*s - t2*(p - 1)) - (q - 1)*(p*s - t1*(p - 1)))/((p*s - t1*(p - 1))*(p*s - t2*(p - 1)))\n", "notsicknotsick (p - 1)**2*(q*t1**2*(p*s - t2*(p - 1))**2 + t2**2*(-q + 1)*(p*s - t1*(p - 1))**2)/((p*s - t1*(p - 1))**2*(p*s - t2*(p - 1))**2)\n", "sicksick p**2*s**2*(q*(p*s - t2*(p - 1))**2 + (-q + 1)*(p*s - t1*(p - 1))**2)/((p*s - t1*(p - 1))**2*(p*s - t2*(p - 1))**2)\n", "sicknotsick p*s*(p - 1)*(-q*t1*(p*s - t2*(p - 1))**2 + t2*(q - 1)*(p*s - t1*(p - 1))**2)/((p*s - t1*(p - 1))**2*(p*s - t2*(p - 1))**2)\n", "notsicksick p*s*(p - 1)*(-q*t1*(p*s - t2*(p - 1))**2 + t2*(q - 1)*(p*s - t1*(p - 1))**2)/((p*s - t1*(p - 1))**2*(p*s - t2*(p - 1))**2)\n" ] } ], "source": [ "single, pair = scenario_d(p, s, pmf_t)\n", "PrintSymSuite(single)\n", "PrintSymSuite(pair)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.1" } }, "nbformat": 4, "nbformat_minor": 0 }