{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "#Statistical Inference for Everyone: Technical Supplement\n", "\n", "\n", "\n", "This document is the technical supplement, for instructors, for [Statistical Inference for Everyone], the introductory statistical inference textbook from the perspective of \"probability theory as logic\".\n", "\n", "\n", "\n", "[Statistical Inference for Everyone]: http://web.bryant.edu/~bblais/statistical-inference-for-everyone-sie.html\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Chapter 1\n", "\n", "## Introduction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### History\n", "\n", "For a much more detailed account, please see (Loredo, 1990; Jaynes, 2003)\n", "\n", "First formal account of the calculation of probabilities from Bernoulli(Bernoulli, 1713), who defined probability as a “degree of certainty”. His theorem states that, if the probability of an event is $p$ then the limiting frequency of that event converges to $p$. It was later, by Bayes and Laplace that the inverse problem was solved: given n occurrences out of $N$ trials, what is the probability $p$ of a single occurrence?\n", "\n", "The solution was published posthumously by Rev. Thomas Bayes (1763), and soon redis- covered, generalized, and applied to astrophysics by Laplace. It is Laplace who really brought probability theory to a mature state, applying it to problems in astrophysics, geology, meteo- rology, and others. One famous application was the determination of the masses of Jupiter and Saturn and the quantification of the uncertainties.\n", "\n", "### Derivation of Bayes' Rule (aka Bayes' Theorem)\n", "\n", "Laplace took as axioms the sum and product rules for probability:\n", "\n", "\n", "\\begin{eqnarray}\n", "p(A|C) + p(\\bar{A}|C) &=& 1 \\\\\\\\\n", "p(AB|C) &=& p(A|BC)p(B|C)\n", "\\end{eqnarray}\n", "\n", "from there, given the obvious symmetry $p(AB|C)=p(BA|C)$ we get \n", "\n", "\\begin{eqnarray}\n", "p(A|BC)p(B) &=& p(B|AC)p(A) \\\\\\\\\n", "p(A|BC) &=& \\frac{p(B|AC)p(A)}{p(B)}\n", "\\end{eqnarray}\n", "\n", "which is Bayes' Theorem.\n", "\n", "### Further History\n", "After Laplace's death, his ideas came under attack by mathematicians. They\n", "criticized two aspects of the theory:\n", "\n", "1. The axioms, although *reasonable*, were not clearly unique for \n", "a definition of probability as vague as ``degrees of plausibility''. The\n", "definition seemed vague, and thus the axioms which support the theory seemed\n", "arbitrary. If one defines probabilities as limiting frequencies of events,\n", "then these axioms are justified.\n", "2. It was unclear how to assign the prior probabilities in the first place.\n", "Bernoulli introduced the *Principle of Insufficient Reason*, which states\n", "that if the evidence does not provide any reason to choose $A_1$ or $A_2$,\n", "then one assigns equal probability to both. If there are $N$ such\n", "propositions, then the probability is assigned $1/N$ for each. Although\n", "again, reasonable, it was not clear how to generalize it to a continuous\n", "variable. \n", "\n", "If one defines probabilities as limiting frequencies of events,\n", "this problem disappears, because the notion of prior probabilities\n", "disappeared, as well as the probability of an hypothesis. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Further History\n", "\n", "After Laplace's death, his ideas came under attack by mathematicians, who\n", "criticized two aspects of the theory:\n", "\n", "1. The axioms, although *reasonable*, were not clearly unique for\n", "a definition of probability as vague as \"degrees of plausibility\"; because the\n", "definition seemed vague, the axioms which support the theory seemed\n", "arbitrary. If one defines probabilities as limiting frequencies of events,\n", "then these axioms are justified.\n", "2. It was unclear how to assign the prior probabilities in the first place.\n", "Bernoulli introduced the *Principle of Insufficient Reason*, which states\n", "that if the evidence does not provide any reason to choose $A_1$ or $A_2$,\n", "then one assigns equal probability to both. If there are $N$ such\n", "propositions, then the probability assigned to each is $1/N$. Although,\n", "again, reasonable, it was not clear how to generalize it to a continuous\n", "variable.\n", "\n", "If one defines probabilities as limiting frequencies of events,\n", "this problem disappears, because the notion of a prior probability\n", "disappears, as does the probability of a hypothesis. A hypothesis is true\n", "or false (1 or 0) for *all* elements of an ensemble or repeated\n", "experiment, and thus has no limiting frequency other than 0 or 1.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Response\n", "\n", "By shifting to a limiting-frequency definition, researchers avoided the issues\n", "above, and did not vigorously pursue their direct solution. The solutions did\n", "come, however.\n", "\n", "In the mid-1900s, R. T. Cox (1946, 1961) and E. T. Jaynes (1957, 1958)\n", "demonstrated that, from a small collection of reasonable \"desiderata\" (aka\n", "axioms), one could develop a complete and rigorous mathematical theory from\n", "\"degrees of plausibility\". These \"desiderata\" are:\n", "\n", "* Degrees of plausibility are represented by real numbers\n", "* Qualitative correspondence with common sense, consistent with deductive\n", "logic in the limit of true and false propositions\n", "* Consistency\n", "  1. If a conclusion can be reasoned out in more than one way, every possible\n", "  way must lead to the same result\n", "  2. The theory must use all of the information provided\n", "  3. Equivalent states of knowledge must be represented by equivalent\n", "  plausibility assignments\n", "\n", "With just these, it can be shown that Laplace's original methods of using Bayes'\n", "theorem were the correct ones. It can also be shown that **any theory of\n", "probability is either Bayesian, or violates one of the above desiderata**.\n", "\n", "The concern about assigning prior probabilities was answered in the work of\n", "Shannon and Jaynes, with the advent of maximum entropy methods and the methods\n", "of transformation groups.\n", "\n", "As a note, it is worth quoting Loredo, 1990:\n", "\n", "> It is worth emphasizing that probabilities are *assigned*, not *measured*. This is because probabilities are measures of plausibilities of propositions; they thus reflect whatever information one may have bearing on the truth of propositions, and are not properties of the propositions themselves.\n", "\n", "> ...\n", "\n", "> In this sense, Bayesian Probability Theory is 'subjective': it describes\n", "states of knowledge, not states of nature. But it is 'objective' in that we\n", "insist that equivalent states of knowledge be represented by equal\n", "probabilities, and that problems be well-posed: enough information must be\n", "provided to allow unique, unambiguous probability assignments.\n", "\n", "Although there isn't a unique solution for converting verbal descriptions into\n", "prior probabilities in *every* case, the current methods allow this\n", "translation in many very useful cases.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Procedure\n", "\n", "In all of the cases that follow, there is a common procedure. We want to\n", "estimate parameters in a model, so we write down a probability distribution\n", "for those parameters dependent on the data, and any available information,\n", "$I$. If we have one parameter in the model, then the form is:\n", "\n", "\\begin{eqnarray}\n", "p({\\rm parameter}|{\\rm data},I)\n", "\\end{eqnarray}\n", "\n", "We apply Bayes' theorem/rule to write it in terms of things we have a handle on:\n", "\n", "\\begin{eqnarray}\n", "p({\\rm parameter}|{\\rm data},I) &=& \\frac{p({\\rm data}|{\\rm parameter},I)p({\\rm parameter}|I)}{p({\\rm data}|I)}\n", "\\end{eqnarray}\n", "\n", "The left-hand side and the top two terms on the right have names, and the bottom term is a\n", "normalization constant (which we will often omit, working with proportionality instead).\n", "\n", "\\begin{eqnarray}\n", "\\overbrace{p({\\rm parameter}|{\\rm data},I)}^{\\rm posterior} &=& \\frac{\\overbrace{p({\\rm data}|{\\rm parameter},I)}^{\\rm likelihood}\\overbrace{p({\\rm parameter}|I)}^{\\rm prior}}{\\underbrace{p({\\rm data}|I)}_{\\rm normalization}} \\\\\\\\\n", "&\\propto& \\overbrace{p({\\rm data}|{\\rm parameter},I)}^{\\rm likelihood}\\overbrace{p({\\rm parameter}|I)}^{\\rm prior}\n", "\\end{eqnarray}\n", "\n", "The likelihood describes how the data could be generated from the model. The prior is\n", "a weighting of the parameter possibilities, before we see the data.\n", "\n", "All of the information in the problem is contained in the posterior, once it is\n", "written down. After that, it is a matter of working with that distribution to\n", "obtain the estimate. Often we take the maximum of the posterior, but we can also\n", "take the mean, median, or any other central measure. We can look at standard\n", "deviations to determine confidence intervals, but we can also look at\n", "quartiles. We will often look at the log of the posterior, as an analytical\n", "trick. When all else fails, we can find the estimate numerically from the\n", "posterior.\n" ] },
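{ "cell_type": "markdown", "metadata": {}, "source": [ "As a minimal sketch of this procedure, the cell below evaluates the posterior for a single parameter on a grid, as the product of likelihood and prior, normalizes it, and summarizes it. The data (3 successes in 12 trials) and the uniform prior are purely illustrative assumptions, not an example from the text.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n", "\n", "# Sketch of the common procedure: posterior ~ likelihood * prior,\n", "# evaluated on a grid and then summarized.  The data (3 successes in\n", "# 12 trials) and the flat prior are illustrative assumptions.\n", "\n", "h, N = 3, 12                              # hypothetical data\n", "theta = np.linspace(0, 1, 1001)           # grid over the parameter\n", "dtheta = theta[1] - theta[0]\n", "\n", "prior = np.ones_like(theta)               # p(parameter|I): uniform\n", "like = theta**h * (1 - theta)**(N - h)    # p(data|parameter,I)\n", "\n", "post = like * prior                       # unnormalized posterior\n", "post /= post.sum() * dtheta               # normalize to integrate to 1\n", "\n", "print(\"maximum of the posterior: %.3f\" % theta[np.argmax(post)])\n", "print(\"mean of the posterior:    %.3f\" % (np.sum(theta * post) * dtheta))" ] },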
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Orthodox Hypothesis Tests\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Much of orthodox hypothesis testing can be interpreted in a much more\n", "straightforward way with the Bayesian approach. For example, the $p$ value\n", "calculated in the orthodox fashion is *\"the probability, computed assuming\n", "that the null hypothesis $H_o$ is true, of observing a value of the test\n", "statistic that is at least as extreme as the value actually computed from the\n", "data\"* (Bowerman and O'Connell, 2003). In the orthodox method, if you want to\n", "infer from the data that the mean value is, say, greater than zero, you set up\n", "the null hypothesis $H_o: \\mu\\le 0$ and the alternative $H_a: \\mu>0$, select the\n", "appropriate statistic ($z$, $t$, etc$\\ldots$), and calculate the $p$-value of the\n", "null, using hypothetical data to find how often a result at least as extreme as\n", "the observed one would occur if $H_o$ were true. Finally, you reject the null at\n", "the chosen level of significance, usually the 5% level. No wonder students get\n", "confused!\n", "\n", "In the Bayesian way, we take the posterior distribution for the parameter,\n", "$\\mu$, and then ask *\"what is the probability that $\\mu$ is greater than 0?\"*,\n", "integrate the posterior probability distribution from 0 to infinity, and get\n", "that probability directly. In many applications, *the two approaches give\n", "identical numerical results!*\n" ] },
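{ "cell_type": "markdown", "metadata": {}, "source": [ "A minimal sketch of that calculation, assuming a flat prior and a known $\\sigma$ so that the posterior for $\\mu$ is Gaussian: the probability that $\\mu>0$ is obtained by summing the normalized posterior above zero. The data values and $\\sigma$ below are hypothetical.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n", "\n", "# Sketch: answer \"what is the probability that mu > 0?\" directly from\n", "# the posterior.  Assumes a flat prior and a known sigma, so the\n", "# posterior for mu is Normal(xbar, sigma/sqrt(n)).  The data and sigma\n", "# are hypothetical.\n", "\n", "data = np.array([0.3, -0.1, 0.8, 0.4, 0.2, 0.6, -0.2, 0.5])\n", "sigma = 0.4                               # assumed known\n", "n, xbar = len(data), data.mean()\n", "\n", "mu = np.linspace(-2, 2, 4001)             # grid over the parameter\n", "se = sigma / np.sqrt(n)\n", "post = np.exp(-(mu - xbar)**2 / (2 * se**2))\n", "post /= post.sum()                        # normalize on the grid\n", "\n", "print(\"p(mu > 0 | data) = %.4f\" % post[mu > 0].sum())" ] },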
{ "cell_type": "markdown", "metadata": {}, "source": [ "(Jeffreys, 1939) put it in the following way:\n", "\n", "> What the use of $p$ implies, therefore, is that a hypothesis that may be\n", "true may be rejected because it has not predicted observable results that have\n", "not occurred. This seems a remarkable procedure. On the face of it the fact\n", "that such results have not occurred might more reasonably be taken as\n", "evidence for the law, not against it.\n", "\n", "Another comment, this time from Jaynes:\n", "\n", "> If\n", "we reject $H_o$, then we should reject probabilities conditional on $H_o$\n", "being true. The $p$-value is such a probability, so the procedure invalidates\n", "itself.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---------------------" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "\n", "\n" ], "text/plain": [ "" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from IPython.core.display import HTML\n", "\n", "\n", "def css_styling():\n", "    # load the notebook's custom stylesheet and apply it\n", "    styles = open(\"../styles/custom.css\", \"r\").read()\n", "    return HTML(styles)\n", "\n", "css_styling()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.9" } }, "nbformat": 4, "nbformat_minor": 0 }