{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Homework 2\n", "\n", "## Intro to Statistics\n", "
\n", "This notebook is arranged in cells. Texts are usually written in the markdown cells, and here you can use html tags (make it bold, italic, colored, etc). You can double click on this cell to see the formatting.
\n", "
\n", "The ellipsis (...) are provided where you are expected to write your solution but feel free to change the template (not over much) in case this style is not to your taste.
\n", "
\n", "Hit \"Shift-Enter\" on a code cell to evaluate it. Double click a Markdown cell to edit.

\n", " Problems are directly taken from MacKay Chapter 3 (http://www.inference.org.uk/itprnn/book.pdf). We recommend you to read Chapter 3 before starting HW2. Problem 2 could be challenging. If you have any questions/concerns, talk to us during discussion/office hours.
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***\n", "### Link Okpy" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from client.api.notebook import Notebook\n", "ok = Notebook('hw2.ok')\n", "_ = ok.auth(inline = True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Imports" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import numpy as np\n", "from scipy.integrate import quad\n", "#For plotting\n", "import matplotlib.pyplot as plt\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 1 - Inferring a Decay Constant\n", "\n", "Unstable particles are emitted from a source and decay at a distance $x$, a real number that has an exponential probability distribution with characteristic length $\\lambda$. Decay events can be observed only if they occur in a window extending from $x$ = 1cm to $x$ = 20cm. $N$ decays are observed at locations {$x_1$, ... , $x_N$}. What is $\\lambda$?
\n", "![alt text](decay.png \"Title\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Given $\\lambda$, the probability of observing a particle at a distance $x$ is:
\n", "\n", "\\begin{equation}\n", "P(x\\ |\\ \\lambda) = \n", "\\begin{cases}\n", "\\frac{1}{\\lambda}e^{-x/\\lambda}\\big/\\ Z(\\lambda)\\ \\ \\ \\ \\ \\ a < x < b \\\\\n", "0 \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\mathrm{otherwise}\n", "\\end{cases}\n", "\\end{equation}\n", "\n", "where\n", "$$ Z(\\lambda) = \\int_a^b dx \\frac{1}{\\lambda}e^{-x/\\lambda} = \\big( e^{-a/\\lambda} - e^{-b/\\lambda} \\big). $$\n", "Here, $a = 1,\\ b = 20$.\n", "

\n", " 1. Write a function for $Z(\\lambda)$. Then, use it to write another function for $P(x|\\lambda)$.
\n", "
\n", "Henceforth, we refer to $\\lambda$ as $L$ (for the sake of simplicity).
\n", "
\n", "Check if your function can return a correct value if either $x$ or $L$ is a 2D array. Say $x$ is a scalar, and $L$ is a vector with $N$ elements. If you calculate the product $x*L$, the dimension of $x$ is stretched to $N \\times 1$ in order to match that of $L$ ([broadcasting](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)). If $x$ and $L$ are both vectors, then they must have the same dimensions to perform arithmetic operations on them ($x*L$, $x$+$L$, $x$/$L$, etc)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def Z(L, a, b):\n", " '''Normalizing constant function for a characteristic length L, assuming that the data are\n", " truncated between x = a and x = b.\n", " '''\n", " return ...\n", "\n", "# Z = quad(lambda y, L: 1./L*exp(-y/L), a, b, args = (L, ))[0] will return the same value.\n", "\n", "def pdf_decay(x, L, a, b):\n", " 'Probability of one data point, given L'\n", " return ..." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " 2. Plot $P(x|\\lambda)$ as a function of $x$ for $\\lambda = 2, 5, 10$. Make sure to label each plot. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Create arrays for x and L.\n", "x = np.linspace(1e-1, 20, 100)\n", "L = ...\n", "\n", "# Plot the probability desity as a function of x for each lambda.\n", "# Hint: You can use a for-loop and make a plot for each element of an array L. (But you don't\n", "#have to do it in this way.)\n", "# Hint2: You should label each plot. To do this in a for-loop, you should remember that you can\n", "#insert values into a string with the placeholder % (https://docs.python.org/2.4/lib/typesseq-strings.html).\n", "\n", "...\n", "\n", "plt.xlim(1, 20)\n", "plt.xticks(np.append(np.array([1]), np.arange(2, 20+1, 2)))\n", "plt.ylim(0, 0.3)\n", "plt.xlabel(' ... ')\n", "plt.ylabel(' ... 
')\n", "plt.legend()\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " 3. Plot $P(x|\lambda)$ as a function of $\lambda$ for $x = 3, 5, 12$. (This function is known as the likelihood of $\lambda$.) Make sure to label each plot. Note that a peak emerges in each plot. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Create arrays for x and L.\n", "x = ...\n", "L = np.logspace(-1, 2, 100)\n", "\n", "# Plot the probability density as a function of L for each x. Label each plot.\n", "...\n", "\n", "plt.xlim(1.e-1, 1.e2)\n", "plt.ylim(0, 0.2)\n", "plt.xlabel(' ... ')\n", "plt.ylabel(' ... ')\n", "plt.legend()\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " 4. Plot $P(x|\lambda)$ as a function of $x$ and $\lambda$. Create a surface plot. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Import packages for making a 3D plot\n", "from mpl_toolkits.mplot3d import Axes3D\n", "from matplotlib import cm" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Create arrays for x and L. These define your \"x\" and \"y\" coordinates.\n", "x = np.linspace(1, 3, 30)\n", "L = np.logspace(-0.5, 1, 30)\n", "# Create coordinate matrices from coordinate vectors.\n", "x, L = np.meshgrid(x, L)\n", "\n", "# Evaluate probability densities at all (x,y) coordinates. 
This is your \"z\" coordinate.\n", "z = ...\n", "\n", "# Make plot\n", "fig = plt.figure(figsize = (12,8))\n", "ax = fig.gca(projection='3d')\n", "surf = ax.plot_surface(x, L, z, vmax=3, rstride=1, cstride=1, cmap=cm.coolwarm, linewidth=0, antialiased=True)\n", "\n", "# Add contour plots\n", "cset = ax.contour(x, L, z, 10, zdir='x', offset=0.9, cmap=cm.Set1)\n", "cset = ax.contour(x, L, z, 20, zdir='y', offset=10.5, cmap=cm.Set1)\n", "\n", "ax.set_xlim(0.9, 3)\n", "ax.set_ylim(1.e-1, 1.e1)\n", "\n", "ax.set_xlabel(' ... ')\n", "ax.set_ylabel(' ... ')\n", "ax.set_zlabel(' ... ')\n", "\n", "fig.colorbar(surf, shrink=0.5, aspect=5)\n", "plt.show()\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the above figure, two contour plots (constant $x$ and $y$ slices) are also included. Compare them to the figures you created in part 2 and 3. They are the same; they correspond to vertical sections through surface.
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now write Bayes' theorem:
\n", "\n", "\\begin{align}\n", "P(\\lambda\\ |\\ \\{x_1, ..., x_N\\}) & = \\frac{P(\\{x\\}|\\lambda)P(\\lambda)}{P(\\{x\\})} \\\\\n", "& \\propto \\frac{1}{(\\lambda Z(\\lambda))^N}\\ \\mathrm{exp} \\big( -\\sum_1^N x_n/\\lambda \\big) P(\\lambda)\n", "\\end{align}\n", "
\n", " 5. Define the likelihood function $P(\\{x\\}|\\lambda)$ and plot $P(\\{x \\} = \\{1.5, 2, 3, 4, 5, 12\\}|\\lambda)$ as a function of $\\lambda$. Estimate the peak posterior value of $\\lambda$ and the error on $\\lambda$ by fitting to a gaussian at the peak. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def likelihoodP(x, L):\n", " 'The likelihood function given a dataset (x array) and a characteristic length L'\n", " ...\n", " return ..." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Create an array for L. Assume that it is evenly spaced numbers over the interval (1e-1, 1e2).\n", "L = np.logspace(-1, 2, 1000)\n", "# Create an array for x.\n", "x = ...\n", "\n", "# Evaluate the likelihood function and plot it as a function of L\n", "P = ...\n", "\n", "# Make plot\n", "plt.semilogx( ... )\n", "\n", "plt.xlim(1.e-1,1.e+2)\n", "plt.ylim(0, 1.4e-6)\n", "plt.xlabel(' ... ')\n", "plt.ylabel(' ... 
')\n", "plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Estimate the peak posterior value of L (Hint - https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html)\n", "max_L = ...\n", "\n", "print(\"The peak posterior value of the characteristic length is = \", max_L)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Estimate the error on L by fitting a Gaussian at the peak\n", "# Import packages for curve fitting\n", "from scipy.optimize import curve_fit\n", "\n", "# Create an array of L near max_L\n", "L = np.linspace(max_L-0.2, max_L+0.2, 100)\n", "x = ...\n", "\n", "# Define Gaussian function with arbitrary amplitude (See https://en.wikipedia.org/wiki/Normal_distribution)\n", "def gaussian(x, Amp, mu, sig):\n", " return ...\n", "\n", "# Fit a Gaussian function to the data\n", "#(https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html)\n", "# You can use different packages if you wish. This is only a suggestion.\n", "popt, pcov = curve_fit( ... )\n", "\n", "# Plot both data and fit\n", "plt.plot( ... , 'b-', label = 'likelihood')\n", "plt.plot( ... , 'r--', label='fit')\n", "plt.legend()\n", "plt.show()\n", "\n", "Error = ...\n", "print(\"The error on L is estimated to be = \", Error)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "***" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 2 - Biased Coin\n", "\n", "When spun on edge 256 times, a Belgian one-euro coin came up heads 142 times and tails 114. Do these data give evidence that the coin is biased rather than fair?
\n", "
\n", "We compare the models $\\mathcal{H}_0$ - the coin is fair - and $\\mathcal{H}_1$ - the coin is biased.
\n", "
\n", "First, suppose that the model $\\mathcal{H}_1$ assumes a uniform prior distribution for $p$ (the probability of getting heads in a single toss): $P(p|\\mathcal{H}_1) = 1$.
\n", "
\n", "Let the data $D$ be a sequence which contains counts of the two possible outcomes (H - head / T - tail): e.g. HHTHT, HHHTTHTT, etc.
\n", "
\n", "Given a particular $p$, the probability that $F$ tosses results in a sequence $D$ of $F_H$ heads and $F_T$ tails is:\n", "$$ P(D|p,\\mathcal{H}_1) = p^{F_H} (1-p)^{F_T}. $$\n", "
\n", "Then,\n", "$$ P(D|\\mathcal{H}_1) = \\int_0^1 dp\\ p^{F_H} (1-p)^{F_T} = \\frac{\\Gamma(F_H+1)\\Gamma(F_T+1)}{\\Gamma(F_H+F_T+2)} .$$\n", "Note that the above integral is a \"Beta function\" $B(F_H+1, F_T+1)$ and can be written in terms of the gamma function. (See http://www.math.uah.edu/stat/special/Beta.html)
\n", "
\n", "The gamma function is an extension of the factorial function $\\Gamma(n+1) = n!$

\n", "$$ \\frac{\\Gamma(F_H+1)\\Gamma(F_T+1)}{\\Gamma(F_H+F_T+2)} = \\frac{F_H! F_T!}{(F_H+F_T+1)!} $$\n", "
\n", "Similarly,\n", "$$ P(D|\\mathcal{H}_0) = \\big(\\frac{1}{2}\\big)^F. $$\n", "
\n", " 1. Find the likelihood ratio $\\frac{P(D|\\mathcal{H}_1)}{P(D|\\mathcal{H}_0)}$, assuming the uniform prior of $\\mathcal{H}_1$. Which model does the data favor?
\n", "
\n", "(Hint: If the argument of the gamma function is large, math.gamma() overflows. You can prevent this by using the fact:\n", "$$ log(xy/z) = log(x)+log(y)-log(z) $$
\n", "Then, you can evaluate $P = \\Gamma(x)*\\Gamma(y)/\\Gamma(z)$ in the following way:\n", "$$ Q = log(P) = log(\\Gamma(x))+log(\\Gamma(y))-log(\\Gamma(z)) $$\n", "$$ P = e^Q $$\n", "
\n", "You can easily evaluate logarithm of the gamma function using \"lgamma\" (from math import lgamma) see https://docs.python.org/2/library/math.html)
\n", "
\n", "(Hint2: For reference, you can read: https://en.wikipedia.org/wiki/Bayes_factor)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "F = ...; F_H = ...; F_T = F - F_H\n", "\n", "from math import factorial, log, exp, lgamma\n", "\n", "Likelihood_H1 = ...\n", "Likelihood_H0 = ...\n", "\n", "ratio = ...\n", "\n", "print(\"The likelihood ratio is = \", ratio)\n", "print(\"The data give evidence in favor of ...\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Instead of assuming a uniform prior, suppose that we add a small bias, and consequently the prior were presciently set:
\n", "$$ P(p|\\mathcal{H}_1, \\alpha) = \\frac{1}{Z(\\alpha)}p^{\\alpha-1}(1-p)^{\\alpha-1},\\ \\ \\mathrm{where}\\ \\ Z(\\alpha) = \\Gamma(\\alpha)^2/\\Gamma(2\\alpha) $$\n", "
\n", " 2. Find the likelihood ratio $\\frac{P(D|\\mathcal{H}_1)}{P(D|\\mathcal{H}_0)}$, assuming the above prior of $\\mathcal{H}_1$. Let $\\alpha$ = \\{ .37, 1.0, 2.7, 7.4, 20, 55, 148, 403, 1096 \\}.
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " Answer: " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "alpha = ...\n", "\n", "def likelihood_ratio(F_H, F_T, alpha):\n", " ...\n", " return ...\n", "\n", "ratio = np.zeros_like(alpha)\n", "for i in range(len(alpha)):\n", " ratio[i] = likelihood_ratio(F_H, F_T, alpha[i])\n", "\n", "print(\"For alpha = \", alpha, \", the likelihood ratios are = \", np.around(ratio, decimals=2), \"respectively.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " 3. Does the likelihood ratio for $\\mathcal{H}_1$ over $\\mathcal{H}_0$ increases as $\\alpha$ increases?
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " Answer:
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " 4. Now, let $\\mathcal{H}_1$ be the model in which the probability of getting heads is descrete at 142/256. What is the likelihood in this case?
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "Likelihood_H1 = ...\n", "Likelihood_H0 = ...\n", "\n", "ratio = ...\n", "\n", "print(\"The likelihood ratio is = \", ratio)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " 5. Explain the above result.
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " Answer:
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " 6. Now let us test the null hypothesis. Assuming the central limit theorem, we model the binomial as a gaussian centered at $\\mu = F/2$ and with the width given by $\\sigma^2 = F*(p_{heads})*(p_{tails})$. (in this case, $p_{heads} = p_{heads} = 1/2$)
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Define mu (mean) and sigma (square root of the variance)\n", "mu = ...\n", "sigma = np.sqrt( ... )\n", "\n", "# Calculate Z.\n", "Z = abs(F_H-mu)/sigma\n", "\n", "print(\"F_H is %.2f sigma away from the mean.\" %Z)\n", "\n", "# Integrate a normal distribution from x=F_H to x=np.inf (See https://en.wikipedia.org/wiki/Normal_distribution)\n", "def gaussian_normalized(x, mu, sigma):\n", " return ...\n", "\n", "pvalue = quad( ... )[0]*100\n", "\n", "print(\"The p-value is %.2f percent.\" %pvalue)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Problem 3 - Monty Hall\n", "\n", "On a game show, a contestant is told the rules as follows:
\n", "
\n", "There are three doors, labelled 1, 2, 3. A single prize has been hidden behind one of them. You get to select one door. Initially your chosen door will not be opened. Instead, the gameshow host will open one of the other two doors, and he will do so in such a way as not to reveal the prize. For example, if you first choose door 1, he will then open one of doors 2 and 3, and it is guaranteed that he will choose which one to open so that the prize will not be revealed.
\n", "
\n", "At this point, you will be given a fresh choice of door: you can either stick with your first choice, or you can switch to the other closed door. All the doors will then be opened and you will receive whatever is behind your final choice of door.
\n", "
\n", "Imagine that the contestant chooses door 1 first; then the gameshow host opens door 2, revealing nothing behind the door, as promised. Should the contestant (a) stick with door 1, or (b) switch to door 3, or (c) does it make no difference?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let $\\mathcal{H}_i$ denote the hypothesis that the prize is behind door $i$. We make the following assumptions: the three hypotheses $\\mathcal{H}_1, \\mathcal{H}_2, \\mathcal{H}_3$ are equiprobable a priori, i.e.,
\n", "
\n", "$$ P(\\mathcal{H}_1) = P(\\mathcal{H}_2) = P(\\mathcal{H}_3) = \\frac{1}{3} $$\n", "
\n", "The datum we receive, after choosing door 1, is one of $D$ = 3 and $D$ = 2 (meaning door 3 or 2 is opened, respectively).
\n", " 1. Find $P(D=2|\\mathcal{H}_1), P(D=3|\\mathcal{H}_1), P(D=2|\\mathcal{H}_2), P(D=3|\\mathcal{H}_2), P(D=2|\\mathcal{H}_3), P(D=3|\\mathcal{H}_3)$.
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " Answer:
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, using Bayes’ theorem, we evaluate the posterior probabilities of the hypotheses:
\n", "$$ P(\\mathcal{H}_i|D=2) = \\frac{P(D=2|\\mathcal{H}_i)P(\\mathcal{H}_i)}{P(D=2)} $$
\n", "
\n", " 2. First, we need to calculate the normalizing constant (denominator). Find $P(D=2), P(D=3)$
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " Answer:
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " 3. Evaluate the posterior probability and argue if the contestant should switch to door 3.
\n", "
Alternatively, you can perform a thought experiment in which the game is played with 100 doors. The rules are now that the contestant chooses one door, then the game show host opens 98 doors in such a way as not to reveal the prize, leaving the contestant’s selected door and one other door closed. The contestant may now stick or switch. Where do you think the prize is?
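As an empirical companion to the Bayesian argument, the stick-vs-switch win rates can be estimated by simulation (an illustrative sketch; door indexing, tie-breaking, and trial count are arbitrary choices, and the deterministic tie-break for the host does not affect the win rates):

```python
import random

random.seed(0)
trials = 100000
stick_wins = 0
switch_wins = 0
for _ in range(trials):
    prize = random.randrange(3)
    choice = 0  # the contestant always picks door 1 (index 0)
    # The host opens a door that is neither the chosen door nor the prize.
    opened = next(d for d in range(3) if d != choice and d != prize)
    switched = next(d for d in range(3) if d != choice and d != opened)
    stick_wins += (choice == prize)
    switch_wins += (switched == prize)
print(stick_wins / float(trials), switch_wins / float(trials))  # ~1/3 vs ~2/3
```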

" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " Answer:
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***\n", "Imagine that the game happens again and just as the gameshow host is about to open one of the doors a violent earthquake rattles the building and one of the three doors flies open. It happens to be door 3, and it happens not to have the prize behind it. The contestant had initially chosen door 1.

\n", "Repositioning his toupee, the host suggests, ‘OK, since you chose door 1 initially, door 3 is a valid door for me to open, according to the rules of the game; I’ll let door 3 stay open. Let’s carry on as if nothing happened.’\n", "Should the contestant stick with door 1, or switch to door 2, or does it make no difference? Assume that the prize was placed randomly, that the gameshow host does not know where it is, and that the door flew open because its latch was broken by the earthquake.

\n", "[A similar alternative scenario is a gameshow whose confused host forgets the rules, and where the prize is, and opens one of the unchosen doors at random. He opens door 3, and the prize is not revealed. Should the contestant choose what’s behind door 1 or door 2? Does the optimal decision for the contestant depend on the contestant’s beliefs about whether the gameshow host is confused or not?]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***\n", "If door 3 is opened by an earthquake, the inference comes out differently – even though visually the scene looks the same. The nature of the data, and the probability of the data, are both now different. The possible data outcomes are, firstly, that any number of the doors might have opened. We could label the eight possible outcomes d = (0,0,0),(0,0,1),(0,1,0),(1,0,0),(0,1,1),...,(1,1,1).

\n", "Secondly, it might be that the prize is visible after the earthquake has opened one or more doors. So the data $D$ consists of the value of d, and a statement of whether the prize was revealed. It is hard to say what the probabilities of these outcomes are, since they depend on our beliefs about the reliability of the door latches and the properties of earthquakes, but it is possible to extract the desired posterior probability without naming the values of $P($d$|\\mathcal{H}_i)$ for each d.

\n", "All that matters are the relative values of the quantities $P(D|\\mathcal{H}_1)$, $P(D|\\mathcal{H}_2)$, $P(D|\\mathcal{H}_3)$, for the value of $D$ that actually occurred. The value of $D$ that actually occurred is ‘d = (0, 0, 1), and no prize visible’.\n", "\n", " 4. How does $P(D|\\mathcal{H}_1)$ compare with $P(D|\\mathcal{H}_2)$? What is $P(D|\\mathcal{H}_3)$? Find $P(D|\\mathcal{H}_1)/P(D)$ and $P(D|\\mathcal{H}_2)/P(D)$.
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " Answer:
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " 5. Evaluate the posterior probability and argue if the contestant should switch.
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " Answer:
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## To Submit\n", "Execute the following cell to submit.\n", "If you make changes, execute the cell again to resubmit the final copy of the notebook, they do not get updated automatically.
\n", "__We recommend that all the above cells should be executed (their output visible) in the notebook at the time of submission.__
\n", "Only the final submission before the deadline will be graded. \n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "_ = ok.submit()\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.13" } }, "nbformat": 4, "nbformat_minor": 2 }