{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Lecture 14: Location, Scale and LOTUS\n", "\n", "\n", "## Stat 110, Prof. Joe Blitzstein, Harvard University\n", "\n", "----" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## Standard Normal (from last time...)\n", "\n", "- $\\mathcal{Z} \\sim \\mathcal{N}(0,1)$\n", "- PDF $\\frac{1}{\\sqrt{2\\pi}} ~~ e^{-\\frac{z^2}{2}}$\n", "- CDF $\\Phi$\n", "- Mean $\\mathbb{E}(\\mathcal{Z}) = 0$\n", "- Variance $\\operatorname{Var}(\\mathcal{Z}) = \\mathbb{E}(\\mathcal{Z}^2) = 1$\n", "- Skew (3rd moment) $\\mathbb{E}(\\mathcal{Z^3}) = 0$ (odd moments are 0 since they are odd functions)\n", "- $-\\mathcal{Z} \\sim \\mathcal{N}(0,1)$ (by symmetry; this simply flips the bell curve about its mean)\n", "\n", "... and with the standard normal distribution under our belts, we can now turn to the more general form. \n", "\n", "But first let's revisit variance once more and extend what we know.\n", "\n", "----" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Rules on Variance\n", "\n", "\\begin{align}\n", " & \\text{[1]} & \\operatorname{Var}(X) &= \\mathbb{E}( (X - \\mathbb{E}X)^2 ) \\\\\n", " & & &= \\mathbb{E}X^2 - (\\mathbb{E}X)^2 \\\\\n", " \\\\\n", " & \\text{[2]} & \\operatorname{Var}(X+c) &= \\operatorname{Var}(X) \\\\\n", " \\\\\n", " & \\text{[3]} & \\operatorname{Var}(cX) &= c^2 ~~ \\operatorname{Var}(X) \\\\\n", " \\\\\n", " & \\text{[4]} & \\operatorname{Var}(X+Y) &\\neq \\operatorname{Var}(X) + \\operatorname{Var}(Y) ~~ \\text{in general} \n", "\\end{align}\n", "\n", "* We already know $\\text{[1]}$\n", "* Re $\\text{[2]}$, adding a constant $c$ has no effect on $\\operatorname{Var}(X)$.\n", "* Re $\\text{[3]}$, pulling out a scaling constant $c$ means you have to square it.\n", "* $\\operatorname{Var}(X) \\ge 0$; $\\operatorname{Var}(X)=0$ if and only if $P(X=a) = 1$ for some $a$... _variance can never be negative!_\n", "* Re $\\text{[4]}$, unlike expected value, variance is _not_ linear. But if $X$ and $Y$ are independent, then $\\operatorname{Var}(X+Y) = \\operatorname{Var}(X) + \\operatorname{Var}(Y)$.\n", "\n", "As a case in point for (4), consider\n", "\n", "\\begin{align}\n", " \\operatorname{Var}(X + X) &= \\operatorname{Var}(2X) \\\\\n", " &= 4 ~~ \\operatorname{Var}(X) \\\\\n", " &\\neq 2 ~~ \\operatorname{Var}(X) & \\quad \\blacksquare \\\\\n", "\\end{align}\n", "\n", "... and now we know enough about variance to return back to the general form of the normal distribution.\n", "\n", "----" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## General Normal Distribution\n", "\n", "### Description\n", "\n", "Let $X = \\mu + \\sigma \\mathcal{Z}$, where\n", "\n", "- $\\mu \\in \\mathbb{R}$ (also known as _location_)\n", "- $\\sigma \\gt 0$ (_standard deviation_, also known as _scale_)\n", "\n", "Then $X \\sim \\mathcal{N}(\\mu, \\sigma^2)$\n", "\n", "### Expected value\n", "\n", "It follows immediately that\n", "\n", "\\begin{align}\n", " \\mathbb{E}(X) &= \\mu\n", "\\end{align}\n", "\n", "### Variance\n", "\n", "From what we know about variance,\n", "\n", "\\begin{align}\n", " \\operatorname{Var}(\\mu + \\sigma \\mathcal{Z}) &= \\sigma^2 ~~ \\operatorname{Var}(\\mathcal{Z}) \\\\\n", " &= \\sigma^2\n", "\\end{align}\n", "\n", "----" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Standardization\n", "\n", "Solving for $\\mathcal{Z}$, we have\n", "\n", "\\begin{align}\n", " \\mathcal{Z} &= \\frac{X - \\mu}{\\sigma}\n", "\\end{align}\n", "\n", "### CDF & PDF\n", "\n", "For the general normal distribution, we can _standardize_ it to allow us to obtain both cdf and pdf.\n", "\n", "Given $X \\sim \\mathcal{N}(\\mu, \\sigma^2)$, we can get the cdf and pdf\n", "\n", "\\begin{align}\n", " \\text{cdf} ~~ P(X \\le x) &= P\\left(\\frac{X-\\mu}{\\sigma} \\le \\frac{x - \\mu}{\\sigma}\\right) \\\\\n", " &= \\Phi \\left(\\frac{x-\\mu}{\\sigma} \\right) \\\\\n", " \\\\\n", " \\Rightarrow \\text{pdf} ~~ \\Phi' \\left(\\frac{x-\\mu}{\\sigma} \\right) &= \\frac{1}{\\sigma} ~~ \\frac{1}{\\sqrt{2\\pi}} ~~ e^{-\\frac{\\left(\\frac{x-\\mu}{\\sigma}\\right)^2}{2}}\n", "\\end{align}\n", "\n", "### $-X$\n", "\n", "We can also do $-X$, but apply what we've just covered.\n", "\n", "\\begin{align}\n", " -X &= -\\mu + \\sigma (-\\mathcal{Z}) \\sim \\mathcal{N}(-\\mu, \\sigma^2)\n", "\\end{align}\n", "\n", "\n", "### Linearity?\n", "\n", "Later we will show that if $X_j \\sim \\mathcal{N}(\\mu, \\sigma^2)$ are independent (consider $j \\in {1,2}$), then $X_1 + X_2 \\sim \\mathcal{N}(\\mu_1 + \\mu_2, \\sigma_1^2 + \\sigma_2^2)$.\n", "\n", "\n", "### $\\Phi$ and the 68-95-98.7% Rule\n", "\n", "Since $\\Phi$ cannot be computed in terms of other functions, we have the 68-95-98.7% Rule.\n", "\n", "If $X \\sim \\mathcal{N}(\\mu, \\sigma^2)$, then as a rule of thumb $\\Phi$ takes on the following values with relation to $\\sigma$:\n", "\n", "\\begin{align}\n", " P(\\lvert X-\\mu \\rvert &\\le \\sigma) \\approx 0.68 \\\\\n", " P(\\lvert X-\\mu \\rvert &\\le 2 \\sigma) \\approx 0.95 \\\\\n", " P(\\lvert X-\\mu \\rvert &\\le 3 \\sigma) \\approx 0.987\n", "\\end{align}\n", "\n", "----" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## Variance of $\\operatorname{Pois}(\\lambda)$\n", "\n", "### Intuition\n", "\n", "Suppose we have the following:\n", "$\\newcommand\\T{\\Rule{0pt}{1em}{.3em}}$\n", "\\begin{array}{|c|c|}\n", "\\hline Prob \\T & P_0 & P_1 & P_1 & P_3 & \\dots \\\\\\hline\n", " X \\T & 0 & 1 & 2 & 3 & \\dots \\\\\\hline\n", " X^2 \\T & 0^2 & 1^2 & 2^2 & 3^2 & \\dots \\\\\\hline\n", "\\end{array}\n", "\n", "And so you can see that the probabilities for $X$ also are the same for $X^2$. That means we should be able to do this:\n", "\n", "\\begin{align}\n", " \\mathbb{E}(X) &= \\sum_x x ~ P(X=x) \\\\\n", " \\mathbb{E}(X^2) &= \\sum_x x^2 ~ P(X=x) \\\\\n", "\\end{align}\n", "\n", "### The case for $\\operatorname{Pois}(\\lambda)$\n", "\n", "Let $X \\sim \\operatorname{Pois}(\\lambda)$. \n", "\n", "Recall that $\\operatorname{Var}(X) = \\mathbb{E}X^2 - (\\mathbb{E}X)^2$. We know that $\\mathbb{E}(X) = \\lambda$, so all we need to do is figure out what $\\mathbb{E}(X^2)$ is.\n", "\n", "\\begin{align}\n", " \\mathbb{E}(X^2) &= \\sum_{k=0}^{\\infty} k^2 ~ \\frac{e^{-\\lambda} \\lambda^k}{k!} \\\\\n", " \\\\\n", " \\text{recall that } \\sum_{k=0}^{\\infty} \\frac{\\lambda^k}{k!} &= e^{\\lambda} & \\quad \\text{Taylor series for } e^x \\\\\n", " \\\\\n", " \\sum_{k=1}^{\\infty} \\frac{k ~ \\lambda^{k-1}}{k!} &= e^{\\lambda} & \\quad \\text{applying the derivative operator} \\\\\n", " \\sum_{k=1}^{\\infty} \\frac{k ~ \\lambda^{k}}{k!} &= \\lambda ~ e^{\\lambda} & \\quad \\text{muliply by } \\lambda \\text{, replenishing it} \\\\\n", " \\sum_{k=1}^{\\infty} \\frac{k^2 ~ \\lambda^{k-1}}{k!} &= \\lambda ~ e^{\\lambda} + e^{\\lambda} = e^{\\lambda} (\\lambda + 1) & \\quad \\text{applying the derivative operator again} \\\\\n", " \\sum_{k=1}^{\\infty} \\frac{k^2 ~ \\lambda^{k}}{k!} &= \\lambda e^{\\lambda} (\\lambda + 1) & \\quad \\text{replenish } \\lambda \\text{ one last time} \\\\\n", " \\\\\n", " \\therefore \\mathbb{E}(X^2) &= \\sum_{k=0}^{\\infty} k^2 ~ \\frac{e^{-\\lambda} \\lambda^k}{k!} \\\\\n", " &= e^{-\\lambda} \\lambda e^{\\lambda} (\\lambda + 1) \\\\\n", " &= \\lambda^2 + \\lambda \\\\\n", " \\\\\n", " \\operatorname{Var}(X) &= \\mathbb{E}(X^2) - (\\mathbb{E}X)^2 \\\\\n", " &= \\lambda^2 + \\lambda - \\lambda^2 \\\\\n", " &= \\lambda & \\quad \\blacksquare\n", "\\end{align}\n", "\n", "----" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Variance of $\\operatorname{Binom}(X)$\n", "\n", "Let $X \\sim \\operatorname{Binom}(n,p)$.\n", "\n", "$\\mathbb{E}(X) = np$. \n", "\n", "Find $\\operatorname{Var}(X)$ using all the tricks you have at your disposal.\n", "\n", "### The path of least resistance\n", "\n", "Let's try applying (4) from the above Rules of Variance. \n", "\n", "We can do so because $X \\sim \\operatorname{Binom}(n,p)$ means that the $n$ trials are _independent Bernoulli_.\n", "\n", "\\begin{align}\n", " X &= I_1 + I_2 + \\dots + I_n & \\quad \\text{where } I_j \\text{ are i.i.d. } \\operatorname{Bern}(p) \\\\\n", " \\\\\n", " \\Rightarrow X^2 &= I_1^2 + I_2^2 + \\dots + I_n^2 + 2I_1I_2 + 2I_1I_3 + \\dots + 2I_{n-1}I_n & \\quad \\text{don't worry, this is not as bad as it looks} \\\\\n", " \\\\\n", " \\therefore \\mathbb{E}(X^2) &= n \\mathbb{E}(I_1^2) + 2 \\binom{n}{2} \\mathbb{E}(I_1I_2) & \\quad \\text{by symmetry} \\\\\n", " &= n p + 2 \\binom{n}{2} \\mathbb{E}(I_1I_2) & \\quad \\text{since } \\mathbb{E}(I_j^2) = \\mathbb{E}(I_j) \\\\\n", " &= n p + n (n-1) p^2 & \\quad \\text{since } I_1I_2 \\text{ is the event that both } I_1 \\text{ and } I_2 \\text{ are successes} \\\\\n", " &= np + n^2 p^2 - np^2 \\\\\n", " \\\\\n", " \\operatorname{Var}(X) &= \\mathbb{E}(X^2) - (\\mathbb{E}X)^2 \\\\\n", " &= np + n^2 p^2 - np^2 - (np)^2 \\\\\n", " &= np - np^2 \\\\\n", " &= np(1-p) \\\\\n", " &= npq & \\quad \\blacksquare\n", "\\end{align}\n", "\n", "----" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Variance of $\\operatorname{Geom}(p)$\n", "\n", "Let $X \\sim \\operatorname{Geom}(p)$.\n", "\n", "It has PDF $q^{k-1}p$.\n", "\n", "Find $\\operatorname{Var}(X)$.\n", "\n", "### Applying what we know of the Geometric Series\n", "\n", "\\begin{align}\n", " a + ar + ar^2 + ar^3 + \\dots &= \\sum_{k=0}^{\\infty} ar^k = \\frac{a}{1-r} & \\quad \\text{for } \\lvert r \\rvert \\le 1 \\\\\n", " \\\\\n", " \\therefore 1 + r + r^2 + r^3 + \\dots &= \\sum_{k=0}^{\\infty} r^k = \\frac{1}{1-r} & \\quad \\text{when } a = 1 \\\\\n", " \\\\\n", " \\\\\n", " \\text{and since we know } \\sum_{k=0}^{\\infty} q^k &= \\frac{1}{1-q} \\\\\n", " \\sum_{k=1}^{\\infty} k q^{k-1} &= \\frac{1}{(1-q)^2} & \\quad \\text{differentiate wrt }q \\\\\n", " \\sum_{k=1}^{\\infty} k q^{k} &= \\frac{q}{(1-q)^2} & \\quad \\text{multiply by }q \\\\\n", " \\sum_{k=1}^{\\infty} k^2 q^{k-1} &= - \\frac{q+1}{(q-1)^3} = \\frac{(-1)(q+1)}{(-1)^3(1-q)^3} = \\frac{q+1}{p^3} & \\quad \\text{differentiate once more by }q \\\\\n", " \\\\\n", " \\Rightarrow \\mathbb{E}(X) &= \\sum_{k=0}^{\\infty} k q^{k-1} p \\\\\n", " &= p \\sum_{k=1}^{\\infty} k q^{k-1} \\\\\n", " &= \\frac{p}{(1-q)^2} \\\\\n", " &= \\frac{1}{p} \\\\\n", " \\\\\n", " \\Rightarrow \\mathbb{E}(X^2) &= \\sum_{k=0}^{\\infty} k^2 q^{k-1} p \\\\\n", " &= p \\sum_{k=1}^{\\infty} k^2 q^{k-1} \\\\\n", " &= p \\frac{q+1}{p^3} \\\\\n", " &= \\frac{q+1}{p^2} \\\\\n", " \\\\\n", " \\operatorname{Var}(X) &= \\mathbb{E}(X^2) - (\\mathbb{E}X)^2 \\\\\n", " &= \\frac{q+1}{p^2} - \\left( \\frac{1}{p} \\right)^2 \\\\\n", " &= \\frac{q+1}{p^2} - \\frac{1}{p^2} \\\\\n", " &= \\boxed{\\frac{q}{p^2}} & \\quad \\blacksquare\n", "\\end{align}\n", "\n", "----" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Why is LOTUS true?\n", "\n", "Proving LOTUS for the discrete case, we will show $\\mathbb{E}(g(X)) = \\sum_{x} g(x) \\, P(X=x)$.\n", "\n", "Building on what we did when we proved linearity, we have\n", "\n", "\\begin{align}\n", " \\underbrace{g(x) \\, P(X=x)}_{\\text{grouped \"super-pebble\"}} &= \\underbrace{\\sum_{s \\in S} g(X(s)) \\, P(\\{s\\})}_{\\text{individual pebbles}} \\\\\n", " &= \\sum_{x} \\sum_{s: X(s)=x} g(X(s)) \\, P(\\{s\\}) \\\\\n", " &= \\sum_{x} g(x) \\sum_{s: X(s)=x} P(\\{s\\}) \\\\\n", " &= \\sum_{x} g(x) P(X=x) & \\quad \\blacksquare\n", "\\end{align}\n", "\n", "----" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "View [Lecture 14: Location, Scale, and LOTUS | Statistics 110](http://bit.ly/2CyYFg4) on YouTube." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.3" } }, "nbformat": 4, "nbformat_minor": 1 }