{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# The EM Algorithm\n", "\n", "The expectation-maximization algorithm is an iterative approach to find a (local) maximum of the likelihood that can be computationally much more tractable than directly maximizing the likelihood.\n", "\n", "It is particularly useful in problems that have missing data or latent (unobserved) variables.\n", "\n", "It is in a general class of optimization algorithms that alternate between two types of operations, constructing or estimating something, and maximizing or minimizing something." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We observe data $(X, Y) \\sim {\\mathbb P}_\\theta$, where $\\{{\\mathbb P}_\\theta : \\theta \\in \\Theta\\}$ is a known parametric family of probability distributions, but the particular value of $\\theta$ is unknown.\n", "\n", "We want to estimate $\\theta$ using maximum likelihood, that is, to find\n", "\n", "$$\n", " \\hat{\\theta} \\in \\arg \\max_{\\eta \\in \\Theta} {\\mathbb P}_\\eta (X,Y).\n", "$$\n", "\n", "It's common to work with the log likelihood; since the logarithm is a monotonic function, the value of $\\eta$ that maximizes the log likelihood is the same as the value that maximizes the likelihood:\n", "\n", "$$\n", " \\hat{\\theta} \\in \\arg \\max_{\\eta \\in \\Theta} \\ln {\\mathbb P}_\\eta (X,Y) = \\arg \\max_{\\eta \\in \\Theta} L(\\eta).\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The EM Algorithm\n", "\n", "
\n", "\n", "Initialize starting guess $\\hat{\\theta}_0$. Repeat until convergence (which is not guaranteed) or boredom:\n", "
\n", "\n", "\n", "
\n", "\n", "It's a theorem that $\\ln {\\mathbb P}_{\\hat{\\theta}_{i+1}}(x) \\ge \\ln {\\mathbb P}_{\\hat{\\theta}_i}(x)$.\n", "\n", "However, the algorithm need not converge to a maximum of the likelihood. It does behave well for exponential families." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Why does it work?\n", "\n", "Recall **Jensen's Inequality:** if $f$ is a convex function, then $\\mathbb{E} f(X) \\ge f(\\mathbb{E}\\mathbb{X})$.\n", "\n", "Jensen's inequality implies that $Q(\\eta, \\hat{\\theta}_i) \\le \\ln {\\mathbb P}_{\\hat{\\theta}_i}(x, z)$.\n", "\n", "Note that if, in fact, $Y$ is deterministic, we are free to define ${\\mathbb P}_{\\eta} ((X, Z) | X=x)$ however we wish (as long as we give positive probability to every $y$): the conditional expectation with respect to that distribution will still lower bound the value at the true $y$, by Jensen's inequality." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.1" } }, "nbformat": 4, "nbformat_minor": 2 }