{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Lecture 32: Markov chains (cont.), irreducibility, recurrence, transience, reversibility, random walk on an undirected network\n", "\n", "\n", "## Stat 110, Prof. Joe Blitzstein, Harvard University\n", "\n", "----" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Examples of Markov Chains\n", "\n", "Markov chains are memoryless, in a way, since the past doesn't really inform the future; only the present counts. Recall that the future is conditionally independent of the past, given the present.\n", "\n", "### Some key concepts\n", "\n", "* A chain is **irreducible** if it is possible to get from any state to another.\n", "* A state is **recurrent** if, when starting there, the chain has probability of 1.0 for returning to that state. Note that if there is probability 1.0 for returning to a certain state, then it follows that in a Markov chain, you can return to that state _infinitely_ many times with probability 1.0.\n", "* Otherwise, the state is **transient**." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example 1\n", "\n", "![title](images/L3201.png)\n", "\n", "* This Markov chain is _irreducible_, as it is indeed possible to go from any one state to another.\n", "* All of the states in this Markov chain are _recurrent_." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example 2\n", "\n", "![title](images/L3202a.png)\n", "\n", "* In this example, the chain is _reducible_; notice how there are actually two chains (1-2-3 and 4-5-6).\n", "* However, note that all of the states are _recurrent_.\n", "\n", "And if we connected states 3 and 6...\n", "\n", "![title](images/L3202b.png)\n", "\n", "* This example is still not _irreducible_.\n", "* But states 1, 2 and 3 are now _transient_, since there is no way to return to any of those states once that edge from 3 to 6 is traversed.\n", "* The chain would become _irreducible_ and all states _recurrent_ if we added yet another edge from 4 to 1." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example 3\n", "\n", "![title](images/L3203.png)\n", "\n", "* The Markov chain in this example is _reducible_.\n", "* States 1 and 2 are _transient_.\n", "* States 0 and 3 are _recurrent_, but once you reach states 0 or 3, you cannot leave; these states are called _absorbing states_.\n", "* In case you didn't notice, the Markov chain in this example is the Gambler's Ruin, where a player either loses all her money (say state 0) or wins all the money (state 3)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example 4\n", "\n", "![title](images/L3204.png)\n", "\n", "* This is a _periodic_ Markov chain.\n", "* It is _irreducible_.\n", "* All states are _recurrent_.\n", "\n", "----" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Stationary Distributions\n", "\n", "Recall the definition of a stationary distribution from the last lecture.\n", "\n", "$\\vec{s}$, a probability row vector (PMF), is _stationary_ for a Markov chain with transition matrix $Q$ if $\\vec{s} \\, Q = \\vec{s}$." ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "### Theorems of Stationary Distributions\n", "\n", "For any _irreducible_ Markov chain with finitely many states:\n", "\n", "\n", "1. A stationary distribution $\\vec{s}$ exists.\n", "1. It is unique.\n", "1. $\\vec{s}_i = \\frac{1}{r_i}$, where $r_i$ is the average return time for returning back to $i$.\n", "1. If we also assume there is no _periodicity_ in the chain, where $Q^m$ is strictly positive for some $m$, then $P(X_n = i) \\rightarrow s_i$ as $n \\rightarrow \\infty$\n", "\n", "Regarding 4, if we any probability vector $\\vec{t}$, then $\\vec{t} \\, Q \\rightarrow \\vec{s}$.\n", "\n", "So the above theories of stationary distributions are worthy of study, since\n", "\n", "* they assure existence and uniqueness of stationary distribution under certain assumptions\n", "* they capture long-run behavior\n", "* show relation to average number of step for return to a state\n", "\n", "_But how would we compute the stationary distribution?_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Reversible Markov Chains\n", "\n", "**Definition** Markov chains with transition matrix $Q = \\left[ q_{ij} \\right]$ is _reversible_ if there is a probability vector $\\vec{s}$ such that $s_i \\, q_{ij} = s_j \\, q_{ji}$ for all states $i,j$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Theorem: Reversible transition matrices and Stationary distribution\n", "\n", "If a transition matrix is _reversible_ with respect to $\\vec{s}$, then that $\\vec{s}$ is _stationary_. This reversibility is with reference to time, so it is also called _time reversible_.\n", "\n", "For intuition, imagine a video tape of some particle changing states. If you ran that video backwards and show that to someone, and that person could not tell if the action was moving forwards or backwards, then that would be an example of _time reversiblity_.\n", "\n", "\n", "**Proof**\n", "\n", "Let $s_i \\, q_{ij} = s_j \\, q_{ji}$ for all $i,j$; show that $\\vec{s} \\, Q = \\vec{s}$.\n", "\n", "\\begin{align}\n", " \\sum_i s_i \\, q_{ij} &= \\sum_i s_j \\, q_{ji} \\\\\n", " &= s_j \\sum_i q_{ji} \\\\\n", " &= s_j &\\text{ but this is just the definition of matrix multiplication} \\\\\n", " \\\\\\\\\n", " \\Rightarrow \\vec{s} \\, Q &= \\vec{s}\n", "\\end{align}\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example of reversible Markov chain\n", "\n", "A random walk on a undirected network is an example of a reversible Markov chain.\n", "\n", "![title](images/L3205.png)\n", "\n", "In the diagram above, the nodes 1 through 4 are joined in an undirected graph. The degree of each node $d_i$ is the number of edges emanating from said node, so $d_1=2, d_2=2, d_3=3, d_4=1$.\n", "\n", "With transition matrix $Q$ for the graph above, then $d_i \\, q_{ij} = d_j \\, q_{ji}$.\n", "\n", "**Proof**\n", "\n", "Let $i \\ne j$. \n", "\n", "Then $q_{ij}, q_{ji}$ are either both 0 or both non-zero. _The key is that we are talking about an undirected graph, and all edges are two-way streets._\n", "\n", "If there is an edge joining $i,j$), then $q_{ij} = \\frac{1}{d_i}$. \n", "\n", "So in a graph with $M$ nodes $1, 2, \\dots , M$, where each node has degree $d_i$, then $\\vec{s}$ with $s_i = \\frac{d_i}{\\sum_{j} d_j}$ is stationary.\n", "\n", "----" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "View [Lecture 32: Markov Chains Continued | Statistics 110](http://bit.ly/2McjRIq) on YouTube." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.3" } }, "nbformat": 4, "nbformat_minor": 2 }