{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Foundations of Computational Economics #20\n",
"\n",
"by Fedor Iskhakov, ANU\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"## Finite Markov chains\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"\n",
"\n",
"[https://youtu.be/6rixSK8YQFc](https://youtu.be/6rixSK8YQFc)\n",
"\n",
"Description: Stochastic matrix, irreducibility and aperiodicity, stationary distribution."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Essential for economics!\n",
"\n",
"- Markov chains, and more generally Markov processes are all over economic theory, computations and econometrics. \n",
"- All dynamic models in the second half of the course use finite Markov chains as building blocks "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Stochastic processes\n",
"\n",
"Mathematical objects describing an evolution of random variable(s) in time.\n",
"\n",
"$$\n",
"\\{X_t\\}_{t \\in T}\n",
"$$\n",
"\n",
"Examples:\n",
"\n",
"- simple random walk: $ X_t $ takes integer values, time is discrete, $ X_{t+1} = X_{t} + 1 $\n",
" or $ X_{t+1} = X_{t} - 1 $ with some probabilities which do not depend on $ X_t $ \n",
"- Wiener process (Brownian motion): independent normally distributed increments, continuous time "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"#### Markov processes and Markov property\n",
"\n",
"Markov property (*memorylessness*): probability distribution over $ X_{t+1} $ depends only on $ X_t $, but not\n",
"on any previous values of the process\n",
"\n",
"- Markov process: stochastic process which has Markov property \n",
"- Finite Markov chains are special case of Markov processes "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"#### Finite Markov chains\n",
"\n",
"- $ X_{t} $ takes values from finite set $ S = \\{x_1,\\dots,x_n\\} $ which is called **state space** \n",
"- time is discrete \n",
"- probability transition from $ x_i \\in S $ to $ x_j \\in S $ denoted $ P_{ij} $, only depends on $ x_i $ and not on any previous values of $ X_t $) \n",
"\n",
"\n",
"$$\n",
"\\mathbb{P}(X_{t+1}| X_{t},X_{t-1},\\dots) = \\mathbb{P}(X_{t+1}=x_j | X_{t}=x_i) = P_{ij}\n",
"$$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"#### Example\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"$$\n",
"S = \\{ \\text{Sunny}, \\text{Partly Cloudy}, \\text{Rainy} \\} = \\{ x_1, x_2, x_3 \\}\n",
"$$\n",
"\n",
"$$\n",
"\\begin{array}{lll}\n",
"P_{11} = 0.5; &\n",
"P_{12} = 0.4; &\n",
"P_{13} = 0.1; \\\\\n",
"P_{21} = 0.4; &\n",
"P_{22} = 0.5; &\n",
"P_{23} = 0.1; \\\\\n",
"P_{31} = 0.2; &\n",
"P_{32} = 0.2; &\n",
"P_{33} = 0.6;\n",
"\\end{array}\n",
"$$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"#### Transition probability matrix\n",
"\n",
"$$\n",
"P = \\left(\n",
"\\begin{array}{lll}\n",
"P_{11}, &\n",
"P_{12}, &\n",
"P_{13} \\\\\n",
"P_{21}, &\n",
"P_{22}, &\n",
"P_{23} \\\\\n",
"P_{31}, &\n",
"P_{32}, &\n",
"P_{33}\n",
"\\end{array}\n",
"\\right) = \\left(\n",
"\\begin{array}{lll}\n",
"0.5 & 0.4 & 0.1 \\\\\n",
"0.4 & 0.5 & 0.1 \\\\\n",
"0.2 & 0.2 & 0.6\n",
"\\end{array}\n",
"\\right)\n",
"$$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"#### Stochastic (or Markov) matrix\n",
"\n",
"- all elements are nonnegative \n",
"- each row sums to one \n",
"\n",
"\n",
"Transition matrix $ P $ is stochastic"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"#### Finite Markov chain\n",
"\n",
"- A sequence of random variables on $ S $ that have the Markov property \n",
"- Fully characterized by its transition probability matrix $ P $ "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Simulation of Markov chain\n",
"\n",
"By definition $ \\{P_{ik}\\}, k=1,\\dots,n $ is probability distribution\n",
"of $ X_{t+1} $ given $ X_{t} = x_i $\n",
"\n",
"Therefore, it is possible to simulate the chain with\n",
"\n",
"1. Draw $ X_0 $ from the *initial* distribution $ \\psi $ \n",
"1. At every step, given current $ X_t = x_i $, draw $ X_{t+1} $ from discrete distribution $ \\{P_{ik}\\}, k=1,\\dots,n $ "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"hide-output": false,
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [],
"source": [
"import numpy as np\n",
"P = np.array([[.5,.4,.1],[.4,.5,.1],[.2,.2,.6]])\n",
"ψ = np.array([0.2,0.3,0.5]) # arbitrary distribution of initial value"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"hide-output": false,
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [],
"source": [
"def ddraw(probs):\n",
" '''Draws one realization of discrete random variables\n",
" generated from given probability distibution (base 0)\n",
" '''\n",
" assert probs.ndim == 1, 'Expecting a one-dimensional array of probabilities'\n",
" assert np.abs(probs.sum()-1)<1e-10, 'Passed probabilities do not sum up to one'\n",
" cdf = np.cumsum(probs) # cumulative distribution\n",
" u = np.random.uniform() # u(0,1)\n",
" for i in range(cdf.size):\n",
" if u<=cdf[i]:\n",
" return i # between i-1 and i values of cdf"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"hide-output": false,
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0 - 1\n",
"1 - 2\n",
"2 - 2\n",
"3 - 1\n",
"4 - 2\n",
"5 - 1\n",
"6 - 1\n",
"7 - 2\n",
"8 - 0\n",
"9 - 0\n"
]
}
],
"source": [
"# generate a sequence of discrete random variables with distribution ψ\n",
"for i in range(10):\n",
" print(i,ddraw(ψ),sep=' - ')"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"hide-output": false,
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [],
"source": [
"def simmc(P,psi,T=100):\n",
" '''Simulates Markov chain with given transition probability matrix (P),\n",
" initial state distribution (psi) for a given number T of steps (time periods)\n",
" '''\n",
" P = np.asarray(P) # convert to numpy array\n",
" psi = np.asarray(psi)\n",
" assert np.all(np.abs(P.sum(axis=1)-1)<1e-10), 'Passed P is not stochastic matrix'\n",
" m = psi.size # number of states in the chain\n",
" # simulate the initial state\n",
" X = np.empty(T,dtype=int) # allocate space for the simulated values\n",
" X[0] = ddraw(psi) # initial values in first column\n",
" # main loop over time\n",
" for t in range(1,T):\n",
" X[t] = ddraw(P[int(X[t-1]),:]) # simulate using appropriate row\n",
" return X"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"hide-output": false,
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Transition matrix:\n",
"[[0.5 0.4 0.1]\n",
" [0.4 0.5 0.1]\n",
" [0.2 0.2 0.6]]\n",
"Distribution of initial value:\n",
"[0.2 0.3 0.5]\n",
"Simulation:\n",
"[1 1 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 1 0 0 1 0 1 0 0 1 0 0 0 1 2 2 2 2\n",
" 2 2 0 1 1 0 0 1 1 1 1 1 0 0 0 0 0 0 1 2 2 1 2 2 2 2 2 0 1 0 1 1 0 0 0 0 1\n",
" 2 2 1 1 2 2 2 1 0 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1]\n",
"Partly cloudy >> Partly cloudy >> Partly cloudy >> Sunny >> Sunny >> Sunny >> Sunny >> Sunny >> Partly cloudy >> Sunny >> Sunny >> Sunny >> Sunny >> Sunny >> Partly cloudy >> Sunny >> Sunny >> Sunny >> Sunny >> Partly cloudy >> Partly cloudy >> Sunny >> Sunny >> Partly cloudy >> Sunny >> Partly cloudy >> Sunny >> Sunny >> Partly cloudy >> Sunny >> Sunny >> Sunny >> Partly cloudy >> Rainy >> Rainy >> Rainy >> Rainy >> Rainy >> Rainy >> Sunny >> Partly cloudy >> Partly cloudy >> Sunny >> Sunny >> Partly cloudy >> Partly cloudy >> Partly cloudy >> Partly cloudy >> Partly cloudy >> Sunny >> Sunny >> Sunny >> Sunny >> Sunny >> Sunny >> Partly cloudy >> Rainy >> Rainy >> Partly cloudy >> Rainy >> Rainy >> Rainy >> Rainy >> Rainy >> Sunny >> Partly cloudy >> Sunny >> Partly cloudy >> Partly cloudy >> Sunny >> Sunny >> Sunny >> Sunny >> Partly cloudy >> Rainy >> Rainy >> Partly cloudy >> Partly cloudy >> Rainy >> Rainy >> Rainy >> Partly cloudy >> Sunny >> Partly cloudy >> Sunny >> Partly cloudy >> Sunny >> Partly cloudy >> Partly cloudy >> Partly cloudy >> Partly cloudy >> Partly cloudy >> Partly cloudy >> Partly cloudy >> Partly cloudy >> Partly cloudy >> Partly cloudy >> Partly cloudy >> Partly cloudy >> Partly cloudy >> "
]
}
],
"source": [
"print('Transition matrix:',P,sep='\\n')\n",
"print('Distribution of initial value:',ψ,sep='\\n')\n",
"sm = simmc(P,ψ,T=100)\n",
"print('Simulation:',sm,sep='\\n')\n",
"weather=['Sunny','Partly cloudy','Rainy']\n",
"for i in sm:\n",
" print(weather[i],end=' >> ')"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Marginal distribution\n",
"\n",
"Suppose that\n",
"\n",
"1. $ \\{X_t\\} $ is a Markov chain with stochastic matrix $ P $ \n",
"1. the distribution of $ X_t $ is known to be $ \\psi_t $ \n",
"\n",
"\n",
"What then is the distribution of $ X_{t+1} $, and, more generally, of $ X_{t+m} $?"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"#### Solution\n",
"\n",
"Let’s start from finding the distribution of $ X_{t+1} $ denoted $ \\psi_{t+1} $\n",
"\n",
"Fix $ y \\in S $. Using the [law of total probability](https://en.wikipedia.org/wiki/Law_of_total_probability),\n",
"we can decompose the probability that $ \\mathbb{P}\\{X_{t+1} = y\\} $ as follows:\n",
"\n",
"$$\n",
"\\mathbb {P} \\{X_{t+1} = y \\}\n",
" = \\sum_{x \\in S} \\mathbb {P} \\{ X_{t+1} = y \\, | \\, X_t = x \\}\n",
" \\cdot \\mathbb {P} \\{ X_t = x \\}\n",
"$$\n",
"\n",
"In words, to get the probability of being at $ y $ tomorrow, we account for\n",
"all ways this can happen and sum their probabilities"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"#### Matrix form\n",
"\n",
"$$\n",
"\\mathbb {P} \\{X_{t+1} = x_j \\}\n",
" = \\sum_{x_i \\in S} \\mathbb {P} \\{ X_{t+1} = x_j | X_t = x_i \\}\n",
" \\cdot \\mathbb {P} \\{ X_t = x_i \\}\n",
" = \\sum_{i \\in \\{1,\\dots,n\\}} P_{ij} \\cdot \\psi_{t,i}\n",
"$$\n",
"\n",
"- the first element is in transition probability matrix $ P $ \n",
"- the second term is in the distribution $ \\psi_t $ \n",
"\n",
"\n",
"Repeating for all $ j $, in matrix notation we have\n",
"\n",
"$$\n",
"\\psi_{t+1} = \\psi_{t} \\cdot P\n",
"$$\n",
"\n",
"- $ \\psi $ is a row vector "
]
},
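{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"A quick sanity check (an illustration, reusing the matrix $ P $ and distribution $ \\psi $ defined above): the elementwise sums from the law of total probability agree with the matrix product\n",
"\n",
"```python\n",
"import numpy as np\n",
"P = np.array([[.5,.4,.1],[.4,.5,.1],[.2,.2,.6]])\n",
"psi = np.array([0.2,0.3,0.5])\n",
"# explicit law of total probability: psi1[j] = sum over i of P[i,j]*psi[i]\n",
"psi1 = np.array([sum(P[i,j]*psi[i] for i in range(3)) for j in range(3)])\n",
"print(psi1)     # [0.32 0.33 0.35]\n",
"print(psi @ P)  # the same numbers via matrix multiplication\n",
"```"
]
},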
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"#### Many steps\n",
"\n",
"To find the distribution of $ X_{t+m} $ we can repeat the same arguments $ m $ times\n",
"\n",
"$$\n",
"\\psi_{t+m} = \\psi_{t} \\cdot P^m\n",
"$$\n",
"\n",
"- $ P^m $ is the power of the transition probability matrix \n",
"- we can take powers of square matrices without any problem "
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"hide-output": false,
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Transition matrix:\n",
"[[0.5 0.4 0.1]\n",
" [0.4 0.5 0.1]\n",
" [0.2 0.2 0.6]]\n",
"Distribution of initial value:\n",
"[0.2 0.3 0.5]\n",
"Distribution in one time period:\n",
"[0.32 0.33 0.35]\n",
"Distribution in two time periods:\n",
"[0.362 0.363 0.275]\n",
"Distribution in ten time periods:\n",
"[0.39985352 0.39985352 0.20029297]\n"
]
}
],
"source": [
"print('Transition matrix:',P,sep='\\n')\n",
"print('Distribution of initial value:',ψ,sep='\\n')\n",
"\n",
"print('Distribution in one time period:', ψ@P ,sep='\\n')\n",
"print('Distribution in two time periods:', ψ@P@P ,sep='\\n')\n",
"\n",
"print('Distribution in ten time periods:', ψ @ np.linalg.matrix_power(P,10) ,sep='\\n')\n",
"# BE AWARE that P**10 works element-wise!"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"#### Multiple step transition probabilities\n",
"\n",
"- powers of transition probability matrix give probabilities of transition in multiple steps! \n",
"\n",
"\n",
"$$\n",
"Q = P^m \\Rightarrow \\mathbb{P}\\{X_{t+m} = x_j\\} = Q_{ij} \\text{ when } X_t = x_i\n",
"$$\n",
"\n",
"To see this, assume $ \\psi_t=(1,0,\\dots,0) $ and apply $ \\psi_{t+m} = \\psi_{t} \\cdot P^m $\n",
"to get the probability distribution for the state in $ m $ periods.\n",
"Elements of this vector are individual transition probabilities from $ x_i=1 $.\n",
"The argument can be repeated for all degenerate $ \\psi_t $ placing all probability\n",
"mass to each of the values $ x_i $."
]
},
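{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"An illustration (not part of the original slides): with a degenerate initial distribution, $ \\psi_t \\cdot P^m $ picks out the corresponding row of $ P^m $\n",
"\n",
"```python\n",
"import numpy as np\n",
"P = np.array([[.5,.4,.1],[.4,.5,.1],[.2,.2,.6]])\n",
"Q = np.linalg.matrix_power(P,5)  # 5-step transition probabilities\n",
"psi0 = np.array([1.,0.,0.])      # start in state x_1 with certainty\n",
"print(psi0 @ Q)  # distribution after 5 steps ...\n",
"print(Q[0,:])    # ... equals the first row of Q\n",
"```"
]
},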
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Irreducibility and aperiodicity\n",
"\n",
"- central concepts in Markov chain theory \n",
"\n",
"\n",
"Two states $ i $ and $ j $ are said to **communicate** if there exist positive integers $ k $ and $ l $ such that\n",
"\n",
"$$\n",
"P_{ij}^k > 0 \\text{ and } P_{ji}^l > 0\n",
"$$\n",
"\n",
"- state $ i $ can be reached from state $ j $ in some number of steps, and \n",
"- state $ j $ can be reached from state $ i $ in some number of steps \n",
"\n",
"\n",
"Markov chain is called **irreducible** if every pair of states communicate."
]
},
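{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"Irreducibility can be checked numerically with a standard test (an illustrative sketch, not from the slides): for an $ n $-state chain, $ (I+P)^{n-1} $ has all entries strictly positive if and only if the chain is irreducible\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def is_irreducible(P):\n",
"    # (I + P)^(n-1) > 0 everywhere iff every pair of states communicates\n",
"    n = P.shape[0]\n",
"    R = np.linalg.matrix_power(np.eye(n) + P, n-1)\n",
"    return bool(np.all(R > 0))\n",
"\n",
"P = np.array([[.5,.4,.1],[.4,.5,.1],[.2,.2,.6]])\n",
"P2 = np.array([[1.,0.],[.5,.5]])  # state 2 is never reached from state 1\n",
"print(is_irreducible(P))   # True\n",
"print(is_irreducible(P2))  # False\n",
"```"
]
},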
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"#### Aperiodicity\n",
"\n",
"Markov chain is called **periodic** if it cycles in a predictable way, and aperiodic otherwise\n",
"\n",
"$$\n",
"P = \\left(\n",
"\\begin{array}{lll}\n",
"0 & 1 & 0 \\\\\n",
"0 & 0 & 1 \\\\\n",
"1 & 0 & 0\n",
"\\end{array}\n",
"\\right)\n",
"$$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"#### More formally\n",
"\n",
"The **period** of a state $ i, x_i \\in S $ is the greatest common divisor of the integers $ k $ such that $ P^k_{ii} > 0 $\n",
"\n",
"In the last example, these integers for state 1 are 3,6,9,…, and so the period of state 1 is 3.\n",
"\n",
"A stochastic matrix is called **aperiodic** if the period of every state is 1, and **periodic** otherwise"
]
},
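{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"The period can be computed directly by checking diagonal elements of powers of $ P $ up to some cutoff (a finite truncation, so only a sketch):\n",
"\n",
"```python\n",
"import numpy as np\n",
"from math import gcd\n",
"from functools import reduce\n",
"\n",
"def period(P, i, kmax=20):\n",
"    # gcd of the step counts k <= kmax with (P^k)_{ii} > 0\n",
"    ks = [k for k in range(1,kmax+1) if np.linalg.matrix_power(P,k)[i,i] > 1e-12]\n",
"    return reduce(gcd, ks) if ks else 0\n",
"\n",
"cyclic = np.array([[0,1,0],[0,0,1],[1,0,0]], dtype=float)\n",
"print(period(cyclic, 0))  # 3: state 1 recurs only after 3,6,9,... steps\n",
"P = np.array([[.5,.4,.1],[.4,.5,.1],[.2,.2,.6]])\n",
"print(period(P, 0))       # 1: the weather chain is aperiodic\n",
"```"
]
},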
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Stationary distributions\n",
"\n",
"**Stationary** or **invariant** distribution is $ \\psi^\\star $ such that\n",
"\n",
"$$\n",
"\\psi^\\star = \\psi^\\star \\cdot P\n",
"$$\n",
"\n",
"It immediately follows that\n",
"\n",
"$$\n",
"\\psi^\\star = \\psi^\\star \\cdot P^k \\text{ for any } k,\n",
"$$\n",
"\n",
"that is if $ X_0 $ has distribution $ \\psi^\\star $, then $ X_t $ has the same distribution for all $ t $"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"hide-output": false,
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Transition matrix:\n",
"[[0.5 0.4 0.1]\n",
" [0.4 0.5 0.1]\n",
" [0.2 0.2 0.6]]\n",
"Distribution of initial value:\n",
"[0.4 0.4 0.2]\n",
"Distribution in one time period:\n",
"[0.4 0.4 0.2]\n",
"Distribution in two time periods:\n",
"[0.4 0.4 0.2]\n",
"Distribution in ten time periods:\n",
"[0.4 0.4 0.2]\n"
]
}
],
"source": [
"ψ = np.array([0.4,0.4,0.2])\n",
"print('Transition matrix:',P,sep='\\n')\n",
"print('Distribution of initial value:',ψ,sep='\\n')\n",
"\n",
"print('Distribution in one time period:', ψ@P ,sep='\\n')\n",
"print('Distribution in two time periods:', ψ@P@P ,sep='\\n')\n",
"\n",
"print('Distribution in ten time periods:', ψ @ np.linalg.matrix_power(P,10) ,sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"#### Existence of stationary distributions\n",
"\n",
"**Theorem:** Every stochastic map has at least one stationary distribution.\n",
"\n",
"Proof: application of [Brouwer’s fixed point theorem](https://en.wikipedia.org/wiki/Brouwer_fixed-point_theorem)\n",
"\n",
"$ \\psi^\\star $ is a fixed point of the mapping $ \\psi \\mapsto \\psi P $ from (row) vectors to (row) vectors\n",
"\n",
"Note: stochastic matrix may have multiple stationary distributions, for example for the identity matrix every distribution is stationary."
]
},
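{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"A stationary distribution can also be computed directly (a sketch using numpy's eigendecomposition, not part of the original slides): $ \\psi^\\star $ is a left eigenvector of $ P $ for eigenvalue 1, i.e. an eigenvector of $ P^T $, normalized to sum to one\n",
"\n",
"```python\n",
"import numpy as np\n",
"P = np.array([[.5,.4,.1],[.4,.5,.1],[.2,.2,.6]])\n",
"vals, vecs = np.linalg.eig(P.T)  # right eigenvectors of the transpose\n",
"i = np.argmin(np.abs(vals - 1))  # locate the unit eigenvalue\n",
"psi_star = np.real(vecs[:,i])\n",
"psi_star = psi_star / psi_star.sum()  # normalize to a probability distribution\n",
"print(psi_star)  # approximately [0.4 0.4 0.2]\n",
"```"
]
},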
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"#### Uniqueness of stationary distribution\n",
"\n",
"**Theorem:** If stochastic matrix $ P $ is both aperiodic and irreducible, then\n",
"\n",
"1. $ P $ has exactly one stationary distribution $ \\psi^\\star $ \n",
"1. For any initial distribution $ \\psi_0 $ it holds $ \\| \\psi_0 P^t - \\psi^\\star \\| \\to 0 $ as $ t \\to \\infty $ \n",
"\n",
"\n",
"A stochastic matrix satisfying the conditions of the theorem is sometimes called **uniformly ergodic**\n",
"\n",
"One easy sufficient condition for aperiodicity and irreducibility is that all elements of \\$ P \\$ are strictly positive (why?)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"#### Convergence to stationarity\n",
"\n",
"Part 2 of the theorem gives us a particular way to compute stationary distributions:\n",
"\n",
"1. Start from arbitrary distribution $ \\psi_0 $ \n",
"1. Compute the updated distribution $ \\psi_t = \\psi_{t-1} \\cdot P $ until $ \\psi_t $ and $ \\psi_{t-1} $ are indistinguishable \n",
"\n",
"\n",
"Under uniform ergodicity conditions of the theorem (i.e. aperiodicity and irreducibility),\n",
"convergence is towards the stationary distribution $ \\psi^\\star $!"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"hide-output": false,
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Transition matrix:\n",
"[[0.5 0.4 0.1]\n",
" [0.4 0.5 0.1]\n",
" [0.2 0.2 0.6]]\n",
"after step 1 the distribution is array([0.5, 0.4, 0.1])\n",
"after step 2 the distribution is array([0.43, 0.42, 0.15])\n",
"after step 3 the distribution is array([0.413, 0.412, 0.175])\n",
"after step 4 the distribution is array([0.4063, 0.4062, 0.1875])\n",
"after step 5 the distribution is array([0.40313, 0.40312, 0.19375])\n",
"after step 6 the distribution is array([0.401563, 0.401562, 0.196875])\n",
"after step 7 the distribution is array([0.4007813, 0.4007812, 0.1984375])\n",
"after step 8 the distribution is array([0.40039063, 0.40039062, 0.19921875])\n",
"after step 9 the distribution is array([0.40019531, 0.40019531, 0.19960938])\n",
"after step 10 the distribution is array([0.40009766, 0.40009766, 0.19980469])\n",
"after step 11 the distribution is array([0.40004883, 0.40004883, 0.19990234])\n",
"after step 12 the distribution is array([0.40002441, 0.40002441, 0.19995117])\n",
"after step 13 the distribution is array([0.40001221, 0.40001221, 0.19997559])\n",
"after step 14 the distribution is array([0.4000061 , 0.4000061 , 0.19998779])\n",
"after step 15 the distribution is array([0.40000305, 0.40000305, 0.1999939 ])\n",
"after step 16 the distribution is array([0.40000153, 0.40000153, 0.19999695])\n",
"after step 17 the distribution is array([0.40000076, 0.40000076, 0.19999847])\n",
"after step 18 the distribution is array([0.40000038, 0.40000038, 0.19999924])\n",
"after step 19 the distribution is array([0.40000019, 0.40000019, 0.19999962])\n",
"after step 20 the distribution is array([0.4000001 , 0.4000001 , 0.19999981])\n",
"after step 21 the distribution is array([0.40000005, 0.40000005, 0.1999999 ])\n",
"after step 22 the distribution is array([0.40000002, 0.40000002, 0.19999995])\n",
"after step 23 the distribution is array([0.40000001, 0.40000001, 0.19999998])\n",
"after step 24 the distribution is array([0.40000001, 0.40000001, 0.19999999])\n",
"after step 25 the distribution is array([0.4 , 0.4 , 0.19999999])\n",
"after step 26 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 27 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 28 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 29 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 30 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 31 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 32 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 33 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 34 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 35 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 36 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 37 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 38 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 39 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 40 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 41 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 42 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 43 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 44 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 45 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 46 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 47 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 48 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 49 the distribution is array([0.4, 0.4, 0.2])\n",
"after step 50 the distribution is array([0.4, 0.4, 0.2])\n"
]
}
],
"source": [
"ψ = np.array([1,0,0])\n",
"print('Transition matrix:',P,sep='\\n')\n",
"\n",
"for t in range(50):\n",
" ψ = ψ @ P\n",
" print('after step %2d the distribution is %r'%(t+1,ψ))"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"#### Ergodicity\n",
"\n",
"Under irreducibility it also holds that\n",
"\n",
"$$\n",
"\\frac{1}{T} \\sum_{t = 1}^T \\mathbf{1}\\{X_t = x_j\\} \\to \\psi^\\star_j\n",
"\\text{ as } T \\to \\infty\n",
"$$\n",
"\n",
"- $ \\mathbf{1}\\{X_t = x_j\\} = 1 $ if $ X_t = x_j $ and zero otherwise \n",
"- convergence is with probability one \n",
"- the result does not depend on the distribution (or value) of $ X_0 $ \n",
"\n",
"\n",
"In other words, the stationary distribution $ \\psi^\\star $ also shows the limiting fractions of time\n",
"that the Markov chain $ X_t $ spends at each point of the state space"
]
},
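{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"A numerical illustration of ergodicity (a sketch with an arbitrary fixed seed, using numpy's built-in generator rather than the ddraw function above): simulate a long path and compare the fractions of time spent in each state with $ \\psi^\\star = (0.4, 0.4, 0.2) $\n",
"\n",
"```python\n",
"import numpy as np\n",
"P = np.array([[.5,.4,.1],[.4,.5,.1],[.2,.2,.6]])\n",
"rng = np.random.default_rng(2023)  # seed chosen arbitrarily\n",
"T = 100_000\n",
"X = np.empty(T, dtype=int)\n",
"X[0] = 0  # the result does not depend on the initial state\n",
"for t in range(1, T):\n",
"    X[t] = rng.choice(3, p=P[X[t-1]])  # draw the next state from the current row\n",
"print(np.bincount(X, minlength=3) / T)  # fractions of time in each state\n",
"```"
]
},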
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Further learning resources\n",
"\n",
"- QuantEcon lecture on Markov chains\n",
" [https://python.quantecon.org/finite_markov.html](https://python.quantecon.org/finite_markov.html) "
]
}
],
"metadata": {
"celltoolbar": "Slideshow",
"date": 1612589585.625011,
"download_nb": false,
"filename": "20_markov.rst",
"filename_with_path": "20_markov",
"kernelspec": {
"display_name": "Python",
"language": "python3",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
},
"title": "Foundations of Computational Economics #20"
},
"nbformat": 4,
"nbformat_minor": 4
}