{ "cells": [ { "cell_type": "markdown", "id": "a26c5c54", "metadata": {}, "source": [ "\n", "\n", "\n", "" ] }, { "cell_type": "markdown", "id": "68b6d433", "metadata": {}, "source": [ "# Markov Jump Linear Quadratic Dynamic Programming" ] }, { "cell_type": "markdown", "id": "246971b8", "metadata": {}, "source": [ "## Contents\n", "\n", "- [Markov Jump Linear Quadratic Dynamic Programming](#Markov-Jump-Linear-Quadratic-Dynamic-Programming) \n", " - [Overview](#Overview) \n", " - [Review of useful LQ dynamic programming formulas](#Review-of-useful-LQ-dynamic-programming-formulas) \n", " - [Linked Riccati equations for Markov LQ dynamic programming](#Linked-Riccati-equations-for-Markov-LQ-dynamic-programming) \n", " - [Applications](#Applications) \n", " - [Example 1](#Example-1) \n", " - [Example 2](#Example-2) \n", " - [More examples](#More-examples) " ] }, { "cell_type": "markdown", "id": "a0284af7", "metadata": {}, "source": [ "In addition to what’s in Anaconda, this lecture will need the following libraries:" ] }, { "cell_type": "code", "execution_count": null, "id": "778b8c5b", "metadata": { "hide-output": false }, "outputs": [], "source": [ "!pip install --upgrade quantecon" ] }, { "cell_type": "markdown", "id": "3794dea1", "metadata": {}, "source": [ "## Overview\n", "\n", "This lecture describes **Markov jump linear quadratic dynamic programming**, an extension of the method described in the [first LQ control lecture](https://python-intro.quantecon.org/lqcontrol.html).\n", "\n", "Markov jump linear quadratic dynamic programming is described and analyzed in [[DVGC99](https://python-advanced.quantecon.org/zreferences.html#id233)] and the references cited there.\n", "\n", "The method has been applied to problems in macroeconomics and monetary economics by [[SW+08](https://python-advanced.quantecon.org/zreferences.html#id234)] and [[SW09](https://python-advanced.quantecon.org/zreferences.html#id235)].\n", "\n", "The periodic models of seasonality described in chapter 14 of [[HS13](https://python-advanced.quantecon.org/zreferences.html#id71)] are a special case of Markov jump linear quadratic problems.\n", "\n", "**Markov jump linear quadratic dynamic programming** combines advantages\n", "of\n", "\n", "- the computational simplicity of **linear quadratic dynamic\n", " programming**, with \n", "- the ability of **finite state Markov chains** to represent\n", " interesting patterns of random variation. \n", "\n", "\n", "The idea is to replace the constant matrices that define a **linear quadratic dynamic programming problem**\n", "with $ N $ sets of matrices that are fixed functions of\n", "the state of an $ N $ state Markov chain.\n", "\n", "The state of the Markov chain together with the continuous $ n \\times 1 $ state vector $ x_t $ form the state of the system.\n", "\n", "For the class of infinite horizon problems being studied in this lecture, we obtain $ N $ interrelated\n", "matrix Riccati equations that determine $ N $ optimal value\n", "functions and $ N $ linear decision rules.\n", "\n", "One of these value functions and one of these decision rules apply in each of the $ N $ Markov states.\n", "\n", "That is, when the Markov state is in state $ j $, the value function and the decision rule\n", "for state $ j $ prevails." ] }, { "cell_type": "markdown", "id": "b57a1fa2", "metadata": {}, "source": [ "## Review of useful LQ dynamic programming formulas\n", "\n", "To begin, it is handy to have the following reminder in mind.\n", "\n", "A **linear quadratic dynamic programming problem** consists of a scalar\n", "discount factor $ \\beta \\in (0,1) $, an $ n\\times 1 $ state\n", "vector $ x_t $, an initial condition for $ x_0 $, a\n", "$ k \\times 1 $ control vector $ u_t $, a $ p \\times 1 $\n", "random shock vector $ w_{t+1} $ and the following two triples of\n", "matrices:\n", "\n", "- A triple of matrices $ (R, Q, W) $ defining a loss function \n", "\n", "\n", "$$\n", "r(x_t, u_t) = x_t' R x_t + u_t' Q u_t + 2 u_t' W x_t\n", "$$\n", "\n", "- a triple of matrices $ (A, B, C) $ defining a state-transition\n", " law \n", "\n", "\n", "$$\n", "x_{t+1} = A x_t + B u_t + C w_{t+1}\n", "$$\n", "\n", "The problem is\n", "\n", "$$\n", "-x_0' P x_0 - \\rho = \\min_{\\{u_t\\}_{t=0}^\\infty} E \\sum_{t=0}^{\\infty} \\beta^t r(x_t, u_t)\n", "$$\n", "\n", "subject to the transition law for the state.\n", "\n", "The optimal decision rule has the form\n", "\n", "$$\n", "u_t = - F x_t\n", "$$\n", "\n", "and the optimal value function is of the form\n", "\n", "$$\n", "-\\left( x_t' P x_t + \\rho \\right)\n", "$$\n", "\n", "where $ P $ solves the algebraic matrix Riccati equation\n", "\n", "$$\n", "P = R+ \\beta A' P A\n", " -(\\beta B' P A + W)' (Q + \\beta B P B )^{-1} (\\beta B P A + W)\n", "$$\n", "\n", "and the constant $ \\rho $ satisfies\n", "\n", "$$\n", "\\rho = \\beta\n", " \\left( \\rho + {\\rm trace}(P C C') \\right)\n", "$$\n", "\n", "and the matrix $ F $ in the decision rule for $ u_t $ satisfies\n", "\n", "$$\n", "F = (Q + \\beta B' P B)^{-1} (\\beta (B' P A )+ W)\n", "$$\n", "\n", "With the preceding formulas in mind, we are ready to approach Markov Jump linear quadratic dynamic programming." ] }, { "cell_type": "markdown", "id": "4ad20512", "metadata": {}, "source": [ "## Linked Riccati equations for Markov LQ dynamic programming\n", "\n", "The key idea is to make the matrices $ A, B, C, R, Q, W $ fixed\n", "functions of a finite state $ s $ that is governed by an $ N $\n", "state Markov chain.\n", "\n", "This makes decision rules depend on the Markov\n", "state, and so fluctuate through time in limited ways.\n", "\n", "In particular, we use the following extension of a discrete-time linear\n", "quadratic dynamic programming problem.\n", "\n", "We let $ s_t \\in [1, 2, \\ldots, N] $ be a time $ t $ realization of an\n", "$ N $-state Markov chain with transition matrix $ \\Pi $ having\n", "typical element $ \\Pi_{ij} $.\n", "\n", "Here $ i $ denotes today and\n", "$ j $ denotes tomorrow and\n", "\n", "$$\n", "\\Pi_{ij} = {\\rm Prob}(s_{t+1} = j |s_t = i)\n", "$$\n", "\n", "We’ll switch between labeling today’s state as $ s_t $ and\n", "$ i $ and between labeling tomorrow’s state as $ s_{t+1} $ or\n", "$ j $.\n", "\n", "The decision-maker solves the minimization problem:\n", "\n", "$$\n", "\\min_{\\{u_t\\}_{t=0}^\\infty} E \\sum_{t=0}^{\\infty} \\beta^t r(x_t, s_t, u_t)\n", "$$\n", "\n", "with\n", "\n", "$$\n", "r(x_t, s_t, u_t) = x_t' R_{s_t} x_t + u_t' Q_{s_t} u_t + 2 u_t' W_{s_t} x_t\n", "$$\n", "\n", "subject to linear laws of motion with matrices $ (A,B,C) $ each\n", "possibly dependent on the Markov-state-$ s_t $:\n", "\n", "$$\n", "x_{t+1} = A_{s_t} x_t + B_{s_t} u_t + C_{s_t} w_{t+1}\n", "$$\n", "\n", "where $ \\{w_{t+1}\\} $ is an i.i.d. stochastic process with\n", "$ w_{t+1} \\sim {\\cal N}(0,I) $.\n", "\n", "The optimal decision rule for this problem has the form\n", "\n", "$$\n", "u_t = - F_{s_t} x_t\n", "$$\n", "\n", "and the optimal value functions are of the form\n", "\n", "$$\n", "-\\left( x_t' P_{s_t} x_t + \\rho_{s_t} \\right)\n", "$$\n", "\n", "or equivalently\n", "\n", "$$\n", "-x_t' P_i x_t - \\rho_i\n", "$$\n", "\n", "The optimal value functions $ - x' P_i x - \\rho_i $ for\n", "$ i = 1, \\ldots, n $ satisfy the $ N $\n", "interrelated Bellman equations\n", "\n", "$$\n", "\\begin{split}\n", "-x' P_i x - \\rho_i & = \\max_u -\n", " \\\\\n", " &\n", " \\left[\n", " x'R_i x + u' Q_i u + 2 u' W_i x +\n", " \\beta \\sum_j \\Pi_{ij}E ((A_i x + B_i u + C_i w)' P_j\n", " (A_i x + B_i u + C_i w) x + \\rho_j)\n", " \\right]\n", "\\end{split}\n", "$$\n", "\n", "The matrices $ P_{s_t} = P_i $ and the scalars\n", "$ \\rho_{s_t} = \\rho_i, i = 1, \\ldots $, n satisfy the following stacked system of\n", "**algebraic matrix Riccati** equations:\n", "\n", "$$\n", "P_i = R_i + \\beta \\sum_j A_i' P_j A_i\n", " \\Pi_{ij}\n", " -\\sum_j \\Pi_{ij}[ (\\beta B_i' P_j A_i + W_i)' (Q + \\beta B_i' P_j B_i)^{-1}\n", " (\\beta B_i' P_j A_i + W_i)]\n", "$$\n", "\n", "$$\n", "\\rho_i = \\beta\n", " \\sum_j \\Pi_{ij} ( \\rho_j + {\\rm trace}(P_j C_i C_i') )\n", "$$\n", "\n", "and the $ F_i $ in the optimal decision rules are\n", "\n", "$$\n", "F_i = (Q_i + \\beta \\sum_j \\Pi_{ij} B_i' P_j B_i)^{-1}\n", "(\\beta \\sum_j \\Pi_{ij}(B_i' P_j A_i )+ W_i)\n", "$$" ] }, { "cell_type": "markdown", "id": "10ffe4c0", "metadata": {}, "source": [ "## Applications\n", "\n", "We now describe some Python code and a few examples that put the code to work.\n", "\n", "To begin, we import these Python modules" ] }, { "cell_type": "code", "execution_count": null, "id": "2ff83788", "metadata": { "hide-output": false }, "outputs": [], "source": [ "import numpy as np\n", "import quantecon as qe\n", "import matplotlib.pyplot as plt\n", "from mpl_toolkits.mplot3d import Axes3D\n", "\n", "%matplotlib inline" ] }, { "cell_type": "code", "execution_count": null, "id": "9272dbf6", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Set discount factor\n", "β = 0.95" ] }, { "cell_type": "markdown", "id": "f7d509b5", "metadata": {}, "source": [ "## Example 1\n", "\n", "This example is a version of a classic problem of optimally adjusting a variable $ k_t $ to a target level in the face of costly adjustment.\n", "\n", "This provides a model of gradual adjustment.\n", "\n", "Given $ k_{0} $, the objective function is\n", "\n", "$$\n", "\\max_{\\left\\{ k_{t}\\right\\} _{t=1}^{\\infty}}E_{0}\\sum_{t=0}^{\\infty}\\beta^{t}r\\left(s_{t},k_{t}\\right)\n", "$$\n", "\n", "where the one-period payoff function is\n", "\n", "$$\n", "r(s_{t},k_{t})=f_{1,s_t} k_{t}-f_{2,s_t} k_{t}^{2}-d_{s_t}(k_{t+1}-k_{t})^{2},\n", "$$\n", "\n", "$ E_0 $ is a mathematical expectation conditioned on time $ 0 $ information $ x_0, s_0 $\n", "and the transition law for continuous state variable $ k_t $ is\n", "\n", "$$\n", "k_{t+1}-k_{t}=u_{t}\n", "$$\n", "\n", "We can think of $ k_t $ as the decision-maker’s capital and\n", "$ u_t $ as costs of adjusting the level of capital.\n", "\n", "We assume that $ f_{1}\\left(s_{t}\\right)>0 $,\n", "$ f_{2}\\left(s_{t}\\right)>0 $, and $ d\\left(s_{t}\\right)>0 $.\n", "\n", "Denote the state transition matrix for Markov state\n", "$ s_{t}\\in\\left\\{1,2 \\right\\} $ as $ \\Pi $:\n", "\n", "$$\n", "\\Pr \\left(s_{t+1}=j \\mid s_{t}= i \\right)=\\Pi_{ij}\n", "$$\n", "\n", "Let $ x_{t}=\\begin{bmatrix} k_{t}\\\\ 1 \\end{bmatrix} $\n", "\n", "We can represent the one-period payoff function\n", "$ r\\left(s_{t},k_{t}\\right) $ and the state-transition law as\n", "\n", "$$\n", "\\begin{aligned}\n", "r\\left(s_{t},k_{t}\\right) =f_{1,s_t} k_{t}-f_{2,s_t} k_{t}^{2}-d_{s_t} u_{t}{}^{2} \\\\\n", " =-x_{t}^{\\prime}\\underset{\\equiv R(s_{t})}{\\underbrace{\\begin{bmatrix}\n", "f_{2,s_t} & -\\frac{f_{1,s_t}}{2}\\\\\n", "-\\frac{f_{1,s_t}}{2} & 0\n", "\\end{bmatrix}}}x_{t}+\\underset{\\equiv Q\\left(s_{t}\\right)}{\\underbrace{d_{s_t}}}u_{t}{}^{2}\n", " \\end{aligned}\n", "$$\n", "\n", "$$\n", "x_{t+1}=\\begin{bmatrix}\n", "k_{t+1}\\\\\n", "1\n", "\\end{bmatrix}=\\underset{\\equiv A\\left(s_{t}\\right)}{\\underbrace{I_{2}}}x_{t}+\\underset{\\equiv B\\left(s_{t}\\right)}{\\underbrace{\\begin{bmatrix}\n", "1\\\\\n", "0\n", "\\end{bmatrix}}}u_{t}\n", "$$" ] }, { "cell_type": "code", "execution_count": null, "id": "409a32d8", "metadata": { "hide-output": false }, "outputs": [], "source": [ "def construct_arrays1(f1_vals=[1. ,1.],\n", " f2_vals=[1., 1.],\n", " d_vals=[1., 1.]):\n", " \"\"\"\n", " Construct matrices that map the problem described in example 1\n", " into a Markov jump linear quadratic dynamic programming problem\n", " \"\"\"\n", "\n", " # Number of Markov states\n", " m = len(f1_vals)\n", " # Number of state and control variables\n", " n, k = 2, 1\n", "\n", " # Construct sets of matrices for each state\n", " As = [np.eye(n) for i in range(m)]\n", " Bs = [np.array([[1, 0]]).T for i in range(m)]\n", "\n", " Rs = np.zeros((m, n, n))\n", " Qs = np.zeros((m, k, k))\n", "\n", " for i in range(m):\n", " Rs[i, 0, 0] = f2_vals[i]\n", " Rs[i, 1, 0] = - f1_vals[i] / 2\n", " Rs[i, 0, 1] = - f1_vals[i] / 2\n", "\n", " Qs[i, 0, 0] = d_vals[i]\n", "\n", " Cs, Ns = None, None\n", "\n", " # Compute the optimal k level of the payoff function in each state\n", " k_star = np.empty(m)\n", " for i in range(m):\n", " k_star[i] = f1_vals[i] / (2 * f2_vals[i])\n", "\n", " return Qs, Rs, Ns, As, Bs, Cs, k_star" ] }, { "cell_type": "markdown", "id": "f254512d", "metadata": {}, "source": [ "The continuous part of the state $ x_t $ consists of two variables,\n", "namely, $ k_t $ and a constant term." ] }, { "cell_type": "code", "execution_count": null, "id": "212b6151", "metadata": { "hide-output": false }, "outputs": [], "source": [ "state_vec1 = [\"k\", \"constant term\"]" ] }, { "cell_type": "markdown", "id": "e1545ed0", "metadata": {}, "source": [ "We start with a Markov transition matrix that makes the Markov state be\n", "strictly periodic:\n", "\n", "$$\n", "\\Pi_{1}=\\begin{bmatrix}\n", "0 & 1\\\\\n", "1 & 0\n", "\\end{bmatrix},\n", "$$\n", "\n", "We set $ f_{1,{s_t}} $ and $ f_{2,{s_t}} $ to be independent of the\n", "Markov state $ s_t $\n", "\n", "$$\n", "f_{1,1}=f_{1,2} = 1,\n", "$$\n", "\n", "$$\n", "f_{2,1} =f_{2,2} = 1\n", "$$\n", "\n", "In contrast to $ f_{1,{s_t}} $ and $ f_{2,{s_t}} $, we make the\n", "adjustment cost $ d_{s_t} $ vary across Markov states $ s_t $.\n", "\n", "We set the adjustment cost to be lower in Markov state $ 2 $\n", "\n", "$$\n", "d_1=1, d_2 = 0.5\n", "$$\n", "\n", "The following code forms a Markov switching LQ problem and computes the\n", "optimal value functions and optimal decision rules for each Markov state" ] }, { "cell_type": "code", "execution_count": null, "id": "4c3ad483", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Construct Markov transition matrix\n", "Π1 = np.array([[0., 1.],\n", " [1., 0.]])" ] }, { "cell_type": "code", "execution_count": null, "id": "98d84b74", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Construct matrices\n", "Qs, Rs, Ns, As, Bs, Cs, k_star = construct_arrays1(d_vals=[1., 0.5])" ] }, { "cell_type": "code", "execution_count": null, "id": "314f582e", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Construct a Markov Jump LQ problem\n", "ex1_a = qe.LQMarkov(Π1, Qs, Rs, As, Bs, Cs=Cs, Ns=Ns, beta=β)\n", "# Solve for optimal value functions and decision rules\n", "ex1_a.stationary_values();" ] }, { "cell_type": "markdown", "id": "699b34bf", "metadata": {}, "source": [ "Let’s look at the value function matrices and the decision rules for\n", "each Markov state" ] }, { "cell_type": "code", "execution_count": null, "id": "875d62b8", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# P(s)\n", "ex1_a.Ps" ] }, { "cell_type": "code", "execution_count": null, "id": "d14a38e3", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# d(s) = 0, since there is no randomness\n", "ex1_a.ds" ] }, { "cell_type": "code", "execution_count": null, "id": "dc051e2f", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# F(s)\n", "ex1_a.Fs" ] }, { "cell_type": "markdown", "id": "731cf10f", "metadata": {}, "source": [ "Now we’ll plot the decision rules and see if they make sense" ] }, { "cell_type": "code", "execution_count": null, "id": "260c236a", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Plot the optimal decision rules\n", "k_grid = np.linspace(0., 1., 100)\n", "# Optimal choice in state s1\n", "u1_star = - ex1_a.Fs[0, 0, 1] - ex1_a.Fs[0, 0, 0] * k_grid\n", "# Optimal choice in state s2\n", "u2_star = - ex1_a.Fs[1, 0, 1] - ex1_a.Fs[1, 0, 0] * k_grid\n", "\n", "fig, ax = plt.subplots()\n", "ax.plot(k_grid, k_grid + u1_star, label=\"$\\overline{s}_1$ (high)\")\n", "ax.plot(k_grid, k_grid + u2_star, label=\"$\\overline{s}_2$ (low)\")\n", "\n", "# The optimal k*\n", "ax.scatter([0.5, 0.5], [0.5, 0.5], marker=\"*\")\n", "ax.plot([k_star[0], k_star[0]], [0., 1.0], '--')\n", "\n", "# 45 degree line\n", "ax.plot([0., 1.], [0., 1.], '--', label=\"45 degree line\")\n", "\n", "ax.set_xlabel(\"$k_t$\")\n", "ax.set_ylabel(\"$k_{t+1}$\")\n", "ax.legend()\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "34fce2f8", "metadata": {}, "source": [ "The above graph plots $ k_{t+1}= k_t + u_t = k_t - F x_t $ as an affine\n", "(i.e., linear in $ k_t $ plus a constant) function of $ k_t $\n", "for both Markov states $ s_t $.\n", "\n", "It also plots the 45 degree line.\n", "\n", "Notice that the two $ s_t $-dependent *closed loop* functions that\n", "determine $ k_{t+1} $ as functions of $ k_t $ share the same\n", "rest point (also called a fixed point) at $ k_t = 0.5 $.\n", "\n", "Evidently, the optimal decision rule in Markov state $ 2 $,\n", "in which the adjustment cost is lower, makes $ k_{t+1} $ a flatter\n", "function of $ k_t $ in Markov state $ 2 $.\n", "\n", "This happens because when $ k_t $ is not at its fixed point,\n", "$ | u_{t,2}| > | u_{t,2} | $, so\n", "that the decision-maker adjusts toward the fixed point faster when\n", "the Markov state $ s_t $ takes a value that makes it cheaper." ] }, { "cell_type": "code", "execution_count": null, "id": "50810d4d", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Compute time series\n", "T = 20\n", "x0 = np.array([[0., 1.]]).T\n", "x_path = ex1_a.compute_sequence(x0, ts_length=T)[0]\n", "\n", "fig, ax = plt.subplots()\n", "ax.plot(range(T), x_path[0, :-1])\n", "ax.set_xlabel(\"$t$\")\n", "ax.set_ylabel(\"$k_t$\")\n", "ax.set_title(\"Optimal path of $k_t$\")\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "1493b0aa", "metadata": {}, "source": [ "Now we’ll depart from the preceding transition matrix that made the\n", "Markov state be strictly periodic.\n", "\n", "We’ll begin with symmetric transition matrices of the form\n", "\n", "$$\n", "\\Pi_{2}=\\begin{bmatrix}\n", "1-\\lambda & \\lambda\\\\\n", "\\lambda & 1-\\lambda\n", "\\end{bmatrix}.\n", "$$" ] }, { "cell_type": "code", "execution_count": null, "id": "57b16124", "metadata": { "hide-output": false }, "outputs": [], "source": [ "λ = 0.8 # high λ\n", "Π2 = np.array([[1-λ, λ],\n", " [λ, 1-λ]])\n", "\n", "ex1_b = qe.LQMarkov(Π2, Qs, Rs, As, Bs, Cs=Cs, Ns=Ns, beta=β)\n", "ex1_b.stationary_values();\n", "ex1_b.Fs" ] }, { "cell_type": "code", "execution_count": null, "id": "ed5ccee5", "metadata": { "hide-output": false }, "outputs": [], "source": [ "λ = 0.2 # low λ\n", "Π2 = np.array([[1-λ, λ],\n", " [λ, 1-λ]])\n", "\n", "ex1_b = qe.LQMarkov(Π2, Qs, Rs, As, Bs, Cs=Cs, Ns=Ns, beta=β)\n", "ex1_b.stationary_values();\n", "ex1_b.Fs" ] }, { "cell_type": "markdown", "id": "53d78ff9", "metadata": {}, "source": [ "We can plot optimal decision rules associated with different\n", "$ \\lambda $ values." ] }, { "cell_type": "code", "execution_count": null, "id": "eb6ec1d3", "metadata": { "hide-output": false }, "outputs": [], "source": [ "λ_vals = np.linspace(0., 1., 10)\n", "F1 = np.empty((λ_vals.size, 2))\n", "F2 = np.empty((λ_vals.size, 2))\n", "\n", "for i, λ in enumerate(λ_vals):\n", " Π2 = np.array([[1-λ, λ],\n", " [λ, 1-λ]])\n", "\n", " ex1_b = qe.LQMarkov(Π2, Qs, Rs, As, Bs, Cs=Cs, Ns=Ns, beta=β)\n", " ex1_b.stationary_values();\n", " F1[i, :] = ex1_b.Fs[0, 0, :]\n", " F2[i, :] = ex1_b.Fs[1, 0, :]" ] }, { "cell_type": "code", "execution_count": null, "id": "cfe80cc0", "metadata": { "hide-output": false }, "outputs": [], "source": [ "for i, state_var in enumerate(state_vec1):\n", " fig, ax = plt.subplots()\n", " ax.plot(λ_vals, F1[:, i], label=\"$\\overline{s}_1$\", color=\"b\")\n", " ax.plot(λ_vals, F2[:, i], label=\"$\\overline{s}_2$\", color=\"r\")\n", "\n", " ax.set_xlabel(\"$\\lambda$\")\n", " ax.set_ylabel(\"$F_{s_t}$\")\n", " ax.set_title(f\"Coefficient on {state_var}\")\n", " ax.legend()\n", " plt.show()" ] }, { "cell_type": "markdown", "id": "ad6538f5", "metadata": {}, "source": [ "Notice how the decision rules’ constants and slopes behave as functions\n", "of $ \\lambda $.\n", "\n", "Evidently, as the Markov chain becomes *more nearly periodic* (i.e., as\n", "$ \\lambda \\rightarrow 1 $), the dynamic program adjusts capital\n", "faster in the low adjustment cost Markov state to take advantage of what\n", "is only temporarily a more favorable time to invest.\n", "\n", "Now let’s study situations in which the Markov transition matrix\n", "$ \\Pi $ is asymmetric\n", "\n", "$$\n", "\\Pi_{3}=\\begin{bmatrix}\n", "1-\\lambda & \\lambda\\\\\n", "\\delta & 1-\\delta\n", "\\end{bmatrix}.\n", "$$" ] }, { "cell_type": "code", "execution_count": null, "id": "f296f008", "metadata": { "hide-output": false }, "outputs": [], "source": [ "λ, δ = 0.8, 0.2\n", "Π3 = np.array([[1-λ, λ],\n", " [δ, 1-δ]])\n", "\n", "ex1_b = qe.LQMarkov(Π3, Qs, Rs, As, Bs, Cs=Cs, Ns=Ns, beta=β)\n", "ex1_b.stationary_values();\n", "ex1_b.Fs" ] }, { "cell_type": "markdown", "id": "73afb5f8", "metadata": {}, "source": [ "We can plot optimal decision rules for different $ \\lambda $ and\n", "$ \\delta $ values." ] }, { "cell_type": "code", "execution_count": null, "id": "8bf9ccba", "metadata": { "hide-output": false }, "outputs": [], "source": [ "λ_vals = np.linspace(0., 1., 10)\n", "δ_vals = np.linspace(0., 1., 10)\n", "\n", "λ_grid = np.empty((λ_vals.size, δ_vals.size))\n", "δ_grid = np.empty((λ_vals.size, δ_vals.size))\n", "F1_grid = np.empty((λ_vals.size, δ_vals.size, len(state_vec1)))\n", "F2_grid = np.empty((λ_vals.size, δ_vals.size, len(state_vec1)))\n", "\n", "for i, λ in enumerate(λ_vals):\n", " λ_grid[i, :] = λ\n", " δ_grid[i, :] = δ_vals\n", " for j, δ in enumerate(δ_vals):\n", " Π3 = np.array([[1-λ, λ],\n", " [δ, 1-δ]])\n", "\n", " ex1_b = qe.LQMarkov(Π3, Qs, Rs, As, Bs, Cs=Cs, Ns=Ns, beta=β)\n", " ex1_b.stationary_values();\n", " F1_grid[i, j, :] = ex1_b.Fs[0, 0, :]\n", " F2_grid[i, j, :] = ex1_b.Fs[1, 0, :]" ] }, { "cell_type": "code", "execution_count": null, "id": "30d96340", "metadata": { "hide-output": false }, "outputs": [], "source": [ "for i, state_var in enumerate(state_vec1):\n", " fig = plt.figure()\n", " ax = fig.add_subplot(111, projection='3d')\n", " # high adjustment cost, blue surface\n", " ax.plot_surface(λ_grid, δ_grid, F1_grid[:, :, i], color=\"b\")\n", " # low adjustment cost, red surface\n", " ax.plot_surface(λ_grid, δ_grid, F2_grid[:, :, i], color=\"r\")\n", " ax.set_xlabel(\"$\\lambda$\")\n", " ax.set_ylabel(\"$\\delta$\")\n", " ax.set_zlabel(\"$F_{s_t}$\")\n", " ax.set_title(f\"coefficient on {state_var}\")\n", " plt.show()" ] }, { "cell_type": "markdown", "id": "7c2a030b", "metadata": {}, "source": [ "The following code defines a wrapper function that computes optimal\n", "decision rules for cases with different Markov transition matrices" ] }, { "cell_type": "code", "execution_count": null, "id": "8ecdf540", "metadata": { "hide-output": false }, "outputs": [], "source": [ "def run(construct_func, vals_dict, state_vec):\n", " \"\"\"\n", " A Wrapper function that repeats the computation above\n", " for different cases\n", " \"\"\"\n", "\n", " Qs, Rs, Ns, As, Bs, Cs, k_star = construct_func(**vals_dict)\n", "\n", " # Symmetric Π\n", " # Notice that pure periodic transition is a special case\n", " # when λ=1\n", " print(\"symmetric Π case:\\n\")\n", " λ_vals = np.linspace(0., 1., 10)\n", " F1 = np.empty((λ_vals.size, len(state_vec)))\n", " F2 = np.empty((λ_vals.size, len(state_vec)))\n", "\n", " for i, λ in enumerate(λ_vals):\n", " Π2 = np.array([[1-λ, λ],\n", " [λ, 1-λ]])\n", "\n", " mplq = qe.LQMarkov(Π2, Qs, Rs, As, Bs, Cs=Cs, Ns=Ns, beta=β)\n", " mplq.stationary_values();\n", " F1[i, :] = mplq.Fs[0, 0, :]\n", " F2[i, :] = mplq.Fs[1, 0, :]\n", "\n", " for i, state_var in enumerate(state_vec):\n", " fig = plt.figure()\n", " ax = fig.add_subplot(111)\n", " ax.plot(λ_vals, F1[:, i], label=\"$\\overline{s}_1$\", color=\"b\")\n", " ax.plot(λ_vals, F2[:, i], label=\"$\\overline{s}_2$\", color=\"r\")\n", "\n", " ax.set_xlabel(\"$\\lambda$\")\n", " ax.set_ylabel(\"$F(\\overline{s}_t)$\")\n", " ax.set_title(f\"coefficient on {state_var}\")\n", " ax.legend()\n", " plt.show()\n", "\n", " # Plot optimal k*_{s_t} and k that optimal policies are targeting\n", " # only for example 1\n", " if state_vec == [\"k\", \"constant term\"]:\n", " fig = plt.figure()\n", " ax = fig.add_subplot(111)\n", " for i in range(2):\n", " F = [F1, F2][i]\n", " c = [\"b\", \"r\"][i]\n", " ax.plot([0, 1], [k_star[i], k_star[i]], \"--\",\n", " color=c, label=\"$k^*(\\overline{s}_\"+str(i+1)+\")$\")\n", " ax.plot(λ_vals, - F[:, 1] / F[:, 0], color=c,\n", " label=\"$k^{target}(\\overline{s}_\"+str(i+1)+\")$\")\n", "\n", " # Plot a vertical line at λ=0.5\n", " ax.plot([0.5, 0.5], [min(k_star), max(k_star)], \"-.\")\n", "\n", " ax.set_xlabel(\"$\\lambda$\")\n", " ax.set_ylabel(\"$k$\")\n", " ax.set_title(\"Optimal k levels and k targets\")\n", " ax.text(0.5, min(k_star)+(max(k_star)-min(k_star))/20, \"$\\lambda=0.5$\")\n", " ax.legend(bbox_to_anchor=(1., 1.))\n", " plt.show()\n", "\n", " # Asymmetric Π\n", " print(\"asymmetric Π case:\\n\")\n", " δ_vals = np.linspace(0., 1., 10)\n", "\n", " λ_grid = np.empty((λ_vals.size, δ_vals.size))\n", " δ_grid = np.empty((λ_vals.size, δ_vals.size))\n", " F1_grid = np.empty((λ_vals.size, δ_vals.size, len(state_vec)))\n", " F2_grid = np.empty((λ_vals.size, δ_vals.size, len(state_vec)))\n", "\n", " for i, λ in enumerate(λ_vals):\n", " λ_grid[i, :] = λ\n", " δ_grid[i, :] = δ_vals\n", " for j, δ in enumerate(δ_vals):\n", " Π3 = np.array([[1-λ, λ],\n", " [δ, 1-δ]])\n", "\n", " mplq = qe.LQMarkov(Π3, Qs, Rs, As, Bs, Cs=Cs, Ns=Ns, beta=β)\n", " mplq.stationary_values();\n", " F1_grid[i, j, :] = mplq.Fs[0, 0, :]\n", " F2_grid[i, j, :] = mplq.Fs[1, 0, :]\n", "\n", " for i, state_var in enumerate(state_vec):\n", " fig = plt.figure()\n", " ax = fig.add_subplot(111, projection='3d')\n", " ax.plot_surface(λ_grid, δ_grid, F1_grid[:, :, i], color=\"b\")\n", " ax.plot_surface(λ_grid, δ_grid, F2_grid[:, :, i], color=\"r\")\n", " ax.set_xlabel(\"$\\lambda$\")\n", " ax.set_ylabel(\"$\\delta$\")\n", " ax.set_zlabel(\"$F(\\overline{s}_t)$\")\n", " ax.set_title(f\"coefficient on {state_var}\")\n", " plt.show()" ] }, { "cell_type": "markdown", "id": "d7a0fbea", "metadata": {}, "source": [ "To illustrate the code with another example, we shall set\n", "$ f_{2,s_t} $ and $ d_{s_t} $ as constant functions and\n", "\n", "$$\n", "f_{1,1} = 0.5, f_{1,2} = 1\n", "$$\n", "\n", "Thus, the sole role of the Markov jump state $ s_t $ is to identify\n", "times in which capital is very productive and other times in which it is\n", "less productive.\n", "\n", "The example below reveals much about the structure of the optimum\n", "problem and optimal policies.\n", "\n", "Only $ f_{1,{s_t}} $ varies with $ s_t $.\n", "\n", "So there are different $ s_t $-dependent optimal static $ k $\n", "level in different states $ k^*_{s_t}=\\frac{f_{1,{s_t}}}{2 f_{2,{s_t}}} $,\n", "values of $ k $ that maximize one-period payoff functions in each\n", "state.\n", "\n", "We denote a target $ k $ level as $ k^{target}_{s_t} $, the fixed\n", "point of the optimal policies in each state, given the value of\n", "$ \\lambda $.\n", "\n", "We call $ k^{target}_{s_t} $ a “target” because in each Markov state\n", "$ s_t $, optimal policies are contraction mappings and will push\n", "$ k_t $ towards a fixed point $ k^{target}_{s_t} $.\n", "\n", "When $ \\lambda \\rightarrow 0 $, each Markov state becomes close to\n", "absorbing state and consequently\n", "$ k^{target}_{s_t} \\rightarrow k^*_{s_t} $.\n", "\n", "But when $ \\lambda \\rightarrow 1 $, the Markov transition matrix\n", "becomes more nearly periodic, so the optimum decision rules target more\n", "at the optimal $ k $ level in the other state in order to enjoy higher\n", "expected payoff in the next period.\n", "\n", "The switch happens at $ \\lambda = 0.5 $ when both states are equally\n", "likely to be reached.\n", "\n", "Below we plot an additional figure that shows optimal $ k $ levels\n", "in the two states Markov jump state and also how the targeted $ k $\n", "levels change as $ \\lambda $ changes." ] }, { "cell_type": "code", "execution_count": null, "id": "340c5187", "metadata": { "hide-output": false }, "outputs": [], "source": [ "run(construct_arrays1, {\"f1_vals\":[0.5, 1.]}, state_vec1)" ] }, { "cell_type": "markdown", "id": "b4fff3fb", "metadata": {}, "source": [ "Set $ f_{1,{s_t}} $ and $ d_{s_t} $ as constant functions and\n", "\n", "$$\n", "f_{2,1} = 0.5, f_{2,2} = 1\n", "$$" ] }, { "cell_type": "code", "execution_count": null, "id": "74337028", "metadata": { "hide-output": false }, "outputs": [], "source": [ "run(construct_arrays1, {\"f2_vals\":[0.5, 1.]}, state_vec1)" ] }, { "cell_type": "markdown", "id": "69c448ce", "metadata": {}, "source": [ "## Example 2\n", "\n", "We now add to the example 1 setup another state variable $ w_t $\n", "that follows the evolution law\n", "\n", "$$\n", "w_{t+1}=\\alpha_{0}\\left(s_{t}\\right)+\\rho\\left(s_{t}\\right)w_{t}+\\sigma\\left(s_{t}\\right)\\epsilon_{t+1},\\quad\\epsilon_{t+1}\\sim N\\left(0,1\\right)\n", "$$\n", "\n", "We think of $ w_t $ as a rental rate or tax rate that the decision\n", "maker pays each period for $ k_t $.\n", "\n", "To capture this idea, we add to the decision-maker’s one-period payoff\n", "function the product of $ w_t $ and $ k_t $\n", "\n", "$$\n", "r(s_{t},k_{t},w_{t})=f_{1,s_{t}} k_{t}-f_{2, s_{t}} k_{t}^{2}-d_{s_{t}} (k_{t+1}-k_{t})^{2}-w_{t}k_{t},\n", "$$\n", "\n", "We now let the continuous part of the state at time $ t $ be\n", "$ x_{t}=\\begin{bmatrix} k_{t}\\\\ 1\\\\ w_{t} \\end{bmatrix} $\n", "and continue to set the control $ u_{t}=k_{t+1}-k_{t} $.\n", "\n", "We can write the one-period payoff function\n", "$ r\\left(s_{t},k_{t},w_{t}\\right) $ and the state-transition law as\n", "\n", "$$\n", "\\begin{aligned}\n", "r\\left(s_{t},k_{t},w_{t}\\right)\n", " & =f_{1}\\left(s_{t}\\right)k_{t}-f_{2}\\left(s_{t}\\right)k_{t}^{2}-d\\left(s_{t}\\right)\\left(k_{t+1}-k_{t}\\right)^{2}-w_{t}k_{t} \\\\\n", " & =-\\left(x_{t}^{\\prime}\\underset{\\equiv R\\left(s_{t}\\right)}{\\underbrace{\n", " \\begin{bmatrix}\n", " f_{2}\\left(s_{t}\\right) & -\\frac{f_{1}\\left(s_{t}\\right)}{2} & \\frac{1}{2}\\\\\n", " -\\frac{f_{1}\\left(s_{t}\\right)}{2} & 0 & 0\\\\\n", " \\frac{1}{2} & 0 & 0\n", " \\end{bmatrix}}}\n", "x_{t}+\n", "\\underset{\\equiv Q\\left(s_{t}\\right)}{\\underbrace{d\\left(s_{t}\\right)}}u_{t}^{2}\\right),\n", "\\end{aligned}\n", "$$\n", "\n", "and\n", "\n", "$$\n", "x_{t+1}=\\begin{bmatrix}\n", "k_{t+1}\\\\\n", "1\\\\\n", "w_{t+1}\n", "\\end{bmatrix}=\\underset{\\equiv A\\left(s_{t}\\right)}{\\underbrace{\\begin{bmatrix}\n", "1 & 0 & 0\\\\\n", "0 & 1 & 0\\\\\n", "0 & \\alpha_{0}\\left(s_{t}\\right) & \\rho\\left(s_{t}\\right)\n", "\\end{bmatrix}}}x_{t}+\\underset{\\equiv B\\left(s_{t}\\right)}{\\underbrace{\\begin{bmatrix}\n", "1\\\\\n", "0\\\\\n", "0\n", "\\end{bmatrix}}}u_{t}+\\underset{\\equiv C\\left(s_{t}\\right)}{\\underbrace{\\begin{bmatrix}\n", "0\\\\\n", "0\\\\\n", "\\sigma\\left(s_{t}\\right)\n", "\\end{bmatrix}}}\\epsilon_{t+1}\n", "$$" ] }, { "cell_type": "code", "execution_count": null, "id": "5173f732", "metadata": { "hide-output": false }, "outputs": [], "source": [ "def construct_arrays2(f1_vals=[1. ,1.],\n", " f2_vals=[1., 1.],\n", " d_vals=[1., 1.],\n", " α0_vals=[1., 1.],\n", " ρ_vals=[0.9, 0.9],\n", " σ_vals=[1., 1.]):\n", " \"\"\"\n", " Construct matrices that maps the problem described in example 2\n", " into a Markov jump linear quadratic dynamic programming problem.\n", " \"\"\"\n", "\n", " m = len(f1_vals)\n", " n, k, j = 3, 1, 1\n", "\n", " Rs = np.zeros((m, n, n))\n", " Qs = np.zeros((m, k, k))\n", " As = np.zeros((m, n, n))\n", " Bs = np.zeros((m, n, k))\n", " Cs = np.zeros((m, n, j))\n", "\n", " for i in range(m):\n", " Rs[i, 0, 0] = f2_vals[i]\n", " Rs[i, 1, 0] = - f1_vals[i] / 2\n", " Rs[i, 0, 1] = - f1_vals[i] / 2\n", " Rs[i, 0, 2] = 1/2\n", " Rs[i, 2, 0] = 1/2\n", "\n", " Qs[i, 0, 0] = d_vals[i]\n", "\n", " As[i, 0, 0] = 1\n", " As[i, 1, 1] = 1\n", " As[i, 2, 1] = α0_vals[i]\n", " As[i, 2, 2] = ρ_vals[i]\n", "\n", " Bs[i, :, :] = np.array([[1, 0, 0]]).T\n", "\n", " Cs[i, :, :] = np.array([[0, 0, σ_vals[i]]]).T\n", "\n", " Ns = None\n", " k_star = None\n", "\n", " return Qs, Rs, Ns, As, Bs, Cs, k_star" ] }, { "cell_type": "code", "execution_count": null, "id": "36b8b2b7", "metadata": { "hide-output": false }, "outputs": [], "source": [ "state_vec2 = [\"k\", \"constant term\", \"w\"]" ] }, { "cell_type": "markdown", "id": "191d4599", "metadata": {}, "source": [ "Only $ d_{s_t} $ depends on $ s_t $." ] }, { "cell_type": "code", "execution_count": null, "id": "77a15e72", "metadata": { "hide-output": false }, "outputs": [], "source": [ "run(construct_arrays2, {\"d_vals\":[1., 0.5]}, state_vec2)" ] }, { "cell_type": "markdown", "id": "0dc5ae4c", "metadata": {}, "source": [ "Only $ f_{1,{s_t}} $ depends on $ s_t $." ] }, { "cell_type": "code", "execution_count": null, "id": "c9f5b4d4", "metadata": { "hide-output": false }, "outputs": [], "source": [ "run(construct_arrays2, {\"f1_vals\":[0.5, 1.]}, state_vec2)" ] }, { "cell_type": "markdown", "id": "504af940", "metadata": {}, "source": [ "Only $ f_{2,{s_t}} $ depends on $ s_t $." ] }, { "cell_type": "code", "execution_count": null, "id": "43d85b60", "metadata": { "hide-output": false }, "outputs": [], "source": [ "run(construct_arrays2, {\"f2_vals\":[0.5, 1.]}, state_vec2)" ] }, { "cell_type": "markdown", "id": "bf3b109b", "metadata": {}, "source": [ "Only $ \\alpha_0(s_t) $ depends on $ s_t $." ] }, { "cell_type": "code", "execution_count": null, "id": "e7d87ed8", "metadata": { "hide-output": false }, "outputs": [], "source": [ "run(construct_arrays2, {\"α0_vals\":[0.5, 1.]}, state_vec2)" ] }, { "cell_type": "markdown", "id": "1491b8dc", "metadata": {}, "source": [ "Only $ \\rho_{s_t} $ depends on $ s_t $." ] }, { "cell_type": "code", "execution_count": null, "id": "3cc0392a", "metadata": { "hide-output": false }, "outputs": [], "source": [ "run(construct_arrays2, {\"ρ_vals\":[0.5, 0.9]}, state_vec2)" ] }, { "cell_type": "markdown", "id": "3654ce97", "metadata": {}, "source": [ "Only $ \\sigma_{s_t} $ depends on $ s_t $." ] }, { "cell_type": "code", "execution_count": null, "id": "bbd5c845", "metadata": { "hide-output": false }, "outputs": [], "source": [ "run(construct_arrays2, {\"σ_vals\":[0.5, 1.]}, state_vec2)" ] }, { "cell_type": "markdown", "id": "5e09541c", "metadata": {}, "source": [ "## More examples\n", "\n", "The following lectures describe how Markov jump linear quadratic dynamic programming can be used to extend the [[Bar79](https://python-advanced.quantecon.org/zreferences.html#id136)] model\n", "of optimal tax-smoothing and government debt in several interesting directions\n", "\n", "1. [How to Pay for a War: Part 1](https://python-advanced.quantecon.org/tax_smoothing_1.html) \n", "1. [How to Pay for a War: Part 2](https://python-advanced.quantecon.org/tax_smoothing_2.html) \n", "1. [How to Pay for a War: Part 3](https://python-advanced.quantecon.org/tax_smoothing_3.html) " ] } ], "metadata": { "date": 1705369586.7984066, "filename": "markov_jump_lq.md", "kernelspec": { "display_name": "Python", "language": "python3", "name": "python3" }, "title": "Markov Jump Linear Quadratic Dynamic Programming" }, "nbformat": 4, "nbformat_minor": 5 }