{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "
\n", " \n", " \"QuantEcon\"\n", " \n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Discrete State Dynamic Programming" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Contents\n", "\n", "- [Discrete State Dynamic Programming](#Discrete-State-Dynamic-Programming) \n", " - [Overview](#Overview) \n", " - [Discrete DPs](#Discrete-DPs) \n", " - [Solving Discrete DPs](#Solving-Discrete-DPs) \n", " - [Example: A Growth Model](#Example:-A-Growth-Model) \n", " - [Exercises](#Exercises) \n", " - [Solutions](#Solutions) \n", " - [Appendix: Algorithms](#Appendix:-Algorithms) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In addition to what’s in Anaconda, this lecture will need the following libraries:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": true }, "outputs": [], "source": [ "!pip install --upgrade quantecon" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overview\n", "\n", "In this lecture we discuss a family of dynamic programming problems with the following features:\n", "\n", "1. a discrete state space and discrete choices (actions) \n", "1. an infinite horizon \n", "1. discounted rewards \n", "1. Markov state transitions \n", "\n", "\n", "We call such problems discrete dynamic programs or discrete DPs.\n", "\n", "Discrete DPs are the workhorses in much of modern quantitative economics, including\n", "\n", "- monetary economics \n", "- search and labor economics \n", "- household savings and consumption theory \n", "- investment theory \n", "- asset pricing \n", "- industrial organization, etc. \n", "\n", "\n", "When a given model is not inherently discrete, it is common to replace it with a discretized version in order to use discrete DP techniques.\n", "\n", "This lecture covers\n", "\n", "- the theory of dynamic programming in a discrete setting, plus examples and\n", " applications \n", "- a powerful set of routines for solving discrete DPs from the [QuantEcon code library](http://quantecon.org/quantecon-py) \n", "\n", "\n", "Let’s start with some imports:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "import quantecon as qe\n", "import scipy.sparse as sparse\n", "from quantecon import compute_fixed_point\n", "from quantecon.markov import DiscreteDP" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### How to Read this Lecture\n", "\n", "We use dynamic programming many applied lectures, such as\n", "\n", "- The [shortest path lecture](https://python-intro.quantecon.org/short_path.html) \n", "- The [McCall search model lecture](https://python-intro.quantecon.org/mccall_model.html) \n", "\n", "\n", "The objective of this lecture is to provide a more systematic and theoretical treatment, including algorithms and implementation while focusing on the discrete case." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Code\n", "\n", "Among other things, it offers\n", "\n", "- a flexible, well-designed interface \n", "- multiple solution methods, including value function and policy function iteration \n", "- high-speed operations via carefully optimized JIT-compiled functions \n", "- the ability to scale to large problems by minimizing vectorized operators and allowing operations on sparse matrices \n", "\n", "\n", "JIT compilation relies on [Numba](http://numba.pydata.org/), which should work\n", "seamlessly if you are using [Anaconda](https://www.anaconda.com/download/) as [suggested](https://python-programming.quantecon.org/getting_started.html)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### References\n", "\n", "For background reading on dynamic programming and additional applications, see, for example,\n", "\n", "- [[LS18]](https://python-programming.quantecon.org/zreferences.html#ljungqvist2012) \n", "- [[HLL96]](https://python-programming.quantecon.org/zreferences.html#hernandezlermalasserre1996), section 3.5 \n", "- [[Put05]](https://python-programming.quantecon.org/zreferences.html#puterman2005) \n", "- [[SLP89]](https://python-programming.quantecon.org/zreferences.html#stokeylucas1989) \n", "- [[Rus96]](https://python-programming.quantecon.org/zreferences.html#rust1996) \n", "- [[MF02]](https://python-programming.quantecon.org/zreferences.html#mirandafackler2002) \n", "- [EDTC](http://johnstachurski.net/edtc.html), chapter 5 \n", "\n", "\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Discrete DPs\n", "\n", "Loosely speaking, a discrete DP is a maximization problem with an objective\n", "function of the form\n", "\n", "\n", "\n", "$$\n", "\\mathbb{E}\n", "\\sum_{t = 0}^{\\infty} \\beta^t r(s_t, a_t) \\tag{1}\n", "$$\n", "\n", "where\n", "\n", "- $ s_t $ is the state variable \n", "- $ a_t $ is the action \n", "- $ \\beta $ is a discount factor \n", "- $ r(s_t, a_t) $ is interpreted as a current reward when the state is $ s_t $ and the action chosen is $ a_t $ \n", "\n", "\n", "Each pair $ (s_t, a_t) $ pins down transition probabilities $ Q(s_t, a_t, s_{t+1}) $ for the next period state $ s_{t+1} $.\n", "\n", "Thus, actions influence not only current rewards but also the future time path of the state.\n", "\n", "The essence of dynamic programming problems is to trade off current rewards\n", "vs favorable positioning of the future state (modulo randomness).\n", "\n", "Examples:\n", "\n", "- consuming today vs saving and accumulating assets \n", "- accepting a job offer today vs seeking a better one in the future \n", "- exercising an option now vs waiting " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Policies\n", "\n", "The most fruitful way to think about solutions to discrete DP problems is to compare *policies*.\n", "\n", "In general, a policy is a randomized map from past actions and states to\n", "current action.\n", "\n", "In the setting formalized below, it suffices to consider so-called *stationary Markov policies*, which consider only the current state.\n", "\n", "In particular, a stationary Markov policy is a map $ \\sigma $ from states to actions\n", "\n", "- $ a_t = \\sigma(s_t) $ indicates that $ a_t $ is the action to be taken in state $ s_t $ \n", "\n", "\n", "It is known that, for any arbitrary policy, there exists a stationary Markov policy that dominates it at least weakly.\n", "\n", "- See section 5.5 of 
[[Put05]](https://python-programming.quantecon.org/zreferences.html#puterman2005) for discussion and proofs. \n", "\n", "\n", "In what follows, stationary Markov policies are referred to simply as policies.\n", "\n", "The aim is to find an optimal policy, in the sense of one that maximizes [(1)](#equation-dp-objective).\n", "\n", "Let’s now step through these ideas more carefully." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Formal Definition\n", "\n", "Formally, a discrete dynamic program consists of the following components:\n", "\n", "1. A finite set of *states* $ S = \\{0, \\ldots, n-1\\} $. \n", "1. A finite set of *feasible actions* $ A(s) $ for each state $ s \\in S $, and a corresponding set of *feasible state-action pairs*. \n", " $$\n", " \\mathit{SA} := \\{(s, a) \\mid s \\in S, \\; a \\in A(s)\\}\n", " $$\n", "1. A *reward function* $ r\\colon \\mathit{SA} \\to \\mathbb{R} $. \n", "1. A *transition probability function* $ Q\\colon \\mathit{SA} \\to \\Delta(S) $, where $ \\Delta(S) $ is the set of probability distributions over $ S $. \n", "1. A *discount factor* $ \\beta \\in [0, 1) $. \n", "\n", "\n", "We also use the notation $ A := \\bigcup_{s \\in S} A(s) = \\{0, \\ldots, m-1\\} $ and call this set the *action space*.\n", "\n", "A *policy* is a function $ \\sigma\\colon S \\to A $.\n", "\n", "A policy is called *feasible* if it satisfies $ \\sigma(s) \\in A(s) $ for all $ s \\in S $.\n", "\n", "Denote the set of all feasible policies by $ \\Sigma $.\n", "\n", "If a decision-maker uses a policy $ \\sigma \\in \\Sigma $, then\n", "\n", "- the current reward at time $ t $ is $ r(s_t, \\sigma(s_t)) $ \n", "- the probability that $ s_{t+1} = s' $ is $ Q(s_t, \\sigma(s_t), s') $ \n", "\n", "\n", "For each $ \\sigma \\in \\Sigma $, define\n", "\n", "- $ r_{\\sigma} $ by $ r_{\\sigma}(s) := r(s, \\sigma(s)) $) \n", "- $ Q_{\\sigma} $ by $ Q_{\\sigma}(s, s') := Q(s, \\sigma(s), s') $ \n", "\n", "\n", "Notice that $ Q_\\sigma $ is a [stochastic matrix](https://python-intro.quantecon.org/finite_markov.html#Stochastic-Matrices) on $ S $.\n", "\n", "It gives transition probabilities of the *controlled chain* when we follow policy $ \\sigma $.\n", "\n", "If we think of $ r_\\sigma $ as a column vector, then so is $ Q_\\sigma^t r_\\sigma $, and the $ s $-th row of the latter has the interpretation\n", "\n", "\n", "\n", "$$\n", "(Q_\\sigma^t r_\\sigma)(s) = \\mathbb E [ r(s_t, \\sigma(s_t)) \\mid s_0 = s ]\n", "\\quad \\text{when } \\{s_t\\} \\sim Q_\\sigma \\tag{2}\n", "$$\n", "\n", "Comments\n", "\n", "- $ \\{s_t\\} \\sim Q_\\sigma $ means that the state is generated by stochastic matrix $ Q_\\sigma $. \n", "- See [this discussion](https://python-intro.quantecon.org/finite_markov.html#Multiple-Step-Transition-Probabilities) on computing expectations of Markov chains for an explanation of the expression in [(2)](#equation-ddp-expec). \n", "\n", "\n", "Notice that we’re not really distinguishing between functions from $ S $ to $ \\mathbb R $ and vectors in $ \\mathbb R^n $.\n", "\n", "This is natural because they are in one to one correspondence." 
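] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make these objects concrete, here is a minimal sketch that builds $ r_\\sigma $ and $ Q_\\sigma $ from reward and transition arrays for a hypothetical example with two states and two actions (the numbers are invented purely for illustration), and then evaluates $ Q_\\sigma^t r_\\sigma $, which by [(2)](#equation-ddp-expec) gives the expected reward at date $ t $ from each initial state." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [
"import numpy as np\n",
"\n",
"# Hypothetical primitives for illustration only: n = 2 states, m = 2 actions\n",
"R = np.array([[5.0, 10.0],      # R[s, a] = r(s, a)\n",
"              [-1.0, 1.0]])\n",
"Q = np.array([[[0.5, 0.5],      # Q[s, a, s'] = transition probability to s'\n",
"               [0.0, 1.0]],\n",
"              [[1.0, 0.0],\n",
"               [0.3, 0.7]]])\n",
"\n",
"σ = np.array([1, 0])            # a policy: action 1 in state 0, action 0 in state 1\n",
"\n",
"r_σ = R[np.arange(2), σ]        # r_σ(s) = r(s, σ(s))\n",
"Q_σ = Q[np.arange(2), σ]        # Q_σ(s, s') = Q(s, σ(s), s'), a 2 x 2 stochastic matrix\n",
"\n",
"# By (2), (Q_σ^t r_σ)(s) is the expected reward at date t given s_0 = s under σ\n",
"t = 3\n",
"np.linalg.matrix_power(Q_σ, t) @ r_σ"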
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Value and Optimality\n", "\n", "Let $ v_{\\sigma}(s) $ denote the discounted sum of expected reward flows from policy $ \\sigma $\n", "when the initial state is $ s $.\n", "\n", "To calculate this quantity we pass the expectation through the sum in\n", "[(1)](#equation-dp-objective) and use [(2)](#equation-ddp-expec) to get\n", "\n", "$$\n", "v_{\\sigma}(s) = \\sum_{t=0}^{\\infty} \\beta^t (Q_{\\sigma}^t r_{\\sigma})(s)\n", "\\qquad (s \\in S)\n", "$$\n", "\n", "This function is called the *policy value function* for the policy $ \\sigma $.\n", "\n", "The *optimal value function*, or simply *value function*, is the function $ v^*\\colon S \\to \\mathbb{R} $ defined by\n", "\n", "$$\n", "v^*(s) = \\max_{\\sigma \\in \\Sigma} v_{\\sigma}(s)\n", "\\qquad (s \\in S)\n", "$$\n", "\n", "(We can use max rather than sup here because the domain is a finite set)\n", "\n", "A policy $ \\sigma \\in \\Sigma $ is called *optimal* if $ v_{\\sigma}(s) = v^*(s) $ for all $ s \\in S $.\n", "\n", "Given any $ w \\colon S \\to \\mathbb R $, a policy $ \\sigma \\in \\Sigma $ is called $ w $-greedy if\n", "\n", "$$\n", "\\sigma(s) \\in \\operatorname*{arg\\,max}_{a \\in A(s)}\n", "\\left\\{\n", " r(s, a) +\n", " \\beta \\sum_{s' \\in S} w(s') Q(s, a, s')\n", "\\right\\}\n", "\\qquad (s \\in S)\n", "$$\n", "\n", "As discussed in detail below, optimal policies are precisely those that are $ v^* $-greedy." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Two Operators\n", "\n", "It is useful to define the following operators:\n", "\n", "- The *Bellman operator* $ T\\colon \\mathbb{R}^S \\to \\mathbb{R}^S $\n", " is defined by \n", "\n", "\n", "$$\n", "(T v)(s) = \\max_{a \\in A(s)}\n", "\\left\\{\n", " r(s, a) + \\beta \\sum_{s' \\in S} v(s') Q(s, a, s')\n", "\\right\\}\n", "\\qquad (s \\in S)\n", "$$\n", "\n", "- For any policy function $ \\sigma \\in \\Sigma $, the operator $ T_{\\sigma}\\colon \\mathbb{R}^S \\to \\mathbb{R}^S $ is defined by \n", "\n", "\n", "$$\n", "(T_{\\sigma} v)(s) = r(s, \\sigma(s)) +\n", " \\beta \\sum_{s' \\in S} v(s') Q(s, \\sigma(s), s')\n", "\\qquad (s \\in S)\n", "$$\n", "\n", "This can be written more succinctly in operator notation as\n", "\n", "$$\n", "T_{\\sigma} v = r_{\\sigma} + \\beta Q_{\\sigma} v\n", "$$\n", "\n", "The two operators are both monotone\n", "\n", "- $ v \\leq w $ implies $ Tv \\leq Tw $ pointwise on $ S $, and\n", " similarly for $ T_\\sigma $ \n", "\n", "\n", "They are also contraction mappings with modulus $ \\beta $\n", "\n", "- $ \\lVert Tv - Tw \\rVert \\leq \\beta \\lVert v - w \\rVert $ and similarly for $ T_\\sigma $, where $ \\lVert \\cdot\\rVert $ is the max norm \n", "\n", "\n", "For any policy $ \\sigma $, its value $ v_{\\sigma} $ is the unique fixed point of $ T_{\\sigma} $.\n", "\n", "For proofs of these results and those in the next section, see, for example, [EDTC](http://johnstachurski.net/edtc.html), chapter 10." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The Bellman Equation and the Principle of Optimality\n", "\n", "The main principle of the theory of dynamic programming is that\n", "\n", "- the optimal value function $ v^* $ is a unique solution to the *Bellman equation* \n", "\n", "\n", "$$\n", "v(s) = \\max_{a \\in A(s)}\n", " \\left\\{\n", " r(s, a) + \\beta \\sum_{s' \\in S} v(s') Q(s, a, s')\n", " \\right\\}\n", "\\qquad (s \\in S)\n", "$$\n", "\n", "or in other words, $ v^* $ is the unique fixed point of $ T $, and\n", "\n", "- $ \\sigma^* $ is an optimal policy function if and only if it is $ v^* $-greedy \n", "\n", "\n", "By the definition of greedy policies given above, this means that\n", "\n", "$$\n", "\\sigma^*(s) \\in \\operatorname*{arg\\,max}_{a \\in A(s)}\n", " \\left\\{\n", " r(s, a) + \\beta \\sum_{s' \\in S} v^*(s') Q(s, \\sigma(s), s')\n", " \\right\\}\n", "\\qquad (s \\in S)\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Solving Discrete DPs\n", "\n", "Now that the theory has been set out, let’s turn to solution methods.\n", "\n", "The code for solving discrete DPs is available in [ddp.py](https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/markov/ddp.py) from the [QuantEcon.py](http://quantecon.org/quantecon-py) code library.\n", "\n", "It implements the three most important solution methods for discrete dynamic programs, namely\n", "\n", "- value function iteration \n", "- policy function iteration \n", "- modified policy function iteration \n", "\n", "\n", "Let’s briefly review these algorithms and their implementation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Value Function Iteration\n", "\n", "Perhaps the most familiar method for solving all manner of dynamic programs is value function iteration.\n", "\n", "This algorithm uses the fact that the Bellman operator $ T $ is a contraction mapping with fixed point $ v^* $.\n", "\n", "Hence, iterative application of $ T $ to any initial function $ v^0 \\colon S \\to \\mathbb R $ converges to $ v^* $.\n", "\n", "The details of the algorithm can be found in [the appendix](#ddp-algorithms)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Policy Function Iteration\n", "\n", "This routine, also known as Howard’s policy improvement algorithm, exploits more closely the particular structure of a discrete DP problem.\n", "\n", "Each iteration consists of\n", "\n", "1. A policy evaluation step that computes the value $ v_{\\sigma} $ of a policy $ \\sigma $ by solving the linear equation $ v = T_{\\sigma} v $. \n", "1. A policy improvement step that computes a $ v_{\\sigma} $-greedy policy. \n", "\n", "\n", "In the current setting, policy iteration computes an exact optimal policy in finitely many iterations.\n", "\n", "- See theorem 10.2.6 of [EDTC](http://johnstachurski.net/edtc.html) for a proof. \n", "\n", "\n", "The details of the algorithm can be found in [the appendix](#ddp-algorithms)." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Modified Policy Function Iteration\n", "\n", "Modified policy iteration replaces the policy evaluation step in policy iteration with “partial policy evaluation”.\n", "\n", "The latter computes an approximation to the value of a policy $ \\sigma $ by iterating $ T_{\\sigma} $ for a specified number of times.\n", "\n", "This approach can be useful when the state space is very large and the linear system in the policy evaluation step of policy iteration is correspondingly difficult to solve.\n", "\n", "The details of the algorithm can be found in [the appendix](#ddp-algorithms).\n", "\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Example: A Growth Model\n", "\n", "Let’s consider a simple consumption-saving model.\n", "\n", "A single household either consumes or stores its own output of a single consumption good.\n", "\n", "The household starts each period with current stock $ s $.\n", "\n", "Next, the household chooses a quantity $ a $ to store and consumes $ c = s - a $\n", "\n", "- Storage is limited by a global upper bound $ M $. \n", "- Flow utility is $ u(c) = c^{\\alpha} $. \n", "\n", "\n", "Output is drawn from a discrete uniform distribution on $ \\{0, \\ldots, B\\} $.\n", "\n", "The next period stock is therefore\n", "\n", "$$\n", "s' = a + U\n", "\\quad \\text{where} \\quad\n", "U \\sim U[0, \\ldots, B]\n", "$$\n", "\n", "The discount factor is $ \\beta \\in [0, 1) $." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Discrete DP Representation\n", "\n", "We want to represent this model in the format of a discrete dynamic program.\n", "\n", "To this end, we take\n", "\n", "- the state variable to be the stock $ s $ \n", "- the state space to be $ S = \\{0, \\ldots, M + B\\} $ \n", " - hence $ n = M + B + 1 $ \n", "- the action to be the storage quantity $ a $ \n", "- the set of feasible actions at $ s $ to be $ A(s) = \\{0, \\ldots, \\min\\{s, M\\}\\} $ \n", " - hence $ A = \\{0, \\ldots, M\\} $ and $ m = M + 1 $ \n", "- the reward function to be $ r(s, a) = u(s - a) $ \n", "- the transition probabilities to be \n", "\n", "\n", "\n", "\n", "$$\n", "Q(s, a, s') :=\n", "\\begin{cases}\n", " \\frac{1}{B + 1} & \\text{if } a \\leq s' \\leq a + B\n", " \\\\\n", " 0 & \\text{ otherwise}\n", "\\end{cases} \\tag{3}\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Defining a DiscreteDP Instance\n", "\n", "This information will be used to create an instance of DiscreteDP by passing\n", "the following information\n", "\n", "1. An $ n \\times m $ reward array $ R $. \n", "1. An $ n \\times m \\times n $ transition probability array $ Q $. \n", "1. A discount factor $ \\beta $. \n", "\n", "\n", "For $ R $ we set $ R[s, a] = u(s - a) $ if $ a \\leq s $ and $ -\\infty $ otherwise.\n", "\n", "For $ Q $ we follow the rule in [(3)](#equation-ddp-def-ogq).\n", "\n", "Note:\n", "\n", "- The feasibility constraint is embedded into $ R $ by setting $ R[s, a] = -\\infty $ for $ a \\notin A(s) $. \n", "- Probability distributions for $ (s, a) $ with $ a \\notin A(s) $ can be arbitrary. 
\n", "\n", "\n", "The following code sets up these objects for us" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "class SimpleOG:\n", "\n", " def __init__(self, B=10, M=5, α=0.5, β=0.9):\n", " \"\"\"\n", " Set up R, Q and β, the three elements that define an instance of\n", " the DiscreteDP class.\n", " \"\"\"\n", "\n", " self.B, self.M, self.α, self.β = B, M, α, β\n", " self.n = B + M + 1\n", " self.m = M + 1\n", "\n", " self.R = np.empty((self.n, self.m))\n", " self.Q = np.zeros((self.n, self.m, self.n))\n", "\n", " self.populate_Q()\n", " self.populate_R()\n", "\n", " def u(self, c):\n", " return c**self.α\n", "\n", " def populate_R(self):\n", " \"\"\"\n", " Populate the R matrix, with R[s, a] = -np.inf for infeasible\n", " state-action pairs.\n", " \"\"\"\n", " for s in range(self.n):\n", " for a in range(self.m):\n", " self.R[s, a] = self.u(s - a) if a <= s else -np.inf\n", "\n", " def populate_Q(self):\n", " \"\"\"\n", " Populate the Q matrix by setting\n", "\n", " Q[s, a, s'] = 1 / (1 + B) if a <= s' <= a + B\n", "\n", " and zero otherwise.\n", " \"\"\"\n", "\n", " for a in range(self.m):\n", " self.Q[:, a, a:(a + self.B + 1)] = 1.0 / (self.B + 1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let’s run this code and create an instance of `SimpleOG`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "g = SimpleOG() # Use default parameters" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Instances of `DiscreteDP` are created using the signature `DiscreteDP(R, Q, β)`.\n", "\n", "Let’s create an instance using the objects stored in `g`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "ddp = qe.markov.DiscreteDP(g.R, g.Q, g.β)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have an instance `ddp` of `DiscreteDP` we can solve it as follows" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "results = ddp.solve(method='policy_iteration')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let’s see what we’ve got here" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "dir(results)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "(In IPython version 4.0 and above you can also type `results.` and hit the tab key)\n", "\n", "The most important attributes are `v`, the value function, and `σ`, the optimal policy" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "results.v" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "results.sigma" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since we’ve used policy iteration, these results will be exact unless we hit the iteration bound `max_iter`.\n", "\n", "Let’s make sure this didn’t happen" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "results.max_iter" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "results.num_iter" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another interesting object is `results.mc`, which is the 
controlled chain defined by $ Q_{\\sigma^*} $, where $ \\sigma^* $ is the optimal policy.\n", "\n", "In other words, it gives the dynamics of the state when the agent follows the optimal policy.\n", "\n", "Since this object is an instance of MarkovChain from [QuantEcon.py](http://quantecon.org/quantecon-py) (see [this lecture](https://python-intro.quantecon.org/finite_markov.html) for more discussion), we\n", "can easily simulate it, compute its stationary distribution and so on." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "results.mc.stationary_distributions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here’s the same information in a bar graph\n", "\n", "\n", "\n", " \n", "What happens if the agent is more patient?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "ddp = qe.markov.DiscreteDP(g.R, g.Q, 0.99) # Increase β to 0.99\n", "results = ddp.solve(method='policy_iteration')\n", "results.mc.stationary_distributions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we look at the bar graph we can see the rightward shift in probability mass\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### State-Action Pair Formulation\n", "\n", "The `DiscreteDP` class in fact, provides a second interface to set up an instance.\n", "\n", "One of the advantages of this alternative set up is that it permits the use of a sparse matrix for `Q`.\n", "\n", "(An example of using sparse matrices is given in the exercises below)\n", "\n", "The call signature of the second formulation is `DiscreteDP(R, Q, β, s_indices, a_indices)` where\n", "\n", "- `s_indices` and `a_indices` are arrays of equal length `L` enumerating all feasible state-action pairs \n", "- `R` is an array of length `L` giving corresponding rewards \n", "- `Q` is an `L x n` transition probability array \n", "\n", "\n", "Here’s how we could set up these objects for the preceding example" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "B, M, α, β = 10, 5, 0.5, 0.9\n", "n = B + M + 1\n", "m = M + 1\n", "\n", "def u(c):\n", " return c**α\n", "\n", "s_indices = []\n", "a_indices = []\n", "Q = []\n", "R = []\n", "b = 1.0 / (B + 1)\n", "\n", "for s in range(n):\n", " for a in range(min(M, s) + 1): # All feasible a at this s\n", " s_indices.append(s)\n", " a_indices.append(a)\n", " q = np.zeros(n)\n", " q[a:(a + B + 1)] = b # b on these values, otherwise 0\n", " Q.append(q)\n", " R.append(u(s - a))\n", "\n", "ddp = qe.markov.DiscreteDP(R, Q, β, s_indices, a_indices)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For larger problems, you might need to write this code more efficiently by vectorizing or using Numba." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercises\n", "\n", "In the [stochastic optimal growth lecture](https://python-intro.quantecon.org/optgrowth.html) from our introductory lecture series, we solve a benchmark model that has an analytical solution.\n", "\n", "The exercise is to replicate this solution using `DiscreteDP`." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Solutions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Setup\n", "\n", "Details of the model can be found in [the lecture on optimal growth](https://python-intro.quantecon.org/optgrowth.html).\n", "\n", "We let $ f(k) = k^{\\alpha} $ with $ \\alpha = 0.65 $, $ u(c) =\n", "\\log c $, and $ \\beta = 0.95 $" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "α = 0.65\n", "f = lambda k: k**α\n", "u = np.log\n", "β = 0.95" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we want to solve a finite state version of the continuous state model above.\n", "\n", "We discretize the state space into a grid of size `grid_size=500`, from $ 10^{-6} $ to `grid_max=2`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "grid_max = 2\n", "grid_size = 500\n", "grid = np.linspace(1e-6, grid_max, grid_size)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We choose the action to be the amount of capital to save for the next\n", "period (the state is the capital stock at the beginning of the period).\n", "\n", "Thus the state indices and the action indices are both `0`, …, `grid_size-1`.\n", "\n", "Action (indexed by) `a` is feasible at state (indexed by) `s` if and only if `grid[a] < f([grid[s])` (zero consumption is not allowed because of the log utility).\n", "\n", "Thus the Bellman equation is:\n", "\n", "$$\n", "v(k) = \\max_{0 < k' < f(k)} u(f(k) - k') + \\beta v(k'),\n", "$$\n", "\n", "where $ k' $ is the capital stock in the next period.\n", "\n", "The transition probability array `Q` will be highly sparse (in fact it\n", "is degenerate as the model is deterministic), so we formulate the\n", "problem with state-action pairs, to represent `Q` in [scipy sparse matrix\n", "format](http://docs.scipy.org/doc/scipy/reference/sparse.html).\n", "\n", "We first construct indices for state-action pairs:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Consumption matrix, with nonpositive consumption included\n", "C = f(grid).reshape(grid_size, 1) - grid.reshape(1, grid_size)\n", "\n", "# State-action indices\n", "s_indices, a_indices = np.where(C > 0)\n", "\n", "# Number of state-action pairs\n", "L = len(s_indices)\n", "\n", "print(L)\n", "print(s_indices)\n", "print(a_indices)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Reward vector `R` (of length `L`):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "R = u(C[s_indices, a_indices])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "(Degenerate) transition probability matrix `Q` (of shape `(L, grid_size)`), where we choose the [scipy.sparse.lil_matrix](http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.lil_matrix.html) format, while any format will do (internally it will be converted to the csr format):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "Q = sparse.lil_matrix((L, grid_size))\n", "Q[np.arange(L), a_indices] = 1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "(If you are familiar with the data structure of [scipy.sparse.csr_matrix](http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html), the following is the most 
efficient way to create the `Q` matrix in\n", "the current case)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# data = np.ones(L)\n", "# indptr = np.arange(L+1)\n", "# Q = sparse.csr_matrix((data, a_indices, indptr), shape=(L, grid_size))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Discrete growth model:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "ddp = DiscreteDP(R, Q, β, s_indices, a_indices)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Notes**\n", "\n", "Here we intensively vectorized the operations on arrays to simplify the code.\n", "\n", "As [noted](https://python-programming.quantecon.org/need_for_speed.html#numba-p-c-vectorization), however, vectorization is memory consumptive, and it can be prohibitively so for grids with large size." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Solving the Model\n", "\n", "Solve the dynamic optimization problem:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "res = ddp.solve(method='policy_iteration')\n", "v, σ, num_iter = res.v, res.sigma, res.num_iter\n", "num_iter" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that `sigma` contains the *indices* of the optimal *capital\n", "stocks* to save for the next period. The following translates `sigma`\n", "to the corresponding consumption vector." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Optimal consumption in the discrete version\n", "c = f(grid) - grid[σ]\n", "\n", "# Exact solution of the continuous version\n", "ab = α * β\n", "c1 = (np.log(1 - ab) + np.log(ab) * ab / (1 - ab)) / (1 - β)\n", "c2 = α / (1 - ab)\n", "\n", "def v_star(k):\n", " return c1 + c2 * np.log(k)\n", "\n", "def c_star(k):\n", " return (1 - ab) * k**α" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us compare the solution of the discrete model with that of the\n", "original continuous model" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "fig, ax = plt.subplots(1, 2, figsize=(14, 4))\n", "ax[0].set_ylim(-40, -32)\n", "ax[0].set_xlim(grid[0], grid[-1])\n", "ax[1].set_xlim(grid[0], grid[-1])\n", "\n", "lb0 = 'discrete value function'\n", "ax[0].plot(grid, v, lw=2, alpha=0.6, label=lb0)\n", "\n", "lb0 = 'continuous value function'\n", "ax[0].plot(grid, v_star(grid), 'k-', lw=1.5, alpha=0.8, label=lb0)\n", "ax[0].legend(loc='upper left')\n", "\n", "lb1 = 'discrete optimal consumption'\n", "ax[1].plot(grid, c, 'b-', lw=2, alpha=0.6, label=lb1)\n", "\n", "lb1 = 'continuous optimal consumption'\n", "ax[1].plot(grid, c_star(grid), 'k-', lw=1.5, alpha=0.8, label=lb1)\n", "ax[1].legend(loc='upper left')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The outcomes appear very close to those of the continuous version.\n", "\n", "Except for the “boundary” point, the value functions are very close:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "np.abs(v - v_star(grid)).max()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "np.abs(v - v_star(grid))[1:].max()" ] }, { "cell_type": "markdown", "metadata": {}, 
"source": [ "The optimal consumption functions are close as well:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "np.abs(c - c_star(grid)).max()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In fact, the optimal consumption obtained in the discrete version is not\n", "really monotone, but the decrements are quite small:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "diff = np.diff(c)\n", "(diff >= 0).all()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "dec_ind = np.where(diff < 0)[0]\n", "len(dec_ind)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "np.abs(diff[dec_ind]).max()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The value function is monotone:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "(np.diff(v) > 0).all()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Comparison of the Solution Methods\n", "\n", "Let us solve the problem with the other two methods." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Value Iteration" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "ddp.epsilon = 1e-4\n", "ddp.max_iter = 500\n", "res1 = ddp.solve(method='value_iteration')\n", "res1.num_iter" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "np.array_equal(σ, res1.sigma)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Modified Policy Iteration" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "res2 = ddp.solve(method='modified_policy_iteration')\n", "res2.num_iter" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "np.array_equal(σ, res2.sigma)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Speed Comparison" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "%timeit ddp.solve(method='value_iteration')\n", "%timeit ddp.solve(method='policy_iteration')\n", "%timeit ddp.solve(method='modified_policy_iteration')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As is often the case, policy iteration and modified policy iteration are\n", "much faster than value iteration." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Replication of the Figures\n", "\n", "Using `DiscreteDP` we replicate the figures shown in the lecture." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Convergence of Value Iteration\n", "\n", "Let us first visualize the convergence of the value iteration algorithm\n", "as in the lecture, where we use `ddp.bellman_operator` implemented as\n", "a method of `DiscreteDP`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "w = 5 * np.log(grid) - 25 # Initial condition\n", "n = 35\n", "fig, ax = plt.subplots(figsize=(8,5))\n", "ax.set_ylim(-40, -20)\n", "ax.set_xlim(np.min(grid), np.max(grid))\n", "lb = 'initial condition'\n", "ax.plot(grid, w, color=plt.cm.jet(0), lw=2, alpha=0.6, label=lb)\n", "for i in range(n):\n", " w = ddp.bellman_operator(w)\n", " ax.plot(grid, w, color=plt.cm.jet(i / n), lw=2, alpha=0.6)\n", "lb = 'true value function'\n", "ax.plot(grid, v_star(grid), 'k-', lw=2, alpha=0.8, label=lb)\n", "ax.legend(loc='upper left')\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We next plot the consumption policies along with the value iteration" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "w = 5 * u(grid) - 25 # Initial condition\n", "\n", "fig, ax = plt.subplots(3, 1, figsize=(8, 10))\n", "true_c = c_star(grid)\n", "\n", "for i, n in enumerate((2, 4, 6)):\n", " ax[i].set_ylim(0, 1)\n", " ax[i].set_xlim(0, 2)\n", " ax[i].set_yticks((0, 1))\n", " ax[i].set_xticks((0, 2))\n", "\n", " w = 5 * u(grid) - 25 # Initial condition\n", " compute_fixed_point(ddp.bellman_operator, w, max_iter=n, print_skip=1)\n", " σ = ddp.compute_greedy(w) # Policy indices\n", " c_policy = f(grid) - grid[σ]\n", "\n", " ax[i].plot(grid, c_policy, 'b-', lw=2, alpha=0.8,\n", " label='approximate optimal consumption policy')\n", " ax[i].plot(grid, true_c, 'k-', lw=2, alpha=0.8,\n", " label='true optimal consumption policy')\n", " ax[i].legend(loc='upper left')\n", " ax[i].set_title(f'{n} value function iterations')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Dynamics of the Capital Stock\n", "\n", "Finally, let us work on [Exercise\n", "2](https://python-intro.quantecon.org/optgrowth.html#Exercise-1), where we plot\n", "the trajectories of the capital stock for three different discount\n", "factors, $ 0.9 $, $ 0.94 $, and $ 0.98 $, with initial\n", "condition $ k_0 = 0.1 $." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "discount_factors = (0.9, 0.94, 0.98)\n", "k_init = 0.1\n", "\n", "# Search for the index corresponding to k_init\n", "k_init_ind = np.searchsorted(grid, k_init)\n", "\n", "sample_size = 25\n", "\n", "fig, ax = plt.subplots(figsize=(8,5))\n", "ax.set_xlabel(\"time\")\n", "ax.set_ylabel(\"capital\")\n", "ax.set_ylim(0.10, 0.30)\n", "\n", "# Create a new instance, not to modify the one used above\n", "ddp0 = DiscreteDP(R, Q, β, s_indices, a_indices)\n", "\n", "for beta in discount_factors:\n", " ddp0.beta = beta\n", " res0 = ddp0.solve()\n", " k_path_ind = res0.mc.simulate(init=k_init_ind, ts_length=sample_size)\n", " k_path = grid[k_path_ind]\n", " ax.plot(k_path, 'o-', lw=2, alpha=0.75, label=f'$\\\\beta = {beta}$')\n", "\n", "ax.legend(loc='lower right')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Appendix: Algorithms\n", "\n", "This appendix covers the details of the solution algorithms implemented for `DiscreteDP`.\n", "\n", "We will make use of the following notions of approximate optimality:\n", "\n", "- For $ \\varepsilon > 0 $, $ v $ is called an $ \\varepsilon $-approximation of $ v^* $ if $ \\lVert v - v^*\\rVert < \\varepsilon $. \n", "- A policy $ \\sigma \\in \\Sigma $ is called $ \\varepsilon $-optimal if $ v_{\\sigma} $ is an $ \\varepsilon $-approximation of $ v^* $. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Value Iteration\n", "\n", "The `DiscreteDP` value iteration method implements value function iteration as\n", "follows\n", "\n", "1. Choose any $ v^0 \\in \\mathbb{R}^n $, and specify $ \\varepsilon > 0 $; set $ i = 0 $. \n", "1. Compute $ v^{i+1} = T v^i $. \n", "1. If $ \\lVert v^{i+1} - v^i\\rVert < [(1 - \\beta) / (2\\beta)] \\varepsilon $,\n", " then go to step 4; otherwise, set $ i = i + 1 $ and go to step 2. \n", "1. Compute a $ v^{i+1} $-greedy policy $ \\sigma $, and return $ v^{i+1} $ and $ \\sigma $. \n", "\n", "\n", "Given $ \\varepsilon > 0 $, the value iteration algorithm\n", "\n", "- terminates in a finite number of iterations \n", "- returns an $ \\varepsilon/2 $-approximation of the optimal value function and an $ \\varepsilon $-optimal policy function (unless `iter_max` is reached) \n", "\n", "\n", "(While not explicit, in the actual implementation each algorithm is\n", "terminated if the number of iterations reaches `iter_max`)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Policy Iteration\n", "\n", "The `DiscreteDP` policy iteration method runs as follows\n", "\n", "1. Choose any $ v^0 \\in \\mathbb{R}^n $ and compute a $ v^0 $-greedy policy $ \\sigma^0 $; set $ i = 0 $. \n", "1. Compute the value $ v_{\\sigma^i} $ by solving\n", " the equation $ v = T_{\\sigma^i} v $. \n", "1. Compute a $ v_{\\sigma^i} $-greedy policy\n", " $ \\sigma^{i+1} $; let $ \\sigma^{i+1} = \\sigma^i $ if\n", " possible. \n", "1. If $ \\sigma^{i+1} = \\sigma^i $, then return $ v_{\\sigma^i} $\n", " and $ \\sigma^{i+1} $; otherwise, set $ i = i + 1 $ and go to\n", " step 2. \n", "\n", "\n", "The policy iteration algorithm terminates in a finite number of\n", "iterations.\n", "\n", "It returns an optimal value function and an optimal policy function (unless `iter_max` is reached)." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Modified Policy Iteration\n", "\n", "The `DiscreteDP` modified policy iteration method runs as follows:\n", "\n", "1. Choose any $ v^0 \\in \\mathbb{R}^n $, and specify $ \\varepsilon > 0 $ and $ k \\geq 0 $; set $ i = 0 $. \n", "1. Compute a $ v^i $-greedy policy $ \\sigma^{i+1} $; let $ \\sigma^{i+1} = \\sigma^i $ if possible (for $ i \\geq 1 $). \n", "1. Compute $ u = T v^i $ ($ = T_{\\sigma^{i+1}} v^i $). If $ \\mathrm{span}(u - v^i) < [(1 - \\beta) / \\beta] \\varepsilon $, then go to step 5; otherwise go to step 4. \n", " - Span is defined by $ \\mathrm{span}(z) = \\max(z) - \\min(z) $. \n", "1. Compute $ v^{i+1} = (T_{\\sigma^{i+1}})^k u $ ($ = (T_{\\sigma^{i+1}})^{k+1} v^i $); set $ i = i + 1 $ and go to step 2. \n", "1. Return $ v = u + [\\beta / (1 - \\beta)] [(\\min(u - v^i) + \\max(u - v^i)) / 2] \\mathbf{1} $ and $ \\sigma_{i+1} $. \n", "\n", "\n", "Given $ \\varepsilon > 0 $, provided that $ v^0 $ is such that\n", "$ T v^0 \\geq v^0 $, the modified policy iteration algorithm\n", "terminates in a finite number of iterations.\n", "\n", "It returns an $ \\varepsilon/2 $-approximation of the optimal value function and an $ \\varepsilon $-optimal policy function (unless `iter_max` is reached).\n", "\n", "See also the documentation for `DiscreteDP`." ] } ], "metadata": { "date": 1624431172.23872, "filename": "discrete_dp.rst", "kernelspec": { "display_name": "Python", "language": "python3", "name": "python3" }, "next_doc": { "link": "index_lq_control", "title": "LQ Control" }, "prev_doc": { "link": "muth_kalman", "title": "Reverse Engineering a la Muth" }, "title": "Discrete State Dynamic Programming" }, "nbformat": 4, "nbformat_minor": 2 }