{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "
\n", " \n", " \"QuantEcon\"\n", " \n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Optimal Growth I: The Stochastic Optimal Growth Model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Contents\n", "\n", "- [Optimal Growth I: The Stochastic Optimal Growth Model](#Optimal-Growth-I:-The-Stochastic-Optimal-Growth-Model) \n", " - [Overview](#Overview) \n", " - [The Model](#The-Model) \n", " - [Computation](#Computation) \n", " - [Exercises](#Exercises) \n", " - [Solutions](#Solutions) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overview\n", "\n", "In this lecture, we’re going to study a simple optimal growth model with one agent.\n", "\n", "The model is a version of the standard one sector infinite horizon growth model studied in\n", "\n", "- [[SLP89]](https://python-programming.quantecon.org/zreferences.html#stokeylucas1989), chapter 2 \n", "- [[LS18]](https://python-programming.quantecon.org/zreferences.html#ljungqvist2012), section 3.1 \n", "- [EDTC](http://johnstachurski.net/edtc.html), chapter 1 \n", "- [[Sun96]](https://python-programming.quantecon.org/zreferences.html#sundaram1996), chapter 12 \n", "\n", "\n", "The technique we use to solve the model is dynamic programming.\n", "\n", "Our treatment of dynamic programming follows on from earlier\n", "treatments in our lectures on [shortest paths](https://python-intro.quantecon.org/short_path.html) and\n", "[job search](https://python-intro.quantecon.org/mccall_model.html).\n", "\n", "We’ll discuss some of the technical details of dynamic programming as we\n", "go along." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Code\n", "\n", "Regarding code, our implementation in this lecture will focus on clarity and flexibility.\n", "\n", "Both of these things are helpful, particularly for those readers still trying to understand\n", "the material, but they do cost us some speed — as you will\n", "see when you run the code.\n", "\n", "In the [next lecture](https://python-programming.quantecon.org/optgrowth_fast.html) we will sacrifice some of this\n", "clarity and flexibility in order to accelerate our code with just-in-time (JIT) compilation.\n", "\n", "Let’s start with some imports:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "from scipy.interpolate import interp1d\n", "from scipy.optimize import minimize_scalar\n", "\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Model\n", "\n", "\n", "\n", "Consider an agent who owns an amount $ y_t \\in \\mathbb R_+ := [0, \\infty) $ of a consumption good at time $ t $.\n", "\n", "This output can either be consumed or invested.\n", "\n", "When the good is invested it is transformed one-for-one into capital.\n", "\n", "The resulting capital stock, denoted here by $ k_{t+1} $, will then be used for production.\n", "\n", "Production is stochastic, in that it also depends on a shock $ \\xi_{t+1} $ realized at the end of the current period.\n", "\n", "Next period output is\n", "\n", "$$\n", "y_{t+1} := f(k_{t+1}) \\xi_{t+1}\n", "$$\n", "\n", "where $ f \\colon \\mathbb R_+ \\to \\mathbb R_+ $ is called the production function.\n", "\n", "The resource constraint is\n", "\n", "\n", "\n", "$$\n", "k_{t+1} + c_t \\leq y_t \\tag{1}\n", "$$\n", "\n", "and all variables are required to be nonnegative." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Assumptions and Comments\n", "\n", "In what follows,\n", "\n", "- The sequence $ \\{\\xi_t\\} $ is assumed to be IID. \n", "- The common distribution of each $ \\xi_t $ will be denoted $ \\phi $. \n", "- The production function $ f $ is assumed to be increasing and continuous. \n", "- Depreciation of capital is not made explicit but can be incorporated into the production function. \n", "\n", "\n", "While many other treatments of the stochastic growth model use $ k_t $ as the state variable, we will use $ y_t $.\n", "\n", "This will allow us to treat a stochastic model while maintaining only one state variable.\n", "\n", "We consider alternative states and timing specifications in some of our other lectures." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Optimization\n", "\n", "Taking $ y_0 $ as given, the agent wishes to maximize\n", "\n", "\n", "\n", "$$\n", "\\mathbb E \\left[ \\sum_{t = 0}^{\\infty} \\beta^t u(c_t) \\right] \\tag{2}\n", "$$\n", "\n", "subject to\n", "\n", "\n", "\n", "$$\n", "y_{t+1} = f(y_t - c_t) \\xi_{t+1}\n", "\\quad \\text{and} \\quad\n", "0 \\leq c_t \\leq y_t\n", "\\quad \\text{for all } t \\tag{3}\n", "$$\n", "\n", "where\n", "\n", "- $ u $ is a bounded, continuous and strictly increasing utility function and \n", "- $ \\beta \\in (0, 1) $ is a discount factor. \n", "\n", "\n", "In [(3)](#equation-og-conse) we are assuming that the resource constraint [(1)](#equation-outcsdp0) holds with equality — which is reasonable because $ u $ is strictly increasing and no output will be wasted at the optimum.\n", "\n", "In summary, the agent’s aim is to select a path $ c_0, c_1, c_2, \\ldots $ for consumption that is\n", "\n", "1. nonnegative, \n", "1. feasible in the sense of [(1)](#equation-outcsdp0), \n", "1. optimal, in the sense that it maximizes [(2)](#equation-texs0-og2) relative to all other feasible consumption sequences, and \n", "1. *adapted*, in the sense that the action $ c_t $ depends only on\n", " observable outcomes, not on future outcomes such as $ \\xi_{t+1} $. \n", "\n", "\n", "In the present context\n", "\n", "- $ y_t $ is called the *state* variable — it summarizes the “state of the world” at the start of each period. \n", "- $ c_t $ is called the *control* variable — a value chosen by the agent each period after observing the state. 
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The Policy Function Approach\n", "\n", "\n", "\n", "One way to think about solving this problem is to look for the best **policy function**.\n", "\n", "A policy function is a map from past and present observables into current action.\n", "\n", "We’ll be particularly interested in **Markov policies**, which are maps from the current state $ y_t $ into a current action $ c_t $.\n", "\n", "For dynamic programming problems such as this one (in fact for any [Markov decision process](https://en.wikipedia.org/wiki/Markov_decision_process)), the optimal policy is always a Markov policy.\n", "\n", "In other words, the current state $ y_t $ provides a sufficient statistic\n", "for the history in terms of making an optimal decision today.\n", "\n", "This is quite intuitive, but if you wish you can find proofs in texts such as [[SLP89]](https://python-programming.quantecon.org/zreferences.html#stokeylucas1989) (section 4.1).\n", "\n", "Hereafter we focus on finding the best Markov policy.\n", "\n", "In our context, a Markov policy is a function $ \\sigma \\colon\n", "\\mathbb R_+ \\to \\mathbb R_+ $, with the understanding that states are mapped to actions via\n", "\n", "$$\n", "c_t = \\sigma(y_t) \\quad \\text{for all } t\n", "$$\n", "\n", "In what follows, we will call $ \\sigma $ a *feasible consumption policy* if it satisfies\n", "\n", "\n", "\n", "$$\n", "0 \\leq \\sigma(y) \\leq y\n", "\\quad \\text{for all} \\quad\n", "y \\in \\mathbb R_+ \\tag{4}\n", "$$\n", "\n", "In other words, a feasible consumption policy is a Markov policy that respects the resource constraint.\n", "\n", "The set of all feasible consumption policies will be denoted by $ \\Sigma $.\n", "\n", "Each $ \\sigma \\in \\Sigma $ determines a [continuous state Markov process](https://python-programming.quantecon.org/stationary_densities.html) $ \\{y_t\\} $ for output via\n", "\n", "\n", "\n", "$$\n", "y_{t+1} = f(y_t - \\sigma(y_t)) \\xi_{t+1},\n", "\\quad y_0 \\text{ given} \\tag{5}\n", "$$\n", "\n", "This is the time path for output when we choose and stick with the policy $ \\sigma $.\n", "\n", "We insert this process into the objective function to get\n", "\n", "\n", "\n", "$$\n", "\\mathbb E\n", "\\left[ \\,\n", "\\sum_{t = 0}^{\\infty} \\beta^t u(c_t) \\,\n", "\\right] =\n", "\\mathbb E\n", "\\left[ \\,\n", "\\sum_{t = 0}^{\\infty} \\beta^t u(\\sigma(y_t)) \\,\n", "\\right] \\tag{6}\n", "$$\n", "\n", "This is the total expected present value of following policy $ \\sigma $ forever,\n", "given initial income $ y_0 $.\n", "\n", "The aim is to select a policy that makes this number as large as possible.\n", "\n", "The next section covers these ideas more formally." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Optimality\n", "\n", "The $ \\sigma $ associated with a given policy $ \\sigma $ is the mapping defined by\n", "\n", "\n", "\n", "$$\n", "v_{\\sigma}(y) =\n", "\\mathbb E \\left[ \\sum_{t = 0}^{\\infty} \\beta^t u(\\sigma(y_t)) \\right] \\tag{7}\n", "$$\n", "\n", "when $ \\{y_t\\} $ is given by [(5)](#equation-firstp0-og2) with $ y_0 = y $.\n", "\n", "In other words, it is the lifetime value of following policy $ \\sigma $\n", "starting at initial condition $ y $.\n", "\n", "The **value function** is then defined as\n", "\n", "\n", "\n", "$$\n", "v^*(y) := \\sup_{\\sigma \\in \\Sigma} \\; v_{\\sigma}(y) \\tag{8}\n", "$$\n", "\n", "The value function gives the maximal value that can be obtained from state $ y $, after considering all feasible policies.\n", "\n", "A policy $ \\sigma \\in \\Sigma $ is called **optimal** if it attains the supremum in [(8)](#equation-vfcsdp0) for all $ y \\in \\mathbb R_+ $." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The Bellman Equation\n", "\n", "With our assumptions on utility and production function, the value function as defined in [(8)](#equation-vfcsdp0) also satisfies a **Bellman equation**.\n", "\n", "For this problem, the Bellman equation takes the form\n", "\n", "\n", "\n", "$$\n", "v(y) = \\max_{0 \\leq c \\leq y}\n", " \\left\\{\n", " u(c) + \\beta \\int v(f(y - c) z) \\phi(dz)\n", " \\right\\}\n", "\\qquad (y \\in \\mathbb R_+) \\tag{9}\n", "$$\n", "\n", "This is a *functional equation in* $ v $.\n", "\n", "The term $ \\int v(f(y - c) z) \\phi(dz) $ can be understood as the expected next period value when\n", "\n", "- $ v $ is used to measure value \n", "- the state is $ y $ \n", "- consumption is set to $ c $ \n", "\n", "\n", "As shown in [EDTC](http://johnstachurski.net/edtc.html), theorem 10.1.11 and a range of other texts\n", "\n", "> *The value function* $ v^* $ *satisfies the Bellman equation*\n", "\n", "\n", "In other words, [(9)](#equation-fpb30) holds when $ v=v^* $.\n", "\n", "The intuition is that maximal value from a given state can be obtained by optimally trading off\n", "\n", "- current reward from a given action, vs \n", "- expected discounted future value of the state resulting from that action \n", "\n", "\n", "The Bellman equation is important because it gives us more information about the value function.\n", "\n", "It also suggests a way of computing the value function, which we discuss below." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Greedy Policies\n", "\n", "The primary importance of the value function is that we can use it to compute optimal policies.\n", "\n", "The details are as follows.\n", "\n", "Given a continuous function $ v $ on $ \\mathbb R_+ $, we say that\n", "$ \\sigma \\in \\Sigma $ is $ v $-**greedy** if $ \\sigma(y) $ is a solution to\n", "\n", "\n", "\n", "$$\n", "\\max_{0 \\leq c \\leq y}\n", " \\left\\{\n", " u(c) + \\beta \\int v(f(y - c) z) \\phi(dz)\n", " \\right\\} \\tag{10}\n", "$$\n", "\n", "for every $ y \\in \\mathbb R_+ $.\n", "\n", "In other words, $ \\sigma \\in \\Sigma $ is $ v $-greedy if it optimally\n", "trades off current and future rewards when $ v $ is taken to be the value\n", "function.\n", "\n", "In our setting, we have the following key result\n", "\n", "> - A feasible consumption policy is optimal if and only it is $ v^* $-greedy. 
\n", "\n", "\n", "\n", "The intuition is similar to the intuition for the Bellman equation, which was\n", "provided after [(9)](#equation-fpb30).\n", "\n", "See, for example, theorem 10.1.11 of [EDTC](http://johnstachurski.net/edtc.html).\n", "\n", "Hence, once we have a good approximation to $ v^* $, we can compute the\n", "(approximately) optimal policy by computing the corresponding greedy policy.\n", "\n", "The advantage is that we are now solving a much lower dimensional optimization\n", "problem." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The Bellman Operator\n", "\n", "How, then, should we compute the value function?\n", "\n", "One way is to use the so-called **Bellman operator**.\n", "\n", "(An operator is a map that sends functions into functions.)\n", "\n", "The Bellman operator is denoted by $ T $ and defined by\n", "\n", "\n", "\n", "$$\n", "Tv(y) := \\max_{0 \\leq c \\leq y}\n", "\\left\\{\n", " u(c) + \\beta \\int v(f(y - c) z) \\phi(dz)\n", "\\right\\}\n", "\\qquad (y \\in \\mathbb R_+) \\tag{11}\n", "$$\n", "\n", "In other words, $ T $ sends the function $ v $ into the new function\n", "$ Tv $ defined by [(11)](#equation-fcbell20-optgrowth).\n", "\n", "By construction, the set of solutions to the Bellman equation\n", "[(9)](#equation-fpb30) *exactly coincides with* the set of fixed points of $ T $.\n", "\n", "For example, if $ Tv = v $, then, for any $ y \\geq 0 $,\n", "\n", "$$\n", "v(y)\n", "= Tv(y)\n", "= \\max_{0 \\leq c \\leq y}\n", "\\left\\{\n", " u(c) + \\beta \\int v^*(f(y - c) z) \\phi(dz)\n", "\\right\\}\n", "$$\n", "\n", "which says precisely that $ v $ is a solution to the Bellman equation.\n", "\n", "It follows that $ v^* $ is a fixed point of $ T $." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Review of Theoretical Results\n", "\n", "\n", "\n", "One can also show that $ T $ is a contraction mapping on the set of\n", "continuous bounded functions on $ \\mathbb R_+ $ under the supremum distance\n", "\n", "$$\n", "\\rho(g, h) = \\sup_{y \\geq 0} |g(y) - h(y)|\n", "$$\n", "\n", "See [EDTC](http://johnstachurski.net/edtc.html), lemma 10.1.18.\n", "\n", "Hence, it has exactly one fixed point in this set, which we know is equal to the value function.\n", "\n", "It follows that\n", "\n", "- The value function $ v^* $ is bounded and continuous. \n", "- Starting from any bounded and continuous $ v $, the sequence $ v, Tv, T^2v, \\ldots $\n", " generated by iteratively applying $ T $ converges uniformly to $ v^* $. \n", "\n", "\n", "This iterative method is called **value function iteration**.\n", "\n", "We also know that a feasible policy is optimal if and only if it is $ v^* $-greedy.\n", "\n", "It’s not too hard to show that a $ v^* $-greedy policy exists\n", "(see [EDTC](http://johnstachurski.net/edtc.html), theorem 10.1.11 if you get stuck).\n", "\n", "Hence, at least one optimal policy exists.\n", "\n", "Our problem now is how to compute it." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Unbounded Utility\n", "\n", "\n", "\n", "The results stated above assume that the utility function is bounded.\n", "\n", "In practice economists often work with unbounded utility functions — and so will we.\n", "\n", "In the unbounded setting, various optimality theories exist.\n", "\n", "Unfortunately, they tend to be case-specific, as opposed to valid for a large range of applications.\n", "\n", "Nevertheless, their main conclusions are usually in line with those stated for\n", "the bounded case just above (as long as we drop the word “bounded”).\n", "\n", "Consult, for example, section 12.2 of [EDTC](http://johnstachurski.net/edtc.html), [[Kam12]](https://python-programming.quantecon.org/zreferences.html#kamihigashi2012) or [[MdRV10]](https://python-programming.quantecon.org/zreferences.html#mv2010)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Computation\n", "\n", "\n", "\n", "Let’s now look at computing the value function and the optimal policy.\n", "\n", "We will use fitted value function iteration, which was described in detail in\n", "a [previous lecture](https://python-intro.quantecon.org/mccall_fitted_vfi.html).\n", "\n", "The algorithm will be\n", "\n", "\n", "\n", "1. Begin with an array of values $ \\{ v_1, \\ldots, v_I \\} $ representing\n", " the values of some initial function $ v $ on the grid points $ \\{ y_1, \\ldots, y_I \\} $. \n", "1. Build a function $ \\hat v $ on the state space $ \\mathbb R_+ $ by\n", " linear interpolation, based on these data points. \n", "1. Obtain and record the value $ T \\hat v(y_i) $ on each grid point\n", " $ y_i $ by repeatedly solving [(11)](#equation-fcbell20-optgrowth). \n", "1. Unless some stopping condition is satisfied, set\n", " $ \\{ v_1, \\ldots, v_I \\} = \\{ T \\hat v(y_1), \\ldots, T \\hat v(y_I) \\} $ and go to step 2. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Scalar Maximization\n", "\n", "To maximize the right hand side of the Bellman equation, we are going to use\n", "the `minimize_scalar` routine from SciPy.\n", "\n", "Since we are maximizing rather than minmizing, we will use the fact that the\n", "maximizer of $ g $ on the interval $ [a, b] $ is the minimizer of\n", "$ -g $ on the same interval.\n", "\n", "To this end, and to keep the interface tidy, we will wrap `minimize_scalar`\n", "in an outer function as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "def maximize(g, a, b, args):\n", " \"\"\"\n", " Maximize the function g over the interval [a, b].\n", "\n", " We use the fact that the maximizer of g on any interval is\n", " also the minimizer of -g. 
The tuple args collects any extra\n", " arguments to g.\n", "\n", " Returns the maximal value and the maximizer.\n", " \"\"\"\n", "\n", " objective = lambda x: -g(x, *args)\n", " result = minimize_scalar(objective, bounds=(a, b), method='bounded')\n", " maximizer, maximum = result.x, -result.fun\n", " return maximizer, maximum" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Optimal Growth Model\n", "\n", "We will assume for now that $ \\phi $ is the distribution of $ \\exp(\\mu + s \\zeta) $ when $ \\zeta $ is standard normal.\n", "\n", "We will store this and other primitives of the optimal growth model in a class.\n", "\n", "The class, defined below, combines both parameters and a method that realizes the\n", "right hand side of the Bellman equation [(9)](#equation-fpb30)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "class OptimalGrowthModel:\n", "\n", " def __init__(self,\n", " u, # utility function\n", " f, # production function\n", " β=0.96, # discount factor\n", " μ=0, # shock location parameter\n", " s=0.1, # shock scale parameter\n", " grid_max=4,\n", " grid_size=120,\n", " shock_size=250,\n", " seed=1234):\n", "\n", " self.u, self.f, self.β, self.μ, self.s = u, f, β, μ, s\n", "\n", " # Set up grid\n", " self.grid = np.linspace(1e-4, grid_max, grid_size)\n", "\n", " # Store shocks (with a seed, so results are reproducible)\n", " np.random.seed(seed)\n", " self.shocks = np.exp(μ + s * np.random.randn(shock_size))\n", "\n", " def objective(self, c, y, v_array):\n", " \"\"\"\n", " Right hand side of the Bellman equation.\n", " \"\"\"\n", "\n", " u, f, β, shocks = self.u, self.f, self.β, self.shocks\n", "\n", " v = interp1d(self.grid, v_array)\n", "\n", " return u(c) + β * np.mean(v(f(y - c) * shocks))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the second last line we are using linear interpolation.\n", "\n", "In the last line, the expectation in [(11)](#equation-fcbell20-optgrowth) is\n", "computed via Monte Carlo, using the approximation\n", "\n", "$$\n", "\\int v(f(y - c) z) \\phi(dz) \\approx \\frac{1}{n} \\sum_{i=1}^n v(f(y - c) \\xi_i)\n", "$$\n", "\n", "where $ \\{\\xi_i\\}_{i=1}^n $ are IID draws from $ \\phi $.\n", "\n", "Monte Carlo is not always the most efficient way to compute integrals numerically\n", "but it does have some theoretical advantages in the present setting.\n", "\n", "(For example, it preserves the contraction mapping property of the Bellman operator — see, e.g., [[PalS13]](https://python-programming.quantecon.org/zreferences.html#pal2013))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The Bellman Operator\n", "\n", "The next function implements the Bellman operator.\n", "\n", "(We could have added it as a method to the `OptimalGrowthModel` class, but we\n", "prefer small classes rather than monolithic ones for this kind of\n", "numerical work.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "def T(og, v):\n", " \"\"\"\n", " The Bellman operator. 
Updates the guess of the value function\n", " and also computes a v-greedy policy.\n", "\n", " * og is an instance of OptimalGrowthModel\n", " * v is an array representing a guess of the value function\n", "\n", " \"\"\"\n", " v_new = np.empty_like(v)\n", " v_greedy = np.empty_like(v)\n", "\n", " for i in range(len(grid)):\n", " y = grid[i]\n", "\n", " # Maximize RHS of Bellman equation at state y\n", " c_star, v_max = maximize(og.objective, 1e-10, y, (y, v))\n", " v_new[i] = v_max\n", " v_greedy[i] = c_star\n", "\n", " return v_greedy, v_new" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### An Example\n", "\n", "Let’s suppose now that\n", "\n", "$$\n", "f(k) = k^{\\alpha}\n", "\\quad \\text{and} \\quad\n", "u(c) = \\ln c\n", "$$\n", "\n", "For this particular problem, an exact analytical solution is available (see [[LS18]](https://python-programming.quantecon.org/zreferences.html#ljungqvist2012), section 3.1.2), with\n", "\n", "\n", "\n", "$$\n", "v^*(y) =\n", "\\frac{\\ln (1 - \\alpha \\beta) }{ 1 - \\beta} +\n", "\\frac{(\\mu + \\alpha \\ln (\\alpha \\beta))}{1 - \\alpha}\n", " \\left[\n", " \\frac{1}{1- \\beta} - \\frac{1}{1 - \\alpha \\beta}\n", " \\right] +\n", " \\frac{1}{1 - \\alpha \\beta} \\ln y \\tag{12}\n", "$$\n", "\n", "and optimal consumption policy\n", "\n", "$$\n", "\\sigma^*(y) = (1 - \\alpha \\beta ) y\n", "$$\n", "\n", "It is valuable to have these closed-form solutions because it lets us check\n", "whether our code works for this particular case.\n", "\n", "In Python, the functions above can be expressed as" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "def v_star(y, α, β, μ):\n", " \"\"\"\n", " True value function\n", " \"\"\"\n", " c1 = np.log(1 - α * β) / (1 - β)\n", " c2 = (μ + α * np.log(α * β)) / (1 - α)\n", " c3 = 1 / (1 - β)\n", " c4 = 1 / (1 - α * β)\n", " return c1 + c2 * (c3 - c4) + c4 * np.log(y)\n", "\n", "def σ_star(y, α, β):\n", " \"\"\"\n", " True optimal policy\n", " \"\"\"\n", " return (1 - α * β) * y" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next let’s create an instance of the model with the above primitives and assign it to the variable `og`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "α = 0.4\n", "def fcd(k):\n", " return k**α\n", "\n", "og = OptimalGrowthModel(u=np.log, f=fcd)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let’s see what happens when we apply our Bellman operator to the exact\n", "solution $ v^* $ in this case.\n", "\n", "In theory, since $ v^* $ is a fixed point, the resulting function should again be $ v^* $.\n", "\n", "In practice, we expect some small numerical error" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "grid = og.grid\n", "\n", "v_init = v_star(grid, α, og.β, og.μ) # Start at the solution\n", "v_greedy, v = T(og, v_init) # Apply T once\n", "\n", "fig, ax = plt.subplots()\n", "ax.set_ylim(-35, -24)\n", "ax.plot(grid, v, lw=2, alpha=0.6, label='$Tv^*$')\n", "ax.plot(grid, v_init, lw=2, alpha=0.6, label='$v^*$')\n", "ax.legend()\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The two functions are essentially indistinguishable, so we are off to a good start.\n", "\n", "Now let’s have a look at iterating with the Bellman operator, starting off\n", "from an arbitrary initial condition.\n", "\n", "The initial condition we’ll start with is, somewhat arbitrarily, $ v(y) = 5 \\ln (y) $" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "v = 5 * np.log(grid) # An initial condition\n", "n = 35\n", "\n", "fig, ax = plt.subplots()\n", "\n", "ax.plot(grid, v, color=plt.cm.jet(0),\n", " lw=2, alpha=0.6, label='Initial condition')\n", "\n", "for i in range(n):\n", " v_greedy, v = T(og, v) # Apply the Bellman operator\n", " ax.plot(grid, v, color=plt.cm.jet(i / n), lw=2, alpha=0.6)\n", "\n", "ax.plot(grid, v_star(grid, α, og.β, og.μ), 'k-', lw=2,\n", " alpha=0.8, label='True value function')\n", "\n", "ax.legend()\n", "ax.set(ylim=(-40, 10), xlim=(np.min(grid), np.max(grid)))\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The figure shows\n", "\n", "1. the first 36 functions generated by the fitted value function iteration algorithm, with hotter colors given to higher iterates \n", "1. the true value function $ v^* $ drawn in black \n", "\n", "\n", "The sequence of iterates converges towards $ v^* $.\n", "\n", "We are clearly getting closer." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Iterating to Convergence\n", "\n", "We can write a function that iterates until the difference is below a particular\n", "tolerance level." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "def solve_model(og,\n", " tol=1e-4,\n", " max_iter=1000,\n", " verbose=True,\n", " print_skip=25):\n", "\n", " # Set up loop\n", " v = np.log(og.grid) # Initial condition\n", " i = 0\n", " error = tol + 1\n", "\n", " while i < max_iter and error > tol:\n", " v_greedy, v_new = T(og, v)\n", " error = np.max(np.abs(v - v_new))\n", " i += 1\n", " if verbose and i % print_skip == 0:\n", " print(f\"Error at iteration {i} is {error}.\")\n", " v = v_new\n", "\n", " if i == max_iter:\n", " print(\"Failed to converge!\")\n", "\n", " if verbose and i < max_iter:\n", " print(f\"\\nConverged in {i} iterations.\")\n", "\n", " return v_greedy, v_new" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let’s use this function to compute an approximate solution at the defaults." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "v_greedy, v_solution = solve_model(og)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we check our result by plotting it against the true value:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "\n", "ax.plot(grid, v_solution, lw=2, alpha=0.6,\n", " label='Approximate value function')\n", "\n", "ax.plot(grid, v_star(grid, α, og.β, og.μ), lw=2,\n", " alpha=0.6, label='True value function')\n", "\n", "ax.legend()\n", "ax.set_ylim(-35, -24)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The figure shows that we are pretty much on the money." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The Policy Function\n", "\n", "\n", "\n", "The policy `v_greedy` computed above corresponds to an approximate optimal policy.\n", "\n", "The next figure compares it to the exact solution, which, as mentioned\n", "above, is $ \\sigma(y) = (1 - \\alpha \\beta) y $" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=(9, 5))\n", "\n", "ax.plot(grid, v_greedy, lw=2,\n", " alpha=0.6, label='Approximate policy function')\n", "\n", "ax.plot(grid, σ_star(grid, α, og.β),\n", " lw=2, alpha=0.6, label='True policy function')\n", "\n", "ax.legend()\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The figure shows that we’ve done a good job in this instance of approximating\n", "the true policy." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercises" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 1\n", "\n", "A common choice for utility function in this kind of work is the CRRA\n", "specification\n", "\n", "$$\n", "u(c) = \\frac{c^{1 - γ} - 1} {1 - γ}\n", "$$\n", "\n", "Maintaining the other defaults, including the Cobb-Douglas production\n", "function, solve the optimal growth model with this\n", "utility specification.\n", "\n", "In doing so,\n", "\n", "- Set $ \\gamma = 1.5 $. \n", "- Use the `solve_model` function defined above. 
\n", "- Time how long this function takes to run, so we can compare it to faster\n", " code developed in the [next lecture](https://python-programming.quantecon.org/optgrowth_fast.html) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Solutions\n", "\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 1\n", "\n", "Here we set up the model." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "γ = 1.5 # Preference parameter\n", "\n", "def u_crra(c):\n", " return (c**(1 - γ) - 1) / (1 - γ)\n", "\n", "og = OptimalGrowthModel(u=u_crra, f=fcd)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let’s run it, with a timer." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "%%time\n", "v_greedy, v_solution = solve_model(og)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let’s plot the policy function just to see what it looks like:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "\n", "ax.plot(grid, v_greedy, lw=2,\n", " alpha=0.6, label='Approximate optimal policy')\n", "\n", "ax.legend()\n", "plt.show()" ] } ], "metadata": { "date": 1585449010.7585895, "filename": "optgrowth.rst", "kernelspec": { "display_name": "Python", "language": "python3", "name": "python3" }, "title": "Optimal Growth I: The Stochastic Optimal Growth Model" }, "nbformat": 4, "nbformat_minor": 2 }