{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "[QuantEcon](https://quantecon.org)
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Optimal Growth II: Accelerating the Code with Numba" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Contents\n", "\n", "- [Optimal Growth II: Accelerating the Code with Numba](#Optimal-Growth-II:-Accelerating-the-Code-with-Numba) \n", " - [Overview](#Overview) \n", " - [The Model](#The-Model) \n", " - [Computation](#Computation) \n", " - [Exercises](#Exercises) \n", " - [Solutions](#Solutions) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In addition to what’s in Anaconda, this lecture will need the following libraries:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": true }, "outputs": [], "source": [ "!pip install --upgrade quantecon\n", "!pip install --upgrade interpolation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overview\n", "\n", "In a [previous lecture](https://python-programming.quantecon.org/optgrowth.html), we studied a stochastic optimal\n", "growth model with one representative agent.\n", "\n", "We solved the model using dynamic programming.\n", "\n", "In writing our code, we focused on clarity and flexibility.\n", "\n", "These are important, but there’s often a trade-off between flexibility and\n", "speed.\n", "\n", "The reason is that, when code is less flexible, we can exploit structure more\n", "easily.\n", "\n", "(This is true about algorithms and mathematical problems more generally:\n", "more specific problems have more structure, which, with some thought, can be\n", "exploited for better results.)\n", "\n", "So, in this lecture, we are going to accept less flexibility while gaining\n", "speed, using just-in-time (JIT) compilation to\n", "accelerate our code.\n", "\n", "Let’s start with some imports:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "from 
interpolation import interp\n", "from numba import jit, njit, jitclass, prange, float64, int32\n", "from quantecon.optimize.scalar_maximization import brent_max\n", "\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We are using an interpolation function from\n", "[interpolation.py](https://github.com/EconForge/interpolation.py) because it\n", "helps us JIT-compile our code.\n", "\n", "The function brent_max is also designed for embedding in JIT-compiled code.\n", "\n", "These are alternatives to similar functions in SciPy (which, unfortunately, are not JIT-aware)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Model\n", "\n", "\n", "\n", "The model is the same as discussed in [this lecture](https://python-programming.quantecon.org/optgrowth.html).\n", "\n", "We will use the same algorithm to solve it—the only difference is in the\n", "implementation itself.\n", "\n", "We will use the CRRA utility specification\n", "\n", "$$\n", "u(c) = \\frac{c^{1 - γ} - 1} {1 - γ}\n", "$$\n", "\n", "We continue to assume that\n", "\n", "- $ f(k) = k^{\\alpha} $ \n", "- $ \\phi $ is the distribution of $ \\exp(\\mu + s \\zeta) $ when $ \\zeta $ is standard normal " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Computation\n", "\n", "\n", "\n", "As before, we will store the primitives of the optimal growth model in a class.\n", "\n", "But now we are going to use [Numba’s](https://python-programming.quantecon.org/numba.html) `@jitclass` decorator to\n", "target our class for JIT compilation.\n", "\n", "Because we are going to use Numba to compile our class, we need to specify the\n", "types of the data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "opt_growth_data = [\n", " ('α', float64), # Production parameter\n", " ('β', float64), # Discount factor\n", " ('μ', float64), # Shock location parameter\n", " ('γ', float64), # Preference 
parameter\n", " ('s', float64), # Shock scale parameter\n", " ('grid', float64[:]), # Grid (array)\n", " ('shocks', float64[:]) # Shock draws (array)\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note the convention for specifying the types of each argument.\n", "\n", "Now we’re ready to create our class, which will combine parameters and a\n", "method that realizes the right hand side of the Bellman equation [(9)](https://python-programming.quantecon.org/optgrowth.html#equation-fpb30).\n", "\n", "You will notice that, unlike in the [previous lecture](https://python-programming.quantecon.org/optgrowth.html), we\n", "hardwire the Cobb-Douglas production and CRRA utility specifications into the\n", "class.\n", "\n", "Thus, we are losing flexibility, but we will gain substantial speed." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "@jitclass(opt_growth_data)\n", "class OptimalGrowthModel:\n", "\n", " def __init__(self,\n", " α=0.4,\n", " β=0.96,\n", " μ=0,\n", " s=0.1,\n", " γ=1.5,\n", " grid_max=4,\n", " grid_size=120,\n", " shock_size=250,\n", " seed=1234):\n", "\n", " self.α, self.β, self.γ, self.μ, self.s = α, β, γ, μ, s\n", "\n", " # Set up grid\n", " self.grid = np.linspace(1e-5, grid_max, grid_size)\n", "\n", " # Store shocks (with a seed, so results are reproducible)\n", " np.random.seed(seed)\n", " self.shocks = np.exp(μ + s * np.random.randn(shock_size))\n", "\n", " def f(self, k):\n", " return k**self.α\n", "\n", " def u(self, c):\n", " return (c**(1 - self.γ) - 1) / (1 - self.γ)\n", "\n", " def objective(self, c, y, v_array):\n", " \"\"\"\n", " Right hand side of the Bellman equation.\n", " \"\"\"\n", "\n", " u, f, β, shocks = self.u, self.f, self.β, self.shocks\n", "\n", " v = lambda x: interp(self.grid, v_array, x)\n", "\n", " return u(c) + β * np.mean(v(f(y - c) * shocks))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The Bellman 
Operator\n", "\n", "Here’s a function that uses JIT compilation to accelerate the Bellman operator" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "@jit(nopython=True)\n", "def T(og, v):\n", " \"\"\"\n", " The Bellman operator.\n", "\n", " * og is an instance of OptimalGrowthModel\n", " * v is an array representing a guess of the value function\n", " \"\"\"\n", " v_new = np.empty_like(v)\n", "\n", " for i in range(len(og.grid)):\n", " y = og.grid[i]\n", "\n", " # Maximize RHS of Bellman equation at state y\n", " v_max = brent_max(og.objective, 1e-10, y, args=(y, v))[1]\n", " v_new[i] = v_max\n", "\n", " return v_new" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here’s another function, very similar to the last, that computes a $ v $-greedy\n", "policy:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "@jit(nopython=True)\n", "def get_greedy(og, v):\n", " \"\"\"\n", " Compute a v-greedy policy.\n", "\n", " * og is an instance of OptimalGrowthModel\n", " * v is an array representing a guess of the value function\n", " \"\"\"\n", " v_greedy = np.empty_like(v)\n", "\n", " for i in range(len(og.grid)):\n", " y = og.grid[i]\n", "\n", " # Find maximizer of RHS of Bellman equation at state y\n", " c_star = brent_max(og.objective, 1e-10, y, args=(y, v))[0]\n", " v_greedy[i] = c_star\n", "\n", " return v_greedy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The last two functions could be merged, as they were in our [previous implementation](https://python-programming.quantecon.org/optgrowth.html), but we resisted doing so to increase efficiency.\n", "\n", "Here’s a function that iterates from a starting guess of the value function until the difference between successive iterates is below a particular tolerance level." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "def solve_model(og,\n", " tol=1e-4,\n", " max_iter=1000,\n", " verbose=True,\n", " print_skip=25):\n", "\n", " # Set up loop\n", " v = np.log(og.grid) # Initial condition\n", " i = 0\n", " error = tol + 1\n", "\n", " while i < max_iter and error > tol:\n", " v_new = T(og, v)\n", " error = np.max(np.abs(v - v_new))\n", " i += 1\n", " if verbose and i % print_skip == 0:\n", " print(f\"Error at iteration {i} is {error}.\")\n", " v = v_new\n", "\n", " if i == max_iter:\n", " print(\"Failed to converge!\")\n", "\n", " if verbose and i < max_iter:\n", " print(f\"\\nConverged in {i} iterations.\")\n", "\n", " return v_new" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let’s compute the approximate solution at the default parameters.\n", "\n", "First we create an instance:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "og = OptimalGrowthModel()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we call `solve_model`, using the `%%time` magic to check how long it\n", "takes." 
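] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the first call to a JIT-compiled function includes compilation time.\n", "\n", "As a rough sketch (using the instance `og` created above), we can apply the Bellman operator once beforehand, so that the timing below reflects execution speed rather than compilation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Trigger JIT compilation of T before timing solve_model\n", "_ = T(og, np.log(og.grid))"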
] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "%%time\n", "v_solution = solve_model(og)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You will notice that this is *much* faster than our [original implementation](https://python-programming.quantecon.org/optgrowth.html#ogex1).\n", "\n", "Let’s plot the resulting policy:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "v_greedy = get_greedy(og, v_solution)\n", "\n", "fig, ax = plt.subplots()\n", "\n", "ax.plot(og.grid, v_greedy, lw=2,\n", " alpha=0.6, label='Approximate policy function')\n", "\n", "ax.legend(loc='lower right')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Everything seems in order, so our code acceleration has been successful!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercises" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 1\n", "\n", "Once an optimal consumption policy $ \\sigma $ is given, income follows\n", "\n", "$$\n", "y_{t+1} = f(y_t - \\sigma(y_t)) \\xi_{t+1}\n", "$$\n", "\n", "The next figure shows a simulation of 100 elements of this sequence for three\n", "different discount factors (and hence three different policies).\n", "\n", "\n", "\n", " \n", "In each sequence, the initial condition is $ y_0 = 0.1 $.\n", "\n", "The discount factors are `discount_factors = (0.8, 0.9, 0.98)`.\n", "\n", "We have also dialed down the shocks a bit with `s = 0.05`.\n", "\n", "Otherwise, the parameters and primitives are the same as in the model discussed above.\n", "\n", "Notice that more patient agents typically have higher wealth.\n", "\n", "Replicate the figure modulo randomness."
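] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a hint, the law of motion above translates almost directly into code.\n", "\n", "One possible sketch of a single update step, assuming a consumption policy function `σ_func` and a model instance `og` are available, is\n", "\n", "```python\n", "# One step of y_{t+1} = f(y_t - σ(y_t)) ξ_{t+1}, with ξ lognormal\n", "y_next = (y - σ_func(y))**og.α * np.exp(og.μ + og.s * np.random.randn())\n", "```"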
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Solutions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 1\n", "\n", "Here’s one solution" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "def simulate_og(σ_func, og, y0=0.1, ts_length=100):\n", " '''\n", " Compute a time series given consumption policy σ.\n", " '''\n", " y = np.empty(ts_length)\n", " ξ = np.random.randn(ts_length-1)\n", " y[0] = y0\n", " for t in range(ts_length-1):\n", " y[t+1] = (y[t] - σ_func(y[t]))**og.α * np.exp(og.μ + og.s * ξ[t])\n", " return y" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "\n", "for β in (0.8, 0.9, 0.98):\n", "\n", " og = OptimalGrowthModel(β=β, s=0.05)\n", "\n", " v_solution = solve_model(og)\n", " v_greedy = get_greedy(og, v_solution)\n", "\n", " # Define an optimal policy function\n", " σ_func = lambda x: interp(og.grid, v_greedy, x)\n", " y = simulate_og(σ_func, og)\n", " ax.plot(y, lw=2, alpha=0.6, label=rf'$\\beta = {β}$')\n", "\n", "ax.legend(loc='lower right')\n", "plt.show()" ] } ], "metadata": { "date": 1585449010.9212792, "filename": "optgrowth_fast.rst", "kernelspec": { "display_name": "Python", "language": "python3", "name": "python3" }, "title": "Optimal Growth II: Accelerating the Code with Numba" }, "nbformat": 4, "nbformat_minor": 2 }