{ "cells": [ { "cell_type": "markdown", "id": "094fcc30", "metadata": {}, "source": [ "# Cake Eating II: Numerical Methods" ] }, { "cell_type": "markdown", "id": "759c398f", "metadata": {}, "source": [ "## Contents\n", "\n", "- [Cake Eating II: Numerical Methods](#Cake-Eating-II:-Numerical-Methods) \n", " - [Overview](#Overview) \n", " - [Reviewing the Model](#Reviewing-the-Model) \n", " - [Value Function Iteration](#Value-Function-Iteration) \n", " - [Time Iteration](#Time-Iteration) \n", " - [Exercises](#Exercises) " ] }, { "cell_type": "markdown", "id": "c134172c", "metadata": {}, "source": [ "## Overview\n", "\n", "In this lecture we continue the study of [the cake eating problem](https://python.quantecon.org/cake_eating_problem.html).\n", "\n", "The aim of this lecture is to solve the problem using numerical\n", "methods.\n", "\n", "At first this might appear unnecessary, since we already obtained the optimal\n", "policy analytically.\n", "\n", "However, the cake eating problem is too simple to be useful without\n", "modifications, and once we start modifying the problem, numerical methods become essential.\n", "\n", "Hence it makes sense to introduce numerical methods now, and test them on this\n", "simple problem.\n", "\n", "Since we know the analytical solution, this will allow us to assess the\n", "accuracy of alternative numerical methods.\n", "\n", "We will use the following imports:" ] }, { "cell_type": "code", "execution_count": null, "id": "29e7c14a", "metadata": { "hide-output": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import numpy as np\n", "from scipy.optimize import minimize_scalar, bisect" ] }, { "cell_type": "markdown", "id": "e28715aa", "metadata": {}, "source": [ "## Reviewing the Model\n", "\n", "You might like to [review the details](https://python.quantecon.org/cake_eating_problem.html) before we start.\n", "\n", "Recall in particular that the Bellman equation is\n", "\n", "\n", "\n", "$$\n", "v(x) = \\max_{0\\leq c \\leq x} \\{u(c) + \\beta v(x-c)\\}\n", "\\quad \\text{for all } x \\geq 0. \\tag{37.1}\n", "$$\n", "\n", "where $ u $ is the CRRA utility function.\n", "\n", "The analytical solutions for the value function and optimal policy were found\n", "to be as follows." ] }, { "cell_type": "code", "execution_count": null, "id": "d2fbc10a", "metadata": { "hide-output": false }, "outputs": [], "source": [ "def c_star(x, β, γ):\n", "\n", " return (1 - β ** (1/γ)) * x\n", "\n", "\n", "def v_star(x, β, γ):\n", "\n", " return (1 - β**(1 / γ))**(-γ) * (x**(1-γ) / (1-γ))" ] }, { "cell_type": "markdown", "id": "56b59b4c", "metadata": {}, "source": [ "Our first aim is to obtain these analytical solutions numerically." ] }, { "cell_type": "markdown", "id": "a086a229", "metadata": {}, "source": [ "## Value Function Iteration\n", "\n", "The first approach we will take is **value function iteration**.\n", "\n", "This is a form of **successive approximation**, and was discussed in our [lecture on job search](https://python.quantecon.org/mccall_model.html).\n", "\n", "The basic idea is:\n", "\n", "1. Take an arbitary intial guess of $ v $. \n", "1. Obtain an update $ w $ defined by \n", " $$\n", " w(x) = \\max_{0\\leq c \\leq x} \\{u(c) + \\beta v(x-c)\\}\n", " $$\n", "1. Stop if $ w $ is approximately equal to $ v $, otherwise set\n", " $ v=w $ and go back to step 2. \n", "\n", "\n", "Let’s write this a bit more mathematically." 
] }, { "cell_type": "markdown", "id": "b911f266", "metadata": {}, "source": [ "### The Bellman Operator\n", "\n", "We introduce the **Bellman operator** $ T $ that takes a function $ v $ as an\n", "argument and returns a new function $ Tv $ defined by\n", "\n", "$$\n", "Tv(x) = \\max_{0 \\leq c \\leq x} \\{u(c) + \\beta v(x - c)\\}\n", "$$\n", "\n", "From $ v $ we get $ Tv $, and applying $ T $ to this yields\n", "$ T^2 v := T (Tv) $ and so on.\n", "\n", "This is called **iterating with the Bellman operator** from initial guess\n", "$ v $.\n", "\n", "As we discuss in more detail in later lectures, one can use Banach’s\n", "contraction mapping theorem to prove that the sequence of functions $ T^n\n", "v $ converges to the solution to the Bellman equation." ] },
{ "cell_type": "markdown", "id": "df39ceef", "metadata": {}, "source": [ "### Fitted Value Function Iteration\n", "\n", "Both consumption $ c $ and the state variable $ x $ are continuous.\n", "\n", "This causes complications when it comes to numerical work.\n", "\n", "For example, we need to store each function $ T^n v $ in order to\n", "compute the next iterate $ T^{n+1} v $.\n", "\n", "But this means we have to store $ T^n v(x) $ at infinitely many $ x $, which is, in general, impossible.\n", "\n", "To circumvent this issue we will use fitted value function iteration, as\n", "discussed previously in [one of the lectures](https://python.quantecon.org/mccall_fitted_vfi.html) on job\n", "search.\n", "\n", "The process looks like this:\n", "\n", "1. Begin with an array of values $ \\{ v_0, \\ldots, v_I \\} $ representing\n", "  the values of some initial function $ v $ on the grid points $ \\{ x_0, \\ldots, x_I \\} $. \n", "1. Build a function $ \\hat v $ on the state space $ \\mathbb R_+ $ by\n", "  linear interpolation, based on these data points. \n", "1. Obtain and record the value $ T \\hat v(x_i) $ on each grid point\n", "  $ x_i $ by repeatedly solving the maximization problem in the Bellman\n", "  equation. \n", "1. Unless some stopping condition is satisfied, set\n", "  $ \\{ v_0, \\ldots, v_I \\} = \\{ T \\hat v(x_0), \\ldots, T \\hat v(x_I) \\} $ and go to step 2. \n", "\n", "\n", "In step 2 we’ll use continuous piecewise linear interpolation." ] },
{ "cell_type": "markdown", "id": "c672c676", "metadata": {}, "source": [ "### Implementation\n", "\n", "The `maximize` function below is a small helper function that converts a\n", "SciPy minimization routine into a maximization routine." ] },
{ "cell_type": "code", "execution_count": null, "id": "ce77989b", "metadata": { "hide-output": false }, "outputs": [], "source": [ "def maximize(g, a, b, args):\n", "    \"\"\"\n", "    Maximize the function g over the interval [a, b].\n", "\n", "    We use the fact that the maximizer of g on any interval is\n", "    also the minimizer of -g. The tuple args collects any extra\n", "    arguments to g.\n", "\n", "    Returns the maximizer and the maximal value.\n", "    \"\"\"\n", "\n", "    objective = lambda x: -g(x, *args)\n", "    result = minimize_scalar(objective, bounds=(a, b), method='bounded')\n", "    maximizer, maximum = result.x, -result.fun\n", "    return maximizer, maximum"
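] },
{ "cell_type": "markdown", "id": "5f8c0d17", "metadata": {}, "source": [ "As a quick sanity check (our own addition, not part of the original lecture), let’s apply `maximize` to a simple concave function whose peak we know: $ g(x) = -(x-2)^2 $ attains its maximum of $ 0 $ at $ x = 2 $." ] },
{ "cell_type": "code", "execution_count": null, "id": "9d2b4e60", "metadata": { "hide-output": false }, "outputs": [], "source": [ "g = lambda x: -(x - 2)**2     # peak at x = 2, maximum value 0\n", "maximize(g, 0, 5, args=())    # expect (roughly) (2.0, 0.0)"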
] }, { "cell_type": "markdown", "id": "127f3401", "metadata": {}, "source": [ "We’ll store the parameters $ \\beta $ and $ \\gamma $ in a\n", "class called `CakeEating`.\n", "\n", "The same class will also provide a method called `state_action_value` that\n", "returns the value of a consumption choice given a particular state and guess\n", "of $ v $." ] },
{ "cell_type": "code", "execution_count": null, "id": "d23f2e16", "metadata": { "hide-output": false }, "outputs": [], "source": [ "class CakeEating:\n", "\n", "    def __init__(self,\n", "                 β=0.96,           # discount factor\n", "                 γ=1.5,            # degree of relative risk aversion\n", "                 x_grid_min=1e-3,  # exclude zero for numerical stability\n", "                 x_grid_max=2.5,   # size of cake\n", "                 x_grid_size=120):\n", "\n", "        self.β, self.γ = β, γ\n", "\n", "        # Set up grid\n", "        self.x_grid = np.linspace(x_grid_min, x_grid_max, x_grid_size)\n", "\n", "    # Utility function\n", "    def u(self, c):\n", "\n", "        γ = self.γ\n", "\n", "        if γ == 1:\n", "            return np.log(c)\n", "        else:\n", "            return (c ** (1 - γ)) / (1 - γ)\n", "\n", "    # first derivative of utility function\n", "    def u_prime(self, c):\n", "\n", "        return c ** (-self.γ)\n", "\n", "    def state_action_value(self, c, x, v_array):\n", "        \"\"\"\n", "        Right hand side of the Bellman equation given x and c.\n", "        \"\"\"\n", "\n", "        u, β = self.u, self.β\n", "        v = lambda x: np.interp(x, self.x_grid, v_array)\n", "\n", "        return u(c) + β * v(x - c)" ] },
{ "cell_type": "markdown", "id": "3ad421cd", "metadata": {}, "source": [ "We now define the Bellman operator:" ] },
{ "cell_type": "code", "execution_count": null, "id": "8b371b35", "metadata": { "hide-output": false }, "outputs": [], "source": [ "def T(v, ce):\n", "    \"\"\"\n", "    The Bellman operator. Updates the guess of the value function.\n", "\n", "    * ce is an instance of CakeEating\n", "    * v is an array representing a guess of the value function\n", "\n", "    \"\"\"\n", "    v_new = np.empty_like(v)\n", "\n", "    for i, x in enumerate(ce.x_grid):\n", "        # Maximize RHS of Bellman equation at state x\n", "        v_new[i] = maximize(ce.state_action_value, 1e-10, x, (x, v))[1]\n", "\n", "    return v_new" ] },
{ "cell_type": "markdown", "id": "7ce18cd8", "metadata": {}, "source": [ "After defining the Bellman operator, we are ready to solve the model.\n", "\n", "Let’s start by creating a `CakeEating` instance using the default parameterization." ] },
{ "cell_type": "code", "execution_count": null, "id": "501b9630", "metadata": { "hide-output": false }, "outputs": [], "source": [ "ce = CakeEating()" ] },
{ "cell_type": "markdown", "id": "acf1f01e", "metadata": {}, "source": [ "Now let’s see the iteration of the value function in action.\n", "\n", "We start from the guess $ v $ given by $ v(x) = u(x) $ at every grid point $ x $." ] },
{ "cell_type": "code", "execution_count": null, "id": "41966dfb", "metadata": { "hide-output": false }, "outputs": [], "source": [ "x_grid = ce.x_grid\n", "v = ce.u(x_grid)  # Initial guess\n", "n = 12            # Number of iterations\n", "\n", "fig, ax = plt.subplots()\n", "\n", "ax.plot(x_grid, v, color=plt.cm.jet(0),\n", "        lw=2, alpha=0.6, label='Initial guess')\n", "\n", "for i in range(n):\n", "    v = T(v, ce)  # Apply the Bellman operator\n", "    ax.plot(x_grid, v, color=plt.cm.jet(i / n), lw=2, alpha=0.6)\n", "\n", "ax.legend()\n", "ax.set_ylabel('value', fontsize=12)\n", "ax.set_xlabel('cake size $x$', fontsize=12)\n", "ax.set_title('Value function iterations')\n", "\n", "plt.show()"
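] },
{ "cell_type": "markdown", "id": "8cd417f2", "metadata": {}, "source": [ "The iterates are clearly shifting in a consistent direction, but after 12 applications of $ T $ they have not yet converged.\n", "\n", "Since we know the exact value function, we can measure how far the current iterate still is from it (this quick check is our own addition, using `v_star` from above):" ] },
{ "cell_type": "code", "execution_count": null, "id": "b5a6c2d9", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Sup-norm distance between the 12th iterate and the exact value function\n", "np.max(np.abs(v - v_star(x_grid, ce.β, ce.γ)))"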
] }, { "cell_type": "markdown", "id": "2373e090", "metadata": {}, "source": [ "To do this more systematically, we introduce a wrapper function called\n", "`compute_value_function` that iterates until some convergence conditions are\n", "satisfied." ] },
{ "cell_type": "code", "execution_count": null, "id": "ac4131db", "metadata": { "hide-output": false }, "outputs": [], "source": [ "def compute_value_function(ce,\n", "                           tol=1e-4,\n", "                           max_iter=1000,\n", "                           verbose=True,\n", "                           print_skip=25):\n", "\n", "    # Set up loop\n", "    v = np.zeros(len(ce.x_grid))  # Initial guess\n", "    i = 0\n", "    error = tol + 1\n", "\n", "    while i < max_iter and error > tol:\n", "        v_new = T(v, ce)\n", "\n", "        error = np.max(np.abs(v - v_new))\n", "        i += 1\n", "\n", "        if verbose and i % print_skip == 0:\n", "            print(f\"Error at iteration {i} is {error}.\")\n", "\n", "        v = v_new\n", "\n", "    if error > tol:\n", "        print(\"Failed to converge!\")\n", "    elif verbose:\n", "        print(f\"\\nConverged in {i} iterations.\")\n", "\n", "    return v_new" ] },
{ "cell_type": "markdown", "id": "45b2d2d5", "metadata": {}, "source": [ "Now let’s call it, noting that it takes a little while to run." ] },
{ "cell_type": "code", "execution_count": null, "id": "3a47d318", "metadata": { "hide-output": false }, "outputs": [], "source": [ "v = compute_value_function(ce)" ] },
{ "cell_type": "markdown", "id": "86b71dc1", "metadata": {}, "source": [ "Now we can plot and see what the converged value function looks like." ] },
{ "cell_type": "code", "execution_count": null, "id": "1678ab9f", "metadata": { "hide-output": false }, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "\n", "ax.plot(x_grid, v, label='Approximate value function')\n", "ax.set_ylabel('$V(x)$', fontsize=12)\n", "ax.set_xlabel('$x$', fontsize=12)\n", "ax.set_title('Value function')\n", "ax.legend()\n", "plt.show()" ] },
{ "cell_type": "markdown", "id": "c7349a40", "metadata": {}, "source": [ "Next let’s compare it to the analytical solution." ] },
{ "cell_type": "code", "execution_count": null, "id": "6ba5530b", "metadata": { "hide-output": false }, "outputs": [], "source": [ "v_analytical = v_star(ce.x_grid, ce.β, ce.γ)" ] },
{ "cell_type": "code", "execution_count": null, "id": "b109f95b", "metadata": { "hide-output": false }, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "\n", "ax.plot(x_grid, v_analytical, label='analytical solution')\n", "ax.plot(x_grid, v, label='numerical solution')\n", "ax.set_ylabel('$V(x)$', fontsize=12)\n", "ax.set_xlabel('$x$', fontsize=12)\n", "ax.legend()\n", "ax.set_title('Comparison between analytical and numerical value functions')\n", "plt.show()" ] },
{ "cell_type": "markdown", "id": "a9c80bbb", "metadata": {}, "source": [ "The quality of approximation is reasonably good for large $ x $, but\n", "less so near the lower boundary.\n", "\n", "The reason is that the utility function, and hence the value function, is very\n", "steep near the lower boundary, which makes it hard to approximate."
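] },
{ "cell_type": "markdown", "id": "e4d9a7b3", "metadata": {}, "source": [ "We can make this concrete by splitting the grid and comparing the worst-case error near the boundary with the worst-case error on the rest of the domain.\n", "\n", "(This diagnostic is our own addition; the split at the first 20 grid points is arbitrary.)" ] },
{ "cell_type": "code", "execution_count": null, "id": "f1c8e5a2", "metadata": { "hide-output": false }, "outputs": [], "source": [ "errors = np.abs(v - v_analytical)\n", "print(f\"max error on the first 20 grid points:  {errors[:20].max():.5f}\")\n", "print(f\"max error on the remaining grid points: {errors[20:].max():.5f}\")"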
] }, { "cell_type": "markdown", "id": "523d12b1", "metadata": {}, "source": [ "### Policy Function\n", "\n", "Let’s see how this plays out in terms of computing the optimal policy.\n", "\n", "In the [first lecture on cake eating](https://python.quantecon.org/cake_eating_problem.html), the optimal\n", "consumption policy was shown to be\n", "\n", "$$\n", "\\sigma^*(x) = \\left(1-\\beta^{1/\\gamma} \\right) x\n", "$$\n", "\n", "Let’s see if our numerical results lead to something similar.\n", "\n", "Our numerical strategy will be to compute\n", "\n", "$$\n", "\\sigma(x) = \\arg \\max_{0 \\leq c \\leq x} \\{u(c) + \\beta v(x - c)\\}\n", "$$\n", "\n", "on a grid of $ x $ points and then interpolate.\n", "\n", "For $ v $ we will use the approximation of the value function we obtained\n", "above.\n", "\n", "Here’s the function:" ] },
{ "cell_type": "code", "execution_count": null, "id": "5b7019ca", "metadata": { "hide-output": false }, "outputs": [], "source": [ "def σ(ce, v):\n", "    \"\"\"\n", "    The optimal policy function. Given the value function,\n", "    it finds optimal consumption in each state.\n", "\n", "    * ce is an instance of CakeEating\n", "    * v is a value function array\n", "\n", "    \"\"\"\n", "    c = np.empty_like(v)\n", "\n", "    for i in range(len(ce.x_grid)):\n", "        x = ce.x_grid[i]\n", "        # Maximize RHS of Bellman equation at state x\n", "        c[i] = maximize(ce.state_action_value, 1e-10, x, (x, v))[0]\n", "\n", "    return c" ] },
{ "cell_type": "markdown", "id": "3122291e", "metadata": {}, "source": [ "Now let’s pass the approximate value function and compute optimal consumption:" ] },
{ "cell_type": "code", "execution_count": null, "id": "b4ca634d", "metadata": { "hide-output": false }, "outputs": [], "source": [ "c = σ(ce, v)" ] },
{ "cell_type": "markdown", "id": "4c74b96b", "metadata": {}, "source": [ "Let’s plot this next to the true analytical solution:" ] },
{ "cell_type": "code", "execution_count": null, "id": "9938159f", "metadata": { "hide-output": false }, "outputs": [], "source": [ "c_analytical = c_star(ce.x_grid, ce.β, ce.γ)\n", "\n", "fig, ax = plt.subplots()\n", "\n", "ax.plot(ce.x_grid, c_analytical, label='analytical')\n", "ax.plot(ce.x_grid, c, label='numerical')\n", "ax.set_ylabel(r'$\\sigma(x)$')\n", "ax.set_xlabel('$x$')\n", "ax.legend()\n", "\n", "plt.show()" ] },
{ "cell_type": "markdown", "id": "a2b89296", "metadata": {}, "source": [ "The fit is reasonable but not perfect.\n", "\n", "We can improve it by increasing the grid size or reducing the\n", "error tolerance in the value function iteration routine.\n", "\n", "However, both changes will lead to a longer compute time.\n", "\n", "Another possibility is to use an alternative algorithm, which offers the\n", "possibility of faster compute time and, at the same time, more accuracy.\n", "\n", "We explore this next."
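] },
{ "cell_type": "markdown", "id": "2f7b9c41", "metadata": {}, "source": [ "Before doing so, the cell below illustrates the first option (re-solving on a finer grid) and compares the worst-case policy errors against the analytical policy.\n", "\n", "(This experiment is our own addition; the grid size of 360 is arbitrary, and the cell takes noticeably longer to run.)" ] },
{ "cell_type": "code", "execution_count": null, "id": "6a3d8f90", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Re-solve the model on a grid three times as fine\n", "ce_fine = CakeEating(x_grid_size=360)\n", "v_fine = compute_value_function(ce_fine, verbose=False)\n", "c_fine = σ(ce_fine, v_fine)\n", "c_fine_analytical = c_star(ce_fine.x_grid, ce_fine.β, ce_fine.γ)\n", "\n", "print(\"max policy error, default grid:\", np.max(np.abs(c - c_analytical)))\n", "print(\"max policy error, finer grid:  \", np.max(np.abs(c_fine - c_fine_analytical)))"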
] }, { "cell_type": "markdown", "id": "ab04e8a4", "metadata": {}, "source": [ "## Time Iteration\n", "\n", "Now let’s look at a different strategy to compute the optimal policy.\n", "\n", "Recall that the optimal policy satisfies the Euler equation\n", "\n", "$$\n", "u'(\\sigma(x)) = \\beta u'(\\sigma(x - \\sigma(x)))\n", "\\quad \\text{for all } x > 0 \\tag{37.2}\n", "$$\n", "\n", "Computationally, we can start with any initial guess of\n", "$ \\sigma_0 $ and then choose $ c $ to solve\n", "\n", "$$\n", "u'(c) = \\beta u'(\\sigma_0(x - c))\n", "$$\n", "\n", "Choosing $ c $ to satisfy this equation at all $ x > 0 $ produces a function of $ x $.\n", "\n", "Call this new function $ \\sigma_1 $, treat it as the new guess and\n", "repeat.\n", "\n", "This is called **time iteration**.\n", "\n", "As with value function iteration, we can view the update step as the action of an\n", "operator, this time denoted by $ K $.\n", "\n", "- In particular, $ K\\sigma $ is the policy updated from $ \\sigma $\n", "  using the procedure just described. \n", "- We will use this terminology in the exercises below. \n", "\n", "\n", "The main advantage of time iteration relative to value function iteration is that it operates in policy space rather than value function space.\n", "\n", "This is helpful because the policy function has less curvature, and hence is easier to approximate.\n", "\n", "In the exercises you are asked to implement time iteration and compare it to\n", "value function iteration.\n", "\n", "You should find that the method is faster and more accurate.\n", "\n", "This is due to\n", "\n", "1. the curvature issue mentioned just above and \n", "1. the fact that we are using more information — in this case, the first order conditions. " ] },
{ "cell_type": "markdown", "id": "44aed8e3", "metadata": {}, "source": [ "## Exercises" ] },
{ "cell_type": "markdown", "id": "dcd9c7be", "metadata": {}, "source": [ "## Exercise 37.1\n", "\n", "Try the following modification of the problem.\n", "\n", "Instead of the cake size changing according to $ x_{t+1} = x_t - c_t $,\n", "let it change according to\n", "\n", "$$\n", "x_{t+1} = (x_t - c_t)^{\\alpha}\n", "$$\n", "\n", "where $ \\alpha $ is a parameter satisfying $ 0 < \\alpha < 1 $.\n", "\n", "(We will see this kind of update rule when we study optimal growth models.)\n", "\n", "Make the required changes to the value function iteration code and plot the value and policy functions.\n", "\n", "Try to reuse as much code as possible." ] },
{ "cell_type": "markdown", "id": "8832bb2c", "metadata": {}, "source": [ "## Solution to [Exercise 37.1](https://python.quantecon.org/#cen_ex1)\n", "\n", "We need to create a class to hold our primitives and return the right hand side of the Bellman equation.\n", "\n", "We will use [inheritance](https://en.wikipedia.org/wiki/Inheritance_%28object-oriented_programming%29) to maximize code reuse."
] }, { "cell_type": "code", "execution_count": null, "id": "656bd3fb", "metadata": { "hide-output": false }, "outputs": [], "source": [ "class OptimalGrowth(CakeEating):\n", "    \"\"\"\n", "    A subclass of CakeEating that adds the parameter α and overrides\n", "    the state_action_value method.\n", "    \"\"\"\n", "\n", "    def __init__(self,\n", "                 β=0.96,           # discount factor\n", "                 γ=1.5,            # degree of relative risk aversion\n", "                 α=0.4,            # productivity parameter\n", "                 x_grid_min=1e-3,  # exclude zero for numerical stability\n", "                 x_grid_max=2.5,   # size of cake\n", "                 x_grid_size=120):\n", "\n", "        self.α = α\n", "        CakeEating.__init__(self, β, γ, x_grid_min, x_grid_max, x_grid_size)\n", "\n", "    def state_action_value(self, c, x, v_array):\n", "        \"\"\"\n", "        Right hand side of the Bellman equation given x and c.\n", "        \"\"\"\n", "\n", "        u, β, α = self.u, self.β, self.α\n", "        v = lambda x: np.interp(x, self.x_grid, v_array)\n", "\n", "        return u(c) + β * v((x - c)**α)" ] },
{ "cell_type": "code", "execution_count": null, "id": "31c32833", "metadata": { "hide-output": false }, "outputs": [], "source": [ "og = OptimalGrowth()" ] },
{ "cell_type": "markdown", "id": "b60f1fce", "metadata": {}, "source": [ "Here’s the computed value function." ] },
{ "cell_type": "code", "execution_count": null, "id": "deab2e75", "metadata": { "hide-output": false }, "outputs": [], "source": [ "v = compute_value_function(og, verbose=False)\n", "\n", "fig, ax = plt.subplots()\n", "\n", "ax.plot(x_grid, v, lw=2, alpha=0.6)\n", "ax.set_ylabel('value', fontsize=12)\n", "ax.set_xlabel('state $x$', fontsize=12)\n", "\n", "plt.show()" ] },
{ "cell_type": "markdown", "id": "555c37a9", "metadata": {}, "source": [ "Here’s the computed policy, combined with the solution we derived above for\n", "the standard cake eating case $ \\alpha=1 $." ] },
{ "cell_type": "code", "execution_count": null, "id": "000ac9f0", "metadata": { "hide-output": false }, "outputs": [], "source": [ "c_new = σ(og, v)\n", "\n", "fig, ax = plt.subplots()\n", "\n", "ax.plot(ce.x_grid, c_analytical, label=r'$\\alpha=1$ solution')\n", "ax.plot(ce.x_grid, c_new, label=fr'$\\alpha={og.α}$ solution')\n", "\n", "ax.set_ylabel('consumption', fontsize=12)\n", "ax.set_xlabel('$x$', fontsize=12)\n", "\n", "ax.legend(fontsize=12)\n", "\n", "plt.show()" ] },
{ "cell_type": "markdown", "id": "b8512d4e", "metadata": {}, "source": [ "Consumption is higher when $ \\alpha < 1 $ because, at least for large $ x $, the return to savings is lower." ] },
{ "cell_type": "markdown", "id": "1722e5e4", "metadata": {}, "source": [ "## Exercise 37.2\n", "\n", "Implement time iteration, returning to the original case (i.e., dropping the\n", "modification in the exercise above)." ] },
{ "cell_type": "markdown", "id": "a1027212", "metadata": {}, "source": [ "## Solution to [Exercise 37.2](https://python.quantecon.org/#cen_ex2)\n", "\n", "Here’s one way to implement time iteration." ] },
{ "cell_type": "code", "execution_count": null, "id": "78d4290a", "metadata": { "hide-output": false }, "outputs": [], "source": [ "def K(σ_array, ce):\n", "    \"\"\"\n", "    The policy function operator. 
Given the policy function,\n", "    it updates the optimal consumption using the Euler equation.\n", "\n", "    * σ_array is an array of policy function values on the grid\n", "    * ce is an instance of CakeEating\n", "\n", "    \"\"\"\n", "\n", "    u_prime, β, x_grid = ce.u_prime, ce.β, ce.x_grid\n", "    σ_new = np.empty_like(σ_array)\n", "\n", "    σ = lambda x: np.interp(x, x_grid, σ_array)\n", "\n", "    def euler_diff(c, x):\n", "        return u_prime(c) - β * u_prime(σ(x - c))\n", "\n", "    for i, x in enumerate(x_grid):\n", "\n", "        # handle small x separately --- helps numerical stability\n", "        if x < 1e-12:\n", "            σ_new[i] = 0.0\n", "\n", "        # handle other x: solve the Euler equation for c at this x\n", "        else:\n", "            σ_new[i] = bisect(euler_diff, 1e-10, x - 1e-10, args=(x,))\n", "\n", "    return σ_new" ] },
{ "cell_type": "code", "execution_count": null, "id": "a89676d5", "metadata": { "hide-output": false }, "outputs": [], "source": [ "def iterate_euler_equation(ce,\n", "                           max_iter=500,\n", "                           tol=1e-5,\n", "                           verbose=True,\n", "                           print_skip=25):\n", "\n", "    x_grid = ce.x_grid\n", "\n", "    σ = np.copy(x_grid)  # initial guess\n", "\n", "    i = 0\n", "    error = tol + 1\n", "    while i < max_iter and error > tol:\n", "\n", "        σ_new = K(σ, ce)\n", "\n", "        error = np.max(np.abs(σ_new - σ))\n", "        i += 1\n", "\n", "        if verbose and i % print_skip == 0:\n", "            print(f\"Error at iteration {i} is {error}.\")\n", "\n", "        σ = σ_new\n", "\n", "    if error > tol:\n", "        print(\"Failed to converge!\")\n", "    elif verbose:\n", "        print(f\"\\nConverged in {i} iterations.\")\n", "\n", "    return σ" ] },
{ "cell_type": "code", "execution_count": null, "id": "09f42004", "metadata": { "hide-output": false }, "outputs": [], "source": [ "ce = CakeEating(x_grid_min=0.0)\n", "c_euler = iterate_euler_equation(ce)" ] },
{ "cell_type": "code", "execution_count": null, "id": "7ca6f1c6", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Recompute the analytical policy on the new grid, which starts at zero\n", "c_analytical = c_star(ce.x_grid, ce.β, ce.γ)\n", "\n", "fig, ax = plt.subplots()\n", "\n", "ax.plot(ce.x_grid, c_analytical, label='analytical solution')\n", "ax.plot(ce.x_grid, c_euler, label='time iteration solution')\n", "\n", "ax.set_ylabel('consumption')\n", "ax.set_xlabel('$x$')\n", "ax.legend(fontsize=12)\n", "\n", "plt.show()" ] } ], "metadata": { "date": 1714442504.2128556, "filename": "cake_eating_numerical.md", "kernelspec": { "display_name": "Python", "language": "python", "name": "python3" }, "title": "Cake Eating II: Numerical Methods" }, "nbformat": 4, "nbformat_minor": 5 }