{ "cells": [ { "cell_type": "markdown", "id": "15b22612", "metadata": {}, "source": [ "$$\n", "\\newcommand{\\argmax}{arg\\,max}\n", "\\newcommand{\\argmin}{arg\\,min}\n", "$$" ] }, { "cell_type": "markdown", "id": "af8f5b81", "metadata": {}, "source": [ "\n", "\n", "
\n", " \n", " \"QuantEcon\"\n", " \n", "
" ] }, { "cell_type": "markdown", "id": "12a5bcf5", "metadata": {}, "source": [ "# Job Search I: The McCall Search Model" ] }, { "cell_type": "markdown", "id": "abdab611", "metadata": {}, "source": [ "# GPU\n", "\n", "This lecture was built using a machine with access to a GPU — although it will also run without one.\n", "\n", "[Google Colab](https://colab.research.google.com/) has a free tier with GPUs\n", "that you can access as follows:\n", "\n", "1. Click on the “play” icon top right \n", "1. Select Colab \n", "1. Set the runtime environment to include a GPU " ] }, { "cell_type": "markdown", "id": "df73459f", "metadata": {}, "source": [ "## Contents\n", "\n", "- [Job Search I: The McCall Search Model](#Job-Search-I:-The-McCall-Search-Model) \n", " - [Overview](#Overview) \n", " - [The McCall Model](#The-McCall-Model) \n", " - [Computing the Optimal Policy: Take 1](#Computing-the-Optimal-Policy:-Take-1) \n", " - [Computing an Optimal Policy: Take 2](#Computing-an-Optimal-Policy:-Take-2) \n", " - [Continuous Offer Distribution](#Continuous-Offer-Distribution) \n", " - [Volatility](#Volatility) \n", " - [Exercises](#Exercises) " ] }, { "cell_type": "markdown", "id": "0165f9de", "metadata": {}, "source": [ "> “Questioning a McCall worker is like having a conversation with an out-of-work friend:\n", "> ‘Maybe you are setting your sights too high’, or ‘Why did you quit your old job before you\n", "> had a new one lined up?’ This is real social science: an attempt to model, to understand,\n", "> human behavior by visualizing the situation people find themselves in, the options they face\n", "> and the pros and cons as they themselves see them.” – Robert E. Lucas, Jr.\n", "\n", "\n", "In addition to what’s in Anaconda, this lecture will need the following libraries:" ] }, { "cell_type": "code", "execution_count": null, "id": "060b6554", "metadata": { "hide-output": false }, "outputs": [], "source": [ "!pip install quantecon jax" ] }, { "cell_type": "markdown", "id": "c4b31836", "metadata": {}, "source": [ "## Overview\n", "\n", "The McCall search model [[McCall, 1970](https://python.quantecon.org/zreferences.html#id208)] helped transform economists’ way of thinking about labor markets.\n", "\n", "To clarify notions such as “involuntary” unemployment, McCall modeled the decision problem of an unemployed worker in terms of factors including\n", "\n", "- current and likely future wages \n", "- impatience \n", "- unemployment compensation \n", "\n", "\n", "To solve the decision problem McCall used dynamic programming.\n", "\n", "Here we set up McCall’s model and use dynamic programming to analyze it.\n", "\n", "As we’ll see, McCall’s model is not only interesting in its own right but also an excellent vehicle for learning dynamic programming.\n", "\n", "Let’s start with some imports:" ] }, { "cell_type": "code", "execution_count": null, "id": "496c8de0", "metadata": { "hide-output": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import numba\n", "import jax\n", "import jax.numpy as jnp\n", "from typing import NamedTuple\n", "from functools import partial\n", "import quantecon as qe\n", "from quantecon.distributions import BetaBinomial" ] }, { "cell_type": "markdown", "id": "1fb01d74", "metadata": {}, "source": [ "## The McCall Model\n", "\n", "\n", "\n", "An unemployed agent receives in each period a job offer at wage $ W_t $.\n", "\n", "In this lecture, we adopt the following simple environment:\n", "\n", "- The offer sequence $ \\{W_t\\}_{t 
\geq 0} $ is IID, with $ q(w) $ being the probability of observing wage $ w $ in finite set $ \mathbb{W} $. \n", "- The agent observes $ W_t $ at the start of $ t $. \n", "- The agent knows that $ \{W_t\} $ is IID with common distribution $ q $ and can use this when computing expectations. \n", "\n", "\n", "(In later lectures, we will relax these assumptions.)\n", "\n", "At time $ t $, our agent has two choices:\n", "\n", "1. Accept the offer and work permanently at constant wage $ W_t $. \n", "1. Reject the offer, receive unemployment compensation $ c $, and reconsider next period. \n", "\n", "\n", "The agent is infinitely lived and aims to maximize the expected discounted\n", "sum of earnings\n", "\n", "\n", "\n", "$$\n", "{\mathbb E} \sum_{t=0}^\infty \beta^t y_t \tag{42.1}\n", "$$\n", "\n", "The constant $ \beta $ lies in $ (0, 1) $ and is called a **discount factor**.\n", "\n", "The smaller is $ \beta $, the more the agent discounts future earnings relative to current earnings.\n", "\n", "The variable $ y_t $ is income, equal to\n", "\n", "- his/her wage $ W_t $ when employed \n", "- unemployment compensation $ c $ when unemployed " ] }, { "cell_type": "markdown", "id": "34b4e926", "metadata": {}, "source": [ "### A Trade-Off\n", "\n", "The worker faces a trade-off:\n", "\n", "- Waiting too long for a good offer is costly, since the future is discounted. \n", "- Accepting too early is costly, since better offers might arrive in the future. \n", "\n", "\n", "To decide the optimal wait time in the face of this trade-off, we use [dynamic programming](https://dp.quantecon.org/).\n", "\n", "Dynamic programming can be thought of as a two-step procedure that\n", "\n", "1. first assigns values to “states” and \n", "1. then deduces optimal actions given those values \n", "\n", "\n", "We’ll go through these steps in turn." ] }, { "cell_type": "markdown", "id": "ab22c78c", "metadata": {}, "source": [ "### The Value Function\n", "\n", "In order to optimally trade off current and future rewards, we need to think about two things:\n", "\n", "1. the current payoffs we get from different choices \n", "1. the different states that those choices will lead to in the next period \n", "\n", "\n", "To weigh these two aspects of the decision problem, we need to assign *values*\n", "to states.\n", "\n", "To this end, let $ v^*(w) $ be the total lifetime value accruing to an\n", "unemployed worker who enters the current period unemployed when the wage is\n", "$ w \in \mathbb{W} $.\n", "\n", "(In particular, the agent has wage offer $ w $ in hand and can accept or reject it.)\n", "\n", "More precisely, $ v^*(w) $ denotes the total sum of expected discounted earnings\n", "when an agent always behaves in an optimal way at all
points in time.\n", "\n", "Of course $ v^*(w) $ is not trivial to calculate because we don’t yet know\n", "what decisions are optimal and what aren’t!\n", "\n", "If we don’t know what the optimal choices are, it feels impossible to calculate\n", "$ v^*(w) $.\n", "\n", "But let’s put this aside for now and think of $ v^* $ as a function that assigns\n", "to each possible wage $ w $ the maximal lifetime value $ v^*(w) $ that can be\n", "obtained with that offer in hand.\n", "\n", "A crucial observation is that this function $ v^* $ must satisfy\n", "\n", "\n", "\n", "$$\n", "v^*(w)\n", "= \max \left\{\n", " \frac{w}{1 - \beta}, \, c + \beta\n", " \sum_{w' \in \mathbb{W}} v^*(w') q (w')\n", " \right\} \tag{42.2}\n", "$$\n", "\n", "for every possible $ w $ in $ \mathbb{W} $.\n", "\n", "This is a version of the **Bellman equation**, which is\n", "ubiquitous in economic dynamics and other fields involving planning over time.\n", "\n", "The intuition behind it is as follows:\n", "\n", "- the first term inside the max operation is the lifetime payoff from accepting the current offer, since\n", " such a worker works forever at $ w $ and values this income stream as \n", "\n", "\n", "$$\n", "\frac{w}{1 - \beta} = w + \beta w + \beta^2 w + \cdots\n", "$$\n", "\n", "- the second term inside the max operation is the continuation value, which is\n", " the lifetime payoff from rejecting the current offer and then behaving\n", " optimally in all subsequent periods \n", "\n", "\n", "If we optimize and pick the best of these two options, we obtain maximal\n", "lifetime value from today, given current offer $ w $.\n", "\n", "But this is precisely $ v^*(w) $, which is the left-hand side of [(42.2)](#equation-odu-pv).\n", "\n", "Putting this all together, we see that [(42.2)](#equation-odu-pv) is valid for all $ w $."
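] }, { "cell_type": "markdown", "id": "7d3a9c41", "metadata": {}, "source": [ "As a quick sanity check of the geometric series identity above, the next cell compares a truncated version of the sum $ w + \beta w + \beta^2 w + \cdots $ with the closed form $ w / (1 - \beta) $. (This is purely illustrative, and the wage and discount factor used here are hypothetical values rather than the lecture’s defaults.)" ] }, { "cell_type": "code", "execution_count": null, "id": "9b1c5e27", "metadata": { "hide-output": false }, "outputs": [], "source": [ "β_check, w_check = 0.99, 30.0   # hypothetical values, for illustration only\n", "T_trunc = 5_000                 # truncation horizon for the infinite sum\n", "\n", "# Sum w + β w + β^2 w + ... up to the truncation horizon\n", "approx = sum(β_check**t * w_check for t in range(T_trunc))\n", "\n", "# The truncated sum should nearly agree with the closed form\n", "print(approx, w_check / (1 - β_check))"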
] }, { "cell_type": "markdown", "id": "5cd9e12f", "metadata": {}, "source": [ "### The Optimal Policy\n", "\n", "We still don’t know how to compute $ v^* $ (although [(42.2)](#equation-odu-pv) gives us hints\n", "we’ll return to below).\n", "\n", "But suppose for now that we do know $ v^* $.\n", "\n", "Once we have this function in hand we can easily make optimal choices (i.e., make the\n", "right choice between accept and reject given any $ w $).\n", "\n", "All we have to do is select the maximal choice on the right-hand side of [(42.2)](#equation-odu-pv).\n", "\n", "In other words, we make the best choice between stopping and continuing, given\n", "the information provided to us by $ v^* $.\n", "\n", "The optimal action is best thought of as a **policy**, which is, in general, a map from\n", "states to actions.\n", "\n", "Given any $ w $, we can read off the corresponding best choice (accept or\n", "reject) by picking the max on the right-hand side of [(42.2)](#equation-odu-pv).\n", "\n", "Thus, we have a map from $ \\mathbb W $ to $ \\{0, 1\\} $, with 1 meaning accept and 0 meaning reject.\n", "\n", "We can write the policy as follows\n", "\n", "$$\n", "\\sigma(w) := \\mathbf{1}\n", " \\left\\{\n", " \\frac{w}{1 - \\beta} \\geq c + \\beta \\sum_{w' \\in \\mathbb W}\n", " v^*(w') q (w')\n", " \\right\\}\n", "$$\n", "\n", "Here $ \\mathbf{1}\\{ P \\} = 1 $ if statement $ P $ is true and equals 0 otherwise.\n", "\n", "We can also write this as\n", "\n", "$$\n", "\\sigma(w) := \\mathbf{1} \\{ w \\geq \\bar w \\}\n", "$$\n", "\n", "where\n", "\n", "\n", "\n", "$$\n", "\\bar w := (1 - \\beta) \\left\\{ c + \\beta \\sum_{w'} v^*(w') q (w') \\right\\} \\tag{42.3}\n", "$$\n", "\n", "Here $ \\bar w $ (called the **reservation wage**) is a constant depending on\n", "$ \\beta, c $ and the wage distribution.\n", "\n", "The agent should accept if and only if the current wage offer exceeds the reservation wage.\n", "\n", "In view of [(42.3)](#equation-reswage), we can compute this reservation wage if we can compute the value function." 
] }, { "cell_type": "markdown", "id": "23e93d1e", "metadata": {}, "source": [ "## Computing the Optimal Policy: Take 1\n", "\n", "To put the above ideas into action, we need to compute the value function at each $ w \\in \\mathbb W $.\n", "\n", "To simplify notation, let’s set\n", "\n", "$$\n", "\\mathbb W := \\{w_1, \\ldots, w_n \\}\n", " \\quad \\text{and} \\quad\n", " v^*(i) := v^*(w_i)\n", "$$\n", "\n", "The value function is then represented by the vector $ v^* = (v^*(i))_{i=1}^n $.\n", "\n", "In view of [(42.2)](#equation-odu-pv), this vector satisfies the nonlinear system of equations\n", "\n", "\n", "\n", "$$\n", "v^*(i)\n", "= \\max \\left\\{\n", " \\frac{w(i)}{1 - \\beta}, \\, c + \\beta \\sum_{j=1}^n \n", " v^*(j) q (j)\n", " \\right\\}\n", "\\quad\n", "\\text{for } i = 1, \\ldots, n \\tag{42.4}\n", "$$" ] }, { "cell_type": "markdown", "id": "6874d363", "metadata": {}, "source": [ "### The Algorithm\n", "\n", "To compute this vector, we use successive approximations:\n", "\n", "Step 1: pick an arbitrary initial guess $ v \\in \\mathbb R^n $.\n", "\n", "Step 2: compute a new vector $ v' \\in \\mathbb R^n $ via\n", "\n", "\n", "\n", "$$\n", "v'(i)\n", "= \\max \\left\\{\n", " \\frac{w(i)}{1 - \\beta}, \\, c + \\beta \\sum_{j=1}^n\n", " v(j) q (j)\n", " \\right\\}\n", "\\quad\n", "\\text{for } i = 1, \\ldots, n \\tag{42.5}\n", "$$\n", "\n", "Step 3: calculate a measure of a discrepancy between $ v $ and $ v' $, such as $ \\max_i |v(i)- v'(i)| $.\n", "\n", "Step 4: if the deviation is larger than some fixed tolerance, set $ v = v' $ and go to step 2, else continue.\n", "\n", "Step 5: return $ v $.\n", "\n", "For a small tolerance, the returned function $ v $ is a close approximation to the value function $ v^* $.\n", "\n", "The theory below elaborates on this point." ] }, { "cell_type": "markdown", "id": "bc978456", "metadata": {}, "source": [ "### Fixed Point Theory\n", "\n", "What’s the mathematics behind these ideas?\n", "\n", "First, one defines a mapping $ T $ from $ \\mathbb R^n $ to itself via\n", "\n", "\n", "\n", "$$\n", "(Tv)(i)\n", "= \\max \\left\\{\n", " \\frac{w(i)}{1 - \\beta}, \\, c + \\beta \\sum_{j=1}^n\n", " v(j) q (j)\n", " \\right\\}\n", "\\quad\n", "\\text{for } i = 1, \\ldots, n \\tag{42.6}\n", "$$\n", "\n", "(A new vector $ Tv $ is obtained from given vector $ v $ by evaluating\n", "the r.h.s. at each $ i $.)\n", "\n", "The element $ v_k $ in the sequence $ \\{v_k\\} $ of successive approximations corresponds to $ T^k v $.\n", "\n", "- This is $ T $ applied $ k $ times, starting at the initial guess $ v $ \n", "\n", "\n", "One can show that the conditions of the [Banach fixed point theorem](https://en.wikipedia.org/wiki/Banach_fixed-point_theorem) are\n", "satisfied by $ T $ on $ \\mathbb R^n $.\n", "\n", "One implication is that $ T $ has a unique fixed point in $ \\mathbb R^n $.\n", "\n", "- That is, a unique vector $ \\bar v $ such that $ T \\bar v = \\bar v $. \n", "\n", "\n", "Moreover, it’s immediate from the definition of $ T $ that this fixed point is $ v^* $.\n", "\n", "A second implication of the Banach contraction mapping theorem is that\n", "$ \\{ T^k v \\} $ converges to the fixed point $ v^* $ regardless of $ v $." ] }, { "cell_type": "markdown", "id": "ad3039e1", "metadata": {}, "source": [ "### Implementation\n", "\n", "Our default for $ q $, the wage offer distribution, will be [Beta-binomial](https://en.wikipedia.org/wiki/Beta-binomial_distribution)." 
] }, { "cell_type": "code", "execution_count": null, "id": "e4943139", "metadata": { "hide-output": false }, "outputs": [], "source": [ "n, a, b = 50, 200, 100 # default parameters\n", "q_default = jnp.array(BetaBinomial(n, a, b).pdf())" ] }, { "cell_type": "markdown", "id": "1154cf11", "metadata": {}, "source": [ "Our default set of values for wages will be" ] }, { "cell_type": "code", "execution_count": null, "id": "f00d8867", "metadata": { "hide-output": false }, "outputs": [], "source": [ "w_min, w_max = 10, 60\n", "w_default = jnp.linspace(w_min, w_max, n+1)" ] }, { "cell_type": "markdown", "id": "1eda3f12", "metadata": {}, "source": [ "Here’s a plot of the probabilities of different wage outcomes:" ] }, { "cell_type": "code", "execution_count": null, "id": "f1fe8914", "metadata": { "hide-output": false }, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "ax.plot(w_default, q_default, '-o', label='$q(w(i))$')\n", "ax.set_xlabel('wages')\n", "ax.set_ylabel('probabilities')\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "6f1d85bd", "metadata": {}, "source": [ "We will use [JAX](https://python-programming.quantecon.org/jax_intro.html) to write our code.\n", "\n", "We’ll use `NamedTuple` for our model class to maintain immutability, which works well with JAX’s functional programming paradigm.\n", "\n", "Here’s a class that stores the model parameters with default values." ] }, { "cell_type": "code", "execution_count": null, "id": "f56a6f83", "metadata": { "hide-output": false }, "outputs": [], "source": [ "class McCallModel(NamedTuple):\n", " c: float = 25 # unemployment compensation\n", " β: float = 0.99 # discount factor \n", " w: jnp.ndarray = w_default # array of wage values, w[i] = wage at state i\n", " q: jnp.ndarray = q_default # array of probabilities" ] }, { "cell_type": "markdown", "id": "2b677d0d", "metadata": {}, "source": [ "We implement the Bellman operator $ T $ from [(42.6)](#equation-odu-pv3), which we can write in\n", "terms of array operations as\n", "\n", "\n", "\n", "$$\n", "Tv\n", "= \\max \\left\\{\n", " \\frac{w}{1 - \\beta}, \\, c + \\beta \\sum_{j=1}^n v(j) q (j)\n", " \\right\\}\n", "\\quad \\tag{42.7}\n", "$$\n", "\n", "(The first term inside the max is an array and the second is just a number – here\n", "we mean that the max comparison against this number is done element-by-element for all elements in the array.)\n", "\n", "We can code $ T $ up as follows." ] }, { "cell_type": "code", "execution_count": null, "id": "b7599f5f", "metadata": { "hide-output": false }, "outputs": [], "source": [ "def T(model: McCallModel, v: jnp.ndarray):\n", " c, β, w, q = model\n", " accept = w / (1 - β)\n", " reject = c + β * v @ q\n", " return jnp.maximum(accept, reject)" ] }, { "cell_type": "markdown", "id": "67c5bf61", "metadata": {}, "source": [ "Based on these defaults, let’s try plotting the first few approximate value functions\n", "in the sequence $ \\{ T^k v \\} $.\n", "\n", "We will start from guess $ v $ given by $ v(i) = w(i) / (1 - β) $, which is the value of accepting at every given wage." 
] }, { "cell_type": "code", "execution_count": null, "id": "50c8b116", "metadata": { "hide-output": false }, "outputs": [], "source": [ "model = McCallModel()\n", "c, β, w, q = model\n", "v = w / (1 - β) # Initial condition\n", "fig, ax = plt.subplots()\n", "\n", "num_plots = 6\n", "for i in range(num_plots):\n", " ax.plot(w, v, '-', alpha=0.6, lw=2, label=f\"iterate {i}\")\n", " v = T(model, v)\n", "\n", "ax.legend(loc='lower right')\n", "ax.set_xlabel('wage')\n", "ax.set_ylabel('value')\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "6a26283e", "metadata": {}, "source": [ "You can see that convergence is occurring: successive iterates are getting closer together.\n", "\n", "Here’s a more serious iteration effort to compute the limit, which continues\n", "until measured deviation between successive iterates is below `tol`.\n", "\n", "Once we obtain a good approximation to the limit, we will use it to calculate\n", "the reservation wage." ] }, { "cell_type": "code", "execution_count": null, "id": "451e5ff2", "metadata": { "hide-output": false }, "outputs": [], "source": [ "def compute_reservation_wage(\n", " model: McCallModel, # instance containing default parameters\n", " v_init: jnp.ndarray, # initial condition for iteration\n", " tol: float=1e-6, # error tolerance\n", " max_iter: int=500, # maximum number of iterations for loop\n", " ):\n", " \"Computes the reservation wage in the McCall job search model.\"\n", " c, β, w, q = model\n", " i = 0\n", " error = tol + 1 \n", " v = v_init\n", " \n", " while i < max_iter and error > tol:\n", " v_next = T(model, v)\n", " error = jnp.max(jnp.abs(v_next - v))\n", " v = v_next\n", " i += 1\n", " \n", " w_bar = (1 - β) * (c + β * v @ q)\n", " return v, w_bar" ] }, { "cell_type": "markdown", "id": "bf07bcb8", "metadata": {}, "source": [ "The cell computes the reservation wage at the default parameters" ] }, { "cell_type": "code", "execution_count": null, "id": "5c62f771", "metadata": { "hide-output": false }, "outputs": [], "source": [ "model = McCallModel()\n", "c, β, w, q = model\n", "v_init = w / (1 - β) # initial guess\n", "v, w_bar = compute_reservation_wage(model, v_init)\n", "print(w_bar)" ] }, { "cell_type": "markdown", "id": "49d57564", "metadata": {}, "source": [ "### Comparative Statics\n", "\n", "Now that we know how to compute the reservation wage, let’s see how it varies with\n", "parameters.\n", "\n", "Here we compare the reservation wage at two values of $ \\beta $.\n", "\n", "The reservation wages will be plotted alongside the wage offer distribution, so\n", "that we can get a sense of what fraction of offers will be accepted." 
] }, { "cell_type": "code", "execution_count": null, "id": "6d980537", "metadata": { "hide-output": false }, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "\n", "# Get the default color cycle\n", "prop_cycle = plt.rcParams['axes.prop_cycle']\n", "colors = prop_cycle.by_key()['color']\n", "\n", "# Plot the wage offer distribution\n", "ax.plot(w, q, '-', alpha=0.6, lw=2,\n", " label='wage offer distribution',\n", " color=colors[0])\n", "\n", "# Compute reservation wage with default beta\n", "model_default = McCallModel()\n", "c, β, w, q = model_default\n", "v_init = w / (1 - β)\n", "v_default, res_wage_default = compute_reservation_wage(\n", " model_default, v_init\n", ")\n", "\n", "# Compute reservation wage with lower beta\n", "β_new = 0.96\n", "model_low_beta = McCallModel(β=β_new)\n", "c, β_low, w, q = model_low_beta\n", "v_init_low = w / (1 - β_low)\n", "v_low, res_wage_low = compute_reservation_wage(\n", " model_low_beta, v_init_low\n", ")\n", "\n", "# Plot vertical lines for reservation wages\n", "ax.axvline(x=res_wage_default, color=colors[1], lw=2,\n", " label=f'reservation wage (β={β})')\n", "ax.axvline(x=res_wage_low, color=colors[2], lw=2,\n", " label=f'reservation wage (β={β_new})')\n", "\n", "ax.set_xlabel('wage', fontsize=12)\n", "ax.set_ylabel('probability', fontsize=12)\n", "ax.tick_params(axis='both', which='major', labelsize=11)\n", "ax.legend(loc='upper left', frameon=False, fontsize=11)\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "424012d2", "metadata": {}, "source": [ "We see that the reservation wage is higher when $ \\beta $ is higher.\n", "\n", "This is not surprising, since higher $ \\beta $ is associated with more patience.\n", "\n", "Now let’s look more systematically at what happens when we change $ \\beta $ and $ c $.\n", "\n", "As a first step, given that we’ll use it many times, let’s create a more\n", "efficient, jit-complied version of the function that computes the reservation\n", "wage:" ] }, { "cell_type": "code", "execution_count": null, "id": "56c6127f", "metadata": { "hide-output": false }, "outputs": [], "source": [ "@jax.jit\n", "def compute_res_wage_jitted(\n", " model: McCallModel, # instance containing default parameters\n", " v_init: jnp.ndarray, # initial condition for iteration\n", " tol: float=1e-6, # error tolerance\n", " max_iter: int=500, # maximum number of iterations for loop\n", " ):\n", " c, β, w, q = model\n", " i = 0\n", " error = tol + 1 \n", " initial_state = v_init, i, error\n", " \n", " def cond(loop_state):\n", " v, i, error = loop_state\n", " return jnp.logical_and(i < max_iter, error > tol)\n", "\n", " def update(loop_state):\n", " v, i, error = loop_state\n", " v_next = T(model, v)\n", " error = jnp.max(jnp.abs(v_next - v))\n", " i += 1\n", " new_loop_state = v_next, i, error\n", " return new_loop_state\n", " \n", " final_state = jax.lax.while_loop(cond, update, initial_state)\n", " v, i, error = final_state\n", "\n", " w_bar = (1 - β) * (c + β * v @ q)\n", " return v, w_bar" ] }, { "cell_type": "markdown", "id": "2182d9bb", "metadata": {}, "source": [ "Now we compute the reservation wage at each $ c, \\beta $ pair." 
] }, { "cell_type": "code", "execution_count": null, "id": "3dd20c00", "metadata": { "hide-output": false }, "outputs": [], "source": [ "grid_size = 25\n", "c_vals = jnp.linspace(10.0, 30.0, grid_size)\n", "β_vals = jnp.linspace(0.9, 0.99, grid_size)\n", "\n", "res_wage_matrix = np.empty((grid_size, grid_size))\n", "model = McCallModel()\n", "v_init = model.w / (1 - model.β)\n", "\n", "for i, c in enumerate(c_vals):\n", " for j, β in enumerate(β_vals):\n", " model = McCallModel(c=c, β=β)\n", " v, w_bar = compute_res_wage_jitted(model, v_init)\n", " v_init = v\n", " res_wage_matrix[i, j] = w_bar\n", "\n", "fig, ax = plt.subplots()\n", "cs1 = ax.contourf(c_vals, β_vals, res_wage_matrix.T, alpha=0.75)\n", "ctr1 = ax.contour(c_vals, β_vals, res_wage_matrix.T)\n", "plt.clabel(ctr1, inline=1, fontsize=13)\n", "plt.colorbar(cs1, ax=ax)\n", "ax.set_title(\"reservation wage\")\n", "ax.set_xlabel(\"$c$\", fontsize=16)\n", "ax.set_ylabel(\"$β$\", fontsize=16)\n", "ax.ticklabel_format(useOffset=False)\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "17d8c2f3", "metadata": {}, "source": [ "As expected, the reservation wage increases with both patience and unemployment compensation.\n", "\n", "\n", "" ] }, { "cell_type": "markdown", "id": "6d2e46c1", "metadata": {}, "source": [ "## Computing an Optimal Policy: Take 2\n", "\n", "The approach to dynamic programming just described is standard and broadly applicable.\n", "\n", "But for our McCall search model there’s also an easier way that circumvents the\n", "need to compute the value function.\n", "\n", "Let $ h $ denote the continuation value:\n", "\n", "\n", "\n", "$$\n", "h = c + \\beta \\sum_{w'} v^*(w') q (w') \\tag{42.8}\n", "$$\n", "\n", "The Bellman equation can now be written as\n", "\n", "\n", "\n", "$$\n", "v^*(w')\n", " = \\max \\left\\{ \\frac{w'}{1 - \\beta}, \\, h \\right\\} \\tag{42.9}\n", "$$\n", "\n", "Now let’s derive a nonlinear equation for $ h $ alone.\n", "\n", "Starting from [(42.9)](#equation-j1b), we multiply both sides by $ q(w') $ to get\n", "\n", "$$\n", "v^*(w') q(w') = \\max \\left\\{ \\frac{w'}{1 - \\beta}, h \\right\\} q(w')\n", "$$\n", "\n", "Next, we sum both sides over $ w' \\in \\mathbb{W} $:\n", "\n", "$$\n", "\\sum_{w' \\in \\mathbb W} v^*(w') q(w')\n", " = \\sum_{w' \\in \\mathbb W} \\max \\left\\{ \\frac{w'}{1 - \\beta}, h \\right\\} q(w')\n", "$$\n", "\n", "Now multiply both sides by $ \\beta $:\n", "\n", "$$\n", "\\beta \\sum_{w' \\in \\mathbb W} v^*(w') q(w')\n", " = \\beta \\sum_{w' \\in \\mathbb W} \\max \\left\\{ \\frac{w'}{1 - \\beta}, h \\right\\} q(w')\n", "$$\n", "\n", "Add $ c $ to both sides:\n", "\n", "$$\n", "c + \\beta \\sum_{w' \\in \\mathbb W} v^*(w') q(w')\n", " = c + \\beta \\sum_{w' \\in \\mathbb W} \\max \\left\\{ \\frac{w'}{1 - \\beta}, h \\right\\} q(w')\n", "$$\n", "\n", "Finally, using the definition of $ h $ from [(42.8)](#equation-j1), the left-hand side is just $ h $, giving us\n", "\n", "\n", "\n", "$$\n", "h = c + \\beta\n", " \\sum_{w' \\in \\mathbb W}\n", " \\max \\left\\{\n", " \\frac{w'}{1 - \\beta}, h\n", " \\right\\} q (w') \\tag{42.10}\n", "$$\n", "\n", "This is a nonlinear equation in the single scalar $ h $ that we can solve for $ h $.\n", "\n", "As before, we will use successive approximations:\n", "\n", "Step 1: pick an initial guess $ h $.\n", "\n", "Step 2: compute the update $ h' $ via\n", "\n", "\n", "\n", "$$\n", "h'\n", "= c + \\beta\n", " \\sum_{w' \\in \\mathbb W}\n", " \\max \\left\\{\n", " \\frac{w'}{1 - \\beta}, h\n", " \\right\\} q (w')\n", "\\quad 
\\tag{42.11}\n", "$$\n", "\n", "Step 3: calculate the deviation $ |h - h'| $.\n", "\n", "Step 4: if the deviation is larger than some fixed tolerance, set $ h = h' $ and go to step 2, else return $ h $.\n", "\n", "One can again use the Banach contraction mapping theorem to show that this process always converges.\n", "\n", "The big difference here, however, is that we’re iterating on a scalar $ h $, rather than an $ n $-vector, $ v(i), i = 1, \\ldots, n $.\n", "\n", "Here’s an implementation:" ] }, { "cell_type": "code", "execution_count": null, "id": "bdc6b491", "metadata": { "hide-output": false }, "outputs": [], "source": [ "def compute_reservation_wage_two(\n", " model: McCallModel, # instance containing default parameters\n", " tol: float=1e-5, # error tolerance\n", " max_iter: int=500, # maximum number of iterations for loop\n", " ):\n", " c, β, w, q = model\n", " h = (w @ q) / (1 - β) # initial condition\n", " i = 0\n", " error = tol + 1\n", " initial_loop_state = i, h, error\n", "\n", " def cond(loop_state):\n", " i, h, error = loop_state\n", " return jnp.logical_and(i < max_iter, error > tol)\n", "\n", " def update(loop_state):\n", " i, h, error = loop_state\n", " s = jnp.maximum(w / (1 - β), h)\n", " h_next = c + β * (s @ q)\n", " error = jnp.abs(h_next - h)\n", " i_next = i + 1\n", " new_loop_state = i_next, h_next, error\n", " return new_loop_state\n", "\n", " final_state = jax.lax.while_loop(cond, update, initial_loop_state)\n", " i, h, error = final_state\n", "\n", " # Compute and return the reservation wage\n", " return (1 - β) * h" ] }, { "cell_type": "markdown", "id": "2ef4239e", "metadata": {}, "source": [ "You can use this code to solve the exercise below." ] }, { "cell_type": "markdown", "id": "419c28d8", "metadata": {}, "source": [ "## Continuous Offer Distribution\n", "\n", "The discrete wage offer distribution used above is convenient for theory and\n", "computation, but many realistic distributions are continuous (i.e., have a density).\n", "\n", "Fortunately, the theory changes little in our simple model when we shift to a\n", "continuous offer distribution.\n", "\n", "Recall that $ h $ in [(42.8)](#equation-j1) denotes the value of not accepting a job in this period but\n", "then behaving optimally in all subsequent periods.\n", "\n", "To shift to a continuous offer distribution, we can replace [(42.8)](#equation-j1) by\n", "\n", "\n", "\n", "$$\n", "h\n", "= c + \\beta\n", " \\int v^*(s') q (s') ds'.\n", "\\quad \\tag{42.12}\n", "$$\n", "\n", "Equation [(42.10)](#equation-j2) becomes\n", "\n", "\n", "\n", "$$\n", "h\n", "= c + \\beta\n", " \\int\n", " \\max \\left\\{\n", " \\frac{w(s')}{1 - \\beta}, h\n", " \\right\\} q (s') d s'\n", "\\quad \\tag{42.13}\n", "$$\n", "\n", "The aim is to solve this nonlinear equation by iteration, and from it obtain\n", "the reservation wage." ] }, { "cell_type": "markdown", "id": "472d1f89", "metadata": {}, "source": [ "### Implementation with Lognormal Wages\n", "\n", "Let’s implement this for the case where\n", "\n", "- the state sequence $ \\{ s_t \\} $ is IID and standard normal and \n", "- the wage function is $ w(s) = \\exp(\\mu + \\sigma s) $. \n", "\n", "\n", "This gives us a lognormal wage distribution.\n", "\n", "We use Monte Carlo integration to evaluate the integral, averaging over a large number of wage draws.\n", "\n", "For default parameters, we use `c=25, β=0.99, σ=0.5, μ=2.5`." 
] }, { "cell_type": "code", "execution_count": null, "id": "8a0c4891", "metadata": { "hide-output": false }, "outputs": [], "source": [ "class McCallModelContinuous(NamedTuple):\n", " c: float # unemployment compensation\n", " β: float # discount factor\n", " σ: float # scale parameter in lognormal distribution\n", " μ: float # location parameter in lognormal distribution\n", " w_draws: jnp.ndarray # draws of wages for Monte Carlo\n", "\n", "\n", "def create_mccall_continuous(\n", " c=25, β=0.99, σ=0.5, μ=2.5, mc_size=1000, seed=1234\n", " ):\n", " key = jax.random.PRNGKey(seed)\n", " s = jax.random.normal(key, (mc_size,))\n", " w_draws = jnp.exp(μ + σ * s)\n", " return McCallModelContinuous(c, β, σ, μ, w_draws)\n", "\n", "\n", "@jax.jit\n", "def compute_reservation_wage_continuous(model, max_iter=500, tol=1e-5):\n", " c, β, σ, μ, w_draws = model\n", "\n", " h = jnp.mean(w_draws) / (1 - β) # initial guess\n", "\n", " def update(state):\n", " h, i, error = state\n", " integral = jnp.mean(jnp.maximum(w_draws / (1 - β), h))\n", " h_next = c + β * integral\n", " error = jnp.abs(h_next - h)\n", " return h_next, i + 1, error\n", "\n", " def cond(state):\n", " h, i, error = state\n", " return jnp.logical_and(i < max_iter, error > tol)\n", "\n", " initial_state = (h, 0, tol + 1)\n", " final_state = jax.lax.while_loop(cond, update, initial_state)\n", " h_final, _, _ = final_state\n", "\n", " # Now compute the reservation wage\n", " return (1 - β) * h_final" ] }, { "cell_type": "markdown", "id": "7f501c52", "metadata": {}, "source": [ "Now let’s investigate how the reservation wage changes with $ c $ and\n", "$ \\beta $ using a contour plot." ] }, { "cell_type": "code", "execution_count": null, "id": "863700c1", "metadata": { "hide-output": false }, "outputs": [], "source": [ "grid_size = 25\n", "c_vals = jnp.linspace(10.0, 30.0, grid_size)\n", "β_vals = jnp.linspace(0.9, 0.99, grid_size)\n", "\n", "def compute_R_element(c, β):\n", " model = create_mccall_continuous(c=c, β=β)\n", " return compute_reservation_wage_continuous(model)\n", "\n", "# First, vectorize over β (holding c fixed)\n", "compute_R_over_β = jax.vmap(compute_R_element, in_axes=(None, 0))\n", "\n", "# Next, vectorize over c (applying the above function to each c)\n", "compute_R_vectorized = jax.vmap(compute_R_over_β, in_axes=(0, None))\n", "\n", "# Apply to compute the full grid\n", "R = compute_R_vectorized(c_vals, β_vals)" ] }, { "cell_type": "code", "execution_count": null, "id": "47273265", "metadata": { "hide-output": false }, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "\n", "cs1 = ax.contourf(c_vals, β_vals, R.T, alpha=0.75)\n", "ctr1 = ax.contour(c_vals, β_vals, R.T)\n", "\n", "plt.clabel(ctr1, inline=1, fontsize=13)\n", "plt.colorbar(cs1, ax=ax)\n", "\n", "\n", "ax.set_title(\"reservation wage\")\n", "ax.set_xlabel(\"$c$\", fontsize=16)\n", "ax.set_ylabel(\"$β$\", fontsize=16)\n", "\n", "ax.ticklabel_format(useOffset=False)\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "d5f057fa", "metadata": {}, "source": [ "As with the discrete case, the reservation wage increases with both patience and unemployment compensation." 
] }, { "cell_type": "markdown", "id": "42dbc055", "metadata": {}, "source": [ "## Volatility\n", "\n", "An interesting feature of the McCall model is that increased volatility in wage offers\n", "tends to increase the reservation wage.\n", "\n", "The intuition is that volatility is attractive to the worker because they can enjoy\n", "the upside (high wage offers) while rejecting the downside (low wage offers).\n", "\n", "Hence, with more volatility, workers are more willing to continue searching rather than\n", "accept a given offer, which means the reservation wage rises.\n", "\n", "To illustrate this phenomenon, we use a mean-preserving spread of the wage distribution.\n", "\n", "In particular, we vary the scale parameter $ \\sigma $ in the lognormal wage distribution\n", "$ w(s) = \\exp(\\mu + \\sigma s) $ while adjusting $ \\mu $ to keep the mean constant.\n", "\n", "Recall that for a lognormal distribution with parameters $ \\mu $ and $ \\sigma $, the mean is\n", "$ \\exp(\\mu + \\sigma^2/2) $.\n", "\n", "To keep the mean constant at some value $ m $, we need:\n", "\n", "$$\n", "\\mu = \\ln(m) - \\frac{\\sigma^2}{2}\n", "$$\n", "\n", "Let’s implement this and compute the reservation wage for different values of $ \\sigma $:" ] }, { "cell_type": "code", "execution_count": null, "id": "8197a09b", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Fix the mean wage\n", "mean_wage = 20.0\n", "\n", "# Create a range of volatility values\n", "σ_vals = jnp.linspace(0.1, 1.0, 25)\n", "\n", "# Given σ, compute μ to maintain constant mean\n", "def compute_μ_for_mean(σ, mean_wage):\n", " return jnp.log(mean_wage) - (σ**2) / 2\n", "\n", "# Compute reservation wage for each volatility level\n", "res_wages_volatility = []\n", "\n", "for σ in σ_vals:\n", " μ = compute_μ_for_mean(σ, mean_wage)\n", " model = create_mccall_continuous(σ=float(σ), μ=float(μ))\n", " w_bar = compute_reservation_wage_continuous(model)\n", " res_wages_volatility.append(w_bar)\n", "\n", "res_wages_volatility = jnp.array(res_wages_volatility)" ] }, { "cell_type": "markdown", "id": "46bab5e3", "metadata": {}, "source": [ "Now let’s plot the reservation wage as a function of volatility:" ] }, { "cell_type": "code", "execution_count": null, "id": "03a3be7c", "metadata": { "hide-output": false }, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "ax.plot(σ_vals, res_wages_volatility, linewidth=2)\n", "ax.set_xlabel(r'volatility ($\\sigma$)', fontsize=12)\n", "ax.set_ylabel('reservation wage', fontsize=12)\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "cc610f8a", "metadata": {}, "source": [ "As expected, the reservation wage is increasing in $ \\sigma $." ] }, { "cell_type": "markdown", "id": "96e2d1b1", "metadata": {}, "source": [ "### Lifetime Value and Volatility\n", "\n", "We’ve seen that the reservation wage increases with volatility.\n", "\n", "It’s also the case that maximal lifetime value increases with volatility.\n", "\n", "Higher volatility provides more upside potential, while at the same time\n", "workers can protect themselves against downside risk by rejecting low offers.\n", "\n", "This option value translates into higher expected lifetime utility.\n", "\n", "To demonstrate this, we will:\n", "\n", "1. Compute the reservation wage for each volatility level \n", "1. Calculate the expected discounted value of the lifetime income stream\n", " associated with that reservation wage, using Monte Carlo. \n", "\n", "\n", "The simulation works as follows:\n", "\n", "1. 
Compute the present discounted value of one lifetime earnings path, from a given wage path. \n", "1. Average over a large number of such calculations to approximate expected discounted value. \n", "\n", "\n", "We truncate each path at $ T=100 $, which provides sufficient resolution for our purposes." ] }, { "cell_type": "code", "execution_count": null, "id": "43430fb0", "metadata": { "hide-output": false }, "outputs": [], "source": [ "@jax.jit\n", "def simulate_lifetime_value(key, model, w_bar, n_periods=100):\n", " \"\"\"\n", " Simulate one realization of the wage path and compute lifetime value.\n", "\n", " Parameters:\n", " -----------\n", " key : jax.random.PRNGKey\n", " Random key for JAX\n", " model : McCallModelContinuous\n", " The model containing parameters\n", " w_bar : float\n", " The reservation wage\n", " n_periods : int\n", " Number of periods to simulate\n", "\n", " Returns:\n", " --------\n", " lifetime_value : float\n", " Discounted sum of income over n_periods\n", " \"\"\"\n", " c, β, σ, μ, w_draws = model\n", "\n", " # Draw all wage offers upfront\n", " key, subkey = jax.random.split(key)\n", " s_vals = jax.random.normal(subkey, (n_periods,))\n", " wage_offers = jnp.exp(μ + σ * s_vals)\n", "\n", " # Determine which offers are acceptable\n", " accept = wage_offers >= w_bar\n", "\n", " # Track employment status: employed from first acceptance onward\n", " employed = jnp.cumsum(accept) > 0\n", "\n", " # Get the accepted wage (first wage where accept is True)\n", " first_accept_idx = jnp.argmax(accept)\n", " accepted_wage = wage_offers[first_accept_idx]\n", "\n", " # Earnings at each period: accepted_wage if employed, c if unemployed\n", " earnings = jnp.where(employed, accepted_wage, c)\n", "\n", " # Compute discounted sum\n", " periods = jnp.arange(n_periods)\n", " discount_factors = β ** periods\n", " lifetime_value = jnp.sum(discount_factors * earnings)\n", "\n", " return lifetime_value\n", "\n", "\n", "@jax.jit\n", "def compute_mean_lifetime_value(model, w_bar, num_reps=10000, seed=1234):\n", " \"\"\"\n", " Compute mean lifetime value across many simulations.\n", "\n", " \"\"\"\n", " key = jax.random.PRNGKey(seed)\n", " keys = jax.random.split(key, num_reps)\n", "\n", " # Vectorize the simulation across all replications\n", " simulate_fn = jax.vmap(simulate_lifetime_value, in_axes=(0, None, None))\n", " lifetime_values = simulate_fn(keys, model, w_bar)\n", " return jnp.mean(lifetime_values)" ] }, { "cell_type": "markdown", "id": "bf980ee2", "metadata": {}, "source": [ "Now let’s compute the expected lifetime value for each volatility level:" ] }, { "cell_type": "code", "execution_count": null, "id": "d46b3694", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Use the same volatility range and mean wage\n", "σ_vals = jnp.linspace(0.1, 1.0, 25)\n", "mean_wage = 20.0\n", "\n", "lifetime_vals = []\n", "for σ in σ_vals:\n", " μ = compute_μ_for_mean(σ, mean_wage)\n", " model = create_mccall_continuous(σ=σ, μ=μ)\n", " w_bar = compute_reservation_wage_continuous(model)\n", " lv = compute_mean_lifetime_value(model, w_bar)\n", " lifetime_vals.append(lv)" ] }, { "cell_type": "markdown", "id": "1dca81fb", "metadata": {}, "source": [ "Let’s visualize the expected lifetime value as a function of volatility:" ] }, { "cell_type": "code", "execution_count": null, "id": "615796a6", "metadata": { "hide-output": false }, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "ax.plot(σ_vals, lifetime_vals, linewidth=2, color='green')\n", "ax.set_xlabel(r'volatility 
($\\sigma$)', fontsize=12)\n", "ax.set_ylabel('expected lifetime value', fontsize=12)\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "b21bfc76", "metadata": {}, "source": [ "The plot confirms that despite workers setting higher reservation wages when facing\n", "more volatile wage offers (as shown above), they achieve higher expected lifetime\n", "values due to the option value of search." ] }, { "cell_type": "markdown", "id": "b6034582", "metadata": {}, "source": [ "## Exercises" ] }, { "cell_type": "markdown", "id": "da1d0d08", "metadata": {}, "source": [ "## Exercise 42.1\n", "\n", "Compute the average duration of unemployment when $ \\beta=0.99 $ and\n", "$ c $ takes the following values\n", "\n", "> `c_vals = np.linspace(10, 40, 4)`\n", "\n", "\n", "That is, start the agent off as unemployed, compute their reservation wage\n", "given the parameters, and then simulate to see how long it takes to accept.\n", "\n", "Repeat a large number of times and take the average.\n", "\n", "Plot mean unemployment duration as a function of $ c $ in `c_vals`.\n", "\n", "Try to explain what you see." ] }, { "cell_type": "markdown", "id": "7ee82d5e", "metadata": {}, "source": [ "## Solution\n", "\n", "Here’s a solution using the continuous wage offer distribution with JAX." ] }, { "cell_type": "code", "execution_count": null, "id": "600ffa93", "metadata": { "hide-output": false }, "outputs": [], "source": [ "def compute_stopping_time_continuous(w_bar, key, model):\n", " \"\"\"\n", " Compute stopping time by drawing wages from the continuous distribution\n", " until one exceeds `w_bar`.\n", "\n", " Parameters:\n", " -----------\n", " w_bar : float\n", " The reservation wage\n", " key : jax.random.PRNGKey\n", " Random key for JAX\n", " model : McCallModelContinuous\n", " The model containing wage draws\n", "\n", " Returns:\n", " --------\n", " t_final : int\n", " The stopping time (number of periods until acceptance)\n", " \"\"\"\n", " c, β, σ, μ, w_draws = model\n", "\n", " def update(loop_state):\n", " t, key, accept = loop_state\n", " key, subkey = jax.random.split(key)\n", " # Draw a standard normal and transform to wage\n", " s = jax.random.normal(subkey)\n", " w = jnp.exp(μ + σ * s)\n", " accept = w >= w_bar\n", " t = t + 1\n", " return t, key, accept\n", "\n", " def cond(loop_state):\n", " _, _, accept = loop_state\n", " return jnp.logical_not(accept)\n", "\n", " initial_loop_state = (0, key, False)\n", " t_final, _, _ = jax.lax.while_loop(cond, update, initial_loop_state)\n", " return t_final\n", "\n", "\n", "def compute_mean_stopping_time_continuous(w_bar, model, num_reps=100000, seed=1234):\n", " \"\"\"\n", " Generate a mean stopping time over `num_reps` repetitions.\n", "\n", " Parameters:\n", " -----------\n", " w_bar : float\n", " The reservation wage\n", " model : McCallModelContinuous\n", " The model containing parameters\n", " num_reps : int\n", " Number of simulation replications\n", " seed : int\n", " Random seed\n", "\n", " Returns:\n", " --------\n", " mean_time : float\n", " Average stopping time across all replications\n", " \"\"\"\n", " # Generate a key for each MC replication\n", " key = jax.random.PRNGKey(seed)\n", " keys = jax.random.split(key, num_reps)\n", "\n", " # Vectorize compute_stopping_time_continuous and evaluate across keys\n", " compute_fn = jax.vmap(compute_stopping_time_continuous, in_axes=(None, 0, None))\n", " obs = compute_fn(w_bar, keys, model)\n", "\n", " # Return mean stopping time\n", " return jnp.mean(obs)\n", "\n", "\n", "# Compute mean stopping 
time for different values of c\n", "c_vals = jnp.linspace(10, 40, 4)\n", "\n", "@jax.jit\n", "def compute_stop_time_for_c_continuous(c):\n", " \"\"\"Compute mean stopping time for a given compensation value c.\"\"\"\n", " model = create_mccall_continuous(c=c)\n", " w_bar = compute_reservation_wage_continuous(model)\n", " return compute_mean_stopping_time_continuous(w_bar, model)\n", "\n", "# Vectorize across all c values\n", "compute_stop_time_vectorized = jax.vmap(compute_stop_time_for_c_continuous)\n", "stop_times = compute_stop_time_vectorized(c_vals)\n", "\n", "fig, ax = plt.subplots()\n", "\n", "ax.plot(c_vals, stop_times, label=\"mean unemployment duration\")\n", "ax.set(xlabel=\"unemployment compensation\", ylabel=\"months\")\n", "ax.legend()\n", "\n", "plt.show()" ] } ], "metadata": { "date": 1770028421.0674343, "filename": "mccall_model.md", "kernelspec": { "display_name": "Python", "language": "python3", "name": "python3" }, "title": "Job Search I: The McCall Search Model" }, "nbformat": 4, "nbformat_minor": 5 }