{ "cells": [ { "cell_type": "markdown", "id": "eb1c6bc4", "metadata": {}, "source": [ "\n", "" ] }, { "cell_type": "markdown", "id": "55146ee3", "metadata": {}, "source": [ "# Job Search I: The McCall Search Model" ] }, { "cell_type": "markdown", "id": "6ea27e54", "metadata": {}, "source": [ "## Contents\n", "\n", "- [Job Search I: The McCall Search Model](#Job-Search-I:-The-McCall-Search-Model) \n", " - [Overview](#Overview) \n", " - [The McCall Model](#The-McCall-Model) \n", " - [Computing the Optimal Policy: Take 1](#Computing-the-Optimal-Policy:-Take-1) \n", " - [Computing an Optimal Policy: Take 2](#Computing-an-Optimal-Policy:-Take-2) \n", " - [Exercises](#Exercises) " ] }, { "cell_type": "markdown", "id": "031489c3", "metadata": {}, "source": [ "> “Questioning a McCall worker is like having a conversation with an out-of-work friend:\n", "> ‘Maybe you are setting your sights too high’, or ‘Why did you quit your old job before you\n", "> had a new one lined up?’ This is real social science: an attempt to model, to understand,\n", "> human behavior by visualizing the situation people find themselves in, the options they face\n", "> and the pros and cons as they themselves see them.” – Robert E. Lucas, Jr.\n", "\n", "\n", "In addition to what’s in Anaconda, this lecture will need the following libraries:" ] }, { "cell_type": "code", "execution_count": null, "id": "24cfb578", "metadata": { "hide-output": false }, "outputs": [], "source": [ "!pip install quantecon" ] }, { "cell_type": "markdown", "id": "b32911b2", "metadata": {}, "source": [ "## Overview\n", "\n", "The McCall search model [[McCall, 1970](https://python.quantecon.org/zreferences.html#id193)] helped transform economists’ way of thinking about labor markets.\n", "\n", "To clarify notions such as “involuntary” unemployment, McCall modeled the decision problem of an unemployed worker in terms of factors including\n", "\n", "- current and likely future wages \n", "- impatience \n", "- unemployment compensation \n", "\n", "\n", "To solve the decision problem McCall used dynamic programming.\n", "\n", "Here we set up McCall’s model and use dynamic programming to analyze it.\n", "\n", "As we’ll see, McCall’s model is not only interesting in its own right but also an excellent vehicle for learning dynamic programming.\n", "\n", "Let’s start with some imports:" ] }, { "cell_type": "code", "execution_count": null, "id": "e9b89c01", "metadata": { "hide-output": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "plt.rcParams[\"figure.figsize\"] = (11, 5) #set default figure size\n", "import numpy as np\n", "from numba import jit, float64\n", "from numba.experimental import jitclass\n", "import quantecon as qe\n", "from quantecon.distributions import BetaBinomial" ] }, { "cell_type": "markdown", "id": "4b23e8f8", "metadata": {}, "source": [ "## The McCall Model\n", "\n", "\n", "\n", "An unemployed agent receives in each period a job offer at wage $ w_t $.\n", "\n", "In this lecture, we adopt the following simple environment:\n", "\n", "- The offer sequence $ \\{w_t\\}_{t \\geq 0} $ is IID, with $ q(w) $ being the probability of observing wage $ w $ in finite set $ \\mathbb{W} $. \n", "- The agent observes $ w_t $ at the start of $ t $. \n", "- The agent knows that $ \\{w_t\\} $ is IID with common distribution $ q $ and can use this when computing expectations. \n", "\n", "\n", "(In later lectures, we will relax these assumptions.)\n", "\n", "At time $ t $, our agent has two choices:\n", "\n", "1. 
Accept the offer and work permanently at constant wage $ w_t $. \n", "1. Reject the offer, receive unemployment compensation $ c $, and reconsider next period. \n", "\n", "\n", "The agent is infinitely lived and aims to maximize the expected discounted\n", "sum of earnings\n", "\n", "$$\n", "\mathbb{E} \sum_{t=0}^{\infty} \beta^t y_t\n", "$$\n", "\n", "The constant $ \beta $ lies in $ (0, 1) $ and is called a **discount factor**.\n", "\n", "The smaller is $ \beta $, the more the agent discounts future utility relative to current utility.\n", "\n", "The variable $ y_t $ is income, equal to\n", "\n", "- his/her wage $ w_t $ when employed \n", "- unemployment compensation $ c $ when unemployed " ] }, { "cell_type": "markdown", "id": "1beaf579", "metadata": {}, "source": [ "### A Trade-Off\n", "\n", "The worker faces a trade-off:\n", "\n", "- Waiting too long for a good offer is costly, since the future is discounted. \n", "- Accepting too early is costly, since better offers might arrive in the future. \n", "\n", "\n", "To decide optimally in the face of this trade-off, we use dynamic programming.\n", "\n", "Dynamic programming can be thought of as a two-step procedure that\n", "\n", "1. first assigns values to “states” and \n", "1. then deduces optimal actions given those values \n", "\n", "\n", "We’ll go through these steps in turn." ] }, { "cell_type": "markdown", "id": "7efe09d5", "metadata": {}, "source": [ "### The Value Function\n", "\n", "In order to optimally trade off current and future rewards, we need to think about two things:\n", "\n", "1. the current payoffs we get from different choices \n", "1. the different states that those choices will lead to in the next period \n", "\n", "\n", "To weigh these two aspects of the decision problem, we need to assign *values*\n", "to states.\n", "\n", "To this end, let $ v^*(w) $ be the total lifetime *value* accruing to an\n", "unemployed worker who enters the current period unemployed when the wage is\n", "$ w \in \mathbb{W} $.\n", "\n", "In particular, the agent has wage offer $ w $ in hand.\n", "\n", "More precisely, $ v^*(w) $ denotes the value of the objective function defined above when an agent in this situation makes *optimal* decisions now\n", "and at all future points in time.\n", "\n", "Of course $ v^*(w) $ is not trivial to calculate because we don’t yet know\n", "what decisions are optimal and what aren’t!\n", "\n", "But think of $ v^* $ as a function that assigns to each possible wage\n", "$ w $ the maximal lifetime value that can be obtained with that offer in\n", "hand.\n", "\n", "A crucial observation is that this function $ v^* $ must satisfy the\n", "recursion\n", "\n", "\n", "\n", "$$\n", "v^*(w)\n", "= \max \left\{\n", " \frac{w}{1 - \beta}, \, c + \beta\n", " \sum_{w' \in \mathbb{W}} v^*(w') q (w')\n", " \right\} \tag{27.1}\n", "$$\n", "\n", "for every possible $ w $ in $ \mathbb{W} $.\n", "\n", "This important equation is a version of the **Bellman equation**, which is\n", "ubiquitous in economic dynamics and other fields involving planning over time.\n", "\n", "The intuition behind it is as follows:\n", "\n", "- the first term inside the max operation is the lifetime payoff from accepting the current offer, since \n", "\n", "\n", "$$\n", "\frac{w}{1 - \beta} = w + \beta w + \beta^2 w + \cdots\n", "$$\n", "\n", "- the second term inside the max operation is the **continuation value**, which is the lifetime payoff from
rejecting the current offer and then behaving optimally in all subsequent periods \n", "\n", "\n", "If we optimize and pick the best of these two options, we obtain maximal lifetime value from today, given the current offer $ w $.\n", "\n", "But this is precisely $ v^*(w) $, which is the left-hand side of [(27.1)](#equation-odu-pv)." ] }, { "cell_type": "markdown", "id": "694f9c7d", "metadata": {}, "source": [ "### The Optimal Policy\n", "\n", "Suppose for now that we are able to solve [(27.1)](#equation-odu-pv) for the unknown function $ v^* $.\n", "\n", "Once we have this function in hand we can behave optimally (i.e., make the\n", "right choice between accept and reject).\n", "\n", "All we have to do is select the maximal choice on the right-hand side of [(27.1)](#equation-odu-pv).\n", "\n", "The optimal action is best thought of as a **policy**, which is, in general, a map from\n", "states to actions.\n", "\n", "Given *any* $ w $, we can read off the corresponding best choice (accept or\n", "reject) by picking the max on the right-hand side of [(27.1)](#equation-odu-pv).\n", "\n", "Thus, we have a map from $ \mathbb R $ to $ \{0, 1\} $, with 1 meaning accept and 0 meaning reject.\n", "\n", "We can write the policy as follows:\n", "\n", "$$\n", "\sigma(w) := \mathbf{1}\n", " \left\{\n", " \frac{w}{1 - \beta} \geq c + \beta \sum_{w' \in \mathbb W}\n", " v^*(w') q (w')\n", " \right\}\n", "$$\n", "\n", "Here $ \mathbf{1}\{ P \} = 1 $ if statement $ P $ is true and equals 0 otherwise.\n", "\n", "We can also write this as\n", "\n", "$$\n", "\sigma(w) := \mathbf{1} \{ w \geq \bar w \}\n", "$$\n", "\n", "where\n", "\n", "\n", "\n", "$$\n", "\bar w := (1 - \beta) \left\{ c + \beta \sum_{w'} v^*(w') q (w') \right\} \tag{27.2}\n", "$$\n", "\n", "Here $ \bar w $ (called the *reservation wage*) is a constant depending on\n", "$ \beta, c $ and the wage distribution.\n", "\n", "The agent should accept if and only if the current wage offer equals or exceeds the reservation wage.\n", "\n", "In view of [(27.2)](#equation-reswage), we can compute this reservation wage if we can compute the value function.
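] }, { "cell_type": "markdown", "id": "3f0a7c21", "metadata": {}, "source": [ "To make this concrete, here is a minimal sketch (not part of the model code developed below) of how a solved value function yields the reservation wage [(27.2)](#equation-reswage) and the policy $ \sigma $. The toy numbers and the stand-in value vector are purely illustrative assumptions." ] }, { "cell_type": "code", "execution_count": null, "id": "5b9d2e47", "metadata": { "hide-output": false }, "outputs": [], "source": [ "import numpy as np\n", "\n", "# Hypothetical toy inputs (illustrative values only)\n", "w = np.array([20.0, 40.0, 60.0])    # wage grid\n", "q = np.array([0.3, 0.5, 0.2])       # offer probabilities, summing to one\n", "c, β = 25.0, 0.99                   # compensation and discount factor\n", "v_star = w / (1 - β)                # stand-in for a solved value function\n", "\n", "# Reservation wage via (27.2)\n", "h = c + β * np.sum(v_star * q)      # continuation value\n", "w_bar = (1 - β) * h\n", "\n", "# Policy: accept (1) iff the offer is at least the reservation wage\n", "σ = (w >= w_bar).astype(int)\n", "σ"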
] }, { "cell_type": "markdown", "id": "88296e1c", "metadata": {}, "source": [ "## Computing the Optimal Policy: Take 1\n", "\n", "To put the above ideas into action, we need to compute the value function at\n", "each possible state $ w \\in \\mathbb W $.\n", "\n", "To simplify notation, let’s set\n", "\n", "$$\n", "\\mathbb W := \\{w_1, \\ldots, w_n \\}\n", " \\quad \\text{and} \\quad\n", " v^*(i) := v^*(w_i)\n", "$$\n", "\n", "The value function is then represented by the vector\n", "$ v^* = (v^*(i))_{i=1}^n $.\n", "\n", "In view of [(27.1)](#equation-odu-pv), this vector satisfies the nonlinear system of equations\n", "\n", "\n", "\n", "$$\n", "v^*(i)\n", "= \\max \\left\\{\n", " \\frac{w(i)}{1 - \\beta}, \\, c + \\beta \\sum_{1 \\leq j \\leq n}\n", " v^*(j) q (j)\n", " \\right\\}\n", "\\quad\n", "\\text{for } i = 1, \\ldots, n \\tag{27.3}\n", "$$" ] }, { "cell_type": "markdown", "id": "b1c573f6", "metadata": {}, "source": [ "### The Algorithm\n", "\n", "To compute this vector, we use successive approximations:\n", "\n", "Step 1: pick an arbitrary initial guess $ v \\in \\mathbb R^n $.\n", "\n", "Step 2: compute a new vector $ v' \\in \\mathbb R^n $ via\n", "\n", "\n", "\n", "$$\n", "v'(i)\n", "= \\max \\left\\{\n", " \\frac{w(i)}{1 - \\beta}, \\, c + \\beta \\sum_{1 \\leq j \\leq n}\n", " v(j) q (j)\n", " \\right\\}\n", "\\quad\n", "\\text{for } i = 1, \\ldots, n \\tag{27.4}\n", "$$\n", "\n", "Step 3: calculate a measure of a discrepancy between $ v $ and $ v' $, such as $ \\max_i |v(i)- v'(i)| $.\n", "\n", "Step 4: if the deviation is larger than some fixed tolerance, set $ v = v' $ and go to step 2, else continue.\n", "\n", "Step 5: return $ v $.\n", "\n", "For a small tolerance, the returned function $ v $ is a close approximation to the value function $ v^* $.\n", "\n", "The theory below elaborates on this point." ] }, { "cell_type": "markdown", "id": "2150bb24", "metadata": {}, "source": [ "### Fixed Point Theory\n", "\n", "What’s the mathematics behind these ideas?\n", "\n", "First, one defines a mapping $ T $ from $ \\mathbb R^n $ to\n", "itself via\n", "\n", "\n", "\n", "$$\n", "(Tv)(i)\n", "= \\max \\left\\{\n", " \\frac{w(i)}{1 - \\beta}, \\, c + \\beta \\sum_{1 \\leq j \\leq n}\n", " v(j) q (j)\n", " \\right\\}\n", "\\quad\n", "\\text{for } i = 1, \\ldots, n \\tag{27.5}\n", "$$\n", "\n", "(A new vector $ Tv $ is obtained from given vector $ v $ by evaluating\n", "the r.h.s. at each $ i $.)\n", "\n", "The element $ v_k $ in the sequence $ \\{v_k\\} $ of successive\n", "approximations corresponds to $ T^k v $.\n", "\n", "- This is $ T $ applied $ k $ times, starting at the initial guess\n", " $ v $ \n", "\n", "\n", "One can show that the conditions of the [Banach fixed point theorem](https://en.wikipedia.org/wiki/Banach_fixed-point_theorem) are\n", "satisfied by $ T $ on $ \\mathbb R^n $.\n", "\n", "One implication is that $ T $ has a unique fixed point in $ \\mathbb R^n $.\n", "\n", "- That is, a unique vector $ \\bar v $ such that $ T \\bar v = \\bar v $. \n", "\n", "\n", "Moreover, it’s immediate from the definition of $ T $ that this fixed\n", "point is $ v^* $.\n", "\n", "A second implication of the Banach contraction mapping theorem is that\n", "$ \\{ T^k v \\} $ converges to the fixed point $ v^* $ regardless of\n", "$ v $." 
] }, { "cell_type": "markdown", "id": "774cb9e4", "metadata": {}, "source": [ "### Implementation\n", "\n", "Our default for $ q $, the distribution of the state process, will be\n", "[Beta-binomial](https://en.wikipedia.org/wiki/Beta-binomial_distribution)." ] }, { "cell_type": "code", "execution_count": null, "id": "8b5a6782", "metadata": { "hide-output": false }, "outputs": [], "source": [ "n, a, b = 50, 200, 100 # default parameters\n", "q_default = BetaBinomial(n, a, b).pdf() # default choice of q" ] }, { "cell_type": "markdown", "id": "14271888", "metadata": {}, "source": [ "Our default set of values for wages will be" ] }, { "cell_type": "code", "execution_count": null, "id": "e268d289", "metadata": { "hide-output": false }, "outputs": [], "source": [ "w_min, w_max = 10, 60\n", "w_default = np.linspace(w_min, w_max, n+1)" ] }, { "cell_type": "markdown", "id": "29039992", "metadata": {}, "source": [ "Here’s a plot of the probabilities of different wage outcomes:" ] }, { "cell_type": "code", "execution_count": null, "id": "a8f79a5c", "metadata": { "hide-output": false }, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "ax.plot(w_default, q_default, '-o', label='$q(w(i))$')\n", "ax.set_xlabel('wages')\n", "ax.set_ylabel('probabilities')\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "5e2c3f18", "metadata": {}, "source": [ "We are going to use Numba to accelerate our code.\n", "\n", "- See, in particular, the discussion of `@jitclass` in [our lecture on Numba](https://python-programming.quantecon.org/numba.html). \n", "\n", "\n", "The following helps Numba by providing some type" ] }, { "cell_type": "code", "execution_count": null, "id": "43f12bea", "metadata": { "hide-output": false }, "outputs": [], "source": [ "mccall_data = [\n", " ('c', float64), # unemployment compensation\n", " ('β', float64), # discount factor\n", " ('w', float64[:]), # array of wage values, w[i] = wage at state i\n", " ('q', float64[:]) # array of probabilities\n", "]" ] }, { "cell_type": "markdown", "id": "34888eba", "metadata": {}, "source": [ "Here’s a class that stores the data and computes the values of state-action pairs,\n", "i.e. the value in the maximum bracket on the right hand side of the Bellman equation [(27.4)](#equation-odu-pv2p),\n", "given the current state and an arbitrary feasible action.\n", "\n", "Default parameter values are embedded in the class." 
] }, { "cell_type": "code", "execution_count": null, "id": "4acb9380", "metadata": { "hide-output": false }, "outputs": [], "source": [ "@jitclass(mccall_data)\n", "class McCallModel:\n", "\n", " def __init__(self, c=25, β=0.99, w=w_default, q=q_default):\n", "\n", " self.c, self.β = c, β\n", " self.w, self.q = w_default, q_default\n", "\n", " def state_action_values(self, i, v):\n", " \"\"\"\n", " The values of state-action pairs.\n", " \"\"\"\n", " # Simplify names\n", " c, β, w, q = self.c, self.β, self.w, self.q\n", " # Evaluate value for each state-action pair\n", " # Consider action = accept or reject the current offer\n", " accept = w[i] / (1 - β)\n", " reject = c + β * np.sum(v * q)\n", "\n", " return np.array([accept, reject])" ] }, { "cell_type": "markdown", "id": "cd8f191b", "metadata": {}, "source": [ "Based on these defaults, let’s try plotting the first few approximate value functions\n", "in the sequence $ \\{ T^k v \\} $.\n", "\n", "We will start from guess $ v $ given by $ v(i) = w(i) / (1 - β) $, which is the value of accepting at every given wage.\n", "\n", "Here’s a function to implement this:" ] }, { "cell_type": "code", "execution_count": null, "id": "cd4fff6c", "metadata": { "hide-output": false }, "outputs": [], "source": [ "def plot_value_function_seq(mcm, ax, num_plots=6):\n", " \"\"\"\n", " Plot a sequence of value functions.\n", "\n", " * mcm is an instance of McCallModel\n", " * ax is an axes object that implements a plot method.\n", "\n", " \"\"\"\n", "\n", " n = len(mcm.w)\n", " v = mcm.w / (1 - mcm.β)\n", " v_next = np.empty_like(v)\n", " for i in range(num_plots):\n", " ax.plot(mcm.w, v, '-', alpha=0.4, label=f\"iterate {i}\")\n", " # Update guess\n", " for j in range(n):\n", " v_next[j] = np.max(mcm.state_action_values(j, v))\n", " v[:] = v_next # copy contents into v\n", "\n", " ax.legend(loc='lower right')" ] }, { "cell_type": "markdown", "id": "3d170eba", "metadata": {}, "source": [ "Now let’s create an instance of `McCallModel` and watch iterations $ T^k v $ converge from below:" ] }, { "cell_type": "code", "execution_count": null, "id": "6c750abf", "metadata": { "hide-output": false }, "outputs": [], "source": [ "mcm = McCallModel()\n", "\n", "fig, ax = plt.subplots()\n", "ax.set_xlabel('wage')\n", "ax.set_ylabel('value')\n", "plot_value_function_seq(mcm, ax)\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "431ab89e", "metadata": {}, "source": [ "You can see that convergence is occurring: successive iterates are getting closer together.\n", "\n", "Here’s a more serious iteration effort to compute the limit, which continues until measured deviation between successive iterates is below tol.\n", "\n", "Once we obtain a good approximation to the limit, we will use it to calculate\n", "the reservation wage.\n", "\n", "We’ll be using JIT compilation via Numba to turbocharge our loops." 
] }, { "cell_type": "code", "execution_count": null, "id": "444cb595", "metadata": { "hide-output": false }, "outputs": [], "source": [ "@jit(nopython=True)\n", "def compute_reservation_wage(mcm,\n", " max_iter=500,\n", " tol=1e-6):\n", "\n", " # Simplify names\n", " c, β, w, q = mcm.c, mcm.β, mcm.w, mcm.q\n", "\n", " # == First compute the value function == #\n", "\n", " n = len(w)\n", " v = w / (1 - β) # initial guess\n", " v_next = np.empty_like(v)\n", " j = 0\n", " error = tol + 1\n", " while j < max_iter and error > tol:\n", "\n", " for j in range(n):\n", " v_next[j] = np.max(mcm.state_action_values(j, v))\n", "\n", " error = np.max(np.abs(v_next - v))\n", " j += 1\n", "\n", " v[:] = v_next # copy contents into v\n", "\n", " # == Now compute the reservation wage == #\n", "\n", " return (1 - β) * (c + β * np.sum(v * q))" ] }, { "cell_type": "markdown", "id": "6f61d4f3", "metadata": {}, "source": [ "The next line computes the reservation wage at default parameters" ] }, { "cell_type": "code", "execution_count": null, "id": "9e9c1ffd", "metadata": { "hide-output": false }, "outputs": [], "source": [ "compute_reservation_wage(mcm)" ] }, { "cell_type": "markdown", "id": "1b4051c7", "metadata": {}, "source": [ "### Comparative Statics\n", "\n", "Now that we know how to compute the reservation wage, let’s see how it varies with\n", "parameters.\n", "\n", "In particular, let’s look at what happens when we change $ \\beta $ and\n", "$ c $." ] }, { "cell_type": "code", "execution_count": null, "id": "2826c751", "metadata": { "hide-output": false }, "outputs": [], "source": [ "grid_size = 25\n", "R = np.empty((grid_size, grid_size))\n", "\n", "c_vals = np.linspace(10.0, 30.0, grid_size)\n", "β_vals = np.linspace(0.9, 0.99, grid_size)\n", "\n", "for i, c in enumerate(c_vals):\n", " for j, β in enumerate(β_vals):\n", " mcm = McCallModel(c=c, β=β)\n", " R[i, j] = compute_reservation_wage(mcm)" ] }, { "cell_type": "code", "execution_count": null, "id": "884e6ade", "metadata": { "hide-output": false }, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "\n", "cs1 = ax.contourf(c_vals, β_vals, R.T, alpha=0.75)\n", "ctr1 = ax.contour(c_vals, β_vals, R.T)\n", "\n", "plt.clabel(ctr1, inline=1, fontsize=13)\n", "plt.colorbar(cs1, ax=ax)\n", "\n", "\n", "ax.set_title(\"reservation wage\")\n", "ax.set_xlabel(\"$c$\", fontsize=16)\n", "ax.set_ylabel(\"$β$\", fontsize=16)\n", "\n", "ax.ticklabel_format(useOffset=False)\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "302d49db", "metadata": {}, "source": [ "As expected, the reservation wage increases both with patience and with\n", "unemployment compensation.\n", "\n", "\n", "" ] }, { "cell_type": "markdown", "id": "fda77009", "metadata": {}, "source": [ "## Computing an Optimal Policy: Take 2\n", "\n", "The approach to dynamic programming just described is standard and\n", "broadly applicable.\n", "\n", "But for our McCall search model there’s also an easier way that circumvents the\n", "need to compute the value function.\n", "\n", "Let $ h $ denote the continuation value:\n", "\n", "\n", "\n", "$$\n", "h\n", "= c + \\beta\n", " \\sum_{s'} v^*(s') q (s')\n", "\\quad \\tag{27.6}\n", "$$\n", "\n", "The Bellman equation can now be written as\n", "\n", "$$\n", "v^*(s')\n", "= \\max \\left\\{ \\frac{w(s')}{1 - \\beta}, \\, h \\right\\}\n", "$$\n", "\n", "Substituting this last equation into [(27.6)](#equation-j1) gives\n", "\n", "\n", "\n", "$$\n", "h\n", "= c + \\beta\n", " \\sum_{s' \\in \\mathbb S}\n", " \\max \\left\\{\n", " \\frac{w(s')}{1 
"\n", "\n", "\n", "$$\n", "h\n", "= c + \beta\n", " \sum_{w' \in \mathbb W}\n", " \max \left\{\n", " \frac{w'}{1 - \beta}, h\n", " \right\} q (w')\n", "\quad \tag{27.7}\n", "$$\n", "\n", "This is a nonlinear equation that we can solve for $ h $.\n", "\n", "As before, we will use successive approximations:\n", "\n", "Step 1: pick an initial guess $ h $.\n", "\n", "Step 2: compute the update $ h' $ via\n", "\n", "\n", "\n", "$$\n", "h'\n", "= c + \beta\n", " \sum_{w' \in \mathbb W}\n", " \max \left\{\n", " \frac{w'}{1 - \beta}, h\n", " \right\} q (w')\n", "\quad \tag{27.8}\n", "$$\n", "\n", "Step 3: calculate the deviation $ |h - h'| $.\n", "\n", "Step 4: if the deviation is larger than some fixed tolerance, set $ h = h' $ and go to step 2, else return $ h $.\n", "\n", "One can again use the Banach contraction mapping theorem to show that this process always converges.\n", "\n", "The big difference here, however, is that we’re iterating on a scalar $ h $, rather than an $ n $-vector, $ v(i), i = 1, \ldots, n $.\n", "\n", "Here’s an implementation:" ] }, { "cell_type": "code", "execution_count": null, "id": "ac3c067a", "metadata": { "hide-output": false }, "outputs": [], "source": [ "@jit(nopython=True)\n", "def compute_reservation_wage_two(mcm,\n", "                                 max_iter=500,\n", "                                 tol=1e-5):\n", "\n", "    # Simplify names\n", "    c, β, w, q = mcm.c, mcm.β, mcm.w, mcm.q\n", "\n", "    # == First compute h == #\n", "\n", "    h = np.sum(w * q) / (1 - β)  # initial guess\n", "    i = 0\n", "    error = tol + 1\n", "    while i < max_iter and error > tol:\n", "\n", "        s = np.maximum(w / (1 - β), h)\n", "        h_next = c + β * np.sum(s * q)\n", "\n", "        error = np.abs(h_next - h)\n", "        i += 1\n", "\n", "        h = h_next\n", "\n", "    # == Now compute the reservation wage == #\n", "\n", "    return (1 - β) * h" ] }, { "cell_type": "markdown", "id": "efa1b211", "metadata": {}, "source": [ "You can use this code to solve the exercise below." ] }, { "cell_type": "markdown", "id": "555ea558", "metadata": {}, "source": [ "## Exercises" ] }, { "cell_type": "markdown", "id": "7dd6b3d3", "metadata": {}, "source": [ "## Exercise 27.1\n", "\n", "Compute the average duration of unemployment when $ \beta=0.99 $ and\n", "$ c $ takes the following values\n", "\n", "> `c_vals = np.linspace(10, 40, 25)`\n", "\n", "\n", "That is, start the agent off as unemployed, compute their reservation wage\n", "given the parameters, and then simulate to see how long it takes to accept.\n", "\n", "Repeat a large number of times and take the average.\n", "\n", "Plot mean unemployment duration as a function of $ c $ in `c_vals`."
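] }, { "cell_type": "markdown", "id": "2d7e5c19", "metadata": {}, "source": [ "A hint before the solution: to simulate offers from the discrete distribution $ q $, one standard device is inverse-transform sampling from the cumulative distribution. Here is a minimal sketch; the solution below uses `qe.random.draw`, which works along these lines." ] }, { "cell_type": "code", "execution_count": null, "id": "4b8a6d35", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Inverse-transform sampling from a discrete distribution (a sketch)\n", "cdf = np.cumsum(q_default)                 # cumulative distribution of offers\n", "u = np.random.uniform()                    # uniform draw on [0, 1)\n", "i = np.searchsorted(cdf, u, side='right')  # first index with cdf[i] > u\n", "w_draw = w_default[i]                      # the simulated wage offer\n", "w_draw"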
] }, { "cell_type": "markdown", "id": "31f6f926", "metadata": {}, "source": [ "## Solution to[ Exercise 27.1](https://python.quantecon.org/#mm_ex1)\n", "\n", "Here’s one solution" ] }, { "cell_type": "code", "execution_count": null, "id": "72a08676", "metadata": { "hide-output": false }, "outputs": [], "source": [ "cdf = np.cumsum(q_default)\n", "\n", "@jit(nopython=True)\n", "def compute_stopping_time(w_bar, seed=1234):\n", "\n", " np.random.seed(seed)\n", " t = 1\n", " while True:\n", " # Generate a wage draw\n", " w = w_default[qe.random.draw(cdf)]\n", " # Stop when the draw is above the reservation wage\n", " if w >= w_bar:\n", " stopping_time = t\n", " break\n", " else:\n", " t += 1\n", " return stopping_time\n", "\n", "@jit(nopython=True)\n", "def compute_mean_stopping_time(w_bar, num_reps=100000):\n", " obs = np.empty(num_reps)\n", " for i in range(num_reps):\n", " obs[i] = compute_stopping_time(w_bar, seed=i)\n", " return obs.mean()\n", "\n", "c_vals = np.linspace(10, 40, 25)\n", "stop_times = np.empty_like(c_vals)\n", "for i, c in enumerate(c_vals):\n", " mcm = McCallModel(c=c)\n", " w_bar = compute_reservation_wage_two(mcm)\n", " stop_times[i] = compute_mean_stopping_time(w_bar)\n", "\n", "fig, ax = plt.subplots()\n", "\n", "ax.plot(c_vals, stop_times, label=\"mean unemployment duration\")\n", "ax.set(xlabel=\"unemployment compensation\", ylabel=\"months\")\n", "ax.legend()\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "fcc85881", "metadata": {}, "source": [ "## Exercise 27.2\n", "\n", "The purpose of this exercise is to show how to replace the discrete wage\n", "offer distribution used above with a continuous distribution.\n", "\n", "This is a significant topic because many convenient distributions are\n", "continuous (i.e., have a density).\n", "\n", "Fortunately, the theory changes little in our simple model.\n", "\n", "Recall that $ h $ in [(27.6)](#equation-j1) denotes the value of not accepting a job in this period but\n", "then behaving optimally in all subsequent periods:\n", "\n", "To shift to a continuous offer distribution, we can replace [(27.6)](#equation-j1) by\n", "\n", "\n", "\n", "$$\n", "h\n", "= c + \\beta\n", " \\int v^*(s') q (s') ds'.\n", "\\quad \\tag{27.9}\n", "$$\n", "\n", "Equation [(27.7)](#equation-j2) becomes\n", "\n", "\n", "\n", "$$\n", "h\n", "= c + \\beta\n", " \\int\n", " \\max \\left\\{\n", " \\frac{w(s')}{1 - \\beta}, h\n", " \\right\\} q (s') d s'\n", "\\quad \\tag{27.10}\n", "$$\n", "\n", "The aim is to solve this nonlinear equation by iteration, and from it obtain\n", "the reservation wage.\n", "\n", "Try to carry this out, setting\n", "\n", "- the state sequence $ \\{ s_t \\} $ to be IID and standard normal and \n", "- the wage function to be $ w(s) = \\exp(\\mu + \\sigma s) $. \n", "\n", "\n", "You will need to implement a new version of the `McCallModel` class that\n", "assumes a lognormal wage distribution.\n", "\n", "Calculate the integral by Monte Carlo, by averaging over a large number of wage draws.\n", "\n", "For default parameters, use `c=25, β=0.99, σ=0.5, μ=2.5`.\n", "\n", "Once your code is working, investigate how the reservation wage changes with $ c $ and $ \\beta $." 
] }, { "cell_type": "markdown", "id": "73f700c0", "metadata": {}, "source": [ "## Solution to[ Exercise 27.2](https://python.quantecon.org/#mm_ex2)\n", "\n", "Here is one solution:" ] }, { "cell_type": "code", "execution_count": null, "id": "72eacc76", "metadata": { "hide-output": false }, "outputs": [], "source": [ "mccall_data_continuous = [\n", " ('c', float64), # unemployment compensation\n", " ('β', float64), # discount factor\n", " ('σ', float64), # scale parameter in lognormal distribution\n", " ('μ', float64), # location parameter in lognormal distribution\n", " ('w_draws', float64[:]) # draws of wages for Monte Carlo\n", "]\n", "\n", "@jitclass(mccall_data_continuous)\n", "class McCallModelContinuous:\n", "\n", " def __init__(self, c=25, β=0.99, σ=0.5, μ=2.5, mc_size=1000):\n", "\n", " self.c, self.β, self.σ, self.μ = c, β, σ, μ\n", "\n", " # Draw and store shocks\n", " np.random.seed(1234)\n", " s = np.random.randn(mc_size)\n", " self.w_draws = np.exp(μ+ σ * s)\n", "\n", "\n", "@jit(nopython=True)\n", "def compute_reservation_wage_continuous(mcmc, max_iter=500, tol=1e-5):\n", "\n", " c, β, σ, μ, w_draws = mcmc.c, mcmc.β, mcmc.σ, mcmc.μ, mcmc.w_draws\n", "\n", " h = np.mean(w_draws) / (1 - β) # initial guess\n", " i = 0\n", " error = tol + 1\n", " while i < max_iter and error > tol:\n", "\n", " integral = np.mean(np.maximum(w_draws / (1 - β), h))\n", " h_next = c + β * integral\n", "\n", " error = np.abs(h_next - h)\n", " i += 1\n", "\n", " h = h_next\n", "\n", " # == Now compute the reservation wage == #\n", "\n", " return (1 - β) * h" ] }, { "cell_type": "markdown", "id": "ab854227", "metadata": {}, "source": [ "Now we investigate how the reservation wage changes with $ c $ and\n", "$ \\beta $.\n", "\n", "We will do this using a contour plot." ] }, { "cell_type": "code", "execution_count": null, "id": "694fe925", "metadata": { "hide-output": false }, "outputs": [], "source": [ "grid_size = 25\n", "R = np.empty((grid_size, grid_size))\n", "\n", "c_vals = np.linspace(10.0, 30.0, grid_size)\n", "β_vals = np.linspace(0.9, 0.99, grid_size)\n", "\n", "for i, c in enumerate(c_vals):\n", " for j, β in enumerate(β_vals):\n", " mcmc = McCallModelContinuous(c=c, β=β)\n", " R[i, j] = compute_reservation_wage_continuous(mcmc)" ] }, { "cell_type": "code", "execution_count": null, "id": "eff51b1d", "metadata": { "hide-output": false }, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "\n", "cs1 = ax.contourf(c_vals, β_vals, R.T, alpha=0.75)\n", "ctr1 = ax.contour(c_vals, β_vals, R.T)\n", "\n", "plt.clabel(ctr1, inline=1, fontsize=13)\n", "plt.colorbar(cs1, ax=ax)\n", "\n", "\n", "ax.set_title(\"reservation wage\")\n", "ax.set_xlabel(\"$c$\", fontsize=16)\n", "ax.set_ylabel(\"$β$\", fontsize=16)\n", "\n", "ax.ticklabel_format(useOffset=False)\n", "\n", "plt.show()" ] } ], "metadata": { "date": 1714442507.6833463, "filename": "mccall_model.md", "kernelspec": { "display_name": "Python", "language": "python3", "name": "python3" }, "title": "Job Search I: The McCall Search Model" }, "nbformat": 4, "nbformat_minor": 5 }