{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# The McCall Search Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Contents:\n",
"\n",
"- [The McCall Search Model](#The-McCall-Search-Model) \n",
" - [Revisiting the Finite-State McCall Model](#Revisiting-the-Finite-State-McCall-Model) \n",
" - [Solving for the Optimal Policy](#Solving-for-the-Optimal-Policy)\n",
" - [Implementing Value Function Iteration](#Implementing-Value-Function-Iteration) \n",
" - [Comparative Statics](#Comparative-Statics) \n",
"\n",
"\n",
"This lab includes:\n",
"\n",
"(1) An overview of the finite-state McCall search model;\n",
"\n",
"(2) A description of the value function iteration algorithm;\n",
"\n",
"(3) An implementation of value function iteration applied to the McCall model;\n",
"\n",
"(4) Some simple comparative statics analysis of the McCall model.\n",
"\n",
"----"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Revisiting the Finite-State McCall Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We start by briefly reviewing the finite-state McCall model setup.\n",
"\n",
"A worker wants to maximize expected lifetime utility, $\\mathbb{E} \\sum_{t=0}^\\infty \\beta^t y_t$, where $\\beta \\in (0,1)$ is the discount factor and $\\{y_t\\}$ is an income sequence.\n",
"\n",
"The worker starts out being unemployed, but at each period they receive a job offer with wage $w$ drawn from a finite i.i.d. distribution (known to the worker) with outcomes $\\{w_1,\\ldots,w_S\\}$ and corresponding probability weights $\\{p_1,\\ldots,p_S\\}$.\n",
"\n",
"At time $T$ the worker may accept the given job offer and receive the wage $w$ for all $t \\geq T$.\n",
"\n",
"Or the worker may reject the offer, receive the unemployment check $c > 0$ at time $T$, and repeat the job search again at $T+1$.\n",
"\n",
"The worker chooses whichever option is associated with a greater expected lifetime utility. Therefore, the value function $v(w)$ of this problem satisfies the Bellman equation \n",
"\n",
"\\begin{align}\n",
" v(w) = \\max_{\\text{accept,reject}} \\left\\{ \\frac{w}{1-\\beta} \\, , \\, c + \\beta \\sum_{i = 1}^S v(w_i ') p_i \\right\\} \n",
"\\end{align}\n",
"\n",
"In this lab we will combine some of the tools we've learned in the first two labs to solve for the value function of the above Bellman equation."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Solving for the Optimal Policy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Bellman equation is equivalent to the following nonlinear system of equations that we need to solve with respect to the $v_i$'s:\n",
"\n",
"\\begin{align}\n",
"v_i = \\max \\left\\{ \\frac{w_i}{1 - \\beta} \\, , \\, c + \\beta \\sum_{i=1}^S v_i p_i \\right\\} \\text{ for } i = 1,\\ldots,S \\, .\n",
"\\end{align}\n",
"\n",
"Our solution to this system of equations should be of the form $v \\in \\mathbb{R}^S$."
]
},
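  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that the second term inside the max does not depend on $i$: the value of rejecting, $h = c + \\beta \\sum_{j=1}^S v_j p_j$, is a single scalar. The solution therefore takes a *reservation wage* form,\n",
    "\n",
    "\\begin{align*}\n",
    "v_i = \\max \\left\\{ \\frac{w_i}{1-\\beta} \\, , \\, h \\right\\} \\, ,\n",
    "\\end{align*}\n",
    "\n",
    "and the optimal policy is to accept any offer with $\\frac{w_i}{1-\\beta} \\geq h$, i.e. $w_i \\geq (1-\\beta) h$."
   ]
  },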
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The value function iteration algorithm allows us to solve for $v$ with the following steps:\n",
"\n",
"(1) Pick some $v \\in \\mathbb{R}^S$ -- the initial guess for the solution;\n",
"\n",
"(2) Map the chosen $v$ using the Bellman mapping to obtain $v_1 \\in \\mathbb{R}^S$, so that \n",
"\\begin{align*} \n",
"v_i' = \\max \\left\\{ \\frac{w_i}{1-\\beta} \\, , \\, c + \\beta \\sum_{i=1}^S v_i p_i \\right\\} \\text{ for } i = 1,\\ldots,S \\, ;\n",
"\\end{align*} \n",
"\n",
"(3) Note the deviation between $v$ and $v'$ -- if the deviation is above some threshhold then stop, otherwise repeat the process.\n",
"\n",
"Once the above process stops, the resulting $v$ will be an approvimation of the true solution for the value function.\n",
"\n",
"Given $|\\beta| < 1$, our Bellman map satisfies the assumptions of the Banach contraction mapping theorem, which implies that our generated sequence of values should converge to a unique fixed point."
]
},
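  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The contraction property also tells us how good the approximation is when we stop. Writing $T$ for the Bellman map and $v^*$ for its fixed point, if $\\| Tv - v \\|_\\infty \\leq \\epsilon$ then\n",
    "\n",
    "\\begin{align*}\n",
    "\\| Tv - v^* \\|_\\infty \\leq \\frac{\\beta}{1-\\beta} \\, \\epsilon \\, ,\n",
    "\\end{align*}\n",
    "\n",
    "so a small deviation between successive iterates guarantees that we are close to the true solution."
   ]
  },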
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Implementing Value Function Iteration"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First we load all necessary packages."
]
},
{
"cell_type": "code",
"execution_count": 79,
"metadata": {
"ExecuteTime": {
"end_time": "2021-04-13T22:19:48.472000-07:00",
"start_time": "2021-04-14T05:19:48.468Z"
}
},
"outputs": [],
"source": [
"using LinearAlgebra"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We create a function that carries out a single iteration of the Bellman mapping.\n",
"\n",
"We can then repeatedly call on this function when applying the algorithm.\n",
"\n",
"This function will be called `mccallbellmanmap`, and will take an initial value function `v`, a vector of possible wages `w`, a vector of probabilities associated with each of the wages `π`, the unemployment benefit `c`, and the discount factor `β`, as inputs.\n",
"\n",
"`mccallbellmanmap()` will output a new mapping of `v`.\n"
]
},
{
"cell_type": "code",
"execution_count": 80,
"metadata": {
"ExecuteTime": {
"end_time": "2021-04-13T22:19:49.822000-07:00",
"start_time": "2021-04-14T05:19:49.682Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"mccallbellmanmap (generic function with 1 method)"
]
},
"execution_count": 80,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"\"\"\"\n",
" mccallbellmanmap(v,w,π,c,β)\n",
"\n",
"Iterates the McCall search model bellman equation for with value function v.\n",
"Returns the new value function.\n",
"\n",
"# Arguments\n",
"* `v` vector of values for each wage\n",
"* `w` vector of wages\n",
"* `p` vector of probabilities for each wage\n",
"* `c` unemployment benefits\n",
"* `β` time discount factor\n",
"\n",
"\"\"\"\n",
"\n",
"function mccallbellmanmap(v, w, p, c, β)\n",
" \n",
" # Create empty vector for value storage\n",
" v_out = zeros(length(w))\n",
" \n",
" # Compute value given each wage in vector `w`\n",
" for i in 1:length(w)\n",
" \n",
" # Bellman map: v = max{c + β E(v'), w/(1-β)}\n",
" v_out[i] = max(c + β * dot(p,v), w[i]/(1-β)) \n",
" \n",
" end\n",
" \n",
" # Return value function\n",
" return v_out\n",
"end"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We may also avoids loops entirely by defining an alternative function `mccallbellmanmap_v2` that takes the same exact inputs and gives the same output."
]
},
{
"cell_type": "code",
"execution_count": 81,
"metadata": {
"ExecuteTime": {
"end_time": "2021-04-13T22:19:51.847000-07:00",
"start_time": "2021-04-14T05:19:51.617Z"
},
"scrolled": true
},
"outputs": [
{
"data": {
"text/plain": [
"mccallbellmanmap_v2"
]
},
"execution_count": 81,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"\"\"\"\n",
" mccallbellmanmap_v2(v,w,π,c,β)\n",
"\n",
"Iterates the McCall search model bellman equation for with value function v.\n",
"Returns the new value function.\n",
"\n",
"# Arguments\n",
"* `v` vector of values for each wage\n",
"* `w` vector of wages\n",
"* `p` vector of probabilities for each wage\n",
"* `c` unemployment benefits\n",
"* `β` time discount factor\n",
"\n",
"\"\"\"\n",
"function mccallbellmanmap_v2(v, w, p, c, β)\n",
" \n",
" # Compute value of rejecting the offer\n",
" v_reject = c + β * dot(p,v) # this is a Float\n",
" \n",
" # Compute value of accepting the wage offer\n",
" v_accept = w / (1-β)\n",
" \n",
" # Compute v_reject to v_accept\n",
" v_out = max.(v_reject, v_accept)\n",
" \n",
" # Return value function\n",
" return v_out\n",
"end"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's iterate over the Bellman map to see what it does."
]
},
{
"cell_type": "code",
"execution_count": 82,
"metadata": {
"ExecuteTime": {
"end_time": "2021-04-13T22:19:53.702000-07:00",
"start_time": "2021-04-14T05:19:53.108Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"50×10 Array{Float64,2}:\n",
" 20.0 40.0 60.0 80.0 … 140.0 160.0 180.0 200.0\n",
" 107.5 107.5 107.5 107.5 140.0 160.0 180.0 200.0\n",
" 130.062 130.062 130.062 130.062 140.0 160.0 180.0 200.0\n",
" 141.736 141.736 141.736 141.736 141.736 160.0 180.0 200.0\n",
" 148.554 148.554 148.554 148.554 148.554 160.0 180.0 200.0\n",
" 153.089 153.089 153.089 153.089 … 153.089 160.0 180.0 200.0\n",
" 156.104 156.104 156.104 156.104 156.104 160.0 180.0 200.0\n",
" 158.109 158.109 158.109 158.109 158.109 160.0 180.0 200.0\n",
" 159.443 159.443 159.443 159.443 159.443 160.0 180.0 200.0\n",
" 160.329 160.329 160.329 160.329 160.329 160.329 180.0 200.0\n",
" 160.95 160.95 160.95 160.95 … 160.95 160.95 180.0 200.0\n",
" 161.422 161.422 161.422 161.422 161.422 161.422 180.0 200.0\n",
" 161.781 161.781 161.781 161.781 161.781 161.781 180.0 200.0\n",
" ⋮ ⋱ \n",
" 162.916 162.916 162.916 162.916 162.916 162.916 180.0 200.0\n",
" 162.916 162.916 162.916 162.916 162.916 162.916 180.0 200.0\n",
" 162.916 162.916 162.916 162.916 … 162.916 162.916 180.0 200.0\n",
" 162.916 162.916 162.916 162.916 162.916 162.916 180.0 200.0\n",
" 162.916 162.916 162.916 162.916 162.916 162.916 180.0 200.0\n",
" 162.916 162.916 162.916 162.916 162.916 162.916 180.0 200.0\n",
" 162.916 162.916 162.916 162.916 162.916 162.916 180.0 200.0\n",
" 162.917 162.917 162.917 162.917 … 162.917 162.917 180.0 200.0\n",
" 162.917 162.917 162.917 162.917 162.917 162.917 180.0 200.0\n",
" 162.917 162.917 162.917 162.917 162.917 162.917 180.0 200.0\n",
" 162.917 162.917 162.917 162.917 162.917 162.917 180.0 200.0\n",
" 162.917 162.917 162.917 162.917 162.917 162.917 180.0 200.0"
]
},
"execution_count": 82,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Create discrete uniform wage distribution between 1 and 10\n",
"# where S = # of grid points (evenly-spaced points between 1 and 10)\n",
"S = 10\n",
"w = LinRange(1, 10, S) \n",
"p = ones(S)/S # each p[i] = 1/S ∀i, so that p(w[i]) = p(w[j]) ∀i,j ∈ {1,...,S}\n",
"\n",
"# Set model parameters:\n",
"β = 0.95 # note that lower β causes code to converge faster\n",
"c = 3 # unemployment wage/benefits\n",
"\n",
"# Initiate the value function\n",
"v0 = zeros(S)\n",
"\n",
"# Iterate code J times\n",
"J = 50\n",
"\n",
"# Create and initiate array containing \n",
"# iterated value function values:\n",
"V = zeros(J, S)\n",
"V[1,:] = mccallbellmanmap(v0, w, p, c, β)\n",
"\n",
"# Keep applying Bellman map recursively \n",
"for j in 2:J\n",
" V[j,:] = mccallbellmanmap_v2(V[j-1,:], w, p, c, β)\n",
"end\n",
"\n",
"V"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's plot the columns of `V`:"
]
},
{
"cell_type": "code",
"execution_count": 83,
"metadata": {
"ExecuteTime": {
"end_time": "2021-04-13T22:19:54.667000-07:00",
"start_time": "2021-04-14T05:19:54.590Z"
}
},
"outputs": [
{
"data": {
"image/svg+xml": [
"\n",
"\n"
]
},
"execution_count": 83,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"using Plots\n",
"\n",
"plot(V, xlim = (0,10))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can define a neat function that solves any given finite-state McCall model."
]
},
{
"cell_type": "code",
"execution_count": 84,
"metadata": {
"ExecuteTime": {
"end_time": "2021-04-13T22:19:55.726000-07:00",
"start_time": "2021-04-14T05:19:55.718Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"solvemccall (generic function with 2 methods)"
]
},
"execution_count": 84,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"\"\"\"\n",
" solvemccall(w,π,c,β[,ϵ])\n",
"\n",
"Iterates the McCall search model bellman equation until convergence criterion \n",
"ϵ is reached\n",
"\n",
"# Arguments\n",
"* `w` vector of wages\n",
"* `p` vector of probabilities for each wage\n",
"* `c` unemployment benefits\n",
"* `β` time discount factor\n",
"* `ϵ' Stopping criteria (default 1e-6)\n",
"\"\"\"\n",
"\n",
"function solvemccall(w, p, c, β, ϵ=1e-6)\n",
" \n",
" # Initialize the value function.\n",
" v = w/(1-β)\n",
" \n",
" # Initiate distance from iterated mapping\n",
" diff = 10\n",
" \n",
" # Keep iterating on Bellman map until\n",
" # stopping criterium is satisfied\n",
" while diff > ϵ\n",
" \n",
" # New mapping of the value function\n",
" v_new = mccallbellmanmap(v,w,p,c,β)\n",
" \n",
" # Use supremum norm as measure of distance between \n",
" # old vs. new value function map\n",
" diff = norm(v-v_new,Inf)\n",
" \n",
" # Store new value function as original\n",
" # before loop potentially restarts\n",
" v = v_new \n",
" end\n",
" \n",
" # Return converged value function\n",
" return v\n",
"end"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can use our function to solve the McCall model defined by the previously-declared parameters (which I repeat here for ease of reference). "
]
},
{
"cell_type": "code",
"execution_count": 86,
"metadata": {
"ExecuteTime": {
"end_time": "2021-04-13T22:20:01.600000-07:00",
"start_time": "2021-04-14T05:20:01.594Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"10-element Array{Float64,1}:\n",
" 162.91666382521822\n",
" 162.91666382521822\n",
" 162.91666382521822\n",
" 162.91666382521822\n",
" 162.91666382521822\n",
" 162.91666382521822\n",
" 162.91666382521822\n",
" 162.91666382521822\n",
" 179.99999999999983\n",
" 199.99999999999983"
]
},
"execution_count": 86,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"S = 10\n",
"w = LinRange(1, 10, S) \n",
"p = ones(S)/S \n",
"β = 0.95 \n",
"c = 3\n",
"\n",
"# Solve the model:\n",
"v = solvemccall(w,p,c,β)"
]
},
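  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sketch (reusing the `v`, `w`, `p`, `c`, and `β` defined above), we can also read the optimal policy off the converged value function: the worker accepts whenever the accept value $w_i/(1-\\beta)$ is at least the reject value $h = c + \\beta \\sum_j v_j p_j$, so the reservation wage is $(1-\\beta) h$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Scalar value of rejecting the current offer\n",
    "h = c + β * dot(p, v)\n",
    "\n",
    "# Reservation wage: accept iff w/(1-β) ≥ h, i.e. w ≥ (1-β)h\n",
    "w_bar = (1 - β) * h\n",
    "\n",
    "# Acceptance decision for each wage on the grid\n",
    "accept = w .>= w_bar\n",
    "\n",
    "w_bar, accept"
   ]
  },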
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Comparative Statics"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's vary $\\beta$ while keeping all else constant, and check what happens to $V(w)$:"
]
},
{
"cell_type": "code",
"execution_count": 87,
"metadata": {
"ExecuteTime": {
"end_time": "2021-04-13T22:20:03.754000-07:00",
"start_time": "2021-04-14T05:20:03.573Z"
}
},
"outputs": [
{
"data": {
"image/svg+xml": [
"\n",
"\n"
]
},
"execution_count": 87,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"using Plots, LaTeXStrings\n",
"\n",
"S = 10\n",
"w = LinRange(1, 10, S) \n",
"p = ones(S)/S \n",
"β = 0.95 \n",
"c = 3\n",
"\n",
"g(θ) = solvemccall(w,p,c,θ)\n",
"\n",
"fig = plot(); # Limit the plot y-axis\n",
" \n",
"for i in 1:length(w)\n",
" plot!(fig, 0.85:0.01:0.95, θ -> g(θ)[i])\n",
"end \n",
"\n",
"fig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"What if we, instead, vary $c$?"
]
},
{
"cell_type": "code",
"execution_count": 88,
"metadata": {
"ExecuteTime": {
"end_time": "2021-04-13T22:20:05.400000-07:00",
"start_time": "2021-04-14T05:20:04.875Z"
}
},
"outputs": [
{
"data": {
"image/svg+xml": [
"\n",
"\n"
]
},
"execution_count": 88,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"using Plots, LaTeXStrings\n",
"\n",
"S = 10\n",
"w = LinRange(1, 10, S) \n",
"p = ones(S)/S \n",
"β = 0.95 \n",
"c = 3\n",
"\n",
"g(θ) = solvemccall(w,p,θ,β)\n",
"\n",
"fig = plot(); # Limit the plot y-axis\n",
" \n",
"for i in 1:length(w)\n",
" plot!(fig, 1:0.01:10, θ -> g(θ)[i])\n",
"end \n",
"\n",
"fig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"----"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Julia 1.5.4",
"language": "julia",
"name": "julia-1.5"
},
"language_info": {
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
"version": "1.5.4"
},
"latex_envs": {
"LaTeX_envs_menu_present": true,
"autoclose": false,
"autocomplete": true,
"bibliofile": "biblio.bib",
"cite_by": "apalike",
"current_citInitial": 1,
"eqLabelWithNumbers": true,
"eqNumInitial": 1,
"hotkeys": {
"equation": "Ctrl-E",
"itemize": "Ctrl-I"
},
"labels_anchors": false,
"latex_user_defs": false,
"report_style_numbering": false,
"user_envs_cfg": false
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 4
}