{ "cells": [ { "cell_type": "markdown", "id": "427e6820", "metadata": {}, "source": [ "\n", "" ] }, { "cell_type": "markdown", "id": "2b218741", "metadata": {}, "source": [ "# Job Search VI: On-the-Job Search\n", "\n", "\n", "" ] }, { "cell_type": "markdown", "id": "f7ba69ef", "metadata": {}, "source": [ "## Contents\n", "\n", "- [Job Search VI: On-the-Job Search](#Job-Search-VI:-On-the-Job-Search) \n", " - [Overview](#Overview) \n", " - [Model](#Model) \n", " - [Implementation](#Implementation) \n", " - [Solving for Policies](#Solving-for-Policies) \n", " - [Exercises](#Exercises) " ] }, { "cell_type": "markdown", "id": "77e38563", "metadata": {}, "source": [ "## Overview\n", "\n", "In this section, we solve a simple on-the-job search model\n", "\n", "- based on [[Ljungqvist and Sargent, 2018](https://python.quantecon.org/zreferences.html#id185)], exercise 6.18, and [[Jovanovic, 1979](https://python.quantecon.org/zreferences.html#id99)] \n", "\n", "\n", "Let’s start with some imports:" ] }, { "cell_type": "code", "execution_count": null, "id": "a63f4bea", "metadata": { "hide-output": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import scipy.stats as stats\n", "from numba import njit, prange" ] }, { "cell_type": "markdown", "id": "58353cf9", "metadata": {}, "source": [ "### Model Features\n", "\n", "\n", "\n", "- job-specific human capital accumulation combined with on-the-job search \n", "- infinite-horizon dynamic programming with one state variable and two controls " ] }, { "cell_type": "markdown", "id": "ce92c8f0", "metadata": {}, "source": [ "## Model\n", "\n", "\n", "\n", "Let $ x_t $ denote the time-$ t $ job-specific human capital of a worker employed at a given firm and let $ w_t $ denote current wages.\n", "\n", "Let $ w_t = x_t(1 - s_t - \\phi_t) $, where\n", "\n", "- $ \\phi_t $ is investment in job-specific human capital for the current role and \n", "- $ s_t $ is search effort, devoted to obtaining new offers from other firms. 
\n", "\n", "\n", "For as long as the worker remains in the current job, the evolution of $ \{x_t\} $ is given by $ x_{t+1} = g(x_t, \phi_t) $.\n", "\n",
"When search effort at $ t $ is $ s_t $, the worker receives a new job offer with probability $ \pi(s_t) \in [0, 1] $.\n", "\n",
"The value of the offer, measured in job-specific human capital, is $ u_{t+1} $, where $ \{u_t\} $ is IID with common distribution $ f $.\n", "\n",
"The worker can reject the current offer and continue with the existing job.\n", "\n",
"Hence $ x_{t+1} = u_{t+1} $ if he/she accepts and $ x_{t+1} = g(x_t, \phi_t) $ otherwise.\n", "\n",
"Let $ b_{t+1} \in \{0,1\} $ be a binary random variable, where $ b_{t+1} = 1 $ indicates that the worker receives an offer at the end of time $ t $.\n", "\n",
"We can write\n", "\n", "\n", "\n",
"$$\n", "x_{t+1}\n", "= (1 - b_{t+1}) g(x_t, \phi_t) + b_{t+1}\n", "    \max \{ g(x_t, \phi_t), u_{t+1}\} \tag{32.1}\n", "$$\n", "\n",
"The agent’s objective is to maximize the expected discounted sum of wages by choice of the controls $ \{s_t\} $ and $ \{\phi_t\} $.\n", "\n",
"Taking the expectation of $ v(x_{t+1}) $ and using [(32.1)](#equation-jd),\n", "the Bellman equation for this problem can be written as\n", "\n", "\n", "\n",
"$$\n", "v(x)\n", "= \max_{s + \phi \leq 1}\n", "    \left\{\n", "        x (1 - s - \phi) + \beta (1 - \pi(s)) v[g(x, \phi)] +\n", "        \beta \pi(s) \int v[g(x, \phi) \vee u] f(du)\n", "    \right\} \tag{32.2}\n", "$$\n", "\n",
"Here nonnegativity of $ s $ and $ \phi $ is understood, while\n", "$ a \vee b := \max\{a, b\} $." ] },
{ "cell_type": "markdown", "id": "60b72e6b", "metadata": {}, "source": [ "### Parameterization\n", "\n", "\n", "\n",
"In the implementation below, we will focus on the parameterization\n", "\n",
"$$\n", "g(x, \phi) = A (x \phi)^{\alpha},\n", "\quad\n", "\pi(s) = \sqrt s\n", "\quad \text{and} \quad\n", "f = \text{Beta}(2, 2)\n", "$$\n", "\n",
"with default parameter values\n", "\n",
"- $ A = 1.4 $ \n", "- $ \alpha = 0.6 $ \n", "- $ \beta = 0.96 $ \n", "\n", "\n",
"The $ \text{Beta}(2,2) $ distribution is supported on $ (0,1) $, with a unimodal, symmetric density peaked at 0.5.\n", "\n", "\n", "" ] },
{ "cell_type": "markdown", "id": "0740b67a", "metadata": {}, "source": [ "### Back-of-the-Envelope Calculations\n", "\n",
"Before we solve the model, let’s make some quick calculations that\n", "provide intuition on what the solution should look like.\n", "\n",
"To begin, observe that the worker has two instruments to build\n", "capital and hence wages:\n", "\n",
"1. invest in capital specific to the current job via $ \phi $ \n", "1. search for a new job with a better job-specific capital match via $ s $ \n", "\n", "\n",
"Since wages are $ x (1 - s - \phi) $, the marginal cost of investment via either $ \phi $ or $ s $ is identical.\n", "\n",
"Our risk-neutral worker should focus on whichever instrument has the higher expected return.\n", "\n",
"The relative expected return will depend on $ x $.\n", "\n",
"For example, suppose first that $ x = 0.05 $\n", "\n",
"- If $ s=1 $ and $ \phi = 0 $, then since $ g(x,\phi) = 0 $,\n", "  taking expectations of [(32.1)](#equation-jd) gives expected next period capital equal to $ \pi(s) \mathbb{E} u\n", "  = \mathbb{E} u = 0.5 $. \n",
"- If $ s=0 $ and $ \phi=1 $, then next period capital is $ g(x, \phi) = g(0.05, 1) \approx 0.23 $. \n",
"\n", "\n", "Both rates of return are good, but the return from search is better.\n", "\n",
"Next, suppose that $ x = 0.4 $\n", "\n",
"- If $ s=1 $ and $ \phi = 0 $, then expected next period capital is again $ 0.5 $ \n",
"- If $ s=0 $ and $ \phi = 1 $, then $ g(x, \phi) = g(0.4, 1) \approx 0.8 $ \n", "\n", "\n",
"The return from investment via $ \phi $ dominates the expected return from search.\n", "\n",
"Combining these observations gives us two informal predictions:\n", "\n",
"1. At any given state $ x $, the two controls $ \phi $ and $ s $ will\n", "   function primarily as substitutes, with the worker focusing on whichever instrument has the higher expected return. \n",
"1. For sufficiently small $ x $, search will be preferable to investment in\n", "   job-specific human capital. For larger $ x $, the reverse will be true. \n", "\n", "\n",
"Now let’s turn to the implementation and see if we can match our predictions." ] },
{ "cell_type": "markdown", "id": "b51fcec3", "metadata": {}, "source": [ "## Implementation\n", "\n", "\n", "\n", "We will set up a class `JVWorker` that holds the parameters of the model described above." ] },
{ "cell_type": "code", "execution_count": null, "id": "60260f37", "metadata": { "hide-output": false }, "outputs": [], "source": [ "class JVWorker:\n",
"    r\"\"\"\n",
"    A Jovanovic-type model of employment with on-the-job search.\n",
"\n",
"    \"\"\"\n",
"\n",
"    def __init__(self,\n",
"                 A=1.4,\n",
"                 α=0.6,\n",
"                 β=0.96,         # Discount factor\n",
"                 π=np.sqrt,      # Search effort function\n",
"                 a=2,            # Parameter of f\n",
"                 b=2,            # Parameter of f\n",
"                 grid_size=50,\n",
"                 mc_size=100,\n",
"                 ɛ=1e-4):\n",
"\n",
"        self.A, self.α, self.β, self.π = A, α, β, π\n",
"        self.mc_size, self.ɛ = mc_size, ɛ\n",
"\n",
"        self.g = njit(lambda x, ϕ: A * (x * ϕ)**α)    # Transition function\n",
"        self.f_rvs = np.random.beta(a, b, mc_size)    # Monte Carlo draws from f\n",
"\n",
"        # Max of grid is the max of a large quantile value for f and the\n",
"        # fixed point y = g(y, 1)\n",
"        grid_max = max(A**(1 / (1 - α)), stats.beta(a, b).ppf(1 - ɛ))\n",
"\n",
"        # Human capital grid\n",
"        self.x_grid = np.linspace(ɛ, grid_max, grid_size)" ] },
{ "cell_type": "markdown", "id": "b52182d0", "metadata": {}, "source": [ "The function `operator_factory` takes an instance of this class and returns a\n", "jitted version of the Bellman operator `T`, i.e.\n", "\n",
"$$\n", "Tv(x)\n", "= \max_{s + \phi \leq 1} w(s, \phi)\n", "$$\n", "\n",
"where\n", "\n", "\n", "\n",
"$$\n", "w(s, \phi)\n", " := x (1 - s - \phi) + \beta (1 - \pi(s)) v[g(x, \phi)] +\n", "    \beta \pi(s) \int v[g(x, \phi) \vee u] f(du) \tag{32.3}\n", "$$\n", "\n",
"When we represent $ v $, it will be with a NumPy array `v` giving values on the grid `x_grid`.\n", "\n",
"But to evaluate the right-hand side of [(32.3)](#equation-defw), we need a function, so\n", "we replace the arrays `v` and `x_grid` with a function `v_func` that gives linear\n", "interpolation of `v` on `x_grid`.\n", "\n",
"Inside the `for` loop, for each `x` in the grid over the state space, we\n", "set up the function $ w(z) = w(s, \phi) $ defined in [(32.3)](#equation-defw).\n", "\n",
"The function is maximized over all feasible $ (s, \phi) $ pairs.\n", "\n",
"Another function, `get_greedy`, returns the optimal choice of $ s $ and $ \phi $\n", "at each $ x $, given a value function."
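] },
{ "cell_type": "markdown", "id": "f3a1c0d7", "metadata": {}, "source": [ "Before looking at the jitted code, here is a minimal sketch of the two numerical ingredients just described: linear interpolation of `v` on `x_grid` via `np.interp`, and a Monte Carlo average over draws from $ f $ that approximates the integral in [(32.3)](#equation-defw).\n", "\n",
"The objects below (`x_grid_demo`, `v_demo`, `u_draws` and the stand-in value `gx`) are illustrative placeholders only; they are not used elsewhere in the lecture." ] },
{ "cell_type": "code", "execution_count": null, "id": "a7c31e55", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Illustrative sketch only -- not the jitted implementation below\n",
"x_grid_demo = np.linspace(1e-4, 2.0, 50)               # grid for human capital\n",
"v_demo = 0.5 * x_grid_demo                             # a trial value function on the grid\n",
"v_func = lambda y: np.interp(y, x_grid_demo, v_demo)   # linear interpolation of v\n",
"\n",
"u_draws = np.random.beta(2, 2, 100)                    # draws from f = Beta(2, 2)\n",
"gx = 0.3                                               # stand-in for g(x, ϕ)\n",
"\n",
"# Monte Carlo approximation of the integral in (32.3)\n",
"np.mean(v_func(np.maximum(gx, u_draws)))"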
] }, { "cell_type": "code", "execution_count": null, "id": "bdac8bd2", "metadata": { "hide-output": false }, "outputs": [], "source": [ "def operator_factory(jv, parallel_flag=True):\n", "\n", " \"\"\"\n", " Returns a jitted version of the Bellman operator T\n", "\n", " jv is an instance of JVWorker\n", "\n", " \"\"\"\n", "\n", " π, β = jv.π, jv.β\n", " x_grid, ɛ, mc_size = jv.x_grid, jv.ɛ, jv.mc_size\n", " f_rvs, g = jv.f_rvs, jv.g\n", "\n", " @njit\n", " def state_action_values(z, x, v):\n", " s, ϕ = z\n", " v_func = lambda x: np.interp(x, x_grid, v)\n", "\n", " integral = 0\n", " for m in range(mc_size):\n", " u = f_rvs[m]\n", " integral += v_func(max(g(x, ϕ), u))\n", " integral = integral / mc_size\n", "\n", " q = π(s) * integral + (1 - π(s)) * v_func(g(x, ϕ))\n", " return x * (1 - ϕ - s) + β * q\n", "\n", " @njit(parallel=parallel_flag)\n", " def T(v):\n", " \"\"\"\n", " The Bellman operator\n", " \"\"\"\n", "\n", " v_new = np.empty_like(v)\n", " for i in prange(len(x_grid)):\n", " x = x_grid[i]\n", "\n", " # Search on a grid\n", " search_grid = np.linspace(ɛ, 1, 15)\n", " max_val = -1\n", " for s in search_grid:\n", " for ϕ in search_grid:\n", " current_val = state_action_values((s, ϕ), x, v) if s + ϕ <= 1 else -1\n", " if current_val > max_val:\n", " max_val = current_val\n", " v_new[i] = max_val\n", "\n", " return v_new\n", "\n", " @njit\n", " def get_greedy(v):\n", " \"\"\"\n", " Computes the v-greedy policy of a given function v\n", " \"\"\"\n", " s_policy, ϕ_policy = np.empty_like(v), np.empty_like(v)\n", "\n", " for i in range(len(x_grid)):\n", " x = x_grid[i]\n", " # Search on a grid\n", " search_grid = np.linspace(ɛ, 1, 15)\n", " max_val = -1\n", " for s in search_grid:\n", " for ϕ in search_grid:\n", " current_val = state_action_values((s, ϕ), x, v) if s + ϕ <= 1 else -1\n", " if current_val > max_val:\n", " max_val = current_val\n", " max_s, max_ϕ = s, ϕ\n", " s_policy[i], ϕ_policy[i] = max_s, max_ϕ\n", " return s_policy, ϕ_policy\n", "\n", " return T, get_greedy" ] }, { "cell_type": "markdown", "id": "e394bb81", "metadata": {}, "source": [ "To solve the model, we will write a function that uses the Bellman operator\n", "and iterates to find a fixed point." 
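] },
{ "cell_type": "markdown", "id": "9b4e2d10", "metadata": {}, "source": [ "Before doing so, as a quick optional check (an illustrative aside, not part of the lecture’s solution routine), we can apply the operator once to the initial guess used by `solve_model` below and look at the size of the update." ] },
{ "cell_type": "code", "execution_count": null, "id": "5c8d7b21", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Optional sanity check: one application of the Bellman operator\n",
"jv_check = JVWorker()\n",
"T_check, _ = operator_factory(jv_check)\n",
"v0 = jv_check.x_grid * 0.5          # same initial guess as in solve_model below\n",
"v1 = T_check(v0)\n",
"np.max(np.abs(v1 - v0))             # sup-norm size of the first update"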
] }, { "cell_type": "code", "execution_count": null, "id": "87e74ffa", "metadata": { "hide-output": false }, "outputs": [], "source": [ "def solve_model(jv,\n", " use_parallel=True,\n", " tol=1e-4,\n", " max_iter=1000,\n", " verbose=True,\n", " print_skip=25):\n", "\n", " \"\"\"\n", " Solves the model by value function iteration\n", "\n", " * jv is an instance of JVWorker\n", "\n", " \"\"\"\n", "\n", " T, _ = operator_factory(jv, parallel_flag=use_parallel)\n", "\n", " # Set up loop\n", " v = jv.x_grid * 0.5 # Initial condition\n", " i = 0\n", " error = tol + 1\n", "\n", " while i < max_iter and error > tol:\n", " v_new = T(v)\n", " error = np.max(np.abs(v - v_new))\n", " i += 1\n", " if verbose and i % print_skip == 0:\n", " print(f\"Error at iteration {i} is {error}.\")\n", " v = v_new\n", "\n", " if error > tol:\n", " print(\"Failed to converge!\")\n", " elif verbose:\n", " print(f\"\\nConverged in {i} iterations.\")\n", "\n", " return v_new" ] }, { "cell_type": "markdown", "id": "0d8e9c7c", "metadata": {}, "source": [ "## Solving for Policies\n", "\n", "\n", "\n", "Let’s generate the optimal policies and see what they look like.\n", "\n", "\n", "" ] }, { "cell_type": "code", "execution_count": null, "id": "28804fc0", "metadata": { "hide-output": false }, "outputs": [], "source": [ "jv = JVWorker()\n", "T, get_greedy = operator_factory(jv)\n", "v_star = solve_model(jv)\n", "s_star, ϕ_star = get_greedy(v_star)" ] }, { "cell_type": "markdown", "id": "e58174be", "metadata": {}, "source": [ "Here are the plots:" ] }, { "cell_type": "code", "execution_count": null, "id": "8d2ebdac", "metadata": { "hide-output": false }, "outputs": [], "source": [ "plots = [s_star, ϕ_star, v_star]\n", "titles = [\"s policy\", \"ϕ policy\", \"value function\"]\n", "\n", "fig, axes = plt.subplots(3, 1, figsize=(12, 12))\n", "\n", "for ax, plot, title in zip(axes, plots, titles):\n", " ax.plot(jv.x_grid, plot)\n", " ax.set(title=title)\n", " ax.grid()\n", "\n", "axes[-1].set_xlabel(\"x\")\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "75577aaf", "metadata": {}, "source": [ "The horizontal axis is the state $ x $, while the vertical axis gives $ s(x) $ and $ \\phi(x) $.\n", "\n", "Overall, the policies match well with our predictions from [above](#jvboecalc)\n", "\n", "- Worker switches from one investment strategy to the other depending on relative return. \n", "- For low values of $ x $, the best option is to search for a new job. \n", "- Once $ x $ is larger, worker does better by investing in human capital specific to the current position. 
" ] }, { "cell_type": "markdown", "id": "b18f2b44", "metadata": {}, "source": [ "## Exercises" ] }, { "cell_type": "markdown", "id": "612a5938", "metadata": {}, "source": [ "## Exercise 32.1\n", "\n", "Let’s look at the dynamics for the state process $ \\{x_t\\} $ associated with these policies.\n", "\n", "The dynamics are given by [(32.1)](#equation-jd) when $ \\phi_t $ and $ s_t $ are\n", "chosen according to the optimal policies, and $ \\mathbb{P}\\{b_{t+1} = 1\\}\n", "= \\pi(s_t) $.\n", "\n", "Since the dynamics are random, analysis is a bit subtle.\n", "\n", "One way to do it is to plot, for each $ x $ in a relatively fine grid\n", "called `plot_grid`, a\n", "large number $ K $ of realizations of $ x_{t+1} $ given $ x_t =\n", "x $.\n", "\n", "Plot this with one dot for each realization, in the form of a 45 degree\n", "diagram, setting" ] }, { "cell_type": "code", "execution_count": null, "id": "d12e569b", "metadata": { "hide-output": false }, "outputs": [], "source": [ "jv = JVWorker(grid_size=25, mc_size=50)\n", "plot_grid_max, plot_grid_size = 1.2, 100\n", "plot_grid = np.linspace(0, plot_grid_max, plot_grid_size)\n", "fig, ax = plt.subplots()\n", "ax.set_xlim(0, plot_grid_max)\n", "ax.set_ylim(0, plot_grid_max)" ] }, { "cell_type": "markdown", "id": "495aa4de", "metadata": {}, "source": [ "By examining the plot, argue that under the optimal policies, the state\n", "$ x_t $ will converge to a constant value $ \\bar x $ close to unity.\n", "\n", "Argue that at the steady state, $ s_t \\approx 0 $ and $ \\phi_t \\approx 0.6 $." ] }, { "cell_type": "markdown", "id": "28d65237", "metadata": {}, "source": [ "## Solution to[ Exercise 32.1](https://python.quantecon.org/#jv_ex1)\n", "\n", "Here’s code to produce the 45 degree diagram" ] }, { "cell_type": "code", "execution_count": null, "id": "01d2f4f6", "metadata": { "hide-output": false }, "outputs": [], "source": [ "jv = JVWorker(grid_size=25, mc_size=50)\n", "π, g, f_rvs, x_grid = jv.π, jv.g, jv.f_rvs, jv.x_grid\n", "T, get_greedy = operator_factory(jv)\n", "v_star = solve_model(jv, verbose=False)\n", "s_policy, ϕ_policy = get_greedy(v_star)\n", "\n", "# Turn the policy function arrays into actual functions\n", "s = lambda y: np.interp(y, x_grid, s_policy)\n", "ϕ = lambda y: np.interp(y, x_grid, ϕ_policy)\n", "\n", "def h(x, b, u):\n", " return (1 - b) * g(x, ϕ(x)) + b * max(g(x, ϕ(x)), u)\n", "\n", "\n", "plot_grid_max, plot_grid_size = 1.2, 100\n", "plot_grid = np.linspace(0, plot_grid_max, plot_grid_size)\n", "fig, ax = plt.subplots(figsize=(8, 8))\n", "ticks = (0.25, 0.5, 0.75, 1.0)\n", "ax.set(xticks=ticks, yticks=ticks,\n", " xlim=(0, plot_grid_max),\n", " ylim=(0, plot_grid_max),\n", " xlabel='$x_t$', ylabel='$x_{t+1}$')\n", "\n", "ax.plot(plot_grid, plot_grid, 'k--', alpha=0.6) # 45 degree line\n", "for x in plot_grid:\n", " for i in range(jv.mc_size):\n", " b = 1 if np.random.uniform(0, 1) < π(s(x)) else 0\n", " u = f_rvs[i]\n", " y = h(x, b, u)\n", " ax.plot(x, y, 'go', alpha=0.25)\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "3c96189e", "metadata": {}, "source": [ "Looking at the dynamics, we can see that\n", "\n", "- If $ x_t $ is below about 0.2 the dynamics are random, but\n", " $ x_{t+1} > x_t $ is very likely. \n", "- As $ x_t $ increases the dynamics become deterministic, and\n", " $ x_t $ converges to a steady state value close to 1. 
\n", "\n", "\n", "Referring back to the figure [here](#jv-policies), we see that $ x_t \approx 1 $ means that\n", "$ s_t = s(x_t) \approx 0 $ and\n", "$ \phi_t = \phi(x_t) \approx 0.6 $." ] },
{ "cell_type": "markdown", "id": "8f272324", "metadata": {}, "source": [ "## Exercise 32.2\n", "\n",
"In Exercise 32.1, we found that $ s_t $ converges to zero\n", "and $ \phi_t $ converges to about 0.6.\n", "\n",
"Since these results were calculated at a value of $ \beta $ close to\n", "one, let’s compare them to the best choice for an *infinitely* patient worker.\n", "\n",
"Intuitively, an infinitely patient worker would like to maximize steady state\n", "wages, which are a function of steady state capital.\n", "\n",
"You can take it as given—it’s certainly true—that the infinitely patient worker does not\n", "search in the long run (i.e., $ s_t = 0 $ for large $ t $).\n", "\n",
"Thus, given $ \phi $, steady state capital is the positive fixed point\n", "$ x^*(\phi) $ of the map $ x \mapsto g(x, \phi) $.\n", "\n",
"Steady state wages can be written as $ w^*(\phi) = x^*(\phi) (1 - \phi) $.\n", "\n",
"Graph $ w^*(\phi) $ with respect to $ \phi $, and examine the best\n", "choice of $ \phi $.\n", "\n",
"Can you give a rough interpretation for the value that you see?" ] },
{ "cell_type": "markdown", "id": "d38326dc", "metadata": {}, "source": [ "## Solution to[ Exercise 32.2](https://python.quantecon.org/#jv_ex2)\n", "\n", "The figure can be produced as follows" ] },
{ "cell_type": "code", "execution_count": null, "id": "870659d1", "metadata": { "hide-output": false }, "outputs": [], "source": [ "jv = JVWorker()\n",
"\n",
"def xbar(ϕ):\n",
"    # Positive fixed point of x -> g(x, ϕ), i.e. steady state capital given ϕ\n",
"    A, α = jv.A, jv.α\n",
"    return (A * ϕ**α)**(1 / (1 - α))\n",
"\n",
"ϕ_grid = np.linspace(0, 1, 100)\n",
"fig, ax = plt.subplots(figsize=(9, 7))\n",
"ax.set(xlabel=r'$\phi$')\n",
"ax.plot(ϕ_grid, [xbar(ϕ) * (1 - ϕ) for ϕ in ϕ_grid], label=r'$w^*(\phi)$')\n",
"ax.legend()\n",
"\n",
"plt.show()" ] },
{ "cell_type": "markdown", "id": "ffd2e46e", "metadata": {}, "source": [ "Observe that the maximizer is around 0.6.\n", "\n",
"This is similar to the long-run value for $ \phi $ obtained in\n", "Exercise 32.1.\n", "\n",
"Hence the behavior of the infinitely patient worker is similar to that\n", "of the worker with $ \beta = 0.96 $.\n", "\n",
"This seems reasonable and helps us confirm that our dynamic programming\n", "solutions are probably correct." ] } ],
"metadata": { "date": 1714442505.923959, "filename": "jv.md", "kernelspec": { "display_name": "Python", "language": "python3", "name": "python3" }, "title": "Job Search VI: On-the-Job Search" }, "nbformat": 4, "nbformat_minor": 5 }