{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Alternative Combinations of Parameter Values\n", "\n", "[![badge](https://img.shields.io/badge/Launch%20using%20-Econ--ARK-blue)](https://econ-ark.org/materials/alternative-combos-of-parameter-values#launch)\n", "\n", "## Introduction\n", "\n", "The notebook \"Micro-and-Macro-Implications-of-Very-Impatient-HHs\" is an exercise that demonstrates the consequences of changing a key parameter of the [cstwMPC](http://www.econ2.jhu.edu/people/ccarroll/papers/cstwMPC) model, the time preference factor $\\beta$.\n", "\n", "The [REMARK](https://github.com/econ-ark/REMARK) `SolvingMicroDSOPs` reproduces the last figure in the [SolvingMicroDSOPs](http://www.econ2.jhu.edu/people/ccarroll/SolvingMicroDSOPs) lecture notes, which shows that there are classes of alternate values of $\\beta$ and $\\rho$ that fit the data almost as well as the exact 'best fit' combination.\n", "\n", "Inspired by this comparison, this notebook asks you to examine the consequences for:\n", "\n", "* The consumption function\n", "* The distribution of wealth\n", "\n", "Of _joint_ changes in $\\beta$ and $\\rho$ together.\n", "\n", "One way you can do this is to construct a list of alternative values of $\\rho$ (say, values that range upward from the default value of $\\rho$, in increments of 0.2, all the way to $\\rho=5$). 
Then for each of these values of $\\rho$ you will find the value of $\\beta$ that leads to the same value of target market resources, $\\check{m}$.\n", "\n", "As a reminder, $\\check{m}$ is defined as the value of $m$ at which, when the consumer spends the optimal amount ${c}$, the expected level of ${m}$ next period equals its current value:\n", "\n", "$\\mathbb{E}_{t}[{m}_{t+1}] = {m}_{t}$\n", "\n", "Other notes:\n", "* The cstwMPC model solves and simulates the problems of consumers with 7 different values of $\\beta$\n", " * You should do your exercise using the middle value of $\\beta$ from that exercise:\n", " * `DiscFac_mean = 0.9855583`\n", "* You are likely to run into the problem, as you experiment with parameter values, that you have asked HARK to solve a model that does not satisfy one of the impatience conditions required for the model to have a solution. Those conditions are explained intuitively in the [TractableBufferStock](http://www.econ2.jhu.edu/people/ccarroll/public/lecturenotes/consumption/TractableBufferStock/) model. 
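Before asking HARK to solve, you can screen candidate parameter pairs yourself with a rough perfect-foresight version of these checks (a sketch: `gic_ric_ok` is a hypothetical helper, its defaults mirror the calibration below, and the exact conditions in the paper also correct for the income shocks):

```python
def gic_ric_ok(DiscFac, CRRA,
               Rfree=1.01 / (1.0 - 1.0 / 160.0),
               LivPrb=1.0 - 1.0 / 160.0,
               PermGroFac=1.0):
    # Absolute patience factor; mortality enters through the effective discount factor
    thorn = (Rfree * DiscFac * LivPrb) ** (1.0 / CRRA)
    gic = thorn / PermGroFac < 1.0  # Growth Impatience Condition (GIC)
    ric = thorn / Rfree < 1.0       # Return Impatience Condition (RIC)
    return gic and ric
```

With the defaults above, `gic_ric_ok(0.9855583, 1.0)` returns `True`, while pushing `DiscFac` up to 0.999 violates the GIC.
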
The versions of the impatience conditions that apply to the $\\texttt{IndShockConsumerType}$ model can be found in the paper [BufferStockTheory](http://www.econ2.jhu.edu/people/ccarroll/papers/BufferStockTheory), table 2.\n", " * The conditions that need to be satisfied are:\n", " * The Growth Impatience Condition (GIC)\n", " * The Return Impatience Condition (RIC)\n", "* Please accumulate the solved consumers' problems in a list called `MyTypes`\n", " * For compatibility with a further part of the assignment below" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# This cell merely imports and sets up some basic functions and packages\n", "from HARK.utilities import get_lorenz_shares, get_percentiles\n", "from tqdm import tqdm\n", "import numpy as np" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "# Import IndShockConsumerType\n", "from HARK.ConsumptionSaving.ConsIndShockModel import IndShockConsumerType\n", "\n", "# Define a dictionary with calibrated parameters\n", "cstwMPC_calibrated_parameters = {\n", " \"CRRA\": 1.0, # Coefficient of relative risk aversion\n", " \"Rfree\": 1.01 / (1.0 - 1.0 / 160.0), # Interest factor on assets (survival-adjusted)\n", " # Permanent income growth factor (no perm growth)\n", " \"PermGroFac\": [1.000**0.25],\n", " \"PermGroFacAgg\": 1.0,\n", " \"BoroCnstArt\": 0.0,\n", " \"CubicBool\": False,\n", " \"vFuncBool\": False,\n", " \"PermShkStd\": [\n", " (0.01 * 4 / 11) ** 0.5\n", " ], # Standard deviation of permanent shocks to income\n", " \"PermShkCount\": 5, # Number of points in permanent income shock grid\n", " \"TranShkStd\": [\n", " (0.01 * 4) ** 0.5\n", " ], # Standard deviation of transitory shocks to income\n", " \"TranShkCount\": 5, # Number of points in transitory income shock grid\n", " \"UnempPrb\": 0.07, # Probability of unemployment while working\n", " \"IncUnemp\": 0.15, # Unemployment benefit replacement rate\n", " \"UnempPrbRet\": 
0.0,\n", " \"IncUnempRet\": 0.0,\n", " \"aXtraMin\": 0.00001, # Minimum end-of-period assets in grid\n", " \"aXtraMax\": 40, # Maximum end-of-period assets in grid\n", " \"aXtraCount\": 32, # Number of points in assets grid\n", " \"aXtraExtra\": [None],\n", " \"aXtraNestFac\": 3, # Number of times to 'exponentially nest' when constructing assets grid\n", " \"LivPrb\": [1.0 - 1.0 / 160.0], # Survival probability\n", " \"DiscFac\": 0.97, # Default intertemporal discount factor; dummy value, will be overwritten\n", " \"cycles\": 0,\n", " \"T_cycle\": 1,\n", " \"T_retire\": 0,\n", " # Number of periods to simulate (idiosyncratic shocks model, perpetual youth)\n", " \"T_sim\": 1200,\n", " \"T_age\": 400,\n", " \"IndL\": 10.0 / 9.0, # Labor supply per individual (constant),\n", " \"aNrmInitMean\": np.log(0.00001),\n", " \"aNrmInitStd\": 0.0,\n", " \"pLvlInitMean\": 0.0,\n", " \"pLvlInitStd\": 0.0,\n", " \"AgentCount\": 10000,\n", "}" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "# Construct a list of solved consumers' problems, IndShockConsumerType is just a place holder\n", "MyTypes = [IndShockConsumerType(verbose=0, **cstwMPC_calibrated_parameters)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Simulating the Distribution of Wealth for Alternative Combinations\n", "\n", "You should now have constructed a list of consumer types all of whom have the same _target_ level of market resources $\\check{m}$.\n", "\n", "But the fact that everyone has the same target ${m}$ does not mean that the _distribution_ of ${m}$ will be the same for all of these consumer types.\n", "\n", "In the code block below, fill in the contents of the loop to solve and simulate each agent type for many periods. To do this, you should invoke the methods $\\texttt{solve}$, $\\texttt{initialize\\_sim}$, and $\\texttt{simulate}$ in that order. 
Simulating for 1200 quarters (300 years) will approximate the long-run distribution of wealth in the population." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:03<00:00, 3.03s/it]\n" ] } ], "source": [ "for ThisType in tqdm(MyTypes):\n", " ThisType.solve()\n", " ThisType.initialize_sim()\n", " ThisType.simulate()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that you have solved and simulated these consumers, make a plot that shows the relationship between your alternative values of $\\rho$ and the mean level of assets." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "# To help you out, we have given you the command needed to construct a list of the levels of assets for all consumers\n", "aLvl_all = np.concatenate([ThisType.state_now[\"aLvl\"] for ThisType in MyTypes])\n", "\n", "# You should take the mean of aLvl for each consumer in MyTypes, divide it by the mean across all simulations,\n", "# and then plot the ratio of the values of mean(aLvl) for each group against the value of $\\rho$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Interpret\n", "Here, you should attempt to give an intuitive explanation of the results you see in the figure you just constructed." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Distribution of Wealth...\n", "\n", "Your next exercise is to show how the distribution of wealth differs for the different parameter values." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "# Finish filling in this function to calculate the Euclidean distance between the simulated and actual Lorenz curves.\n", "\n", "\n", "def calcLorenzDistance(SomeTypes):\n", " \"\"\"\n", " Calculates the 
Euclidean distance between the simulated and actual (from SCF data) Lorenz curves at the\n", " 20th, 40th, 60th, and 80th percentiles.\n", "\n", " Parameters\n", " ----------\n", " SomeTypes : [AgentType]\n", " List of AgentTypes that have been solved and simulated. Current levels of individual assets should\n", " be stored in state_now['aLvl'].\n", "\n", " Returns\n", " -------\n", " lorenz_distance : float\n", " Euclidean distance (square root of sum of squared differences) between simulated and actual Lorenz curves.\n", " \"\"\"\n", " # Define empirical Lorenz curve points\n", " lorenz_SCF = np.array([-0.00183091, 0.0104425, 0.0552605, 0.1751907])\n", "\n", " # Extract asset holdings from the consumer types passed in (not the global MyTypes)\n", " aLvl_sim = np.concatenate([ThisType.state_now[\"aLvl\"] for ThisType in SomeTypes])\n", "\n", " # Calculate simulated Lorenz curve points\n", " lorenz_sim = get_lorenz_shares(aLvl_sim, percentiles=[0.2, 0.4, 0.6, 0.8])\n", "\n", " # Calculate the Euclidean distance between the simulated and actual Lorenz curves\n", " lorenz_distance = np.sqrt(np.sum((lorenz_SCF - lorenz_sim) ** 2))\n", "\n", " # Return the Lorenz distance\n", " return lorenz_distance" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## ...and the Marginal Propensity to Consume\n", "\n", "Now let's look at the aggregate MPC. In the code block below, write a function that produces text output of the following form:\n", "\n", "$\\texttt{The 35th percentile of the MPC is 0.15623}$\n", "\n", "Your function should take two inputs: a list of types of consumers and an array of percentiles (numbers between 0 and 1). It should return no output, merely printing to screen one line of text for each requested percentile. The model is calibrated at a quarterly frequency, but Carroll et al. report MPCs at an annual frequency. 
To convert, use the formula:\n", "\n", "$\\kappa_{Y} \\approx 1.0 - (1.0 - \\kappa_{Q})^4$" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The 5.0th percentile of the MPC is 0.3830226479018095\n", "The 10.0th percentile of the MPC is 0.4190098031734306\n", "The 15.0th percentile of the MPC is 0.45984701160581964\n", "The 20.0th percentile of the MPC is 0.45984701160581964\n", "The 25.0th percentile of the MPC is 0.45984701160581964\n", "The 30.0th percentile of the MPC is 0.4979166414954148\n", "The 35.0th percentile of the MPC is 0.4979166414954148\n", "The 40.0th percentile of the MPC is 0.4979166414954148\n", "The 44.99999999999999th percentile of the MPC is 0.5372418610399308\n", "The 49.99999999999999th percentile of the MPC is 0.5372418610399308\n", "The 54.99999999999999th percentile of the MPC is 0.5372418610399308\n", "The 60.0th percentile of the MPC is 0.5821887061768969\n", "The 65.0th percentile of the MPC is 0.5821887061768969\n", "The 70.0th percentile of the MPC is 0.634537312685834\n", "The 75.0th percentile of the MPC is 0.634537312685834\n", "The 80.0th percentile of the MPC is 0.7267307307276032\n", "The 85.0th percentile of the MPC is 0.7799255201452847\n", "The 90.0th percentile of the MPC is 0.8208530902866055\n", "The 95.0th percentile of the MPC is 0.8966083611183647\n" ] } ], "source": [ "# Write a function to tell us about the distribution of the MPC in this code block, then test it!\n", "# You will almost surely find it useful to use a for loop in this function.\n", "def describeMPCdstn(SomeTypes, percentiles):\n", " MPC_sim = np.concatenate([ThisType.MPCnow for ThisType in SomeTypes])\n", " MPCpercentiles_quarterly = get_percentiles(MPC_sim, percentiles=percentiles)\n", " MPCpercentiles_annual = 1.0 - (1.0 - MPCpercentiles_quarterly) ** 4\n", "\n", " for j in range(len(percentiles)):\n", " print(\n", " \"The \"\n", " + str(100 * percentiles[j])\n", " 
+ \"th percentile of the MPC is \"\n", " + str(MPCpercentiles_annual[j])\n", " )\n", "\n", "\n", "describeMPCdstn(MyTypes, np.linspace(0.05, 0.95, 19))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# If You Get Here ...\n", "\n", "If you have finished the above exercises quickly and have more time to spend on this assignment, for extra credit you can do the same exercise where, instead of exploring the consequences of alternative values of relative risk aversion $\\rho$, you should test the consequences of different values of the growth factor $\\Gamma$ that lead to the same $\\check{m}$." ] } ], "metadata": { "jupytext": { "cell_metadata_filter": "collapsed,code_folding", "cell_metadata_json": true, "formats": "ipynb,py:percent", "notebook_metadata_filter": "all" }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.13" }, "latex_envs": { "LaTeX_envs_menu_present": true, "autoclose": false, "autocomplete": true, "bibliofile": "biblio.bib", "cite_by": "apalike", "current_citInitial": 1, "eqLabelWithNumbers": true, "eqNumInitial": 1, "hotkeys": { "equation": "Ctrl-E", "itemize": "Ctrl-I" }, "labels_anchors": false, "latex_user_defs": false, "report_style_numbering": false, "user_envs_cfg": false } }, "nbformat": 4, "nbformat_minor": 4 }