{ "cells": [ { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Foundations of Computational Economics #46\n", "\n", "by Fedor Iskhakov, ANU\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "## Nested fixed point maximum likelihood estimator (NFXP)\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "\n", "\n", "[https://youtu.be/8nfU6vpmdT0](https://youtu.be/8nfU6vpmdT0)\n", "\n", "Description: Nested loop MLE estimator. Combining Newton-Kantorovich iterations with gradient based likelihood maximization. Structural estimation of Rust bus engine replacement model." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Maximum likelihood estimator (MLE)\n", "\n", "Maximum likelihood estimator is applicable when the model yields\n", "a probability distribution for the observable data\n", "\n", "- Let $ L(x,\\theta) $ denote the distribution (pdf) of the observables $ x $ implied by the model with parameter vector $ \\theta $ \n", "- Let $ Z_n = (z_i,\\dots,z_n) $ denote the data, consisting of $ n $ independent observations (random sample) \n", "\n", "\n", "*MLE and MSM are two main estimation methods for dynamic economic models*" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Likelihood function\n", "\n", "- If $ L(x,\\theta) $ is a discrete distribution then $ L(x,\\theta) $ computed at the data point gives the probability of\n", " observing the given data point exactly \n", "- If $ L(x,\\theta) $ is a continuous distribution then $ L(x,\\theta) $ computed at the data point is analogous to the probability of\n", " observing the given data point \n", "\n", "\n", "Likelihood function is\n", "\n", "$$\n", "\\mathcal{L}_n(\\theta) = L(Z_n,\\theta) = \\prod_{i=1}^n L(z_i,\\theta)\n", "$$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Definition of MLE estimator\n", "\n", "$$\n", "\\hat{\\theta}_{MLE} = \\arg\\max_{\\theta \\in \\Theta} \\, \\prod_{i=1}^n \\underset{\\mathcal{L}(z_i,\\theta)}{\\underbrace{L(z_i,\\theta)}} = \\arg\\max_{\\theta \\in \\Theta} \\mathcal{L}_n(\\theta)\n", "$$\n", "\n", "$$\n", "\\hat{\\theta}_{MLE} = \\arg\\max_{\\theta \\in \\Theta} \\, \\sum_{i=1}^n \\underset{\\ell(z_i,\\theta)}{\\underbrace{\\log L(z_i,\\theta)}} = \\arg\\max_{\\theta \\in \\Theta} \\ell_n(\\theta)\n", "$$\n", "\n", "- $ \\theta \\in \\Theta $ is parameter space \n", "- $ Z_n = (z_1,\\dots,z_n) $ is observed data " ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Asymptotic properties of MLE\n", "\n", "1. Consistency: $ \\; \\hat{\\theta}_{MLE} \\xrightarrow{\\mathcal{P}} \\theta_0 $ \n", "1. Asymptotic normality: $ \\; \\sqrt{n} ( \\hat{\\theta}_{MLE} - \\theta_0 ) \\xrightarrow{\\mathcal{D}} N(0,\\mathcal{I}(\\theta_0)^{-1}) $ \n", "1. Asymptotic efficiency: MLE approaches the smallest possible variance (Cramér-Rao bound) for unbiased estimator when $ n \\rightarrow \\infty $ \n", "1. 
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Asymptotic properties of MLE\n", "\n", "1. Consistency: $ \\; \\hat{\\theta}_{MLE} \\xrightarrow{\\mathcal{P}} \\theta_0 $ \n", "1. Asymptotic normality: $ \\; \\sqrt{n} ( \\hat{\\theta}_{MLE} - \\theta_0 ) \\xrightarrow{\\mathcal{D}} N(0,\\mathcal{I}(\\theta_0)^{-1}) $ \n", "1. Asymptotic efficiency: MLE approaches the smallest possible variance (Cramér-Rao bound) for an unbiased estimator as $ n \\rightarrow \\infty $ \n", "1. Functional invariance: the MLE of $ \\gamma_0 = g(\\theta_0) $ is given by $ g(\\hat{\\theta}_{MLE}) $ if $ g(\\cdot) $ is a continuously differentiable function " ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Asymptotic variance of MLE estimator\n", "\n", "$$\n", "\\sqrt{n} ( \\hat{\\theta}_{MLE} - \\theta_0 ) \\xrightarrow{\\mathcal{D}} N(0,\\mathcal{I}(\\theta_0)^{-1}) \\Leftrightarrow\n", "\\sqrt{\\mathcal{I}_n(\\theta_0)} ( \\hat{\\theta}_{MLE} - \\theta_0 ) \\xrightarrow{\\mathcal{D}} N(0,1)\n", "$$\n", "\n", "- the variance is given by the inverse of the Fisher information, computed from the pdf $ L(x,\\theta_0) $ \n", "\n", "\n", "$$\n", "\\mathcal{I}(\\theta_0) = - \\mathbb{E}\\left[ \\frac{\\partial^2}{\\partial\\theta \\partial\\theta} \\ell(x,\\theta_0) \\right] =\n", "\\mathbb{E}\\left[ \\left( \\frac{\\partial}{\\partial\\theta} \\ell(x,\\theta_0) \\right)^2 \\right]\n", "$$\n", "\n", "- alternatively, the Fisher information matrix can be computed from the log-likelihood function $ \\ell_n(\\theta_0) $ of $ n $ i.i.d. random variables in place of the data $ Z_n $ \n", "\n", "\n", "$$\n", "\\mathcal{I}_n(\\theta_0) = - \\mathbb{E}\\left[ \\frac{\\partial^2}{\\partial\\theta \\partial\\theta} \\ell_n(\\theta_0) \\right] =\n", "\\mathbb{E}\\left[ \\left( \\frac{\\partial}{\\partial\\theta} \\ell_n(\\theta_0) \\right)^2 \\right] =\n", "n \\mathcal{I}(\\theta_0)\n", "$$" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Estimating Fisher information\n", "\n", "- Fisher information depends on the model pdf $ L(x,\\theta_0) $ and is rarely computable in closed form \n", "- It can be consistently estimated due to the LLN: $ -\\tfrac{1}{n} \\sum_{i=1}^n \\frac{\\partial^2}{\\partial\\theta \\partial\\theta} \\ell(z_i,\\theta_0) \\xrightarrow{P} \\mathcal{I}(\\theta_0) $ \n", "- This gives the *observed* Fisher information \n", "\n", "\n", "$$\n", "\\hat{\\mathcal{J}}_n(\\theta_0) = - \\sum_{i=1}^n \\frac{\\partial^2}{\\partial\\theta \\partial\\theta} \\ell(z_i,\\theta_0) =\n", "- \\frac{\\partial^2}{\\partial\\theta \\partial\\theta} \\ell_n(\\theta_0)\n", "\\approx \\mathcal{I}_n(\\theta_0) = n \\mathcal{I}(\\theta_0)\n", "$$\n", "\n", "- Plugging in the estimate $ \\hat{\\theta} = \\hat{\\theta}_{MLE} $ we have \n", "\n", "\n", "$$\n", "\\sqrt{\\hat{\\mathcal{J}}_n(\\hat{\\theta})} ( \\hat{\\theta} - \\theta_0 ) \\xrightarrow{\\mathcal{D}} N(0,1) \\Rightarrow\n", "\\hat{\\theta} \\approx N(\\theta_0, \\hat{\\mathcal{J}}_n(\\hat{\\theta})^{-1})\n", "$$" ] },
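{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "#### Example: observed Fisher information numerically\n", "\n", "A sketch of the formulas above for the exponential example: $ \\hat{\\mathcal{J}}_n(\\hat{\\theta}) $ is computed by a central finite difference of the log-likelihood, and the implied standard error is checked against the analytic values $ n/\\hat{\\lambda}^2 $ and $ \\hat{\\lambda}/\\sqrt{n} $. The simulated data and the step size are illustrative assumptions." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "# Observed Fisher information by a central finite difference of the\n", "# log-likelihood (illustrative sketch, exponential example)\n", "import numpy as np\n", "\n", "np.random.seed(42)\n", "z = np.random.exponential(scale=2.0, size=1000)\n", "n = z.size\n", "\n", "def loglike(lam):\n", "    return np.sum(np.log(lam) - lam * z)\n", "\n", "lam_hat = 1 / z.mean()  # closed-form MLE\n", "h = 1e-5                # finite difference step\n", "J = -(loglike(lam_hat + h) - 2 * loglike(lam_hat) + loglike(lam_hat - h)) / h**2\n", "\n", "print('observed information:', J, ' analytic:', n / lam_hat**2)\n", "print('standard error      :', 1 / np.sqrt(J), ' analytic:', lam_hat / np.sqrt(n))" ] },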
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Information equality\n", "\n", "- similar to the two equivalent definitions of the *expected* Fisher information, for the *observed*\n", "  Fisher information it holds (assuming $ \\theta_0 $ is a scalar) \n", "\n", "\n", "$$\n", "\\hat{\\mathcal{J}}_n(\\theta_0) = - \\frac{\\partial^2}{\\partial\\theta \\partial\\theta} \\ell_n(\\theta_0)\n", "= \\left( \\frac{\\partial}{\\partial\\theta} \\ell_n(\\theta_0) \\right)^2\n", "$$\n", "\n", "- when the parameter $ \\theta \\in \\mathbb{R}^K $ is a vector with $ K $ elements,\n", "  both the second order derivative and the square of the first order derivative need to be adjusted \n", "- the square on the RHS originates from the calculation of the variance of $ \\frac{\\partial \\ell_n(\\theta_0)}{\\partial\\theta} $ \n", "\n", "\n", "$$\n", "Var\\left( \\frac{\\partial \\ell_n(\\theta_0)}{\\partial\\theta} \\right) =\n", "\\mathbb{E} \\Big( \\frac{\\partial \\ell_n(\\theta_0)}{\\partial\\theta} \\Big)^2 -\n", "\\Big( \\underset{=0}{\\underbrace{ \\mathbb{E} \\frac{\\partial \\ell_n(\\theta_0)}{\\partial\\theta} }} \\Big)^2\n", "$$" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Outer product of gradients\n", "\n", "- consider a vector random variable $ \\tilde{X} = (\\tilde{X}_1,\\dots,\\tilde{X}_K)^{T} $, so that $ \\tilde{X} $ is a $ K \\times 1 $ column vector \n", "- the expectation of $ \\tilde{X} $ is given by the $ K \\times 1 $ vector $ \\mathbb{E}\\tilde{X} $ \n", "- the $ K \\times K $ variance-covariance matrix $ \\Sigma $ of $ \\tilde{X} $ is given by the expectation of the\n", "  **outer product** $ \\otimes $ \n", "\n", "\n", "$$\n", "\\Sigma = \\mathbb{E} \\left\\{ \\big( \\tilde{X}-\\mathbb{E}\\tilde{X} \\big) \\otimes \\big( \\tilde{X}-\\mathbb{E}\\tilde{X} \\big)^{T} \\right\\}\n", "$$\n", "\n", "- the variances of the elements of $ \\tilde{X} $ are on the main diagonal of $ \\Sigma $, whereas the off-diagonal elements contain the covariances\n", "  $ cov(\\tilde{X}_i,\\tilde{X}_j) $ for all $ i \\ne j $ " ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Information matrix equality\n", "\n", "- let $ H(\\ell_n(\\theta_0)) = \\frac{\\partial^2}{\\partial\\theta \\partial\\theta} \\ell_n(\\theta_0) $ denote the $ K \\times K $\n", "  Hessian matrix of $ \\ell_n(\\theta_0) $ \n", "- let $ \\nabla f(\\theta) = \\frac{\\partial}{\\partial\\theta} f(\\theta) $ denote the gradient of $ f(\\theta) $,\n", "  a $ K \\times 1 $ vector \n", "- let $ \\nabla\\ell(Z_n, \\theta_0) = \\big( \\frac{\\partial}{\\partial\\theta} \\ell(z_1,\\theta_0),\\dots,\\frac{\\partial}{\\partial\\theta} \\ell(z_n,\\theta_0) \\big) $ denote the $ K \\times n $ matrix of gradients of $ \\ell(z_i,\\theta_0) $ stacked for all $ i $ \n", "\n", "\n", "Then we have the **information matrix equality**\n", "\n", "$$\n", "\\begin{eqnarray}\n", "\\hat{\\mathcal{J}}_n(\\theta_0) = - H(\\ell_n(\\theta_0))\n", "&=& \\sum_{i=1}^n \\nabla\\ell(z_i, \\theta_0) \\otimes \\nabla\\ell(z_i, \\theta_0)^{T} \\\\\n", "&=& \\nabla\\ell(Z_n, \\theta_0) \\otimes \\nabla\\ell(Z_n, \\theta_0)^{T}\n", "\\end{eqnarray}\n", "$$\n", "\n", "- where the parameter $ \\theta \\in \\mathbb{R}^K $ is a vector with $ K $ elements " ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Summary and conclusion\n", "\n", "$$\n", "\\hat{\\theta}_{MLE} = \\arg\\max_{\\theta \\in \\Theta} \\, \\sum_{i=1}^n \\underset{\\ell(z_i,\\theta)}{\\underbrace{\\log L(z_i,\\theta)}} = \\arg\\max_{\\theta \\in \\Theta} \\ell_n(\\theta)\n", "$$\n", "\n", "- $ \\theta \\in \\Theta $ is the parameter space \n", "- $ Z_n = (z_1,\\dots,z_n) $ is the observed data \n", "\n", "\n", "$$\n", "\\sqrt{\\hat{\\mathcal{J}}_n(\\hat{\\theta}_{MLE})} ( \\hat{\\theta}_{MLE} - \\theta_0 ) \\xrightarrow{\\mathcal{D}} N(0,1) \\Rightarrow\n", "\\hat{\\theta}_{MLE} \\approx N(\\theta_0, \\hat{\\mathcal{J}}_n(\\hat{\\theta}_{MLE})^{-1})\n", "$$\n", "\n", "$$\n", "\\hat{\\mathcal{J}}_n(\\theta_0) = - H(\\ell_n(\\theta_0))\n", "= \\nabla\\ell(Z_n, \\theta_0) \\otimes \\nabla\\ell(Z_n, \\theta_0)^{T}\n", "$$" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Berndt-Hall-Hall-Hausman (BHHH) algorithm\n", "\n", "$$\n", "H(\\ell_n(\\theta)) \\approx - \\nabla\\ell(Z_n, \\theta) \\otimes \\nabla\\ell(Z_n, \\theta)^{T}\n", "$$\n", "\n", "- the information matrix equality gives an approximation of the Hessian of the log-likelihood function\n", "  $ \\ell_n(\\theta) $ \n", "- it can be used in a quasi-Newton optimization method! $ \\Rightarrow $ BHHH algorithm \n", "- the outer product of gradients results in a positive semi-definite matrix for every $ \\theta $, so even if the approximation\n", "  is not accurate, it will never point the Newton iteration in the wrong direction \n", "- a line search for an appropriate step size in the direction found with the approximated Hessian is part of the algorithm and ensures global convergence \n", "\n", "\n", "📖 E.K. Berndt, B.H. Hall, R.E. Hall and J.A. Hausman, 1974, “Estimation and inference in nonlinear structural models”" ] },
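{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "#### Example: BHHH iterations on the toy problem\n", "\n", "A minimal sketch of the BHHH idea on the exponential example: observation-level scores form the outer product approximation of $ -H(\\ell_n(\\theta)) $, and a crude backtracking line search safeguards the ascent step. The starting value, tolerances and line search rule are illustrative assumptions." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "# BHHH sketch: quasi-Newton ascent where the Hessian is replaced by the\n", "# negative outer product of the scores (illustrative example)\n", "import numpy as np\n", "\n", "np.random.seed(42)\n", "z = np.random.exponential(scale=2.0, size=1000)\n", "\n", "def loglike(theta):\n", "    lam = theta[0]\n", "    if lam <= 0:\n", "        return -np.inf  # outside the parameter space\n", "    return np.sum(np.log(lam) - lam * z)\n", "\n", "def scores(theta):\n", "    # n x K matrix of observation-level scores d log L(z_i, lam) / d lam\n", "    return (1 / theta[0] - z).reshape(-1, 1)\n", "\n", "theta = np.array([1.0])  # starting value\n", "for it in range(50):\n", "    s = scores(theta)\n", "    grad = s.sum(axis=0)                  # gradient of the log-likelihood\n", "    opg = s.T @ s                         # BHHH approximation of -Hessian\n", "    direction = np.linalg.solve(opg, grad)\n", "    step = 1.0\n", "    while step > 1e-8 and loglike(theta + step * direction) < loglike(theta):\n", "        step /= 2                         # backtracking line search\n", "    theta = theta + step * direction\n", "    if np.abs(grad).max() < 1e-6:\n", "        break\n", "print('BHHH estimate:', theta[0], ' closed form:', 1 / z.mean())" ] },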
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Back to Rust bus engine replacement model of Harold Zurcher\n", "\n", "The only model in the course which satisfies the requirements for MLE.\n", "\n", "See videos 28, 29 and 44 for the setup and the code base." ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Bellman equation(s)\n", "\n", "- in standard value function space \n", "\n", "\n", "$$\n", "V(x,\\varepsilon) = \\max_{d\\in \\{0,1\\}} \\big\\{ u(x,\\varepsilon_d,d) + \\beta \\mathbb{E}\\big[ V(x',\\varepsilon')\\big|x,\\varepsilon,d\\big] \\big\\}\n", "$$\n", "\n", "- in expected value function space \n", "\n", "\n", "$$\n", "EV(x,d) = \\sum_{x' \\in X} \\log \\big( \\exp[u(x',0) + \\beta EV(x',0)] + \\exp[u(x',1) + \\beta EV(x',1)] \\big) \\pi(x'|x,d)\n", "$$\n", "\n", "- $ x $ is discretized mileage \n", "- $ \\varepsilon $ is the vector of i.i.d. EV1 shocks \n", "- $ d \\in \\{0,1\\} = \\{\\text{keep},\\text{replace}\\} $ " ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Zurcher’s preferences\n", "\n", "$$\n", "u(x_{t},d_t,\\theta_1)=\\left \\{\n", "\\begin{array}{ll}\n", "    -RC-c(0,\\theta_1) & \\text{if }d_{t}=\\text{replace}=1 \\\\\n", "    -c(x_{t},\\theta_1) & \\text{if }d_{t}=\\text{keep}=0\n", "\\end{array} \\right.\n", "$$\n", "\n", "- $ RC $ = replacement cost \n", "- $ c(x,\\theta_1) $ = cost of maintenance with preference parameters $ \\theta_1 $ " ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Transition matrix for mileage\n", "\n", "- If not replacing ($ d=0 $) \n", "\n", "\n", "$$\n", "\\Pi_{n \\times n} =\n", "\\begin{pmatrix}\n", "\\theta_{20} & \\theta_{21} & \\theta_{22} & 0 & \\cdot & \\cdot & \\cdot & 0 \\\\\n", "0 & \\theta_{20} & \\theta_{21} & \\theta_{22} & 0 & \\cdot & \\cdot & 0 \\\\\n", "0 & 0 &\\theta_{20} & \\theta_{21} & \\theta_{22} & 0 & \\cdot & 0 \\\\\n", "\\cdot & \\cdot & \\cdot & \\cdot & \\cdot & \\cdot & \\cdot & \\cdot \\\\\n", "0 & \\cdot & \\cdot & 0 & \\theta_{20} & \\theta_{21} & \\theta_{22} & 0 \\\\\n", "0 & \\cdot & \\cdot & \\cdot & 0 & \\theta_{20} & \\theta_{21} & \\theta_{22} \\\\\n", "0 & \\cdot & \\cdot & \\cdot & \\cdot & 0 & \\theta_{20} & 1-\\theta_{20} \\\\\n", "0 & \\cdot & \\cdot & \\cdot & \\cdot & \\cdot & 0 & 1\n", "\\end{pmatrix}\n", "$$\n", "\n", "- If replacing ($ d=1 $), transition probabilities are given by the first row " ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Choice probabilities\n", "\n", "Once the fixed point is found, the policy function is given by the *optimal* choice probability $ P(d|x) $, which has the *logit* structure due to the EV1 distributional assumption on the shocks:\n", "\n", "$$\n", "P(0|x) = \\frac{\\exp[ u(x,0) + \\beta EV(x,0) ]}{\\sum_{d\\in \\{0,1\\}} \\exp[u(x,d) + \\beta EV(x,d)]}\n", "$$\n", "\n", "$$\n", "P(0|x) = \\frac{1}{1 + \\exp[u(x,1) - u(x,0) + \\beta EV(x,1) - \\beta EV(x,0)]}\n", "$$" ] },
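{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "#### Example: numerically stable logsum and choice probabilities\n", "\n", "In code, both the logit formula and the logsum in the Bellman operator are best evaluated after recentering by the maximum value (the trick listed under *Important details* below). A minimal sketch with an illustrative array `v` of choice-specific values:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "# Recentred log-sum-exp and logit choice probabilities: subtracting the\n", "# maximum prevents overflow/underflow in exp without changing the result\n", "import numpy as np\n", "\n", "def logsum(v):\n", "    m = v.max()\n", "    return m + np.log(np.sum(np.exp(v - m)))\n", "\n", "def choice_probs(v):\n", "    ev = np.exp(v - v.max())\n", "    return ev / ev.sum()\n", "\n", "v = np.array([-1000.0, -1001.5])  # naive np.exp(v) underflows to zero\n", "print('logsum              :', logsum(v))\n", "print('choice probabilities:', choice_probs(v))" ] },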
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Where we are so far\n", "\n", "- for every value of the structural parameters $ \\theta = (RC,\\theta_1,\\theta_{20},\\dots,\\theta_{22},\\beta) $ \n", "- we can solve the model fast using NK iterations within the polyalgorithm \n", "- based on the solution $ EV_\\theta(x,d) $ we can form the choice probabilities\n", "  $ P(\\text{keep}|x,\\theta) $ and $ P(\\text{replace}|x,\\theta) $ \n", "\n", "\n", "Next: given the data on mileage ($ x $) and choices ($ d $), we can form the likelihood function\n", "$ L(x,d,\\theta) $ and proceed with maximum likelihood estimation" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Data\n", "\n", "- Harold Zurcher’s maintenance records of 162 buses in 8 groups \n", "- Monthly observations of mileage on each bus (odometer reading) \n", "- Data on maintenance operations:\n", "  1. Routine periodic maintenance (e.g. brake adjustment)\n", "  2. Replacement or repair at time of failure\n", "  3. Major engine overhaul and/or replacement (*the focus of the paper*) \n", "\n", "\n", "Data $ (x_{i,t},d_{i,t}) $ where $ x_{i,t} $ is discretized mileage (bin indexes), and $ d_{i,t} $ is the observed choice at this mileage for each bus $ i $ in each month $ t $" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Likelihood function\n", "\n", "$$\n", "\\mathcal{L}_n(\\theta, EV_\\theta) = \\prod_{i=1}^{162}\\prod_{t=2}^{T_i} P(d_{i,t}|x_{i,t}) \\pi(x_{i,t}|x_{i,t-1},d_{i,t-1})\n", "$$\n", "\n", "$$\n", "\\ell_n(\\theta,EV_\\theta) = \\log \\mathcal{L}_n(\\theta,EV_\\theta) = \\sum_{i=1}^{162}\\sum_{t=2}^{T_i} \\big ( \\log P(d_{i,t}|x_{i,t}) + \\log \\pi(x_{i,t}|x_{i,t-1},d_{i,t-1}) \\big)\n", "$$" ] },
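{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "#### Example: assembling the log-likelihood for one bus\n", "\n", "A sketch of how the log-likelihood above could be assembled for a single bus, assuming hypothetical arrays `P[d, x]` of choice probabilities and `Pi[x, x']` of mileage transition probabilities for $ d=0 $, with the first row of `Pi` used after a replacement (the odometer resets). These names and the tiny example data are assumptions for illustration, not the lecture code base." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "# Log-likelihood contribution of one bus: sum over t = 2..T of the log\n", "# choice probability and the log mileage transition probability\n", "import numpy as np\n", "\n", "def loglike_panel(P, Pi, x, d):\n", "    ll = 0.0\n", "    for t in range(1, len(x)):\n", "        ll += np.log(P[d[t], x[t]])\n", "        if d[t - 1] == 0:\n", "            ll += np.log(Pi[x[t - 1], x[t]])  # keep: stay on the same row\n", "        else:\n", "            ll += np.log(Pi[0, x[t]])         # replace: odometer reset, first row\n", "    return ll\n", "\n", "# tiny illustrative example with 3 mileage bins\n", "P = np.array([[0.9, 0.8, 0.6],   # P[0, x]: keep probabilities\n", "              [0.1, 0.2, 0.4]])  # P[1, x]: replace probabilities\n", "Pi = np.array([[0.5, 0.4, 0.1],\n", "               [0.0, 0.5, 0.5],\n", "               [0.0, 0.0, 1.0]])\n", "x = np.array([0, 1, 2, 0])\n", "d = np.array([0, 0, 1, 0])\n", "print('log-likelihood of one bus:', loglike_panel(P, Pi, x, d))" ] },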
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### MLE estimator\n", "\n", "$$\n", "\\hat{\\theta} = \\arg\\max_\\theta \\ell_n(\\theta, EV_{\\theta})\n", "$$\n", "\n", "Unconstrained optimization, but requires the computation of $ EV_{\\theta} $ for each value of the parameter $ \\theta $" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Nested loop\n", "\n", "**Outer loop** = Hill-climbing algorithm\n", "\n", "- Log-likelihood function $ \\ell_n(\\theta,EV_{\\theta}) $ is maximized with respect to $ \\theta $ \n", "- Quasi-Newton algorithm with BHHH approximation of the Hessian \n", "- Each evaluation of $ \\ell_n(\\theta,EV_{\\theta}) $ requires the solution for the fixed point $ EV_{\\theta} $ \n", "\n", "\n", "**Inner loop** = Fixed point algorithm\n", "\n", "- Solver for the fixed point of the Bellman operator $ EV_{\\theta} = \\Gamma(EV_{\\theta}) $ \n", "- Successive approximations (VFI) + Newton-Kantorovich iterations in a polyalgorithm " ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Important details\n", "\n", "1. **Performance:** gradient-based Newton method to maximize the likelihood \n", "1. **Analytical gradients:** using the implicit function theorem and chain rule for the outer loop, and the Fréchet derivative for the inner loop \n", "1. **Use BHHH:** outer product of gradients approximation for the Hessian \n", "1. **Numerical stability:** recenter logsum and choice probabilities \n", "1. **Further info:** NFXP manual (see further learning resources) " ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Analytical gradient of the likelihood function\n", "\n", "$$\n", "\\frac{\\partial}{\\partial \\theta} \\ell_n(\\theta,EV_\\theta) = \\sum_{i=1}^{162}\\sum_{t=2}^{T_i} \\big ( \\frac{\\partial}{\\partial \\theta} \\log P(d_{i,t}|x_{i,t}) + \\frac{\\partial}{\\partial \\theta} \\log \\pi(x_{i,t}|x_{i,t-1},d_{i,t-1}) \\big)\n", "$$\n", "\n", "- straightforward with repeated application of the chain rule \n", "- the key point is expressing the derivatives of the expected value function $ \\frac{\\partial EV_{\\theta}}{\\partial \\theta} $, which is done with the Banach space version of the *implicit function theorem*,\n", "  applied to the fixed point equation \n", "\n", "\n", "$$\n", "\\frac{\\partial EV_{\\theta}}{\\partial \\theta} = \\frac{\\partial \\Gamma_{\\theta}}{\\partial \\theta} + \\frac{\\partial \\Gamma_{\\theta}}{\\partial EV_{\\theta}} \\frac{\\partial EV_{\\theta}}{\\partial \\theta}\n", "\\Rightarrow\n", "\\frac{\\partial EV_{\\theta}}{\\partial \\theta} = \\Big( I - \\frac{\\partial \\Gamma_{\\theta}}{\\partial EV_{\\theta}} \\Big)^{-1} \\frac{\\partial \\Gamma_{\\theta}}{\\partial \\theta}\n", "$$\n", "\n", "- $ \\Gamma_{\\theta}(\\cdot) $ is the (finite approximation of the) Bellman operator in expected value function space \n", "- $ \\frac{\\partial \\Gamma_{\\theta}}{\\partial EV_{\\theta}} $ is the (finite approximation of the) Fréchet derivative " ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Implementation\n", "\n", "- the `zurcher` class fully implements the inner loop (see video 44) \n", "- we need to write the outer loop: the likelihood function optimizer, and the calculator of standard errors of the estimates (see the sketch below) \n", "\n", "\n", "Alternative implementation: the `ruspy` module by Maximilian Blesch and the computational economics group at the University of Bonn\n", "\n", "[https://ruspy.readthedocs.io/en/latest/index.html](https://ruspy.readthedocs.io/en/latest/index.html)\n", "[https://github.com/OpenSourceEconomics/ruspy](https://github.com/OpenSourceEconomics/ruspy)" ] },
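{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "#### Sketch of the missing outer loop\n", "\n", "A hedged sketch of what the outer loop could look like. The interface is an assumption for illustration: a hypothetical constructor `model(theta)` (e.g. wrapping the `zurcher` class) that solves the inner fixed point and exposes `loglike(data)` and `scores(data)`. The maximizer is an off-the-shelf quasi-Newton routine rather than a hand-written BHHH, and the standard errors come from the inverse outer product of the scores." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "# Outer loop sketch: maximize the log-likelihood and compute BHHH standard\n", "# errors; `model` is a hypothetical interface, not the actual zurcher class\n", "import numpy as np\n", "from scipy.optimize import minimize\n", "\n", "def estimate(model, data, theta0):\n", "    # model(theta) must solve the inner fixed point EV_theta and expose\n", "    # loglike(data) -> float and scores(data) -> n x K array\n", "    negloglike = lambda theta: -model(theta).loglike(data)\n", "    res = minimize(negloglike, theta0, method='BFGS')  # outer hill climbing\n", "    s = model(res.x).scores(data)        # observation-level score matrix\n", "    cov = np.linalg.inv(s.T @ s)         # inverse OPG = BHHH covariance\n", "    return res.x, np.sqrt(np.diag(cov))  # point estimates, standard errors" ] },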
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Further learning resources\n", "\n", "- 📖 Adda and Cooper “Dynamic Economics”, pp. 83-85 \n", "- Note on the Information Matrix Equality [http://web.econ.ku.dk/munk-nielsen/notes/noteInformationMatrix.pdf](http://web.econ.ku.dk/munk-nielsen/notes/noteInformationMatrix.pdf) \n", "- Matlab implementation of the full solver and Rust estimator (NFXP) [https://github.com/dseconf/DSE2019/tree/master/02_DDC_SchjerningIskhakov](https://github.com/dseconf/DSE2019/tree/master/02_DDC_SchjerningIskhakov) \n", "- 📖 John Rust (1987, Econometrica) “Optimal Replacement of GMC Bus Engines: An Empirical Model of Harold Zurcher” \n", "- NFXP manual with detailed instructions on the Fréchet derivative in the Rust model \n", "- RusPy package [https://ruspy.readthedocs.io/en/latest/index.html](https://ruspy.readthedocs.io/en/latest/index.html) " ] } ], "metadata": { "celltoolbar": "Slideshow", "date": 1612589586.8894951, "download_nb": false, "filename": "46_nfxp_estimation.rst", "filename_with_path": "46_nfxp_estimation", "kernelspec": { "display_name": "Python", "language": "python3", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" }, "title": "Foundations of Computational Economics #46" }, "nbformat": 4, "nbformat_minor": 4 }