{ "cells": [ { "cell_type": "markdown", "id": "2e7c5397", "metadata": {}, "source": [ "\n", "" ] }, { "cell_type": "markdown", "id": "2d38f147", "metadata": {}, "source": [ "# Univariate Time Series with Matrix Algebra" ] }, { "cell_type": "markdown", "id": "1dc170fc", "metadata": {}, "source": [ "## Contents\n", "\n", "- [Univariate Time Series with Matrix Algebra](#Univariate-Time-Series-with-Matrix-Algebra) \n", " - [Overview](#Overview) \n", " - [Samuelson’s model](#Samuelson’s-model) \n", " - [Adding a Random Term](#Adding-a-Random-Term) \n", " - [Computing Population Moments](#Computing-Population-Moments) \n", " - [Moving Average Representation](#Moving-Average-Representation) \n", " - [A Forward Looking Model](#A-Forward-Looking-Model) " ] }, { "cell_type": "markdown", "id": "9267bea6", "metadata": {}, "source": [ "## Overview\n", "\n", "This lecture uses matrices to solve some linear difference equations.\n", "\n", "As a running example, we’ll study a **second-order linear difference\n", "equation** that was the key technical tool in Paul Samuelson’s 1939\n", "article [[Sam39](https://python.quantecon.org/zreferences.html#id85)] that introduced the **multiplier-accelerator** model.\n", "\n", "This model became the workhorse that powered early econometric versions of\n", "Keynesian macroeconomic models in the United States.\n", "\n", "You can read about the details of that model in [this](https://python.quantecon.org/samuelson.html)\n", "QuantEcon lecture.\n", "\n", "(That lecture also describes some technicalities about second-order linear difference equations.)\n", "\n", "In this lecture, we’ll also learn about an **autoregressive** representation and a **moving average** representation of a non-stationary\n", "univariate time series $ \\{y_t\\}_{t=0}^T $.\n", "\n", "We’ll also study a “perfect foresight” model of stock prices that involves solving\n", "a “forward-looking” linear difference equation.\n", "\n", "We will use the following imports:" ] }, { "cell_type": "code", "execution_count": null, "id": "e48a8e61", "metadata": { "hide-output": false }, "outputs": [], "source": [ "import numpy as np\n", "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "from matplotlib import cm\n", "plt.rcParams[\"figure.figsize\"] = (11, 5) #set default figure size" ] }, { "cell_type": "markdown", "id": "2d843e4a", "metadata": {}, "source": [ "## Samuelson’s model\n", "\n", "Let $ t = 0, \\pm 1, \\pm 2, \\ldots $ index time.\n", "\n", "For $ t = 1, 2, 3, \\ldots, T $ suppose that\n", "\n", "\n", "\n", "$$\n", "y_{t} = \\alpha_{0} + \\alpha_{1} y_{t-1} + \\alpha_{2} y_{t-2} \\tag{11.1}\n", "$$\n", "\n", "where we assume that $ y_0 $ and $ y_{-1} $ are given numbers\n", "that we take as **initial conditions**.\n", "\n", "In Samuelson’s model, $ y_t $ stood for **national income** or perhaps a different\n", "measure of aggregate activity called **gross domestic product** (GDP) at time $ t $.\n", "\n", "Equation [(11.1)](#equation-tswm-1) is called a **second-order linear difference equation**.\n", "\n", "But actually, it is a collection of $ T $ simultaneous linear\n", "equations in the $ T $ variables $ y_1, y_2, \\ldots, y_T $.\n", "\n", ">**Note**\n", ">\n", ">To be able to solve a second-order linear difference\n", "equation, we require two **boundary conditions** that can take the form\n", "either of two **initial conditions** or two **terminal conditions** or\n", "possibly one of each.\n", "\n", "Let’s write our equations as a stacked system\n", "\n", "$$\n", "\\underset{\\equiv 
A}{\\underbrace{\\left[\\begin{array}{cccccccc}\n", "1 & 0 & 0 & 0 & \\cdots & 0 & 0 & 0\\\\\n", "-\\alpha_{1} & 1 & 0 & 0 & \\cdots & 0 & 0 & 0\\\\\n", "-\\alpha_{2} & -\\alpha_{1} & 1 & 0 & \\cdots & 0 & 0 & 0\\\\\n", "0 & -\\alpha_{2} & -\\alpha_{1} & 1 & \\cdots & 0 & 0 & 0\\\\\n", "\\vdots & \\vdots & \\vdots & \\vdots & \\cdots & \\vdots & \\vdots & \\vdots\\\\\n", "0 & 0 & 0 & 0 & \\cdots & -\\alpha_{2} & -\\alpha_{1} & 1\n", "\\end{array}\\right]}}\\left[\\begin{array}{c}\n", "y_{1}\\\\\n", "y_{2}\\\\\n", "y_{3}\\\\\n", "y_{4}\\\\\n", "\\vdots\\\\\n", "y_{T}\n", "\\end{array}\\right]=\\underset{\\equiv b}{\\underbrace{\\left[\\begin{array}{c}\n", "\\alpha_{0}+\\alpha_{1}y_{0}+\\alpha_{2}y_{-1}\\\\\n", "\\alpha_{0}+\\alpha_{2}y_{0}\\\\\n", "\\alpha_{0}\\\\\n", "\\alpha_{0}\\\\\n", "\\vdots\\\\\n", "\\alpha_{0}\n", "\\end{array}\\right]}}\n", "$$\n", "\n", "or\n", "\n", "$$\n", "A y = b\n", "$$\n", "\n", "where\n", "\n", "$$\n", "y = \\begin{bmatrix} y_1 \\cr y_2 \\cr \\vdots \\cr y_T \\end{bmatrix}\n", "$$\n", "\n", "Evidently $ y $ can be computed from\n", "\n", "$$\n", "y = A^{-1} b\n", "$$\n", "\n", "The vector $ y $ is a complete time path $ \\{y_t\\}_{t=1}^T $.\n", "\n", "Let’s put Python to work on an example that captures the flavor of\n", "Samuelson’s multiplier-accelerator model.\n", "\n", "We’ll set parameters equal to the same values we used in [this QuantEcon lecture](https://python.quantecon.org/samuelson.html)." ] }, { "cell_type": "code", "execution_count": null, "id": "69feb01b", "metadata": { "hide-output": false }, "outputs": [], "source": [ "T = 80\n", "\n", "# parameters\n", "𝛼0 = 10.0\n", "𝛼1 = 1.53\n", "𝛼2 = -.9\n", "\n", "y_1 = 28. # y_{-1}\n", "y0 = 24." ] }, { "cell_type": "markdown", "id": "4aa1e994", "metadata": {}, "source": [ "Now we construct $ A $ and $ b $." ] }, { "cell_type": "code", "execution_count": null, "id": "41823e86", "metadata": { "hide-output": false }, "outputs": [], "source": [ "A = np.identity(T) # The T x T identity matrix\n", "\n", "for i in range(T):\n", "\n", " if i-1 >= 0:\n", " A[i, i-1] = -𝛼1\n", "\n", " if i-2 >= 0:\n", " A[i, i-2] = -𝛼2\n", "\n", "b = np.full(T, 𝛼0)\n", "b[0] = 𝛼0 + 𝛼1 * y0 + 𝛼2 * y_1\n", "b[1] = 𝛼0 + 𝛼2 * y0" ] }, { "cell_type": "markdown", "id": "e10b8b67", "metadata": {}, "source": [ "Let’s look at the matrix $ A $ and the vector $ b $ for our\n", "example." 
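] }, { "cell_type": "markdown", "id": "9c2e4f1a", "metadata": {}, "source": [ "Before doing so, here is a quick sanity check on the construction (a sketch, using the `𝛼1`, `𝛼2`, and `T` defined above): the same banded matrix can be assembled without an explicit loop from shifted identity matrices, using the offset argument `k` of `np.eye`." ] }, { "cell_type": "code", "execution_count": null, "id": "1f6d8a3b", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# sanity check: rebuild A from shifted identity matrices\n", "# np.eye(T, k=-1) has ones on the first subdiagonal, np.eye(T, k=-2) on the second\n", "A_alt = np.identity(T) - 𝛼1 * np.eye(T, k=-1) - 𝛼2 * np.eye(T, k=-2)\n", "\n", "np.allclose(A, A_alt) # should be True"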
] }, { "cell_type": "code", "execution_count": null, "id": "a03a848f", "metadata": { "hide-output": false }, "outputs": [], "source": [ "A, b" ] }, { "cell_type": "markdown", "id": "ee34775f", "metadata": {}, "source": [ "Now let’s solve for the path of $ y $.\n", "\n", "If $ y_t $ is GNP at time $ t $, then we have a version of\n", "Samuelson’s model of the dynamics for GNP.\n", "\n", "To solve $ y = A^{-1} b $ we can either invert $ A $ directly, as in" ] }, { "cell_type": "code", "execution_count": null, "id": "a27d5d9a", "metadata": { "hide-output": false }, "outputs": [], "source": [ "A_inv = np.linalg.inv(A)\n", "\n", "y = A_inv @ b" ] }, { "cell_type": "markdown", "id": "e7e2658a", "metadata": {}, "source": [ "or we can use `np.linalg.solve`:" ] }, { "cell_type": "code", "execution_count": null, "id": "57ff43eb", "metadata": { "hide-output": false }, "outputs": [], "source": [ "y_second_method = np.linalg.solve(A, b)" ] }, { "cell_type": "markdown", "id": "545312d3", "metadata": {}, "source": [ "Here make sure the two methods give the same result, at least up to floating\n", "point precision:" ] }, { "cell_type": "code", "execution_count": null, "id": "c6b509cc", "metadata": { "hide-output": false }, "outputs": [], "source": [ "np.allclose(y, y_second_method)" ] }, { "cell_type": "markdown", "id": "b612b4e5", "metadata": {}, "source": [ ">**Note**\n", ">\n", ">In general, `np.linalg.solve` is more numerically stable than using\n", "`np.linalg.inv` directly.\n", "However, stability is not an issue for this small example. Moreover, we will\n", "repeatedly use `A_inv` in what follows, so there is added value in computing\n", "it directly.\n", "\n", "Now we can plot." ] }, { "cell_type": "code", "execution_count": null, "id": "a0c7e4df", "metadata": { "hide-output": false }, "outputs": [], "source": [ "plt.plot(np.arange(T)+1, y)\n", "plt.xlabel('t')\n", "plt.ylabel('y')\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "0a10f29f", "metadata": {}, "source": [ "The **steady state** value $ y^* $ of $ y_t $ is obtained by setting $ y_t = y_{t-1} =\n", "y_{t-2} = y^* $ in [(11.1)](#equation-tswm-1), which yields\n", "\n", "$$\n", "y^* = \\frac{\\alpha_{0}}{1 - \\alpha_{1} - \\alpha_{2}}\n", "$$\n", "\n", "If we set the initial values to $ y_{0} = y_{-1} = y^* $, then $ y_{t} $ will be\n", "constant:" ] }, { "cell_type": "code", "execution_count": null, "id": "fa66777e", "metadata": { "hide-output": false }, "outputs": [], "source": [ "y_star = 𝛼0 / (1 - 𝛼1 - 𝛼2)\n", "y_1_steady = y_star # y_{-1}\n", "y0_steady = y_star\n", "\n", "b_steady = np.full(T, 𝛼0)\n", "b_steady[0] = 𝛼0 + 𝛼1 * y0_steady + 𝛼2 * y_1_steady\n", "b_steady[1] = 𝛼0 + 𝛼2 * y0_steady" ] }, { "cell_type": "code", "execution_count": null, "id": "868610ee", "metadata": { "hide-output": false }, "outputs": [], "source": [ "y_steady = A_inv @ b_steady" ] }, { "cell_type": "code", "execution_count": null, "id": "b2aa4dc6", "metadata": { "hide-output": false }, "outputs": [], "source": [ "plt.plot(np.arange(T)+1, y_steady)\n", "plt.xlabel('t')\n", "plt.ylabel('y')\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "103458c2", "metadata": {}, "source": [ "## Adding a Random Term\n", "\n", "To generate some excitement, we’ll follow in the spirit of the great economists\n", "Eugen Slutsky and Ragnar Frisch and replace our original second-order difference\n", "equation with the following **second-order stochastic linear difference\n", "equation**:\n", "\n", "\n", "\n", "$$\n", "y_{t} = \\alpha_{0} + \\alpha_{1} 
y_{t-1} + \\alpha_{2} y_{t-2} + u_t \\tag{11.2}\n", "$$\n", "\n", "where $ u_{t} \\sim N\\left(0, \\sigma_{u}^{2}\\right) $ and is IID,\n", "meaning **independent** and **identically** distributed.\n", "\n", "We’ll stack these $ T $ equations into a system cast in terms of\n", "matrix algebra.\n", "\n", "Let’s define the random vector\n", "\n", "$$\n", "u=\\left[\\begin{array}{c}\n", "u_{1}\\\\\n", "u_{2}\\\\\n", "\\vdots\\\\\n", "u_{T}\n", "\\end{array}\\right]\n", "$$\n", "\n", "Where $ A, b, y $ are defined as above, now assume that $ y $ is\n", "governed by the system\n", "\n", "\n", "\n", "$$\n", "A y = b + u \\tag{11.3}\n", "$$\n", "\n", "The solution for $ y $ becomes\n", "\n", "\n", "\n", "$$\n", "y = A^{-1} \\left(b + u\\right) \\tag{11.4}\n", "$$\n", "\n", "Let’s try it out in Python." ] }, { "cell_type": "code", "execution_count": null, "id": "25747286", "metadata": { "hide-output": false }, "outputs": [], "source": [ "𝜎u = 2." ] }, { "cell_type": "code", "execution_count": null, "id": "092acf80", "metadata": { "hide-output": false }, "outputs": [], "source": [ "u = np.random.normal(0, 𝜎u, size=T)\n", "y = A_inv @ (b + u)" ] }, { "cell_type": "code", "execution_count": null, "id": "994ad16b", "metadata": { "hide-output": false }, "outputs": [], "source": [ "plt.plot(np.arange(T)+1, y)\n", "plt.xlabel('t')\n", "plt.ylabel('y')\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "56378974", "metadata": {}, "source": [ "The above time series looks a lot like (detrended) GDP series for a\n", "number of advanced countries in recent decades.\n", "\n", "We can simulate $ N $ paths." ] }, { "cell_type": "code", "execution_count": null, "id": "663006bb", "metadata": { "hide-output": false }, "outputs": [], "source": [ "N = 100\n", "\n", "for i in range(N):\n", " col = cm.viridis(np.random.rand()) # Choose a random color from viridis\n", " u = np.random.normal(0, 𝜎u, size=T)\n", " y = A_inv @ (b + u)\n", " plt.plot(np.arange(T)+1, y, lw=0.5, color=col)\n", "\n", "plt.xlabel('t')\n", "plt.ylabel('y')\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "7132a24f", "metadata": {}, "source": [ "Also consider the case when $ y_{0} $ and $ y_{-1} $ are at\n", "steady state." 
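] }, { "cell_type": "markdown", "id": "b4e7c2d9", "metadata": {}, "source": [ "Before turning to that case, here is a quick cross-check (a sketch, using the parameters, `A_inv`, and `𝜎u` already defined above): iterating equation (11.2) forward period by period should reproduce the matrix solution $ y = A^{-1}(b + u) $ for the same draw of shocks." ] }, { "cell_type": "code", "execution_count": null, "id": "a2d5f8c1", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# cross-check: iterate the recursion directly and compare with the matrix solution\n", "u = np.random.normal(0, 𝜎u, size=T)\n", "y_matrix = A_inv @ (b + u)\n", "\n", "y_recursive = np.empty(T)\n", "y_lag1, y_lag2 = y0, y_1 # y_0 and y_{-1}\n", "for t in range(T):\n", "    y_recursive[t] = 𝛼0 + 𝛼1 * y_lag1 + 𝛼2 * y_lag2 + u[t]\n", "    y_lag1, y_lag2 = y_recursive[t], y_lag1\n", "\n", "np.allclose(y_matrix, y_recursive) # should be True"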
] }, { "cell_type": "code", "execution_count": null, "id": "20267b0c", "metadata": { "hide-output": false }, "outputs": [], "source": [ "N = 100\n", "\n", "for i in range(N):\n", " col = cm.viridis(np.random.rand()) # Choose a random color from viridis\n", " u = np.random.normal(0, 𝜎u, size=T)\n", " y_steady = A_inv @ (b_steady + u)\n", " plt.plot(np.arange(T)+1, y_steady, lw=0.5, color=col)\n", "\n", "plt.xlabel('t')\n", "plt.ylabel('y')\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "1b9b1024", "metadata": {}, "source": [ "## Computing Population Moments\n", "\n", "We can apply standard formulas for multivariate normal distributions to compute the mean vector and covariance matrix\n", "for our time series model\n", "\n", "$$\n", "y = A^{-1} (b + u) .\n", "$$\n", "\n", "You can read about multivariate normal distributions in this lecture [Multivariate Normal Distribution](https://python.quantecon.org/multivariate_normal.html).\n", "\n", "Let’s write our model as\n", "\n", "$$\n", "y = \\tilde A (b + u)\n", "$$\n", "\n", "where $ \\tilde A = A^{-1} $.\n", "\n", "Because linear combinations of normal random variables are normal, we know that\n", "\n", "$$\n", "y \\sim {\\mathcal N}(\\mu_y, \\Sigma_y)\n", "$$\n", "\n", "where\n", "\n", "$$\n", "\\mu_y = \\tilde A b\n", "$$\n", "\n", "and\n", "\n", "$$\n", "\\Sigma_y = \\tilde A (\\sigma_u^2 I_{T \\times T} ) \\tilde A^T\n", "$$\n", "\n", "Let’s write a Python class that computes the mean vector $ \\mu_y $ and covariance matrix $ \\Sigma_y $." ] }, { "cell_type": "code", "execution_count": null, "id": "deb82bca", "metadata": { "hide-output": false }, "outputs": [], "source": [ "class population_moments:\n", " \"\"\"\n", " Compute population moments mu_y, Sigma_y.\n", " ---------\n", " Parameters:\n", " alpha0, alpha1, alpha2, T, y_1, y0\n", " \"\"\"\n", " def __init__(self, alpha0, alpha1, alpha2, T, y_1, y0, sigma_u):\n", "\n", " # compute A\n", " A = np.identity(T)\n", "\n", " for i in range(T):\n", " if i-1 >= 0:\n", " A[i, i-1] = -alpha1\n", "\n", " if i-2 >= 0:\n", " A[i, i-2] = -alpha2\n", "\n", " # compute b\n", " b = np.full(T, alpha0)\n", " b[0] = alpha0 + alpha1 * y0 + alpha2 * y_1\n", " b[1] = alpha0 + alpha2 * y0\n", "\n", " # compute A inverse\n", " A_inv = np.linalg.inv(A)\n", "\n", " self.A, self.b, self.A_inv, self.sigma_u, self.T = A, b, A_inv, sigma_u, T\n", " \n", " def sample_y(self, n):\n", " \"\"\"\n", " Give a sample of size n of y.\n", " \"\"\"\n", " A_inv, sigma_u, b, T = self.A_inv, self.sigma_u, self.b, self.T\n", " us = np.random.normal(0, sigma_u, size=[n, T])\n", " ys = np.vstack([A_inv @ (b + u) for u in us])\n", "\n", " return ys\n", "\n", " def get_moments(self):\n", " \"\"\"\n", " Compute the population moments of y.\n", " \"\"\"\n", " A_inv, sigma_u, b = self.A_inv, self.sigma_u, self.b\n", "\n", " # compute mu_y\n", " self.mu_y = A_inv @ b\n", " self.Sigma_y = sigma_u**2 * (A_inv @ A_inv.T)\n", "\n", " return self.mu_y, self.Sigma_y\n", "\n", "\n", "my_process = population_moments(\n", " alpha0=10.0, alpha1=1.53, alpha2=-.9, T=80, y_1=28., y0=24., sigma_u=1)\n", " \n", "mu_y, Sigma_y = my_process.get_moments()\n", "A_inv = my_process.A_inv" ] }, { "cell_type": "markdown", "id": "f2f2d148", "metadata": {}, "source": [ "It is enlightening to study the $ \\mu_y, \\Sigma_y $’s implied by various parameter values.\n", "\n", "Among other things, we can use the class to exhibit how **statistical stationarity** of $ y $ prevails only for very special initial conditions.\n", "\n", "Let’s begin by 
generating $ N $ time realizations of $ y $ and plotting them together with the population mean $ \\mu_y $." ] }, { "cell_type": "code", "execution_count": null, "id": "3affdd02", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# plot N sample paths together with the population mean\n", "N = 100\n", "\n", "ys = my_process.sample_y(N) # draw N sample paths at once\n", "\n", "for i in range(N):\n", "    col = cm.viridis(np.random.rand()) # Choose a random color from viridis\n", "    plt.plot(ys[i, :], lw=0.5, color=col)\n", "\n", "plt.plot(mu_y, color='red') # population mean\n", "\n", "plt.xlabel('t')\n", "plt.ylabel('y')\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "3a26383d", "metadata": {}, "source": [ "Visually, notice how the variance across realizations of $ y_t $ increases as $ t $ increases.\n", "\n", "Let’s plot the population variance of $ y_t $ against $ t $." ] }, { "cell_type": "code", "execution_count": null, "id": "79ecaf28", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# plot variance\n", "plt.plot(Sigma_y.diagonal())\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "87207fb0", "metadata": {}, "source": [ "Notice how the population variance increases and asymptotes to a constant.\n", "\n", "Let’s print out the covariance matrix $ \\Sigma_y $ for a simple example: a first-order process with $ \\alpha_0 = \\alpha_2 = 0 $, $ \\alpha_1 = 0.8 $, and $ T = 6 $." ] }, { "cell_type": "code", "execution_count": null, "id": "d8159982", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# a small example so that the whole covariance matrix fits on the screen\n", "ar1_process = population_moments(alpha0=0, alpha1=.8, alpha2=0, T=6, y_1=0., y0=0., sigma_u=1)\n", "\n", "mu_y_ar1, Sigma_y_ar1 = ar1_process.get_moments()\n", "print(\"mu_y = \", mu_y_ar1)\n", "print(\"Sigma_y = \", Sigma_y_ar1)" ] }, { "cell_type": "markdown", "id": "0b1152c3", "metadata": {}, "source": [ "Notice that the covariances between $ y_t $ and $ y_{t-1} $ – the elements on the superdiagonal – are **not** identical.\n", "\n", "This is an indication that the time series represented by our $ y $ vector is not **stationary**.\n", "\n", "To make it stationary, we’d have to alter our system so that our **initial conditions** $ (y_0, y_{-1}) $ are not fixed numbers but instead a jointly normally distributed random vector with a particular mean and covariance matrix.\n", "\n", "We describe how to do that in the lecture [Linear State Space Models](https://python.quantecon.org/linear_models.html).\n", "\n", "But just to set the stage for that analysis, let’s print out the bottom right corner of $ \\Sigma_y $ for our original process with $ T = 80 $." ] }, { "cell_type": "code", "execution_count": null, "id": "5dbbd072", "metadata": { "hide-output": false }, "outputs": [], "source": [ "mu_y, Sigma_y = my_process.get_moments()\n", "print(\"bottom right corner of Sigma_y = \\n\", Sigma_y[72:,72:])" ] }, { "cell_type": "markdown", "id": "ea1c4d19", "metadata": {}, "source": [ "Please notice how the subdiagonal and superdiagonal elements seem to have converged.\n", "\n", "This is an indication that our process is asymptotically stationary.\n", "\n", "You can read about stationarity of more general linear time series models in the lecture [Linear State Space Models](https://python.quantecon.org/linear_models.html).\n", "\n", "There is a lot to be learned about the process by staring at the off-diagonal elements of $ \\Sigma_y $ corresponding to different time periods $ t $, but we resist the temptation to do so here." ] }, { "cell_type": "markdown", "id": "2fab36ad", "metadata": {}, "source": [ "## Moving Average Representation\n", "\n", "Let’s print out $ A^{-1} $ and stare at its structure\n", "\n", "- is it triangular or almost triangular or $ \\ldots $ ?\n", "\n", "\n", "To study the structure of $ A^{-1} $, we shall print it to just $ 3 $ decimal places.\n", "\n", "Let’s begin by printing out just the upper left hand corner of $ A^{-1} $." ] }, { "cell_type": "code", "execution_count": null, "id": "4a8d625a", "metadata": { "hide-output": false }, "outputs": [], "source": [ "with np.printoptions(precision=3, suppress=True):\n", " print(A_inv[0:7,0:7])" ] }, { "cell_type": "markdown", "id": "0c9ee149", "metadata": {}, "source": [ "Evidently, $ A^{-1} $ is a lower triangular matrix.\n", "\n", "Let’s print out the lower right hand corner of $ A^{-1} $ and stare at it." ] }, { "cell_type": "code", "execution_count": null, "id": "8bc400a4", "metadata": { "hide-output": false }, "outputs": [], "source": [ "with np.printoptions(precision=3, suppress=True):\n", " print(A_inv[72:,72:])" ] }, { "cell_type": "markdown", "id": "4acf10d4", "metadata": {}, "source": [ "Notice how, just before the diagonal, each row repeats the previous row’s pre-diagonal entries: because the coefficients $ \\alpha_1, \\alpha_2 $ do not change over time, $ A^{-1} $ is constant along each of its diagonals.\n", "\n", "Since $ A^{-1} $ is lower triangular, each row represents $ y_t $ for a particular $ t $ as the sum of\n", "\n", "- a time-dependent function $ A^{-1} b $ of the initial conditions incorporated in $ b $, and \n", "- a weighted sum of current and past values of the IID shocks $ \\{u_t\\} $ \n", "\n", "\n", "Thus, let $ \\tilde{A}=A^{-1} $.\n", "\n", "Evidently, for $ t\\geq0 $,\n", "\n", "$$\n", "y_{t+1}=\\sum_{i=1}^{t+1}\\tilde{A}_{t+1,i}b_{i}+\\sum_{i=1}^{t}\\tilde{A}_{t+1,i}u_{i}+u_{t+1}\n", "$$\n", "\n", "This is a **moving average** representation with time-varying coefficients.\n", "\n", "Just as system [(11.4)](#equation-eq-eqma) constitutes a\n", "**moving average** representation for $ y $, system [(11.3)](#equation-eq-eqar) constitutes an **autoregressive** representation for $ y $."
] }, { "cell_type": "markdown", "id": "e69c715f", "metadata": {}, "source": [ "## A Forward Looking Model\n", "\n", "Samuelson’s model is **backwards looking** in the sense that we give it **initial conditions** and let it\n", "run.\n", "\n", "Let’s now turn to model that is **forward looking**.\n", "\n", "We apply similar linear algebra machinery to study a **perfect\n", "foresight** model widely used as a benchmark in macroeconomics and\n", "finance.\n", "\n", "As an example, we suppose that $ p_t $ is the price of a stock and\n", "that $ y_t $ is its dividend.\n", "\n", "We assume that $ y_t $ is determined by second-order difference\n", "equation that we analyzed just above, so that\n", "\n", "$$\n", "y = A^{-1} \\left(b + u\\right)\n", "$$\n", "\n", "Our **perfect foresight** model of stock prices is\n", "\n", "$$\n", "p_{t} = \\sum_{j=0}^{T-t} \\beta^{j} y_{t+j}, \\quad \\beta \\in (0,1)\n", "$$\n", "\n", "where $ \\beta $ is a discount factor.\n", "\n", "The model asserts that the price of the stock at $ t $ equals the\n", "discounted present values of the (perfectly foreseen) future dividends.\n", "\n", "Form\n", "\n", "$$\n", "\\underset{\\equiv p}{\\underbrace{\\left[\\begin{array}{c}\n", "p_{1}\\\\\n", "p_{2}\\\\\n", "p_{3}\\\\\n", "\\vdots\\\\\n", "p_{T}\n", "\\end{array}\\right]}}=\\underset{\\equiv B}{\\underbrace{\\left[\\begin{array}{ccccc}\n", "1 & \\beta & \\beta^{2} & \\cdots & \\beta^{T-1}\\\\\n", "0 & 1 & \\beta & \\cdots & \\beta^{T-2}\\\\\n", "0 & 0 & 1 & \\cdots & \\beta^{T-3}\\\\\n", "\\vdots & \\vdots & \\vdots & \\vdots & \\vdots\\\\\n", "0 & 0 & 0 & \\cdots & 1\n", "\\end{array}\\right]}}\\left[\\begin{array}{c}\n", "y_{1}\\\\\n", "y_{2}\\\\\n", "y_{3}\\\\\n", "\\vdots\\\\\n", "y_{T}\n", "\\end{array}\\right]\n", "$$" ] }, { "cell_type": "code", "execution_count": null, "id": "d8ae8eae", "metadata": { "hide-output": false }, "outputs": [], "source": [ "𝛽 = .96" ] }, { "cell_type": "code", "execution_count": null, "id": "eec05328", "metadata": { "hide-output": false }, "outputs": [], "source": [ "# construct B\n", "B = np.zeros((T, T))\n", "\n", "for i in range(T):\n", " B[i, i:] = 𝛽 ** np.arange(0, T-i)" ] }, { "cell_type": "code", "execution_count": null, "id": "2d1c1a8f", "metadata": { "hide-output": false }, "outputs": [], "source": [ "B" ] }, { "cell_type": "code", "execution_count": null, "id": "69e1f095", "metadata": { "hide-output": false }, "outputs": [], "source": [ "𝜎u = 0.\n", "u = np.random.normal(0, 𝜎u, size=T)\n", "y = A_inv @ (b + u)\n", "y_steady = A_inv @ (b_steady + u)" ] }, { "cell_type": "code", "execution_count": null, "id": "43a6dd1e", "metadata": { "hide-output": false }, "outputs": [], "source": [ "p = B @ y" ] }, { "cell_type": "code", "execution_count": null, "id": "3f187ee2", "metadata": { "hide-output": false }, "outputs": [], "source": [ "plt.plot(np.arange(0, T)+1, y, label='y')\n", "plt.plot(np.arange(0, T)+1, p, label='p')\n", "plt.xlabel('t')\n", "plt.ylabel('y/p')\n", "plt.legend()\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "a63c6313", "metadata": {}, "source": [ "Can you explain why the trend of the price is downward over time?\n", "\n", "Also consider the case when $ y_{0} $ and $ y_{-1} $ are at the\n", "steady state." 
] }, { "cell_type": "code", "execution_count": null, "id": "20e1f9f0", "metadata": { "hide-output": false }, "outputs": [], "source": [ "p_steady = B @ y_steady\n", "\n", "plt.plot(np.arange(0, T)+1, y_steady, label='y')\n", "plt.plot(np.arange(0, T)+1, p_steady, label='p')\n", "plt.xlabel('t')\n", "plt.ylabel('y/p')\n", "plt.legend()\n", "\n", "plt.show()" ] } ], "metadata": { "date": 1680677906.85742, "filename": "time_series_with_matrices.md", "kernelspec": { "display_name": "Python", "language": "python3", "name": "python3" }, "title": "Univariate Time Series with Matrix Algebra" }, "nbformat": 4, "nbformat_minor": 5 }