{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "
\n", " \n", " \"QuantEcon\"\n", " \n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Numerical Linear Algebra and Factorizations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Contents\n", "\n", "- [Numerical Linear Algebra and Factorizations](#Numerical-Linear-Algebra-and-Factorizations) \n", " - [Overview](#Overview) \n", " - [Factorizations](#Factorizations) \n", " - [Continuous-Time Markov Chains (CTMCs)](#Continuous-Time-Markov-Chains-%28CTMCs%29) \n", " - [Banded Matrices](#Banded-Matrices) \n", " - [Implementation Details and Performance](#Implementation-Details-and-Performance) \n", " - [Exercises](#Exercises) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> You cannot learn too much linear algebra. – Benedict Gross" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overview\n", "\n", "In this lecture, we examine the structure of matrices and linear operators (e.g., dense, sparse, symmetric, tridiagonal, banded) and\n", "discuss how the structure can be exploited to radically increase the performance of solving large problems.\n", "\n", "We build on applications discussed in previous lectures: [linear algebra](linear_algebra.html), [orthogonal projections](orth_proj.html), and [Markov chains](finite_markov.html).\n", "\n", "The methods in this section are called direct methods, and they are qualitatively similar to performing Gaussian elimination to factor matrices and solve systems of equations. In [iterative methods and sparsity](iterative_methods_sparsity.html) we examine a different approach, using iterative algorithms, where we can think of more general linear operators.\n", "\n", "The list of specialized packages for these tasks is enormous and growing, but some of the important organizations to\n", "look at are [JuliaMatrices](https://github.com/JuliaMatrices) , [JuliaSparse](https://github.com/JuliaSparse), and [JuliaMath](https://github.com/JuliaMath)\n", "\n", "*NOTE*: As this section uses advanced Julia techniques, you may wish to review multiple-dispatch and generic programming in introduction to types, and consider further study on [generic programming](../more_julia/generic_programming.html).\n", "\n", "The theme of this lecture, and numerical linear algebra in general, comes down to three principles:\n", "\n", "1. **Identify structure** (e.g., [symmetric, sparse, diagonal](https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/index.html#Special-matrices-1)) matrices in order to use **specialized algorithms.** \n", "1. **Do not lose structure** by applying the wrong numerical linear algebra operations at the wrong times (e.g., sparse matrix becoming dense) \n", "1. Understand the **computational complexity** of each algorithm, given the structure of the inputs. 
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Setup" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "hide-output": true }, "outputs": [], "source": [ "using InstantiateFromURL\n", "# optionally add arguments to force installation: instantiate = true, precompile = true\n", "github_project(\"QuantEcon/quantecon-notebooks-julia\", version = \"0.8.0\")" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "hide-output": true }, "outputs": [], "source": [ "using LinearAlgebra, Statistics, BenchmarkTools, SparseArrays, Random\n", "Random.seed!(42); # seed random numbers for reproducibility" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Computational Complexity\n", "\n", "Ask yourself whether the following is a **computationally expensive** operation as the matrix **size increases**\n", "\n", "- Multiplying two matrices? \n", " \n", " - *Answer*: It depends. Multiplying two diagonal matrices is trivial. \n", " \n", "- Solving a linear system of equations? \n", " \n", " - *Answer*: It depends. If the matrix is the identity, the solution is the vector itself. \n", " \n", "- Finding the eigenvalues of a matrix? \n", " \n", " - *Answer*: It depends. The eigenvalues of a triangular matrix are the diagonal elements. \n", " \n", "\n", "\n", "As the goal of this section is to move toward numerical methods with large systems, we need to understand how well algorithms scale with the size of matrices, vectors, etc. This is known as [computational complexity](https://en.wikipedia.org/wiki/Computational_complexity). As we saw in the answer to the questions above, the algorithm - and hence the computational complexity - changes based on matrix structure.\n", "\n", "While this notion of complexity can work at various levels, such as the number of [significant digits](https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Arithmetic_functions) for basic mathematical operations, the amount of memory and storage required, or the amount of time, we will typically focus on the time complexity.\n", "\n", "For time complexity, the size $ N $ is usually the dimensionality of the problem, although occasionally the key will be the number of non-zeros in the matrix or the width of bands. For our applications, time complexity is best thought of as the number of floating point operations (e.g., addition, multiplication) required." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Notation\n", "\n", "Complexity of algorithms is typically written in [Big O](https://en.wikipedia.org/wiki/Big_O_notation) notation, which provides bounds on the scaling of the computational complexity with respect to the size of the inputs.\n", "\n", "Formally, if the number of operations required for a problem size $ N $ is $ f(N) $, we can write this as $ f(N) = O(g(N)) $ for some $ g(N) $ - typically a polynomial.\n", "\n", "The interpretation is that there exist some constants $ M $ and $ N_0 $ such that\n", "\n", "$$\n", "f(N) \\leq M g(N), \\text{ for } N > N_0\n", "$$\n", "\n", "For example, the complexity of finding an LU Decomposition of a dense matrix is $ O(N^3) $, which should be read as there being a constant where\n", "eventually the number of floating point operations required to decompose a matrix of size $ N\\times N $ grows cubically.\n", "\n", "Keep in mind that these are asymptotic results intended for understanding the scaling of the problem, and the constant can matter for a given\n", "fixed size.\n", "\n", "For example, the number of operations required for an [LU decomposition](https://en.wikipedia.org/wiki/LU_decomposition#Algorithms) of a dense $ N \\times N $ matrix is $ f(N) = \\frac{2}{3} N^3 $, ignoring the $ N^2 $ and lower terms. Other methods of solving a linear system may have different constants of proportionality, even if they have the same scaling, $ O(N^3) $." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Rules of Computational Complexity\n", "\n", "You will sometimes need to think through how [combining algorithms](https://en.wikipedia.org/wiki/Big_O_notation#Properties) changes complexity. For example, if you use\n", "\n", "1. an $ O(N^3) $ operation $ P $ times, then it simply changes the constant. The complexity remains $ O(N^3) $. \n", "1. one $ O(N^3) $ operation and one $ O(N^2) $ operation, then you take the max. The complexity remains $ O(N^3) $. \n", "1. a repetition of an $ O(N) $ operation that itself uses an $ O(N) $ operation, you take the product. The complexity becomes $ O(N^2) $. \n", "\n", "\n", "With this, we have an important word of caution: Dense-matrix multiplication is an [expensive operation](https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Matrix_algebra) for unstructured matrices. The naive version is $ O(N^3) $ while the fastest-known algorithms (e.g., Coppersmith-Winograd) are roughly $ O(N^{2.37}) $. In practice, it is reasonable to crudely approximate with $ O(N^3) $ when doing an analysis, in part since the higher constant factors of the better scaling algorithms dominate the better complexity until matrices become very large.\n", "\n", "Of course, modern libraries use highly tuned and numerically stable [algorithms](https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm) to multiply matrices and exploit the computer architecture, memory cache, etc., but this simply lowers the constant of proportionality and they remain roughly approximated by $ O(N^3) $.\n", "\n", "A consequence is that, since many algorithms require matrix-matrix multiplication, it is often not possible to go below that order without further matrix structure.\n", "\n", "That is, changing the constant of proportionality for a given size can help, but in order to achieve better scaling you need to identify matrix structure (e.g., tridiagonal, sparse) and ensure that your operations do not lose it." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Losing Structure\n", "\n", "As a first example of a structured matrix, consider a [sparse array](https://docs.julialang.org/en/v1/stdlib/SparseArrays/index.html)." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "hide-output": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "nnz(A) = 47\n", "nnz(invA) = " ] }, { "name": "stdout", "output_type": "stream", "text": [ "100\n" ] } ], "source": [ "A = sprand(10, 10, 0.45) # random sparse 10x10, 45 percent filled with non-zeros\n", "\n", "@show nnz(A) # counts the number of non-zeros\n", "invA = sparse(inv(Array(A))) # Julia won't invert sparse, so convert to dense with Array.\n", "@show nnz(invA);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This increase from less than 50 to 100 percent dense demonstrates that significant sparsity can be lost when computing an inverse.\n", "\n", "The results can be even more extreme. Consider a tridiagonal matrix of size $ N \\times N $\n", "that might come out of a Markov chain or a discretization of a diffusion process," ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "5×5 Tridiagonal{Float64,Array{Float64,1}}:\n", " 0.8 0.2 ⋅ ⋅ ⋅ \n", " 0.1 0.8 0.1 ⋅ ⋅ \n", " ⋅ 0.1 0.8 0.1 ⋅ \n", " ⋅ ⋅ 0.1 0.8 0.1\n", " ⋅ ⋅ ⋅ 0.2 0.8" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "N = 5\n", "A = Tridiagonal([fill(0.1, N-2); 0.2], fill(0.8, N), [0.2; fill(0.1, N-2);])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The number of non-zeros here is approximately $ 3 N $, linear, which scales well for huge matrices into the millions or billions\n", "\n", "But consider the inverse" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "5×5 Array{Float64,2}:\n", " 1.29099 -0.327957 0.0416667 -0.00537634 0.000672043\n", " -0.163978 1.31183 -0.166667 0.0215054 -0.00268817\n", " 0.0208333 -0.166667 1.29167 -0.166667 0.0208333\n", " -0.00268817 0.0215054 -0.166667 1.31183 -0.163978\n", " 0.000672043 -0.00537634 0.0416667 -0.327957 1.29099" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "inv(A)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, the matrix is fully dense and has $ N^2 $ non-zeros.\n", "\n", "This also applies to the $ A' A $ operation when forming the normal equations of linear least squares." 
] }, { "cell_type": "code", "execution_count": 6, "metadata": { "hide-output": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "nnz(A) / 20 ^ 2 = 0.2825\n", "nnz(A' * A) / 21 ^ 2 = " ] }, { "name": "stdout", "output_type": "stream", "text": [ "0.800453514739229\n" ] } ], "source": [ "A = sprand(20, 21, 0.3)\n", "@show nnz(A)/20^2\n", "@show nnz(A'*A)/21^2;" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see that a 30 percent dense matrix becomes almost full dense after the product is taken.\n", "\n", "*Sparsity/Structure is not just for storage*: Matrix size can sometimes become important (e.g., a 1 million by 1 million tridiagonal matrix needs to store 3 million numbers (i.e., about 6MB of memory), where a dense one requires 1 trillion (i.e., about 1TB of memory)).\n", "\n", "But, as we will see, the main purpose of considering sparsity and matrix structure is that it enables specialized algorithms, which typically\n", "have a lower computational order than unstructured dense, or even unstructured sparse, operations.\n", "\n", "First, create a convenient function for benchmarking linear solvers" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "benchmark_solve (generic function with 1 method)" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "using BenchmarkTools\n", "function benchmark_solve(A, b)\n", " println(\"A\\\\b for typeof(A) = $(string(typeof(A)))\")\n", " @btime $A \\ $b\n", "end" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then, take away structure to see the impact on performance," ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "hide-output": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "A\\b for typeof(A) = Tridiagonal{Float64,Array{Float64,1}}\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " 28.726 μs (9 allocations: 47.75 KiB)\n", "A\\b for typeof(A) = SparseMatrixCSC{Float64,Int64}\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " 715.620 μs (69 allocations: 1.06 MiB)\n", "A\\b for typeof(A) = Array{Float64,2}\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " 28.800 ms (5 allocations: 7.65 MiB)\n" ] } ], "source": [ "N = 1000\n", "b = rand(N)\n", "A = Tridiagonal([fill(0.1, N-2); 0.2], fill(0.8, N), [0.2; fill(0.1, N-2);])\n", "A_sparse = sparse(A) # sparse but losing tridiagonal structure\n", "A_dense = Array(A) # dropping the sparsity structure, dense 1000x1000\n", "\n", "# benchmark solution to system A x = b\n", "benchmark_solve(A, b)\n", "benchmark_solve(A_sparse, b)\n", "benchmark_solve(A_dense, b);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This example shows what is at stake: using a structured tridiagonal matrix may be 10-20 times faster than using a sparse matrix, which is 100 times faster than\n", "using a dense matrix.\n", "\n", "In fact, the difference becomes more extreme as the matrices grow. Solving a tridiagonal system is $ O(N) $, while that of a dense matrix without any structure is $ O(N^3) $. The complexity of a sparse solution is more complicated, and scales in part by the `nnz(N)`, i.e., the number of nonzeros." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Matrix Multiplication\n", "\n", "While we write matrix multiplications in our algebra with abundance, in practice the computational operation scales very poorly without any matrix structure.\n", "\n", "Matrix multiplication is so important to modern computers that the constant of scaling is small using proper packages, but the order is still roughly $ O(N^3) $ in practice (although smaller in theory, as discussed above).\n", "\n", "Sparse matrix multiplication, on the other hand, is $ O(N M_A M_B) $ where $ M_A $ is the number of nonzeros per row of $ A $ and $ M_B $ is the number of non-zeros per column of $ B $.\n", "\n", "By the rules of computational order, that means any algorithm requiring a matrix multiplication of dense matrices requires at least $ O(N^3) $ operation.\n", "\n", "The other important question is what is the structure of the resulting matrix. For example, multiplying an upper triangular matrix by a lower triangular matrix" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "5×5 UpperTriangular{Float64,Array{Float64,2}}:\n", " 0.299976 0.176934 0.0608682 0.20465 0.409653\n", " ⋅ 0.523923 0.127154 0.512531 0.235328\n", " ⋅ ⋅ 0.600588 0.682868 0.330638\n", " ⋅ ⋅ ⋅ 0.345419 0.0312986\n", " ⋅ ⋅ ⋅ ⋅ 0.471043" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "N = 5\n", "U = UpperTriangular(rand(N,N))" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "5×5 Adjoint{Float64,UpperTriangular{Float64,Array{Float64,2}}}:\n", " 0.299976 0.0 0.0 0.0 0.0\n", " 0.176934 0.523923 0.0 0.0 0.0\n", " 0.0608682 0.127154 0.600588 0.0 0.0\n", " 0.20465 0.512531 0.682868 0.345419 0.0\n", " 0.409653 0.235328 0.330638 0.0312986 0.471043" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "L = U'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But the product is fully dense (e.g., think of a Cholesky multiplied by itself to produce a covariance matrix)" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "5×5 Array{Float64,2}:\n", " 0.0899855 0.0530758 0.018259 0.0613901 0.122886\n", " 0.0530758 0.305801 0.0773883 0.304736 0.195775\n", " 0.018259 0.0773883 0.380579 0.487749 0.253435\n", " 0.0613901 0.304736 0.487749 0.890193 0.441042\n", " 0.122886 0.195775 0.253435 0.441042 0.555378" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "L * U" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On the other hand, a tridiagonal matrix times a diagonal matrix is still tridiagonal - and can use specialized $ O(N) $ algorithms." 
] }, { "cell_type": "code", "execution_count": 12, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "5×5 Tridiagonal{Float64,Array{Float64,1}}:\n", " 0.0156225 0.00390564 ⋅ ⋅ ⋅ \n", " 0.0436677 0.349342 0.0436677 ⋅ ⋅ \n", " ⋅ 0.0213158 0.170526 0.0213158 ⋅ \n", " ⋅ ⋅ 0.00790566 0.0632453 0.00790566\n", " ⋅ ⋅ ⋅ 0.19686 0.787442" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "A = Tridiagonal([fill(0.1, N-2); 0.2], fill(0.8, N), [0.2; fill(0.1, N-2);])\n", "D = Diagonal(rand(N))\n", "D * A" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Factorizations\n", "\n", "When you tell a numerical analyst you are solving a linear system using direct methods, their first question is “which factorization?”.\n", "\n", "Just as you can factor a number (e.g., $ 6 = 3 \\times 2 $) you can factor a matrix as the product of other, more\n", "convenient matrices (e.g., $ A = L U $ or $ A = Q R $, where $ L, U, Q, $ and $ R $ have properties such as being triangular, [orthogonal](https://en.wikipedia.org/wiki/Orthogonal_matrix), etc.)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Inverting Matrices\n", "\n", "On paper, since the [Invertible Matrix Theorem](https://en.wikipedia.org/wiki/Invertible_matrix#The_invertible_matrix_theorem) tells us that a unique solution is\n", "equivalent to $ A $ being invertible, we often write the solution to $ A x = b $ as\n", "\n", "$$\n", "x = A^{-1} b\n", "$$\n", "\n", "What if we do not (directly) use a factorization?\n", "\n", "Take a simple linear system of a dense matrix," ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "4-element Array{Float64,1}:\n", " 0.5682240701809245\n", " 0.40245385575255055\n", " 0.1825995192132288\n", " 0.06160128039631019" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "N = 4\n", "A = rand(N,N)\n", "b = rand(N)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On paper, we try to solve the system $ A x = b $ by inverting the matrix," ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "4-element Array{Float64,1}:\n", " -0.0339069840407679\n", " 0.7988200873225003\n", " 0.9963711951331815\n", " -0.9276352098500461" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x = inv(A) * b" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we will see throughout, inverting matrices should be used for theory, not for code. The classic advice that you should [never invert a matrix](https://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix) may be [slightly exaggerated](https://arxiv.org/abs/1201.6035), but is generally good advice.\n", "\n", "Solving a system by inverting a matrix is always a little slower, is potentially less accurate, and will sometimes lose crucial sparsity compared to using factorizations. Moreover, the methods used by libraries to invert matrices are frequently the same factorizations used for computing a system of equations.\n", "\n", "Even if you need to solve a system with the same matrix multiple times, you are better off factoring the matrix and using the solver rather than calculating an inverse." 
] }, { "cell_type": "code", "execution_count": 15, "metadata": { "hide-output": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " 442.521 μs (68 allocations: 205.28 KiB)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " 360.323 μs (96 allocations: 155.59 KiB)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " 207.730 μs (6 allocations: 102.63 KiB)\n" ] } ], "source": [ "N = 100\n", "A = rand(N,N)\n", "M = 30\n", "B = rand(N,M)\n", "function solve_inverting(A, B)\n", " A_inv = inv(A)\n", " X = similar(B)\n", " for i in 1:size(B,2)\n", " X[:,i] = A_inv * B[:,i]\n", " end\n", " return X\n", "end\n", "\n", "function solve_factoring(A, B)\n", " X = similar(B)\n", " A = factorize(A)\n", " for i in 1:size(B,2)\n", " X[:,i] = A \\ B[:,i]\n", " end\n", " return X\n", "end\n", "\n", "\n", "\n", "@btime solve_inverting($A, $B)\n", "@btime solve_factoring($A, $B)\n", "\n", "# even better, use the built-in feature for multiple RHS\n", "@btime $A \\ $B;" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Triangular Matrices and Back/Forward Substitution\n", "\n", "Some matrices are already in a convenient form and require no further factoring.\n", "\n", "For example, consider solving a system with an `UpperTriangular` matrix," ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "3×3 UpperTriangular{Float64,Array{Float64,2}}:\n", " 1.0 2.0 3.0\n", " ⋅ 5.0 6.0\n", " ⋅ ⋅ 9.0" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "b = [1.0, 2.0, 3.0]\n", "U = UpperTriangular([1.0 2.0 3.0; 0.0 5.0 6.0; 0.0 0.0 9.0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This system is especially easy to solve using [back substitution](https://en.wikipedia.org/wiki/Triangular_matrix#Forward_and_back_substitution). In particular, $ x_3 = b_3 / U_{33}, x_2 = (b_2 - x_3 U_{23})/U_{22} $, etc." ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "3-element Array{Float64,1}:\n", " 0.0\n", " 0.0\n", " 0.3333333333333333" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "U \\ b" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A `LowerTriangular` matrix has similar properties and can be solved with forward substitution.\n", "\n", "The computational order of back substitution and forward substitution is $ O(N^2) $ for dense matrices. Those fast algorithms are a key reason that factorizations target triangular structures.\n", "\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### LU Decomposition\n", "\n", "The $ LU $ decomposition finds a lower triangular matrix $ L $ and an upper triangular matrix $ U $ such that $ L U = A $.\n", "\n", "For a general dense matrix without any other structure (i.e., not known to be symmetric, tridiagonal, etc.) this is the standard approach to solve a system and exploit the speed of back and forward substitution using the factorization.\n", "\n", "The computational order of LU decomposition itself for a dense matrix is $ O(N^3) $ - the same as Gaussian elimination - but it tends\n", "to have a better constant term than others (e.g., half the number of operations of the QR decomposition). 
For structured\n", "or sparse matrices, that order drops.\n", "\n", "We can see which algorithm Julia will use for the `\\` operator by looking at the `factorize` function for a given\n", "matrix." ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "LU{Float64,Array{Float64,2}}\n", "L factor:\n", "4×4 Array{Float64,2}:\n", " 1.0 0.0 0.0 0.0\n", " 0.563082 1.0 0.0 0.0\n", " 0.730109 0.912509 1.0 0.0\n", " 0.114765 0.227879 0.115228 1.0\n", "U factor:\n", "4×4 Array{Float64,2}:\n", " 0.79794 0.28972 0.765939 0.496278\n", " 0.0 0.82524 0.23962 -0.130989\n", " 0.0 0.0 -0.447888 0.374303\n", " 0.0 0.0 0.0 0.725264" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "N = 4\n", "A = rand(N,N)\n", "b = rand(N)\n", "\n", "Af = factorize(A) # chooses the right factorization, LU here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this case, it provides an $ L $ and $ U $ factorization (with [pivoting](https://en.wikipedia.org/wiki/LU_decomposition#LU_factorization_with_full_pivoting) ).\n", "\n", "With the factorization complete, we can solve different `b` right hand sides." ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "4-element Array{Float64,1}:\n", " -0.49842605495731557\n", " -0.11835721499695576\n", " 1.5055538550184817\n", " 0.07694455957797537" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Af \\ b" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "4-element Array{Float64,1}:\n", " -0.6456780666059364\n", " -0.2601515737654759\n", " 1.116889566296631\n", " 0.5405293106660054" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "b2 = rand(N)\n", "Af \\ b2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In practice, the decomposition also includes a $ P $ which is a [permutation matrix](https://en.wikipedia.org/wiki/Permutation_matrix) such\n", "that $ P A = L U $." 
] }, { "cell_type": "code", "execution_count": 21, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "true" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Af.P * A ≈ Af.L * Af.U" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also directly calculate an LU decomposition with `lu` but without the pivoting," ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "LU{Float64,Array{Float64,2}}\n", "L factor:\n", "4×4 Array{Float64,2}:\n", " 1.0 0.0 0.0 0.0\n", " 0.730109 1.0 0.0 0.0\n", " 0.563082 1.09588 1.0 0.0\n", " 0.114765 0.249728 0.122733 1.0\n", "U factor:\n", "4×4 Array{Float64,2}:\n", " 0.79794 0.28972 0.765939 0.496278\n", " 0.0 0.753039 -0.229233 0.254774\n", " 0.0 0.0 0.490832 -0.410191\n", " 0.0 0.0 0.0 0.725264" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "L, U = lu(A, Val(false)) # the Val(false) provides a solution without permutation matrices" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And we can verify the decomposition" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "true" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "A ≈ L * U" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To see roughly how the solver works, note that we can write the problem $ A x = b $ as $ L U x = b $. Let $ U x = y $, which breaks the\n", "problem into two sub-problems.\n", "\n", "$$\n", "\\begin{aligned}\n", "L y &= b\\\\\n", "U x &= y\n", "\\end{aligned}\n", "$$\n", "\n", "As we saw above, this is the solution to two triangular systems, which can be efficiently done with back or forward substitution in $ O(N^2) $ operations.\n", "\n", "To demonstrate this, first using" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "4-element Array{Float64,1}:\n", " 0.759344042755733\n", " -0.4146467815590597\n", " 0.707411438334498\n", " 0.05580508465599857" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y = L \\ b" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "true" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x = U \\ y\n", "x ≈ A \\ b # Check identical" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The LU decomposition also has specialized algorithms for structured matrices, such as a `Tridiagonal`" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "LU{Float64,Tridiagonal{Float64,Array{Float64,1}}}" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "N = 1000\n", "b = rand(N)\n", "A = Tridiagonal([fill(0.1, N-2); 0.2], fill(0.8, N), [0.2; fill(0.1, N-2);])\n", "factorize(A) |> typeof" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This factorization is the key to the performance of the `A \\ b` in this case. 
For Tridiagonal matrices, the\n", "LU decomposition is $ O(N^2) $.\n", "\n", "Finally, just as a dense matrix without any structure uses an LU decomposition to solve a system,\n", "so will the sparse solvers" ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "SuiteSparse.UMFPACK.UmfpackLU{Float64,Int64}" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "A_sparse = sparse(A)\n", "factorize(A_sparse) |> typeof # dropping the tridiagonal structure to just become sparse" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "hide-output": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "A\\b for typeof(A) = Tridiagonal{Float64,Array{Float64,1}}\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " 28.293 μs (9 allocations: 47.75 KiB)\n", "A\\b for typeof(A) = SparseMatrixCSC{Float64,Int64}\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " 719.411 μs (69 allocations: 1.06 MiB)\n" ] } ], "source": [ "benchmark_solve(A, b)\n", "benchmark_solve(A_sparse, b);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With sparsity, the computational order is related to the number of non-zeros rather than the size of the matrix itself." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Cholesky Decomposition\n", "\n", "For real, symmetric, [positive semi-definite](https://en.wikipedia.org/wiki/Definiteness_of_a_matrix) matrices, a Cholesky decomposition is a specialized example of an LU decomposition where $ L = U' $.\n", "\n", "The Cholesky is directly useful on its own (e.g., [Classical Control with Linear Algebra](../time_series_models/classical_filtering.html)), but it is also an efficient factorization to use in solving symmetric positive semi-definite systems.\n", "\n", "As always, symmetry allows specialized algorithms." ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "BunchKaufman{Float64,Array{Float64,2}}" ] }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ], "source": [ "N = 500\n", "B = rand(N,N)\n", "A_dense = B' * B # an easy way to generate a symmetric positive semi-definite matrix\n", "A = Symmetric(A_dense) # flags the matrix as symmetric\n", "\n", "factorize(A) |> typeof" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, the $ A $ decomposition is [Bunch-Kaufman](https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/index.html#LinearAlgebra.bunchkaufman) rather than\n", "Cholesky, because Julia doesn’t know that the matrix is positive semi-definite. 
We can manually factorize with a Cholesky," ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "Cholesky{Float64,Array{Float64,2}}" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "cholesky(A) |> typeof" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Benchmarking," ] }, { "cell_type": "code", "execution_count": 31, "metadata": { "hide-output": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "A\\b for typeof(A) = Symmetric{Float64,Array{Float64,2}}\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " 4.701 ms (8 allocations: 2.16 MiB)\n", "A\\b for typeof(A) = Array{Float64,2}\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " 6.414 ms (5 allocations: 1.92 MiB)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " 2.836 ms (7 allocations: 1.91 MiB)\n" ] } ], "source": [ "b = rand(N)\n", "cholesky(A) \\ b # use the factorization to solve\n", "\n", "benchmark_solve(A, b)\n", "benchmark_solve(A_dense, b)\n", "@btime cholesky($A, check=false) \\ $b;" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### QR Decomposition\n", "\n", "Previously, we learned about applications of the QR decomposition to solving the linear least squares.\n", "\n", "While in principle the solution to the least-squares problem\n", "\n", "$$\n", "\\min_x \\| Ax -b \\|^2\n", "$$\n", "\n", "is $ x = (A'A)^{-1}A'b $, in practice note that $ A'A $ becomes dense and calculating the inverse is rarely a good idea.\n", "\n", "The QR decomposition is a decomposition $ A = Q R $ where $ Q $ is an orthogonal matrix (i.e., $ Q'Q = Q Q' = I $) and $ R $ is\n", "an upper triangular matrix.\n", "\n", "Given the previous derivation, we showed that we can write the least-squares problem as\n", "the solution to\n", "\n", "$$\n", "R x = Q' b\n", "$$\n", "\n", "where, as discussed above, the upper-triangular structure of $ R $ can be solved easily with back substitution.\n", "\n", "The `\\` operator solves the linear least-squares problem whenever the given `A` is rectangular" ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "3-element Array{Float64,1}:\n", " 0.4011747124872585\n", " 0.0736108001071848\n", " -0.2347806801272458" ] }, "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [ "N = 10\n", "M = 3\n", "x_true = rand(3)\n", "\n", "A = rand(N,M) .+ randn(N)\n", "b = rand(N)\n", "x = A \\ b" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To manually use the QR decomposition in solving linear least squares:" ] }, { "cell_type": "code", "execution_count": 33, "metadata": { "hide-output": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Q * R ≈ A = true\n" ] }, { "data": { "text/plain": [ "3-element Array{Float64,1}:\n", " 0.4011747124872585\n", " 0.07361080010718478\n", " -0.2347806801272457" ] }, "execution_count": 33, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Af = qr(A)\n", "Q = Af.Q\n", "R = [Af.R; zeros(N - M, M)] # Stack with zeros\n", "@show Q * R ≈ A\n", "x = R \\ Q'*b # simplified QR solution for least squares" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This stacks the `R` with zeros, but the more specialized algorithm would not multiply directly\n", "in that way.\n", "\n", "In some cases, if an LU is not available for a 
particular matrix structure, the QR factorization\n", "can also be used to solve systems of equations (i.e., not just LLS). This tends to be about 2 times slower than the LU\n", "but is of the same computational order.\n", "\n", "Deriving the approach, where we can now use the inverse since the system is square and we assumed $ A $ was non-singular,\n", "\n", "$$\n", "\\begin{aligned}\n", "A x &= b\\\\\n", "Q R x &= b\\\\\n", "Q^{-1} Q R x &= Q^{-1} b\\\\\n", "R x &= Q' b\n", "\\end{aligned}\n", "$$\n", "\n", "where the last step uses the fact that $ Q^{-1} = Q' $ for an orthogonal matrix.\n", "\n", "Given the decomposition, the solution for dense matrices is of computational\n", "order $ O(N^2) $. To see this, look at the order of each operation.\n", "\n", "- Since $ R $ is an upper-triangular matrix, it can be solved quickly through back substitution with computational order $ O(N^2) $ \n", "- A transpose operation is of order $ O(N^2) $ \n", "- A matrix-vector product is also $ O(N^2) $ \n", "\n", "\n", "In all cases, the order would drop depending on the sparsity pattern of the\n", "matrix (and corresponding decomposition). A key benefit of a QR decomposition is that it tends to\n", "maintain sparsity.\n", "\n", "Without implementing the full process, you can form a QR\n", "factorization with `qr` and then use it to solve a system" ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "hide-output": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "A \\ b = [-1.478040941944558, 2.09875752634393, -0.6857071090150306, -0.16849538664184543, 2.012803045177841]\n", "qr(A) \\ b = [-1.4780409419445582, 2.09875752634393, -0.685707109015032, -0.16849538664184413, 2.0128030451778414]\n" ] } ], "source": [ "N = 5\n", "A = rand(N,N)\n", "b = rand(N)\n", "@show A \\ b\n", "@show qr(A) \\ b;" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Spectral Decomposition\n", "\n", "A spectral decomposition, also known as an [eigendecomposition](https://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix), finds all of the eigenvectors and eigenvalues to decompose a square matrix `A` such that\n", "\n", "$$\n", "A = Q \\Lambda Q^{-1}\n", "$$\n", "\n", "where $ Q $ is a matrix made of the eigenvectors of $ A $ as columns, and $ \\Lambda $ is a diagonal matrix of the eigenvalues. Only square, [diagonalizable](https://en.wikipedia.org/wiki/Diagonalizable_matrix) matrices have an eigendecomposition (where a matrix is not diagonalizable if it does not have a full set of linearly independent eigenvectors).\n", "\n", "In Julia, whenever you ask for a full set of eigenvectors and eigenvalues, it decomposes using an algorithm appropriate for the matrix type. 
For example, symmetric, Hermitian, and tridiagonal matrices have specialized algorithms.\n", "\n", "To see this," ] }, { "cell_type": "code", "execution_count": 35, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "2.803627108839096e-15" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "A = Symmetric(rand(5, 5)) # symmetric matrices have real eigenvectors/eigenvalues\n", "A_eig = eigen(A)\n", "Λ = Diagonal(A_eig.values)\n", "Q = A_eig.vectors\n", "norm(Q * Λ * inv(Q) - A)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Keep in mind that a real matrix may have complex eigenvalues and eigenvectors, so if you attempt to check `Q * Λ * inv(Q) - A` - even for a positive-definite matrix - it may not be a real number due to numerical inaccuracy." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Continuous-Time Markov Chains (CTMCs)\n", "\n", "In the previous lecture on discrete-time Markov chains, we saw that the transition probability\n", "between state $ x $ and state $ y $ was summarized by the matrix $ P(x, y) := \\mathbb P \\{ X_{t+1} = y \\,|\\, X_t = x \\} $.\n", "\n", "As a brief introduction to continuous time processes, consider the same state space as in the discrete\n", "case: $ S $ is a finite set with $ n $ elements $ \\{x_1, \\ldots, x_n\\} $.\n", "\n", "A **Markov chain** $ \\{X_t\\} $ on $ S $ is a sequence of random variables on $ S $ that have the **Markov property**.\n", "\n", "In continuous time, the [Markov Property](https://en.wikipedia.org/wiki/Markov_property) is more complicated, but intuitively is\n", "the same as the discrete-time case.\n", "\n", "That is, knowing the current state is enough to know probabilities for future states. Or, for realizations $ x(\\tau)\\in S, \\tau \\leq t $,\n", "\n", "$$\n", "\\mathbb P \\{ X(t+s) = y \\,|\\, X(t) = x, X(\\tau) = x(\\tau) \\text{ for } 0 \\leq \\tau \\leq t \\} = \\mathbb P \\{ X(t+s) = y \\,|\\, X(t) = x\\}\n", "$$\n", "\n", "Heuristically, consider a time period $ t $ and a small step forward, $ \\Delta $. Then the probability to transition from state $ i $ to\n", "state $ j $ is\n", "\n", "$$\n", "\\mathbb P \\{ X(t + \\Delta) = j \\,|\\, X(t) \\} = \\begin{cases} q_{ij} \\Delta + o(\\Delta) & i \\neq j\\\\\n", " 1 + q_{ii} \\Delta + o(\\Delta) & i = j \\end{cases}\n", "$$\n", "\n", "where the $ q_{ij} $ are “intensity” parameters governing the transition rate, and $ o(\\Delta) $ is [little-o notation](https://en.wikipedia.org/wiki/Big_O_notation#Little-o_notation). That is, $ \\lim_{\\Delta\\to 0} o(\\Delta)/\\Delta = 0 $.\n", "\n", "Just as in the discrete case, we can summarize these parameters by an $ N \\times N $ matrix, $ Q \\in R^{N\\times N} $.\n", "\n", "Recall that in the discrete case every element is weakly positive and every row must sum to one. With continuous time, however, the rows of $ Q $ sum to zero, where the diagonal contains the negative value of jumping out of the current state. That is,\n", "\n", "- $ q_{ij} \\geq 0 $ for $ i \\neq j $ \n", "- $ q_{ii} \\leq 0 $ \n", "- $ \\sum_{j} q_{ij} = 0 $ \n", "\n", "\n", "The $ Q $ matrix is called the intensity matrix, or the infinitesimal generator of the Markov chain. 
For example,\n", "\n", "$$\n", "Q = \\begin{bmatrix} -0.1 & 0.1 & 0 & 0 & 0 & 0\\\\\n", " 0.1 &-0.2 & 0.1 & 0 & 0 & 0\\\\\n", " 0 & 0.1 & -0.2 & 0.1 & 0 & 0\\\\\n", " 0 & 0 & 0.1 & -0.2 & 0.1 & 0\\\\\n", " 0 & 0 & 0 & 0.1 & -0.2 & 0.1\\\\\n", " 0 & 0 & 0 & 0 & 0.1 & -0.1\\\\\n", " \\end{bmatrix}\n", "$$\n", "\n", "In the above example, transitions occur only between adjacent states with the same intensity (except for a ``bouncing back’’ of the bottom and top states).\n", "\n", "Implementing the $ Q $ using its tridiagonal structure" ] }, { "cell_type": "code", "execution_count": 36, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "6×6 Tridiagonal{Float64,Array{Float64,1}}:\n", " -0.1 0.1 ⋅ ⋅ ⋅ ⋅ \n", " 0.1 -0.2 0.1 ⋅ ⋅ ⋅ \n", " ⋅ 0.1 -0.2 0.1 ⋅ ⋅ \n", " ⋅ ⋅ 0.1 -0.2 0.1 ⋅ \n", " ⋅ ⋅ ⋅ 0.1 -0.2 0.1\n", " ⋅ ⋅ ⋅ ⋅ 0.1 -0.1" ] }, "execution_count": 36, "metadata": {}, "output_type": "execute_result" } ], "source": [ "using LinearAlgebra\n", "α = 0.1\n", "N = 6\n", "Q = Tridiagonal(fill(α, N-1), [-α; fill(-2α, N-2); -α], fill(α, N-1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we can use `Tridiagonal` to exploit the structure of the problem.\n", "\n", "Consider a simple payoff vector $ r $ associated with each state, and a discount rate $ ρ $. Then we can solve for\n", "the expected present discounted value in a way similar to the discrete-time case.\n", "\n", "$$\n", "\\rho v = r + Q v\n", "$$\n", "\n", "or rearranging slightly, solving the linear system\n", "\n", "$$\n", "(\\rho I - Q) v = r\n", "$$\n", "\n", "For our example, exploiting the tridiagonal structure," ] }, { "cell_type": "code", "execution_count": 37, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "6×6 Tridiagonal{Float64,Array{Float64,1}}:\n", " 0.15 -0.1 ⋅ ⋅ ⋅ ⋅ \n", " -0.1 0.25 -0.1 ⋅ ⋅ ⋅ \n", " ⋅ -0.1 0.25 -0.1 ⋅ ⋅ \n", " ⋅ ⋅ -0.1 0.25 -0.1 ⋅ \n", " ⋅ ⋅ ⋅ -0.1 0.25 -0.1\n", " ⋅ ⋅ ⋅ ⋅ -0.1 0.15" ] }, "execution_count": 37, "metadata": {}, "output_type": "execute_result" } ], "source": [ "r = range(0.0, 10.0, length=N)\n", "ρ = 0.05\n", "\n", "A = ρ * I - Q" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that this $ A $ matrix is maintaining the tridiagonal structure of the problem, which leads to an efficient solution to the\n", "linear problem." ] }, { "cell_type": "code", "execution_count": 38, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "6-element Array{Float64,1}:\n", " 38.15384615384615\n", " 57.23076923076923\n", " 84.92307692307693\n", " 115.07692307692311\n", " 142.76923076923077\n", " 161.84615384615384" ] }, "execution_count": 38, "metadata": {}, "output_type": "execute_result" } ], "source": [ "v = A \\ r" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The $ Q $ is also used to calculate the evolution of the Markov chain, in direct analogy to the $ ψ_{t+k} = ψ_t P^k $ evolution with the transition matrix $ P $ of the discrete case.\n", "\n", "In the continuous case, this becomes the system of linear differential equations\n", "\n", "$$\n", "\\dot{ψ}(t) = Q(t)^T ψ(t)\n", "$$\n", "\n", "given the initial condition $ \\psi(0) $ and where the $ Q(t) $ intensity matrix is allowed to vary with time. 
In the simplest case of a constant $ Q $ matrix, this is a simple constant-coefficient system of linear ODEs with coefficients $ Q^T $.\n", "\n", "If a stationary equilibrium exists, note that $ \\dot{ψ}(t) = 0 $, and the stationary solution $ ψ^{*} $ needs to satisfy\n", "\n", "$$\n", "0 = Q^T ψ^{*}\n", "$$\n", "\n", "Notice that this is of the form $ 0 ψ^{*} = Q^T ψ^{*} $ and hence is equivalent to finding the eigenvector associated with the $ \\lambda = 0 $ eigenvalue of $ Q^T $.\n", "\n", "With our example, we can calculate all of the eigenvalues and eigenvectors" ] }, { "cell_type": "code", "execution_count": 39, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "Eigen{Float64,Float64,Array{Float64,2},Array{Float64,1}}\n", "values:\n", "6-element Array{Float64,1}:\n", " -0.3732050807568874\n", " -0.29999999999999993\n", " -0.19999999999999998\n", " -0.09999999999999995\n", " -0.026794919243112274\n", " 0.0\n", "vectors:\n", "6×6 Array{Float64,2}:\n", " -0.149429 -0.288675 0.408248 0.5 -0.557678 0.408248\n", " 0.408248 0.57735 -0.408248 1.38778e-16 -0.408248 0.408248\n", " -0.557678 -0.288675 -0.408248 -0.5 -0.149429 0.408248\n", " 0.557678 -0.288675 0.408248 -0.5 0.149429 0.408248\n", " -0.408248 0.57735 0.408248 7.63278e-16 0.408248 0.408248\n", " 0.149429 -0.288675 -0.408248 0.5 0.557678 0.408248" ] }, "execution_count": 39, "metadata": {}, "output_type": "execute_result" } ], "source": [ "λ, vecs = eigen(Array(Q'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Indeed, there is a $ \\lambda = 0 $ eigenvalue, which is associated with the last column in the eigenvector. To turn that into a probability,\n", "we need to normalize it." ] }, { "cell_type": "code", "execution_count": 40, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "6-element Array{Float64,1}:\n", " 0.16666666666666657\n", " 0.16666666666666657\n", " 0.1666666666666667\n", " 0.16666666666666682\n", " 0.16666666666666685\n", " 0.16666666666666663" ] }, "execution_count": 40, "metadata": {}, "output_type": "execute_result" } ], "source": [ "vecs[:,N] ./ sum(vecs[:,N])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Multiple Dimensions\n", "\n", "A frequent case in discretized models is dealing with Markov chains with multiple “spatial” dimensions (e.g., wealth and income).\n", "\n", "After discretizing a process to create a Markov chain, you can always take the Cartesian product of the set of states in order to\n", "enumerate it as a single state variable.\n", "\n", "To see this, consider states $ i $ and $ j $ governed by infinitesimal generators $ Q $ and $ A $." ] }, { "cell_type": "code", "execution_count": 41, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "8×8 Array{Float64,2}:\n", " -0.2 0.1 0.0 0.0 0.1 0.0 0.0 0.0\n", " 0.1 -0.3 0.1 0.0 0.0 0.1 0.0 0.0\n", " 0.0 0.1 -0.3 0.1 0.0 0.0 0.1 0.0\n", " 0.0 0.0 0.1 -0.2 0.0 0.0 0.0 0.1\n", " 0.2 0.0 0.0 0.0 -0.3 0.1 0.0 0.0\n", " 0.0 0.2 0.0 0.0 0.1 -0.4 0.1 0.0\n", " 0.0 0.0 0.2 0.0 0.0 0.1 -0.4 0.1\n", " 0.0 0.0 0.0 0.2 0.0 0.0 0.1 -0.3" ] }, "execution_count": 41, "metadata": {}, "output_type": "execute_result" } ], "source": [ "function markov_chain_product(Q, A)\n", " M = size(Q, 1)\n", " N = size(A, 1)\n", " Q = sparse(Q)\n", " Qs = blockdiag(fill(Q, N)...) 
# create diagonal blocks of every operator\n", " As = kron(A, sparse(I(M)))\n", " return As + Qs\n", "end\n", "\n", "α = 0.1\n", "N = 4\n", "Q = Tridiagonal(fill(α, N-1), [-α; fill(-2α, N-2); -α], fill(α, N-1))\n", "A = sparse([-0.1 0.1\n", " 0.2 -0.2])\n", "M = size(A,1)\n", "L = markov_chain_product(Q, A)\n", "L |> Matrix # display as a dense matrix" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This provides the combined Markov chain for the $ (i,j) $ process. To see the sparsity pattern," ] }, { "cell_type": "code", "execution_count": 42, "metadata": { "hide-output": false }, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAlgAAAGQCAIAAAD9V4nPAAAABmJLR0QA/wD/AP+gvaeTAAAgAElEQVR4nO3deVxU5f4H8O9zWIZ9FURABTX3JXcNFVRc0lRyS6w0M9Nyybpqadu9ldVtM71WlmtZmv7SzK5ZbrmvuJYK7jsgu4DAwJzn98dwERFlzuHMPLN83q95vRpmnqf5iB6+PMs5h3HOCQAAwFFJogMAAACIhEIIAAAODYUQAAAcGgohAAA4NBRCAABwaM6iAwAAgJYSExNXrFhBRE8++WSjRo3KXs/JydmwYcPJkyc9PDwGDRrUvHlzcRmtC0aERER//fXX0qVLZ82adfr0adFZAADUS0xM7Nixo/G8uI4dOyYmJpa9NXPmzBUrVnh7e2dlZXXq1Om///2vuJjWheE8QiJ6+OGHIyMjt2/f/u233w4cOFB0HAAAlSZMmODs7Dx//nwimjx5cnFx8YIFC4xvFRQUuLu7G5+/+eabR48eRS00woiQiOjYsWM///xzcHCw6CAAANWyY8eO3r17G5/37t17x44dZW+VVUEiKi4u9vT0tHQ4a4U1QgAA+5GcnFz2O33NmjVv3Lhxb5ukpKQFCxZs2rTJstGsFwph1XJzc2eMn5aVnmX8skePHv379y97N7hBTRd3V/Mm0GfxgqtKOzHXAHIPN0ccqBZDIc87o7QTc3Inr4fMEQfE4pSltMv//d8vK35YV+HFtWvXSpJERM7OziUlJcYXS0pKXFxcKrS8du1av3793n777Q4dOqiKbIdQCKu2e/du6UBxpD7E+OXF06fmf3Gq7N1pu98KrFvDrAF41gH58LNKe7F6L0qN3zBHHqiW4kx5T1+lnVhgF6njanPEAbFkvpFI2UaNxMQNrq6uI0eOLP8iY8z4JDQ0tGwUeP369dDQ0PLNkpOTe/bs+cILL7z88svVSG1vUAhN4uzsTPpKXpecJJ9gH7N/vFto1W3uwVT1ArPTBRFzIV6srJc7/jbtk8xLlBZCTnLDhk3i4uIqfXfAgAE//fTT8OHDiWjNmjUDBgwgohMnTvj5+bm5ufXq1WvUqFHTpk2rfnJ7gkJYLXXb1zP7vCgR82lGroGkz1DWKyjaTHmgWpgLC+zE03cp61QDf5v2iXOD0kJID9zqP3ny5M6dOw8aNIgxdvz48c8//5yIpk6dGhsbe/bs2YsXL27fvn379u1EVLdu3UWLFqmPbkdQCImIRowYcejQoatXr44fP/7ll19eu3Ztq1atTOnYenB7c2cjImJOLDSOX1qsoIfvw+RZ33yJoDpY2FBlhdDFlwX3NFscsCshISEnTpzYvHkzEfXq1cvb25uI5s+f7+vrq9frJ02aVNay/CZSB4dCSET02WefFRYWln0ZFhZmSq/gBjXbDutotlB3kRq8ZLi2ikryTGvOWONZ5g0E1cBCB7MLC3iuqVdvkOpPIWfzz8CDCOqmRh/cwNvbe/DgweVfadq0qfFJZGSkos9yECiEREQV1pNN4eqpi/9qjORsqRMxXWtIrebJR54jXsUxQESs/kQW2MUCoUAl5iS1/tqw7zEqvlV12xrdWOTzFggFQnAVhdCEHwKgCE6oV8PD33P00udDGlt0/wKr2Vdq8SlJVSxJsoixUsPXLJIIqsGrgdRuOblWsd+YBcVIbb4h5mSZUACOCSNCZRhjzR5t1e/NOP/wAAGfHv6E5N2Yn3qLZx2q5G2PulLj11nIYxbPBWow//ZOURvlxHd58q+VjAlcfKX6U1jk86iC9o3LikeExA3myeK4UAhNUuCmb9+tQ3iruk37tAx+qKbAJMy3Fev8C886zG/+QblJpE8nFz/yqMuCerAa0SRVPHkWrJp7mNR6ATWczlM28pxjVJRKkhu5h7Ma3VhwT6wLOgI1U6NVrRGCUiiEJrkamvrUwudEp7iD+bdl/m1FpwCNeNZn9Scx0SlACBWF0JSNAqAI1ggBAMChYUQIACCMijVC7BrVHAohAIA4KqZGsUaoNUyNAgCAQ8OIEABAGM4Nikd4mBrVGgohAIA4cgkKoXAohAAA4sjFagohzrbRFNYIAQDAoWFECAAgDlc+NYpdo1pDIQQAEEfNGqEBU6PawtQoAAA4NIwIAQDEwa5RK4BCCAAgjoo1QhRCraEQAgAIwzAitAJYIwQAAIeGESEAgDiyikusKb1IN1QBhRAAQBgVU6OMy6iE2sLUKAAAODSMCAEAxFEzNYrNMhpDIQQAEAe7Rq0ACiEAgDAMm2WsANYIAQDAoWFECAAgDu4+YQVQCAEAhFExNcpkFEKNYWoUAAAcGkaEAADi4PQJK4BCCAAgDHaNWgMUQgAAcbhB8QgPI0KtYY0QAAAcGkaEAADCMFnxiJBhalRrKIQAAOLIsvKpURRCjWFqFAAAHBpGhCZ5MeaafHQC823Jaj5KnpGi49CpIxd3bjh6/uS1rPRcb1+PsMigLn0fbhfdxNnFSXAyfTpP3cQzD1BhMhEnt1rMvwOr2Yd0QYKDgQqG2zxtG0/bQYXJVJJHbiHMtyWr2Zc864lOZj9UTI1is4zmUAhN0iC4kCev58nrKXE2qzVAavwmuYcJSZJ47NJnM1Yc2Z1U4fUfv9xcu37NKbOf6D6wrZBgZMiXz87hl5eQobD8y/z6T3TqDVZ3jPTQP8jZS0w2UIrL/PIy+dwc0mfc9fKdQ+ANcg8Xlc6uqCmEmBrVGKZGleI8eb1hT1+eddDyn73xx33P9njv3ipodPV86vQR8z6f+aNs+ftXF96Q98XxC19WqIKlZD2/+LW8byAVXLV0MFDBcFs+8px86o0KVfB//ncIZB6wdDC7JBsUPzAi1BoKoSr6DPnQUzw30ZKfuXfTibfHfaMvLH5ws+/nbvz6vbWWiVSqJE8+9BS/dfLBrXhuouFgPBXnWC
YUqMXlE6/w1N+raKXPlA89xXNPWyQSgHmhEBIRGQyGkydP7t27Nysry9Q+JXn86AvEDebMdUdWeu6sUV/KBpN+E1zy71+P7LJckZZPvWnq7wT5F+STr5s5DlQLv7yMJ683qakhXz76AvESMyeyc0yWFT8wNao1FEJKSkoKCwsbOnToq6++GhERsWjRIhM78rwkfv0ns2Yrs/TjX/NuFZjYmHM+743VZs1z57Pykvj1/1PQ/sbPVY4dQRjDbfnsHAXt885Y7BCwW2qmRlEINYZCSL6+vps3bz59+vSuXbtWrVo1ceLE/Px8E/sqqgGqyTL/fdV+RV3+PnT+8plkM+Upj19fo3DFgvPra8yVBqqHp20jfbqyLtcscQgAmBUKIYWEhLRo0cL4vG3btnq9PifH1HUsnnmw8u0hmjpz4nLmTcVLawe2WWTglb5TeZcdZsgBGuDpuxR3yTpIBlPnKqASMidZVvbAiFBrKIR3+eKLLx555JHQ0NAKr5eU3GchhJdQUYq5UyVfqXTzXhVSrqrppRQvuG6BLmAhKv5quIEKzX4I2DEVa4QohJrDeYR3rFu3bsGCBTt3VjLEKS6+715Nbihg5kxFRFXuFK1UQX6R5kkqoWJAbCgg4kTm/raBcrKa6Q0LHAIAZoVCWGrjxo3jx4//7bffGjZseO+77u7uRJUvHDJdTTNHo8Cavip6BdXy0zxJJdyCKf+iwi41UQWtlC5YRSfmZvZDwJ7JBsWbz3EeodYwNUpEtHnz5jFjxqxfv75tW4WXZXEPJ1d/84S6o2HLOk7Oiq+d1qStJS4Fx3xbKu/SyhxJoPqYTwvFfdzDyTXADFkchtIFQkyNmgFGhHT69OlBgwb16dNn+/bt27dvJ6Knn3763mXCSrGaj1pgcOPj79m2a6ODf54yvYu3r0fbrk3MF6kMq/kov/GLwi59zRQGqomFPEqJ7xEp+DnLavbF+L5azHD3ib/++mv58uVE9PTTT5ftBCSikpKS06dPJyQk3Lx5c/r06ZKEgVApfCPI2dl5ypQpjRo1yvofg8G0mQonN6neeDOnK/XczDhF7Ue90t9VZ4nfclhIf+bVSEEHz3osdKDZ4kD1eESw0EEK2ks6KdJChwCY6OTJk1FRUb6+vn5+fl26dDl58s7u8YSEhP79+y9fvvy1116TZcyv3oERIT300EMffvihio5SwxnkZtLAsfradGnU/8moDT/sMaVxwxZ1Rk7qbe5IpZgTa/4+PzCCuAk7epiz1PxDYi7mjwUqSY3fMKTvNvFsQqnhDFFXn7cbTPkaIXvgCPLzzz8fM2bM66+/TkRpaWmff/75woULjW916tTpypUrZ8+erXQnhCPDiFAlVucpFjnBkp/4+n/GtIuuerYzpHbgJ6tf0rm7WiCSEQvoLLX4N7GqVjGZJDV7lwV2sUgoUMstVGq7mJy9q2zIaj/J6r1ggUR2Tus1wl27dsXGxhqfx8bG7tql+NxQB4RCqJyTm9TkLan5Rxb+WFc3l//8Mm34hFhJuu+STMeezb/b/c/QujUsGYyIWPgIqd13D9pz6FpDaruU1RltwVCgEvNv7/TIr8y78X1bGA+BFh9bMBSYKjk5OSio9PafwcHBN27cEJvHJmBqVAn32izkUSlyPLnVEvL5Lq7OMz57evDY7iv+8/vODceyM3KNr7t5uHbs0Xzw2JioPsI2ZLKg7k4xe/mlpTxlPc/5687rPs1YrQGs7rO4GaEt8WooddnMr//Er//EMw/cubK2e21Ws69Ub4KoQ8AOqdoss3LlykOHDpV/bePGjcbNL66urmUXANHr9TqdTqOg9gyF0CQf/V576fL/I9dA0UGIiBo0C39rwXOyzDNv5mTevOUb4BUQ7OPiagV/lU4erP5EVn8ileSV3aHelEk2sEbMiYU/wcKfIEMhFaVwQwHTBVvJIWBXVBXCjh07jh591/wKY6UTRWFhYdeuXTM+v379uokb4B2cFfz0tAUpt3TW9iNAkliNEL8aIRY5a14pZy/yekh0CNCIkxt5ROAMCXNRVQjr1avXp0+fSt8cNGjQqlWrRowYQUSrVq2Ki4sjooSEhMDAwMhIS5xbbItQCAEA7MekSZOioqJ69+7NGLt48eKCBQuIaMaMGbGxsVOnTu3WrVtRURERdezY0c/Pb+vWraLzWgUUQgAAcbjGJ9QHBQUdP378zz//JKLu3bu7u7sT0TfffOPt7a3T6b7++uuyls7O+PlfCt8IAABxzHBlGXd39379+pV/pUGDBsYniq8i6Rhw+gQAADg0jAgBAMQxw4gQlEIhBAAQB4XQCqAQAgCII3PFhQ11UGtYIwQAAIeGESEAgDiYGrUCKIQAAOKomRpFIdQYpkYBAMChYUQIACAOpkatAAohAIA42DVqBVAIAQDEkZUXNhRCrWGNEAAAHBpGhAAA4nDlU6MYEmoNhRAAQBxMjVoBTI0CAIBDw4gQAEAcjAitAAohAIAwXHkhxGmEmkMhBAAQByNCK4A1QgAAcGgYEQIAiIMRoRVAIQQAEEdmxJniLqApTI0CAIBDw4gQAEAYzpWPCEFrKIQAAOJgjdAKoBACAIijYo0QI0itYY0QAAAcGkaEAADCcFlSeqkYjhGh1lAIAQDEUTM1ap4kDgyF0GYV3uC5iaTPJBdf8qjDvBqJDlSqqEB/+uilmzeyiPOgUP8mrSPdPFxFh/qfvDP89mUqziFXf+bVmNzDRAeCarDWQwBsDgqhreEyT17HLyzgt/6+63X3cKnOUyxiHDm5C0pGl5KSv3l/3c4NRwpv68te1Lm7dn304edfj6vXRFzVMRTyy0vkK9/R7SvlX2Y+TVnE8yxsKDEsltsOKz4E1FBx+gSmRrWG49+m6NPlA0PkY5Mq/gggooJrctKHhp3RlbxlEd/N+e2J9q9v+r/95asgERUV6LesPTiiwxtL/r2ei7hsPs89bdgVIye+V6EKEhG/dUo+MVXeP4iKUi0fDNSw4kNAHS4zpQ9MjWoOhdB26NMNewfwzAMPalNwTd4Xx7MPWypTqc9n/jjv9VWGEsP9GsgG+ct/rfn4H99bMhUR8ZwT8r5B95bAu9pkHTbsfQy10AZY8SEANg2F0EZwWT4yjm5frrql4bZ8+DnSZ5g/U6nfVuz5fu5GU1quXrBl/bc7zZ3nDn2WfPgZKsmrumXBdfnIc8TvW8hBPCs+BKpFlhQ/MDWqNRRCunbt2siRIx9++OFWrVqNGzfuxo0bohNVgievq+IX4fKKUuWzn5kzzh238wrnvr7K9Pbz3lidd6vAfHnKk8/PpcIUExvzrMP8+k9mzQPVYbWHQDWpmBrF6ROaQyEkIurfv/8PP/zw/fff37p164knnhAdpxL84jfK2l/9gUpyzRSmvN9X7ctIzTG9fXZG7oYfdpsvzx2GAn5luaIe/OLXZsoC1We1h0B1GTfLKHqA1lAIKTw8/Mknn2zWrFmLFi1mzJhx+LD1rS4UJvOcv5R1kfU8bbtZwtxtx3+PKu6yQXEXFXj6LjIoG3ry3ES6fck8caB6rPgQADuA0ydKHTlypKCgY
M6cOaNGjRKdpSKem6TiHFqed8YCvzpeOH1dcZdTiruokZeoohPPO8s8IrSOAtVlzYdANXEukazwj4ZBodZQCEu9+OKLWVlZRUVFq1evvvfdw4cPh4eHl7WcNWuWRcMVZ6rpVZSmdY5KZGconn3KTLvFOWfMvAcz11vvNw0Us+JDoLpkpvhGuyiEWkMhLLV//34i+vnnn/v06XPlyhVvb+/y77Zo0eLbb781Pvfx8bF0OGdVn+jip3WOSnj5uBfkFynq4uPnae4qSETM2UfN2VYW+aaBYlZ8CIAdQCG8y8CBA/Py8q5cudKsWbPyr7u6upaNCC2PedS1WC+lwuvVTEvOVtgl2Exh7qLuj2+RbxooZc2HQDVx5SNCEdelsHPYLEN///13amoqERkMhrlz5/r7+zdo0EB0qLt5PaT8BzRjQd3NEuZuUX1aKu3SpW8rcySpgAX1IKbw9zy3EObT1DxxoHqs+BCoJs4lpQ9MjWoOhZD++uuv5s2bBwUF+fj4rFy58pdfftHpdKJDVSTVfkpRe1azF7mFmClMef1GRuncFVxT29XNpf+TUebLU+6T/FnIo4p6SLWfIsKPGCtltYdAdRlHhIoeKIRaQyGk+Pj4tLS08+fPZ2VlHTp0qHPnzqITVYJFjCX32qa2lnRSo9fNmKac4FD/p6cqqDcjJ/WpVaeG+fKUJzV8jZzcTG3tFsrqjTdnHKgWqz0EwA6gEJby8fFxdbWauwXdy8lNaruUnDxMaSs1e5+8HjJ3ojLPz4qL6mPSbGfrqEbj33jc3Hnu8IyUWs0zaZAn6aQ2C8nJ0/yZQC0rPgSqQ9WVZUSHtjsohDaD+TSVOq+r4hZ6kqvUcg6rHW+pUEREkpP08crJ/eIfeXCzLn1bzVnzsourRfdnsZDHpNZfVXFfHtcAqcMK5tfaUqFAJas9BKqFS4ofmMDXGgqhLWE+zZ2i/mARY0m6ZxWTSaxmH6cuW1i4gEvEubq5/GvR87OXvVDpjtDQiKB3Fo+fs+ZlLx8BN4pjtQY6ddnMQvpXctNByZXVHePUbTsLsMb5cLiX1R4CYNNw+oStcQ2Qmr5LjV7jadspN5EXpZOrP/Oow4J6kM4ipyXcB2Osz/BOvYd1PH300tHdSanXMokoOMy/dVSjpm0jLXDi4IN41pPaLKSiNJ62jd++QvpMpqtBXg1ZUAw5e1fdHayKtR4C6qg4fQKbZTSHQmibnDxZSH8K6W9tBwRjrGmbyKZtIkUHqYwuiIU/YW3fMVDJWg8BpXAeoTVAIQQAEEfFDSUwItQaCiEAgF1JSEgwXhJy9OjR7dq1Ex3HBmCzDACAMFyWlD4ePCI8fvx4jx49IiIiIiMje/Tocfz4cYv9WWwXRoQAAMJw5VOjnD/o/Im5c+eOGzfuH//4BxHduHFj7ty5S5YsqV5G+4cRIQCA/dizZ0+PHj2Mz3v06LFnzx6xeWwCRoQAAMJofvpEcnJyjRqlVzEMCgq6ceOG6myOA4UQAEAcWc0d6pcvX75r167yr23bts3JyYmIdDqdXq83vqjX693dBVzFwuagEAIACKNmjZBYdHT02LFjy79orIJEFB4efvXqVePzq1evhoU98Ip0QEQohAAANqdOnTrR0dGVvhUXF7dy5cr4+HjG2MqVK+Pi4iyczRahEAIACKNqjfBBb06aNKlbt27R0dGMsYyMDGwZNQUKIQCAMKpOn3hQ+8DAwCNHjuzdu5eIHnnkESu8zbgVQiEEALArOp2ue/fuolPYEhRCAABhuKpdo+bJ4rhQCAEAhOFccWHDzSc0h0IIACAM7kdoDXCJNQAAcGgYEQIACKPuotugLRRCAABxMDVqBTA1CgAADg0jQgAAYTiXlG8DxYhQYyiEAADCYI3QGqAQAgAIg9MnrAHWCAEAwKFhRAgAIIyq+xGCxlAIAQCEkbW++wSogKlRAABwaBgRAgAIw2WJZKV9MCLUGAohAIAwWCO0BpgaBQAAh4YRIQCAMCpGhJga1RwKIQCAMCpOqMeVZTSHQggAIIyaESGuNao1FELQEs/Yy5PXU+YBXpRMnMgthAV0YLUGssAuYoPdSs05uuZQ4taTmZfTC24VePh7hjSq1bhX89aPt9N5uYnNBvak/CHgFPs3MfyMtQH4SwKN3L4k/zWDZ+y+68W8WzzvDL/yPQvoLLX4mDzrWT6XbJC3ff77zq+3FRfoy17MKdDn3MhK+vPUlk9/6ztzYLsnOlk+GNibSg+BquCi29YAhRA0wLMS5MOjSZ913waZ+wx7+0ttl7CAzpYMVlxY/P24RWe2n75fg/yMvDXTVtz46+qAd4cyhhknUKnKQ+C+HZXfholjalRrOH0Cqq3gqnx4TNU/Aopz5IRnKO+MRTIREXHO105f+YAqWGbft7u2z99sgUhgn0w8BMBaoRDecfTo0Z9++kl0Ctsjn/gH6TNMalqSK//9qpnj3HHi16PH1iWY2HjzJxuST143ax6wVwoOgXv7ykzpA9ca1RwKYanU1NT+/fvHx8eLDmJjeMZeRYsiPPMAT99hvjzlbf3sN9Mbc5lvm/eH+cKAvVJ6CFTszpnSBy4tozkUwlITJ04cP3686BS2hyevt0AXFW78fS3t/E1FXRK3/F2UV2SmPGCvqvnvWUUhxIhQcyiERESrV69mjD3++OOig9igzANKe/DM/eYIUsGlQxeUdinRl1w7ftkcYcCeKT8EwNpg1yhlZGS8+eab27Zty8i47yz/tWvX3n33XePzqKioHj16WCqdteNFyYr7FCrvotytlGxVvXI0TwL2Tc0hcFd/jPDEQyGkKVOmTJ8+PSws7AGFkHOu15eehVZSUmKpaABg52TlhRCFU3OOXggzMjJWr16t1+s3bdqUk5NjMBiGDx8+e/bshx56qHyz2rVrl40IoTymq8WLbynrowsxT5a7+NT0VdMrRE0vcGRqDgGwMo5eCL29vTdu3Gh8fuHCha1btz7//PPBwcFiU9mSgA6Ul6SoBwuwxGVcIjrUV9rF2dU5vFVdc4QBe6b8EChPxeYXbBrVnKMXQldX19jYWOPzEydOMMbKvgRTsFoD+JXlCrsMNFOY8mo1C6tRLzj9goKNo416NNV56cwXCeySikOgPDW7QDE1qjXsGr2jbt26K1euFJ3CxrDALoouqM3827KgaPPlufNBjMX+o5+C9hLrOfVR8+UBe6X0EKiAc0nxA5dY0xoK4R2+vr5Dhw4VncL2SC0+Ihc/k5o6+0gtP7fYTWRaDmjdvP/DJjbuObVvrWZhZs0D9krBIQBWCYUQqs0jQmq7tOofBM4+UttF5Kl46U41xtjwOU891LVxlS07PhXVY2pfC0QC+2TiIVAZVZdY0/wP4OhQCEEDLKCjU9RvLPCR+zbw7+AUtcHydyV0cXd95rsJPab0cXFzqbSBh7/n4H+PiPvgCdx6AqqjykPgflRdYg3/VjXm6JtlQDMeEVLHn3j6Lp78C8/cT4XJRJzcajH/jqzWQBYUIyqX5Cz1mt6/w5OPHFlzKHHL35lXMm5n5XvV8A5uGNIktnmbYR3cvN1FZQO7Uskh
ALYBhRC0xGp0ZTW6ik5RCd9Q/+6Te3ef3Ft0ELBzSg8BnD5hDVAIAQCEwZVlrAEKIQCAMKruJoFCqDFslgEAAIeGESEAgDCcmNIT5LFGqDkUQgAAYbBGaA0wNQoAAA4NI0IAAGFUbZYBjaEQAgAIg6lRa4BCCAAgDE6otwZYIwQAAIeGESEAgDCWnBo9d+7c/PnzMzMzBwwYMGzYsPJvJSQkHD58+OLFixMnTqxdu7a6/7/twogQAEAYNXefUCUrKysqKsrDw+PRRx/9xz/+sWzZsrK3iouLR4wYsW3bts8///zGjRva/MFsCkaEAAD277vvvmvevPn7779PRM7Ozv/85z+feeYZ41suLi7nzp0jIj8/B729MEaEAADCqBgRqhsU7t+/PyYmxvg8Ojr61KlTOTk5Wv5JbBlGhAAAwshaX2ItPT2d33MPey8vr5SUlK5dS+8PFRgYyBhLTk729fVV9NH2CoUQAEAYdXefWLp06aZNm8q/tG/fPicnJyJq3bp1SUlJhQ7vvPOOu7u7Xq83fqnX6znnHh4eqmPbGRRCAAAb06dPnwkTJpR/xVgFiejq1auVdjl48OCVK1eMz69cueLs7BwSEmLWkDYEhRAAQBhVp09QWGhohw4dFPUaMmTICy+88M4773h5eS1fvnzAgAGurq7Hjx/Py8uLiopS9L+yPyiEAADCcE73rOhV1UXVB/Xu3bt9+/atW7eOjIw8efLk5s2biej7778/f/58VFRUbGzsxYsXc3NzhwwZotPpduzYER4erupzbBIKIQCA/ZMkafXq1cePH8/Ozm7Xrp2npycRzZo1y7iguHTp0uLi4rLGjjZrikIIACCMhS+63apVq/Jf+vv7G5844NVkykMhBAAQBneotwY4oR4AABwaRoQAAMKomRpVOIKEKqEQAgAIo+aEesyNag2FEABAGFWXWMOIUGNYIwQAAIeGESEAgDAWO6EeHgCFEKJDFBsAACAASURBVABAGHWXWANtYWoUAAAcGkaEAADCqLsNk1miODAUQgAAYTS/MS+ogEJokkFxTTidJvJnFIz5ZNCWocRwdM+ZU0cupidnu3nqaob5d+zRPLxesOhcRER0+zJP38kLk6kkj7mFkG9LFtCRmIvoWPbDwtcahUqhEJokOiaC01Ei4qRj1IRRIyIn0aHA5umLSv7v6y3LPvlvVnpuhbdadmww6d3hbbo0EhKMiHjWIZ44m2cdvPOK8T+u/lK9iSxiLEk6QdEANIbBjVJFnI7JtIkoX3QSsG3pKdnjes2e89rKe6sgEZ04cG58nw/+8+ZqWbb8TBjn57+Q9z9evgreoc+SE9+T9/SjgmsWD2aHjKdPKHuIzmx/UAjVyZJpK1GR6Bhgq25l5T/f+/2TCRce0IZz/u2nG+bO+tFiqYzkpA/lpNnE5Qe04bmnDfuHkD7DYqnslUxM6QOFUHMohKrlcdorOgPYqn+OW3jlXKopLX+Y9/u2XxLMnacMv7mZn59vUtOCq/KxSWaOA2AJKIQky3JAOTNmzDCxI6dkTslmzQZ26dD2Uzt/O2p6+3mvryrWl5gvzx28RE581/RtiTx9B0/bZtZEds94+oSiB2GzjNawWYaIKCsr6+LFi76+vkSk0ynaAnCWqJaZUoG9WrPoT0Xtr124efDPk1F9WlXdtHp4xj7KO6esy5XlLKiHmfI4Aln56RCYGtUcRoSl/Pz8/P39/f39PTw8TO/FKYXoQUspABXIMt+7+YTSXrs3HjdHmAp42lbFXdJ3klxsjjAOgpPiESEKoeZQCEs1btw4ODh4yJAhly9fvvddWb5ftSshKjBrMLAzmTdzbucWKu117cJNc4Sp6PYlxV0MBVSUon0SAAtCISTG2JYtW86fP3/06FE3N7dBgwbdW/YKCh5Q7fRmjQd2Ji9HzW9Ot7ItcrpO8S0VnXhxjuZBHIfMmdIHTqjXHNYIiTHWs2dPIvL09FywYIGfn9+lS5fq1atXvo2np+f9/wduZg4IdsU/yFtFr8BgX82TVMI1UEUn5hqkeRDHgTVCa4AR4V2Ki4s5587Opv9+oEMhBEV8/D0DlFe1iMYW2ZPl1VBxFxc/0tUwQxQAy0EhpAMHDqxbt+7ChQtHjx4dNWpU586da9eubWJfRqG4Ejwowhjr1u9hpb2i+7cxR5gKWM3eirsExxLD5QbVU3P6BGgNhZCI6IsvvujTp8+YMWMiIyPXrVvHmIn/1BgjYZeCBNv15JS+Ts4KikezdvVadmpgvjxlmG8rFhilpIPEIsaZLY5DkJU/MDWqOawRUseOHTdv3qyiI6P6RAGa5wG7F9k4dOi4Hqu+MulfnbOL0/RPnzL5l7PqYo3f4PsGkWzSFjBWeyTzbWHuSPZNxSAPg0LNYUSoWhCjdqIzgK16+cP49jFNq2zGGHv181HN29e3QKTST/RtJTX/iFjVPxmYf3up6XsWiARgbiiEajCKkKgHvnugmrOL09y1rzz2VJcHtPHwcvtg+YuPj4mxVKhSLHy41GYROT9odysLjZM6/EiSq8VS2Ss1p0+Izmx/8KNcEUYUzKgHo0dwP0KoJlc3l39+M+6r315tF91EcrrrSPTx8xw6rsfaE/+OHdxBSDZWs69T9G5Wdwy5+N39hsQCOkkdVkoPf0lO7kKy2Rmu/AGawxqhSVb8cOKdd95lFECEm5GCltrHNG0f0zQ7I/f00UsZKTmubi4htQObtY1UtJvGLHRBUrPZ1PRfPOc4FVwnQwHpgplvC3LFyRJgb1AITZJw6DrDxbXBbPwCvTvHWuWuE+bM/NqSX1vROeyWiivFYLOM5lAIAQCEUXE6BC7zrzkUQgAAYdScI491Qq1hswwAADg0jAgBAITBRbetAQohAIAwaq4sg+sbaw1TowAA4NAwIgQAEEbmxBXOdSptD1VCIQQAEAZrhNYAhRAAQBjcYtAaYI0QAAAcGkaEAADCYGrUGqAQAgAIw7FZxgpgahQAABwaRoQAAMLIxJSeII8T6jWHQggAIIya8wjNk8SRYWoUAAAcGkaEAADCcIzwrAAKIQCAMJwzWfEd6s2UxXGhEAIACKPm9Am1n3X69Ok5c+ZkZ2c/9thjo0aNKns9PT191apVBw4ckGW5e/fuo0ePdnZ2rNKANUIAAPuXkZHRrVu38PDw+Pj4f/7znwsXLix767///e+ePXt69uzZr1+/Tz75ZOrUqQJzCuFYZR8AwKrIRLLCLupGhN9++22bNm3eeustIuKcz5o1a9y4cca3Ro8e/cwzzxifBwUFjRw5cv78+ao+xFZhRAgAIIzxotuKHqTqIt2HDh3q2rWr8XnXrl2TkpKys7ONXzJ253949erVWrVqVf/PZVswIgQAsB9Xr17l96w6+vv7p6SkdOvWzfhlQEAAYywlJcXPz698s+Tk5FmzZn399dcWymo1UAgBAIRRNzW6cOHC9evXl3/x6NGjTk5ORNSzZ8+SkpIKXd588013d/fCwkLjl3q9nnPu6elZvk16enqvXr0mTZo0aNAghYlsHgohAIAwMidZ+a7RQYMGTZo0qfyLxipIRGf
OnKm01/79+y9fvmx8funSJRcXl5o1a5a9m5mZ2atXr0GDBr3xxhvK0tgFFEIAAGFUnFDPiYKDg1u2bKmo19ChQ8eOHfuvf/3L19d32bJlcXFxrq6uCQkJubm5bdq06du3b0xMzOzZsxVmsRMohAAA9i82NjYmJqZly5Z169a9fPnypk2biGjVqlXnz5/v0qVLQkJCRkZG2XTrqVOndDqd0LwWhUIIACCMmqlRVedPMMa+++67c+fOpaWltWnTxljn3n77bYPBoNPpxowZU76xq6urms+wWSiEAADCqLiyTHU0aNCgQYMGZV96eXkZn7i5uVkuhPXBeYQAAODQMCIEABBGJibjxryioRDatCKiIiIXIjfCsWHrSnKpMIWcPck1iCQX0Wnu0Ov1N2/ezM3NDQ0N9fX1FR3nbvoM0meSiw+51iDmJDqNGhZbI4QHQCG0RTmckjhdIyr83yvOjGoRNWDkcNdGsnlFN+VLi3jKBsq/WPoKc2EBHVnYUBY2ROAPd1mWV6xY+d13y7dv31FcXGx8sX79+oMHPz516pTQ0FBRwYiI5yXxi4v4zU1UlFb6kpMnC4phdZ5iNaIFBlNB3ekToC2sEdoWmdNhmX7jdK5cFSSiEk5XOf3J6U+iImHpQCF+ealhexQ/P/9OFSQiXswzdssnpsq7YnlekpBgiYmJrVu3ffrp0Zs3bymrgkR0/vz5jz/+5KGHGs+Z87mQYMSL5VNvyrti+dUf7lRBIjLk85QN8sF4+dBI0meKyQY2C4XQhsictnNKesBvhJySZfqDKN+SsUAd+fQ/5ZOvk+G+f1k8L0neO4BnHrBkKiLav/9A585dTpz4634Nbt++/cor0158cdL9GpiLrJcPPcUvLSZuuF8TnrbdsLcfFVy3ZK7qME6NKnpgRKg5FEKbwekgpxQTGubJtJPovj8pwBrwy0v5xW+qbleSJx8ZSwXXzJ+o1LVr1+LiBpfdl+ABvvpqgYXHhfLfr/L0XVW3u31FPjyGDIVVt7QCXNUDtIVCSERUUlKyePHi0aNHv/DCC1u2bBEdp1I3OV0wuXEWJzFTamASfbqc9IHJjTPl0++YM81dXn11ZmpqqomNZ8164+rVq2bNU4Zn7ufXVpna+Nbf/NIis+YBe4JCSJzzoUOHLlmypGvXrq1bt77fJWvF4vS3wvanlF/UHixEvvgNleSZ3p6nbKC8c+bLU+bChQs//mhqsSGiwsLCOXPmmi9PefycstGnfOELkourbieamqlRDAm1hl2jtG7duiNHjpw9e9aKr62n52TqL+nlu2ATqXXiKb8p7cFTf2NeU8ySppy1a3+WZWW/P61d+/Onn35c/s6uZlGczTP2KuySwzP2sKAYs+TRjrq7T4C2MCKkrVu3Dh48eN26dTNnzly5cqXSHwQWwClL1T9+7J2zSiW5lG/6LHcpnnPcHFkqOHz4iNIuly9fTk9PN0eY8vitk8Qr3mOvahb5plUTFgitAUaEdOnSpaNHj+bm5nbs2PGTTz7ZunXrokUVVxeSkpLi4uKMzwcPHjxq1CjLZlS37F+gcQrQRPlN/6YrvKl1jkqYvjpY3o0byUFBQZqHuYuqbxovuonLTIApUAjJ1dU1NDR08eLFRBQbG9ugQYMPP/ywRo0a5dsEBQU988wzxucNGza0eEZ1Z1Xb5IU27J+TqqsbO7lrnaMS6q687OnpoXmSiqz4m1ZNuLKMNUAhpNq1a7u4lF7RKiIiQpKklJSUCoUwICCgbERoeYw8VP3L99Q6CGhBF0TMhbjCfRzulriSS+3atZV2cXJyssRVZtzUfART1cvCcGUZa4A1Qho2bNj+/ftv375NRNu3b/f09Kxfv77oUBX4EyneyMMoxBxRoLqYCwvspLiTRa4c1qtXrNIuUVGPeHiYfUTIfJqRa6DiXkE2drk1EAWFkLp06dKrV69WrVoNGTIkPj5+wYIF7u7WNqPCGEUo7BJI5GOOKFB9LGyosg4uviy4p3my3KVv3z4V5kKq9NRTT5opzF2YEwtVNiXDfB8mT2v7jbYSuLKMNUAhJCJatGjRL7/8MmXKlJMnT8bHx4uOUwlGzYgU3JGA0cPmCwPVxEIHM+8mpreX6k8hZ0v8WuPl5fXGG7NMb9+kSZMxY54xW5y7SA1eImcvk5sz1ljBH0Qg4415FT1QCTWHQliqadOm0dHRSn8dtiA3Rp1NvNcSo6aMapo7EKjHnKTWX5OLSbWN1ejGIp83d6IykydPGjhwgCktvby8fvxxhbOzpfYZuNaQWs0jZtKPLFZ/IgvsYu5EmpCVP1AHNYdCaDMYhTPqWOVfGaNGjFpZJhKo59VAarecXKv4xYsFxUhtvrHkzZgkSfrhh+WPPdb/wc0CAwPXr1/XsmULy6QyYjX7Si0+Jcm1imYRY6WGr1kkEdgJFEJbwqieRL2J7nfOlhejLoza4ia9NoH5t3eK2shqDaz878vFV2r8ptRuuWUmRcvz8vL65Zef58z5NDCwkv0pjLEhQwYnJBzo3j3GwsGIiIU/IXX+hfm3r/xtj7pSm2+kpu+aOHC0BlgjtAY4fcLmBEjUiyid0zWiHE6FjFyJvIjCGIXgNxsb4x4mtV5ADafzlI085xgVpZLkRu7hrEY3FtzT8iWwjCRJU6e+NHbssxs2/LZp0+bLly8XFBSEhIS0a9c2Lm5Q06ZNRQUjIubbinX+hWcd5jf/oNwk0qeTix951GVBPViNaJIULKVbA+PCn9IuZgrjsFAIbVQNRjUIQz/74Fmf1Z9khX+V3t7eI0Y8MWLEE6KDVIL5t2X+bUWnADuBQggAIAyuLGMNUAgBAIThuK2SFUAhBAAQRlZ+41DUTc1hbwUAADg0jAgBAITBGqE1QCEEABAGp09YA0yNAgCAQ8OIEABAGDVTo+ZJ4shQCAEAhMHNJKwBpkYBAMChYUQIACCMzLmscBsodo1qDoUQAEAYnFBvDVAIAQCE4ZxzpSNClEKtYY0QAAAcGkaEAADCYGrUGqAQAgAIg80y1gBTowAA4NAwIgQAEEbFCfUYEGoOhRAAQBiZuIyLbouGQggAIIyaNUIzRXFgWCMEAACHhhEhAIAwKu5HiDGh5lAIAQCEwXmE1gBTowAA4NAwIgQAEEbVrlHQGAohAIAwKtYIcfqE5lAIAQCEkUmWFa4SohBqDoUQAMAhnDhx4tNPP83MzBwwYMC4ceMYY8bXs7OzP/3005MnT5aUlLRr127y5Mn+/v5io1oYCiEAgDAWWyNMS0uLiYl57bXXWrZsOWXKlJKSkhdffNH4Vn5+vrOz85gxY4joiy++2LZt2/bt21V9iK1CIQQAEMZil1hbtmxZp06dZsyYQUQff/zxtGnTygphWFjY22+/bXzesGHDJk2a6PV6V1dXFZ9io3D6BACA/Tt8+HBUVJTxeVRU1Llz57Kysso3yM7OTklJWbx4cffu3R2qChJGhAAAAqm6ssyDnD17lt9z8dLg4OCUlJTo6GjjlwEBAYyxlJSU8muB7dq1u379ekBAwG+//aZhHpuAQgjaKuR0negm0W0iIvIgCmIUTu
QmOBeoYLjN07bxtB1UmEwleeQWwnxbspp9ybOe6GSUkHB4/fpf//7777S0ND8/v3r16vXv36979xgXFxfByfTpPHUTzzxAhclSh5XEnB7cXN2u0S+//HLVqlXlX0xMTHRyciKiIUOGlJSUVOgyc+ZMT0/PgoIC45eFhYWcc29v7/Jtzp07xzlftGhRr169zp075+PjoyiVTUMhBK2UcPqbUxKR4e7XL3JKYNSQUQsi0T+kwERc5peXyefmkD7jrpeT11PibFZrgNT4DXIPFxLtyJGjL7/8ys6duyq8Pm/efxo0aPDRRx8+/nickGBkyJfPzuGXl5Ch8H8vVT3UkxmXmeI1wvj4+KlTp5Z/0VgFiejEiROV9tq7d++lS5eMzy9evOjq6lqzZs0KbRhj48aNmzZt2smTJzt37qwolU3DGiFo4rZMmzmduqcKGsmcEmXaTJRv6VygguG2fOQ5+dQbFarg/3CevN6wpy/PPGDpYEQ//LAiKqrrvVXQ6Ny5c4MHD502bYYsK71+Z7UV3pD3xfELX5argmbk7+//0N2q7DJ8+PC1a9dmZmYS0eLFiwcPHuzi4rJv374//vjj+vXrhYWlsTdt2lRUVGTK/9CeYEQI1Vcs03ai7KqaZcu0TaI+RI61Dm9ruHziFZ76exWt9JnyoaekR9Yz7yYWSUVEtHHj76NHjzEYKv1l645PP/3M3d393Xf/ZZlUREQlefKhp3huooquFjuhPiYmpn///s2bNw8LC8vMzNy0aRMRrV279vz580OGDJkyZUpERERBQUF6evrSpUtr1Kih4iNsFwoh5eXl3bx5s/wrYWFhOp1OVB6bw+mwCVXQKJdTAqNHzBsIqoFfXsaT15vU1JAvH33BqesWYpb4MZKWlhYf/2SVVdBo9uz3Y2N7Rkd3M3cqI/nUm+qqIBFxkrlFCiFj7Ouvv3799dezs7ObNm3q7OxMRO+++64syx4eHgMGDLh06ZJOp4uMjHS0LaOEQkhEO3bsmDJlivF5UVHR9evXz5w542gzA9WQw+mi6a05XWLUhMixrlthMwy35bNzFLTPO8Ov/8TCR5gt0B0ffPDvnJwcExtzzl999bX9+/eaNVLpZ+Ul8ev/Z4EP0kSdOnXq1KlT9qWbW+kuNh8fn5YtWwoKJR7WCKl///7n/2f69OkdO3ZEFTQdp4tKr3TB6ZJ5skB18bRtpE9X1uWaJWqALMsrVqxU1OXAgYNJSUlmylMev76GuPolSePUqKIHrjWqORTCuyxbtuzZZ58VncKWcEpR3iXZHEmg+nh65ZtQHtQl6yAZCswRprxjx46npqYq7bV58xZzhKkofWd1enMmywofXOEuU6gSpkbvSEhISEpKGj58+L1vZWdnl51kWr9+/UaNGlk2mjVTsREUe0etVcF1xV24gQpTyDPSDGnuuHz5sopeV65c1TzJvbiKb1o5uPuENUAhvGPJkiXDhw/38/O7962UlJT58+cbnw8bNgyFsByTNi9UuwtYhKxm6z83FDDNk9ytbHO/Ivn5FvmVyyLnS4BZoRCWKigo+PHHH9etW1fpu40bN3bAyw6Zxp0oV3kXsEq6YBWdmFvF87I1FxISoqJXaGgtzZNUwi2Y8hXsF6tAxa5R3KNec1gjLLVmzRp/f/+uXbuKDmJjGAVYoAtYBvNpobiPezi5mv0vtFWrlsbt/oq0a9fOHGEqYL7V2mypYrOM0rtVQJVQCEstWbLkueeeK7tTJZistkW6gCWwkEeJlB0CrGZfpV1UCAgIUHpSoJ+fX0xMtJnylMdqPmqBTwGzQiEkIioqKurYseMzzzwjOojtYVSbyFdJD29GdapuBUJ4RLDQQQraSzopcrzZ0tzlrbfeVNR+xoxplrksBgvpz7zUbxrA6RPWAIWQiEin033wwQe1allkRcHeMInam/wPiTEFjUEAqfEb5Grq5bWkhjPIPcysecp069Z11KinTWzcqlXLqVNfMmueO5gTa/4+MZUXlFd67oTMUAi1hx9JUH3BjNqbMD/GGLVjpGbXA1iOW6jUdjE5e1fZkNV+ktV7wQKJynz99Vfdu8dU2axOnTrr1q11d7fcniwW0Flq8e8q77hUKU4GpQ9SvLkGqoBCCBpgVJ9RzANvOujGqBsjXLLHBjD/9k6P/Mq8G9+3hZOb1OQtqcXHFgxFROTm5vb7779NmjRRku77g6tXr9hDh/ZHRERYMBcREQsfIbX7Tt22WxAOhRC0waiWRAMZPUwVN4X6M2ol0QBGFppDAw14NZS6bJZazmGBUXddU9u9NosY5xS9h0VOEJLL1dX1P/+Ze+zYkTFjnil/hwQPD49BgwZu2PDrpk2/BweLqUYsqLtTzF6p0evMV8HmW6wRWgOcRwgacmbUlFFTomKiAiJO5IGb8doq5sTCn2DhT5ChkIpSuKGA6YLJNVB0LCKiFi2aL1mySJbl1NTU1NSbgYEBNWvWtIp7Jjh5sPoTWf2JVJJHhcmmTJZa7O4T8AAohGAOLqh/9sPJjTwirPC8IkmSatWqZaV73Jy9yAsLATYDhRAAQBiZG2SFN6/AiFBzKIQAAMLgEmvWAJtlAADAoWFECAAgjEwG5bdhwnmEGkMhBAAQBrtGrQEKIQCAMKpGhCiEGsMaIQAAODSMCAEAhOFc5gpPnyCOEaHGUAgBAITBGqE1wNQoAAA4NIwIAQCEUXVlGZw+oTEUQgAAYXBlGWuAQggAIAznMucGhV1QCDWGNUIAAHBoGBECAAiDqVFrgEIIACCMzGXchkk4TI0CAIBDw4gQAEAYTgZOCjfL4PQJraEQAgAIg0usWQMUQgAAYXCJNWuANUIAAHBoGBECAAijZmoUI0KtoRACAAijohDiyjKaw9QoAAA4NIwIAQCEUXH6BOH0Ca2hEAIACKNmahRrhFpDIQQAEEhWPsJDIdQY1ggBAMChYUQIACAMrixjDVAIAQCEwZVlrAGmRgEAwKFhRAgO5RanXKJiIh0jXyIP0XmgGgqTeW4S6TPIxYd51CWvhqIDlSoq0Cceu3zzRlbPx9tLEquiNZcJV5YRDYUQHIGBUxKnc0R5ZS9xIiJ/Ro0ZRRBV9dMKrAeXefI6fvFrnvPXXa+715bqPMUiniMnd0HJ6PKZ5IUf/LLjv0cK8ouI6MCtJSQ5PbiLqvMIUQg1hkIIdi9bpp3lS2A5WZz2cTorUVciYT89QQF9unxkHM88UMlbBVflpA/oynKp7RLm09ziyei7Ob998fZPhhIVZ8djRCgY1gjBvmXKtPk+VbBMukybiAoslAhU06cb9g6ovAqWKbgm74vj2YctlanU5zN/nPf6KuVVEKwCCiHYsSKZdhIVm9AyX6Zd+EXbqnFZPjKObl+uuqXhtnz4OdJnmD9Tqd9W7Pl+7kaVnY1rhMoeKv+hJiQkDB8+vHfv3vPmzav0yt0HDx4cP3780aNHVf5ZbBYKIRHRkiVLunbt2rx589GjR1+5cqXCu+fOnTt37pyQYFUaM2ZMWlqa6BSVWLVq1fLly8Vm4HSS6LbJzdM5XTRjGhPMnj173759YjNU6siRI2+99
ZbYDDx5XRVjwfKKUuWzn5kzzh238wrnvr6q0re+//77Krtz4ioeKnKmpqb26tWra9eus2bNWrBgwbx58yo0KCwsnDBhwpo1ay5eFHwgWB4KIf3xxx/Tp09///33N23a5Obm9sQTT1RokJ2dnZ2dLSRblbZu3VpQYI1zemfPnk1KShIaoYSTsl9fOCWaKYqJEhISUlJSxGao1M2bNw8ePCg2A7/4jbL2V3+gklwzhSnv91X7MlJzKn3rzJkzFghgoqVLl3bp0mXy5MkxMTEfffTR3LlzKzR4++23R44cGRQUJCSeWCiEdOzYsaioqK5du4aGhk6YMOHYsWOiE4EGOKUQlSjslF3VaiIIUphccY9olWQ9T9tuljB32/Hfak4kysofakaER48e7dy5s/F5586dL168mJWVVf7dbdu2vfTSS9X7s9gq7Bqlxx57bP78+Zs3b65Xr968efPi4+NFJwJNVP5L+oNxymHkpXkUqCaem6Tipz/PO2OB02IunL5evf8B13Zx+sSJE/eu/4WFhaWmpvr7+xu/9Pf3Z4ylpKQYX9Hr9c8+++zixYtdXFw0TGJDUAipadOmTz/99JAhQwICAiRJ+vXXXys0aNGiRW5urrNz6ffKy8vL29vb4jErl5OT07lzZ0myupF9YWEh5/zbb78VFaBf/xZduz5U6Vt6vd7V1bXSt9au+erQoUtmjPVAeXl5+/btmzJliqgA91NcXFxUVFS7dm1RAVpF8Ce6VF7U9Hq9i4sLY5W8u3/RnPWHPjdzNPLOb8h0lXy6wWBo03ZQld1LSoqUfuJ777338ccfVzi4Ll265OTkRETjxo0rKak4FzJt2jQvL6+yZZSCggLOednPsQ8++KB3795t2rRRmsRusEr3DjmUOXPm/Pjjj1u3bvXy8vrpp58mTZp04cIFD4+7rjly6tSp3NzS9QZvb2/rKYQAYLXM9KvDrVu3cnIqTnhU+VkvvviiJEnz588nor/++qt9+/a5ubnGIWDPnj23bdtWvvErr7zy6aefapraqqEQ0rBhw5o2bfqvf/2LiAwGg06nO3bsWPPmAk7IBQAwk507d44YMeL48eNBQUGTJk3KyclZvnz5rl27srOzBwwYUNasSZMms2fPHjx4sMColoepUWrWrNnGjRunT5/u5eX1888/u7u7R0ZGig4FAKClbt26DRs2rFmzZjVr1jQYDBs3biSi9evXnz9/vnwhdEwYEVJ+fv6YMWO2bNni4+PDOZ83LeWY5QAABZlJREFUb96gQVXP7AMA2Jy0tLSMjIyGDRsaNxYYDAbOedkGCIeFQliqpKSkoKCgwuJfcnLyzz//fOTIEZ1O98UXX4jKdq/i4uIFCxZs2bIlNTW1cePGM2fObNSokehQpTZu3Lho0aJr1675+fkNGzbsueeeE52oot27d8+bN++VV17p1KmT6CylFixYULZI4+zsvGLFCrF5ysvKypo9e/aePXs8PDyeffbZJ598UnQiIqLr16+//PLL5V+ZMGFCjx49ROUpr7i4+JNPPvnjjz8457GxsTNmzNDpdKJDwYM4+i8CZZydne/dAnPixIndu3frdLoNGzZYVSHMy8v7888/R40aVbt27ZUrV0ZHRycmJvr5+YnORURkMBhGjhxZt27da9euTZw4kTE2duxY0aHuuH379uTJk2/cuDFs2DDrKYQJCQk6nW7gwIFEZFV7gIuKimJjY5s3b/7vf//bYDDcunVLdKJS3t7ew4YNMz5PSUl56aWX3n//fbGRynz44Ydr1qxZuHChJEnjx48vLCycPXu26FDwQByq8vvvv9etW1d0ivuSZTkwMHDr1q2ig1Ti5ZdfHjt2rOgUd3n55ZfnzJnTrFmz1atXi85yx9ixYz/66CPRKSqxcOHCVq1aybIsOsiDfPjhh9HR0aJT3PHYY4+V/W3OmzcvNjZWbB6okhX97gnqXLt2LScnp169eqKD3JGfn3/hwoU///xzw4YNjz/+uOg4dxw4cGDfvn2TJ08WHaQSq1evHjBgwNSpU63qSo/79u3r2bPnO++88/jjj7/77ru3b5t+7VbLWbZs2bPPPis6xR2DBg1as2bNuXPnLly4sHr16ri4ONGJoAoohLatuLh41KhREydOjIiIEJ3ljh07dvTq1WvAgAGtWrXq3r276DilioqKxo8f/+WXXxrPO7YqPXr0eOWVVyZMmKDX61u3bn35sgn3WLCIq1evLly40NPTc+rUqfv27Rs+fLjoRBXt3r37xo0bQ4cOFR3kjpEjRwYGBrZu3frhhx92c3MbPXq06ERQBRRCG1ZSUhIfH+/j4/Pxxx+LznKXfv36nT9//ubNm8XFxdZz9cLZs2f379+/devWooNUYuTIkfHx8f379//yyy87dOjw3XffiU5UysvLq1OnTtOmTYuOjl60aNGGDRtSU1NFh7rL4sWL4+PjK1wBQ6yxY8cGBwdnZmZmZmbWr19/1KhRohNBFVAIbZXBYBg1alR+fv7q1aut8wqBHh4ew4YNO3TokOggpf7++++vvvoqICAgICAgMTFxzJgxM2fOFB2qEuHh4dZzt5PIyMjAwEDj88DAQMaY9eyXIaK8vLyffvrJquZFiWj79u3x8fEuLi7Ozs5PP/30n3/+KToRVAGF0CZxzl944YW0tLS1a9da287sw4cPy7JMRLm5uStWrGjfvr3oRKXWrl2b+T+NGzdeunTpBx98IDpUqYSEBOOTEydOrFu3LiYmRmicO55++ult27YZR4ErVqwICwuzqtXoH3/8MSIiokOHDqKD3KVRo0a///678fmGDRsaN24sNg9UTfRuHatW4U7NHTp0EJ2o1L37KZYvXy46VKm4uDgfH5969eq5u7vHxcVlZGSITlQJa9s1GhERERAQULt2bR8fn/fee090nLu8//77NWrUaNasWWRk5K5du0THuUvnzp0/++wz0SkqOnHihPHbVb9+/caNGx85ckR0IqgCTqgH7eXn56elpdWqVcvaRqvWLD09vbCwMDQ01KrOIzTKz8/PyckJDQ0VHcSWpKenc84d8z63NgeFEAAAHJrV/e4JAABgSSiEAADg0FAIAQDAoaEQAgCAQ0MhBAAAh4ZCCAAADg2FEAAAHBoKIQAAODQUQgAAcGgohAAA4NBQCAEAwKGhEAIAgENDIQQAAIf2/2eRhDrVUj5pAAAAAElFTkSuQmCC" }, "execution_count": 42, "metadata": {}, "output_type": "execute_result" } ], "source": [ "using Plots\n", "gr(fmt = :png);\n", "spy(L, markersize = 10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To calculate a simple dynamic valuation, consider whether the payoff of being in state $ (i,j) $ is $ r_{ij} = i + 2j $" ] }, { 
"cell_type": "code", "execution_count": 43, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "8-element Array{Float64,1}:\n", " 3.0\n", " 4.0\n", " 5.0\n", " 6.0\n", " 5.0\n", " 6.0\n", " 7.0\n", " 8.0" ] }, "execution_count": 43, "metadata": {}, "output_type": "execute_result" } ], "source": [ "r = [i + 2.0j for i in 1:N, j in 1:M]\n", "r = vec(r) # vectorize it since stacked in same order" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Solving the equation $ \\rho v = r + L v $" ] }, { "cell_type": "code", "execution_count": 44, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "4×2 Array{Float64,2}:\n", " 87.8992 93.6134\n", " 96.1345 101.849\n", " 106.723 112.437\n", " 114.958 120.672" ] }, "execution_count": 44, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ρ = 0.05\n", "v = (ρ * I - L) \\ r\n", "reshape(v, N, M)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `reshape` helps to rearrange it back to being two-dimensional.\n", "\n", "To find the stationary distribution, we calculate the eigenvalue and choose the eigenvector associated with $ \\lambda=0 $ . In this\n", "case, we can verify that it is the last one." ] }, { "cell_type": "code", "execution_count": 45, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "8-element Array{Float64,1}:\n", " 0.16666666666666677\n", " 0.1666666666666665\n", " 0.16666666666666682\n", " 0.16666666666666666\n", " 0.08333333333333325\n", " 0.08333333333333345\n", " 0.0833333333333333\n", " 0.08333333333333334" ] }, "execution_count": 45, "metadata": {}, "output_type": "execute_result" } ], "source": [ "L_eig = eigen(Matrix(L'))\n", "@assert norm(L_eig.values[end]) < 1E-10\n", "\n", "ψ = L_eig.vectors[:,end]\n", "ψ = ψ / sum(ψ)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Reshape this to be two-dimensional if it is helpful for visualization." ] }, { "cell_type": "code", "execution_count": 46, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "4×2 Array{Float64,2}:\n", " 0.166667 0.0833333\n", " 0.166667 0.0833333\n", " 0.166667 0.0833333\n", " 0.166667 0.0833333" ] }, "execution_count": 46, "metadata": {}, "output_type": "execute_result" } ], "source": [ "reshape(ψ, N, size(A,1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Irreducibility\n", "\n", "As with the discrete-time Markov chains, a key question is whether CTMCs are reducible, i.e., whether states communicate. The problem\n", "is isomorphic to determining whether the directed graph of the Markov chain is [strongly connected](https://en.wikipedia.org/wiki/Strongly_connected_component)." 
] }, { "cell_type": "code", "execution_count": 47, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "6×6 Tridiagonal{Float64,Array{Float64,1}}:\n", " -0.1 0.1 ⋅ ⋅ ⋅ ⋅ \n", " 0.1 -0.2 0.1 ⋅ ⋅ ⋅ \n", " ⋅ 0.1 -0.2 0.1 ⋅ ⋅ \n", " ⋅ ⋅ 0.1 -0.2 0.1 ⋅ \n", " ⋅ ⋅ ⋅ 0.1 -0.2 0.1\n", " ⋅ ⋅ ⋅ ⋅ 0.1 -0.1" ] }, "execution_count": 47, "metadata": {}, "output_type": "execute_result" } ], "source": [ "using LightGraphs\n", "α = 0.1\n", "N = 6\n", "Q = Tridiagonal(fill(α, N-1), [-α; fill(-2α, N-2); -α], fill(α, N-1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can verify that it is possible to move between every pair of states in a finite number of steps with" ] }, { "cell_type": "code", "execution_count": 48, "metadata": { "hide-output": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "is_strongly_connected(Q_graph) = true\n" ] } ], "source": [ "Q_graph = DiGraph(Q)\n", "@show is_strongly_connected(Q_graph); # i.e., can follow directional edges to get to every state" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Alternatively, as an example of a reducible Markov chain where states $ 1 $ and $ 2 $ cannot jump to state $ 3 $." ] }, { "cell_type": "code", "execution_count": 49, "metadata": { "hide-output": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "is_strongly_connected(Q_graph) = false\n" ] } ], "source": [ "Q = [-0.2 0.2 0\n", " 0.2 -0.2 0\n", " 0.2 0.6 -0.8]\n", "Q_graph = DiGraph(Q)\n", "@show is_strongly_connected(Q_graph);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Banded Matrices\n", "\n", "A tridiagonal matrix has 3 non-zero diagonals: the main diagonal, the first sub-diagonal (i.e., below the main diagonal), and also the first super-diagonal (i.e., above the main diagonal).\n", "\n", "This is a special case of a more general type called a banded matrix, where the number of sub- and super-diagonals can be greater than 1. The\n", "total width of main-, sub-, and super-diagonals is called the bandwidth. For example, a tridiagonal matrix has a bandwidth of 3.\n", "\n", "An $ N \\times N $ banded matrix with bandwidth $ P $ has about $ N P $ nonzeros in its sparsity pattern.\n", "\n", "These can be created directly as a dense matrix with `diagm`. 
For example, with a bandwidth of three and a zero diagonal," ] }, { "cell_type": "code", "execution_count": 50, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "4×4 Array{Int64,2}:\n", " 0 1 0 0\n", " 4 0 2 0\n", " 0 5 0 3\n", " 0 0 6 0" ] }, "execution_count": 50, "metadata": {}, "output_type": "execute_result" } ], "source": [ "diagm(1 => [1,2,3], -1 => [4,5,6])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or as a sparse matrix," ] }, { "cell_type": "code", "execution_count": 51, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "4×4 SparseMatrixCSC{Int64,Int64} with 6 stored entries:\n", " [2, 1] = 4\n", " [1, 2] = 1\n", " [3, 2] = 5\n", " [2, 3] = 2\n", " [4, 3] = 6\n", " [3, 4] = 3" ] }, "execution_count": 51, "metadata": {}, "output_type": "execute_result" } ], "source": [ "spdiagm(1 => [1,2,3], -1 => [4,5,6])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or directly using [BandedMatrices.jl](https://github.com/JuliaMatrices/BandedMatrices.jl)" ] }, { "cell_type": "code", "execution_count": 52, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "4×4 BandedMatrix{Int64,Array{Int64,2},Base.OneTo{Int64}}:\n", " 0 1 ⋅ ⋅\n", " 4 0 2 ⋅\n", " ⋅ 5 0 3\n", " ⋅ ⋅ 6 0" ] }, "execution_count": 52, "metadata": {}, "output_type": "execute_result" } ], "source": [ "using BandedMatrices\n", "BandedMatrix(1 => [1,2,3], -1 => [4,5,6])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There is also a convenience function for generating random banded matrices" ] }, { "cell_type": "code", "execution_count": 53, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "7×7 BandedMatrix{Float64,Array{Float64,2},Base.OneTo{Int64}}:\n", " 0.386608 0.346262 ⋅ ⋅ ⋅ ⋅ ⋅ \n", " 0.479616 0.750276 0.50362 ⋅ ⋅ ⋅ ⋅ \n", " 0.857743 0.913585 0.087322 0.0676183 ⋅ ⋅ ⋅ \n", " 0.779364 0.293782 0.269804 0.813762 0.147221 ⋅ ⋅ \n", " ⋅ 0.0341229 0.711412 0.438157 0.0312296 0.930633 ⋅ \n", " ⋅ ⋅ 0.412892 0.351496 0.701733 0.335451 0.0827553\n", " ⋅ ⋅ ⋅ 0.394056 0.460506 0.25927 0.418861" ] }, "execution_count": 53, "metadata": {}, "output_type": "execute_result" } ], "source": [ "A = brand(7, 7, 3, 1) # 7x7 matrix, 3 subdiagonals, 1 superdiagonal" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And, of course, specialized algorithms will be used to exploit the structure when solving linear systems. 
In particular, the complexity of the solve is $ O(N P_L P_U) $, where $ P_L $ and $ P_U $ are the lower and upper bandwidths" ] }, { "cell_type": "code", "execution_count": 54, "metadata": { "hide-output": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "factorize(Symmetric(A)) |> typeof = LDLt{Float64,Symmetric{Float64,BandedMatrix{Float64,Array{Float64,2},Base.OneTo{Int64}}}}\n" ] }, { "data": { "text/plain": [ "7-element Array{Float64,1}:\n", " -0.6345917189136551\n", " 1.2689275835805582\n", " 0.5499404721793494\n", " 0.24947160343942412\n", " -0.45227412611006496\n", " 0.4973200025591808\n", " 1.3752489574369149" ] }, "execution_count": 54, "metadata": {}, "output_type": "execute_result" } ], "source": [ "@show factorize(Symmetric(A)) |> typeof\n", "A \\ rand(7)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The factorization algorithm uses a specialized LU decomposition for banded matrices."
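] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a rough check of this complexity claim, here is a benchmark sketch (the names, size, and bandwidths are our own choices) solving the same system from banded and from dense storage:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "using BenchmarkTools, BandedMatrices\n", "n = 1000\n", "A_banded = brand(n, n, 2, 1) # 2 subdiagonals, 1 superdiagonal\n", "A_dense = Matrix(A_banded) # identical values, dense storage\n", "b = rand(n)\n", "@btime $A_banded \\ $b # banded LU: O(N P_L P_U)\n", "@btime $A_dense \\ $b # dense LU: O(N^3)"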
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Implementation Details and Performance\n", "\n", "Recall the famous quote from Knuth: “97% of the time, premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.” The most common example of premature optimization is writing code around your own mental model of the compiler, worrying about efficiency as you write, and (usually incorrectly) second-guessing what the compiler will do.\n", "\n", "Concretely, the lessons in this section are:\n", "\n", "1. Don’t worry about optimizing your code unless you need to. Code clarity is your first-order concern. \n", "1. If you use other people’s packages, they can worry about performance and you don’t need to. \n", "1. If you absolutely need that “critical 3%,” your intuition about performance is usually wrong on modern CPUs and GPUs, so let the compiler do its job. \n", "1. Benchmarking (e.g., `@btime`) and [profiling](https://docs.julialang.org/en/v1/manual/profile/) are the tools to figure out performance bottlenecks. If 99% of computing time is spent in one small function, then there is no point in optimizing anything else. \n", "1. If you benchmark to show that a particular part of the code is an issue, and you can’t find another library that does a better job, then you can worry about performance. \n", "\n", "\n", "You will rarely get to step 3, let alone step 5.\n", "\n", "However, there is also a corollary: “don’t pessimize prematurely.” That is, don’t make choices that lead to poor performance without any tradeoff in improved code clarity. For example, writing your own algorithms when a high-performance algorithm exists in a package or Julia itself, or lazily making a matrix dense and carelessly dropping its structure." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Implementation Difficulty\n", "\n", "Numerical analysts sometimes refer to the lowest level of code for basic operations (e.g., a dot product, matrix-matrix product, convolutions) as `kernels`.\n", "\n", "That sort of code is difficult to write, and performance depends on the characteristics of the underlying hardware, such as the [instruction set](https://en.wikipedia.org/wiki/Instruction_set_architecture) available on the particular CPU, the size of the [CPU cache](https://en.wikipedia.org/wiki/CPU_cache), and the layout of arrays in memory.\n", "\n", "Typically, these operations are written in a [BLAS](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms) library, organized into different levels. The levels roughly correspond to the computational order of the operations: BLAS Level 1 are $ O(N) $ vector operations such as inner (dot) products, Level 2 are $ O(N^2) $ operations such as matrix-vector products, and Level 3 are roughly $ O(N^3) $, such as general matrix-matrix products.\n", "\n", "An example of a BLAS library is [OpenBLAS](https://github.com/xianyi/OpenBLAS), which is used by default in Julia, or the [Intel MKL](https://en.wikipedia.org/wiki/Math_Kernel_Library), which is used in Matlab (and in Julia if the `MKL.jl` package is installed).\n", "\n", "On top of BLAS are [LAPACK](https://en.wikipedia.org/wiki/LAPACK) operations, which are higher-level kernels, such as matrix factorizations and eigenvalue algorithms, and are often in the same libraries (e.g., MKL has both BLAS and LAPACK functionality).\n", "\n", "The details of these packages are not especially relevant, but if you are talking about performance, people will inevitably start discussing these different packages and kernels. There are a few important things to keep in mind:\n", "\n", "1. Leave writing kernels to the experts. Even simple-sounding algorithms can be very complicated to implement with high performance. \n", "1. Your intuition about performance of code is probably going to be wrong. If you use high quality libraries rather than writing your own kernels, you don’t need to use your intuition. \n", "1. Don’t get distracted by the jargon or acronyms above if you are reading about performance. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Row- and Column-Major Ordering\n", "\n", "There is a practical performance issue which may influence your code. Since memory in a CPU is linear, dense matrices need to be stored by either stacking columns (called [column-major order](https://en.wikipedia.org/wiki/Row-_and_column-major_order)) or rows.\n", "\n", "The reason this matters is that compilers can generate better performance if they work in contiguous chunks of memory, and this becomes especially important with large matrices due to the interaction with the CPU cache. Choosing the wrong order when there is no benefit in code clarity is an example of premature pessimization. The performance difference can be orders of magnitude in some cases, and nothing in others.\n", "\n", "One option is to use functions such as `enumerate` and `eachindex`, which let Julia choose the most efficient way to traverse memory. If you need to choose the looping order yourself, experiment with going through columns first versus going through rows first, and benchmark both (as in the sketch below).\n", "\n", "Julia, Fortran, and Matlab all use column-major order, while C/C++ and Python use row-major order. This means that if you find an algorithm written for C/C++/Python, you will sometimes need to make small changes if performance is an issue."
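] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is a sketch of such an experiment (the function names and sizes are our own choices). Julia stores matrices column-major, so the version whose inner loop runs down a column touches contiguous memory:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "using BenchmarkTools\n", "function sum_columns_first(A)\n", " total = 0.0\n", " for j in axes(A, 2), i in axes(A, 1) # inner loop moves down a column (contiguous)\n", "  total += A[i, j]\n", " end\n", " return total\n", "end\n", "function sum_rows_first(A)\n", " total = 0.0\n", " for i in axes(A, 1), j in axes(A, 2) # inner loop jumps across columns (strided)\n", "  total += A[i, j]\n", " end\n", " return total\n", "end\n", "A = rand(1000, 1000)\n", "@btime sum_columns_first($A)\n", "@btime sum_rows_first($A)"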
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Digression on Allocations and In-place Operations\n", "\n", "While we have usually not considered optimizing code for performance (and have focused on the choice of\n", "algorithms instead), when matrices and vectors become large we need to be more careful.\n", "\n", "The most important thing to avoid is excess allocations, which usually occur due to the use of\n", "temporary vectors and matrices when they are not necessary. Sometimes those extra temporary values\n", "can cause enormous degradations in performance.\n", "\n", "However, some caution is warranted: allocations are never an issue for scalar values, and for\n", "smaller matrices/vectors an allocating version is frequently just as fast or faster, since temporaries can lead to better [cache locality](https://en.wikipedia.org/wiki/Locality_of_reference).\n", "\n", "To see this, a convenient tool is the `@btime` macro from the BenchmarkTools package." ] }, { "cell_type": "code", "execution_count": 55, "metadata": { "hide-output": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " 504.714 ns (1 allocation: 896 bytes)\n" ] }, { "data": { "text/plain": [ "10×10 Array{Float64,2}:\n", " 3.26172 4.24569 3.37182 4.10324 … 4.03344 4.39198 2.88797 2.63934\n", " 4.19687 4.58126 3.88015 5.0409 4.0105 3.56832 2.35475 3.32362\n", " 2.17535 2.58069 3.08736 3.04461 2.71563 3.03535 2.62734 2.37854\n", " 4.07043 4.57067 4.23989 5.24296 4.34443 4.21237 3.30526 3.82245\n", " 3.16928 4.20751 3.08482 3.89843 3.81516 4.14681 2.64178 2.94961\n", " 3.01031 3.08903 2.83417 3.80852 … 3.22832 3.29357 2.57282 2.60746\n", " 3.88276 4.45627 3.88941 5.12798 4.11822 3.70176 2.69528 3.81814\n", " 2.7023 3.10147 2.95828 3.63363 3.64397 3.40609 2.44341 3.03272\n", " 3.02687 3.13864 2.78748 3.90634 3.18422 2.90128 1.99457 2.80653\n", " 3.80929 3.83031 3.88255 4.8596 4.16155 3.73634 2.65279 3.07034" ] }, "execution_count": 55, "metadata": {}, "output_type": "execute_result" } ], "source": [ "using BenchmarkTools\n", "A = rand(10,10)\n", "B = rand(10,10)\n", "C = similar(A)\n", "function f!(C, A, B)\n", " D = A*B\n", " C .= D .+ 1\n", "end\n", "@btime f!($C, $A, $B)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `!` on the `f!` is an informal way to say that the function is mutating, and the first argument (`C` here)\n", "is by convention the modified variable.\n", "\n", "In the `f!` function, `D` is a temporary matrix that is created only to be copied into `C` afterwards. Since\n", "`C` is modified directly, there is no need to create the temporary `D` matrix at all.\n", "\n", "This is an example of where an in-place version of the matrix multiplication can help avoid the allocation."
] }, { "cell_type": "code", "execution_count": 56, "metadata": { "hide-output": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " 512.031 ns (1 allocation: 896 bytes)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " 447.510 ns (0 allocations: 0 bytes)\n" ] }, { "data": { "text/plain": [ "10×10 Array{Float64,2}:\n", " 2.42733 3.74571 2.5811 2.745 … 2.45258 3.17339 2.792 3.46213\n", " 3.52188 4.16932 3.17155 3.98401 2.1202 2.85629 3.35848 3.88871\n", " 3.74317 4.66988 3.3338 4.69372 2.61622 3.70894 4.06268 4.79582\n", " 3.30158 4.09369 3.81428 3.65591 2.743 3.42494 3.65687 3.83879\n", " 2.47181 4.33343 2.46863 2.68593 2.38238 3.6709 3.2434 4.17783\n", " 3.5594 4.72281 3.71072 4.31957 … 2.83065 4.21896 4.34601 4.90251\n", " 3.76742 4.85555 4.03515 4.55265 2.62424 4.19292 4.57003 4.88181\n", " 3.29688 5.38813 3.4278 3.8622 2.87482 4.07336 3.89498 5.41919\n", " 2.96602 3.60521 2.90236 3.2117 2.68528 2.99728 3.34362 3.47657\n", " 4.73208 5.38525 4.42378 5.18235 2.91664 4.70184 5.28638 5.4401" ] }, "execution_count": 56, "metadata": {}, "output_type": "execute_result" } ], "source": [ "function f2!(C, A, B)\n", " mul!(C, A, B) # in-place multiplication\n", " C .+= 1\n", "end\n", "A = rand(10,10)\n", "B = rand(10,10)\n", "C = similar(A)\n", "@btime f!($C, $A, $B)\n", "@btime f2!($C, $A, $B)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that in the output of the benchmarking, the `f2!` is non-allocating and is using the pre-allocated `C` variable directly.\n", "\n", "Another example of this is solutions to linear equations, where for large solutions you may pre-allocate and reuse the\n", "solution vector." ] }, { "cell_type": "code", "execution_count": 57, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "10-element Array{Float64,1}:\n", " -0.09745394360765254\n", " 0.7799354221131604\n", " 1.1994346228906085\n", " 0.0913844576787099\n", " -0.5083914639425638\n", " -0.3509162355608617\n", " 0.793473061987608\n", " -0.5304171009174155\n", " 0.4517444530913052\n", " -0.8005334538688558" ] }, "execution_count": 57, "metadata": {}, "output_type": "execute_result" } ], "source": [ "A = rand(10,10)\n", "y = rand(10)\n", "z = A \\ y # creates temporary\n", "\n", "A = factorize(A) # in-place requires factorization\n", "x = similar(y) # pre-allocate\n", "ldiv!(x, A, y) # in-place left divide, using factorization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "However, if you benchmark carefully, you will see that this is sometimes slower. Avoiding allocations is not always a good\n", "idea - and worrying about it prior to benchmarking is premature optimization.\n", "\n", "There are a variety of other non-allocating versions of functions. 
For example," ] }, { "cell_type": "code", "execution_count": 58, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "10×10 Array{Float64,2}:\n", " 0.373481 0.715094 0.880197 0.219559 … 0.903144 0.0534784 0.646242\n", " 0.0572854 0.437244 0.0465054 0.271735 0.0419775 0.91462 0.804396\n", " 0.0722476 0.435665 0.631825 0.0804549 0.773098 0.254097 0.674881\n", " 0.0341739 0.185395 0.736277 0.142816 0.687287 0.236726 0.19037\n", " 0.843743 0.860459 0.709686 0.630887 0.274137 0.958363 0.948974\n", " 0.918731 0.933097 0.280531 0.486534 … 0.0313851 0.479192 0.988241\n", " 0.868133 0.243504 0.628518 0.954309 0.667845 0.935099 0.990551\n", " 0.0636638 0.659151 0.377286 0.0453235 0.865368 0.64157 0.570134\n", " 0.759633 0.389194 0.153783 0.284574 0.245533 0.516012 0.55121\n", " 0.301123 0.505073 0.0402959 0.225074 0.57159 0.893165 0.374389" ] }, "execution_count": 58, "metadata": {}, "output_type": "execute_result" } ], "source": [ "A = rand(10,10)\n", "B = similar(A)\n", "\n", "transpose!(B, A) # non-allocating version of B = transpose(A)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, a common source of unnecessary allocations is when taking slices or portions of\n", "matrices. For example, the following allocates a new matrix `B` and copies the values." ] }, { "cell_type": "code", "execution_count": 59, "metadata": { "hide-output": false }, "outputs": [ { "data": { "text/plain": [ "5-element Array{Float64,1}:\n", " 0.07265755245781103\n", " 0.2967203620355736\n", " 0.7745398448673058\n", " 0.6244448536072318\n", " 0.5287113274542306" ] }, "execution_count": 59, "metadata": {}, "output_type": "execute_result" } ], "source": [ "A = rand(5,5)\n", "B = A[2,:] # extract a vector" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To see that these are different matrices, note that" ] }, { "cell_type": "code", "execution_count": 60, "metadata": { "hide-output": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "A[2, 1] = 100.0\n", "B[1] = 0.07265755245781103\n" ] } ], "source": [ "A[2,1] = 100.0\n", "@show A[2,1]\n", "@show B[1];" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Instead of allocating a new matrix, you can take a `view` of a matrix, which provides an\n", "appropriate `AbstractArray` type that doesn’t allocate new memory with the `@view` matrix." ] }, { "cell_type": "code", "execution_count": 61, "metadata": { "hide-output": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "A[2, 1] = 100.0\n", "B[1] = 100.0\n" ] } ], "source": [ "A = rand(5,5)\n", "B = @view A[2,:] # does not copy the data\n", "\n", "A[2,1] = 100.0\n", "@show A[2,1]\n", "@show B[1];" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But again, you will often find that doing `@view` leads to slower code. Benchmark\n", "instead, and generally rely on it for large matrices and for contiguous chunks of memory (e.g., columns rather than rows)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercises" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 1\n", "\n", "This exercise is for practice on writing low-level routines (i.e., “kernels”), and to hopefully convince you to leave low-level code to the experts.\n", "\n", "The formula for matrix multiplication is deceptively simple. 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercises" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 1\n", "\n", "This exercise is for practice on writing low-level routines (i.e., “kernels”), and to hopefully convince you to leave low-level code to the experts.\n", "\n", "The formula for matrix multiplication is deceptively simple. For example, with the product of square matrices $ C = A B $ of size $ N \\times N $, the $ i,j $ element of $ C $ is\n", "\n", "$$\n", "C_{ij} = \\sum_{k=1}^N A_{ik} B_{kj}\n", "$$\n", "\n", "Alternatively, you can take a row $ A_{i,:} $ and column $ B_{:, j} $ and use an inner product\n", "\n", "$$\n", "C_{ij} = A_{i,:} \\cdot B_{:,j}\n", "$$\n", "\n", "Note that the inner product over a discrete index is simply a sum, and hence also requires $ O(N) $ operations.\n", "\n", "For a dense matrix without any structure and using a naive multiplication algorithm, this also makes it clear why the complexity is $ O(N^3) $: you need to evaluate it for $ N^2 $ elements in the matrix and do an $ O(N) $ operation each time (see the sketch after this exercise).\n", "\n", "For this exercise, implement matrix multiplication yourself and compare performance in a few permutations.\n", "\n", "1. Use the built-in function in Julia (i.e., `C = A * B`, or, for a better comparison, the in-place version `mul!(C, A, B)`, which works with pre-allocated data). \n", "1. Loop over each $ C_{ij} $ by the row first (i.e., the `i` index) and use a `for` loop for the inner product. \n", "1. Loop over each $ C_{ij} $ by the column first (i.e., the `j` index) and use a `for` loop for the inner product. \n", "1. Do the same but use the `dot` product instead of the sum. \n", "1. Choose your best implementation and, for matrices of a few different sizes (`N=10`, `N=1000`, etc.), compare the ratio of its performance to the built-in BLAS library. \n", "\n", "\n", "A few more hints:\n", "\n", "- You can just use random matrices (e.g., `A = rand(N, N)`). \n", "- For all of them, pre-allocate the $ C $ matrix beforehand with `C = similar(A)` or something equivalent. \n", "- To compare performance, put your code in a function and use the `@btime` macro to time it. "
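] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make the $ O(N^3) $ reasoning concrete, here is a sketch of the naive kernel in one particular loop order (the function name `naive_mul!` is ours; the exercise asks you to implement and compare the remaining permutations yourself):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "function naive_mul!(C, A, B)\n", " N = size(A, 1)\n", " for i in 1:N, j in 1:N # N^2 elements of C...\n", "  total = 0.0\n", "  for k in 1:N # ...each an O(N) inner product\n", "   total += A[i, k] * B[k, j]\n", "  end\n", "  C[i, j] = total\n", " end\n", " return C\n", "end\n", "A = rand(100, 100)\n", "B = rand(100, 100)\n", "C = similar(A)\n", "@assert naive_mul!(C, A, B) ≈ A * B"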
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 2a\n", "\n", "Here we will calculate the evolution of the pdf of a discrete-time Markov chain, $ \\psi_t $, given the initial condition $ \\psi_0 $.\n", "\n", "Start with a simple symmetric tridiagonal matrix" ] }, { "cell_type": "code", "execution_count": 62, "metadata": { "hide-output": false }, "outputs": [], "source": [ "N = 100\n", "A = Tridiagonal([fill(0.1, N-2); 0.2], fill(0.8, N), [0.2; fill(0.1, N-2)])\n", "A_adjoint = A';" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "1. Pick some large `T` and use the initial condition $ \\psi_0 = \\begin{bmatrix} 1 & 0 & \\ldots & 0\\end{bmatrix} $ \n", "1. Write code to calculate $ \\psi_t $ to some $ T $ by iterating the map for each $ t $, i.e., \n", "\n", "\n", "$$\n", "\\psi_{t+1} = A' \\psi_t\n", "$$\n", "\n", "1. What is the computational order of calculating $ \\psi_T $ using this iteration approach for $ T < N $? \n", "1. What is the computational order of $ (A')^T = (A' \\ldots A') $ and then $ \\psi_T = (A')^T \\psi_0 $ for $ T < N $? \n", "1. Benchmark calculating $ \\psi_T $ with the iterative calculation above as well as the direct $ \\psi_T = (A')^T \\psi_0 $ to see which is faster. You can take the matrix power with just `A_adjoint^T`, which uses specialized algorithms that are faster and more accurate than naive repeated matrix multiplication (but with the same computational order). \n", "1. Check the same comparison for $ T = 2 N $ \n", "\n", "\n", "*Note:* The algorithm used in Julia to take matrix powers depends on the matrix structure, as always. In the symmetric case, it can use an eigendecomposition, whereas with a general dense matrix it uses [squaring and scaling](https://doi.org/10.1137/090768539)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 2b\n", "\n", "With the same setup as in Exercise 2a, do an [eigendecomposition](https://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix) of `A_adjoint`. That is, use `eigen` to factor the adjoint $ A' = Q \\Lambda Q^{-1} $, where $ Q $ is the matrix of eigenvectors and $ \\Lambda $ is the diagonal matrix of eigenvalues. Calculate $ Q^{-1} $ from the results.\n", "\n", "Use the factored matrix to calculate the sequence of $ \\psi_t = (A')^t \\psi_0 $ using the relationship\n", "\n", "$$\n", "\\psi_t = Q \\Lambda^t Q^{-1} \\psi_0\n", "$$\n", "\n", "where matrix powers of diagonal matrices are simply the element-wise power of each element.\n", "\n", "Benchmark the speed of calculating the sequence of $ \\psi_t $ up to `T = 2N` using this method. In principle, the one-time factorization and the cheap element-wise powers should give you benefits compared to simply iterating the map as we did in Exercise 2a. Explain why it does or does not, using the computational order of each approach." ] } ], "metadata": { "date": 1591310629.4073532, "download_nb": 1, "download_nb_path": "https://julia.quantecon.org/", "filename": "numerical_linear_algebra.rst", "filename_with_path": "tools_and_techniques/numerical_linear_algebra", "kernelspec": { "display_name": "Julia 1.4.2", "language": "julia", "name": "julia-1.4" }, "language_info": { "file_extension": ".jl", "mimetype": "application/julia", "name": "julia", "version": "1.4.2" }, "title": "Numerical Linear Algebra and Factorizations" }, "nbformat": 4, "nbformat_minor": 2 }