{ "cells": [ { "cell_type": "markdown", "metadata": { "toc": "true" }, "source": [ "# Table of Contents\n", "

1  QR Decomposition
1.1  Definition
1.2  Algorithms
1.2.1  QR by Gram-Schmidt
1.2.2  QR by Householder transform
1.2.3  Householder QR with column pivoting
1.2.4  Implementation
1.2.5  QR by Givens rotation
1.3  Applications
1.3.1  Linear regression
1.4  Further reading
" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Julia Version 1.1.0\n", "Commit 80516ca202 (2019-01-21 21:24 UTC)\n", "Platform Info:\n", " OS: macOS (x86_64-apple-darwin14.5.0)\n", " CPU: Intel(R) Core(TM) i7-6920HQ CPU @ 2.90GHz\n", " WORD_SIZE: 64\n", " LIBM: libopenlibm\n", " LLVM: libLLVM-6.0.1 (ORCJIT, skylake)\n", "Environment:\n", " JULIA_EDITOR = code\n" ] } ], "source": [ "versioninfo()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# QR Decomposition\n", "\n", "* We learnt Cholesky decomposition as **one** approach for solving linear regression.\n", "\n", "* A second approach for linear regression uses QR decomposition. \n", " **This is how the `lm()` function in R does linear regression.**" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "3-element Array{Float64,1}:\n", " 0.3795466676698624\n", " 0.6508866456093488\n", " 0.392250419565355 " ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "using Random\n", "\n", "Random.seed!(280) # seed\n", "\n", "n, p = 5, 3\n", "X = randn(n, p) # predictor matrix\n", "y = randn(n) # response vector\n", "\n", "# finds the least squares solution\n", "X \\ y" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We want to understand what is QR and how it is used for solving least squares problem." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Definition\n", "\n", "* Assume $\\mathbf{X} \\in \\mathbb{R}^{n \\times p}$ has full column rank. \n", "\n", "* **Full QR decomposition**: \n", "$$\n", " \\mathbf{X} = \\mathbf{Q} \\mathbf{R}, \n", "$$\n", "where \n", " - $\\mathbf{Q} \\in \\mathbb{R}^{n \\times n}$, $\\mathbf{Q}^T \\mathbf{Q} = \\mathbf{I}_n$. In other words, $\\mathbf{Q}$ is an orthogonal matrix. \n", " - First $p$ columns of $\\mathbf{Q}$ form an orthonormal basis of ${\\cal C}(\\mathbf{X})$ (**column space** of $\\mathbf{X}$) \n", " - Last $n-p$ columns of $\\mathbf{Q}$ form an orthonormal basis of ${\\cal N}(\\mathbf{X}^T)$ (**null space** of $\\mathbf{X}^T$)\n", " - $\\mathbf{R} \\in \\mathbb{R}^{n \\times p}$ is upper triangular with positive diagonal entries. \n", " \n", "* **Skinny (or thin) QR decomposition**:\n", "$$\n", " \\mathbf{X} = \\mathbf{Q}_1 \\mathbf{R}_1,\n", "$$\n", "where\n", " - $\\mathbf{Q}_1 \\in \\mathbb{R}^{n \\times p}$, $\\mathbf{Q}_1^T \\mathbf{Q}_1 = \\mathbf{I}_p$. In other words, $\\mathbf{Q}_1$ has orthonormal columns. \n", " - $\\mathbf{R}_1 \\in \\mathbb{R}^{p \\times p}$ is an upper triangular matrix with positive diagonal entries. \n", " \n", "* Given QR decompositin $\\mathbf{X} = \\mathbf{Q} \\mathbf{R}$,\n", "$$\n", " \\mathbf{X}^T \\mathbf{X} = \\mathbf{R}^T \\mathbf{Q}^T \\mathbf{Q} \\mathbf{R} = \\mathbf{R}^T \\mathbf{R}.\n", "$$\n", "Therefore $\\mathbf{R}$ is the same as the upper triangular Choleksy factor of the **Gram matrix** $\\mathbf{X}^T \\mathbf{X}$.\n", "\n", "* There are 3 algorithms to compute QR: (modified) Gram-Schmidt, Householder transform, (fast) Givens transform.\n", "\n", " In particular, the **Householder transform** for QR is implemented in LAPACK and thus used in R and Julia." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Algorithms\n", "\n", "### QR by Gram-Schmidt\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "* Assume $\\mathbf{X} = (\\mathbf{x}_1, \\ldots, \\mathbf{x}_p) \\in \\mathbb{R}^{n \\times p}$ has full column rank. 
\n", "\n", "* Gram-Schmidt (GS) algorithm produces the **skinny QR**: \n", "$$\n", " \\mathbf{X} = \\mathbf{Q} \\mathbf{R},\n", "$$\n", "where $\\mathbf{Q} \\in \\mathbb{R}^{n \\times p}$ has orthonormal columns and $\\mathbf{R} \\in \\mathbb{R}^{p \\times p}$ is an upper triangular matrix.\n", "\n", "* **Gram-Schmidt algorithm** orthonormalizes a set of non-zero, *linearly independent* vectors $\\mathbf{x}_1,\\ldots,\\mathbf{x}_p$. \n", "\n", " 0. Initialize $\\mathbf{q}_1 = \\mathbf{x}_1 / \\|\\mathbf{x}_1\\|_2$\n", " 0. For $k=2, \\ldots, p$, \n", "$$\n", "\\begin{eqnarray*}\n", "\t\\mathbf{v}_k &=& \\mathbf{x}_k - \\mathbf{P}_{{\\cal C}\\{\\mathbf{q}_1,\\ldots,\\mathbf{q}_{k-1}\\}} \\mathbf{x}_k = \\mathbf{x}_k - \\sum_{j=1}^{k-1} \\langle \\mathbf{x}_k, \\mathbf{q}_j \\rangle \\cdot \\mathbf{q}_j \\\\\n", "\t\\mathbf{q}_k &=& \\mathbf{v}_k / \\|\\mathbf{v}_k\\|_2\n", "\\end{eqnarray*}\n", "$$\n", "\n", "* Collectively we have $\\mathbf{X} = \\mathbf{Q} \\mathbf{R}$. \n", " - $\\mathbf{Q} \\in \\mathbb{R}^{n \\times p}$ has orthonormal columns $\\mathbf{q}_k$ thus $\\mathbf{Q}^T \\mathbf{Q} = \\mathbf{I}_p$ \n", " - Where is $\\mathbf{R}$? $\\mathbf{R} = \\mathbf{Q}^T \\mathbf{X}$ has entries $r_{jk} = \\langle \\mathbf{q}_j, \\mathbf{x}_k \\rangle$, which are computed during the Gram-Schmidt process. Note $r_{jk} = 0$ for $j > k$, so $\\mathbf{R}$ is upper triangular.\n", " \n", "* In GS algorithm, $\\mathbf{X}$ is over-written by $\\mathbf{Q}$ and $\\mathbf{R}$ is stored in a separate array.\n", "\n", "* The regular Gram-Schmidt is *unstable* (we loose orthogonality due to roundoff errors) when columns of $\\mathbf{X}$ are collinear.\n", "\n", "* **Modified Gram-Schmidt** (MGS): after each normalization step of $\\mathbf{v}_k$, we replace columns $\\mathbf{x}_j$, $j>k$, by its residual.\n", "\n", "* Why MGS is better than GS? Read \n", "\n", "* Computational cost of GS and MGS is $\\sum_{k=1}^p 4n(k-1) \\approx 2np^2$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### QR by Householder transform\n", "\n", "\n", "\n", "* This is the algorithm implemented in LAPACK. In other words, **this is the algorithm for solving linear regression in R**.\n", "\n", "* Assume $\\mathbf{X} = (\\mathbf{x}_1, \\ldots, \\mathbf{x}_p) \\in \\mathbb{R}^{n \\times p}$ has full column rank. \n", "\n", "* Idea:\n", "$$\n", " \\mathbf{H}_{p} \\cdots \\mathbf{H}_2 \\mathbf{H}_1 \\mathbf{X} = \\begin{pmatrix} \\mathbf{R}_1 \\\\ \\mathbf{0} \\end{pmatrix},\n", "$$\n", "where $\\mathbf{H}_j \\in \\mathbf{R}^{n \\times n}$ are a sequence of Householder transformation matrices.\n", "\n", " It yields the **full QR** where $\\mathbf{Q} = \\mathbf{H}_1 \\cdots \\mathbf{H}_p \\in \\mathbb{R}^{n \\times n}$. Recall that GS/MGS only produces the **thin QR** decomposition.\n", "\n", "* For arbitrary $\\mathbf{v}, \\mathbf{w} \\in \\mathbb{R}^{n}$ with $\\|\\mathbf{v}\\|_2 = \\|\\mathbf{w}\\|_2$, we can construct a **Householder matrix** (or **Householder reflector**)\n", "$$\n", " \\mathbf{H} = \\mathbf{I}_n - 2 \\mathbf{u} \\mathbf{u}^T, \\quad \\mathbf{u} = - \\frac{1}{\\|\\mathbf{v} - \\mathbf{w}\\|_2} (\\mathbf{v} - \\mathbf{w}),\n", "$$\n", "that transforms $\\mathbf{v}$ to $\\mathbf{w}$:\n", "$$\n", "\t\\mathbf{H} \\mathbf{v} = \\mathbf{w}.\n", "$$\n", "$\\mathbf{H}$ is symmetric and orthogonal. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "### QR by Householder transform\n", "\n", "* This is the algorithm implemented in LAPACK. In other words, **this is the algorithm for solving linear regression in R**.\n", "\n", "* Assume $\\mathbf{X} = (\\mathbf{x}_1, \\ldots, \\mathbf{x}_p) \\in \\mathbb{R}^{n \\times p}$ has full column rank. \n", "\n", "* Idea:\n", "$$\n", " \\mathbf{H}_{p} \\cdots \\mathbf{H}_2 \\mathbf{H}_1 \\mathbf{X} = \\begin{pmatrix} \\mathbf{R}_1 \\\\ \\mathbf{0} \\end{pmatrix},\n", "$$\n", "where $\\mathbf{H}_j \\in \\mathbb{R}^{n \\times n}$ are a sequence of Householder transformation matrices.\n", "\n", " It yields the **full QR** where $\\mathbf{Q} = \\mathbf{H}_1 \\cdots \\mathbf{H}_p \\in \\mathbb{R}^{n \\times n}$. Recall that GS/MGS only produce the **thin QR** decomposition.\n", "\n", "* For arbitrary $\\mathbf{v}, \\mathbf{w} \\in \\mathbb{R}^{n}$ with $\\|\\mathbf{v}\\|_2 = \\|\\mathbf{w}\\|_2$, we can construct a **Householder matrix** (or **Householder reflector**)\n", "$$\n", " \\mathbf{H} = \\mathbf{I}_n - 2 \\mathbf{u} \\mathbf{u}^T, \\quad \\mathbf{u} = - \\frac{1}{\\|\\mathbf{v} - \\mathbf{w}\\|_2} (\\mathbf{v} - \\mathbf{w}),\n", "$$\n", "that transforms $\\mathbf{v}$ to $\\mathbf{w}$:\n", "$$\n", "\t\\mathbf{H} \\mathbf{v} = \\mathbf{w}.\n", "$$\n", "$\\mathbf{H}$ is symmetric and orthogonal (see the sketch after this list). Calculation of the Householder vector $\\mathbf{u}$ costs $4n$ flops for general $\\mathbf{v}$ and $\\mathbf{w}$.\n", "\n", "* Now choose $\\mathbf{H}_1$ to zero the first column of $\\mathbf{X}$ below the diagonal\n", "$$\n", "\t\\mathbf{H}_1 \\mathbf{x}_1 = \\begin{pmatrix} \\|\\mathbf{x}_{1}\\|_2 \\\\ 0 \\\\ \\vdots \\\\ 0 \\end{pmatrix}.\n", "$$\n", "Take $\\mathbf{H}_2$ to zero the second column below the diagonal, and so on. \n", "\n", " In general, choose the $j$-th Householder transform $\\mathbf{H}_j = \\mathbf{I}_n - 2 \\mathbf{u}_j \\mathbf{u}_j^T$, where \n", "$$\n", " \\mathbf{u}_j = \\begin{pmatrix} \\mathbf{0}_{j-1} \\\\ {\\tilde u}_j \\end{pmatrix}, \\quad {\\tilde u}_j \\in \\mathbb{R}^{n-j+1},\n", "$$\n", "to zero the $j$-th column below the diagonal. $\\mathbf{H}_j$ takes the form\n", "$$\n", "\t\\mathbf{H}_j = \\begin{pmatrix}\n", "\t\\mathbf{I}_{j-1} & \\\\\n", "\t& \\mathbf{I}_{n-j+1} - 2 {\\tilde u}_j {\\tilde u}_j^T\n", "\t\\end{pmatrix} = \\begin{pmatrix}\n", "\t\\mathbf{I}_{j-1} & \\\\\n", "\t& {\\tilde H}_{j}\n", "\t\\end{pmatrix}.\n", "$$\n", "\n", "* Applying a Householder transform $\\mathbf{H} = \\mathbf{I} - 2 \\mathbf{u} \\mathbf{u}^T$ to a matrix $\\mathbf{X} \\in \\mathbb{R}^{n \\times p}$,\n", "$$\n", "\t\\mathbf{H} \\mathbf{X} = \\mathbf{X} - 2 \\mathbf{u} (\\mathbf{u}^T \\mathbf{X}),\n", "$$\n", "costs $4np$ flops. **Householder updates never entail explicit formation of the Householder matrices.**\n", "\n", "* Note applying ${\\tilde H}_j$ to $\\mathbf{X}$ only needs $4(n-j+1)(p-j+1)$ flops.\n", "\n", "* QR by Householder: $\\mathbf{H}_{p} \\cdots \\mathbf{H}_1 \\mathbf{X} = \\begin{pmatrix} \\mathbf{R}_1 \\\\ \\mathbf{0} \\end{pmatrix}$.\n", "\n", "* The process is done in place. The upper triangular part of $\\mathbf{X}$ is overwritten by $\\mathbf{R}_1$, and the essential Householder vectors (with ${\\tilde u}_{j1}$ normalized to 1) are stored in $\\mathbf{X}[j:n,j]$.\n", "\n", "* At the $j$-th stage,\n", " 0. computing the Householder vector ${\\tilde u}_j$ costs $3(n-j+1)$ flops\n", " 0. applying the Householder transform ${\\tilde H}_j$ to the $\\mathbf{X}[j:n, j:p]$ block costs $4(n-j+1)(p-j+1)$ flops \n", " \n", " In total we need $\\sum_{j=1}^p [3(n-j+1) + 4(n-j+1)(p-j+1)] \\approx 2np^2 - \\frac 23 p^3$ flops.\n", "\n", "* Where is $\\mathbf{Q}$? $\\mathbf{Q} = \\mathbf{H}_1 \\cdots \\mathbf{H}_p$. In some applications, it's necessary to form the orthogonal matrix $\\mathbf{Q}$. \n", "\n", " Accumulating $\\mathbf{Q}$ costs another $2np^2 - \\frac 23 p^3$ flops.\n", "\n", "* When computing $\\mathbf{Q}^T \\mathbf{v}$ or $\\mathbf{Q} \\mathbf{v}$, as in many applications (e.g., solving a linear system via QR), there is no need to form $\\mathbf{Q}$; simply apply the Householder transforms successively to the vector $\\mathbf{v}$.\n", "\n", "* Computational cost of Householder QR for linear regression: $2n p^2 - \\frac 23 p^3$ (regression coefficients and $\\hat \\sigma^2$) or more (fitted values, s.e., ...)." ] },
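{ "cell_type": "markdown", "metadata": {}, "source": [ "To make the reflector concrete, here is a minimal sketch (with hypothetical helpers `householder` and `applyH`, not the LAPACK routine) that zeros the first column of a random matrix below the diagonal without ever forming $\\mathbf{H}$ explicitly." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "using LinearAlgebra\n", "\n", "# unit Householder vector u such that (I - 2uu')x = -sign(x[1]) * norm(x) * e1;\n", "# the sign choice avoids cancellation, so the diagonal of R may be negative\n", "function householder(x)\n", "    u = copy(x)\n", "    u[1] += (x[1] >= 0 ? 1 : -1) * norm(x)\n", "    u / norm(u)\n", "end\n", "\n", "# apply H = I - 2uu' to X without forming H: 4np flops\n", "applyH(u, X) = X - 2 * u * (u' * X)\n", "\n", "X = randn(5, 3)\n", "u = householder(X[:, 1])\n", "applyH(u, X)   # first column is now (±norm(X[:, 1]), 0, ..., 0) up to roundoff" ] },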
{ "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "### Householder QR with column pivoting\n", "\n", "Consider rank deficient $\\mathbf{X}$.\n", "\n", "* At the $j$-th stage, swap the column in $\\mathbf{X}[j:n,j:p]$ with maximum $\\ell_2$ norm to be the pivot column. If the maximum $\\ell_2$ norm is 0, the algorithm stops, ending with\n", "$$\n", "\\mathbf{X} \\mathbf{P} = \\mathbf{Q} \\begin{pmatrix} \\mathbf{R}_{11} & \\mathbf{R}_{12} \\\\ \\mathbf{0}_{(n-r) \\times r} & \\mathbf{0}_{(n-r) \\times (p-r)} \\end{pmatrix},\n", "$$\n", "where $\\mathbf{P} \\in \\mathbb{R}^{p \\times p}$ is a permutation matrix and $r$ is the rank of $\\mathbf{X}$. QR with column pivoting is rank revealing.\n", "\n", "* The overhead of re-computing the column norms can be reduced by the property\n", "$$\n", "\t\\mathbf{Q} \\mathbf{z} = \\begin{pmatrix} \\alpha \\\\ \\omega \\end{pmatrix} \\Rightarrow \\|\\omega\\|_2^2 = \\|\\mathbf{z}\\|_2^2 - \\alpha^2\n", "$$\n", "for any orthogonal matrix $\\mathbf{Q}$.\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Implementation\n", "\n", "* Julia functions: [`qr`](https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/#LinearAlgebra.qr), [`qr!`](https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/#LinearAlgebra.qr!), or call LAPACK wrapper functions [`geqrf!`](https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/#LinearAlgebra.LAPACK.geqrf!) and [`geqp3!`](https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/#LinearAlgebra.LAPACK.geqp3!)" ] },
{ "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "5×3 Array{Float64,2}:\n", " -1.12783 -0.88267 -1.79426 \n", " 1.14786 1.71384 1.09138 \n", " -1.35471 -0.733952 0.426628 \n", " -0.237065 1.08182 -0.624434 \n", " -0.680994 -0.170406 0.0320486" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "using Random, LinearAlgebra\n", "\n", "Random.seed!(280) # seed\n", "\n", "y = randn(5) # response vector\n", "X = randn(5, 3) # predictor matrix" ] },
{ "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "3-element Array{Float64,1}:\n", " -0.6130947263291079\n", " -0.7800615507222621\n", " 0.3634934302764104" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "X \\ y # least squares solution by QR" ] },
{ "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "3-element Array{Float64,1}:\n", " -0.6130947263291078\n", " -0.7800615507222622\n", " 0.3634934302764104" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# same as\n", "qr(X) \\ y" ] },
{ "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "3-element Array{Float64,1}:\n", " -0.6130947263291076\n", " -0.7800615507222624\n", " 0.3634934302764099" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "cholesky(X'X) \\ (X'y) # least squares solution by Cholesky" ] },
{ "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "QRPivoted{Float64,Array{Float64,2}}\n", "Q factor:\n", "5×5 LinearAlgebra.QRPackedQ{Float64,Array{Float64,2}}:\n", " -0.377941 -0.70935 -0.0804225 -0.586254 -0.0618207\n", " 0.733832 0.161768 -0.101458 -0.650763 -0.039191 \n", " -0.314263 0.384947 -0.75494 -0.116185 -0.411851 \n", " 0.463213 -0.565162 -0.483764 0.464867 -0.12608 \n", " -0.0729644 0.0553323 -0.423411 -0.0566828 0.899514 \n", "R factor:\n", "3×3 Array{Float64,2}:\n", " 2.33547 1.05335 1.6342 \n", " 0.0 1.96822 0.560522\n", " 0.0 0.0 1.39999 \n", "permutation:\n", "3-element Array{Int64,1}:\n", " 2\n", " 3\n", " 1" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# QR factorization with column pivoting\n", "xqr = qr(X, Val(true))" ] },
{ "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "3-element Array{Float64,1}:\n", " -0.6130947263291079\n", " -0.7800615507222621\n", " 0.3634934302764104" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "xqr \\ y # least squares solution" ] },
{ "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "9.924559042135032e-16" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# thin Q matrix multiplication (a sequence of Householder transforms)\n", "norm(xqr.Q * xqr.R - X[:, xqr.p]) # recovers X (with columns permuted)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### QR by Givens rotation\n", "\n", "* A Householder transform $\\mathbf{H}_j$ introduces a batch of zeros into a vector.\n", "\n", "* A **Givens transform** (aka **Givens rotation**, **Jacobi rotation**, **plane rotation**) selectively zeros one element of a vector.\n", "\n", "* Overall, QR by Givens rotation is less efficient than the Householder method, but it is better suited for matrices with structured patterns of nonzero elements.\n", "\n", "* **Givens/Jacobi rotations**: \n", "$$\n", "\t\\mathbf{G}(i,k,\\theta) = \\begin{pmatrix} \n", "\t1 & & 0 & & 0 & & 0 \\\\\n", "\t\\vdots & \\ddots & \\vdots & & \\vdots & & \\vdots \\\\\n", "\t0 & & c & & s & & 0 \\\\ \n", "\t\\vdots & & \\vdots & \\ddots & \\vdots & & \\vdots \\\\\n", "\t0 & & - s & & c & & 0 \\\\\n", "\t\\vdots & & \\vdots & & \\vdots & \\ddots & \\vdots \\\\\n", "\t0 & & 0 & & 0 & & 1 \\end{pmatrix},\n", "$$\n", "where $c = \\cos(\\theta)$ and $s = \\sin(\\theta)$. $\\mathbf{G}(i,k,\\theta)$ is orthogonal.\n", "\n", "* Pre-multiplication by $\\mathbf{G}(i,k,\\theta)^T$ rotates counterclockwise $\\theta$ radians in the $(i,k)$ coordinate plane. If $\\mathbf{x} \\in \\mathbb{R}^n$ and $\\mathbf{y} = \\mathbf{G}(i,k,\\theta)^T \\mathbf{x}$, then\n", "$$\n", "\ty_j = \\begin{cases}\n", "\tcx_i - s x_k & j = i \\\\\n", "\tsx_i + cx_k & j = k \\\\\n", "\tx_j & j \\ne i, k\n", "\t\\end{cases}.\n", "$$\n", "Clearly, if we choose $\\tan(\\theta) = -x_k / x_i$, or equivalently,\n", "$$\n", "\\begin{eqnarray*}\n", "\tc = \\frac{x_i}{\\sqrt{x_i^2 + x_k^2}}, \\quad s = \\frac{-x_k}{\\sqrt{x_i^2 + x_k^2}},\n", "\\end{eqnarray*}\n", "$$\n", "then $y_k=0$ (see the sketch after this list).\n", "\n", "* Pre-applying the Givens transform $\\mathbf{G}(i,k,\\theta)^T \\in \\mathbb{R}^{n \\times n}$ to a matrix $\\mathbf{A} \\in \\mathbb{R}^{n \\times m}$ only affects two rows of $\\mathbf{A}$:\n", "$$\n", "\t\\mathbf{A}([i, k], :) \\gets \\begin{pmatrix} c & s \\\\ -s & c \\end{pmatrix}^T \\mathbf{A}([i, k], :),\n", "$$\n", "costing $6m$ flops.\n", "\n", "* Post-applying the Givens transform $\\mathbf{G}(i,k,\\theta) \\in \\mathbb{R}^{m \\times m}$ to a matrix $\\mathbf{A} \\in \\mathbb{R}^{n \\times m}$ only affects two columns of $\\mathbf{A}$:\n", "$$\n", "\t\\mathbf{A}(:, [i,k]) \\gets \\mathbf{A}(:, [i,k]) \\begin{pmatrix} c & s \\\\ -s & c \\end{pmatrix},\n", "$$\n", "costing $6n$ flops.\n", "\n", "* QR by Givens: $\\mathbf{G}_t^T \\cdots \\mathbf{G}_1^T \\mathbf{X} = \\begin{pmatrix} \\mathbf{R}_1 \\\\ \\mathbf{0} \\end{pmatrix}$.\n", "\n", "* Zeros in $\\mathbf{X}$ can also be introduced row-by-row.\n", "\n", "* If $\\mathbf{X} \\in \\mathbb{R}^{n \\times p}$, the total cost is $3np^2 - p^3$ flops and $O(np)$ square roots.\n", "\n", "* Note each Givens transform can be summarized by a single number, which is stored in the zeroed entry of $\\mathbf{X}$.\n", "\n", "* The *fast Givens transform* avoids taking square roots." ] },
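{ "cell_type": "markdown", "metadata": {}, "source": [ "A minimal sketch of a single Givens step: the hypothetical helper `givens_cs` computes $c$ and $s$ as above (the standard library provides `LinearAlgebra.givens` for production use)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "using LinearAlgebra\n", "\n", "# rotation coefficients (c, s) with [c s; -s c]' * [xi, xk] = [r, 0];\n", "# assumes (xi, xk) != (0, 0)\n", "function givens_cs(xi, xk)\n", "    r = hypot(xi, xk)   # sqrt(xi^2 + xk^2) without overflow\n", "    (xi / r, -xk / r)\n", "end\n", "\n", "X = randn(5, 3)\n", "c, s = givens_cs(X[4, 1], X[5, 1])\n", "X[[4, 5], :] = [c s; -s c]' * X[[4, 5], :]   # touches only rows 4 and 5\n", "X   # X[5, 1] is now zero up to roundoff" ] },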
{}, "output_type": "execute_result" } ], "source": [ "# QR factorization with column pivoting\n", "xqr = qr(X, Val(true))" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "3-element Array{Float64,1}:\n", " -0.6130947263291079\n", " -0.7800615507222621\n", " 0.3634934302764104" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "xqr \\ y # least squares solution" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "9.924559042135032e-16" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# thin Q matrix multiplication (a sequence of Householder transforms)\n", "norm(xqr.Q * xqr.R - X[:, xqr.p]) # recovers X (with columns permuted)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### QR by Givens rotation\n", "\n", "* Householder transform $\\mathbf{H}_j$ introduces batch of zeros into a vector.\n", "\n", "* **Givens transform** (aka **Givens rotation**, **Jacobi rotation**, **plane rotation**) selectively zeros one element of a vector.\n", "\n", "* Overall QR by Givens rotation is less efficient than the Householder method, but is better suited for matrices with structured patterns of nonzero elements.\n", "\n", "* **Givens/Jacobi rotations**: \n", "$$\n", "\t\\mathbf{G}(i,k,\\theta) = \\begin{pmatrix} \n", "\t1 & & 0 & & 0 & & 0 \\\\\n", "\t\\vdots & \\ddots & \\vdots & & \\vdots & & \\vdots \\\\\n", "\t0 & & c & & s & & 0 \\\\ \n", "\t\\vdots & & \\vdots & \\ddots & \\vdots & & \\vdots \\\\\n", "\t0 & & - s & & c & & 0 \\\\\n", "\t\\vdots & & \\vdots & & \\vdots & \\ddots & \\vdots \\\\\n", "\t0 & & 0 & & 0 & & 1 \\end{pmatrix},\n", "$$\n", "where $c = \\cos(\\theta)$ and $s = \\sin(\\theta)$. $\\mathbf{G}(i,k,\\theta)$ is orthogonal.\n", "\n", "* Pre-multiplication by $\\mathbf{G}(i,k,\\theta)^T$ rotates counterclockwise $\\theta$ radians in the $(i,k)$ coordinate plane. 
{ "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## Further reading\n", "\n", "* Section 7.8 of [Numerical Analysis for Statisticians](http://ucla.worldcat.org/title/numerical-analysis-for-statisticians/oclc/793808354&referer=brief_results) by Kenneth Lange (2010).\n", "\n", "* Section II.5.3 of [Computational Statistics](http://ucla.worldcat.org/title/computational-statistics/oclc/437345409&referer=brief_results) by James Gentle (2010).\n", "\n", "* Chapter 5 of [Matrix Computations](http://catalog.library.ucla.edu/vwebv/holdingsInfo?bibId=7122088) by Gene Golub and Charles Van Loan (2013)." ] } ], "metadata": { "@webio": { "lastCommId": null, "lastKernelId": null }, "kernelspec": { "display_name": "Julia 1.1.0", "language": "julia", "name": "julia-1.1" }, "language_info": { "file_extension": ".jl", "mimetype": "application/julia", "name": "julia", "version": "1.1.0" }, "toc": { "colors": { "hover_highlight": "#DAA520", "running_highlight": "#FF0000", "selected_highlight": "#FFD700" }, "moveMenuLeft": true, "nav_menu": { "height": "65px", "width": "252px" }, "navigate_menu": true, "number_sections": true, "sideBar": true, "skip_h1_title": true, "threshold": 4, "toc_cell": true, "toc_section_display": "block", "toc_window_display": true, "widenNotebook": false } }, "nbformat": 4, "nbformat_minor": 2 }