{ "cells": [ { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Lecture 13: Iterative methods" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Syllabus\n", "**Week 1:** Matrices, vectors, matrix/vector norms, scalar products & unitary matrices \n", "**Week 2:** TAs-week (Strassen, FFT, a bit of SVD) \n", "**Week 3:** Matrix ranks, singular value decomposition, linear systems, eigenvalues \n", "**Week 4:** Matrix decompositions: QR, LU, SVD + test + structured matrices start \n", "**Week 5:** Iterative methods, preconditioners, matrix functions" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Recap of the previous lecture\n", "- Krylov subspaces\n", "- Iterative methods: Conjugate Gradient (CG), Generalized Minimal Residual Method (GMRES), BiCGStab\n", "- Definition of preconditioners\n", "- Jacobi, Gauss-Seidel as preconditioners" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Today lecture\n", "\n", "Today we will continue to talk about iterative methods.\n", "\n", "We will consider:\n", "\n", "- Concept of Incomplete LU/Incomplete Cholesky\n", "- A brief note about algebraic multigrid (no details, because we need PDEs for the general multigrid concept)\n", "- Iterative methods for other NLA problems (SVD, eigenvalue problems)\n", "- Other than sparse: Toeplitz matrices, FFT, circulant matrices." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Problem setup (recall)\n", "\n", "Just to recall: we are solving \n", "\n", "$$Ax = f,$$\n", "\n", "where $A$ is an $n \\times n$ **sparse matrix** (i.e. the number of non-zero elements is smaller than the $N^2)$, \n", "\n", "and the **matrix-by-vector product** can be computed in $\\mathcal{O}(N)$ operations.\n", "\n", "We solve $$A P^{-1} y = f,$$ \n", "\n", "and $P$ is such that $A \\approx P$ and $AP^{-1}$ has **better spectra** for the iterative method." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Simple method\n", "General idea is to select a class $\\mathcal{P}$ of matrices with which we can **easily solve** linear systems and then \n", "\n", "find a matrix $P \\in \\mathcal{P}$ that minimizes\n", "\n", "$$P = \\arg \\min_{P \\in \\mathcal{P}}\\Vert A - P \\Vert $$\n", "\n", "1. Diagonal matrices give **Jacobi preconditioners**\n", "2. Triangular matrices give **Gauss-Seidel** preconditioner\n", "\n", "Any other ideas for the class of the matrices?\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Remember the Gaussian elimination\n", "Let us remember the basic method for solving linear systems:\n", "\n", "Decompose the matrix $A$ in the form \n", "\n", "$$A = P_1 L U P^{\\top}_2, $$\n", "\n", "where $P_1$ and $P_2$ are certain **permutation** matrices (which do the pivoting).\n", "\n", "The most natural idea is to use **sparse** $L$ and $U$. \n", "\n", "It is not possible without **fill-in** growth for example for matrices, coming from 2D/3D Partial Differential equations (PDEs).\n", "\n", "What to do?" 
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Incomplete LU\n", "\n", "Suppose you want to eliminate the variable $x_1$,\n", "\n", "and the equations have the form\n", "\n", "$$5 x_1 + x_4 + x_{10} = 1, \\quad 3 x_1 + x_4 + x_8 = 0, \\ldots,$$\n", "\n", "and $x_1$ is not present in any other equation.\n", "\n", "After the elimination, only $x_{10}$ will additionally enter the second equation (a new fill-in):\n", "\n", "$$x_4 + x_8 + 3(1 - x_4 - x_{10})/5 = 0.$$\n", "\n", "In the incomplete $LU$ case (actually, ILU(0)) we just throw away the **new fill-in**." ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Incomplete LU: formal definition\n", "\n", "We run the usual LU-decomposition cycle, but avoid inserting non-zeros **outside** the initial non-zero pattern:\n", "\n", "```python\n", "import numpy as np\n", "\n", "def ilu0(a):\n", "    a = a.copy()\n", "    n = a.shape[0]\n", "    pattern = (a != 0)  # initial non-zero pattern\n", "    L = np.eye(n)\n", "    U = np.zeros((n, n))\n", "    for k in range(n):  # eliminate one row\n", "        for i in range(k+1, n):\n", "            if not pattern[i, k]:\n", "                continue\n", "            L[i, k] = a[i, k] / a[k, k]\n", "            for j in range(k+1, n):\n", "                if pattern[i, j]:  # new fill-ins would appear here: skip them\n", "                    a[i, j] -= L[i, k] * a[k, j]\n", "        U[k, k:] = a[k, k:]\n", "    return L, U\n", "```" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## ILU(k)\n", "\n", "Yousef Saad (the author of GMRES) also wrote the classic text covering the **incomplete LU** decomposition:\n", "\n", "Saad, Yousef (1996), *Iterative Methods for Sparse Linear Systems*,\n", "\n", "and he proposed the **ILU(k)** method." ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## ILU(k), continued\n", "\n", "The idea of ILU(k) is very instructive and is based on the connection between sparse matrices and graphs.\n", "\n", "Suppose you have an $n \\times n$ matrix $A$ and a corresponding adjacency graph.\n", "\n", "Then we eliminate one variable (vertex) and get a smaller system of size $(n-1) \\times (n-1)$.\n", "\n", "What is the graph of this system, and when does a **new edge appear**?" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## LU & graphs\n", "\n", "A new edge can appear only between vertices that had a common neighbour; that means they are second-order neighbours. This is also the sparsity pattern of the matrix $A^2$.\n", "\n", "The **ILU(k)** idea is to keep only those elements in $L$ and $U$ that correspond to at most $k$-order neighbours.\n", "\n", "ILU(2) is very efficient, but for some reason it has been largely abandoned (e.g., there is no implementation in MATLAB or SciPy).\n", "\n", "There is the original SPARSKIT software by Saad, which works quite well." ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Thresholded ILU (ILUT)\n", "A much more popular approach is based on the so-called **thresholded LU**.\n", "\n", "You do the standard Gaussian elimination with fill-ins, but either:\n", "\n", "throw away elements that are smaller than a threshold, and/or control the number of non-zeros you are allowed to store.\n", "\n", "The smaller the threshold, the better the preconditioner, but the more memory it takes." ] },
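{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "A minimal sketch of ILUT in practice, assuming SciPy's `scipy.sparse.linalg.spilu` (a thresholded ILU); `drop_tol` and `fill_factor` are exactly the two controls mentioned above." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "import numpy as np\n", "import scipy.sparse as sp\n", "import scipy.sparse.linalg as spla\n", "\n", "# 2D Poisson matrix again\n", "n = 32\n", "T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))\n", "I = sp.eye(n)\n", "A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()\n", "f = np.ones(A.shape[0])\n", "\n", "ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10.0)\n", "Pinv = spla.LinearOperator(A.shape, matvec=ilu.solve)  # acts as P^{-1}\n", "\n", "x, info = spla.gmres(A, f, M=Pinv)  # info == 0 means convergence\n", "print('converged: %s' % (info == 0))" ] },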
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Symmetric positive definite case\n", "\n", "In the SPD case, instead of incomplete LU you should use incomplete Cholesky, which is twice as fast and consumes half the memory.\n", "\n", "ILUT is implemented in SciPy (`spilu`), and incomplete Cholesky, for example, in MATLAB (`ichol`); you can try them (nothing too fancy, but it works)." ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Second-order LU preconditioners\n", "\n", "There is a more efficient (but much less popular, due to the absence of open-source implementations) **second-order** incomplete factorization [proposed by I. Kaporin](http://www.researchgate.net/profile/I_Kaporin/publication/242940993_High_quality_preconditioning_of_a_general_symmetric_positive_definite_matrix_based_on_its_UTU__UTR__RTU-decomposition/links/53f72ad90cf2888a74976f54.pdf).\n", "\n", "The idea is to approximate the matrix in the form\n", "\n", "$$A \\approx U^{\\top}_2 U_2 + U^{\\top}_2 R_2 + R^{\\top}_2 U_2,$$\n", "\n", "where $U_2$ is sparse and triangular and $R_2$ is small with respect to the drop tolerance parameter.\n", "\n", "This is just the expansion of $U^{\\top} U$ with respect to a perturbation of $U$ (the second-order term $R^{\\top}_2 R_2$ is dropped).\n" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## What else?\n", "Besides the ILU/IC approaches, there is a very efficient approach called **algebraic multigrid (AMG)**.\n", "\n", "It was first proposed by Ruge and Stuben in 1987.\n", "\n", "It is probably the only **black-box** method that is asymptotically optimal for 2D/3D problems\n", "\n", "(at least, for elliptic problems).\n", "\n", "The idea comes from **geometric multigrid**.\n" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Multigrid: just the basic idea\n", "The basic idea of multigrid is to consider, instead of one problem, a hierarchy of problems on **coarser meshes**.\n", "\n", "We solve the system on a coarse mesh and interpolate the solution to the finer mesh.\n", "\n", "Geometric multigrid uses information about the meshes, operators, and discretization.\n", "\n", "The AMG method tries to infer this information from the **matrix** alone.\n", "\n", "To learn more, enroll in the FastPDE course this Spring :)" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Summary of this part\n", "\n", "ILU comes in three flavours: ILU(k), ILUT, ILU2. They differ, but all are based on approximating the matrix by one of a simpler form to use as a preconditioner.\n", "\n", "Now we will briefly discuss iterative methods for other NLA problems." ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Iterative methods for other NLA problems\n", "\n", "Up to now, we only talked about iterative methods for **linear systems**.\n", "\n", "There are other important large-scale problems:\n", "\n", "1. (Partial) eigenvalue problem: $Ax_k = \\lambda_k x_k.$\n", "2. (Partial) SVD problem: $A v_k = \\sigma_k u_k, \\quad A^* u_k = \\sigma_k v_k$.\n", "\n", "**It is extremely difficult to compute all eigenvalues/singular values of a large matrix**\n", "\n", "(singular vectors / eigenvectors are not sparse, so we would need to store them all!).\n", "\n", "But it is possible to solve **partial eigenvalue problems**, as the sketch below illustrates.\n", "\n", "The SVD follows from the symmetric eigenvalue problem." ] },
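{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "A quick sketch, assuming SciPy's `scipy.sparse.linalg.eigsh` (an ARPACK-based Lanczos-type solver; both ARPACK and the Lanczos method are discussed below). Note that the smallest eigenvalues are computed in shift-invert mode, a rational Krylov technique we also mention later." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "import scipy.sparse as sp\n", "import scipy.sparse.linalg as spla\n", "\n", "n = 1000\n", "A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)).tocsc()\n", "\n", "# a few extreme eigenvalues of a large sparse symmetric matrix\n", "lam_large = spla.eigsh(A, k=4, which='LM', return_eigenvectors=False)\n", "lam_small = spla.eigsh(A, k=4, sigma=0.0, return_eigenvectors=False)  # shift-invert\n", "print('largest:  %s' % lam_large)\n", "print('smallest: %s' % lam_small)" ] },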
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Can we do better than the power method?\n", "\n", "Recall the power method:\n", "\n", "$$x_{k+1} = \\frac{A x_k}{\\Vert A x_k \\Vert}.$$\n", "\n", "Can we do better than the power method (which converges to the eigenvector of the largest-in-magnitude eigenvalue)?" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Yes, we can do better\n", "\n", "We can use the full Krylov subspace and construct an orthogonal basis of the subspace\n", "\n", "$$K_n(A, f)$$\n", "\n", "to approximate the eigenvalues.\n", "\n", "This gives rise to the **Arnoldi method** (for Hermitian matrices it reduces to the **Lanczos method**)." ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Arnoldi method\n", "\n", "The Arnoldi method has a simple structure (see the pseudocode below): it is just modified Gram-Schmidt applied to the Krylov vectors.\n", "\n", "One step, extending the basis $q_0, \\ldots, q_{k-1}$ by one vector:\n", "\n", "```python\n", "w = A.dot(q[k-1])\n", "for j in range(k):  # modified Gram-Schmidt against all previous vectors\n", "    h[j, k-1] = np.dot(q[j], w)\n", "    w -= h[j, k-1] * q[j]\n", "h[k, k-1] = np.linalg.norm(w)\n", "q[k] = w / h[k, k-1]\n", "```" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Arnoldi method, cont.\n", "\n", "The Arnoldi method generates an orthogonal basis $Q_{n+1}$ such that\n", "\n", "$$A Q_n = Q_{n+1} \\widetilde{H}_n,$$\n", "\n", "and $H_n = Q^*_n A Q_n$ is an **upper Hessenberg matrix**.\n", "\n", "The idea of using the Arnoldi method as an **eigenvalue algorithm** is that we approximate the eigenvalues of $A$\n", "\n", "by the eigenvalues of the **orthogonal projection** $H_n$ (called **Ritz values**).\n", "\n", "Since $H_n$ is small, we can use standard software to compute its eigenvalues.\n", "\n", "Typically, Ritz values converge to the **extreme** (largest & smallest) eigenvalues of $A$.\n" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Symmetric case: Lanczos method\n", "\n", "In the Hermitian case, $A = A^*$, the matrix\n", "\n", "$$H = Q^* A Q$$\n", "\n", "becomes tridiagonal, which gives short three-term recurrence relations for the vectors:\n", "\n", "the famous Lanczos method.\n", "\n", "We only need to store the last two vectors and the tridiagonal matrix." ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Practical issues and stability\n", "\n", "The Lanczos vectors may lose orthogonality during the process due to floating-point errors; thus, all practical implementations use **restarts**.\n", "\n", "A very good introduction to the topic is given in the book by **Golub and Van Loan (Matrix Computations)**." ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Preconditioning of eigenvalue problems\n", "\n", "The convergence of the eigenvalue approximations is still governed by the condition number, and the number of iterations can be large.\n", "\n", "Thus, instead of ordinary Krylov subspaces, we can use **rational Krylov subspaces**, which require auxiliary linear solvers." ] },
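{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "Before turning to rational Krylov methods, a compact numerical check that Ritz values approximate the extreme eigenvalues: a self-contained sketch of the Arnoldi/Lanczos process on a matrix with known spectrum (full orthogonalization is used for clarity instead of the three-term recurrence)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "import numpy as np\n", "\n", "np.random.seed(0)\n", "n, m = 200, 20\n", "A = np.diag(np.linspace(1.0, 100.0, n))  # known spectrum: 1 ... 100\n", "\n", "Q = np.zeros((n, m + 1))\n", "H = np.zeros((m + 1, m))\n", "q0 = np.random.randn(n)\n", "Q[:, 0] = q0 / np.linalg.norm(q0)\n", "for k in range(m):\n", "    w = A.dot(Q[:, k])\n", "    for j in range(k + 1):  # modified Gram-Schmidt\n", "        H[j, k] = np.dot(Q[:, j], w)\n", "        w -= H[j, k] * Q[:, j]\n", "    H[k + 1, k] = np.linalg.norm(w)\n", "    Q[:, k + 1] = w / H[k + 1, k]\n", "\n", "ritz = np.linalg.eigvalsh(H[:m, :m])  # symmetric A => H is tridiagonal\n", "print('extreme Ritz values: %.3f and %.3f' % (ritz[0], ritz[-1]))" ] },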
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Simplest rational Krylov method: inverse iteration\n", "\n", "The simplest rational Krylov method is **inverse iteration**:\n", "\n", "$$x_{k+1} = \\frac{A^{-1} x_k}{\\Vert A^{-1} x_k \\Vert}.$$\n", "\n", "It converges to the eigenvector of the smallest-in-magnitude eigenvalue (it is just the power method applied to $A^{-1}$).\n", "\n", "Each step requires one linear solve.\n", "\n", "We can factorize $A$ once, and then only apply the inverse!\n" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Rayleigh quotient iteration\n", "\n", "The RQI iteration has the form\n", "\n", "$$x_{k+1} = (A - \\lambda_k I)^{-1} x_k, \\quad \\lambda_k = ( A x_k, x_k)/(x_k, x_k).$$\n", "\n", "For SPD matrices it has **cubic convergence**, but at each step we have to solve with (or precondition) a different matrix.\n", "\n", "If the factorization of $(A - \\lambda I)$ is not feasible, the **Jacobi-Davidson** method is typically the method of choice." ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Jacobi correction\n", "\n", "Jacobi not only presented the way to solve the eigenvalue problem by Jacobi rotations, but also proposed an iterative procedure. Let $u_j$ be the current normalized approximation and $t$ the correction:\n", "\n", "$$A(u_j + t) = \\lambda (u_j + t),$$\n", "\n", "and we look for a correction $t \\perp u_j$ (a new orthogonal direction).\n", "\n", "Then, the component parallel to $u_j$ has the form\n", "\n", "$$u_j u^*_j A (u_j + t) = \\lambda u_j,$$\n", "\n", "which, with the Rayleigh quotient $\\theta_j = u^*_j A u_j$, simplifies to\n", "\n", "$$\\theta_j + u^*_j A t = \\lambda.$$\n", "\n", "The orthogonal component is\n", "\n", "$$( I - u_j u^*_j) A (u_j + t) = (I - u_j u^*_j) \\lambda (u_j + t),$$\n", "\n", "which is equivalent to\n", "\n", "$$\n", " (I - u_j u^*_j) (A - \\lambda I) t = (I - u_j u^*_j) (- A u_j + \\lambda u_j) = - (I - u_j u^*_j) A u_j = - (A - \\theta_j I) u_j = -r_j,\n", "$$\n", "\n", "where $r_j$ is the residual.\n", "\n", "Since $(I - u_j u^*_j) t = t$, we can rewrite this equation in the form\n" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Jacobi-Davidson correction\n", "\n", "$$ (I - u_j u^*_j) (A - \\lambda I) (I - u_j u^*_j) t = -r_j.$$\n", "\n", "Now we replace $\\lambda$ by $\\theta_j$ and get\n", "\n", "$$\n", " (I - u_j u^*_j) (A - \\theta_j I) (I - u_j u^*_j) t = -r_j.\n", "$$\n", "\n", "Since $r_j \\perp u_j$, this equation is consistent if $(A - \\theta_j I)$ is non-singular.\n", "\n", "If this equation is solved exactly, we get the Rayleigh quotient iteration!\n", "\n", "If we want many eigenvectors, we compute a **partial Schur decomposition:**\n", "\n", "$$A Q_k = Q_k T_k, $$\n", "\n", "and then want to add one more vector to $Q_k$. We just use the matrix $(I - Q_k Q^*_k) A (I - Q_k Q^*_k)$ instead of $A$." ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Jacobi-Davidson: summary\n", "\n", "The correction equation only needs to be solved roughly, and the JD method is often the fastest in practice." ] },
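{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "To make the discussion concrete, a short sketch of the Rayleigh quotient iteration from above (a dense solve is used for simplicity; watch how quickly the residual drops):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "import numpy as np\n", "\n", "np.random.seed(1)\n", "n = 100\n", "B = np.random.randn(n, n)\n", "A = (B + B.T) / 2.0  # symmetric test matrix\n", "\n", "x = np.random.randn(n)\n", "x /= np.linalg.norm(x)\n", "for it in range(5):\n", "    lam = np.dot(x, A.dot(x))  # Rayleigh quotient\n", "    res = np.linalg.norm(A.dot(x) - lam * x)\n", "    print('iter %d: lambda = %.10f, residual = %.2e' % (it, lam, res))\n", "    # near convergence the solve is ill-conditioned,\n", "    # but the computed direction is still accurate\n", "    y = np.linalg.solve(A - lam * np.eye(n), x)\n", "    x = y / np.linalg.norm(y)" ] },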
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Software\n", "\n", "- [ARPACK](http://www.caam.rice.edu/software/ARPACK/) is the most widely used (it powers the scipy sparse eigensolver)\n", "- [PRIMME](https://github.com/primme/primme) is the best in my experience (it employs dynamic switching between different methods, including some we have not talked about, like LOBPCG)\n", "- [PROPACK](http://sun.stanford.edu/~rmunk/PROPACK/) works well for the SVD\n" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Summary for this part\n", "\n", "- Iterative methods (Arnoldi, Lanczos, Jacobi-Davidson) for eigenvalue problems\n", "- There is software for using them\n", "- There are a lot of hidden technical issues (restarts, spurious eigenvalues, stability)" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Other structured matrices\n", "\n", "- Up to now, we discussed preconditioning only for **sparse matrices**\n", "- But iterative methods work well for any matrix with a fast black-box matrix-by-vector product\n", "- An important class is **Toeplitz matrices** (and **Hankel matrices**)\n", "\n", "We will briefly discuss them, with more to follow on Thursday." ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Toeplitz matrices: definition\n", "\n", "A matrix is called **Toeplitz** if its elements are defined as\n", "\n", "$$a_{ij} = t_{i - j}.$$\n", "\n", "A Toeplitz matrix is completely defined by its first column and first row (i.e., $2n-1$ parameters).\n", "\n", "It can also be multiplied by a vector fast.\n", "\n", "It has a lot of applications in **signal processing**." ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Fast Toeplitz matrix-by-vector product\n", "\n", "In order to get a fast matrix-by-vector product, consider a special class of Toeplitz matrices:\n", "\n", "**circulant matrices**.\n", "\n", "A matrix is called **circulant** if\n", "\n", "$$a_{ij} = c_{\\mathrm{mod}(i - j, n)}.$$\n" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Spectral theorem for circulant matrices\n", "\n", "**Theorem:**\n", "\n", "Any circulant matrix can be represented in the form\n", "\n", "$$C = F \\Lambda F^*,$$\n", "\n", "where $F$ is the unitary **Fourier matrix** with the elements\n", "\n", "$$F_{kl} = \\frac{1}{\\sqrt{n}} w^{kl}, \\quad k, l = 0, \\ldots, n-1, \\quad w = e^{\\frac{2 \\pi i}{n}},$$\n", "\n", "and $\\Lambda = \\mathrm{diag}(\\lambda)$ is the diagonal matrix with\n", "\n", "$$\\lambda = \\sqrt{n}\\, F^* c, $$\n", "\n", "where $c$ is the first column of the circulant matrix." ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## FFT\n", "\n", "The matrix-by-vector product $F x$ can be computed in $\\mathcal{O}(n \\log n)$ operations using the Fast Fourier Transform (FFT)." ] },
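{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "A small sketch combining the two previous slides: applying a circulant matrix to a vector via FFT in $\\mathcal{O}(n \\log n)$ operations, checked against the dense product (`scipy.linalg.circulant` builds the dense matrix for comparison)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "import numpy as np\n", "from scipy.linalg import circulant\n", "\n", "np.random.seed(2)\n", "n = 8\n", "c = np.random.randn(n)  # first column of the circulant matrix\n", "x = np.random.randn(n)\n", "\n", "# diagonalization by FFT: C x is the circular convolution of c and x\n", "y_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real\n", "y_dense = circulant(c).dot(x)\n", "print('error: %.2e' % np.linalg.norm(y_fft - y_dense))" ] },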
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Fast Toeplitz-by-vector product by circulant embedding\n", "\n", "An $n \\times n$ Toeplitz matrix $T$ can be embedded into a $2n \\times 2n$ **circulant matrix** $C$ (the blocks $T_1$ and $T_2$ are chosen to make $C$ circulant):\n", "\n", "$$ C = \\begin{bmatrix} T & T_2 \\\\\n", " T_1 & T \\end{bmatrix}, $$\n", "\n", "and then\n", "\n", "$$\n", " C \\begin{bmatrix} q\\\\ 0 \\end{bmatrix} = \\begin{bmatrix} Tq \\\\ * \\end{bmatrix},\n", "$$\n", "\n", "so the first $n$ components of the product give $Tq$." ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Linear system solution with Toeplitz matrices\n", "\n", "How to solve linear systems with Toeplitz matrices?\n", "\n", "Since we have a fast matrix-by-vector product, we can use iterative methods.\n", "\n", "We can also use direct solvers, but what is **the structure of the inverse** of such a matrix?\n", "\n", "We will discuss it next time!" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Take home message\n", "- ILU\n", "- Iterative eigensolvers\n", "- Toeplitz matrices, FFT, circulant matrices" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Next lecture\n", "- More on Toeplitz matrices (direct and iterative methods, block case)\n", "- Back to matrices: matrix functions" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Questions?" ] },
{ "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false, "slideshow": { "slide_type": "skip" } }, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", "\n", "\n" ], "text/plain": [ "" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from IPython.core.display import HTML\n", "def css_styling():\n", "    styles = open(\"./styles/custom.css\", \"r\").read()\n", "    return HTML(styles)\n", "css_styling()" ] } ], "metadata": { "celltoolbar": "Slideshow", "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.10" } }, "nbformat": 4, "nbformat_minor": 0 }