{ "cells": [ { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "skip" }, "toc": "true" }, "source": [ "# Table of Contents\n", "
" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Lecture 14: Iterative methods for large scale eigenvalue problems" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Previous lecture\n", "\n", "- Finalizing iterative methods for linear systems (minres, bicg, bicgstab)\n", "\n", "- Jacobi, Gauss-Seidel, SSOR methods as preconditioners\n", "\n", "- Incomplete LU for preconditioning, three flavours: ILU(k), ILUT, ILU2" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Partial eigenvalue problem\n", "\n", "- Recall that to find the eigenvalues of a matrix of size $N\\times N$ one can use, e.g., the QR algorithm.\n", "\n", "- However, in some applications the matrix is so large that we cannot even store it explicitly.\n", "\n", "- Typically such matrices are given as a **black box** that can only multiply the matrix by a vector (sometimes even without access to the matrix elements). This is what we assume today.\n", "\n", "- In this case the best we can do is to solve a partial eigenvalue problem, e.g.\n", "\n", "    - Find the $k\\ll N$ smallest or largest eigenvalues (and eigenvectors if needed)\n", "    - Find the $k\\ll N$ eigenvalues closest to a given number $\\sigma$\n", "\n", "- For simplicity we will consider the case when the matrix is normal and thus has an orthonormal basis of eigenvectors.\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Power method and related methods\n", "\n", "### Power method\n", "\n", "Recall that the simplest method to find the largest (in modulus) eigenvalue is the **power method**\n", "\n", "$$\n", "    x_{i+1} = \\frac{Ax_{i}}{\\|Ax_{i}\\|}.\n", "$$\n", "\n", "The convergence is linear with rate $q = \\left|\\frac{\\lambda_2}{\\lambda_1}\\right| < 1$, where $\\lambda_1$ and $\\lambda_2$ are the eigenvalues of largest and second largest modulus."
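] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [
"A minimal NumPy sketch of the power method (the test matrix with eigenvalues $10, 5, \\ldots$ is an illustrative assumption; note that the only access to $A$ the iteration needs is matrix-vector products):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(0)\n",
"n = 200\n",
"# Build a test matrix with a known spectrum: lambda_1 = 10, lambda_2 = 5\n",
"Q, _ = np.linalg.qr(rng.standard_normal((n, n)))\n",
"lams = np.concatenate(([10.0, 5.0], rng.uniform(0.0, 4.0, n - 2)))\n",
"A = (Q * lams) @ Q.T\n",
"\n",
"x = rng.standard_normal(n)\n",
"for _ in range(100):\n",
"    x = A @ x               # the only operation with A: a matvec\n",
"    x /= np.linalg.norm(x)\n",
"lam = x @ (A @ x)           # Rayleigh quotient of the current iterate\n",
"print(lam)                  # close to 10; error decays like (5/10)^i\n",
"```"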
] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Inverse iteration\n", "\n", "To find the eigenvalue of smallest modulus one may run the power method for $A^{-1}$:\n", "\n", "$$x_{i+1} = \\frac{A^{-1}x_{i}}{\\|A^{-1}x_{i}\\|}.$$\n", "\n", "To accelerate convergence, the shift-and-invert strategy can be used:\n", "\n", "$$x_{i+1} = \\frac{(A-\\sigma I)^{-1}x_{i}}{\\|(A-\\sigma I)^{-1}x_{i}\\|},$$\n", "\n", "where $\\sigma$ should be close to the eigenvalue we want to find." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Rayleigh quotient (RQ) iteration\n", "\n", "In order to get superlinear convergence one may use adaptive shifts:\n", "\n", "$$x_{i+1} = \\frac{(A-R(x_i) I)^{-1}x_{i}}{\\|(A-R(x_i) I)^{-1}x_{i}\\|},$$\n", "\n", "where $R(x_i) = \\frac{(x_i, Ax_i)}{(x_i, x_i)}$ is the Rayleigh quotient.\n", "\n", "The method converges **cubically for Hermitian matrices** and quadratically in the non-Hermitian case." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Inexact inverse iteration framework\n", "\n", "- The matrices $(A- \\sigma I)$ and $(A-R(x_i) I)$ are ill-conditioned if $\\sigma$ or $R(x_i)$ is close to an eigenvalue.\n", "\n", "- Thus, if you are not given, e.g., an LU factorization of such a matrix, solving these systems accurately becomes a problem.\n", "\n", "- In practice you can solve the systems only with some accuracy. Recall also that the condition number is only an upper bound and is overestimated for a consistent right-hand side. 
So, even in RQ iteration, letting\n", "the shift tend to the eigenvalue [does not significantly harm](http://www.sciencedirect.com/science/article/pii/S0024379505005756)\n", "the performance of the iterative methods.\n", "\n", "- If the accuracy of the inner solves increases from iteration to iteration, superlinear convergence of RQ iteration can still be achieved, see [Theorem 2.1](http://www.sciencedirect.com/science/article/pii/S0024379505005756).\n", "Otherwise, you will get linear convergence." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Block power method\n", "\n", "The block power method (also known as the subspace iteration method or simultaneous vector iteration) is a natural generalization of the power method for computing several largest eigenvalues.