{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Numerical Methods\n",
"\n",
"## Lecture 7: Numerical Linear Algebra III\n",
"\n",
"### Extra exercises"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"\n",
"import numpy as np\n",
"import scipy.linalg as sl"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exercise 1: diagonally dominant matrices ($*$)\n",
"\n",
    "A matrix $A$ is said to be *diagonally dominant* if, for each row $i$, the absolute value of the diagonal element is larger than the sum of the absolute values of all the other elements in that row.\n",
"\n",
"\n",
    "- Write this definition in mathematical form.\n",
    "\n",
    "\n",
    "- Write code that checks whether a matrix is diagonally dominant.\n",
    "\n",
    "\n",
    "- Test it with well-chosen $2\\times 2$ and $3\\times 3$ examples.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
},
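  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For the first bullet, one way to write the definition is $|A_{ii}| > \\sum_{j \\neq i} |A_{ij}|$ for every row $i$. A possible sketch of the check (one approach among several):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: check |A_ii| > sum_{j != i} |A_ij| for every row i\n",
    "def is_diagonally_dominant(A):\n",
    "    A = np.asarray(A, dtype=float)\n",
    "    diag = np.abs(np.diag(A))\n",
    "    off_diag = np.sum(np.abs(A), axis=1) - diag\n",
    "    return bool(np.all(diag > off_diag))\n",
    "\n",
    "print(is_diagonally_dominant([[3., 1.], [1., 3.]]))  # True: 3 > 1 in both rows\n",
    "print(is_diagonally_dominant([[1., 2.], [2., 1.]]))  # False: 1 < 2 in both rows"
   ]
  },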
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exercise 2: singular matrices and ill-conditioning ($*$)\n",
"\n",
    "For the following matrices, compute the determinant and the condition number, and classify each as singular, ill-conditioned or well-conditioned:\n",
"$$ (i)\\quad A = \n",
" \\begin{pmatrix}\n",
" 1 & 2 & 3 \\\\\n",
" 2 & 3 & 4 \\\\\n",
" 3 & 4 & 5 \\\\\n",
" \\end{pmatrix}\n",
"\\quad\\quad\\quad\\quad\n",
"(ii)\\quad A = \n",
" \\begin{pmatrix}\n",
" 2.11 & -0.80 & 1.72 \\\\\n",
" -1.84 & 3.03 & 1.29 \\\\\n",
" -1.57 & 5.25 & 4.30 \\\\\n",
" \\end{pmatrix}\n",
"$$\n",
"$$ (iii)\\quad A = \n",
" \\begin{pmatrix}\n",
" 2 & -1 & 0 \\\\\n",
" -1 & 2 & -1 \\\\\n",
" 0 & -1 & 2 \\\\\n",
" \\end{pmatrix}\n",
"\\quad\\quad\\quad\\quad\n",
"(iv)\\quad A = \n",
" \\begin{pmatrix}\n",
" 4 & 3 & -1 \\\\\n",
" 7 & -2 & 3 \\\\\n",
" 5 & -18 & 13 \\\\\n",
" \\end{pmatrix}\\,.\n",
"$$\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
},
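  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A sketch of the computation; the classification follows from reading off the printed determinants and condition numbers:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "matrices = [\n",
    "    np.array([[1., 2., 3.], [2., 3., 4.], [3., 4., 5.]]),\n",
    "    np.array([[2.11, -0.80, 1.72], [-1.84, 3.03, 1.29], [-1.57, 5.25, 4.30]]),\n",
    "    np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]]),\n",
    "    np.array([[4., 3., -1.], [7., -2., 3.], [5., -18., 13.]])\n",
    "]\n",
    "for k, A in enumerate(matrices, start=1):\n",
    "    # det close to 0 suggests a singular matrix;\n",
    "    # a large condition number suggests ill-conditioning\n",
    "    print(k, np.linalg.det(A), np.linalg.cond(A))"
   ]
  },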
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exercise 3: Hilbert matrices ($**$) \n",
"\n",
    "The *Hilbert matrix* is a classic example of an ill-conditioned matrix:\n",
"\n",
"$$\n",
"A = \n",
" \\begin{pmatrix}\n",
" 1 & 1/2 & 1/3 & \\cdots \\\\\n",
" 1/2 & 1/3 & 1/4 & \\cdots \\\\\n",
" 1/3 & 1/4 & 1/5 & \\cdots \\\\\n",
" \\vdots & \\vdots & \\vdots & \\ddots \\\\\n",
"\\end{pmatrix}\\,.\n",
"$$\n",
"\n",
"Let's consider the linear system $A\\pmb{x}=\\pmb{b}$ where \n",
"$$ b_i = \\sum_{j=1}^n A_{ij},\\quad \\textrm{for}\\quad i=1,2,\\ldots, n.$$\n",
"\n",
"\n",
    "- How can you write entry $A_{ij}$ for any $i$ and $j$?\n",
"\n",
"\n",
    "- Convince yourself by pen and paper that $ \\pmb{x} = \\left[ 1, 1, \\ldots, 1\\right]^T$ is the solution of the system.\n",
"\n",
"\n",
"- Write a function that returns $A$ and $b$ for a given $n$.\n",
"\n",
"\n",
    "- For a range of $n$, compute the condition number of $A$, solve the linear system and compute the error ($err = \\sum_{i=1}^n \\left|x_{computed, i}-x_{exact, i}\\right|$). What do you observe?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
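  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One possible sketch, using $A_{ij} = 1/(i+j-1)$ and the range of $n$ chosen below as an example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def hilbert_system(n):\n",
    "    i = np.arange(1, n + 1)\n",
    "    A = 1.0 / (i[:, None] + i[None, :] - 1)  # A_ij = 1/(i+j-1)\n",
    "    b = A.sum(axis=1)  # row sums, so x = [1, ..., 1]^T is the exact solution\n",
    "    return A, b\n",
    "\n",
    "for n in (2, 4, 8, 12):\n",
    "    A, b = hilbert_system(n)\n",
    "    x = np.linalg.solve(A, b)\n",
    "    err = np.sum(np.abs(x - 1.0))\n",
    "    print(n, np.linalg.cond(A), err)"
   ]
  },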
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exercise 4: Vandermonde matrices ($**$) \n",
"\n",
    "A *Vandermonde matrix* is defined as follows, for any real numbers $\\alpha_1, \\dots, \\alpha_n$:\n",
"\n",
"\n",
"$$V=\\begin{pmatrix}\n",
"1 & \\alpha_1 & {\\alpha_1}^2 & \\dots & {\\alpha_1}^{n-1}\\\\\n",
"1 & \\alpha_2 & {\\alpha_2}^2 & \\dots & {\\alpha_2}^{n-1}\\\\\n",
"1 & \\alpha_3 & {\\alpha_3}^2 & \\dots & {\\alpha_3}^{n-1}\\\\\n",
"\\vdots & \\vdots & \\vdots & &\\vdots \\\\\n",
"1 & \\alpha_n & {\\alpha_n}^2 & \\dots & {\\alpha_n}^{n-1}\\\\\n",
"\\end{pmatrix}$$\n",
"\n",
"\n",
    "- Write a function that takes a real number $\\alpha$ and an integer $n$ as input, and returns a **vector** $v = \\left(1, \\alpha, \\alpha^2, \\dots, \\alpha^{n-1}\\right)$.\n",
"\n",
"\n",
"- Using this function, write a function that takes a vector $a = \\left(\\alpha_1, \\alpha_2, \\dots, \\alpha_n\\right)$ as input and returns the corresponding Vandermonde matrix.\n",
"\n",
"\n",
    "- For different sets of randomly chosen $(\\alpha_i)$, compute the determinant of the corresponding Vandermonde matrix. What does this tell us about the matrix conditioning?\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
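  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A sketch of both functions; drawing the $\\alpha_i$ uniformly from $[0, 1]$ is one possible choice:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def power_row(alpha, n):\n",
    "    # (1, alpha, alpha^2, ..., alpha^{n-1})\n",
    "    return alpha ** np.arange(n)\n",
    "\n",
    "def vandermonde(a):\n",
    "    a = np.asarray(a, dtype=float)\n",
    "    return np.array([power_row(alpha, len(a)) for alpha in a])\n",
    "\n",
    "rng = np.random.default_rng(42)\n",
    "for n in (3, 5, 8, 12):\n",
    "    V = vandermonde(rng.uniform(0., 1., n))\n",
    "    print(n, np.linalg.det(V), np.linalg.cond(V))"
   ]
  },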
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exercise 5: LU solve ($**$) \n",
"\n",
"Write a function that solves a linear system $A\\pmb{x}=\\pmb{b}$ using the LU decomposition method.\n",
"\n",
"Hint: you can re-use the function you have written in lecture 6, or use the built-in function `scipy.linalg.lu` to compute the LU decomposition. Write code that performs the forward substitution and backward substitution. Compare your result to the one given by `scipy.linalg.solve`.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
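  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One possible sketch using the built-in `scipy.linalg.lu`, which returns $P$, $L$, $U$ with $A = PLU$, so the system becomes $LU\\pmb{x} = P^T\\pmb{b}$; the test system below is an arbitrary example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def forward_substitution(L, b):\n",
    "    y = np.zeros_like(b)\n",
    "    for i in range(len(b)):\n",
    "        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]\n",
    "    return y\n",
    "\n",
    "def backward_substitution(U, y):\n",
    "    x = np.zeros_like(y)\n",
    "    for i in range(len(y) - 1, -1, -1):\n",
    "        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]\n",
    "    return x\n",
    "\n",
    "def lu_solve(A, b):\n",
    "    P, L, U = sl.lu(A)  # A = P @ L @ U\n",
    "    y = forward_substitution(L, P.T @ b)  # solve L y = P^T b\n",
    "    return backward_substitution(U, y)    # solve U x = y\n",
    "\n",
    "A = np.array([[4., 3., -1.], [7., -2., 3.], [2., 1., 5.]])\n",
    "b = np.array([1., 2., 3.])\n",
    "print(lu_solve(A, b))\n",
    "print(sl.solve(A, b))  # should agree"
   ]
  },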
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exercise 6: Gauss-Seidel relaxation ($***$)\n",
"\n",
    "Convergence of the Gauss-Seidel method can be improved by a technique known as relaxation. The idea is to take the new value of $x_i$ as a weighted average of its previous value and the value predicted by the regular Gauss-Seidel iteration.\n",
"\n",
"The corresponding formula for the $k^{th}$ iteration of the algorithm and the $i^{th}$ row is:\n",
"\n",
    "$$x_i^{(k)} = \\frac{\\omega}{A_{ii}}\\left(b_i - \\sum_{j=1}^{i-1} A_{ij}x_j^{(k)} - \\sum_{j=i+1}^n A_{ij}x_j^{(k-1)}\\right) + (1-\\omega)x_i^{(k-1)},\\quad i=1,2,\\ldots, n,$$\n",
    "\n",
    "where the weight $\\omega$ is called the relaxation factor; it is usually chosen in the range $0 < \\omega < 2$.\n",
"\n",
"\n",
"- What does the algorithm give for $\\omega = 0$; for $\\omega = 1$; for $0 < \\omega < 1$? NB. When $\\omega > 1$, the method is called \"over-relaxation\".\n",
"\n",
"\n",
    "- Write a function that solves a system with the relaxed Gauss-Seidel algorithm, for a given $\\omega$.\n",
"\n",
"\n",
    "- Use this function to solve the system from Lecture 7 for different values of $\\omega$. How many iterations are necessary to reach a tolerance of $10^{-6}$ for each value of $\\omega$?\n"
]
},
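  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A sketch of the relaxed iteration; the $3\\times 3$ system used below is a placeholder example, not necessarily the system from Lecture 7:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def gauss_seidel_relaxed(A, b, omega, tol=1e-6, max_iter=10000):\n",
    "    x = np.zeros_like(b)\n",
    "    for k in range(1, max_iter + 1):\n",
    "        x_old = x.copy()\n",
    "        for i in range(len(b)):\n",
    "            # already-updated entries (j < i) use x, the rest use x_old\n",
    "            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]\n",
    "            x[i] = omega * (b[i] - sigma) / A[i, i] + (1 - omega) * x_old[i]\n",
    "        if np.max(np.abs(x - x_old)) < tol:\n",
    "            return x, k\n",
    "    return x, max_iter\n",
    "\n",
    "A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])\n",
    "b = np.array([2., 4., 10.])\n",
    "for omega in (0.5, 0.8, 1.0, 1.2):\n",
    "    x, k = gauss_seidel_relaxed(A, b, omega)\n",
    "    print(omega, k, x)"
   ]
  },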
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"celltoolbar": "Slideshow",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": false,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": false,
"toc_window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 1
}