{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Problem 1 (LU decomposition)\n", "## 30 pts" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 1. LU for band matrices (5 pts)\n", "\n", "The complexity to find an LU decomposition of a dense $n\\times n$ matrix is $\\mathcal{O}(n^3)$.\n", "Significant reduction in complexity can be achieved if the matrix has a certain structure, e.g. it is sparse. \n", "In the following task we consider an important example of $LU$ for a special type of sparse matrices –– tridiagonal matrices.\n", "\n", "- Find the number of operations to compute an $LU$ decomposition of a tridiagonal matrix taking into account only non-zero elements. How many nonzero elements are in factors $L$, $U$ and where are they located? Conclude what is the complexity to solve a linear system with tridiagonal matrix in terms of $n$. \n", "\n", "### 2. Completing the proof of existence of LU (10 pts)\n", "\n", "Some details in lecture proofs about $LU$ were omitted. Let us complete them.\n", "- Prove that if $LU$ decomposition exists, then matrix is strictly regular.\n", "- Prove that if $A$ is strictly regular, then $A_1 = D - \\frac 1a b c^T$ (see lectures for notations) is also strictly regular.\n", "\n", "### 3. Stability of LU (10 pts)\n", "\n", "Let\n", "$A = \\begin{pmatrix}\n", "a & 1 & 0\\\\\n", "1 & 1 & 1 \\\\\n", "0 & 1 & 1\n", "\\end{pmatrix}.$ \n", "* Find analytically an $LU$ decomposition of the matrix $A$.\n", "* For what values of $a$ does the LU decomposition of $A$ exist?\n", "* Explain, why can the LU decomposition fail to approximate factors $L$ and $U$ for $|a|\\ll 1$ in computer arithmetic?\n", "How can this problem be solved?\n", "\n", "\n", "### 4. Block LU (5 pts)\n", "\n", "Let $A = \\begin{bmatrix} A_{11} & A_{12} \\\\ A_{21} & A_{22} \\end{bmatrix}$ be a block matrix. The goal is to solve the linear system\n", "$$\n", " \\begin{bmatrix} A_{11} & A_{12} \\\\ A_{21} & A_{22} \\end{bmatrix} \\begin{bmatrix} u_1 \\\\ u_2 \\end{bmatrix} = \\begin{bmatrix} f_1 \\\\ f_2 \\end{bmatrix}.\n", "$$\n", "\n", "* Using block elimination find matrix $S$ and right-hand side $f_2$ so that $u_2$ can be found from $S u_2 = f_2$. Note that the matrix $S$ is called Schur complement of the block $A_{11}$." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Problem 2 (QR decomposition) \n", "\n", "## 20 pts\n", "\n", "### 1. Standard Gram-Schmidt algorithm (10 pts)\n", "Our goal now is to orthogonalize a system of linearly independent vectors $v_1,\\dots,v_n$.\n", "The standard algorithm for the task is the Gram-Schmidt algorithm:\n", "\n", "$$\n", "\\begin{split}\n", "u_1 &= v_1, \\\\\n", "u_2 &= v_2 - \\frac{(v_2, u_1)}{(u_1, u_1)} u_1, \\\\\n", "\\dots \\\\\n", "u_n &= v_n - \\frac{(v_n, u_1)}{(u_1, u_1)} u_1 - \\frac{(v_n, u_2)}{(u_2, u_2)} u_2 - \\dots - \\frac{(v_n, u_{n-1})}{(u_{n-1}, u_{n-1})} u_{n-1}.\n", "\\end{split}\n", "$$\n", "\n", "Now $u_1, \\dots, u_n$ are orthogonal vectors in exact arithmetics. Then to get orthonormal system you should divide each of the vectors by its norm: $u_i := u_i/\\|u_i\\|$.\n", "The Gram-Schidt process can be viewed as a QR decomposition. Let us show that." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Write out what is $Q$ and $R$ obtained in the process described. 
\n", "\n", "\n", "* Implement the described Gram-Schmidt algorithm as a function ```gram_schmidt(A)```, which outputs ```Q,R``` and check it on a random $100\\times 100$ matrix $B.$ Print out the error. \n", "\n", "**Note:** To check orthogonality calculate the matrix of scalar products $G_{ij} = (u_i, u_j)$ (called Gram matrix of set of vectors $u_1,\\dots, u_n$) which should be equal to the identity matrix $I.$ Error $\\|G - I\\|_2$ will show you how far is the system $u_i$ from orthonormal.\n", "\n", "\n", "* Create a Hilbert matrix $A$ of size $100\\times 100$ without using loops.\n", "Othogonalize its columns by the described Gram-Schmidt algorithm. Is the Gram matrix close to the identity matrix in this case? Why?\n", "\n", "\n", "The observed loss of orthogonality is a problem of this particular algorithm. To avoid it [modified Gram-Schmidt algorithm](https://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process#Numerical_stability), QR via Householder reflections or Givens rotations can be used." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# INPUT : rectangular matrix A\n", "# OUTPUT: matrices Q - orthogonal and R - upper triangular such that A=QR\n", "def gram_schmidt(A): # 5 pts\n", " # enter your code here\n", " return Q, R" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2. Householder QR (10 pts)\n", "\n", "* Implement Householder QR decomposition as a function ```householder_qr(A)``` which outputs ```Q,R```. Apply it to the matrix $B$ created above. Print out the error.\n", "\n", "\n", "* Apply it to the Hilbert matrix $A$ created in the first part of the problem and print out the error. Consider how stable is Householder compared to Gram-Schmidt. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# INPUT : rectangular matrix A\n", "# OUTPUT: matrices Q - orthogonal and R - upper triangular such that A=QR\n", "def householder_qr(A): # 7 pts\n", " # enter your code here\n", " return Q, R" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Problem 3 (Low-rank decompositions)\n", "\n", "## 45 pts\n", "\n", "## 1. Theoretical tasks (15 pts)\n", "\n", "* Prove that for any Hermitian matrix, singular values equal to absolute value of eigenvalues. Does this hold for a general matrix? Prove or provide a counterexample.\n", "\n", "\n", "* Find analytically a skeleton decomposition of the matrix of size $n\\times m$ with elements $a_{ij} = \\sin i + \\sin j$.\n", "\n", "\n", "* Let $A\\in\\mathbb{C}^{n\\times m}$ be of rank $r$ and let $A = U\\Sigma V^*$ be its SVD. Prove that $\\mathrm{im}(A^*) = \\mathrm{span}\\{v_1,\\dots, v_r\\}$, where $V = [v_1, \\dots, v_n]$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Recommender system using SVD (30 pts)\n", "\n", "In this task you are asked to build a simple movie recommender system based on *collaborative filtering* approach and SVD.\n", "Collaborative filtering implies that you build recommendations based on the feedback of other users given in a matrix $\\mathbf{M}$ of users vs. movies. \n", "If a user $i$ watched a movie $j$ and rated it, say, as $3$ out of $5$, then the value $3$ is the corresponding matrix entry, i.e. $\\mathbf{M}_{i,j}=3$.\n", "If a user did not watch a movie, then we put $0$ as a matrix element, i.e. $\\mathbf{M}=0$. \n", "Hence, the matrix is sparse." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Task 1. 
Building the core of the recommender (15 pts)\n", "\n", "Build a representation of users and movies in the latent factor space with the help of SVD.\n", "\n", "* We test the SVD model on the [Movielens 10M](https://grouplens.org/datasets/movielens/) dataset. Download the dataset using the python functions provided in the following [Jupyter notebook](movielens10m.ipynb).\n", "\n", "\n", "* Is it possible to use the ```np.linalg.svd``` function to calculate the SVD of the downloaded matrices on your laptop? Provide an estimate.\n", "\n", "\n", "* Implement the function `tr_svd` so that it computes the truncated SVD using `scipy.sparse.linalg.svds`:\n", " * Be aware that `scipy` returns singular values in ascending order (see the [docs](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.sparse.linalg.svds.html)).\n", " * Sort all your SVD factors in descending order of singular values without breaking the consistency of the decomposition.\n", " \n", "\n", "* Fix the rank of approximation and compute the truncated SVD of the training set with `tr_svd`. Plot the obtained singular values. Can you tell from the plot whether the data has low-rank structure? Give your intuition as to why this happens.\n", "\n", "\n", "* Write the function `top_n`, which takes a user as a row of his/her ratings (including non-rated films, i.e. just a row from the train/test set) and an integer $N$, and returns the array of indices that correspond to the $N$ highest ratings. Use the function `np.argsort()`.\n", "\n", "\n", "* Pick several users at random from the training set. Compare their top-10 films with the top-10 suggested by your model ($A_k = U_k \\Sigma_k V_k^T$). Comment on the result. **Note:** you can run all tests in this task with $k=25$." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# INPUT: A: scipy.sparse.csr_matrix (N_train x N_films), k - integer\n", "# OUTPUT: U - np.array (N_train x k), S - np.array (k x k), Vh - np.array (k x N_films)\n", "def tr_svd(A, k): # 5 pts\n", " # enter your code here\n", " return U, S, Vh\n", "\n", "# INPUT: user - np.array (N_films,), N - integer \n", "# OUTPUT: np.array (N,)\n", "def top_n(user, N): # 2 pts\n", " # enter your code here\n", " return top_n_list" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Task 2. Evaluating the performance of the recommender (15 pts)\n", "\n", "Suppose we have trained our model (obtained $U_k, \\Sigma_k, V^T_k$ from $A_{train}$). Let's evaluate it! For this purpose we have $A_{test}$ (recall the function [```split_data```](movielens10m.ipynb)). Our goal is to obtain a recommendation vector $r$ for each user (row) in the test set ($A_{test}$). There is no need to recompute the whole SVD for each user: recommender systems have a tool for this, called _folding-in_.\n", "\n", "#### Folding-in technique \n", "\n", "\n", "\n", "\n", "A new user can be considered as an update to the original matrix (appending a new row). Appending a row to the original matrix corresponds to appending a row to the users' latent factor matrix $U_k$ in the SVD. \n", "Since we do not want to recompute the SVD, we project the new user onto the space of the found latent factors $V_k$, which span the row space of the matrix $A_k = U_k \\Sigma_k V^T_k$ (recall the problem from the theoretical tasks).\n", "The orthoprojection onto this space is $P = V_kV_k^T$ (check that it satisfies the definition of an orthoprojection, i.e. 
$P^2=P$, $P^T = P$).\n", "\n", "Thus, the recommendation vector $r$ for a new user $x$ (considered as a column vector) can be written as\n", "\n", "$$\n", "r = V_kV_k^T x.\n", "$$\n", "\n", "\n", "#### Computing the total score\n", "You have to iterate over all users in the test set and perform the following steps:\n", "* obtain the vector $x$, which is the same as the user row, but with the last $N = 3$ rated films filled with zeroes. Example:\n", "\n", "$$\n", "user = (0, 0, 1, 3, 5, 2, 0, 2, 2, 1, 0, 5) \\;\\; \\to \\;\\; x = (0, 0, 1, 3, 5, 2, 0, 2, 2, 0, 0, 0).\n", "$$\n", "\n", "* compute the folding-in prediction $r$:\n", "\n", "$$\n", "r = V_k V_k^T x.\n", "$$\n", "\n", "* obtain the top-3 from $user$ (truth) and the top-3 from $r$ (prediction). The number of films appearing _simultaneously_ in both top-3's should be added to the `total_score`. Write the corresponding function `total_score_folding`, which takes the sparse test matrix $A_{test}$ and $V_k$ from the truncated SVD of $A_{train}$, and computes the total score. \n", "\n", "**Example:**\n", "\n", "| $user$ | $recommendation$ |\n", "|:------------:|:----------:|\n", "| (**1**,**2**,3) | (10,**2**,**1**) |\n", "| (34, 27, **69**) | (**69**, 5, 9) |\n", "| (7,6,4) | (8,9,2) |\n", "\n", "```total_score``` = 2 + 1 + 0 = 3." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# INPUT: V - np.array(N_films, k), test_data - scipy.sparse.csr_matrix (N_train x N_films)\n", "# OUTPUT: total_score - integer\n", "def total_score_folding(V, test_data): # 8 pts\n", " # enter your code here\n", " return total_score" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Task 3 (bonus). Fine-tuning your model\n", "\n", "* Try to find the rank that produces the best evaluation score.\n", " * Plot the dependence of the evaluation score on the rank of the SVD for all your trials in one graph.\n", "* Report the best result and the corresponding SVD rank.\n", "* Compare your model with the non-personalized recommender which simply recommends the top-3 movies with the highest average ratings. \n", "\n", "**Note** that you don't have to recompute the SVD to evaluate your model. You might compute a relatively large ($k = 500$) truncated SVD once and then just use submatrices of it.\n", "\n", "**Optionally:**\n", "You may want to test your parameters with different data splittings in order to minimize the risk of local effects.\n", "You're also free to add modifications to your code to produce better results. Report what modifications you've made and what effect they had, if any." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Problem 4 (eigenvalues)\n", "\n", "## 55 pts\n", "\n", "### 1. Theoretical tasks (10 pts)\n", "\n", "* Prove that a normal matrix is Hermitian iff its eigenvalues are real. Prove that a normal matrix is unitary iff its eigenvalues satisfy $|\\lambda| = 1$. \n", "\n", "* The following problem illustrates the instability of the Jordan form. Find theoretically the eigenvalues of the perturbed Jordan block:\n", "\n", "$$\n", " J(\\varepsilon) = \n", " \\begin{bmatrix} \n", " \\lambda & 1 & & & 0 \\\\ \n", " & \\lambda & 1 & & \\\\ \n", " & & \\ddots & \\ddots & \\\\ \n", " & & & \\lambda & 1 \\\\ \n", " \\varepsilon & & & & \\lambda \\\\ \n", " \\end{bmatrix}_{n\\times n}\n", "$$\n", "\n", "Comment on how the eigenvalues of $J(0)$ are perturbed for large $n$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2. 
PageRank (30 pts)\n", "\n", "\n", "#### Damping factor importance\n", "\n", "* Write the function ```pagerank_matrix(G)``` that takes an adjacency matrix $G$ as an input and outputs the corresponding PageRank matrix $A$.\n", "\n", "* Find the PageRank matrix $A$ that corresponds to the following graph: \nWhat is its largest eigenvalue? What multiplicity does it have?\n", "\n", "\n", "* Implement the power method for a given matrix $A$, an initial guess $x_0$ and a number of iterations ```num_iter```. It should be organized as a function ```power_method(A, x0, num_iter)``` that outputs an approximation to the eigenvector $x$, the eigenvalue $\\lambda$ and the history of residuals $\\{\\|Ax_k - \\lambda_k x_k\\|_2\\}$. Make sure that the method converges to the correct solution on the matrix $\\begin{bmatrix} 2 & -1 \\\\ -1 & 2 \\end{bmatrix}$, which is known to have the largest eigenvalue equal to $3$.\n", "\n", "\n", "* Run the power method for the graph presented above and plot the residuals $\\|Ax_k - \\lambda_k x_k\\|_2$ as a function of $k$ for ```num_iter=100``` and a random initial guess ```x0```. Explain the absence of convergence. \n", "\n", "\n", "* Consider the same graph, but with the directed edge from node 3 to node 4 removed. Plot the residuals as in the previous task and discuss the convergence. Now, run the power method with ```num_iter=100``` for 10 different initial guesses and print/plot the resulting approximate eigenvectors. Why do they depend on the initial guess?\n", "\n", "\n", "In order to avoid this problem, Larry Page and Sergey Brin [proposed](http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf) the following regularization technique:\n", "\n", "$$\n", "A_d = dA + \\frac{1-d}{N} \\begin{pmatrix} 1 & \\dots & 1 \\\\ \\vdots & & \\vdots \\\\ 1 & \\dots & 1 \\end{pmatrix},\n", "$$\n", "\n", "where $d\\in[0,1]$ is a parameter called the **damping factor** (typically $d=0.85$) and $A$ is of size $N\\times N$. Now $A_d$ is a matrix whose largest eigenvalue has multiplicity $1$. \n", "Recall that computing the eigenvector of the PageRank matrix corresponding to the largest eigenvalue has the following interpretation. Consider a person who starts at a random node of a graph (i.e. opens a random web page); at each step s/he follows one of the outgoing edges uniformly at random (i.e. opens one of the links). So the person randomly walks through the graph, and the eigenvector we are looking for is exactly his/her stationary distribution: for each node it tells you the probability of visiting this particular node. Therefore, if the person starts in a part of the graph which is not connected with the other part, s/he will never get to that other part. In the regularized model, the person at each step follows one of the outgoing links with probability $d$ OR visits a random node from the whole graph with probability $(1-d)$.\n", "\n", "* Now, run the power method with $A_d$ and plot the residuals $\\|A_d x_k - \\lambda_k x_k\\|_2$ as a function of $k$ for $d=0.99$, ```num_iter=100``` and a random initial guess ```x0```.\n", "\n", "\n", "Usually, graphs that arise in various areas are sparse (social, web, road networks, etc.) and, thus, computing a matrix-vector product with the corresponding PageRank matrix $A$ is much cheaper than $\\mathcal{O}(N^2)$. 
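\n", "\n", "For instance, a sparse matrix-vector product touches only the stored non-zeros, so it costs $\\mathcal{O}(\\mathrm{nnz})$ operations instead of $\\mathcal{O}(N^2)$. A rough illustrative sketch (the size and density below are made up, not taken from the assignment):\n", "\n", "```python\n", "import numpy as np\n", "import scipy.sparse as sp\n", "\n", "N = 10000\n", "A = sp.random(N, N, density=1e-4, format='csr')  # ~10^4 stored non-zeros\n", "x = np.random.rand(N)\n", "\n", "y = A @ x  # O(nnz) work: touches only the stored entries;\n", "# a dense matvec would cost O(N^2) operations and ~800 MB\n", "# just to store A as a dense float64 array\n", "```\n", "\n", "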
However, if $A_d$ is calculated directly, it becomes dense and, therefore, the $\\mathcal{O}(N^2)$ cost becomes prohibitive for large $N$.\n", "\n", "\n", "* Implement a fast matrix-vector product for $A_d$ as a function ```pagerank_matvec(A, d, x)```, which takes a PageRank matrix $A$ (in sparse format, e.g., ```csr_matrix```), a damping factor $d$ and a vector $x$ as input and returns $A_dx$ as output. Generate a random adjacency matrix of size $10000 \\times 10000$ with only 100 non-zero elements and compare the ```pagerank_matvec``` performance with direct evaluation of $A_dx$." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# INPUT: G - np.ndarray\n", "# OUTPUT: A - np.ndarray (of size G.shape)\n", "def pagerank_matrix(G): # 5 pts\n", " # enter your code here\n", " return A" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# INPUT: A - np.ndarray (2D), x0 - np.ndarray (1D), num_iter - integer (positive) \n", "# OUTPUT: x - np.ndarray (of size x0), l - float, res - np.ndarray (of size num_iter + 1 [includes the initial guess])\n", "def power_method(A, x0, num_iter): # 5 pts\n", " # enter your code here\n", " return x, l, res" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# INPUT: A - np.ndarray (2D), d - float (from 0.0 to 1.0), x - np.ndarray (1D, size of A.shape[0/1])\n", "# OUTPUT: y - np.ndarray (1D, size of x)\n", "def pagerank_matvec(A, d, x): # 2 pts\n", " # enter your code here\n", " return y" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### DBLP: computer science bibliography\n", "\n", "Download the dataset from [here](https://goo.gl/oZVxEa), unzip it and put `dblp_authors.npz` and `dblp_graph.npz` in the same folder with this notebook. Each value (author name) from `dblp_authors.npz` corresponds to a row/column of the matrix from `dblp_graph.npz`. The value at row `i` and column `j` of the matrix from `dblp_graph.npz` is the number of times author `i` cited papers of author `j`. Let us now find the most significant scientists according to the PageRank model over the DBLP data.\n", "\n", "* Load the weighted adjacency matrix and the authors list into Python using the ```load_dblp(...)``` function. Print its density (fraction of nonzero elements). Find the top-10 most cited authors from the weighted adjacency matrix. Now, make all the weights of the adjacency matrix equal to 1 for simplicity (consider only the existence of a connection between authors, not its weight). Obtain the PageRank matrix $A$ from the adjacency matrix and verify that it is stochastic.\n", " \n", " \n", "* In order to provide ```pagerank_matvec``` to your ```power_method``` (without rewriting it) for fast calculation of $A_dx$, you can create a ```LinearOperator```: \n", "```python\n", "L = scipy.sparse.linalg.LinearOperator(A.shape, matvec=lambda x, A=A, d=d: pagerank_matvec(A, d, x))\n", "```\n", "Calling ```L@x``` or ```L.dot(x)``` will result in calculation of ```pagerank_matvec(A, d, x)``` and, thus, you can plug $L$ instead of the matrix $A$ into the ```power_method``` directly. 
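\n", "\n", "To see why this is cheap, recall that the regularization term in $A_d$ is rank-one, so for any vector $x$\n", "\n", "$$\n", "A_d x = dAx + \\frac{1-d}{N}\\Big(\\sum_{i=1}^{N} x_i\\Big)\\mathbf{1},\n", "$$\n", "\n", "where $\\mathbf{1}$ is the vector of all ones; one sparse matvec plus $\\mathcal{O}(N)$ extra work suffices, and $A_d$ is never formed explicitly.\n", "\n", "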
**Note:** though in the previous subtask the graph was very small (so you could get away without a fast matvec implementation), here it is very large (but sparse), so direct evaluation of $A_dx$ would require storing $\\sim 10^{12}$ matrix elements - good luck with that (^_<)≡☆\n", "\n", "\n", "* Run the power method starting from the vector of all ones and plot the residuals $\\|A_dx_k - \\lambda_k x_k\\|_2$ as a function of $k$ for $d=0.85$.\n", "\n", "\n", "* Print the names of the top-10 authors according to PageRank over DBLP for $d=0.85$.\n", "\n", "\n", "* (Bonus) Does it look suspicious? Why? Discuss what could cause such results." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from scipy.sparse import load_npz\n", "import numpy as np\n", "def load_dblp(path_auth, path_graph):\n", " G = load_npz(path_graph).astype(float)\n", " with np.load(path_auth) as data: authors = data['authors']\n", " return G, authors\n", "G, authors = load_dblp('dblp_authors.npz', 'dblp_graph.npz')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3. QR algorithm (10 pts)\n", "\n", "* Implement the QR algorithm without shifts. A prototype of the function is given below." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# INPUT: \n", "# A_init - square matrix, \n", "# num_iter - number of iterations for QR algorithm\n", "# OUTPUT: \n", "# Ak - transformed matrix A_init given by QR algorithm, \n", "# convergence - numpy array of shape (num_iter, ), \n", "# where we store the Chebyshev norm (largest absolute value) \n", "# of the strictly lower triangular part of Ak at every iteration\n", "def qr_algorithm(A_init, num_iter): # 3 pts\n", " # enter your code here\n", " return Ak, convergence" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Symmetric case\n", "\n", "- Create a **symmetric** tridiagonal $11 \\times 11$ matrix with elements $-1, 2, -1$ on the sub-, main and upper diagonals respectively, without using loops. \n", "- Run $300$ iterations of the QR algorithm for this matrix. \n", "- Plot the output matrix with the function ```plt.spy(Ak, precision=1e-7)```.\n", "- Plot the convergence of the QR algorithm.\n", "\n", "\n", "*Photo comment*: Professor Gilbert Strang (MIT): \"These are 121 cupcakes with my favorite -1, 2, -1 matrix. It was the day before Thanksgiving and two days before my birthday. A happy surprise.\" " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Nonsymmetric case\n", "\n", "- Create a **nonsymmetric** tridiagonal $11 \\times 11$ matrix with elements $5, 3, -2$ on the sub-, main and upper diagonals respectively, without using loops. \n", "- Run $200$ iterations of the QR algorithm for this matrix. \n", "- Plot the resulting matrix with the function ```plt.spy(Ak, precision=1e-7)```. Is this matrix lower triangular? How does this correspond to the claim about convergence of the QR algorithm?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python [conda env:py35]", "language": "python", "name": "conda-env-py35-py" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.4" } }, "nbformat": 4, "nbformat_minor": 2 }