{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# [NTDS'17] assignment 3: feedback\n", "[ntds'17]: https://github.com/mdeff/ntds_2017\n", "\n", "[Michaƫl Defferrard](http://deff.ch), [EPFL LTS2](http://lts2.epfl.ch)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The below grading scheme was followed for the correction of the third assignment, on a total of 100 points plus 10 bonus points. You'll also find some comments and common mistakes.\n", "\n", "Thanks for your work! It was quite good in general. Some did very well, and I read quite interesting comments throughout.\n", "\n", "## General remark\n", "\n", "First, a general remark: use vectorized code (via numpy or pandas) instead of loops. It's much more efficient, because the loop is either written in optimized C (or Fortran) code, or the function is carried by the CPU's [single instruction, multiple data (SIMD)](https://en.wikipedia.org/wiki/SIMD) unit (c.f. MMX, SSE, AVX instructions for x86 CPUs from Intel and AMD).\n", "\n", "Below are some examples from your submissions:\n", "* `err = np.sum(np.abs(labels - genres))` is better than `err = len([1 for i in range(len(labels)) if labels[i] != genres[i]])`.\n", "* `np.mean(mfcc, axis=1)` is better than `[np.mean(x) for x in mfcc]`.\n", "* `weights = np.exp(-distances**2 / kernel_width**2)` is better than\n", " ```\n", " for i in range(0,2000):\n", " for j in range(i,2000):\n", " weights[i,j] = math.exp(-math.sqrt(distances[i,j])/math.sqrt(kernel_width))\n", " ```\n", "\n", "If, for some reason, you cannot vectorize your code, consider using [numba](https://numba.pydata.org/) or [Cython](http://cython.org/).\n", "\n", "If you wrote *any loop* for your submission, please look at my [solution] for ways to avoid them. It's both faster and makes the code easier to understand.\n", "\n", "[solution]: 03_solution.ipynb" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data and features (25 points)\n", "\n", "### 1 Code: get genre from FMA (5 points)\n", "* In Python, you can convert an object to an integer with `int()`, to a string with `str()`, etc.\n", "\n", "### 2 Code: fill table with genres (5 points)\n", "* Doing a for loop on a pandas dataframe is quite inefficient. Again, try to vectorize. Here, the `apply` or `map` functions are handy.\n", "* Do `.apply(get_genre)` instead of `.apply(lambda x: get_genre(x))`. The anonymous function is useless if you don't alter the argument.\n", "\n", "### 3 Code: MFCCs (5 points)\n", "\n", "### 4 Code: summary statistics (5 points)\n", "\n", "### 5 Code: feature selection (5 points)\n", "* Some of you made great efforts to select the best features, even if that was not the focus of the assignment (as stated). Well done!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Graph (55 + 7 points)\n", "\n", "### 6 Question: what is the cosine distance (2 points)\n", "* Beware the difference between a distance and a similarity measure. A distance is 0 if two elements are equal, while a similarity measure takes its maximum. The maximum can be any number. It's 1 in the case of the cosine similarity.\n", "* Note that the range of the cosine similarity is $[-1, 1]$ ($1$ for vectors pointing in the same direction, $0$ for orthogonal vectors, and $-1$ for vectors pointing in opposite directions). 
    "\n",
    "### 7 Code: distances (3 points)\n",
    "\n",
    "### 8 Question: distances equal to zero (4 points)\n",
    "* Some of you investigated why the distance between a pair of songs was zero (or almost zero) and discovered they were duplicates. Good job! :)\n",
    "\n",
    "### 9 Bonus: alternative kernel (3 points)\n",
    "* Think about the requirements for a valid kernel. Note that we want to transform a distance into a similarity measure. See the [solution] for more.\n",
    "\n",
    "### 10 Code: weights (4 points)\n",
    "\n",
    "### 11 Code: nearest neighbors (7 points)\n",
    "* Some of you used algorithms with one or two loops. Try to vectorize as much as possible for your code to be efficient. (I still gave all the points if you used only one loop.)\n",
    "\n",
    "### 12 Bonus: adjacency matrix visualization (4 points)\n",
    "* The \"block-diagonal\" view (see my [solution]) is the simplest visualization here. I put block-diagonal in quotes as the matrix is not exactly block-diagonal. If it were, our graph would be disconnected and a perfect separation would be easy. This view shows the size of the clusters, and gives an indication of the intra- and inter-cluster concentration of edges.\n",
    "* Some of you plotted the non-zero values of the adjacency matrix with different colors to indicate whether they were intra- or inter-genre. As you observed, this is quite hard to visualize, especially if we had more than two genres!\n",
    "* Some also plotted the distributions of edge weights within each genre and between genres. This is a good idea, though it's more verbose and harder to interpret than the \"block-diagonal\" view.\n",
    "\n",
    "### 13 Code: degrees (3 points)\n",
    "* No need to use NetworkX. A simple `W.sum(0)` will do.\n",
    "\n",
    "### 14 Question: choice of Laplacian (3 points)\n",
    "* Many of you said that you should use the normalized Laplacian, without justification. Both Laplacians are, however, valid choices. Clustering with the sign of the Fiedler vector of the combinatorial Laplacian is a relaxation of the RatioCut. A relaxation of the NormalizedCut is obtained with the normalized Laplacian. Both the RatioCut and the NormalizedCut are normalized versions of the MinCut, which seek to impose balanced clusters.\n",
    "\n",
    "### 15 Code: Laplacian (4 points)\n",
    "* When computing the normalized Laplacian $\mathbf{I} - \mathbf{D}^{-1/2} \mathbf{W} \mathbf{D}^{-1/2}$, don't call `np.linalg.inv(D)` to invert the degree matrix. As $\mathbf{D}$ is diagonal, $\mathbf{D}^{-1/2}$ is straightforward to obtain and can be computed with `np.diag(1 / np.sqrt(degrees))`.\n",
    "* If the graph is weighted, we compute the Laplacian from the weighted adjacency and degree matrices. We only use the binary adjacency matrix if no weights are available. We would otherwise discard valuable information.\n",
    "\n",
    "### 16 Code: number of edges (3 points)\n",
    "* If you use the Laplacian matrix, you should subtract the non-zero elements on the diagonal.\n",
    "* You should divide the number of non-zero values by two, as we are counting the edges of an undirected graph.\n",
    "\n",
    "### 17 Question: which eigensolver (4 points)\n",
    "* The reason we use the routines from `scipy.sparse` is not that they allow us to choose the number of eigenvectors to return. We use them because they implement efficient algorithms for the partial eigendecomposition of sparse matrices.\n",
    "\n",
    "### 18 Code: eigenvectors & eigenvalues (5 points)\n",
    "* `which='SM'`, `which='SA'`, and `sigma=0` are all correct arguments to `eigsh`. See the [solution] for more information.\n",
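    "\n",
    "As an illustration (a minimal sketch on a hypothetical random weight matrix, not the assignment data), a partial eigendecomposition with `scipy.sparse.linalg.eigsh` could look like this:\n",
    "\n",
    "```\n",
    "import numpy as np\n",
    "from scipy import sparse\n",
    "from scipy.sparse.linalg import eigsh\n",
    "\n",
    "# Hypothetical symmetric non-negative weight matrix (the assignment uses the kNN graph instead).\n",
    "W = sparse.random(300, 300, density=0.05, random_state=0)\n",
    "W = (W + W.T) / 2\n",
    "\n",
    "# Combinatorial Laplacian L = D - W.\n",
    "degrees = np.asarray(W.sum(axis=0)).squeeze()\n",
    "L = sparse.diags(degrees) - W\n",
    "\n",
    "# Partial eigendecomposition: only the 10 smallest eigenpairs are computed.\n",
    "eigenvalues, eigenvectors = eigsh(L, k=10, which='SM')\n",
    "```\n",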
    "\n",
    "### 19 Question: eigenvectors & eigenvalues (5 points)\n",
    "\n",
    "### 20 Question: connectedness (4 points)\n",
    "* The least costly way to check if the graph is connected is to look at the multiplicity of the eigenvalue 0. Some of you used NetworkX, which is costlier (because we have the eigenvalues already).\n",
    "* While a $k$ nearest neighbor (kNN) graph ensures that each node is connected to at least $k$ nodes, it does not ensure connectedness. For example, two well-separated clusters in feature space would not be connected together. We would end up with two graphs. That's not bad, it makes clustering very easy!\n",
    "\n",
    "### 21 Question: first eigenvector (4 points)\n",
    "* Most of you correctly expected to get 0 here, but not all realized that it was not exactly zero because computers have finite memory, and can only approximate real numbers with a 32- or 64-bit floating-point representation. Some numerical error in the eigendecomposition is likely a reason as well."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Visualization and clustering (20 + 3 points)\n",
    "\n",
    "### 22 Question: why not the first eigenvector (3 points)\n",
    "* The first eigenvector of the normalized Laplacian is not constant. It is proportional to $\mathbf{D}^{1/2} \mathbf{1}$, where $\mathbf{D}$ is the diagonal weighted degree matrix and $\mathbf{1}$ is the vector of all ones.\n",
    "* The first eigenvector of the combinatorial Laplacian is not all ones, but $\frac{1}{\sqrt{N}} \mathbf{1}$. While $\mathbf{1}$ is also an eigenvector (it's the same vector scaled by $\sqrt{N}$), by convention we normalize the eigenvectors so that they form an orthonormal basis.\n",
    "\n",
    "### 23 Code: 2D graph embedding (3 points)\n",
    "* While any symmetric matrix is diagonalizable, the main result here is the spectral theorem: a real symmetric matrix, such as the Laplacian, admits an orthonormal basis of eigenvectors with real eigenvalues.\n",
    "\n",
    "### 24 Question: appearance of genre (5 points)\n",
    "\n",
    "### 25 Code: classification with Fiedler vector (4 points)\n",
    "* The separating plane should be vertical, as the first eigenvector was also used for the x-axis.\n",
    "\n",
    "### 26 Code: error rate (5 points)\n",
    "* You should take into account that the labels may be reversed, i.e. -1 could correspond to either rock or hip-hop. Clustering with $\operatorname{sign}(\mathbf{u}_2)$ or $\operatorname{sign}(-\mathbf{u}_2)$, where $\mathbf{u}_2$ is the Fiedler vector, should give the same error rate.\n",
    "\n",
    "### 27 Bonus: method name and goal (3 points)\n",
    "* Some of you mentioned *spectral graph embedding* instead of *Laplacian eigenmaps*. The main point is that the embedding method minimizes $\operatorname{tr}(\mathbf{Y}^\intercal\mathbf{L}\mathbf{Y})$ and thus preserves the distances between samples as much as possible given the dimension of the embedding space.\n",
    "* Please don't copy Wikipedia, or any other source, without appropriate citation."
   ]
  }
 ],
 "metadata": {},
 "nbformat": 4,
 "nbformat_minor": 2
}