{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Principal Component Analysis in Shogun" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### By Abhijeet Kislay (GitHub ID: kislayabhi)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook is about finding the Principal Components (PCA) of data (unsupervised) in Shogun. Its dimensionality reduction capabilities are further utilised to show applications in data compression, image processing and face recognition. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')\n", "import shogun as sg\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Some Formal Background (Skip if you just want code examples)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "PCA is a useful statistical technique that has found application in fields such as face recognition and image compression, and is a common technique for finding patterns in data of high dimension.\n", "\n", "In machine learning problems data is often high dimensional - images, bag-of-words descriptions, etc. In such cases we cannot expect the training data to densely populate the space, meaning that there will be large parts in which little is known about the data. Hence it is expected that only a small number of directions are relevant for describing the data to a reasonable accuracy.\n", "\n", "Although the data vectors may be very high dimensional, they will typically lie close to a much lower dimensional 'manifold'.\n", "Here we concentrate on linear dimensionality reduction techniques. In this approach a high dimensional datapoint $\\mathbf{x}$ is 'projected down' to a lower dimensional vector $\\mathbf{y}$ by:\n", "$$\\mathbf{y}=\\mathbf{F}\\mathbf{x}+\\text{const}.$$\n", "where the matrix $\\mathbf{F}\\in\\mathbb{R}^{\\text{M}\\times \\text{D}}$, with $\\text{M}<\\text{D}$. Here $\\text{M}=\\dim(\\mathbf{y})$ and $\\text{D}=\\dim(\\mathbf{x})$.\n", "\n", "From the above scenario, we assume that\n", "\n", "* The number of principal components to use is $\\text{M}$.\n", "* The dimension of each data point is $\\text{D}$.\n", "* The number of data points is $\\text{N}$.\n", "\n", "We express the approximation for datapoint $\\mathbf{x}^n$ as:$$\\mathbf{x}^n \\approx \\mathbf{c} + \\sum\\limits_{i=1}^{\\text{M}}y_i^n \\mathbf{b}^i \\equiv \\tilde{\\mathbf{x}}^n.$$\n", "* Here the vector $\\mathbf{c}$ is a constant and defines a point in the lower dimensional space.\n", "* The $\\mathbf{b}^i$ define vectors in the lower dimensional space (also known as 'principal component coefficients' or 'loadings').\n", "* The $y_i^n$ are the low dimensional co-ordinates of the data.\n", "\n", "Our aim is to find the reconstruction $\\tilde{\\mathbf{x}}^n$ given the lower dimensional representation $\\mathbf{y}^n$ (which has components $y_i^n, i = 1,...,\\text{M}$). 
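\n", "\n", "A minimal NumPy sketch of this notation (toy numbers only, with $\\text{D}=3$ and $\\text{M}=1$; the basis here is obtained via an SVD purely for illustration, not via the Shogun workflow used later):\n", "\n", "```python\n", "import numpy as np\n", "\n", "rng = np.random.RandomState(0)\n", "X = rng.randn(3, 50)                    # D = 3, N = 50, columns are the datapoints x^n\n", "\n", "c = X.mean(axis=1, keepdims=True)       # the constant c is the mean of the data\n", "B = np.linalg.svd(X - c)[0][:, :1]      # one unit-length basis vector b^1 (M = 1)\n", "\n", "Y = B.T.dot(X - c)                      # low dimensional coordinates y^n\n", "X_tilde = c + B.dot(Y)                  # reconstructions x~^n = c + sum_i y_i^n b^i\n", "\n", "print(X.shape, Y.shape, X_tilde.shape)  # (3, 50) (1, 50) (3, 50)\n", "```\n", "\n", "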
For a data space of dimension $\\dim(\\mathbf{x})=\\text{D}$, we hope to accurately describe the data using only a small number $(\\text{M}\\ll \\text{D})$ of coordinates of $\\mathbf{y}$.\n", "To determine the best lower dimensional representation it is convenient to use the squared distance error between $\\mathbf{x}$ and its reconstruction $\\tilde{\\mathbf{x}}$:$$\\text{E}(\\mathbf{B},\\mathbf{Y},\\mathbf{c})=\\sum\\limits_{n=1}^{\\text{N}}\\sum\\limits_{i=1}^{\\text{D}}[x_i^n - \\tilde{x}_i^n]^2.$$\n", "* Here the basis vectors are defined as $\\mathbf{B} = [\\mathbf{b}^1,...,\\mathbf{b}^\\text{M}]$ (defining $[\\text{B}]_{i,j} = b_i^j$).\n", "* Corresponding low dimensional coordinates are defined as $\\mathbf{Y} = [\\mathbf{y}^1,...,\\mathbf{y}^\\text{N}].$\n", "* Also, $x_i^n$ and $\\tilde{x}_i^n$ represent the coordinates of the data points for the original and the reconstructed data respectively.\n", "* The bias $\\mathbf{c}$ is given by the mean of the data $\\sum_n\\mathbf{x}^n/\\text{N}$.\n", "\n", "Therefore, for simplification purposes we centre our data so as to set $\\mathbf{c}$ to zero. Now we concentrate on finding the optimal basis $\\mathbf{B}$ (which has the components $\\mathbf{b}^i, i=1,...,\\text{M}$).\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Deriving the optimal linear reconstruction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To find the best basis vectors $\\mathbf{B}$ and corresponding low dimensional coordinates $\\mathbf{Y}$, we may minimize the sum of squared differences between each vector $\\mathbf{x}$ and its reconstruction $\\tilde{\\mathbf{x}}$:\n", "\n", "$\\text{E}(\\mathbf{B},\\mathbf{Y}) = \\sum\\limits_{n=1}^{\\text{N}}\\sum\\limits_{i=1}^{\\text{D}}\\left[x_i^n - \\sum\\limits_{j=1}^{\\text{M}}y_j^nb_i^j\\right]^2 = \\text{trace} \\left( (\\mathbf{X}-\\mathbf{B}\\mathbf{Y})^T(\\mathbf{X}-\\mathbf{B}\\mathbf{Y}) \\right)$\n", "\n", "where $\\mathbf{X} = [\\mathbf{x}^1,...,\\mathbf{x}^\\text{N}].$\n", "Considering the above equation under the orthonormality constraint $\\mathbf{B}^T\\mathbf{B} = \\mathbf{I}$ (i.e. the basis vectors are mutually orthogonal and of unit length), we differentiate it w.r.t. $y_k^n$. The squared error $\\text{E}(\\mathbf{B},\\mathbf{Y})$ therefore has zero derivative when: \n", "\n", "$y_k^n = \\sum_i b_i^kx_i^n$\n", "\n", "By substituting this solution in the above equation, the objective becomes\n", "\n", "$\\text{E}(\\mathbf{B}) = (\\text{N}-1)\\left[\\text{trace}(\\mathbf{S}) - \\text{trace}\\left(\\mathbf{S}\\mathbf{B}\\mathbf{B}^T\\right)\\right],$\n", "\n", "where $\\mathbf{S}$ is the sample covariance matrix of the data.\n", "To minimise this under the constraint $\\mathbf{B}^T\\mathbf{B} = \\mathbf{I}$, we use a set of Lagrange multipliers $\\mathbf{L}$, so that the objective is to minimize: \n", "\n", "$-\\text{trace}\\left(\\mathbf{S}\\mathbf{B}\\mathbf{B}^T\\right)+\\text{trace}\\left(\\mathbf{L}\\left(\\mathbf{B}^T\\mathbf{B} - \\mathbf{I}\\right)\\right).$\n", "\n", "Since the constraint is symmetric, we can assume that $\\mathbf{L}$ is also symmetric. Differentiating with respect to $\\mathbf{B}$ and equating to zero we obtain that at the optimum \n", "\n", "$\\mathbf{S}\\mathbf{B} = \\mathbf{B}\\mathbf{L}$.\n", "\n", "This is a form of eigen-equation, so a solution is given by taking $\\mathbf{L}$ to be diagonal and $\\mathbf{B}$ as the matrix whose columns are the corresponding eigenvectors of $\\mathbf{S}$. 
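\n", "\n", "Numerically, the eigen-equation $\\mathbf{S}\\mathbf{B} = \\mathbf{B}\\mathbf{L}$ and the trace identity used next can be checked on a toy centred dataset (a minimal NumPy sketch, independent of the Shogun workflow used later):\n", "\n", "```python\n", "import numpy as np\n", "\n", "rng = np.random.RandomState(1)\n", "X = rng.randn(5, 200)                            # D = 5, N = 200, columns are datapoints\n", "X = X - X.mean(axis=1, keepdims=True)            # centre the data\n", "\n", "S = X.dot(X.T) / (X.shape[1] - 1)                # sample covariance matrix\n", "lam, V = np.linalg.eigh(S)                       # eigenvalues ascending, eigenvectors in columns\n", "\n", "M = 2\n", "B = V[:, -M:]                                    # the M eigenvectors with largest eigenvalues\n", "L = np.diag(lam[-M:])                            # the corresponding (diagonal) Lagrange multipliers\n", "\n", "print(np.allclose(S.dot(B), B.dot(L)))           # True: S B = B L\n", "print(np.trace(S.dot(B).dot(B.T)), np.trace(L))  # equal: trace(S B B^T) = trace(L)\n", "```\n", "\n", "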
In this case,\n", "\n", "$\\text{trace}\\left(\\mathbf{S}\\mathbf{B}\\mathbf{B}^T\\right) =\\text{trace}(\\mathbf{L}),$\n", "\n", "which is the sum of the eigenvalues corresponding to the eigenvectors forming $\\mathbf{B}$. Since we wish to minimise $\\text{E}(\\mathbf{B})$, we take the eigenvectors with the largest corresponding eigenvalues.\n", "Whilst the solution to this eigen-problem is unique, this only serves to define the solution subspace since one may rotate and scale $\\mathbf{B}$ and $\\mathbf{Y}$ such that the value of the squared loss is exactly the same. The justification for choosing the non-rotated eigen solution is given by the additional requirement that the principal components correspond to directions of maximal variance." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Maximum variance criterion" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We aim to find that single direction $\\mathbf{b}$ such that, when the data is projected onto this direction, the variance of this projection is maximal amongst all possible such projections.\n", "The projection of a datapoint onto a direction $\\mathbf{b}$ is $\\mathbf{b}^T\\mathbf{x}^n$ for a unit length vector $\\mathbf{b}$. Hence the sum of squared projections is: $$\\sum\\limits_{n}\\left(\\mathbf{b}^T\\mathbf{x}^n\\right)^2 = \\mathbf{b}^T\\left[\\sum\\limits_{n}\\mathbf{x}^n(\\mathbf{x}^n)^T\\right]\\mathbf{b} = (\\text{N}-1)\\mathbf{b}^T\\mathbf{S}\\mathbf{b} = \\lambda(\\text{N} - 1)$$ \n", "which, ignoring constants, is simply the negative of the objective for a single retained eigenvector $\\mathbf{b}$ (with $\\mathbf{S}\\mathbf{b} = \\lambda\\mathbf{b}$). Hence the optimal single $\\mathbf{b}$ which maximises the projection variance is given by the eigenvector corresponding to the largest eigenvalue of $\\mathbf{S}.$ The eigenvector with the second largest eigenvalue corresponds to the next orthogonal optimal direction, and so on. This explains why, despite the squared loss equation being invariant with respect to arbitrary rotation of the basis vectors, the ones given by the eigen-decomposition have the additional property that they correspond to directions of maximal variance. These maximal variance directions found by PCA are called the $\\text{principal directions}.$\n", "\n", "There are two eigenvalue methods through which Shogun can perform PCA, namely\n", "* Eigenvalue Decomposition Method.\n", "* Singular Value Decomposition.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### EVD vs SVD" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* The EVD viewpoint requires that one compute the eigenvalues and eigenvectors of the covariance matrix, which is formed from the product $\\mathbf{X}\\mathbf{X}^\\text{T}$, where $\\mathbf{X}$ is the data matrix. 
Since the covariance matrix is symmetric, the matrix is diagonalizable, and the eigenvectors can be normalized such that they are orthonormal:\n", "\n", "$\\mathbf{S}=\\frac{1}{\\text{N}-1}\\mathbf{X}\\mathbf{X}^\\text{T},$\n", "\n", "where the $\\text{D}\\times\\text{N}$ matrix $\\mathbf{X}$ contains all the data vectors: $\\mathbf{X}=[\\mathbf{x}^1,...,\\mathbf{x}^\\text{N}].$\n", "Writing the $\\text{D}\\times\\text{N}$ matrix of eigenvectors as $\\mathbf{E}$ and the eigenvalues as an $\\text{N}\\times\\text{N}$ diagonal matrix $\\mathbf{\\Lambda}$, the eigen-decomposition of the covariance $\\mathbf{S}$ is\n", "\n", "$\\mathbf{X}\\mathbf{X}^\\text{T}\\mathbf{E}=\\mathbf{E}\\mathbf{\\Lambda}\\Longrightarrow\\mathbf{X}^\\text{T}\\mathbf{X}\\mathbf{X}^\\text{T}\\mathbf{E}=\\mathbf{X}^\\text{T}\\mathbf{E}\\mathbf{\\Lambda}\\Longrightarrow\\mathbf{X}^\\text{T}\\mathbf{X}\\tilde{\\mathbf{E}}=\\tilde{\\mathbf{E}}\\mathbf{\\Lambda},$\n", "\n", "where we defined $\\tilde{\\mathbf{E}}=\\mathbf{X}^\\text{T}\\mathbf{E}$. The final expression above represents the eigenvector equation for $\\mathbf{X}^\\text{T}\\mathbf{X}.$ This is a matrix of dimensions $\\text{N}\\times\\text{N}$, so calculating its eigen-decomposition takes $\\mathcal{O}(\\text{N}^3)$ operations, compared with $\\mathcal{O}(\\text{D}^3)$ operations in the original high-dimensional space. We can therefore calculate the eigenvectors $\\tilde{\\mathbf{E}}$ and eigenvalues $\\mathbf{\\Lambda}$ of this matrix more easily (which is an advantage when $\\text{N}<\\text{D}$). Once found, we use the fact that the eigenvalues of $\\mathbf{S}$ are given by the diagonal entries of $\\mathbf{\\Lambda}$ and the eigenvectors by\n", "\n", "$\\mathbf{E}=\\mathbf{X}\\tilde{\\mathbf{E}}\\mathbf{\\Lambda}^{-1}$\n", "\n", "\n", "\n", "\n", "* On the other hand, applying SVD to the data matrix $\\mathbf{X}$ proceeds as follows:\n", "\n", "$\\mathbf{X}=\\mathbf{U}\\mathbf{\\Sigma}\\mathbf{V}^\\text{T}$\n", "\n", "where $\\mathbf{U}^\\text{T}\\mathbf{U}=\\mathbf{I}_\\text{D}$, $\\mathbf{V}^\\text{T}\\mathbf{V}=\\mathbf{I}_\\text{N}$ and $\\mathbf{\\Sigma}$ is a diagonal matrix of the (positive) singular values. 
We assume that the decomposition has ordered the singular values so that the upper left diagonal element of $\\mathbf{\\Sigma}$ contains the largest singular value.\n", "\n", "Attempting to construct the covariance matrix $(\\mathbf{X}\\mathbf{X}^\\text{T})$ from this decomposition gives:\n", "\n", "$\\mathbf{X}\\mathbf{X}^\\text{T} = \\left(\\mathbf{U}\\mathbf{\\Sigma}\\mathbf{V}^\\text{T}\\right)\\left(\\mathbf{U}\\mathbf{\\Sigma}\\mathbf{V}^\\text{T}\\right)^\\text{T}$\n", "\n", "$\\mathbf{X}\\mathbf{X}^\\text{T} = \\left(\\mathbf{U}\\mathbf{\\Sigma}\\mathbf{V}^\\text{T}\\right)\\left(\\mathbf{V}\\mathbf{\\Sigma}\\mathbf{U}^\\text{T}\\right)$\n", "\n", "and since $\\mathbf{V}$ is an orthogonal matrix $\\left(\\mathbf{V}^\\text{T}\\mathbf{V}=\\mathbf{I}\\right),$\n", "\n", "$\\mathbf{X}\\mathbf{X}^\\text{T}=\\left(\\mathbf{U}\\mathbf{\\Sigma}^\\mathbf{2}\\mathbf{U}^\\text{T}\\right)$\n", "\n", "Since this is in the form of an eigen-decomposition, the PCA solution is given by performing the SVD of $\\mathbf{X}$, for which the eigenvectors are then given by $\\mathbf{U}$, and the corresponding eigenvalues by the squares of the singular values.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### [CPCA](http://www.shogun-toolbox.org/doc/en/3.0.0/classshogun_1_1CPCA.html) Class Reference (Shogun) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The CPCA class of Shogun inherits from the [Preprocessor](http://www.shogun-toolbox.org/doc/en/3.0.0/classshogun_1_1Preprocessor.html) class. Preprocessors are transformation functions that do not change the domain of the input features. Specifically, CPCA performs principal component analysis on the input vectors and keeps only the specified number of eigenvectors. On preprocessing, the stored covariance matrix is used to project vectors into eigenspace.\n", "\n", "The performance of PCA depends on the algorithm used for the situation at hand.\n", "Our PCA preprocessor class provides 3 method options to compute the transformation matrix:\n", "\n", "* $\\text{PCA(EVD)}$ sets $\\text{PCAmethod == EVD}$ : Eigen Value Decomposition of Covariance Matrix $(\\mathbf{XX^T}).$\n", "The covariance matrix $\\mathbf{XX^T}$ is first formed internally and then\n", "its eigenvectors and eigenvalues are computed using QR decomposition of the matrix.\n", "The time complexity of this method is $\\mathcal{O}(D^3)$ and it should be used when $\\text{N > D.}$\n", "\n", "\n", "* $\\text{PCA(SVD)}$ sets $\\text{PCAmethod == SVD}$ : Singular Value Decomposition of feature matrix $\\mathbf{X}$.\n", "The transpose of the feature matrix, $\\mathbf{X^T}$, is decomposed using SVD: $\\mathbf{X^T = UDV^T}.$\n", "The matrix V in this decomposition contains the required eigenvectors and\n", "the diagonal entries of the diagonal matrix D correspond to the non-negative\n", "eigenvalues. The time complexity of this method is $\\mathcal{O}(DN^2)$ and it should be used when $\\text{N < D.}$\n", "\n", "\n", "* $\\text{PCA(AUTO)}$ sets $\\text{PCAmethod == AUTO}$ : This mode automagically chooses one of the above methods for the user: $\\text{EVD}$ when $\\text{N > D}$ and $\\text{SVD}$ when $\\text{N < D}$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## PCA on 2D data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Step 1: Get some data" ] },
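 { "cell_type": "markdown", "metadata": {}, "source": [ "We need $\\text{N}=100$ noisy 2D points for this walkthrough. The cell below is a minimal sketch that generates them (points along a line, with noise added along the orthogonal direction, mirroring the 3D example later in this notebook). The exact constants are illustrative; the later cells only rely on the arrays `x`, `y` and the $2\\times\\text{N}$ observation matrix `twoD_obsmatrix`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.rcParams['figure.figsize'] = 8,8\n", "#number of points\n", "n=100\n", "\n", "#generate points along the line y = m*x + c (the constants are arbitrary)\n", "m=np.random.uniform(0.5, 1.5)\n", "c=np.random.randint(1,10)\n", "x1=np.random.randint(-20,21,n)\n", "y1=m*x1+c\n", "\n", "#generate the noise\n", "noise=np.random.random_sample([n])*np.random.randint(-30,31,n)\n", "\n", "#the unit vector orthogonal to the line is [-m,1]/magnitude\n", "magnitude=np.sqrt(np.square(m)+1)\n", "\n", "#add the noise orthogonally to the line\n", "x=x1+noise*(-m/magnitude)\n", "y=y1+noise*(1/magnitude)\n", "twoD_obsmatrix=np.array([x,y])\n", "\n", "#quick look at the generated data\n", "plt.scatter(x, y, color='g', marker='o')\n", "plt.show()" ] },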
 { "cell_type": "markdown", "metadata": {}, "source": [ "#### Step 2: Subtract the mean." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#convert the observation matrix into a dense feature matrix.\n", "train_features = sg.create_features(twoD_obsmatrix)\n", "\n", "#PCA(EVD) is chosen since N=100 and D=2 (N>D).\n", "#However we can also use PCA(AUTO) as it will automagically choose the appropriate method. \n", "preprocessor = sg.create_transformer('PCA', method='EVD')\n", "\n", "#since we are projecting down the 2d data, the target dim is 1. But here the exhaustive method is detailed by\n", "#setting the target dimension to 2 to visualize both the eigen vectors.\n", "#However, in future examples we will get rid of this step by implementing it directly.\n", "preprocessor.put('target_dim', 2)\n", "\n", "#Fit the preprocessor: it centralises the data by subtracting its mean and computes the transformation matrix.\n", "preprocessor.fit(train_features)\n", "\n", "#get the mean for the respective dimensions.\n", "mean_datapoints=preprocessor.get('mean_vector')\n", "mean_x=mean_datapoints[0]\n", "mean_y=mean_datapoints[1]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Step 3: Calculate the covariance matrix" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To understand the relationship between 2 dimensions we define the $\\text{covariance}$. It is a measure of how much the dimensions vary from the mean *with respect to each other*.$$cov(X,Y)=\\frac{\\sum\\limits_{i=1}^{n}(X_i-\\bar{X})(Y_i-\\bar{Y})}{n-1}$$\n", "A useful way to get all the possible covariance values between all the different dimensions is to calculate them all and put them in a matrix.\n", "\n", "Example: For a 3d dataset with the usual dimensions of $x,y$ and $z$, the covariance matrix has 3 rows and 3 columns, and the values are:\n", "$$\\mathbf{S} = \\quad\\begin{pmatrix}cov(x,x)&cov(x,y)&cov(x,z)\\\\cov(y,x)&cov(y,y)&cov(y,z)\\\\cov(z,x)&cov(z,y)&cov(z,z)\\end{pmatrix}$$\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Step 4: Calculate the eigenvectors and eigenvalues of the covariance matrix" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Find the eigenvectors $\\mathbf{e}^1,...,\\mathbf{e}^M$ of the covariance matrix $\\mathbf{S}$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Shogun's way of doing things :" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Step 3 and Step 4 are directly implemented by the PCA preprocessor of the Shogun toolbox. The transformation matrix is essentially a $\\text{D}\\times\\text{M}$ matrix, the columns of which correspond to the eigenvectors of the covariance matrix $(\\text{X}\\text{X}^\\text{T})$ having the top $\\text{M}$ eigenvalues." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Get the eigenvectors (we will get two of these since we set the target dimension to 2).\n", "E = preprocessor.get('transformation_matrix')\n", "\n", "#Get all the eigenvalues returned by PCA.\n", "eig_value=preprocessor.get('eigenvalues_vector')\n", "\n", "e1 = E[:,0]\n", "e2 = E[:,1]\n", "eig_value1 = eig_value[0]\n", "eig_value2 = eig_value[1]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Step 5: Choosing components and forming a feature vector." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's visualize the eigenvectors and decide which one to choose as the *principal component* of the data set." ] },
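 { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check (a minimal sketch using NumPy and only the quantities computed above), we can compare Shogun's eigenvectors with those of `numpy.linalg.eigh` applied to the sample covariance of the observation matrix, and pick the principal component programmatically rather than by eye:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#sample covariance of the 2D data (normalised by n-1, matching the definition above)\n", "S = np.cov(twoD_obsmatrix)\n", "\n", "#NumPy's eigen-decomposition: eigenvalues in ascending order, eigenvectors in columns\n", "w, V = np.linalg.eigh(S)\n", "\n", "#eigenvectors are only defined up to sign, so the matrix of absolute dot products\n", "#between NumPy's and Shogun's eigenvectors should be close to a permutation of the identity\n", "print(np.abs(V.T.dot(E)))\n", "\n", "#the principal component is simply the column of E with the largest eigenvalue\n", "top = np.argmax(eig_value)\n", "print('index of the principal component:', top)\n", "print('principal direction:', E[:,top])" ] },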
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#find the M eigenvectors corresponding to the top M eigenvalues and store them in E\n", "#Here M=1\n", "\n", "#slope of e1 & e2\n", "m1=e1[1]/e1[0]\n", "m2=e2[1]/e2[0]\n", "\n", "#generate the two lines\n", "x1=range(-50,50)\n", "x2=x1\n", "y1=np.multiply(m1,x1)\n", "y2=np.multiply(m2,x2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#plot the data along with those two eigenvectors\n", "figure, ax = plt.subplots(1,1)\n", "plt.xlim(-50, 50)\n", "plt.ylim(-50, 50)\n", "ax.plot(x[:], y[:],'o',color='green', markersize=5, label=\"green\")\n", "ax.plot(x1[:], y1[:], linewidth=0.7, color='black')\n", "ax.plot(x2[:], y2[:], linewidth=0.7, color='blue')\n", "p1 = plt.Rectangle((0, 0), 1, 1, fc=\"black\")\n", "p2 = plt.Rectangle((0, 0), 1, 1, fc=\"blue\")\n", "plt.legend([p1,p2],[\"1st eigenvector\",\"2nd eigenvector\"],loc='center left', bbox_to_anchor=(1, 0.5))\n", "plt.title('Eigenvectors selection')\n", "plt.xlabel(\"x axis\")\n", "plt.ylabel(\"y axis\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the above figure, the blue line is a good fit of the data. It shows the most significant relationship between the data dimensions.\n", "It turns out that the eigenvector with the *highest* eigenvalue is the *principal component* of the data set.\n", "Form the matrix $\\mathbf{E}=[\\mathbf{e}^1,...,\\mathbf{e}^M].$\n", "Here $\\text{M}$ represents the target dimension of our final projection." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#The eigenvector corresponding to the higher eigenvalue (i.e. eig_value2) is chosen (i.e. e2).\n", "#E is the feature vector.\n", "E=e2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Step 6: Projecting the data to its Principal Components." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is the final step in PCA. Once we have chosen the components (eigenvectors) that we wish to keep in our data and formed a feature vector, we simply take the vector and multiply it on the left of the original dataset.\n", "The lower dimensional representation of each data point $\\mathbf{x}^n$ is given by \n", "\n", "$\\mathbf{y}^n=\\mathbf{E}^T(\\mathbf{x}^n-\\mathbf{m})$\n", "\n", "Here $\\mathbf{E}^T$ is the matrix with the eigenvectors in rows, with the most significant eigenvector at the top. It is multiplied by the mean-adjusted data, which has the data items in each column and a separate dimension in each row." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Shogun's way of doing things :" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Step 6 can be performed by Shogun's PCA preprocessor as follows:\n", "\n", "The transformation matrix that we got after fitting the preprocessor is used to transform all $\\text{D}$-dimensional feature matrices supplied, via the $\\text{transform}$ method. This transformation outputs the $\\text{M}$-dimensional approximation of all these input vectors and matrices (where $\\text{M} \\leq \\text{min(D,N)}$)." ] },
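 { "cell_type": "markdown", "metadata": {}, "source": [ "Before using the preprocessor, we can also write the projection $\\mathbf{y}^n=\\mathbf{E}^T(\\mathbf{x}^n-\\mathbf{m})$ out by hand with NumPy and compare it against Shogun's transform. This is a minimal sketch that relies only on objects defined above and on the row ordering described in the next cell:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#manual projection y^n = E^T (x^n - m), using the chosen eigenvector E (= e2) and Shogun's mean vector\n", "y_manual = E.dot(twoD_obsmatrix - mean_datapoints.reshape(-1, 1))\n", "\n", "#Shogun's own transform of the same features (also used in the next cell)\n", "yn_check = preprocessor.transform(train_features).get('feature_matrix')\n", "\n", "#the row of yn_check corresponding to e2 should match the manual projection\n", "#(if the components come out in the opposite order, compare with yn_check[0,:] instead)\n", "print(np.allclose(y_manual, yn_check[1,:]))" ] },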
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#transform all 2-dimensional feature matrices to target-dimensional approximations.\n", "yn=preprocessor.transform(train_features).get('feature_matrix')\n", "\n", "#Since here we are manually finding the eigenvector corresponding to the top eigenvalue,\n", "#the 2nd row of yn is chosen as it corresponds to the required eigenvector e2.\n", "yn1=yn[1,:]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Step 5 and Step 6 can be applied directly with Shogun's PCA preprocessor (as in the next example). They have been done manually here to show the exhaustive nature of Principal Component Analysis." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Step 7: Form the approximate reconstruction of the original data $\\mathbf{x}^n$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The approximate reconstruction of the original datapoint $\\mathbf{x}^n$ is given by : $\\tilde{\\mathbf{x}}^n\\approx\\text{m}+\\mathbf{E}\\mathbf{y}^n$" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x_new=(yn1 * E[0]) + np.tile(mean_x,[n,1]).T[0]\n", "y_new=(yn1 * E[1]) + np.tile(mean_y,[n,1]).T[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The new data is plotted below" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "figure, ax = plt.subplots(1,1)\n", "plt.xlim(-50, 50)\n", "plt.ylim(-50, 50)\n", "\n", "ax.plot(x[:], y[:],'o',color='green', markersize=5, label=\"green\")\n", "ax.plot(x_new, y_new, 'o', color='blue', markersize=5, label=\"red\")\n", "plt.title('PCA Projection of 2D data into 1D subspace')\n", "plt.xlabel(\"x axis\")\n", "plt.ylabel(\"y axis\")\n", "\n", "#add some legend for information\n", "p1 = plt.Rectangle((0, 0), 1, 1, fc=\"r\")\n", "p2 = plt.Rectangle((0, 0), 1, 1, fc=\"g\")\n", "p3 = plt.Rectangle((0, 0), 1, 1, fc=\"b\")\n", "plt.legend([p1,p2,p3],[\"normal projection\",\"2d data\",\"1d projection\"],loc='center left', bbox_to_anchor=(1, 0.5))\n", "\n", "#plot the projections in red:\n", "for i in range(n):\n", " ax.plot([x[i],x_new[i]],[y[i],y_new[i]] , color='red')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## PCA on 3D data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Step 1: Get some data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We generate points from a plane and then add random noise orthogonal to it. 
The general equation of a plane is: $$\\text{a}\\mathbf{x}+\\text{b}\\mathbf{y}+\\text{c}\\mathbf{z}+\\text{d}=0$$" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.rcParams['figure.figsize'] = 8,8 \n", "#number of points\n", "n=100\n", "\n", "#generate the data\n", "a=np.random.randint(1,20)\n", "b=np.random.randint(1,20)\n", "c=np.random.randint(1,20)\n", "d=np.random.randint(1,20)\n", "\n", "#random integer coordinates in [-20, 20] (np.random.randint excludes the upper bound, hence 21)\n", "x1=np.random.randint(-20,21,n)\n", "y1=np.random.randint(-20,21,n)\n", "z1=-(a*x1+b*y1+d)/c\n", "\n", "#generate the noise\n", "noise=np.random.random_sample([n])*np.random.randint(-30,31,n)\n", "\n", "#the normal unit vector is [a,b,c]/magnitude\n", "magnitude=np.sqrt(np.square(a)+np.square(b)+np.square(c))\n", "normal_vec=np.array([a,b,c]/magnitude)\n", "\n", "#add the noise orthogonally\n", "x=x1+noise*normal_vec[0]\n", "y=y1+noise*normal_vec[1]\n", "z=z1+noise*normal_vec[2]\n", "threeD_obsmatrix=np.array([x,y,z])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#to visualize the data, we must plot it.\n", "from mpl_toolkits.mplot3d import Axes3D\n", "\n", "fig = plt.figure()\n", "ax=fig.add_subplot(111, projection='3d')\n", "\n", "#plot the noisy data generated by distorting a plane\n", "ax.scatter(x, y, z,marker='o', color='g')\n", "\n", "ax.set_xlabel('x label')\n", "ax.set_ylabel('y label')\n", "ax.set_zlabel('z label')\n", "plt.legend([p2],[\"3d data\"],loc='center left', bbox_to_anchor=(1, 0.5))\n", "plt.title('Two dimensional subspace with noise')\n", "\n", "#grid for the underlying (noise-free) plane; it can be drawn with ax.plot_wireframe(xx, yy, zz) if desired\n", "xx, yy = np.meshgrid(range(-30,30), range(-30,30))\n", "zz=-(a * xx + b * yy + d) / c" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Step 2: Subtract the mean." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#convert the observation matrix into a dense feature matrix.\n", "train_features = sg.create_features(threeD_obsmatrix)\n", "\n", "#PCA(EVD) is chosen since N=100 and D=3 (N>D).\n", "#However we can also use PCA(AUTO) as it will automagically choose the appropriate method. \n", "preprocessor = sg.create_transformer('PCA', method='EVD')\n", "\n", "#If we set the target dimension to 2, Shogun will automagically preserve the required 2 eigenvectors (out of 3) according to their\n", "#eigenvalues.\n", "preprocessor.put('target_dim', 2)\n", "preprocessor.fit(train_features)\n", "\n", "#get the mean for the respective dimensions.\n", "mean_datapoints=preprocessor.get('mean_vector')\n", "mean_x=mean_datapoints[0]\n", "mean_y=mean_datapoints[1]\n", "mean_z=mean_datapoints[2]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Step 3 & Step 4: Calculate the eigenvectors of the covariance matrix" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#get the required eigenvectors corresponding to the top 2 eigenvalues.\n", "E = preprocessor.get('transformation_matrix')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Step 5: Choosing components and forming a feature vector." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since we performed PCA for a target $\\dim = 2$ on the $3 \\dim$ data, we are directly given \n", "the two required eigenvectors in $\\mathbf{E}$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "E is automagically filled by setting the target dimension = M. This is different from the 2d data example, where we implemented this step manually." ] },
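 { "cell_type": "markdown", "metadata": {}, "source": [ "Since the noise was added along the plane's unit normal, the two retained eigenvectors should approximately span the plane itself. A minimal sanity check, using only quantities defined above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#the columns of E should be unit length and mutually orthogonal\n", "print(E.T.dot(E))                 #close to the 2x2 identity matrix\n", "\n", "#and both columns should be roughly orthogonal to the plane's unit normal [a,b,c]/magnitude\n", "print(np.abs(normal_vec.dot(E)))  #both entries should be small compared to 1" ] },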
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Step 6: Projecting the data to its Principal Components." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#This can be performed by Shogun's PCA preprocessor as follows:\n", "yn=preprocessor.transform(train_features).get('feature_matrix')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Step 7: Form the approximate reconstruction of the original data $\\mathbf{x}^n$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The approximate reconstruction of the original datapoint $\\mathbf{x}^n$ is given by : $\\tilde{\\mathbf{x}}^n\\approx\\text{m}+\\mathbf{E}\\mathbf{y}^n$" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "new_data=np.dot(E,yn)\n", "\n", "x_new=new_data[0,:]+np.tile(mean_x,[n,1]).T[0]\n", "y_new=new_data[1,:]+np.tile(mean_y,[n,1]).T[0]\n", "z_new=new_data[2,:]+np.tile(mean_z,[n,1]).T[0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#all the above points lie on the same plane. To make this clearer we also plot the projections.\n", "\n", "fig=plt.figure()\n", "ax=fig.add_subplot(111, projection='3d')\n", "ax.scatter(x, y, z,marker='o', color='g')\n", "ax.set_xlabel('x label')\n", "ax.set_ylabel('y label')\n", "ax.set_zlabel('z label')\n", "plt.legend([p1,p2,p3],[\"normal projection\",\"3d data\",\"2d projection\"],loc='center left', bbox_to_anchor=(1, 0.5))\n", "plt.title('PCA Projection of 3D data into 2D subspace')\n", "\n", "for i in range(n):\n", " ax.scatter(x_new[i], y_new[i], z_new[i],marker='o', color='b')\n", " ax.plot([x[i],x_new[i]],[y[i],y_new[i]],[z[i],z_new[i]],color='r') " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### PCA Performance" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Up till now, we were using the Eigenvalue Decomposition method to compute the transformation matrix $\\text{(N>D)}$, but for the next example $\\text{(N