{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Kernel PCA\n\nThis example shows the difference between the Principal Components Analysis\n(:class:`~sklearn.decomposition.PCA`) and its kernelized version\n(:class:`~sklearn.decomposition.KernelPCA`).\n\nOn the one hand, we show that :class:`~sklearn.decomposition.KernelPCA` is able\nto find a projection of the data which linearly separates them while it is not the case\nwith :class:`~sklearn.decomposition.PCA`.\n\nFinally, we show that inverting this projection is an approximation with\n:class:`~sklearn.decomposition.KernelPCA`, while it is exact with\n:class:`~sklearn.decomposition.PCA`.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Projecting data: `PCA` vs. `KernelPCA`\n\nIn this section, we show the advantages of using a kernel when\nprojecting data using a Principal Component Analysis (PCA). We create a\ndataset made of two nested circles.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.datasets import make_circles\nfrom sklearn.model_selection import train_test_split\n\nX, y = make_circles(n_samples=1_000, factor=0.3, noise=0.05, random_state=0)\nX_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's have a quick first look at the generated dataset.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n\n_, (train_ax, test_ax) = plt.subplots(ncols=2, sharex=True, sharey=True, figsize=(8, 4))\n\ntrain_ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train)\ntrain_ax.set_ylabel(\"Feature #1\")\ntrain_ax.set_xlabel(\"Feature #0\")\ntrain_ax.set_title(\"Training data\")\n\ntest_ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test)\ntest_ax.set_xlabel(\"Feature #0\")\n_ = test_ax.set_title(\"Testing data\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The samples from each class cannot be linearly separated: there is no\nstraight line that can split the samples of the inner set from the outer\nset.\n\nNow, we will use PCA with and without a kernel to see what is the effect of\nusing such a kernel. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Now, we will use PCA with and without a kernel to see the effect of using\nsuch a kernel. The kernel used here is a radial basis function (RBF) kernel.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.decomposition import PCA, KernelPCA\n\npca = PCA(n_components=2)\nkernel_pca = KernelPCA(\n    n_components=None, kernel=\"rbf\", gamma=10, fit_inverse_transform=True, alpha=0.1\n)\n\nX_test_pca = pca.fit(X_train).transform(X_test)\nX_test_kernel_pca = kernel_pca.fit(X_train).transform(X_test)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig, (orig_data_ax, pca_proj_ax, kernel_pca_proj_ax) = plt.subplots(\n    ncols=3, figsize=(14, 4)\n)\n\norig_data_ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test)\norig_data_ax.set_ylabel(\"Feature #1\")\norig_data_ax.set_xlabel(\"Feature #0\")\norig_data_ax.set_title(\"Testing data\")\n\npca_proj_ax.scatter(X_test_pca[:, 0], X_test_pca[:, 1], c=y_test)\npca_proj_ax.set_ylabel(\"Principal component #1\")\npca_proj_ax.set_xlabel(\"Principal component #0\")\npca_proj_ax.set_title(\"Projection of testing data\\n using PCA\")\n\nkernel_pca_proj_ax.scatter(X_test_kernel_pca[:, 0], X_test_kernel_pca[:, 1], c=y_test)\nkernel_pca_proj_ax.set_ylabel(\"Principal component #1\")\nkernel_pca_proj_ax.set_xlabel(\"Principal component #0\")\n_ = kernel_pca_proj_ax.set_title(\"Projection of testing data\\n using KernelPCA\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We recall that PCA transforms the data linearly. Intuitively, it means that\nthe coordinate system will be centered, rescaled on each component\nwith respect to its variance, and finally rotated.\nThe data obtained from this transformation is isotropic and can now be\nprojected on its *principal components*.\n\nThus, looking at the projection made using PCA (i.e. the middle figure), we\nsee that there is no change regarding the scaling; indeed, the data being two\nconcentric circles centered at zero, the original data is already isotropic.\nHowever, we can see that the data have been rotated. As a\nconclusion, such a projection would not help if we defined a linear\nclassifier to distinguish samples from both classes.\n\nUsing a kernel allows for a non-linear projection. Here, by using an RBF\nkernel, we expect that the projection will unfold the dataset while\napproximately preserving the relative distances of pairs of data points that\nare close to one another in the original space.\n\nWe observe such behaviour in the figure on the right: the samples of a given\nclass are closer to each other than the samples from the opposite class,\nuntangling both sample sets. Now, we can use a linear classifier to separate\nthe samples from the two classes.\n\n" ] },
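{ "cell_type": "markdown", "metadata": {}, "source": [ "To confirm this, we can repeat the earlier sanity check and fit the same kind\nof linear classifier, this time on the first two kernel principal components;\nthe accuracy should now be substantially higher.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# The same linear classifier, now fit on the kernel PCA projection. We keep\n# only the first two components to match the projection plotted above.\nX_train_kernel_pca = kernel_pca.transform(X_train)\nclf_kpca = LogisticRegression().fit(X_train_kernel_pca[:, :2], y_train)\nprint(\n    f\"Accuracy on the kernel PCA projection: \"\n    f\"{clf_kpca.score(X_test_kernel_pca[:, :2], y_test):.2f}\"\n)" ] },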
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Projecting into the original feature space\n\nOne particularity to keep in mind when using\n:class:`~sklearn.decomposition.KernelPCA` is related to the reconstruction\n(i.e. the back projection into the original feature space). With\n:class:`~sklearn.decomposition.PCA`, the reconstruction is exact if\n`n_components` is the same as the number of original features.\nThis is the case in this example.\n\nWe can investigate if we get the original dataset when back projecting with\n:class:`~sklearn.decomposition.KernelPCA`.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "X_reconstructed_pca = pca.inverse_transform(pca.transform(X_test))\nX_reconstructed_kernel_pca = kernel_pca.inverse_transform(kernel_pca.transform(X_test))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig, (orig_data_ax, pca_back_proj_ax, kernel_pca_back_proj_ax) = plt.subplots(\n    ncols=3, sharex=True, sharey=True, figsize=(13, 4)\n)\n\norig_data_ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test)\norig_data_ax.set_ylabel(\"Feature #1\")\norig_data_ax.set_xlabel(\"Feature #0\")\norig_data_ax.set_title(\"Original test data\")\n\npca_back_proj_ax.scatter(X_reconstructed_pca[:, 0], X_reconstructed_pca[:, 1], c=y_test)\npca_back_proj_ax.set_xlabel(\"Feature #0\")\npca_back_proj_ax.set_title(\"Reconstruction via PCA\")\n\nkernel_pca_back_proj_ax.scatter(\n    X_reconstructed_kernel_pca[:, 0], X_reconstructed_kernel_pca[:, 1], c=y_test\n)\nkernel_pca_back_proj_ax.set_xlabel(\"Feature #0\")\n_ = kernel_pca_back_proj_ax.set_title(\"Reconstruction via KernelPCA\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While we see a perfect reconstruction with\n:class:`~sklearn.decomposition.PCA`, we observe a different result for\n:class:`~sklearn.decomposition.KernelPCA`.\n\nIndeed, :meth:`~sklearn.decomposition.KernelPCA.inverse_transform` cannot\nrely on an analytical back-projection, and thus cannot provide an exact\nreconstruction. Instead, a :class:`~sklearn.kernel_ridge.KernelRidge` is\ninternally trained to learn a mapping from the kernelized PCA basis to the\noriginal feature space. This method therefore comes with an approximation,\nintroducing small differences when back projecting into the original feature\nspace.\n\nTo improve the reconstruction using\n:meth:`~sklearn.decomposition.KernelPCA.inverse_transform`, one can tune\n`alpha` in :class:`~sklearn.decomposition.KernelPCA`, the regularization term\nwhich controls the reliance on the training data during the training of\nthe mapping.\n\n" ] },
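{ "cell_type": "markdown", "metadata": {}, "source": [ "As a minimal sketch of such tuning, we can compare the mean squared\nreconstruction error on the test set for a few values of `alpha` (the grid\nbelow is arbitrary and only meant for illustration).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n\n# Refit KernelPCA with a few (arbitrary) regularization strengths for the\n# internal KernelRidge mapping and report the reconstruction error.\nfor alpha in [1e-3, 1e-1, 1e1]:\n    kpca = KernelPCA(\n        n_components=None, kernel=\"rbf\", gamma=10, fit_inverse_transform=True, alpha=alpha\n    ).fit(X_train)\n    X_rec = kpca.inverse_transform(kpca.transform(X_test))\n    print(f\"alpha={alpha:.0e}: mean squared reconstruction error = {np.mean((X_test - X_rec) ** 2):.5f}\")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.21" } }, "nbformat": 4, "nbformat_minor": 0 }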