{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# The Johnson-Lindenstrauss bound for embedding with random projections\n\n\nThe `Johnson-Lindenstrauss lemma`_ states that any high dimensional\ndataset can be randomly projected into a lower dimensional Euclidean\nspace while controlling the distortion in the pairwise distances.\n\n Johnson%E2%80%93Lindenstrauss_lemma\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause\n\nimport sys\nfrom time import time\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom sklearn.datasets import fetch_20newsgroups_vectorized, load_digits\nfrom sklearn.metrics.pairwise import euclidean_distances\nfrom sklearn.random_projection import (\n SparseRandomProjection,\n johnson_lindenstrauss_min_dim,\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Theoretical bounds\nThe distortion introduced by a random projection `p` is asserted by\nthe fact that `p` is defining an eps-embedding with good probability\nas defined by:\n\n\\begin{align}(1 - eps) \\|u - v\\|^2 < \\|p(u) - p(v)\\|^2 < (1 + eps) \\|u - v\\|^2\\end{align}\n\nWhere `u` and `v` are any rows taken from a dataset of shape `(n_samples,\nn_features)` and `p` is a projection by a random Gaussian `N(0, 1)` matrix\nof shape `(n_components, n_features)` (or a sparse Achlioptas matrix).\n\nThe minimum number of components to guarantees the eps-embedding is\ngiven by:\n\n\\begin{align}n\\_components \\geq 4 log(n\\_samples) / (eps^2 / 2 - eps^3 / 3)\\end{align}\n\n\nThe first plot shows that with an increasing number of samples ``n_samples``,\nthe minimal number of dimensions ``n_components`` increased logarithmically\nin order to guarantee an ``eps``-embedding.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# range of admissible distortions\neps_range = np.linspace(0.1, 0.99, 5)\ncolors = plt.cm.Blues(np.linspace(0.3, 1.0, len(eps_range)))\n\n# range of number of samples (observation) to embed\nn_samples_range = np.logspace(1, 9, 9)\n\nplt.figure()\nfor eps, color in zip(eps_range, colors):\n min_n_components = johnson_lindenstrauss_min_dim(n_samples_range, eps=eps)\n plt.loglog(n_samples_range, min_n_components, color=color)\n\nplt.legend([f\"eps = {eps:0.1f}\" for eps in eps_range], loc=\"lower right\")\nplt.xlabel(\"Number of observations to eps-embed\")\nplt.ylabel(\"Minimum number of dimensions\")\nplt.title(\"Johnson-Lindenstrauss bounds:\\nn_samples vs n_components\")\nplt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The second plot shows that an increase of the admissible\ndistortion ``eps`` allows to reduce drastically the minimal number of\ndimensions ``n_components`` for a given number of samples ``n_samples``\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# range of admissible distortions\neps_range = np.linspace(0.01, 0.99, 100)\n\n# range of number of samples (observation) to embed\nn_samples_range = np.logspace(2, 6, 5)\ncolors = plt.cm.Blues(np.linspace(0.3, 1.0, len(n_samples_range)))\n\nplt.figure()\nfor n_samples, color in zip(n_samples_range, colors):\n min_n_components = johnson_lindenstrauss_min_dim(n_samples, eps=eps_range)\n plt.semilogy(eps_range, min_n_components, color=color)\n\nplt.legend([f\"n_samples = {n}\" for n 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Empirical validation\n\nWe validate the above bounds on the 20 newsgroups text document\n(TF-IDF word frequencies) dataset or on the digits dataset:\n\n- for the 20 newsgroups dataset, some 300 documents with 100k\n  features in total are projected using a sparse random matrix to smaller\n  Euclidean spaces with various values for the target number of dimensions\n  ``n_components``;\n\n- for the digits dataset, the 8x8 gray-level pixel data of 300\n  handwritten digit pictures are randomly projected to spaces of various\n  larger numbers of dimensions ``n_components``.\n\nThe default dataset is the 20 newsgroups dataset. To run the example on the\ndigits dataset, pass the ``--use-digits-dataset`` command line argument to\nthis script.\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "if \"--use-digits-dataset\" in sys.argv:\n    data = load_digits().data[:300]\nelse:\n    data = fetch_20newsgroups_vectorized().data[:300]" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "For each value of ``n_components``, we plot:\n\n- the 2D distribution of sample pairs with pairwise distances in the original\n  and projected spaces as x- and y-axes respectively;\n\n- a 1D histogram of the ratio of those distances (projected / original).\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "n_samples, n_features = data.shape\nprint(\n    f\"Embedding {n_samples} samples with dim {n_features} using various \"\n    \"random projections\"\n)\n\nn_components_range = np.array([300, 1_000, 10_000])\ndists = euclidean_distances(data, squared=True).ravel()\n\n# select only pairs of non-identical samples\nnonzero = dists != 0\ndists = dists[nonzero]\n\nfor n_components in n_components_range:\n    t0 = time()\n    rp = SparseRandomProjection(n_components=n_components)\n    projected_data = rp.fit_transform(data)\n    print(\n        f\"Projected {n_samples} samples from {n_features} to {n_components} in \"\n        f\"{time() - t0:0.3f}s\"\n    )\n    if hasattr(rp, \"components_\"):\n        n_bytes = rp.components_.data.nbytes\n        n_bytes += rp.components_.indices.nbytes\n        print(f\"Random matrix with size: {n_bytes / 1e6:0.3f} MB\")\n\n    projected_dists = euclidean_distances(projected_data, squared=True).ravel()[nonzero]\n\n    plt.figure()\n    min_dist = min(projected_dists.min(), dists.min())\n    max_dist = max(projected_dists.max(), dists.max())\n    plt.hexbin(\n        dists,\n        projected_dists,\n        gridsize=100,\n        cmap=plt.cm.PuBu,\n        extent=[min_dist, max_dist, min_dist, max_dist],\n    )\n    plt.xlabel(\"Pairwise squared distances in original space\")\n    plt.ylabel(\"Pairwise squared distances in projected space\")\n    plt.title(f\"Pairwise distances distribution for n_components={n_components}\")\n    cb = plt.colorbar()\n    cb.set_label(\"Sample pairs counts\")\n\n    rates = projected_dists / dists\n    print(f\"Mean distances rate: {np.mean(rates):.2f} ({np.std(rates):.2f})\")\n\n    plt.figure()\n    plt.hist(rates, bins=50, range=(0.0, 2.0), edgecolor=\"k\", density=True)\n    plt.xlabel(\"Squared distances rate: projected / original\")\n    plt.ylabel(\"Distribution of samples pairs\")\n    plt.title(f\"Histogram of pairwise distance rates for n_components={n_components}\")\n\n    # TODO: compute the expected value of eps and add it to the previous plot\n    # as vertical lines / region\n\nplt.show()" ] },
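{ "cell_type": "markdown", "metadata": {}, "source": [ "As a complementary sketch (not part of the original example, and using only the\n``johnson_lindenstrauss_min_dim`` helper introduced above), we can numerically\ninvert the Johnson-Lindenstrauss bound: for each tested ``n_components``, scan a\ngrid of distortions and keep the smallest ``eps`` whose required dimensionality\nstill fits, i.e. the distortion level the lemma guarantees for our 300 samples.\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Sketch: invert the JL bound numerically by scanning a grid of eps values\n# and keeping the smallest one whose minimum dimensionality fits within each\n# tested n_components.\neps_grid = np.linspace(0.01, 0.99, 981)\nmin_dims = johnson_lindenstrauss_min_dim(n_samples, eps=eps_grid)\n\nfor n_components in n_components_range:\n    admissible = eps_grid[min_dims <= n_components]\n    if admissible.size:\n        print(f\"n_components={n_components}: eps-embedding guaranteed for eps >= {admissible[0]:0.2f}\")\n    else:\n        print(f\"n_components={n_components}: no eps in (0, 1) is guaranteed\")" ] },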
{ "cell_type": "markdown", "metadata": {}, "source": [ "We can see that for low values of ``n_components`` the distribution is wide\nwith many distorted pairs and a skewed shape (due to the hard limit of zero\nratio on the left, as distances are always positive), while for larger values\nof ``n_components`` the distortion is controlled and the distances are well\npreserved by the random projection.\n\n## Remarks\n\nAccording to the JL lemma, projecting 300 samples without too much distortion\nwill require at least several thousand dimensions, irrespective of the\nnumber of features of the original dataset.\n\nHence using random projections on the digits dataset, which only has 64\nfeatures in the input space, does not make sense: it does not allow\nfor dimensionality reduction in this case.\n\nOn the 20 newsgroups dataset, on the other hand, the dimensionality can be\ndecreased from 56,436 down to 10,000 while reasonably preserving\npairwise distances.\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.21" } }, "nbformat": 4, "nbformat_minor": 0 }