{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Hierarchical clustering with and without structure\n\nThis example demonstrates hierarchical clustering with and without\nconnectivity constraints. It shows the effect of imposing a connectivity\ngraph to capture local structure in the data. Without connectivity constraints,\nthe clustering is based purely on distance, while with constraints, the\nclustering respects local structure.\n\nFor more information, see `hierarchical_clustering`.\n\nImposing connectivity has two main effects. First, clustering with sparse\nconnectivity matrices is generally faster.\n\nSecond, it changes how the linkage criteria behave: when using a\nconnectivity matrix, single, average and complete linkage become unstable\nand tend to create a few clusters that grow very quickly. Indeed, average\nand complete linkage normally fight this percolation behavior by considering\nall the distances between two clusters when merging them (while single\nlinkage exaggerates the behaviour by considering only the shortest distance\nbetween clusters). The connectivity graph breaks this mechanism for average\nand complete linkage, making them resemble the more brittle single linkage.\nThis effect is more pronounced for very sparse graphs (try decreasing the\nnumber of neighbors in `kneighbors_graph`) and with complete linkage. In\nparticular, having a very small number of neighbors in the graph imposes a\ngeometry that is close to that of single linkage, which is well known to\nhave this percolation instability.\n\nThe effect of imposing connectivity is illustrated on two different but\nsimilar datasets that both exhibit a spiral structure. In the first example\nwe build a Swiss roll dataset and run hierarchical clustering on the\npositions of the data points. Here, we compare unstructured Ward clustering\nwith a structured variant that enforces k-Nearest Neighbors connectivity.\nIn the second example, we show the effect of applying such a connectivity\ngraph to single, average and complete linkage.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Generate the Swiss Roll dataset.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import time\n\nfrom sklearn.cluster import AgglomerativeClustering\nfrom sklearn.datasets import make_swiss_roll\n\nn_samples = 1500\nnoise = 0.05\nX1, _ = make_swiss_roll(n_samples, noise=noise)\nX1[:, 1] *= 0.5  # Make the roll thinner" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Compute clustering without connectivity constraints\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(\"Compute unstructured hierarchical clustering...\")\nst = time.time()\nward_unstructured = AgglomerativeClustering(n_clusters=6, linkage=\"ward\").fit(X1)\nelapsed_time_unstructured = time.time() - st\nlabel_unstructured = ward_unstructured.labels_\nprint(f\"Elapsed time: {elapsed_time_unstructured:.2f}s\")\nprint(f\"Number of points: {label_unstructured.size}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Plot unstructured clustering result\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\nimport numpy as np\n\nfig1 = plt.figure()\nax1 = fig1.add_subplot(111, projection=\"3d\", elev=7, azim=-80)\nax1.set_position([0, 0, 0.95, 1])\nfor l in np.unique(label_unstructured):\n    ax1.scatter(\n        X1[label_unstructured == l, 0],\n        X1[label_unstructured == l, 1],\n        X1[label_unstructured == l, 2],\n        color=plt.cm.jet(float(l) / np.max(label_unstructured + 1)),\n        s=20,\n        edgecolor=\"k\",\n    )\n_ = fig1.suptitle(\n    f\"Without connectivity constraints (time {elapsed_time_unstructured:.2f}s)\"\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Compute clustering with connectivity constraints\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.neighbors import kneighbors_graph\n\nconnectivity = kneighbors_graph(X1, n_neighbors=10, include_self=False)\n\nprint(\"Compute structured hierarchical clustering...\")\nst = time.time()\nward_structured = AgglomerativeClustering(\n    n_clusters=6, connectivity=connectivity, linkage=\"ward\"\n).fit(X1)\nelapsed_time_structured = time.time() - st\nlabel_structured = ward_structured.labels_\nprint(f\"Elapsed time: {elapsed_time_structured:.2f}s\")\nprint(f\"Number of points: {label_structured.size}\")" ] },
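{ "cell_type": "markdown", "metadata": {}, "source": [ "The next cell is not part of the original example; it is a small optional\ncheck of the claim above that the connectivity matrix is sparse. It only uses\n`connectivity`, the SciPy sparse matrix returned by `kneighbors_graph` in the\nprevious cell, and reports how many entries it actually stores.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Optional check (not in the original example): the k-NN connectivity matrix\n# is sparse, so only a small fraction of the n_samples * n_samples pairwise\n# relations is stored.\nn_rows, n_cols = connectivity.shape\nprint(f\"Connectivity matrix shape: {n_rows} x {n_cols}\")\nprint(f\"Stored entries: {connectivity.nnz}\")\nprint(f\"Average neighbors per sample: {connectivity.nnz / n_rows:.1f}\")" ] },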
{ "cell_type": "markdown", "metadata": {}, "source": [ "Plot structured clustering result\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig2 = plt.figure()\nax2 = fig2.add_subplot(111, projection=\"3d\", elev=7, azim=-80)\nax2.set_position([0, 0, 0.95, 1])\nfor l in np.unique(label_structured):\n    ax2.scatter(\n        X1[label_structured == l, 0],\n        X1[label_structured == l, 1],\n        X1[label_structured == l, 2],\n        color=plt.cm.jet(float(l) / np.max(label_structured + 1)),\n        s=20,\n        edgecolor=\"k\",\n    )\n_ = fig2.suptitle(\n    f\"With connectivity constraints (time {elapsed_time_structured:.2f}s)\"\n)" ] },
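{ "cell_type": "markdown", "metadata": {}, "source": [ "The following cell is also not part of the original example; it is an\noptional comparison that quantifies how much the connectivity constraint\nchanges the partition, using `adjusted_rand_score` from `sklearn.metrics` on\nthe `label_unstructured` and `label_structured` arrays computed above.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Optional comparison (not in the original example): an adjusted Rand index of\n# 1.0 would mean the two clusterings are identical, while values close to 0\n# mean they are essentially unrelated.\nfrom sklearn.metrics import adjusted_rand_score\n\nari = adjusted_rand_score(label_unstructured, label_structured)\nprint(f\"Adjusted Rand index (unstructured vs. structured): {ari:.2f}\")" ] },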
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Generate the 2D spiral dataset.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "n_samples = 1500\nnp.random.seed(0)\nt = 1.5 * np.pi * (1 + 3 * np.random.rand(1, n_samples))\nx = t * np.cos(t)\ny = t * np.sin(t)\n\nX2 = np.concatenate((x, y))\nX2 += 0.7 * np.random.randn(2, n_samples)\nX2 = X2.T" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Capture local connectivity using a graph\nA larger number of neighbors will give more homogeneous clusters at the\ncost of computation time. A very large number of neighbors gives more\nevenly distributed cluster sizes, but may not impose the local manifold\nstructure of the data.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "knn_graph = kneighbors_graph(X2, 30, include_self=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Plot clustering with and without structure\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig3 = plt.figure(figsize=(8, 12))\nsubfigs = fig3.subfigures(4, 1)\nparams = [\n    (None, 30),\n    (None, 3),\n    (knn_graph, 30),\n    (knn_graph, 3),\n]\n\nfor subfig, (connectivity, n_clusters) in zip(subfigs, params):\n    axs = subfig.subplots(1, 4, sharey=True)\n    for index, linkage in enumerate((\"average\", \"complete\", \"ward\", \"single\")):\n        model = AgglomerativeClustering(\n            linkage=linkage, connectivity=connectivity, n_clusters=n_clusters\n        )\n        t0 = time.time()\n        model.fit(X2)\n        elapsed_time = time.time() - t0\n        axs[index].scatter(\n            X2[:, 0], X2[:, 1], c=model.labels_, cmap=plt.cm.nipy_spectral\n        )\n        axs[index].set_title(\n            \"linkage=%s\\n(time %.2fs)\" % (linkage, elapsed_time),\n            fontdict=dict(verticalalignment=\"top\"),\n        )\n        axs[index].set_aspect(\"equal\")\n        axs[index].axis(\"off\")\n\n    subfig.subplots_adjust(bottom=0, top=0.83, wspace=0, left=0, right=1)\n    subfig.suptitle(\n        \"n_clusters=%i, connectivity=%r\" % (n_clusters, connectivity is not None),\n        size=17,\n    )\n\nplt.show()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.14" } }, "nbformat": 4, "nbformat_minor": 0 }