{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/fonnesbeck/Bios8366/blob/master/notebooks/Section6_2-Clustering.ipynb)\n", "\n", "# Unsupvervised Learning\n", "\n", "Clustering is a class of unsupervised learning methods that associates observations according to some specified measure of **similarity** (e.g. Euclidean distance)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## K-means Algorithm\n", "\n", "The K-means clustering algorithm associates each point $x_i$ in a set of input points $\\{x_1, x_2, \\ldots, x_m\\}$ to $K$ clusters. Each cluster is specified by a **centroid** that is the average location of all the points in the cluster. The algorithm proceeds iteratively from arbitrary centroid locations, updating the membership of each point according to minimum distance, then updating the centroid location based on the new cluster membership. \n", "\n", "You might notice that this is just a special case of the **expectation maximization (EM)** algorithm. Recall that in EM we iteratively assigned labels to observations, according to which mixture component they were most likely to have been derived from. K-means is simpler, in that we just use the minimum distance to assign membership.\n", "\n", "The algorithm will have converged when the assignment of points to centroids does not change with each iteration." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Algorithm\n", "\n", "1. Initialize cluster centroids:\n", "\n", "$$\\mu^{(0)}_1, \\ldots, \\mu^{(0)}_k \\in \\mathbb{R}^n$$\n", "\n", "2. Iterate until converged:\n", "\n", " a. Set $c_i = \\text{argmin}_j || x_i - \\mu_j^{(s)} ||$\n", " \n", " b. Update centroids:\n", " \n", " $$\\mu_j^{(s+1)} = \\frac{\\sum_{i=1}^m I[c_i = j] x_i}{\\sum_{i=1}^m I[c_i = j]}$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The K-means algorithm is simply a Gaussian mixture model with two restrictions: \n", "\n", "1. the covariance matrix is spherical: \n", "\n", " $$\\Sigma_k = \\sigma I_D$$\n", "\n", "2. the mixture weights are fixed:\n", "\n", " $$\\pi_k = \\frac{1}{K}$$\n", "\n", "Hence, we are only interested in locating the appropriate centroid of the clusters. This serves to speed computation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can define the distortion function:\n", "\n", "$$J(c,\\mu) = \\sum_{i]1}^m ||x_i - \\mu_{c_i}||$$\n", "\n", "which gets smaller at every iteration. So, k-means is coordinate ascent on $J(c,\\mu)$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Choosing $k$\n", "\n", "To check whether a chosen $k$ is reasonable, one approach is to compare the distances between the centroids to the mean distance bewween each data point and their assigned centroid. A good fit involves relatively large inter-centroid distances. \n", "\n", "The appropriate value for k (the number of clusters) may depend on the goals of the analysis, or it may be chosen algorithmically, using an optimization procedure." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Example: clustering random points" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "import seaborn as sns; sns.set_context('notebook')\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "np.random.seed(42)\n", "\n", "DATA_URL = 'https://raw.githubusercontent.com/fonnesbeck/Bios8366/master/data/'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x, y = np.random.uniform(0, 10, 50).reshape(2, 25)\n", "plt.scatter(x, y);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's start with $k=4$, arbitrarily assigned:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "centroids = (3, 3), (3, 7), (7, 3), (7, 7)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.transpose(centroids)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.scatter(x, y)\n", "plt.scatter(*np.transpose(centroids), c='r', marker='+', s=100);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use the function `cdist` from SciPy to calculate the distances from each point to each centroid." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy.spatial.distance import cdist\n", "\n", "distances = cdist(centroids, list(zip(x,y)))\n", "distances.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can make the initial assignment to centroids by picking the minimum distance." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "labels = distances.argmin(axis=0)\n", "labels" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.scatter(x, y, c=np.array(list('rgbc'))[labels])\n", "plt.scatter(*np.transpose(centroids), c='r', marker='+', s=100);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can re-assign the centroid locations based on the means of the current members' locations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "new_centroids = [(x[labels==i].mean(), y[labels==i].mean())\n", " for i in range(len(centroids))]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.scatter(x, y, c=np.array(list('rgbc'))[labels])\n", "plt.scatter(*np.transpose(new_centroids), c='r', marker='+', s=100);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So, we simply iterate these steps until convergence." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "centroids = new_centroids\n", "iterations = 20\n", "\n", "for _ in range(iterations):\n", " distances = cdist(centroids, list(zip(x,y)))\n", " labels = distances.argmin(axis=0)\n", " centroids = [(x[labels==i].mean(), y[labels==i].mean())\n", " for i in range(len(centroids))]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.scatter(x, y, c=np.array(list('rgbc'))[labels])\n", "plt.scatter(*np.transpose(centroids), c='r', marker='+', s=100);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise\n", "\n", "Re-run the model using different initial centroid locations, and compare the results." 
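] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One way to make that comparison concrete is to evaluate the distortion function $J(c,\\mu)$ defined above for the final assignment. This is a minimal sketch using the `x`, `y`, `centroids`, and `labels` variables from the cells above; runs started from different centroid locations can then be compared by their final distortion." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Total squared distance from each point to its assigned centroid\n", "points_xy = np.array(list(zip(x, y)))\n", "J = np.sum((points_xy - np.array(centroids)[labels])**2)\n", "J"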
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## k-means using `scikit-learn`\n", "\n", "The `scikit-learn` package includes a `KMeans` class for flexibly fitting K-means models. It includes additional features, such as initialization options and the ability to set the convergence tolerance." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.cluster import KMeans\n", "from numpy.random import RandomState\n", "rng = RandomState(1)\n", "\n", "# Instantiate model\n", "kmeans = KMeans(n_clusters=4, random_state=rng)\n", "# Fit model\n", "kmeans.fit(np.transpose((x,y)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After fitting, we can retrieve the labels and cluster centers." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "kmeans.labels_" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "kmeans.cluster_centers_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The resulting plot should look very similar to the one we fit by hand." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.scatter(x, y, c=np.array(list('rgbc'))[kmeans.labels_])\n", "plt.scatter(*kmeans.cluster_centers_.T, c='r', marker='+', s=100);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Example: Wine chemistry\n", "\n", "The wine chemistry dataset in `wine.dat` includes thirteen chemical measurements carried out on each of 178 wines from three regions of Italy. If we did not have the labels for the wines, we might be interested to see whether a clustering algorithm could correctly assign labels to the wines." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "try:\n", "    wine = pd.read_table(\"../data/wine.dat\", sep='\\s+')\n", "except FileNotFoundError:\n", "    wine = pd.read_table(DATA_URL + \"wine.dat\", sep='\\s+')\n", "\n", "attributes = ['Grape',\n", "              'Alcohol',\n", "              'Malic acid',\n", "              'Ash',\n", "              'Alcalinity of ash',\n", "              'Magnesium',\n", "              'Total phenols',\n", "              'Flavanoids',\n", "              'Nonflavanoid phenols',\n", "              'Proanthocyanins',\n", "              'Color intensity',\n", "              'Hue',\n", "              'OD280/OD315 of diluted wines',\n", "              'Proline']\n", "\n", "wine.columns = attributes\n", "\n", "wine.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X = wine.copy()\n", "y = X.pop('Grape')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To simplify the analysis, and aid visualization, we will again perform a PCA to isolate the majority of the variation into two principal components." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.decomposition import PCA\n", "\n", "pca = PCA(n_components=2, whiten=True).fit(X)\n", "X_pca = pca.transform(X)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wine['First Component'] = X_pca[:, 0]\n", "wine['Second Component'] = X_pca[:, 1]\n", "\n", "sns.lmplot(x='First Component', y='Second Component', \n", "           data=wine, \n", "           fit_reg=False, \n", "           hue=\"Grape\");" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now create a `KMeans` object with `k=3`, and fit the data with it."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "km_wine = KMeans(n_clusters=3, random_state=rng)\n", "km_wine.fit(X_pca)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From this, we can extract the cluster centroids (in the `cluster_centers_` attribute) and the group labels (in `labels_`) in order to generate a plot of the classification result." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "km_wine.cluster_centers_.round(2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "km_wine.labels_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can visually examine the clusters, and compare them to the known labels." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Relabel cluster 0 as 3 so that the cluster numbering can be compared with the Grape labels (1-3)\n", "np.place(km_wine.labels_, km_wine.labels_==0, 3)\n", "wine['Cluster'] = km_wine.labels_\n", "\n", "grid = sns.lmplot(x='First Component', y='Second Component', \n", "                  data=wine, \n", "                  fit_reg=False, \n", "                  hue=\"Cluster\")\n", "grid.ax.scatter(*wine.loc[wine.Grape!=wine.Cluster, ['First Component', 'Second Component']].values.T, \n", "                s=60, linewidth=1, facecolors='none', c='r');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`scikit-learn` includes a suite of well-known clustering algorithms. \n", "\n", "- `sklearn.cluster.Birch`\n", ": A memory-efficient, online-learning algorithm provided as an alternative to KMeans. It constructs a tree data structure, with the cluster centroids being read off the leaves.\n", "- `sklearn.cluster.MeanShift`\n", ": Mean shift clustering aims to discover “blobs” in a smooth density of samples. It is a centroid-based algorithm, which works by updating candidates for centroids to be the mean of the points within a given region. These candidates are then filtered in a post-processing stage to eliminate near-duplicates to form the final set of centroids. However, mean shift does not scale well to a large number of samples.\n", "- `sklearn.cluster.DBSCAN`\n", ": Density-Based Spatial Clustering of Applications with Noise. Finds core samples of high density and expands clusters from them. Good for data which contains clusters of similar density." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise: clustering baseball statistics\n", "\n", "We can use clustering to try to find interesting groupings among sets of baseball statistics. Load the baseball dataset and run a clustering algorithm on the set of three statistics:\n", "\n", "* hit rate: hits / at bats\n", "* strikeout rate: strikeouts / at bats\n", "* walk rate: bases on balls / at bats\n", "\n", "You should probably set a minimum number of at bats to qualify for the analysis, since there are pitchers who get only a handful of at bats each year.\n", "\n", "Since we are clustering in 3 dimensions, you can visualize the output as a series of pairwise plots."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "try:\n", "    baseball = pd.read_csv(\"../data/baseball.csv\", index_col=0)\n", "except FileNotFoundError:\n", "    baseball = pd.read_csv(DATA_URL + \"baseball.csv\", index_col=0)\n", "baseball.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "## Write answer here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## DP-Means\n", "\n", "The major weakness of the k-means approach to clustering is that the number of clusters needs to be specified at the outset. However, there is usually uncertainty with respect to the appropriate number of clusters for a given dataset. A flexible alternative to k-means that allows for an unknown number of clusters involves using a Bayesian non-parametric mixture model instead (Kulis and Jordan 2011). In particular, a Dirichlet process (DP) mixture model probabilistically assigns observations to clusters, using a stick-breaking process. \n", "\n", "Recall the definition of a finite mixture model:\n", "\n", "$$f(y) = \\sum_{h=1}^{k} \\pi_h \\mathcal{K}(y|\\theta_h)$$\n", "\n", "where $k$ is the number of mixture components, $\\pi_h$ is the mixing coefficient for component $h$, and $\\mathcal{K}$ specifies the mixture components (*e.g.* a Gaussian distribution), with parameters $\\theta_h$ for each component. \n", "\n", "A DP mixture extends this by placing a Dirichlet prior of dimension $k$ on the mixing coefficients. The distribution over the group indicators can then be specified as a categorical distribution:\n", "\n", "$$\\begin{aligned}\n", "\\mathbf{\\pi} &\\sim \\text{Dirichlet}(k, \\pi_0) \\\\\n", "z_1,\\ldots,z_n &\\sim \\text{Categorical}(\\mathbf{\\pi}) \\\\\n", "\\end{aligned}$$\n", "\n", "We might then specify the observations as being a mixture of Gaussians, whose means are drawn from an appropriate prior distribution $P$:\n", "\n", "$$\\begin{aligned}\n", "\\theta_1,\\ldots,\\theta_k &\\sim P \\\\\n", "y_1,\\ldots,y_n &\\sim N(\\theta_{z[i]}, \\sigma I)\n", "\\end{aligned}$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To demonstrate, we will use a DP mixture to cluster the iris dataset." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.decomposition import PCA\n", "from sklearn import datasets\n", "\n", "iris = datasets.load_iris()\n", "\n", "pca = PCA(n_components=2, whiten=True).fit(iris.data)\n", "X_pca = pca.transform(iris.data)\n", "y = iris.target" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `sklearn.mixture` module includes a variety of Gaussian mixture models, including `BayesianGaussianMixture`, which places either a Dirichlet distribution or a Dirichlet process prior on the mixture weights and fits the model using **variational inference**."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.mixture import BayesianGaussianMixture\n", "\n", "K = 10\n", "\n", "dp_mixture = BayesianGaussianMixture(weight_concentration_prior_type=\"dirichlet_process\", mean_precision_prior=1,\n", "                                     n_components=K, reg_covar=0, init_params='random', weight_concentration_prior=1e5)\n", "\n", "dp_mixture.fit(X_pca)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from matplotlib import patches\n", "import matplotlib.gridspec as gridspec\n", "\n", "colors = np.array(['#0072B2', '#F0E442', '#D55E00'])\n", "\n", "def plot_ellipses(ax, weights, means, covars):\n", "    for n in range(means.shape[0]):\n", "        eig_vals, eig_vecs = np.linalg.eigh(covars[n])\n", "        unit_eig_vec = eig_vecs[0] / np.linalg.norm(eig_vecs[0])\n", "        angle = np.arctan2(unit_eig_vec[1], unit_eig_vec[0])\n", "        # Ellipse needs degrees\n", "        angle = 180 * angle / np.pi\n", "        # Scale eigenvalues to ellipse axis lengths\n", "        eig_vals = 2 * np.sqrt(2) * np.sqrt(eig_vals)\n", "        ell = patches.Ellipse(means[n], eig_vals[0], eig_vals[1],\n", "                              angle=180 + angle)\n", "        ell.set_clip_box(ax.bbox)\n", "        ell.set_alpha(weights[n])\n", "        ell.set_facecolor('#56B4E9')\n", "        ax.add_artist(ell)\n", "\n", "\n", "def plot_results(ax1, ax2, estimator, X, y):\n", "    ax1.scatter(X[:, 0], X[:, 1], s=15, marker='o', color=colors[y], alpha=0.8)\n", "    ax1.set_xticks(())\n", "    ax1.set_yticks(())\n", "    plot_ellipses(ax1, estimator.weights_, estimator.means_,\n", "                  estimator.covariances_)\n", "\n", "    ax2.get_xaxis().set_tick_params(direction='out')\n", "    ax2.yaxis.grid(True, alpha=0.7)\n", "    for k, w in enumerate(estimator.weights_):\n", "        ax2.bar(k - .45, w, width=0.9, color='#56B4E9', zorder=3)\n", "        ax2.text(k, w + 0.007, \"%.1f%%\" % (w * 100.),\n", "                 horizontalalignment='center')\n", "    ax2.set_xlim(-.6, K - .4)\n", "    ax2.set_ylim(0., 1.1)\n", "    ax2.tick_params(axis='y', which='both', left=False,\n", "                    right=False, labelleft=False)\n", "    ax2.tick_params(axis='x', which='both', top=False)\n", "\n", "    ax1.set_ylabel('Estimated Mixtures')\n", "    ax2.set_ylabel('Weight of each component')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "plt.figure(figsize=(4.7 * 3, 8))\n", "plt.subplots_adjust(bottom=.04, top=0.90, hspace=.05, wspace=.05,\n", "                    left=.03, right=.99)\n", "\n", "gs = gridspec.GridSpec(3, 1)\n", "plot_results(plt.subplot(gs[:2]), plt.subplot(gs[2]), dp_mixture, X_pca, y)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we have shown, the Dirichlet process mixture results in an infinite mixture model, which does not fix the number of clusters in the data *a priori*. However, Bayesian non-parametric models require fitting via sampling algorithms or variational inference techniques that are non-trivial to implement and scale poorly with large data. This is in contrast to k-means, which is straightforward to implement and scales easily.\n", "\n", "It can be shown that the k-means algorithm is a limiting special case of the EM algorithm, if all of the covariance matrices associated with the clusters in a Gaussian mixture model are set to $\\sigma I$ and we let $\\sigma$ go to zero. We can apply a similar limit to the Dirichlet process, using a Gibbs sampling algorithm. The result is a method with **hard** (rather than probabilistic) cluster assignments, but it allows new clusters to be formed when points are far enough from existing cluster centroids.\n", "\n", "Suppose that, in a Gaussian mixture model, all Gaussians have the same covariance $\\sigma I$; then the E-step of the EM algorithm becomes:\n", "\n", "$$\\gamma(z_{ic}) = \\frac{\\pi_c \\exp(-\\frac{1}{2\\sigma}||x_i - \\mu_c||^2_2)}{\\sum_{j=1}^k \\pi_j \\exp(-\\frac{1}{2\\sigma}||x_i - \\mu_j||^2_2)}$$\n", "\n", "where $\\gamma(z_{ic})$ is the probability of assigning point $i$ to cluster $c$. As $\\sigma \\rightarrow 0$, this probability **approaches zero** for all clusters except the closest one, for which it approaches one. \n", "\n", "The M-step simply recomputes the cluster means. Hence, this is an equivalent update to k-means.\n", "\n", "We can derive an analogous hard clustering algorithm, based on a Dirichlet process mixture model. We first define the baseline distribution $G_0$ to be a zero-mean Gaussian with covariance $\\rho I$. This allows us to use a straightforward Gibbs sampling update that results in a point being assigned to a new cluster with probability:\n", "\n", "$$Pr(c^{*}) = \\frac{\\alpha}{Z}[2\\pi(\\rho + \\sigma)]^{-d/2} \\exp\\left( -\\frac{1}{2(\\rho + \\sigma)}||x_i||^2 \\right)$$\n", "\n", "and to existing cluster $c$ with probability:\n", "\n", "$$Pr(c) = \\frac{n_{-i,c}}{Z}[2\\pi\\sigma]^{-d/2} \\exp\\left( -\\frac{1}{2\\sigma}||x_i - \\mu_c||^2_2 \\right)$$\n", "\n", "where $Z$ is a normalizing constant. We define $\\alpha = (1 + \\rho/\\sigma)^{d/2} \\exp\\left(-\\frac{\\lambda}{2\\sigma}\\right)$, for some $\\lambda$. \n", "\n", "The Gibbs sampling update becomes:\n", "\n", "$$\\hat{\\gamma}(z_{ic}) = \\frac{n_{-i,c} \\exp \\left(-\\frac{1}{2\\sigma}||x_i - \\mu_c||^2 \\right)}{\\exp\\left(-\\frac{\\lambda}{2\\sigma} - \\frac{||x_i||^2}{2(\\rho + \\sigma)}\\right) + \\sum_{j=1}^k n_{-i,j} \\exp \\left(-\\frac{1}{2\\sigma}||x_i - \\mu_j||^2 \\right)}$$\n", "\n", "for existing clusters, and:\n", "\n", "$$\\hat{\\gamma}(z_{i,new}) = \\frac{\\exp \\left(-\\frac{1}{2\\sigma}\\left[\\lambda + \\frac{\\sigma}{\\rho + \\sigma}||x_i ||^2 \\right]\\right)}{\\exp\\left(-\\frac{\\lambda}{2\\sigma} - \\frac{||x_i||^2}{2(\\rho + \\sigma)}\\right) + \\sum_{j=1}^k n_{-i,j} \\exp \\left(-\\frac{1}{2\\sigma}||x_i - \\mu_j||^2 \\right)}$$\n", "\n", "for new clusters.\n", "\n", "As we allow $\\sigma \\rightarrow 0$ and leave $\\rho$ fixed, the term with the smallest exponent dominates, so the probabilities become binary: the closest cluster (or the new-cluster term, if every squared distance exceeds $\\lambda$) converges to one, and the others to zero. This becomes identical to the k-means cluster assignment step, except that when the squared Euclidean distance to every existing centroid is greater than $\\lambda$, we create a new cluster.\n", "\n", "The final step is to sample a mean for each cluster from its posterior, which combines the prior $G_0$ with the likelihood of the $n_c$ observations currently assigned to the cluster (for a newly created cluster, this is just the single observation $x_i$ that seeds it). Since these are both Gaussian, the posterior will be Gaussian as well, with mean and covariance:\n", "\n", "$$\\begin{aligned}\n", "\\tilde{\\mu}_c &= \\left(1 + \\frac{\\sigma}{\\rho n_c}\\right)^{-1} \\bar{x}_c \\\\\n", "\\tilde{\\Sigma}_c &= \\frac{\\sigma \\rho}{\\sigma + \\rho n_c}I\n", "\\end{aligned}$$\n", "\n", "But, as $\\sigma \\rightarrow 0$, the mean of the Gaussian approaches $\\bar{x}_c$ and the covariance goes to zero, so we simply choose $\\bar{x}_c$ as the cluster center."
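] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To see this hardening of the assignments numerically, here is a small illustrative sketch (the squared distances and the values of $\\sigma$ are made up, not taken from the derivation above): it evaluates the E-step responsibilities for a single point with equal mixing weights, and shows the soft assignment collapsing onto the closest cluster as $\\sigma$ shrinks." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "# Hypothetical squared distances from one point to three cluster means\n", "sq_dists = np.array([1.0, 2.0, 4.0])\n", "\n", "for sigma in [1.0, 0.1, 0.01]:\n", "    # E-step responsibilities with equal mixing weights (log-sum-exp for stability)\n", "    logits = -sq_dists / (2 * sigma)\n", "    gamma = np.exp(logits - logits.max())\n", "    gamma /= gamma.sum()\n", "    print(f'sigma={sigma}: {np.round(gamma, 4)}')"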
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### DP-means algorithm\n", "\n", "1. Initialize the number of clusters to 1, and assign all observations to that cluster. Set the cluster mean to the global mean.\n", "2. Specify the cluster penalty parameter $\\lambda$\n", "3. Initialize cluster indicators: $z_1 = z_2 = \\ldots = z_n = 1$\n", "4. Repeat until convergence:\n", "\n", "    + For each data point $x_i$:\n", "    \n", "        + compute distances from the means, $d_{ic} = ||x_i - \\mu_c||^2$ for $c=1,\\ldots,k$\n", "        + If $\\min_c(d_{ic}) > \\lambda$, set $k = k+1$, $z_i = k$, $\\mu_k = x_i$\n", "        + Otherwise, set $z_i = \\text{argmin}_c(d_{ic})$\n", "    \n", "    + Generate clusters $l_1, \\ldots, l_k$ from $z_1,\\ldots,z_n$\n", "    \n", "    + Recompute cluster means: $\\mu_j = \\frac{1}{|l_j|} \\sum_{x \\in l_j} x$" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def dp_means(x, lam, max_iter=100, tol=1e-5, metric='euclidean'):\n", "    \n", "    x = np.array(x)\n", "    n = x.shape[0]\n", "    k = 1\n", "    \n", "    # Initialize cluster indicators (all points start in cluster 0)\n", "    z = np.zeros(n, int)\n", "    \n", "    # Initialize with single cluster of all observations\n", "    mu = [x.mean(0)]\n", "    \n", "    # Initialize variables\n", "    converged = False\n", "    iteration = 0\n", "    ss = np.inf\n", "\n", "    # Iterate until converged or maxed out\n", "    while (not converged) and (iteration < max_iter):\n", "        \n", "        # Calculate distances for all points\n", "        d = cdist(x, np.array(mu), metric=metric)\n", "        \n", "        for i in range(n):\n", "            \n", "            if np.min(d[i]) > lam:\n", "                # Create new group\n", "                k += 1\n", "                z[i] = k - 1\n", "                mu += [x[i]]\n", "\n", "            else:\n", "                # Assign to closest group\n", "                z[i] = np.argmin(d[i])\n", "        \n", "        for j in range(k):\n", "            \n", "            # Recalculate centroids\n", "            if (z==j).sum():\n", "                indices = np.where(z==j)[0]\n", "                mu[j] = np.mean(x[indices], 0)\n", "        \n", "        ss_old = ss\n", "        \n", "        # Calculate sum of squared distances to use as convergence criterion\n", "        ss = np.sum([[(x[i,j] - mu[z[i]][j])**2 for j in range(x.shape[1])] for i in range(n)])\n", "\n", "        ss_diff = ss_old - ss\n", "        \n", "        if ss_diff < tol:\n", "            converged = True\n", "        \n", "        iteration += 1\n", "    \n", "    return dict(centers=np.array(mu), z=z, k=k, iterations=iteration, converged=converged, ss=ss)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x,y = X_pca.T\n", "fig, axes = plt.subplots(1, 3, figsize=(16, 4))\n", "for i,c in enumerate([2, 3, 4]):\n", "    dpm = dp_means(X_pca, c, metric='seuclidean')\n", "    axes[i].scatter(x, y, c=dpm['z'])\n", "    axes[i].scatter(*dpm['centers'].T, c='r', marker='+', s=100)\n", "    axes[i].set_title(r'$\\lambda$={0}, k={1}'.format(c, dpm['k']))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "clusters = []\n", "lambdas = np.linspace(2, 4)\n", "for c in lambdas:\n", "    dpm = dp_means(X_pca, c, metric='euclidean')\n", "    clusters.append(len(dpm['centers']))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.plot(lambdas, clusters);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise: Wine clustering\n", "\n", "Try running DP-means on the wine chemistry dataset. Try varying the value for `lam` to see where the number of clusters stabilizes.\n", "\n", "Consider how this might be implemented using PyMC3."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Write your answer here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## DBSCAN\n", "\n", "Along with concerns about choosing the number of groups *a priori*, K-means has trouble identifying groups that are not convex and roughly spherical." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn import datasets\n", "\n", "points, labels = datasets.make_moons(n_samples=100, noise=.05)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "palette = np.array(sns.color_palette(\"hls\", 8))\n", "\n", "plt.scatter(*points.T, color=palette[labels]);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "moons = KMeans(n_clusters=2, random_state=rng)\n", "moons.fit(points);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.scatter(*points.T, color=palette[moons.labels_]);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a density-based clustering algorithm, introduced by [Ester et al. (1996)](https://www.aaai.org/Papers/KDD/1996/KDD96-037.pdf).\n", "\n", "DBSCAN infers the number of clusters from the dataset, allowing it to discover clusters of arbitrary shape. It establishes a **neighborhood size**, and assigns points to categories based on their relationship to other points, conditioned on this neighborhood size. \n", "\n", "This confers several advantages:\n", "\n", "- Allows for more complex cluster shapes\n", "- K does not need to be specified\n", "- Automatically finds outliers\n", "\n", "While we don't need to choose K, we do need to select a distance function for quantifying **dissimilarity** between points.\n", "\n", "DBSCAN distinguishes between three different types of points:\n", "\n", "- **Core points**: A core point is a point that has at least a minimum number of other points (MinPts) within its radius epsilon.\n", "- **Border points**: A border point is a point that is not a core point, since it doesn't have enough MinPts in its neighborhood, but lies within the radius epsilon of a core point.\n", "- **Noise points**: All other points that are neither core points nor border points.\n", "\n", "![](https://github.com/amueller/scipy-2017-sklearn/raw/e7d90ee4e382bebd44dfcea0042bca4698d8c57f/notebooks/figures/dbscan.png)\n", "\n", "\n", "The steps to the DBSCAN algorithm are:\n", "\n", "1. **Determine the type of a new point**\n", "\n", "    - core\n", "    - boundary\n", "    - outlier\n", "\n", "    We randomly pick a point that has not yet been assigned to a cluster, or been designated as an outlier. For this point, we determine its neighborhood. If it is a **core point**, we seed a cluster around it; otherwise we label the point as an **outlier**.\n", "\n", "2. **Expand the new cluster by adding all reachable points**. Once we find a core point (and hence, a cluster), find all points that are reachable within the neighborhood and add them to the cluster. If a point previously found to be an outlier is included, change its status to a **border point**.\n", "\n", "3. **Repeat** these steps until all points are either assigned to a cluster or designated as an outlier.\n", "\n", "The `DBSCAN` class in `scikit-learn` is a straightforward implementation of this algorithm, and requires three main arguments:\n", "\n", "- `eps` defines the size of the neighborhood around each point\n", "\n", "- `min_samples` is the number of points that need to be within the neighborhood for a point to be considered a core point; this acts as a density threshold\n", "\n", "- `metric` is either a callable function or a string corresponding to a built-in distance metric\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.cluster import DBSCAN\n", "\n", "dbscan_moons = DBSCAN(eps=0.4, min_samples=11)\n", "dbscan_moons.fit(points)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dbscan_moons.labels_" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.scatter(*points.T, color=palette[dbscan_moons.labels_]);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## References\n", "\n", "1. Kulis B and Jordan MI. Revisiting k-means: New Algorithms via Bayesian Nonparametrics. arXiv preprint [arXiv:1111.0352](http://arxiv.org/abs/1111.0352), 2011.\n", "2. Ester M, Kriegel HP, Sander J, and Xu X. A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. In: Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining, Portland, OR, AAAI Press, pp. 226-231, 1996." ] } ], "metadata": { "_draft": { "nbviewer_url": "https://gist.github.com/ae9a27a5351908b9b1f611e14c0b91a6" }, "gist": { "data": { "description": "Section6_2-Clustering.ipynb", "public": true }, "id": "ae9a27a5351908b9b1f611e14c0b91a6" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.9" }, "latex_envs": { "bibliofile": "biblio.bib", "cite_by": "apalike", "current_citInitial": 1, "eqLabelWithNumbers": true, "eqNumInitial": 0 } }, "nbformat": 4, "nbformat_minor": 2 }