{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Gaussian Mixture Model Selection\n\nThis example shows that model selection can be performed with Gaussian Mixture\nModels (GMM) using `information-theory criteria `. Model selection\nconcerns both the covariance type and the number of components in the model.\n\nIn this case, both the Akaike Information Criterion (AIC) and the Bayes\nInformation Criterion (BIC) provide the right result, but we only demo the\nlatter as BIC is better suited to identify the true model among a set of\ncandidates. Unlike Bayesian procedures, such inferences are prior-free.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data generation\n\nWe generate two components (each one containing `n_samples`) by randomly\nsampling the standard normal distribution as returned by `numpy.random.randn`.\nOne component is kept spherical yet shifted and re-scaled. The other one is\ndeformed to have a more general covariance matrix.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n\nn_samples = 500\nnp.random.seed(0)\nC = np.array([[0.0, -0.1], [1.7, 0.4]])\ncomponent_1 = np.dot(np.random.randn(n_samples, 2), C) # general\ncomponent_2 = 0.7 * np.random.randn(n_samples, 2) + np.array([-4, 1]) # spherical\n\nX = np.concatenate([component_1, component_2])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can visualize the different components:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n\nplt.scatter(component_1[:, 0], component_1[:, 1], s=0.8)\nplt.scatter(component_2[:, 0], component_2[:, 1], s=0.8)\nplt.title(\"Gaussian Mixture components\")\nplt.axis(\"equal\")\nplt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model training and selection\n\nWe vary the number of components from 1 to 6 and the type of covariance\nparameters to use:\n\n- `\"full\"`: each component has its own general covariance matrix.\n- `\"tied\"`: all components share the same general covariance matrix.\n- `\"diag\"`: each component has its own diagonal covariance matrix.\n- `\"spherical\"`: each component has its own single variance.\n\nWe score the different models and keep the best model (the lowest BIC). 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Model training and selection\n\nWe vary the number of components from 1 to 6 and the type of covariance\nparameters to use:\n\n- `\"full\"`: each component has its own general covariance matrix.\n- `\"tied\"`: all components share the same general covariance matrix.\n- `\"diag\"`: each component has its own diagonal covariance matrix.\n- `\"spherical\"`: each component has its own single variance.\n\nWe score the different models and keep the best model (the lowest BIC). This\nis done by using :class:`~sklearn.model_selection.GridSearchCV` and a\nuser-defined score function which returns the negative BIC score, as\n:class:`~sklearn.model_selection.GridSearchCV` is designed to **maximize** a\nscore (maximizing the negative BIC is equivalent to minimizing the BIC).\n\nThe best set of parameters and estimator are stored in `best_params_` and\n`best_estimator_`, respectively.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.mixture import GaussianMixture\nfrom sklearn.model_selection import GridSearchCV\n\n\ndef gmm_bic_score(estimator, X):\n    \"\"\"Callable to pass to GridSearchCV that will use the BIC score.\"\"\"\n    # Make it negative since GridSearchCV expects a score to maximize\n    return -estimator.bic(X)\n\n\nparam_grid = {\n    \"n_components\": range(1, 7),\n    \"covariance_type\": [\"spherical\", \"tied\", \"diag\", \"full\"],\n}\ngrid_search = GridSearchCV(\n    GaussianMixture(), param_grid=param_grid, scoring=gmm_bic_score\n)\ngrid_search.fit(X)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Plot the BIC scores\n\nTo ease plotting, we create a `pandas.DataFrame` from the results of the\ncross-validation done by the grid search. We flip the sign of the BIC score\nback (undoing the negation applied in `gmm_bic_score`) so the table shows the\nactual BIC values being minimized.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import pandas as pd\n\ndf = pd.DataFrame(grid_search.cv_results_)[\n    [\"param_n_components\", \"param_covariance_type\", \"mean_test_score\"]\n]\ndf[\"mean_test_score\"] = -df[\"mean_test_score\"]\ndf = df.rename(\n    columns={\n        \"param_n_components\": \"Number of components\",\n        \"param_covariance_type\": \"Type of covariance\",\n        \"mean_test_score\": \"BIC score\",\n    }\n)\ndf.sort_values(by=\"BIC score\").head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import seaborn as sns\n\nsns.catplot(\n    data=df,\n    kind=\"bar\",\n    x=\"Number of components\",\n    y=\"BIC score\",\n    hue=\"Type of covariance\",\n)\nplt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the present case, the model with 2 components and full covariance (which\ncorresponds to the true generative model) has the lowest BIC score and is\ntherefore selected by the grid search.\n\n## Plot the best model\n\nWe plot an ellipse to show each Gaussian component of the selected model. To\ndo so, one needs the eigenvalues of the covariance matrices returned by the\n`covariances_` attribute. The shape of these matrices depends on the\n`covariance_type`:\n\n- `\"full\"`: (`n_components`, `n_features`, `n_features`)\n- `\"tied\"`: (`n_features`, `n_features`)\n- `\"diag\"`: (`n_components`, `n_features`)\n- `\"spherical\"`: (`n_components`,)\n\n" ] },
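{ "cell_type": "markdown", "metadata": {}, "source": [ "The ellipse code below assumes full matrices. As a minimal sketch,\n`to_full_covariances` (a hypothetical helper, not part of the scikit-learn\nAPI) shows one way to expand `covariances_` to a common (`n_components`,\n`n_features`, `n_features`) shape for any `covariance_type`; for the selected\n`\"full\"` model it is a no-op:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def to_full_covariances(gmm):\n    \"\"\"Illustrative helper (not a scikit-learn API): expand `covariances_`\n    to shape (n_components, n_features, n_features).\"\"\"\n    cov = gmm.covariances_\n    n_components, n_features = gmm.means_.shape\n    if gmm.covariance_type == \"full\":\n        return cov\n    if gmm.covariance_type == \"tied\":\n        # One shared matrix: repeat it for every component.\n        return np.broadcast_to(cov, (n_components, n_features, n_features))\n    if gmm.covariance_type == \"diag\":\n        # Per-component diagonals: embed each in a square matrix.\n        return np.array([np.diag(c) for c in cov])\n    # \"spherical\": one variance per component on an isotropic diagonal.\n    return np.array([np.eye(n_features) * c for c in cov])\n\n\nprint(to_full_covariances(grid_search.best_estimator_).shape)" ] },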
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from matplotlib.patches import Ellipse\nfrom scipy import linalg\n\ncolor_iter = sns.color_palette(\"tab10\", 2)[::-1]\nY_ = grid_search.predict(X)\n\nfig, ax = plt.subplots()\n\nfor i, (mean, cov, color) in enumerate(\n    zip(\n        grid_search.best_estimator_.means_,\n        grid_search.best_estimator_.covariances_,\n        color_iter,\n    )\n):\n    # Eigenvalues `v` are the variances along the principal axes; `w` holds\n    # the corresponding eigenvectors.\n    v, w = linalg.eigh(cov)\n    if not np.any(Y_ == i):\n        continue\n    plt.scatter(X[Y_ == i, 0], X[Y_ == i, 1], 0.8, color=color)\n\n    angle = np.arctan2(w[0][1], w[0][0])\n    angle = 180.0 * angle / np.pi  # convert to degrees\n    # Axis lengths of 2 * sqrt(2 * eigenvalue): the ellipse is drawn at a\n    # Mahalanobis radius of sqrt(2).\n    v = 2.0 * np.sqrt(2.0) * np.sqrt(v)\n    ellipse = Ellipse(mean, v[0], v[1], angle=180.0 + angle, color=color)\n    ellipse.set_clip_box(fig.bbox)\n    ellipse.set_alpha(0.5)\n    ax.add_artist(ellipse)\n\nplt.title(\n    f\"Selected GMM: {grid_search.best_params_['covariance_type']} model, \"\n    f\"{grid_search.best_params_['n_components']} components\"\n)\nplt.axis(\"equal\")\nplt.show()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.21" } }, "nbformat": 4, "nbformat_minor": 0 }