{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Comparing anomaly detection algorithms for outlier detection on toy datasets\n\nThis example shows characteristics of different anomaly detection algorithms\non 2D datasets. Datasets contain one or two modes (regions of high density)\nto illustrate the ability of algorithms to cope with multimodal data.\n\nFor each dataset, 15% of samples are generated as random uniform noise. This\nproportion is the value given to the nu parameter of the OneClassSVM and the\ncontamination parameter of the other outlier detection algorithms.\nDecision boundaries between inliers and outliers are displayed in black\nexcept for Local Outlier Factor (LOF) as it has no predict method to be applied\non new data when it is used for outlier detection.\n\nThe :class:`~sklearn.svm.OneClassSVM` is known to be sensitive to outliers and\nthus does not perform very well for outlier detection. This estimator is best\nsuited for novelty detection when the training set is not contaminated by\noutliers. That said, outlier detection in high-dimension, or without any\nassumptions on the distribution of the inlying data is very challenging, and a\nOne-class SVM might give useful results in these situations depending on the\nvalue of its hyperparameters.\n\nThe :class:`sklearn.linear_model.SGDOneClassSVM` is an implementation of the\nOne-Class SVM based on stochastic gradient descent (SGD). Combined with kernel\napproximation, this estimator can be used to approximate the solution\nof a kernelized :class:`sklearn.svm.OneClassSVM`. We note that, although not\nidentical, the decision boundaries of the\n:class:`sklearn.linear_model.SGDOneClassSVM` and the ones of\n:class:`sklearn.svm.OneClassSVM` are very similar. The main advantage of using\n:class:`sklearn.linear_model.SGDOneClassSVM` is that it scales linearly with\nthe number of samples.\n\n:class:`sklearn.covariance.EllipticEnvelope` assumes the data is Gaussian and\nlearns an ellipse. It thus degrades when the data is not unimodal. Notice\nhowever that this estimator is robust to outliers.\n\n:class:`~sklearn.ensemble.IsolationForest` and\n:class:`~sklearn.neighbors.LocalOutlierFactor` seem to perform reasonably well\nfor multi-modal data sets. The advantage of\n:class:`~sklearn.neighbors.LocalOutlierFactor` over the other estimators is\nshown for the third data set, where the two modes have different densities.\nThis advantage is explained by the local aspect of LOF, meaning that it only\ncompares the score of abnormality of one sample with the scores of its\nneighbors.\n\nFinally, for the last data set, it is hard to say that one sample is more\nabnormal than another sample as they are uniformly distributed in a\nhypercube. Except for the :class:`~sklearn.svm.OneClassSVM` which overfits a\nlittle, all estimators present decent solutions for this situation. In such a\ncase, it would be wise to look more closely at the scores of abnormality of\nthe samples as a good estimator should assign similar scores to all the\nsamples.\n\nWhile these examples give some intuition about the algorithms, this\nintuition might not apply to very high dimensional data.\n\nFinally, note that parameters of the models have been here handpicked but\nthat in practice they need to be adjusted. 
In the absence of labelled data,\nthe problem is completely unsupervised so model selection can be a challenge.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause\n\nimport time\n\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom sklearn import svm\nfrom sklearn.covariance import EllipticEnvelope\nfrom sklearn.datasets import make_blobs, make_moons\nfrom sklearn.ensemble import IsolationForest\nfrom sklearn.kernel_approximation import Nystroem\nfrom sklearn.linear_model import SGDOneClassSVM\nfrom sklearn.neighbors import LocalOutlierFactor\nfrom sklearn.pipeline import make_pipeline\n\nmatplotlib.rcParams[\"contour.negative_linestyle\"] = \"solid\"\n\n# Example settings\nn_samples = 300\noutliers_fraction = 0.15\nn_outliers = int(outliers_fraction * n_samples)\nn_inliers = n_samples - n_outliers\n\n# define outlier/anomaly detection methods to be compared.\n# the SGDOneClassSVM must be used in a pipeline with a kernel approximation\n# to give similar results to the OneClassSVM\nanomaly_algorithms = [\n    (\n        \"Robust covariance\",\n        EllipticEnvelope(contamination=outliers_fraction, random_state=42),\n    ),\n    (\"One-Class SVM\", svm.OneClassSVM(nu=outliers_fraction, kernel=\"rbf\", gamma=0.1)),\n    (\n        \"One-Class SVM (SGD)\",\n        make_pipeline(\n            Nystroem(gamma=0.1, random_state=42, n_components=150),\n            SGDOneClassSVM(\n                nu=outliers_fraction,\n                shuffle=True,\n                fit_intercept=True,\n                random_state=42,\n                tol=1e-6,\n            ),\n        ),\n    ),\n    (\n        \"Isolation Forest\",\n        IsolationForest(contamination=outliers_fraction, random_state=42),\n    ),\n    (\n        \"Local Outlier Factor\",\n        LocalOutlierFactor(n_neighbors=35, contamination=outliers_fraction),\n    ),\n]\n\n# Define datasets\nblobs_params = dict(random_state=0, n_samples=n_inliers, n_features=2)\ndatasets = [\n    make_blobs(centers=[[0, 0], [0, 0]], cluster_std=0.5, **blobs_params)[0],\n    make_blobs(centers=[[2, 2], [-2, -2]], cluster_std=[0.5, 0.5], **blobs_params)[0],\n    make_blobs(centers=[[2, 2], [-2, -2]], cluster_std=[1.5, 0.3], **blobs_params)[0],\n    4.0\n    * (\n        make_moons(n_samples=n_samples, noise=0.05, random_state=0)[0]\n        - np.array([0.5, 0.25])\n    ),\n    14.0 * (np.random.RandomState(42).rand(n_samples, 2) - 0.5),\n]\n\n# Compare given classifiers under given settings\nxx, yy = np.meshgrid(np.linspace(-7, 7, 150), np.linspace(-7, 7, 150))\n\nplt.figure(figsize=(len(anomaly_algorithms) * 2 + 4, 12.5))\nplt.subplots_adjust(\n    left=0.02, right=0.98, bottom=0.001, top=0.96, wspace=0.05, hspace=0.01\n)\n\nplot_num = 1\nrng = np.random.RandomState(42)\n\nfor i_dataset, X in enumerate(datasets):\n    # Add outliers\n    X = np.concatenate([X, rng.uniform(low=-6, high=6, size=(n_outliers, 2))], axis=0)\n\n    for name, algorithm in anomaly_algorithms:\n        t0 = time.time()\n        algorithm.fit(X)\n        t1 = time.time()\n        plt.subplot(len(datasets), len(anomaly_algorithms), plot_num)\n        if i_dataset == 0:\n            plt.title(name, size=18)\n\n        # tag outliers: LOF has no predict method in outlier detection mode,\n        # so fit_predict is used; the other estimators are already fitted\n        # above and can predict directly\n        if name == \"Local Outlier Factor\":\n            y_pred = algorithm.fit_predict(X)\n        else:\n            y_pred = algorithm.predict(X)\n\n        # plot the level lines and the points\n        if name != \"Local Outlier Factor\":  # LOF does not implement predict\n            Z = algorithm.predict(np.c_[xx.ravel(), yy.ravel()])\n            Z = Z.reshape(xx.shape)\n            plt.contour(xx, yy, Z, levels=[0], linewidths=2, colors=\"black\")\n\n        colors = np.array([\"#377eb8\", \"#ff7f00\"])
\n        plt.scatter(X[:, 0], X[:, 1], s=10, color=colors[(y_pred + 1) // 2])\n\n        plt.xlim(-7, 7)\n        plt.ylim(-7, 7)\n        plt.xticks(())\n        plt.yticks(())\n        plt.text(\n            0.99,\n            0.01,\n            (\"%.2fs\" % (t1 - t0)).lstrip(\"0\"),\n            transform=plt.gca().transAxes,\n            size=15,\n            horizontalalignment=\"right\",\n        )\n        plot_num += 1\n\nplt.show()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.21" } }, "nbformat": 4, "nbformat_minor": 0 }