{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Selecting dimensionality reduction with Pipeline and GridSearchCV\n\nThis example constructs a pipeline that does dimensionality\nreduction followed by prediction with a support vector\nclassifier. It demonstrates the use of ``GridSearchCV`` and\n``Pipeline`` to optimize over different classes of estimators in a\nsingle CV run -- unsupervised ``PCA`` and ``NMF`` dimensionality\nreductions are compared to univariate feature selection during\nthe grid search.\n\nAdditionally, ``Pipeline`` can be instantiated with the ``memory``\nargument to memoize the transformers within the pipeline, avoiding to fit\nagain the same transformers over and over.\n\nNote that the use of ``memory`` to enable caching becomes interesting when the\nfitting of a transformer is costly.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Illustration of ``Pipeline`` and ``GridSearchCV``\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\nimport numpy as np\n\nfrom sklearn.datasets import load_digits\nfrom sklearn.decomposition import NMF, PCA\nfrom sklearn.feature_selection import SelectKBest, mutual_info_classif\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.svm import LinearSVC\n\nX, y = load_digits(return_X_y=True)\n\npipe = Pipeline(\n [\n (\"scaling\", MinMaxScaler()),\n # the reduce_dim stage is populated by the param_grid\n (\"reduce_dim\", \"passthrough\"),\n (\"classify\", LinearSVC(dual=False, max_iter=10000)),\n ]\n)\n\nN_FEATURES_OPTIONS = [2, 4, 8]\nC_OPTIONS = [1, 10, 100, 1000]\nparam_grid = [\n {\n \"reduce_dim\": [PCA(iterated_power=7), NMF(max_iter=1_000)],\n \"reduce_dim__n_components\": N_FEATURES_OPTIONS,\n \"classify__C\": C_OPTIONS,\n },\n {\n \"reduce_dim\": [SelectKBest(mutual_info_classif)],\n \"reduce_dim__k\": N_FEATURES_OPTIONS,\n \"classify__C\": C_OPTIONS,\n },\n]\nreducer_labels = [\"PCA\", \"NMF\", \"KBest(mutual_info_classif)\"]\n\ngrid = GridSearchCV(pipe, n_jobs=1, param_grid=param_grid)\ngrid.fit(X, y)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import pandas as pd\n\nmean_scores = np.array(grid.cv_results_[\"mean_test_score\"])\n# scores are in the order of param_grid iteration, which is alphabetical\nmean_scores = mean_scores.reshape(len(C_OPTIONS), -1, len(N_FEATURES_OPTIONS))\n# select score for best C\nmean_scores = mean_scores.max(axis=0)\n# create a dataframe to ease plotting\nmean_scores = pd.DataFrame(\n mean_scores.T, index=N_FEATURES_OPTIONS, columns=reducer_labels\n)\n\nax = mean_scores.plot.bar()\nax.set_title(\"Comparing feature reduction techniques\")\nax.set_xlabel(\"Reduced number of features\")\nax.set_ylabel(\"Digit classification accuracy\")\nax.set_ylim((0, 1))\nax.legend(loc=\"upper left\")\n\nplt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Caching transformers within a ``Pipeline``\n\nIt is sometimes worthwhile storing the state of a specific transformer\nsince it could be used again. Using a pipeline in ``GridSearchCV`` triggers\nsuch situations. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Caching transformers within a ``Pipeline``\n\nIt is sometimes worthwhile to store the state of a specific transformer,\nsince it could be used again. Using a pipeline in ``GridSearchCV`` triggers\nsuch situations. Therefore, we use the ``memory`` argument to enable caching.\n\n
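As a rough sketch of how this can be wired up, assuming ``joblib`` is\navailable (it is a scikit-learn dependency) and using an illustrative reducer\nrather than the one tuned above:\n\n```python\nfrom shutil import rmtree\nfrom tempfile import mkdtemp\n\nfrom joblib import Memory\n\n# Store the pipeline's fitted transformers in a temporary folder\nlocation = mkdtemp()\nmemory = Memory(location=location, verbose=10)\n\ncached_pipe = Pipeline(\n    [\n        (\"scaling\", MinMaxScaler()),\n        (\"reduce_dim\", PCA(n_components=8)),  # illustrative choice\n        (\"classify\", LinearSVC(dual=False, max_iter=10000)),\n    ],\n    memory=memory,\n)\n\ncached_pipe.fit(X, y)  # fits PCA and writes it to the cache\ncached_pipe.fit(X, y)  # same data and parameters: PCA is loaded from the cache\n\n# Remove the temporary cache when done\nmemory.clear(warn=False)\nrmtree(location)\n```\n\n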
Note that this example is, however, only an illustration: in this specific\ncase, fitting ``PCA`` is not necessarily slower than loading it from the\ncache. Hence, use the ``memory`` constructor parameter when the fitting of a\ntransformer is costly.