{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "##
[mlcourse.ai](https://mlcourse.ai) – Open Machine Learning Course \n", "#
Tutorial. Mlxtend.SFS: an easy way to select features\n", "###
Author: Anton Gilmanov, @wicker\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 1. Intro" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\"Feature engineering and feature selection are one of the most important elements of data analysis and machine learning at all\".
\n", "
\n", "Your could read such phrase in many articles or books, and it's truth. But why do we need to select features?\n", "
\n", "
\n", "\n", "### 1. \"Noisy\" features\n", "\n", "Whatever good data you have, always there are some useful features, that help you to solve the problem and some noisy features - unuseful in your prediction model. Such features are dangerous, because it can lead to overfit. Opposite, quality of your model on hold-out data can be improved by deleting it from the dataset." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2. Сomputation problem" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If dataset has hundreds or thousands features, fitting estimator, cross-validation or another\n", "computation can take a lot of time. Of course, we can use PCA to reduce dimension of data, but sometimes it's not available for current business-task. It's impossible to explain how business could change \"new PCA feature\" to reach their goals - it's called \"interpretation problem\". So, feature selection is useful in this case." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3. Feature engineering" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ok, we have good dataset. But we added many custom features ~~to beat Kaggle's baseline~~ and now we are searching how we can select, which features improve quality metric and which not. Hm... We can use L1 regularization, it will move some weights towards 0. But what if we could fit estimator with different subsets of new features, and add one if quality metric is increasing or remove one if quality metric is decreasing, and make sure that solution takes only a couple of lines of code?

**Mlxtend SequentialFeatureSelector is what we need! **" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

**Everytime we try to select the best features :)**

" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Image on web](https://i.giphy.com/media/5yLgoceFO3BdJW1zvFu/giphy.webp)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Well, the informal intro is coming to an end. It's time to understand some formalized theory." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 2. Introduce to SequentialFeatureSelector" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Please, install some libraries, if you haven't it in your system" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#!pip install pandas\n", "#!pip install mlxtend\n", "#!pip install scikit-learn\n", "#!pip install matplotlib" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import pandas as pd\n", "from mlxtend.feature_selection import SequentialFeatureSelector\n", "from mlxtend.plotting import plot_sequential_feature_selection as plot_sfs\n", "from sklearn.decomposition import PCA\n", "from sklearn.feature_selection import RFE\n", "from sklearn.linear_model import LogisticRegression\n", "from sklearn.metrics import accuracy_score\n", "from sklearn.model_selection import KFold, cross_val_score, train_test_split\n", "from sklearn.preprocessing import StandardScaler" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Mlxtend SequentialFeatureSelector is greedy search algorithm that is used to reduce an initial d-dimensional feature space to a k-dimensional feature subspace where k < d." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are 4 different flavors of SFAs available via the *SequentialFeatureSelector*:\n", "\n", "* Sequential Forward Selection (SFS)\n", "* Sequential Backward Selection (SBS)\n", "* Sequential Forward Floating Selection (SFFS)\n", "* Sequential Backward Floating Selection (SBFS)\n", "\n", "In \"forward\" algorithm we start with no features in our subset and add one feature on each iteration, that maximize quality metric. In the contrary, \"backward\" algorithm start with full subset of features and remove one feature on each iteration maximizing quality of our model.\n", "\n", "The floating variants, SFFS and SBFS, can be considered as extensions to the simpler SFS and SBS algorithms. The floating algorithms have an additional exclusion or inclusion step to remove features once they were included (or excluded), so that a larger number of feature subset combinations can be sampled.\n", "\n", "Lets take a look at each of them." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Sequential Forward Selection (SFS)\n", "\n", "\n", "### Input: \n", "\n", "Y={y1,y2,...,yd}\n", "\n", "* The SFS algorithm takes the whole d-dimensional feature set as input.\n", "\n", "### Output: \n", "\n", "Xk={xj|j=1,2,...,k;xj∈Y}, where k=(0,1,2,...,d)\n", "\n", "* SFS returns a subset of features; the number of selected features k, where k0=∅, k=0\n", "\n", "* We initialize the algorithm with an empty set ∅ (\"null set\") so that k=0 (where k is the size of the subset).\n", "\n", "\n", "### Step 1 (Inclusion):\n", "\n", "x+ = arg max J(xk+x), where x∈Y−Xk\n", "\n", "Xk+1=Xk+x+\n", "\n", "k=k+1 \n", "\n", "#### Go to Step 1\n", "\n", "* In this step, we add an additional feature, x+, to our feature subset Xk.\n", "* x+ is the feature that maximizes our criterion function, that is, the feature that is associated with the best classifier performance if it is added to Xk.\n", "* We repeat this procedure until the termination criterion is satisfied.\n", "\n", "### Termination: \n", "k=p\n", "\n", "We add features from the feature subset Xk until the feature subset of size k contains the number of desired features p that we specified a priori." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "___" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Sequential Backward Selection (SBS)\n", "\n", "### Input: \n", "\n", "The set of all features: Y={y1,y2,...,yd}\n", "\n", "* The SBS algorithm takes the whole feature set as input.\n", "\n", "### Output: \n", "\n", "Xk={xj|j=1,2,...,k;xj∈Y}, where k=(0,1,2,...,d)\n", "\n", "* SBS returns a subset of features; the number of selected features k, where k0=Y, k=d\n", "\n", "* We initialize the algorithm with the given feature set so that the k=d.\n", "\n", "\n", "### Step 1 (Exclusion):\n", "\n", "x-= arg max J(xk-x), where x∈Xk \n", "\n", "Xk-1=Xk-x-\n", "\n", "k=k-1 \n", "\n", "#### Go to Step 1\n", "\n", "* In this step, we remove a feature, x- from our feature subset Xk.\n", "* x- is the feature that maximizes our criterion function upon removal, that is, the feature that is associated with the best classifier performance if it is removed from Xk.\n", "* We repeat this procedure until the termination criterion is satisfied.\n", "\n", "### Termination: \n", "\n", "k=p\n", "\n", "We remove features from the feature subset Xk until the feature subset of size k contains the number of desired features p that we specified a priori" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Sequential Forward Floating Selection (SFFS)\n", "\n", "### Input: \n", "\n", "The set of all features: Y={y1,y2,...,yd}\n", "\n", "* The SFFS algorithm takes the whole feature set as input\n", "\n", "### Output: \n", "\n", "Xk={xj|j=1,2,...,k;xj∈Y}, where k=(0,1,2,...,d)\n", "\n", "* The returned output of the algorithm is a subset of the feature space of a specified size.\n", "\n", "### Initialization: \n", "\n", "X0=∅, k=0\n", "\n", "* We initialize the algorithm with an empty set (\"null set\") so that the k = 0 (where k is the size of the subset) \n", "\n", "\n", "### Step 1 (Inclusion):\n", "\n", "x+= arg max J(xk+x), where x∈Y−Xk \n", "\n", "Xk+1=Xk+x+\n", "\n", "k=k+1 \n", "\n", "#### Go to Step 2\n", "\n", "In step 1, we include the feature from the feature space that leads to the best performance increase for our feature subset (assessed by the criterion function). 
Then, we go over to step 2.\n", "\n", "### Step 2 (Conditional Exclusion):\n", "\n", "x-= arg max J(xk-x), where x∈Xk\n", "\n", "if J(xk - x) > J(xk):\n", "\n", "   Xk-1=Xk-x- \n", "\n", "   k=k-1 \n", " \n", "#### Go to Step 1\n", "\n", "In step 2, we only remove a feature if the resulting subset would gain an increase in performance. If k=2 or an improvement cannot be made (i.e., such feature x- cannot be found), go back to step 1; else, repeat this step.\n", "\n", "Steps 1 and 2 are repeated until the Termination criterion is reached.\n", "\n", "### Termination: \n", "\n", "k=p\n", "\n", "We add features from the feature subset Xk until the feature subset of size k contains the number of desired features p that we specified a priori." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Sequential Backward Floating Selection (SBFS)\n", "\n", "### Input: \n", "\n", "The set of all features: Y={y1,y2,...,yd}\n", "\n", "* The SBFS algorithm takes the whole feature set as input.\n", "\n", "### Output: \n", "\n", "Xk={xj|j=1,2,...,k;xj∈Y}, where k=(0,1,2,...,d)\n", "\n", "* SBFS returns a subset of features; the number of selected features k, where k0=Y, k=d\n", "\n", "* We initialize the algorithm with the given feature set so that the k=d.\n", "\n", "\n", "### Step 1 (Exclusion):\n", "\n", "x-= arg max J(xk-x), where x∈Xk \n", "\n", "Xk-1=Xk-x-\n", "\n", "k=k-1 \n", "\n", "#### Go to Step 2\n", "\n", "* In this step, we remove a feature, x- from our feature subset Xk.\n", "* x- is the feature that maximizes our criterion function upon removal, that is, the feature that is associated with the best classifier performance if it is removed from Xk.\n", "\n", "### Step 2 (Conditional Inclusion):\n", "\n", "x+= arg max J(xk+x), where x∈Y−Xk\n", "\n", "if J(xk + x+) > J(xk):\n", "\n", "   Xk+1=Xk+x+ \n", "\n", "   k=k+1 \n", " \n", "#### Go to Step 1\n", "\n", "In Step 2, we search for features that improve the classifier performance if they are added back to the feature subset. If such features exist, we add the feature x+ for which the performance improvement is maximized. If k=2 or an improvement cannot be made (i.e., such feature x+ cannot be found), go back to step 1; else, repeat this step.\n", "\n", "### Termination: \n", "\n", "k=p\n", "\n", "We add features from the feature subset Xk until the feature subset of size k contains the number of desired features p that we specified a priori." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 3. SequentialFeatureSelector object" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lets take a look at documentation and parameters of SFS object" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**SequentialFeatureSelector**(estimator, k_features=1, forward=True, floating=False, verbose=0, scoring=None, cv=5, n_jobs=1, pre_dispatch='2n_jobs', clone_estimator=True)\n", "\n", "Sequential Feature Selection for Classification and Regression.\n", "\n", "_**Parameters**_\n", "\n", "**estimator** : scikit-learn classifier or regressor\n", "\n", "**k_features** : int or tuple or str (default: 1)\n", "\n", "Number of features to select, where k_features < the full feature set. A tuple containing a min and max value can be provided, and the SFS will consider return any feature combination between min and max that scored highest in cross-validtion. 
For example, the tuple (1, 4) will return any combination of 1 up to 4 features instead of a fixed number of features k. A string argument \"best\" or \"parsimonious\" is also accepted. If \"best\" is provided, the feature selector will return the feature subset with the best cross-validation performance. If \"parsimonious\" is provided as an argument, the smallest feature subset that is within one standard error of the cross-validation performance will be selected.\n", "\n", "**forward** : bool (default: True)\n", "\n", "Forward selection if True, backward selection otherwise.\n", "\n", "**floating** : bool (default: False)\n", "\n", "Adds a conditional exclusion/inclusion step if True.\n", "\n", "**verbose** : int (default: 0), level of verbosity to use in logging.\n", "\n", "If 0, no output; if 1, the number of features in the current set; if 2, detailed logging including timestamp and cv scores at each step.\n", "\n", "**scoring** : str, callable, or None (default: None)\n", "\n", "If None (default), uses 'accuracy' for sklearn classifiers and 'r2' for sklearn regressors. If str, uses a sklearn scoring metric string identifier, for example {accuracy, f1, precision, recall, roc_auc} for classifiers, {'mean_absolute_error', 'mean_squared_error'/'neg_mean_squared_error', 'median_absolute_error', 'r2'} for regressors.\n", "\n", "**cv** : int (default: 5)\n", "\n", "Integer or iterable yielding train, test splits. If cv is an integer and the estimator is a classifier (or y consists of integer class labels), stratified k-fold is used. Otherwise regular k-fold cross-validation is performed. No cross-validation if cv is None, False, or 0.\n", "\n", "**n_jobs** : int (default: 1)\n", "\n", "The number of CPUs to use for evaluating different feature subsets in parallel. -1 means 'all CPUs'.\n", "\n", "**pre_dispatch** : int, or string (default: '2*n_jobs')\n", "\n", "Controls the number of jobs that get dispatched during parallel execution if n_jobs > 1 or n_jobs=-1. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be: None, in which case all the jobs are immediately created and spawned (use this for lightweight and fast-running jobs to avoid delays due to on-demand spawning of the jobs); an int, giving the exact number of total jobs that are spawned; or a string, giving an expression as a function of n_jobs, as in '2*n_jobs'.\n", "\n", "**clone_estimator** : bool (default: True)\n", "\n", "Clones the estimator if True; works with the original estimator instance if False. Set to False if the estimator doesn't implement scikit-learn's set_params and get_params methods. In addition, it is then required to set cv=0 and n_jobs=1." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 4. Logistic Regression with feature selection by mlxtend.sfs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this tutorial we will use the toy sklearn dataset **\"breast_cancer\"** (a binary classification task). Let's load the dataset and take a look at the data."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.datasets import load_breast_cancer\n", "\n", "data = load_breast_cancer()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df, y = pd.DataFrame(data=data.data, columns=data.feature_names), data.target" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.info()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "columns = df.columns" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sum(y) / len(y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are 30 float not-null features and 569 examples. 62,7% of examples have class 1 and 37,3% of examples have class 0. Classes are not very skewed, so accuracy metric is suitable for us." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will use LogisticRegression as our base algorithm. Firstly, scaling the data. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "scaler = StandardScaler()\n", "df_scaled = scaler.fit_transform(df)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then initiating cross-validation object with 5 folds with fixed random_state " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cv = KFold(n_splits=5, shuffle=True, random_state=17)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Checking cv results without parameters tuning" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "logit = LogisticRegression()\n", "cross_val_score(logit, df_scaled, y, cv=cv).mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "CV result is **0.984**\n", "It will be our baseline. Trying to beat it with feature selection\n", "\n", "We will use Sequential Backward Selection, so toggle **forward** and **floating** parameters to **False**\n", "\n", "Sequential Backward Selection means that:\n", "\n", "* We will start with all features K (in our dataset K=30)\n", "* On each iteration n we fit estimator with K-n features and keep on K-n subset of features with best scoring\n", "\n", "Setting parameter k_features to tuple (1, K), so it will be subset of features in range (1, 30) with best scoring on CV as output of fit_transform method." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "logit = LogisticRegression()\n", "sbs = SequentialFeatureSelector(\n", " logit,\n", " k_features=(1, 30),\n", " forward=False,\n", " floating=False,\n", " verbose=2,\n", " scoring=\"accuracy\",\n", " cv=cv,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There is information about CV scoring on each iteration in log. The best quality we have with subset with 15 and from 17 to 24 features." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_sbs = sbs.fit_transform(df_scaled, y, custom_feature_names=columns)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Plotting results:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_sfs(sbs.get_metric_dict(), kind=\"std_dev\");" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "SBS returns subset of dataframe with optimal K features" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_sbs.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There is the subset of selected feature names:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sbs.k_feature_names_" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"The best quality is {} with {} features in dataset\".format(\n", " sbs.k_score_, len(sbs.k_feature_idx_)\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Quality is increased! ***0.984 -> 0.988***" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Saving scores to dict and try another SFS algorithms" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sbs_dict = dict()\n", "for i in sbs.subsets_.values():\n", " sbs_dict[len(i[\"feature_names\"])] = i[\"avg_score\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we try to use Sequential Forward Selection, so toggle forward parameter to **True**

\n", "Sequential Forward Selection means that:\n", "\n", "* We will **start with 0 features**\n", "* On each iteration N we fit estimator with N features and keep on N subset of features with best scoring" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "logit = LogisticRegression()\n", "sfs = SequentialFeatureSelector(\n", " logit,\n", " k_features=(1, 30),\n", " forward=True,\n", " floating=False,\n", " verbose=2,\n", " scoring=\"accuracy\",\n", " cv=cv,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_sfs = sfs.fit_transform(df_scaled, y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ptotiing results:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_sfs(sfs.get_metric_dict(), kind=\"std_dev\");" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"The best quality is {} with {} features in dataset\".format(\n", " sfs.k_score_, len(sfs.k_feature_idx_)\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now the quality is equal to our baseline. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Why quality of SFS is worse, than SBS? \n", "We use \"Forward\" algorithm, so on first iteration we select one feature and fit estimator with it. It's obviously, that finding \"the best\" feature fitting one dimensional dataset is not very effective. More than that, in Sequential Forward Selection we can't remove feature once added.

Let's try to find, \"bad feature\" that we add in our dataset once and on what iteration stage this happened." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sbs_feat = set(sbs.subsets_[24][\"feature_idx\"]) # best feature set of SBS algorithm\n", "for i in range(1, 30):\n", " sfs_feat = set(\n", " sfs.subsets_[i][\"feature_idx\"]\n", " ) # iterate throw feature set on each iteration of SFS algorithm\n", " if len([x for x in sfs_feat if x not in sbs_feat]) > 0:\n", " print(\n", " 'We add \"bad feature\" # {} on {} iteration stage'.format(\n", " sfs_feat - sbs_feat, i\n", " )\n", " )\n", " break" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Save results on each itertaion too" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sfs_dict = dict()\n", "for i in sfs.subsets_.values():\n", " sfs_dict[len(i[\"feature_names\"])] = i[\"avg_score\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will try to use Sequential Forward Floating Selection, so toggle floating parameter to True. It can help us to remove worst feature at each iteration additional step" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "logit = LogisticRegression()\n", "sffs = SequentialFeatureSelector(\n", " logit,\n", " k_features=(1, 30),\n", " forward=True,\n", " floating=True,\n", " verbose=2,\n", " scoring=\"accuracy\",\n", " cv=cv,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_sffs = sffs.fit_transform(df_scaled, y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Plotting results:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_sfs(sffs.get_metric_dict(), kind=\"std_dev\");" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"The best quality is {} with {} features in dataset\".format(\n", " sffs.k_score_, len(sffs.k_feature_idx_)\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The quality is little higher, that SFS one, but SBS is the best algortihm today. 
Saving results to dict and lets try the last implementation - Sequential Backward Floating Selection" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sffs_dict = dict()\n", "for i in sffs.subsets_.values():\n", " sffs_dict[len(i[\"feature_names\"])] = i[\"avg_score\"]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "logit = LogisticRegression()\n", "sbfs = SequentialFeatureSelector(\n", " logit,\n", " k_features=(1, 30),\n", " forward=False,\n", " floating=True,\n", " verbose=2,\n", " scoring=\"accuracy\",\n", " cv=cv,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_sbfs = sbfs.fit_transform(df_scaled, y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ploting results:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_sfs(sbfs.get_metric_dict(), kind=\"std_dev\");" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"The best quality is {} with {} features in dataset\".format(\n", " sbfs.k_score_, len(sbfs.k_feature_idx_)\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The quality of SBS and SBFS algorithms is equal in our example. But sometimes it increased." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sbfs_dict = dict()\n", "for i in sbfs.subsets_.values():\n", " sbfs_dict[len(i[\"feature_names\"])] = i[\"avg_score\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 5. Comparing results with RFE and PCA" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Trying another feature selection and dimensional reducing algorithms" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dict_pca = dict()\n", "for i in range(1, 31):\n", " pca = PCA(n_components=i)\n", " df_pca = pca.fit_transform(df_scaled, y)\n", " logit = LogisticRegression()\n", " score = cross_val_score(logit, df_pca, y, cv=cv).mean()\n", " dict_pca[i] = score" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"The best quality is {} with {} features in dataset\".format(\n", " max(dict_pca.values()), max(dict_pca, key=dict_pca.get)\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Accuracy metric is lower on PCA dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dict_rfe = dict()\n", "for i in range(1, 31):\n", " rfe = RFE(logit, n_features_to_select=i)\n", " df_rfe = rfe.fit_transform(df_scaled, y)\n", " logit = LogisticRegression()\n", " score = cross_val_score(logit, df_rfe, y, cv=cv).mean()\n", " dict_rfe[i] = score" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"The best quality is {} with {} features in dataset\".format(\n", " max(dict_rfe.values()), max(dict_rfe, key=dict_rfe.get)\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "RFE quality is lower too. RFE is computationally less complex using the feature weight coefficients (e.g., linear models) or feature importance (tree-based algorithms) to eliminate features recursively, whereas SFSs eliminate (or add) features based on a user-defined classifier/regression performance metric." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Comparing CV scores of all algorithms" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.DataFrame(\n", " data=[pd.Series(dict_rfe), pd.Series(dict_pca), pd.Series(sbs_dict)],\n", " index=[\"RFE\", \"PCA\", \"SBS\"],\n", ").T" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Maximum score we got with SBS and minimum 15 features in subset. RFE is worse with any number of features. PCA is better only with 9 features in subset. RFE and PCA could not find subset of features with score more than full dataset's score. SBS del with it." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

**How we will choose features after this tutorial :)**

" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " Of course, RFE, PCA and SBS solve slightly different tasks. It's important to know how and when we should implement one or another instrument. And more important is to have an inquiring mind \n", "and test the craziest hypotheses :)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Conclusion " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this tutorial we studied something new about feature selection, understood how SequentialFeatureSelector from Mlxtend library works - it allows very easy selection from new generated features and boost model's quality. Then we compared it with another feature selection and dimension reducing algorithms.

\n", "Beginners data scientists often pay a little attention to feature selection and trying to testing many different models instead. But feature selection can boost model score very much. It's near impossible to get top Kaggle without ~~stacking xgboost~~ careful feature engineering and selecting best features.\n", "\n", "Save the best, delete the rest! That's all, folks!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Featured links\n", "Official Mlxtend Docs https://rasbt.github.io/mlxtend/user_guide/feature_selection/SequentialFeatureSelector" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.3" } }, "nbformat": 4, "nbformat_minor": 2 }