{"nbformat_minor": 0, "cells": [{"source": "## Feature Selection with scikit-learn (sklearn)\nJaganadh Gopinadhan\nhttp://jaganadhg.in ", "cell_type": "markdown", "metadata": {"raw_mimetype": "text/latex"}}, {"source": "Feature extraction is one of the essential step in Data Science/Machine Learning and Data Mining exercises. Effective use of feature extraction techniques helps a Data Scientist to build the best model. This note is intent to give a brief over view on feature selection with scikit-learn (sklearn). The result of a feature selection exercise is to find the most important and descriptive feature from a given data.\n\n## Note\nThe code is for getting familiarity with the utilities.", "cell_type": "markdown", "metadata": {}}, {"source": "#### Find K-Best features for classification and regression\nThe first method which we are going to explore is the selecting the K-best features using the SelectKBest utility in sklearn. We will use the famous IRIS two class data-set.\n\nThe first example we are going to look is feature selection for classification.", "cell_type": "markdown", "metadata": {}}, {"execution_count": 52, "cell_type": "code", "source": "import pandas as pd\nfrom sklearn.feature_selection import SelectKBest, f_classif\n\ndef select_kbest_clf(data_frame, target, k=2):\n \"\"\"\n Selecting K-Best features for classification\n :param data_frame: A pandas dataFrame with the training data\n :param target: target variable name in DataFrame\n :param k: desired number of features from the data\n :returns feature_scores: scores for each feature in the data as \n pandas DataFrame\n \"\"\"\n feat_selector = SelectKBest(f_classif, k=k)\n _ = feat_selector.fit(data_frame.drop(target, axis=1), data_frame[target])\n \n feat_scores = pd.DataFrame()\n feat_scores[\"F Score\"] = feat_selector.scores_\n feat_scores[\"P Value\"] = feat_selector.pvalues_\n feat_scores[\"Support\"] = feat_selector.get_support()\n feat_scores[\"Attribute\"] = data_frame.drop(target, axis=1).columns\n \n return feat_scores\n\niris_data = pd.read_csv(\"/resources/iris.csv\")\n\nkbest_feat = select_kbest_clf(iris_data, \"Class\", k=2)\nkbest_feat = kbest_feat.sort([\"F Score\", \"P Value\"], ascending=[False, False])\nkbest_feat\n", "outputs": [{"execution_count": 52, "output_type": "execute_result", "data": {"text/plain": " F Score P Value Support Attribute\n2 2498.618817 1.504801e-71 True petal-length\n3 1830.624469 3.230375e-65 True petal-width\n0 236.735022 6.892546e-28 False sepal-length\n1 41.607003 4.246355e-09 False sepal-width\n\n[4 rows x 4 columns]", "text/html": "
"}, "metadata": {}}], "metadata": {"collapsed": false, "trusted": false}}, {"source": "##### What just happened ?\nThe selectkbest function accepts a pandas DataFrame, and target variable name and k as parameters. First we create a SelectKBest object with estimator as f_classif (because we are working with a classification problem). The we are fitting the model with the data. Once we fit the model information on feature importance will be available in the fitted model. The Annova F score of the features are accessible through the scores attributes and the p-values are available through the pvalues_. The get_support function will return a bool value if a feature is selected.\n\nNow the question is how can I determine which feature is selected? The easy way is that if the Support is Tru those features are are good. The higher the F Score and the lesser the p-values the feature is best.\n\nLet's examine the results we obtained from the iris data. The attributes 'petal-length' and 'petal-width' got higher F Score and lesser P Value; and Support is true. So those feature are important compared to other features. To understand the real-power of this method you have to check this with a data with more dimensions.\n\n##### Next ....\nIn the next example we can try to see how we can apply this technique to a regression problem. Basically there is not much difference in the code. We will change the estimator to f_regression. We can try this with the Boston house price dataset.", "cell_type": "markdown", "metadata": {}}, {"execution_count": 53, "cell_type": "code", "source": "import pandas as pd\nfrom sklearn.feature_selection import SelectKBest, f_regression\n\n\ndef select_kbest_reg(data_frame, target, k=5):\n \"\"\"\n Selecting K-Best features for regression\n :param data_frame: A pandas dataFrame with the training data\n :param target: target variable name in DataFrame\n :param k: desired number of features from the data\n :returns feature_scores: scores for each feature in the data as \n pandas DataFrame\n \"\"\"\n feat_selector = SelectKBest(f_regression, k=k)\n _ = feat_selector.fit(data_frame.drop(target, axis=1), data_frame[target])\n \n feat_scores = pd.DataFrame()\n feat_scores[\"F Score\"] = feat_selector.scores_\n feat_scores[\"P Value\"] = feat_selector.pvalues_\n feat_scores[\"Support\"] = feat_selector.get_support()\n feat_scores[\"Attribute\"] = data_frame.drop(target, axis=1).columns\n \n return feat_scores\n\nboston = pd.read_csv(\"/resources/boston.csv\")\n\nkbest_feat = select_kbest_reg(boston, \"price\", k=5)\n\nkbest_feat = kbest_feat.sort([\"F Score\", \"P Value\"], ascending=[False, False])\nkbest_feat\n", "outputs": [{"execution_count": 53, "output_type": "execute_result", "data": {"text/plain": " F Score P Value Support Attribute\n12 601.617871 5.081103e-88 True 12\n5 471.846740 2.487229e-74 True 5\n10 175.105543 1.609509e-34 True 10\n2 153.954883 4.900260e-31 True 2\n9 141.761357 5.637734e-29 True 9\n4 112.591480 7.065042e-24 False 4\n0 88.151242 2.083550e-19 False 0\n8 85.914278 5.465933e-19 False 8\n6 83.477459 1.569982e-18 False 6\n1 75.257642 5.713584e-17 False 1\n11 63.054229 1.318113e-14 False 11\n7 33.579570 1.206612e-08 False 7\n3 15.971512 7.390623e-05 False 3\n\n[13 rows x 4 columns]", "text/html": "
"}, "metadata": {}}], "metadata": {"collapsed": false, "trusted": false}}, {"source": "##### Select features according to a percentile of the highest scores.\nThe next trick we are going to explore is 'SelectPercentile' based feature selection. This technique will return the features base on percentile of the highest score. Let's see it in action with Boston data.", "cell_type": "markdown", "metadata": {}}, {"execution_count": 54, "cell_type": "code", "source": "import pandas as pd\nfrom sklearn.feature_selection import SelectPercentile, f_regression\n\n\ndef select_percentile(data_frame, target, percentile=15):\n \"\"\"\n Percentile based feature selection for regression\n :param data_frame: A pandas dataFrame with the training data\n :param target: target variable name in DataFrame\n :param k: desired number of features from the data\n :returns feature_scores: scores for each feature in the data as \n pandas DataFrame\n \"\"\"\n feat_selector = SelectPercentile(f_regression, percentile=percentile)\n _ = feat_selector.fit(data_frame.drop(target, axis=1), data_frame[target])\n \n feat_scores = pd.DataFrame()\n feat_scores[\"F Score\"] = feat_selector.scores_\n feat_scores[\"P Value\"] = feat_selector.pvalues_\n feat_scores[\"Support\"] = feat_selector.get_support()\n feat_scores[\"Attribute\"] = data_frame.drop(target, axis=1).columns\n \n return feat_scores\n\nboston = pd.read_csv(\"/resources/boston.csv\")\n\nper_feat = select_percentile(boston, \"price\", percentile=50)\n\nper_feat = per_feat.sort([\"F Score\", \"P Value\"], ascending=[False, False])\nper_feat", "outputs": [{"execution_count": 54, "output_type": "execute_result", "data": {"text/plain": " F Score P Value Support Attribute\n12 601.617871 5.081103e-88 True 12\n5 471.846740 2.487229e-74 True 5\n10 175.105543 1.609509e-34 True 10\n2 153.954883 4.900260e-31 True 2\n9 141.761357 5.637734e-29 True 9\n4 112.591480 7.065042e-24 True 4\n0 88.151242 2.083550e-19 False 0\n8 85.914278 5.465933e-19 False 8\n6 83.477459 1.569982e-18 False 6\n1 75.257642 5.713584e-17 False 1\n11 63.054229 1.318113e-14 False 11\n7 33.579570 1.206612e-08 False 7\n3 15.971512 7.390623e-05 False 3\n\n[13 rows x 4 columns]", "text/html": "
"}, "metadata": {}}], "metadata": {"collapsed": false, "trusted": false}}, {"source": "##### Univariate feature selection\nThe next method we are going to explore is univariate feature selection. We will use the same Boston data for this example also.", "cell_type": "markdown", "metadata": {}}, {"execution_count": 55, "cell_type": "code", "source": "import pandas as pd\nfrom sklearn.feature_selection import GenericUnivariateSelect, f_regression\n\n\ndef select_univarite(data_frame, target, mode='fdr'):\n \"\"\"\n Univarite feature selection \n :param data_frame: A pandas dataFrame with the training data\n :param target: target variable name in DataFrame\n :param k: desired number of features from the data\n :returns feature_scores: scores for each feature in the data as \n pandas DataFrame\n \"\"\"\n feat_selector = GenericUnivariateSelect(f_regression, mode=mode)\n _ = feat_selector.fit(data_frame.drop(target, axis=1), data_frame[target])\n \n feat_scores = pd.DataFrame()\n feat_scores[\"F Score\"] = feat_selector.scores_\n feat_scores[\"P Value\"] = feat_selector.pvalues_\n feat_scores[\"Support\"] = feat_selector.get_support()\n feat_scores[\"Attribute\"] = data_frame.drop(target, axis=1).columns\n \n return feat_scores\n\nboston = pd.read_csv(\"/resources/boston.csv\")\n\nuv_feat = select_univarite(boston, \"price\", mode='fpr')\n\nuv_feat = uv_feat.sort([\"F Score\", \"P Value\"], ascending=[False, False])\nuv_feat", "outputs": [{"execution_count": 55, "output_type": "execute_result", "data": {"text/plain": " F Score P Value Support Attribute\n12 601.617871 5.081103e-88 True 12\n5 471.846740 2.487229e-74 True 5\n10 175.105543 1.609509e-34 True 10\n2 153.954883 4.900260e-31 True 2\n9 141.761357 5.637734e-29 True 9\n4 112.591480 7.065042e-24 True 4\n0 88.151242 2.083550e-19 True 0\n8 85.914278 5.465933e-19 True 8\n6 83.477459 1.569982e-18 True 6\n1 75.257642 5.713584e-17 True 1\n11 63.054229 1.318113e-14 True 11\n7 33.579570 1.206612e-08 True 7\n3 15.971512 7.390623e-05 False 3\n\n[13 rows x 4 columns]", "text/html": "
"}, "metadata": {}}], "metadata": {"collapsed": false, "trusted": false}}, {"source": "In the example if we change the mode to 'fdr' the algo will find the score based on false discovery rate, 'fpr' false positive rate, 'fwr' family based error, 'percentile' and 'kbest' will do Percentile and KBest based scoring.\n\n#### Family-wise error rate\nThe next method we are going to explore is Family-wise error rate. We will use the same Boston data for this example also.", "cell_type": "markdown", "metadata": {}}, {"execution_count": 20, "cell_type": "code", "source": "import pandas as pd\nfrom sklearn.feature_selection import SelectFwe, f_regression\n\n\ndef fwe_feat_select(data_frame, target):\n \"\"\"\n Family Wise Error Rate based feature selection for regression\n :param data_frame: A pandas dataFrame with the training data\n :param target: target variable name in DataFrame\n :param k: desired number of features from the data\n :returns feature_scores: scores for each feature in the data as \n pandas DataFrame\n \"\"\"\n feat_selector = SelectFwe(f_regression)\n _ = feat_selector.fit(data_frame.drop(target, axis=1), data_frame[target])\n \n feat_scores = pd.DataFrame()\n feat_scores[\"F Score\"] = feat_selector.scores_\n feat_scores[\"P Value\"] = feat_selector.pvalues_\n feat_scores[\"Support\"] = feat_selector.get_support()\n feat_scores[\"Attribute\"] = data_frame.drop(target, axis=1).columns\n \n return feat_scores\n\nboston = pd.read_csv(\"/resources/boston.csv\")\n\nfwe_feat = fwe_feat_select(boston, \"price\")\n\nfwe_feat = fwe_feat.sort([\"F Score\", \"P Value\"], ascending=[False, False])\nfwe_feat", "outputs": [{"execution_count": 20, "output_type": "execute_result", "data": {"text/plain": " F Score P Value Support Attribute\n12 601.617871 5.081103e-88 True 12\n5 471.846740 2.487229e-74 True 5\n10 175.105543 1.609509e-34 True 10\n2 153.954883 4.900260e-31 True 2\n9 141.761357 5.637734e-29 True 9\n4 112.591480 7.065042e-24 True 4\n0 88.151242 2.083550e-19 True 0\n8 85.914278 5.465933e-19 True 8\n6 83.477459 1.569982e-18 True 6\n1 75.257642 5.713584e-17 True 1\n11 63.054229 1.318113e-14 True 11\n7 33.579570 1.206612e-08 True 7\n3 15.971512 7.390623e-05 True 3\n\n[13 rows x 4 columns]", "text/html": "
"}, "metadata": {}}], "metadata": {"collapsed": false, "trusted": false}}, {"source": "##### Recursive Feature Elimination\nRecursive Feature Elimination RFE, utilities an external estimator to estimate the weight of features. The goal of this method is to select features by recursively considering smaller and smaller sets.\n\nLet's examine this feature through an example. The external estimator which we are going to use is Support Vector Machine Regression (SVR) from sklearn.", "cell_type": "markdown", "metadata": {}}, {"execution_count": 3, "cell_type": "code", "source": "import pandas as pd\n\nfrom sklearn.feature_selection import RFE\nfrom sklearn.svm import SVR\n\ndef ref_feature_select(data_frame,target_name, n_feats=20):\n \"\"\"\n :param data_frame: a apndas DataFrame containing the data\n :param target_name: Header of the target variable name \n :param n_feats: Number of features to be selected\n :returns scored: pandas DataFrame containing feature scoring\n Identify the number of features based Recursive Feature Elimination\n Cross Validated method in scikit-learn.\n \"\"\"\n estimator = SVR(kernel='linear')\n selector = RFE(estimator, step = 1)\n _ = selector.fit(data_frame.drop(target_name,axis = 1),\\\n data_frame[target_name])\n\n scores = pd.DataFrame()\n scores[\"Attribute Name\"] = data_frame.drop(target_name,axis = 1).columns\n scores[\"Ranking\"] = selector.ranking_\n scores[\"Support\"] = selector.support_\n\n return scores\n\nboston = pd.read_csv(\"/resources/boston.csv\")\n\nfeatures = ref_feature_select(boston, \"price\")\n\nfeatures = features.sort([\"Ranking\"], ascending=[False])\nfeatures\n", "outputs": [{"execution_count": 3, "output_type": "execute_result", "data": {"text/plain": " Attribute Name Ranking Support\n9 9 8 False\n8 8 7 False\n11 11 6 False\n6 6 5 False\n1 1 4 False\n2 2 3 False\n0 0 2 False\n12 12 1 True\n10 10 1 True\n7 7 1 True\n5 5 1 True\n4 4 1 True\n3 3 1 True\n\n[13 rows x 3 columns]", "text/html": "
"}, "metadata": {}}], "metadata": {"collapsed": false, "trusted": false}}, {"source": "##### There is more ....\nThe RFE has another variant in sklearn called Recursive Feature Elimination Cross Validated (RFECV). The difference is that the training data passed to the estimator will be split into cross validation set. Then based on the cross validation steps the estimator fits model and selects the best model to assign the feature score.\n\nLet's see the code ...", "cell_type": "markdown", "metadata": {}}, {"execution_count": 2, "cell_type": "code", "source": "import pandas as pd\n\nfrom sklearn.feature_selection import RFECV\nfrom sklearn.svm import SVR\n\ndef refcv_feature_select(data_frame,target_name,n_feats=20):\n \"\"\"\n :param data_frame: a apndas DataFrame containing the data\n :param target_name: Header of the target variable name \n :param n_feats: Number of features to be selected\n :returns scored: pandas DataFrame containing feature scoring\n Identify the number of features based Recursive Feature Elimination\n Cross Validated method in scikit-learn.\n \"\"\"\n estimator = SVR(kernel='linear')\n selector = RFECV(estimator, step = 1, cv = 3)\n _ = selector.fit(data_frame.drop(target_name,axis = 1),\\\n data_frame[target_name])\n\n scores = pd.DataFrame()\n scores[\"Attribute Name\"] = data_frame.drop(target_name,axis = 1).columns\n scores[\"Ranking\"] = selector.ranking_\n scores[\"Support\"] = selector.support_\n\n return scores\n\n\nboston = pd.read_csv(\"/resources/boston.csv\")\n\nref_cv_features = refcv_feature_select(boston, \"price\")\n\nref_cv_features = ref_cv_features.sort([\"Ranking\"], ascending=[False])\nref_cv_features", "outputs": [{"execution_count": 2, "output_type": "execute_result", "data": {"text/plain": " Attribute Name Ranking Support\n9 9 4 False\n8 8 3 False\n11 11 2 False\n12 12 1 True\n10 10 1 True\n7 7 1 True\n6 6 1 True\n5 5 1 True\n4 4 1 True\n3 3 1 True\n2 2 1 True\n1 1 1 True\n0 0 1 True\n\n[13 rows x 3 columns]", "text/html": "
"}, "metadata": {}}], "metadata": {"collapsed": false, "trusted": false}}, {"source": "##### Variance threshold based feature selection (for un-supervised) learning\nSo far we have examined feature selection for supervised learning such as classification and regression. What about un-supervised feature selection? The variance thresholds based feature selection utility in sklearn comes handy here. This method will remove all low variance features and the threshold for this can be configured too.\n\nLet's try this in the Boston data !", "cell_type": "markdown", "metadata": {"collapsed": true}}, {"execution_count": 5, "cell_type": "code", "source": "import pandas as pd\n\nfrom sklearn.feature_selection import VarianceThreshold\n\n\ndef var_thr_feat_select(data_frame):\n \"\"\"\n Variance threshold based feature selection\n :param data_frame: a pandas data frame with only X\n :returns scores: a pandas data frame with feature scores.\n \"\"\"\n \n varthr = VarianceThreshold()\n varthr.fit(data_frame)\n \n scores = pd.DataFrame()\n scores[\"Attribute Name\"] = data_frame.columns\n scores[\"Variance\"] = varthr.variances_\n scores[\"Support\"] = varthr.get_support()\n \n return scores\n\nboston = pd.read_csv(\"/resources/boston.csv\")\n\nvar_t_features = var_thr_feat_select(boston.drop(\"price\",axis=1))\n\nvar_t_features = var_t_features.sort([\"Variance\"], ascending=[False])\nvar_t_features\n ", "outputs": [{"output_type": "stream", "name": "stdout", "text": "0.14.1\n"}, {"ename": "ImportError", "evalue": "cannot import name VarianceThreshold", "traceback": ["\u001b[1;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[1;31mImportError\u001b[0m Traceback (most recent call last)", "\u001b[1;32m\u001b[0m in \u001b[0;36m\u001b[1;34m()\u001b[0m\n\u001b[0;32m 3\u001b[0m \u001b[1;32mimport\u001b[0m \u001b[0mpandas\u001b[0m \u001b[1;32mas\u001b[0m \u001b[0mpd\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 4\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m----> 5\u001b[1;33m \u001b[1;32mfrom\u001b[0m \u001b[0msklearn\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mfeature_selection\u001b[0m \u001b[1;32mimport\u001b[0m \u001b[0mVarianceThreshold\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 6\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 7\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n", "\u001b[1;31mImportError\u001b[0m: cannot import name VarianceThreshold"], "output_type": "error"}], "metadata": {"collapsed": false, "trusted": false}}, {"source": "#### L1 based feature selection\nAnother supervised method for feature selection is L1 based methods. The estimators used for regression is Lasso, for logistic regression LogisticRegression and for classification LinierSVC. 
Unlike the previous methods it will produce a new data set with selected features not the feature scores.", "cell_type": "markdown", "metadata": {}}, {"execution_count": 6, "cell_type": "code", "source": "#This example is taken from sklearn document\nfrom sklearn.svm import LinearSVC\nfrom sklearn.datasets import load_iris\niris = load_iris()\nX, y = iris.data, iris.target\nprint X.shape\nX_new = LinearSVC(C=0.01, penalty=\"l1\", dual=False).fit_transform(X, y)\nprint X_new.shape\n\nprint X[0]\nprint X_new[0]", "outputs": [{"output_type": "stream", "name": "stdout", "text": "(150, 4)\n(150, 3)\n[ 5.1 3.5 1.4 0.2]\n[ 5.1 3.5 1.4]\n"}], "metadata": {"collapsed": false, "trusted": false}}, {"source": "If we closely look in to the resulting data set, we can see that the last feature is eliminated after the L1 process.\n\n#### Tree and Ensemble based feature importance\nWe can find the feature importance based on Tree and Ensemble classifiers available in sklearn. The ExtraTreesClassifier, GradientBoostingClassifier, RandomForestClassifier, and AdaBoostClassifier from ensemble and DecisionTreeClassifier from tree can be used for this.\n\nBack to some code with IRIS again !!!", "cell_type": "markdown", "metadata": {}}, {"execution_count": 14, "cell_type": "code", "source": "%matplotlib inline\nimport pandas as pd\nimport numpy as np\n\nfrom sklearn.ensemble import ExtraTreesClassifier, GradientBoostingClassifier, \\\nRandomForestClassifier, AdaBoostClassifier\nfrom sklearn.tree import DecisionTreeClassifier\n\nimport matplotlib.pyplot as plt\n\nclass CalculateFeatureImportance(object):\n \"\"\"\n Calculate the feature importance from a given data set using ensemble and \n tree classifiers.\n \"\"\"\n \n def __init__(self):\n \"\"\"\n \"\"\"\n self.classifiers = [ExtraTreesClassifier,GradientBoostingClassifier,\\\n RandomForestClassifier,AdaBoostClassifier,DecisionTreeClassifier]\n self.mapping = [\"Extra Tree\",\"Gradient Boosting\",\"Random Forest\",\\\n \"Ada Boost\",\"Decision Tree\"]\n \n def feat_importance(self, X, Y):\n \"\"\"\n Compute the importance\n :param X: a pandas DataFrame with features \n :param Y: a pandas DataFrame with target values\n :returns feature_importances: a numpy array ?\n \"\"\"\n \n feature_importances = dict()\n \n for clf_n in range(len(self.classifiers)):\n clf = self.classifiers[clf_n]()\n clf.fit(X,Y)\n imp_features = clf.feature_importances_\n feature_importances[self.mapping[clf_n]] = imp_features\n \n return feature_importances\n\n \n def plot_feat_importance(self, feat_impts):\n \"\"\"\n Plot the feature importance\n :param feat_impts: Feature importance calculated by the estimator.\n \"\"\"\n plot_nums = lambda x: x if x / 2 == 0 else int((x + 1) / 2)\n pnums = plot_nums(len(feat_impts))\n ax_index = 1\n \n fig = plt.figure()\n \n\n for name_,importance in feat_impts.items():\n indics = np.argsort(importance)[::1]\n ax_name = dict()\n ax_name[\"name\"] = \"ax_\" + str(ax_index)\n ax_name[\"name\"] = fig.add_subplot(pnums, 2, ax_index)\n ax_name[\"name\"].bar(range(len(indics)), importance[indics], color='g')\n ax_name[\"name\"].set_xticks(indics)\n ax_name[\"name\"].set_xlim([-1, len(indics)])\n ax_name[\"name\"].set_xlabel(\"Feature\")\n ax_name[\"name\"].set_ylabel(\"Importance\")\n ax_name[\"name\"].set_title(name_)\n ax_index += 1\n \n plt.tight_layout()\n plt.show()\n\niris = pd.read_csv(\"/resources/iris.csv\")\nY = iris[\"Class\"]\nX = iris.drop(\"Class\", 1)\nfimp = CalculateFeatureImportance()\ncfimp = fimp.feat_importance(X, 
Y)\nfimp.plot_feat_importance(cfimp)", "outputs": [{"output_type": "stream", "name": "stderr", "text": "/usr/local/lib/python2.7/dist-packages/matplotlib/backends/backend_agg.py:517: DeprecationWarning: npy_PyFile_Dup is deprecated, use npy_PyFile_Dup2\n filename_or_obj, self.figure.dpi)\n/usr/local/lib/python2.7/dist-packages/matplotlib/backends/backend_agg.py:517: DeprecationWarning: npy_PyFile_Dup is deprecated, use npy_PyFile_Dup2\n filename_or_obj, self.figure.dpi)\n"}, {"output_type": "display_data", "data": {"image/png": "iVBORw0KGgoAAAANSUhEUgAAAakAAAEaCAYAAACrcqiAAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJztnXmcHFW597/JJBBCgCAgSwIMIKiAC4tsgg7g5Q2gwMsi\nFwEJ8gpyRVyuIriO4oaKIC6IigTkCnIBuXAvKII0oGyCJIQtsoV9D0tC2ELm/eN36lZNT8/06Zqe\nruru3/fzqc9UdS3nzEw9/ZzznGcBY4wxxhhjjDHGGGOMMcYYY4wxxhhjjDHGGGOMMSaKnqI7YIwx\nBTAL2AK4quB+mDqML7oDXc58YDGwMLOdEnFfH/BwE/uxQ6b9RcDSzPGLwPQmtmXMWFIBFgDL1Llu\nIGx5mE8qtwuA/2bsZaQCHDbGbZQSK6liGQA+CKyQ2Y5u0rMbmSVfm2l/k/DZSuF4ReCRnM81ppX0\nAlsBTwF7RFw/Lmc7WbldE3gS+EnOZzXSZldiJVVeTgXOzxyfAFwBTAYuA9YinemsCfSH638LvAAc\nArwHuB54DngMCdLEOu1WC26t564EnB6e+QhwPIPfpY8Bd6JR5h+BdSJ+X2NGy0eRjPwWvadZNgP+\ngeTlXGBS5tzKaDb0FHpnLwGmRbb5KnABsHHms5WAs8Lz5gNfJpWrccBXwudPAmeigSChT2cDzyCZ\nvQl4M/BtZO34KfHWFmOawgPAzsOcWw6Yh4RtB+BppJgA3s9Qc18/8BrpCHISsDkaWY4H1kWK49N1\n+tSLzH2J0qn13D8gJbocsBpwI3B4OL8ncA/w1vCMLwN/q9OmMc3gXuBAYEP0zr45fL4M8CB693uA\nfcL5b4bzbwL+L3q3pwDnoXd8OLJyOxkpmlmZ82eF+5dHcjcPDdwIP+9BcrY8UnBnhXNHABeHfoxD\ninWFcO6qzDOMaRnz0cjoucyWtTtvhUZ284H9M5/3UVtJVeq09xngwjrX9DJUSWWfuzrwCoNHogcA\nfwn7lzFYmMYDLwFr12nXmNGwPfAy6Zf6bPS+A7wPeLTq+r+RKqlq3o3kbjjmk8rta8iasGk414Nm\nV2/LXH84qYPGlcAnMuc2Cs/oAQ4N/XpHjTavwmtSpgAG0Mxj5cx2eub8TcD9Yf8/I573SNXxRsiM\n8Tgy1X0bWCVHP7PPXReZDB8nVay/QDOq5PyPM+eeDZ/Hmk+MycMhwOVIeYDkJTH5rcVQJfUgqQlu\nMnAaUj4vAFcjk91wa1ZZuV0W+FS4583Aqkg+Hsxc/xDp+79mjXMTwr2/Bf6EzJGPIhP/hKp2uw4r\nqXLzSWSqeAw4JvN5rZe1lrfSqcjE9xYkdF+m8f959XMfRiPFVUgV60qko7+H0Mgxq3iXB25osF1j\nYlkO+DCwExo8PQ78O/DOsD3G0EHSuqTv9b+jAd1W6F1+P1JQMY4VA8i09waazT0DvI4sEgnrkA70\nHqtxbglan1qCZnebANsh54yPZtrpSmK/sHqBD4T9yaQLfWb0DCcIGyGHhAPRi3oM8K5w7kmkJLL/\nh1rPmYJGlouR+eHIJvTvcTRi/REyrYwHNkAmFdCs6kukC8krAfvlaLdT6cWy1Gz2Ql/wb0cy8q6w\n/1ckO9eH80ejWc7eyKkoYQoyFb6A1qe+HtFm1hEimVXdhZTVechqMQUpw88ihwiAc8Jxbzj/HTRz\nWorM+O9Apr+FSNm9Ee57EsmZqcHhwN+B+8LxRsiuakbPAwyNk7oAvaQ3Mnj29AngNlLvvNPRqG0B\nMiF8nXQBNmEHJDgLgWuAb4SfI9GLBCMZwNR67orAz9Gs6nnkNfXhzPmDQl9fQDOrX9dps1uwLI0N\nlwE/qPH5fmjmMh4F7ma9+84hXZNaE635LATuRv+nrAxUk5XbF9G7fkDm/FRkunsKvf9fYbBS+2r4\n/CkkWyuFc/8a2l8EPAGcnOnDNsgBY0H43GSYg+yut2Y+mxt57wz0R78H+OIw1/SFZ99O/YV/Y9qZ\nvLL0GzSSHunaU5CczUFeYcZ0DTeFn4lgTUAjh3r0IJfQXjT6n42m4FmmAneQRmuvOpqOGlNy8srS\nDkjxDKekdgMuDftb4/U/00HErEldjRbcJwP/grxmLom4byukpOYj2+q5yHab5SPIvJUsKj4T8Vxj\n2pW8snQt8pQcjj1QrA7ITDwVhQoY0/bEKKljUSDpXBRsdimysdZjGoNjeR5hqIfNhmih8irgZuDg\niOca067klaV61JI151s0HcGE+pcwCS3S/zIc9yCXz8V17otxmZyIsiLsjEaX1yNTxT1V180m9Wwz\npkzMQcGfMeSVpRiqvTCHkz/LkikrNWUpRkn9BSmRReF4Mgo4267OfY8yOMvA2gwNNn0YmfheDts1\nSICqldS7yJ8Msln0h61o+im+H2XoA5SjH43Er+SVpXpUy9p0hgavJliWytMHKEc/BlrWA7UzUpD0\nEGLMfcuSChXI7XJyxH03I3NeLwpI3R/lpcryXygAric8c2sUfGpMJ5JXlupxMWnQ5zYoLODJJjzX\nmMKJUVIvoRiDhC3RrKceS4AzkG//IhQEeheyxR8RrlkDeG84/yxKF2IlZTqVvLJ0DnAdStr7MMqN\nmJWjS1H6rHtRep9/a1J/jSmcGHPfZ1AE9ePheE0GJzsdjh5gJhKsR1EQ49uREGX5M3G1X4qmUnQH\nApWiO0A5+gDl6UcseWXpABRzeDKSq9VQXrcs/Sj91RooQPSHDM7MXSYqRXeAcvQBytOP0hJrm14G\nKZsBNDN6PeKebVG2ghnh+Njw83uZa/pQ3qwP1XnWQAN9Nd3CeF5k6f9mvR7LdhaydNj0RY2+m3lk\nqSdc+wHSAd8ByDKR0I/MiceheMN5yA19ySj7azqfMq1JDTkXM5MCmSXWC9dvHj6rTpVTTS232K1r\ndGo75NXxKPB5bO4zsSxlhZYIV39TFWEeWcrGHEIac5hVUo+jZKqgtFXP
MlRBGdN2xCips4H1kevq\nG5nP6wlWjNfTP5BX0mJgV+AilM+sFv2Z/QqeJpti6AtbHvLKUsyA71fIe/AxlPj3wxjTAcQoqS1Q\nRutGU8XHuKAvzOxfhpKWvonaBcf6G2zfmLGgwuABUkzG7IS8shRz/ZeQ8utD2bL/jNzNF9a4tj+z\nX8EDPlMMfUQM+GKU1O1ogfexBjtwMzI/3IfS0C/P0FLpq6NMwFuiQN4FjFwR05h2Jq8sxQz4tkPl\nIUAy9wBa+7q5xvP6G2zfmLGgQsSAL0ZJrYbWiW5Cxe5AI7t6HnnZ0V92MSxxmz0N2BfVOFoXpbz/\nUUR/jGlX8spSNubwMeQReEDVNXcjx4q/ocHfW0mrOhvTtsQoqf6cz94KZXjOevftyWDvvp+h1Eiv\noSJk1ZkmjOkk+nPetwQ4CmWn6EGplZKYQ9CA7zsoLnEOin88BlslTAcQo6QqOZ8ds9g7DSmunZCS\n6toSyaYrqIzi3oHMtjR8lo05fAY4ETgJKbLDgd+Noj1jSkFMxoltUVzGIhTTsRSZ5uoRo3BORjOs\nxD/e8Rumk8krSz3AT5FVYmNk6qtVm+1nKOZwU2RKN6btiZlJ/RSVNT4POTh8FNm76xGz2LsFivkA\nBSDuioS3Oscf2CPJlIM+8rug55WlmDgp12YzHUlsMO89aDT3BrJ7zybNIDEcMYu962f2z0AF4Gop\nKLBHkikHFfK7oEM+WYoxnW+I1nevQnFSPwZ+22DfjCkdMUrqJZRuZQ7wfeAJ4sxy2QSz44ArGbrY\nuyfwTWT2WBu5zV4Y331j2oq8stTM2mxgq4QpB300KU7qYLR2dRTwWVSrZp+I+2ISzF6BynUAvAP4\nA1JaxnQieWWpmbXZwFYJUw4qRFglYhwn9kIv/Qvo5f4csHvEfVk7+uukdvQsL2X2p2A7uuls8sqS\na7OZriVGSc2s8dmhEffVsqNPq3HdXsgMeBlwdMRzjWlXZtb4LEaWsnFSdwK/Z2httruBP6LYxBtR\nLj8rKdP2jGTuOwB5DK2HHBoSVkAZlusRG/N0Udh2QAu9w3k79Wf2K9iOXhytKpGhtkYqk1EEfTTu\n3TdaWYL6cVKgGlJXo/WoanOgMW3JSErqOpT+f1X08icLvC+i0Vo9YuzoWa4N/VmF2oLbH9GmaQWt\nKpEBzS6T0QwqNO7dN1pZSuKksvWkLmawC3py3QloRuWYQ9MRjKSkHkQC8SoanTVKjAv6Bii/2ABp\nbZ3YkaUx7cJoZSkmTgrgU8D5KHuLMR1BvTWpJSieY2qOZyd29GtRhP10lEgza0ffB3gILSYn171z\nyJOMaX9GI0sx67tJirFTw7FTjJmOIDZOai6qT5N44w0Q5+RwORo9bkTtstffR8rpTuTxNAP4JbBN\nXPeNaSvyylKzU4z1Z/YreH3XFEMfTYqTujBsiaCMI36UFmOmuD6zfyOacRnTieSVpbFMMWZMUVRo\nUj2pWShKPinrfjd6+WOISeeS5TDg0shnG9NuzCKfLDU7xZgxbUOMkuoDzkSLvwDrAIcQtwDciF18\nR+BjwHuHOd+f2a9gE4Uphj7yJ5jtI58sxdSTMqYjiVFSPwJ2QTn4QKPAc0m98UYi1g39nSj4cAbw\n3DDP6o9oz5ixpkL+BLOjkaV6cVIHokKH44CFyMxuTNsTo6QmkAoVwD8j7wOZKd4J3IcEa3mUADPL\njsjENx6tV50Y+ezupLsDadudvLIUEyd1P/A+7IBkOowYAbkF+DVwNhqlHYiUTwxZc1/W2yhrpjgM\neQC+gnKZ7Y8cLkwtujuQtt3JK0t2QDJdS4ySOhL4JKmb7LXAzyOfvxWKqJ8Rjo9FwvW9zDUHhZ9f\nR3FSnkmZTiWvLNkByXQtMUrqFWRquALNjO4GXot8fqPCZUwnk1eWmumABHZCMuWgjybFSe0O/ALZ\nvEGurkcQN1JrZtR7f2a/ggXLFEMf+b378spSMx2QwE5IphxUaFKc1I/Q6CzxFtoACVWMkmo0yexI\n9Oe8z5hmUmF03n15ZCkmTmodFCh8EPbsMx1EjJJ6kcEv/f3hsxhihCvBWZtNp5NXlmLipL4GrEya\nu+917IBkOoBY775LgfPC8X5I+ewdji8c4d4PoEqi/0TCeCJDhevXKKBxPMpn9mlgY+REUSb6sInR\njI7RyFK9OKn/ByxG6ZAWk8pYGemjeFkqQx+gPP0oLTGVeScBTwHvD9vT4bMPhW04ktiOHVB81COo\nuCFIuE4DdgPWAiYC26FEs+tQPgUF+dchjEkYrSzNQAO4A4C3V12zG/AWZLk4nHRGVUb6iu4A5egD\nlKcfpSVmJjUz57NjYjv2QGliQLEdU4HVgSdztmlMmZmZ8z7LkulaYpTU+qiYWm/m+gEkFCMR435e\n65rpWLBMZ2JZMqZBYpTURWjd6BJSW3iMa3ms+3m1w0St++Y08LyxpBFPrrGjv6Wt1f67l6EP0Mp+\nDNeHOQ08w7KUUgZZKkMfoAz96G9paw3JUmww7yk5OhLjfl59zfTwWTXvztG+MWXDsmTMGHAw0rPb\nomzNyVaPCSixbC/y8JtN7cXeJEZkG+CGUffWmPJiWTKmQWJmUpsg4dqR1ERBOB6JmNiOS5Fw3Yvc\nzw+N7bgxbYhlyZgx4D40ejPGjA7LkjENEhMnNRdFsnczM1Ay0HuALxbUh98gT625BbUPWvO4CrgD\nuJ00m3crmYRcrGejuLrvFtCHvHS7LFmOUixLTeRqlKzycuSVdAkquNYt9CATSi8KOq61HtAKdgA2\no1jhWoN04X0KKuBXxN9icvg5Aa29bF9AH/LQzbJkORqMZSmSmDWp4t0jiyUmkLIVXIsEvEieCBso\nK8hdKGNIq/8Wi8PPZdCX34IWt5+XbpYly9FgLEuRxCipylh3ouS4JlZtetGI9MYC2h4P/ANlET8V\nmSragUrRHSgQy9Hw9GJZGpaRlNQihg+6GgBWbH53SkkZAh/LxhTgfJQMuIg8i0uRqWQl5PHWR7kV\ngGXJcjQclqU6jKSkprSsF+WmmTWxOoGJwAXA2aQJg4viBeB/gC0pmWBVYVmyHNXCsmSaQkwgZavo\npdgF33HAWcBJBfZhVZQ8FWA54Bpg5+K6YyKxHA3GsmSayq7I++Ze4LiC+nAOKhz5KrLtFxGsuT0y\nD8wGbg3bjBb34R3Ihj4buA34QovbN/mxHKVYlowxxhhjjDHGGGOMMcYYY4wxxhhjjDHGGGOMMcYY\n0628QRpPcSuwTo5n7ElxAZTGlAXLkjFjwMImPGMWsE+D98QkITamnbAsGTMG1BKsLVCerZuBP6Ia\nNQAfB25CkeTno5Qn2wHPAvejKPP1w71bhHtWBR4I+zNRnaMrUWG2yahY3I3h3j2a9DsZUwSWJWPG\ngCWk5okL0KjsOmCVcH5/4PSw/6bMfccDR4X9M4C9M+euAjYP+9WC9TBpXq/vAAeG/akotU1SLM2Y\ndsOy1EZ4+tk+vIxqziRsCmw
CXBGOe1BOMlBOrm+h9PtT0MgwYVxke38Gng/7uwAfAj4fjpdFWazn\nxXffmNJgWWojrKTal3HAHcj0UM0sZEaYCxyCasQkZOv6LEFFzwAmVT3jparjvYF78nXVmFJjWSox\n4+tfYkrKPGA1YJtwPBHYOOxPQaWpJwIHkQrTQgYX2JuP6scA7DtCW38Cjs4cbzbchca0IZalEmMl\n1T5UVzZ9DQnDCaTp/rcN576KFmb/CtyVuedclI7/FmA94IfAkWgBd5VMGwNV7R2PhPQ24HbgG834\nhYwpCMuSMcYYY4wxxhhjjDHGGGOMMcYYY4wxxhhjjDHGGGOMMcYYY4wxxhhjjDHGGGOMMZ1NP/Db\nojthTImZD+wU9r8E/Kq4rhTCqcBXiu6EKRfzgcUoK/ITSImsONINo+DrtEZJ9QFL0e+UbP/VgnYT\nekP7ToDcWfwrSta6CHgSuAElZW0mD5AqqWbRS/33sR94nVRe7mRwUcSxYCZw7Ri3UTj+Ehg9A8AH\ngRWAd6EiaZ0wknkU/U7JtmeOZ4z2/YotKmfKz78DJ6NM46uH7RPAe4FlhrmnbN9PI72PA8A5pPLy\nGeBsVALEjIKyvQTtzpPA5ajKZ8KxwL3Ai6iw2l6ZczNRCYAfAAuA+4EZmfPrAVeHey9HZamz7BGe\n+RwqX/22zLn5qPrnbWhkdzr6YrgMeAFVC51K47wdqIQ2b0dVRhNmIXPDpWi03AeshUp0PxV+v09l\nrt8KuDn05wlU7gDgmvDz+dD3rXP005SHlVBJiiOBC0mLAM5GNZpeC8ezGPr+7I5KZ7wAPISsCVkO\nBh4EnkHmvSz9DLY8bIPKxD8X2n5/5lwF+CaSxxdR3aeknHzM+ziOwUrs8nDtBpnPPo6KHT6LLBNr\nZs5tB/w9tHETaakQ0PfEfaFf9wMfQbL+i3DdQvT9AfobHh/2+4BHgM+h76bHwrMSVgEuQX/bm1AF\n4o6fmXUjDwA7h/3pSCl8LXN+X2CNsP9hJHyrh+OZSEAPQy/4J9AMJuF69MU9EdgBvaRnhXMbhWft\njMpdfwEJQFJt+QEkkKshRfEkqnXzLlSy+sqqfmbpAx6u8flEpHCPDe3sGPq0UTg/CwlZImDLoXo7\nXwnXr4eEbZfM73dg2J9MKvzrYnNfJzEDmcLq/T9nMfj9WRYpkmTQ9w40mElm9RujL+jt0WzsxNBO\nYu7LmsenIUWWDAI/EI4TRVRB8vMWVFn3KuC74VzM+9ifaWscsq4sIDX97wQ8Dbw79PUUNAAFeBNS\nnAeGNv413LsysDxSIhuGa1cnLch4CEOVyhlI2YLk+PXQtx5gVzRAWCmcPxf4Xfh9344GAddgOo75\nSFBeRC/yHxj5Zb4VzYBASipbRnpyeMabgXXQC7Zc5vx/kCqpr6KXLGEcGjW9Lxw/AByQOX8+8LPM\n8VGhr7XoA95AgpNs+yJF+XjVtb8jHd3OClvC1miUm+U44Ddh/2okQNUzxF6spDqJgxj63iQzmsVI\nycDQ96cWJwM/CvtfQ+9fwmTgVVIl1U+qOL5IKjsJfwQ+GvavYvBM7EhkdYD4NalX0e+0CJWT/3zm\n/OnA9zLHy6MB6rpoNnhD1fOuQ0pocnjm3gz+LoDaa1JnMHgmtbiq308iC0ZPaH/DzLnjazyvcPwl\nMHoG0MhuRfRS7ERaRhokBLeSftlvSjp6A40MExaHn1PQ7Oc54OXM+ewX/lpo5JPtx8NoxJjwZGb/\n5arjV0I7w/EYGskl2/mhzeoZ1oPh86QPj2TOrZv5PZLtOKSEQTPIjVDF05uQacd0Hs+igUj2+2Y7\n9F49m/k8eYezbI0UyFNolnUEqfysxeD3bXF4Xi3WBfZj8Lv4XlIrBwyWxZcZWT5q8Xv0O01BZr5D\ngMPDuTUZLL8vhb5OC+eysgypXC0G9kdWlseA/wbe2kCfnkUKNmFx6N9qyLqR/Xtn/5alIVZJ9aLp\nMUizj5X3WrtzDfATtDgMEoxfAp9EU/qV0TpOjEPA4+H6yZnP1s3sP1p1PA5Ym8HmwmpG64jwWGgj\n+5x1q9rMlsp+CM3osspuRWQKAZkOP4IE5gSkCJdjaHnvTqKX7pOl69EsY696F9bgd8BFyJQ+Fa3D\nJO9f8j4mTGbwADDLQ2hWlX0XVwC+H9GHmPdxgMFy8SCaqSVrto+h/33C8qGvj4RzWVmGwXJ1OTKR\nrwHcTepWP1y/Yvr7NJrtZf9+aw9zbaHEKKnDgf8ETgvH0xneTGRkjtgKjQCXRy/MM+hvfSiaScXw\nIHIq+AZaC9qe9Msd9D/ZHc3cJiLvqVeQmWCsuAGNxI4JbfaFPiVmx2oleBMyhR6DlE8P+v2TmeZB\npN5PL6C/1VIkQEsZvOjcCXSrLD2P3uOfA/sg5TAerc8sn7mu1iBqCpr1vIbk6iOZcxeg9y/xEPwm\nw3+nnY0Uxi7oPZyE3t+s5WG4QVzM+1h973Tg/6BBKcjz71DSNeHvIHl6CJkVN0Lm+Qlo5vQ2NGt6\nM7LULI/M/y8hUzzIMjIdyWK2HzGD0TeQE0s/ks23IbNj6QaIMUrqk+gL8sVw/E9Sc40ZyjPAmcgG\nfidazL0emRI2Rd5DCQMMfSmyxx9Bym4Bsr+fmTk3D33J/wQJ0e5ICJeM0LeBqv2RXsha514Pbewa\n2vwperH/Ocwzl6IvkXcjr6Sn0cwymT0kQrwQOAktGL+KFOG3gb+hL6itRuhnO9HNsvQD5GV2DJKF\nJ9Cs6BgkH1D7nfw3pHxeROuwv8+cuwP9TX+HZiMLGGy+yj7vEfRl/yVkOnwIDezGVV1f696Y93EA\nKZckTuomJOvfCOevDP2/IPR1PfS+g0xyHwz9eQatZSWOF+OBz6JZ1bNoXfjIzDPvQH/Lp2r0u/p3\nquYo5ETxBPpuOYfU07KtuCn8vDX8nIA82GKYgaan96Av7Wo+H557KzAXfcHmcYs2ph0YS1kCzQxu\nRYq/kquHpps5ATletB0/AL6MRu7/gswT3464rwetOfSi6ehs5OY4HB8ErhhNR40pOWMpS1PRqHp6\nOK72mDSmmrcC70Szya2QpWOPEe8oKT3Iln5+2D5OnM1zW7RwmHBs2Ibjd8jby5hOZSxlKTGLGRPL\nlmhm/hIyxw83Qy89yyPhSuhhsMfZcOzL4OSOyfpJLSYje6tNfaaTGUtZOgmtEV6FHG4Ozt9NY8pD\njOPEXxgcRDaZOLNcI14iH0KLjM8Pc3426YKgN29l2mYTz1jK0kRgc2A35JDyVQYHaiZYlryVdasp\nSxNqfVjFsiiCOmEhcaO/R4HN0GJvYlOv1LiuD7nkLgzn+2pc8y6KTzbaH7ai6af4fpShDwADLemF\n2hju/Rto4EmjkaXqeJbqwMuHkWfYy2G7BsnNPVXXWZbK0wcoRz/K0AcYRpZilNRLwBYoBxvIjvny\n8Jf/L0meuB3D/rPIHJFlKkooOYDiBJbHmM4lryzdjBa470Nu/cuT5otMeBylm9oTKaE1
SNMHGdO2\nxCipzwDnkebeWhPFA9RjC2AOylnVg/K0vYPU++g0FAc0D7nNJiNAYzqVvLKUHWFmZ0FHhJ+nobif\neSiodSkKFr1zNJ01pgzEKKm/I3fXtyJhmYeCOusxDSmfj4fjg1BgajbJ4obIbLEJGi3+mPJWnq0U\n3YFApegOUI4+tCN5ZWkrFE+VZPA+Fs2Yvld13f0MLp1SVipFd4By9AHK0I9xfIGBISVQxobxLGRp\nY6nAYpQUyCyxXrh+8/BZdUbhamJs9cli787INn89ShVSbUeHwTbTCq3/57a6veGoFN0BytGHouij\n9rppLHlkaRpDE4FW1zQaQElb56A1rM9T3plUpegOUI4+QBn6McDklq1I9bNCo7fEKKmzgfWR58Ub\nmc/rCVYzF3uhHAt7xlQY/MXSyAg0ryzFDPj+gWRsMUpbdRFpna9q+jP7FcrwRWm6kT4iBnwxSmoL\nVGSrES8m8GKvMdXklaWYAd/CzP5lKJnrm0grtmbpb7B9Y8aCChEDvpg4qdsZXOY4lpEWe5MF3+xi\nbw9e7DWdTV5Zuhmt3/YiWdkfuLjqmtVJ5WyrsF9LQRnTVsTMpFZDiuMmlKEapIDq5XjqtMVeY0ZL\nXllaghJ/zkPK50pUKDLr3bcvyo49EZn5jmtmx40pihgl1Z/z2Z222GvMaOnPeV8PKhX+ViQniZfg\naZlrfoZKX/wZrenWWtc1pu2IUVKVnM/2Yq/pRPrI791XyXnfVihjy/xwfC6yStxVdd2nUOLa9+Rs\nx7Sa8bzI0sY93nK21bD7dxmIUVLbAqegkduyaFS3iPplr73YazqRCvm9+/LKUoxVYhpSXDshJdWo\nc4YpgqWsUGb37zIQo6R+iipInodiPD6KzA71iPHuWx1VlNwSxUgtwIu9pnPJK0sxCudktO47QP0S\n4v2Z/Qq2Sphi6KNJLugg+3YPiu04A8V5jFQbCuJSuSSLveui8tB2PzedTh5ZirFKbIHMgKCCh7ui\nbBbVXoBgq4QpBxUirBKxCWaXRc4N3weeIC6Lcox338+QN9JryEThxV7TyeSVpRirxGdR0cOlwApI\ntmopKGPaipg4qYPDdUchB4fpwD4R99Wyo0+rcc2eKBM62I5uOpu8shQTc3gFytayGarNduRoO2tM\nGYiZSe2fR0TXAAAWaUlEQVSFEr++TGom+HT4bCRsRzedSB/5vfvyylKMVeKlzP4JqFKvMW1PjJKa\nyVAhOrTGZ9XEFD3cgTSQcULYtx3dlJkK+b37ZpJPlmK8+0BK8Lsoq8UuDfTLmNIykpI6ANV7Wg+4\nJPP5CqiAYT1iih5uQDoCvAgJnu3optMYrSzFmsEvCtsOqORNjOegMaVmJCV1HUoAuyrwQ1JT3IvI\n9FCPmKKHWRPFhPBsYzqN0cpSjHdflmuRPK1CbSXYn9mvYNO5KYY+RumC/iASjleRgmmUmKKHYBOF\n6XxGK0vZBLOPoQSzB1RdswHKgzlAWqdquFlaf44+GNNsKjTBBX0JiueYCjzfYAeabaLoz+xX8OjP\nFEMf+RwnRiNLMQlmv4UGfKCQjk/k6KMxpSM2TmouSlyZmOcGgKPr3DeWJgpjiqJCfseJvLIUk2D2\nFKSYXkBegP3AOQ30zZhSEqOkLgxbMjMaR9wsKTFRHILKBqwH/KTqmg2AbYBjgOWAtZCZMGYx2Zh2\nI68sxSSYvT6zfyPp2q8xbU2MkpqFouST7OR3IzfxeixBI8Q/IDv6N4H90MjzCTQK3AeZLBYBTyLv\nv18ixWVMpzGLfLIU64KecBhwaY7+GVM6YpRUH3AmWvwFWAfNjmIWgBcAfyENQnwj/EzMFN8PW8LK\naFZlTCfSRz5ZaiQTy47Ax4D3jnBNf2a/gtd3TTH00aQEsz9CXnfzwvFGyNyw+bB3pHgEaExKXlmK\nXd99J/ArNCh8boTn9Uf01ZixpkKTEsxOIBUqgH9G3gfNHQH2Z/YrePRXHN1dqK2P/GmR8spSjAv6\nOmi96yC0fmVMRxAjILcAvwbORgu9ByKhiaGZI8D+yDbNWNPdhdoq5PfuyytLMS7oJ6KSN0ng8KPI\n4cKYtiZGSR0JfJLUTfZaVEE3hpgSAzsiE9945LF0YuSzjWk38spSjAv6v6HEsnuhgZ7lyHQEMUrq\nFeR1dwUy392NggVjiCl8eBiKxH8F+BwyZXgEaDqRvLIU44L+dNh2b1JfjSkFMUpqd+AXKOUKwPpI\nycQ4OMSUGDgo/Pw6ckX3CNB0KnllqVEHJGM6hljvvh1JF2M3QEIVo6QsXMak5JWlZhcD7c/sV7AT\nkimGPprkgv4ig72F7ic+W3kzhas/s1+hWwWruz3rykAf+b378spSoynG6tE/inuNaRYVmuSCfgsa\n6Z0XjvdDDhF7h+MLR7h3beQq24e8msYzVLhOAXZFtXX+Y4Rn9Uf0dSzpowyKsbs968pAhdF59+WR\npRgHJJAsHYjWeP+CqhCUkT6Kl6Uy9MFEMD7imknAU8D7w/Z0+OxDYRuOHuAolIdvV1T07aMMLmq4\nG/AWFANyCbBvY91vKX1Fd8C0PXllaSQHpMQJ6UBUFmcCMAW4IfwsI31Fd4By9MFEEFs+Pg+JR9KP\ngf9B2c1vZnB8x2aoTMfDwIpIqB4B3oacKIzpJGbmvC/GAWmH8Pzfh+O70YzLcmTamhgltT7wKRTt\nnlw/AOxR577EaeKysCVFDyGN77gEOIvU3n4F8EUsWKYzGa0sJdRyQKp1zXSUuNkM5TgaM9Xmx2u7\noyJGSV2E1pMuQfZwiHOIiHWaGFd1XOu+OQ08byxpzUtdj/6Wtlb7716GPkAr+zFcH+Y08AzLUko5\nZKkVyNGpDO8wlFuea8pSbDDvKTk6EuORVH3N9PBZNe/O0b4xZcOyZMwYcDDSs9uibM3JVo8JyBup\nF1gGmI1SuWTZjTRGZBu02GtMp2JZMqZBYmZSmyDh2pHUREE4HoklyLvvT8jT73SGJsW8FAnXvaic\n9qGxHTemDbEsGTMG3IdGb8aY0WFZMqZBYuKk5qKKud3MDOTSew/yPiyC3yBPrbkFtQ9a87gKuAO4\nnTSbdyuZBNyITF53At8toA956XZZshylWJaayNUo9f/lyCvpEgYH5HY6PciE0gtMpPZ6QCvYAcWV\nFSlca5AuvE9B9Y2K+FtMDj8noLWX7QvoQx66WZYsR4OxLEUSsybVPa6itYkpk9AKrkUCXiRPhA0U\ny3YXsBat/1ssDj+XQV9+C1rcfl66WZYsR4OxLEUSo6QqY92JkuNM7rXpRSPSGwtoezzwD5RF/FRk\nqmgHKkV3oEAsR8PTi2VpWEZSUosYPuhqALomgroMgY9lYwpwPvBpiskOshSZSlZCHm99lFsBWJYs\nR8NhWarDSEqqrMkpW02zyyS0OxOBC4CzUQaFInkB5YXckpIJVhWWJctRLSxLpinEBFK2il6KXfAd\nh3ItnlRgH1YFpob95YBrqF22wpQLy9FgLEu
mqeyKvG/uRYkpi+Ac4DFUK+hhignW3B6ZB2ajWkW3\nkmbmbhXvQDb02Sgz+Bda3L7Jj+UoxbJkjDHGGGOMMcYYY4wxxhhjjDHGGGOMMcYYY4wxxphu5Q3S\neIpbgXVyPGNPigugNKYsWJaMGQMWNuEZs4B9GrwnJgmxMe2EZcmYMaCWYG2B8mzdDPwR1agB+Dhw\nE4okPx+lPNkOeBa4H0WZrx/u3SLcsyrwQNifieocXYkKs01GxeJuDPfu0aTfyZgisCwZMwYsITVP\nXIBGZdcBq4Tz+wOnh/03Ze47Hjgq7J8B7J05dxWwedivFqyHSfN6fQc4MOxPRaltkmJpxrQblqU2\nwtPP9uFlVHMmYVNgE+CKcNyDcpKBcnJ9C6Xfn4JGhgnjItv7M/B82N8F+BDw+XC8LMpiPS+++8aU\nBstSG2El1b6MA+5ApodqZiEzwlzgEFQjJiFb12cJKnoGMKnqGS9VHe8N3JOvq8aUGstSiRlf/xJT\nUuYBqwHbhOOJwMZhfwoqTT0ROIhUmBYyuMDefFQ/BmDfEdr6E3B05niz4S40pg2xLJUYK6n2obqy\n6WtIGE4gTfe/bTj3VbQw+1fgrsw956J0/LcA6wE/BI5EC7irZNoYqGrveCSktwG3A99oxi9kTEFY\nlowxxhhjjDHGGGOMMcYYY4wxxhhjjDHGGGOMMcYYY4wxxhhjjDHGGGOMMcYYY4zpDi4FDo64biHQ\nO7ZdMcYY047MBxYDLwLPAX8DjiC+Bk1ZWYSU30JgKfodk+MDCuyXMcaYBngA2Cnsr4AKnd2PSkd3\nCtnfsRrXKjPGmBJT6wv8PcAbqAooqCrnD4EHUc2aUxlcMG1PVDrgBeBeVNUToAIcFvbfAlyNKn8+\njUoIJCwF1g/7KwFnAU+hWd6XSWd1M1Epgh8AC5AyndHg79gHPAIcAzwOnBmef2zo+zPA74GVM/dv\ng8p2Pxd+z/dHtGmMMaYJDDfLeBCZ/QBOAi4CpqLiahcD3wnntkKKZ+dwvBbw1rB/FfCxsH8OcFzY\nX4bB1UWzSuos4A/A8sC6qNBb8oyZqKbOYUixfAJ4tMHfsQ94HfguqpczCfg0UkJrhc9+AfwuXD8N\nKa5EGX4gHK8a0a4xxphRMpySuh4plXFofWf9zLlt0SwG4DTgxGGenVVSZ4Zrp9W4LlFSPcCrwNsy\n5w4PzwEpqWw568nh3jcP035CtZJ6FSnKhDsZ/DdYEynDHuCLSHFm+SPw0TptGmO6CFfmbT3TkUlt\nVaQMbkHmrueAy0hnEtOB+yKedwxSeDehSp+H1rhmVTSTeTDz2UMMVmxPZPYXh59TItrP8jRSQgm9\naPaW/H53AkuA1dFsbr/MueeA9wJrNNimMaaD8eJ2a3kPMn39FXgWeBnYGK3hVPMwWm+qx5NoVgT6\nkr8CrVHdn7nmGWSK6yUtgb0OWkNqJtVluR9CSvP6Gtc+BPyWtO/GGDOE2JlUL1ozAI3+VxyT3nQe\niWPCisAH0frRb4E7kDntV8DJwGrhummkzhGnoy/4ndD/aRrpmlSW/dCsC7SGNRCeneUN4Dzg22h2\ntC7wWeDs3L9ZHL9Aa2zrhOPVgD3C/tnI43EXZP6bhEyGtcyWxhgzLIcDfyc1PW0EXFlcd9qGB0jj\npJ5HcVJHMjhOalmkOO5DHnx3Akdlzu8FzAnPuAf4l/B5dk3qBDQjWoi86P5f5v43SNe8piIF+RSa\nxXwl05dDgGuq+p+9d6TfMbsm9VDV+XFIGd4dfod7gW9lzm+FPBWfDf26BFi7TpvGGDOIOejL9NbM\nZ3Mj7/0NMkeNdP0p6At4DrBZng4aY4zpTGLMfa+GLWECQ9cehuMMRo632Q2tu2yIZmynRj7XGGNM\nFxCjpK5GgZ+TkbnpP5FZJoZrkdfWcOyBXKgBbkQmqdUjn22MMabDiVFSxyLX4rkoCPVStJ7RDKYh\nL7aER0idAIwxxnQ5MS7ok5Cn2S/DcQ+wHGkszWipTrhay5Q4G3hXk9ozppnMAd5ddCeM6VRilNRf\nUGqeReF4MvAnBqffycujDPbmmk7tdDzvovjs4f1hK5p+iu9HGfoAMNCSXqiN4d6/2PVZY0wOYsx9\ny5IqKJCr8+QmtX8xaRqcbZCr9pNNerYxxpg2J2Ym9RKwBUrfA7AlypQQwzkos/WqaO3p6yg9Dyjf\n3KXIw+/e0E6tlD7GGGO6lBgl9RmUrSBJ3bMmsH/k889EsU8vAb9maC2lVZEL+qLQl3cC/4h8dqup\nFN2BQKXoDlCOPhhjuoDYdZ5lUEqeAVTi4fWIe3rCtR9A60x/R5Vb78pc04/MicchhTUPuaAvqXrW\nQAN9Nd1DWdak/G4aM0bEJpjdElgvXL95+Ky6zEI1WyEz3vxwfC4q4pdVUo+j2RMov92zDFVQxhhj\nupQYJXU2yuE2G+VzS6inpGrFQG1ddc2vkPfgY6jE+ocj+mOMMaZLiFFSW6ByEo262sZc/yWk/PqA\nDYA/I3fzhTWu7c/sV/C6iCmGvrAZY1pAjJK6HTlLPNbgs6tjoNZmaP2i7VAWcFAm8AfQ2tfNNZ7X\n32D7xowFFQYPkL5eTDeM6Q5ilNRqqITETaSJZgdI6wINx80ocWwvUnD7I8eJLHcjx4q/IYeJtzK4\nWJ8xxpguJkZJ9ed89hJUG+lPyNPvdOQ0cUQ4fxoqiHcGSi0zHpVCX5CzPWOMMR1GjJKqjOL5A5kt\nqRZ7Wub8M8CJwElIkR0O/G4U7RljjOkgYtIibYtinBah+KilqMpqPXqAn6J6UhsjU9/bq66ZCvwM\nlRHfFNg3qtfGGGO6ghgl9VPgI6h67iTgMODnEfdl46ReJ42TyvIR4AJSh4pnIp5rjDGmS4hRUiAF\n1YPipOpV202oFSc1reqaDYE3AVchR4uDI/tjjDGmC4hNMLsscm74PvAEcWlgYuKkJqIMFjujzOrX\nAzcgpVhNf2a/guOkTDH04TgpY1pGjJI6GM24jgI+i2o+7RNxX0yc1MPIxPdy2K5Bwbz1lJQxRVHB\ncVLGtIwYc99eSIG8gBTF54DdI+7Lxkktg+KkLq665r+A7ZEpcTJKm3RnxLONMcZ0ATFKamaNz2Lq\nPmXjpO4Efk8aJ5XESt0N/BG4DbgR5fKzkjLGGAOMbO47AHnfrQdckvl8BZStPIZ6cVIAPwSuRutR\n1eZAY4wxXcxISuo6VEpjVaRIEmeJF9HMpx5JnFS2ntTFDC7VkVx3AppRuS6PMcaY/2UkJfUgUi6v\noplOo8TUkwL4FHA+8J4cbRhjjOlg6q1JLUGxUVNzPDsmTmoaUlynhuNGy4EYY4zpYGLjpOaiWk8v\nhc8GgKPr3BejcE4GjiUtwT2Sua8/s1/BcVKmGPpwnJQxLSNGSV0YtkTpjCNOAcXESW2BzICgta9d\nUQqlald1cJyUKQcVHCdlTMuIUVKzUMaJjcLx3UiR1COmntT6mf0zkBdhLQVljDGmC4lRUn3AmciR\nAm
Ad4BDqO1PE1JMyxhhjhiVGSf0I2AWYF443Qia6zSPuvQyZBk8GPoZipU6ouuZAVOxwXHj2vcS5\nuBtjjOlwYpTUBFIFBfDPyPsgLlbqfuB9KO3SDOCXwDaRzzfGGNPBxCibW4BfA2ej2c6BaL0phphY\nqesz+zeiBLbGGGNMVO6+I5FSORoF3t4RPoshJlYqy2HApZHPNsYY0+HEzKReQSa7K9D60t3Aa5HP\nbyQ4d0e0bvXeYc73Z/YrOE7KFEMfjpMypmXEKKndgV+gtSOQ2/gRxM14YmKlAN6JMqDPAJ4b5ln9\nEe0ZM9ZUcJyUMS0j1rtvR7S2BLABUlAxSiomVmodFCx8UKYNY4wxJkpJvchg5XF/+CyGmFiprwEr\nk+bvex05XBhjjOlyYr37LgXOC8f7oRnS3uH4whHunQGchBw0fkUaI5UN5F0MLAjXzARujehTEfRR\njnWwPorvRxn6YIzpAmK8+yYBTwHvD9vT4bMPhW04khipGcDGyMz39qprdgPegkyCh5POpspIX9Ed\nCPQV3QHK0QdjTBcQM5OamfPZMTFSe6CUS6AYqanA6sCTOds0xhjTQcQoqfVRfFRv5voBpGBGolaM\n1NYR10zHSsoYYwxxSuoilHHiEpR7D+Lin2JjpKprSNW6b04DzxtLyuJuXIZ+lKEPrQxMGO79m9Oy\nHhjThcQG856S49kxMVLV10wPn1Xz7hztG2OM6QIORuPVbVHm82SrxwTgPmQmXAaYTW3HiSTeahvg\nhlH31hhjTMcQM5PaBCmqHUnNfYTjkYiJkboUKap7UWn6Q2M7bowxxoBmQ8sU3QljjDHdR0yc1FyU\nEaKbmYES694DfLGgPvwGeT3OLah90PrhVSgT/u0oM36rmYTCFWYDdwLfLaAPxpgScTVK+no58vC7\nBBUu7BZ6kDmyF5hI7bW1VrADsBnFKqk1SJ1YpqBimEX8LSaHnxPQOub2BfTBGNMCYtakyuFqXBwx\nQcmt4FqkKIvkibABLEJ/g7Vo/d9icfi5DBpELGhx+8aYFhGjpCpj3YmSExOU3I30opndjQW0PR74\nB8rIfyoy+xljOpCRlNQihg9gHABWbH53SkkZgojLxhTgfODT6D1pNUuR2XEl5D3ahwdTxnQkIymp\nKS3rRbmJLdzYLUwELgDORtlIiuQF4H+ALbGSMsZ0KTFBya2il2IdJ8YBZ6HyK0WxKkpEDLAccA2w\nc3HdMcaY4tkVebLdCxxXUB/OQdWNX0VrZEUEPm+PTG2zUd2vW5F7fit5B1qPmg3cBnyhxe0bY4wx\nxhhjjDHGGGOMMcYYY4wxxhhjjDHGGGOMMcYYY/LzBmls0q3AOjmesSfFBSIbY4zpYBY24RmzgH0a\nvCcmCbExxpgup5aS2gLlrLsZ+COq9wTwceAmlJXhfJQ+aDvgWeB+lLFh/XDvFuGeVYEHwv5MVDPs\nSlTkcDIqunhjuHePJv1OxhhjOoQlpKa+C9AM5zpglXB+f+D0sP+mzH3HA0eF/TOAvTPnrgI2D/vV\nSuph0hx53wEODPtTUYqopPCgMcaMGTbltA8vo/pNCZsCmwBXhOMelNsPlN/uW6iUxRQ0y0oYF9ne\nn4Hnw/4uwIeAz4fjZVE2+Hnx3TfGmMaxkmpfxgF3IDNeNbOQSW4ucAiqt5SQrY+1BBUQBJhU9YyX\nqo73Bu7J11VjjMnH+PqXmJIyD1gN2CYcTwQ2DvtTUJn3icBBpIppIYOLVc5HtZgA9h2hrT8BR2eO\nNxvuQmOMaSZWUu1DdYXg15BiOYG0dMa24dxXkZPDX4G7Mveci0pb3AKsB/wQOBI5Q6ySaWOgqr3j\nkcK7Dbgd+EYzfiFjjDHGGGOMMcYYY4wxxhhjjDHGGGOMMcYYY4wxxhhjjDHGmE7n/wO9pSvO7iJc\nQgAAAABJRU5ErkJggg==\n", "text/plain": ""}, "metadata": {}}], "metadata": {"collapsed": false, "trusted": false}}, {"source": "#### What just happened ?\nThe code has some level of abstraction. It is iterating fitting each estimators in the data with default parameters. Then the feature importance is plotted for visual examination. If we change the parameters for each estimator the result will vary. Try with a dataset with more attributes.\n\n### Combining Multiple Feature Selection (Feature Union)\nSklearn provides a handy utility to combine various feature selection techniques applied above. This is facilitated through the FeatureUnion API. 
Let's see how FeatureUnion is working.", "cell_type": "markdown", "metadata": {}}, {"execution_count": 19, "cell_type": "code", "source": "import pandas as pd\n\nfrom sklearn.pipeline import FeatureUnion\nfrom sklearn.decomposition import PCA\nfrom sklearn.feature_selection import SelectKBest\n\n\ndef feature_union(X,y):\n \"\"\"\n Apply feature union with PCA and SelectKBest\n :param X: pandas DataFrame with attributes\n :param y: pandas Series with target variable\n :returns new_feats: pandas DataFrame with new features\n \"\"\"\n \n pca = PCA(n_components=2)\n kbest = SelectKBest(k=1)\n f_union = FeatureUnion([(\"pca\", pca), (\"kbest\", kbest)])\n selected_feat = f_union.fit(X,y).transform(X)\n \n new_feats = pd.DataFrame(selected_feat)\n new_feats[\"target\"] = y\n \n return new_feats\n\niris = pd.read_csv(\"/resources/iris.csv\")\n\nY = iris[\"Class\"]\nX = iris.drop(\"Class\", 1)\nnew_f = feature_union(X,Y)\n\nnew_f.head()", "outputs": [{"execution_count": 19, "output_type": "execute_result", "data": {"text/plain": " 0 1 2 target\n0 -2.237799 -0.296785 5.6 1\n1 2.346082 -0.109259 1.6 0\n2 -2.877376 0.472073 6.0 1\n3 2.374624 0.205149 1.4 0\n4 -2.583883 0.029046 5.6 1\n\n[5 rows x 4 columns]", "text/html": "
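{"source": "The same union drops straight into a Pipeline, so the combined features feed a classifier directly. A minimal sketch; the step names \"features\" and \"svm\" are arbitrary labels chosen here, and the training accuracy is printed only to show that the plumbing works:", "cell_type": "markdown", "metadata": {}}, {"execution_count": null, "cell_type": "code", "source": "# Sketch: FeatureUnion as the first step of a Pipeline.\nimport pandas as pd\nfrom sklearn.pipeline import Pipeline, FeatureUnion\nfrom sklearn.decomposition import PCA\nfrom sklearn.feature_selection import SelectKBest\nfrom sklearn.svm import SVC\n\niris = pd.read_csv(\"/resources/iris.csv\")\nY = iris[\"Class\"]\nX = iris.drop(\"Class\", 1)\n\ncombined = FeatureUnion([(\"pca\", PCA(n_components=2)),\n                         (\"kbest\", SelectKBest(k=1))])\nclf = Pipeline([(\"features\", combined), (\"svm\", SVC(kernel=\"linear\"))])\nclf.fit(X, Y)\nprint clf.score(X, Y)", "outputs": [], "metadata": {"collapsed": false, "trusted": false}},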
"}, "metadata": {}}], "metadata": {"collapsed": false, "trusted": false}}, {"source": "#### What just happened ?\nHere in this example we have used Principal Component Analysis (PCA) and SelectKBest method together in a pipeline to select the features. The PCA selected 2 features and SelectKBest selected one feature; which is original one. Alltogether the pipe-line created a new dataset with three features. This method will return a new dataset with selected features not the scores.\n\nWe can use the Grid Search hyper parameter tuning with L1 based feature selection in combination with FeatureUnion for better results. There are examples available in the sklearn documentation for the same too.\n\nI will explain PCA in a separate note-book.\n\n### Closing Notes\nThe example given here is just for demonstration sake. You can use the code with different data set and check how it is affecting the classification/regaression/clustering accuracy. I will create a separate note on how the accuracy is being improved with these tricks.\n\nHappy hacking !!!!", "cell_type": "markdown", "metadata": {}}], "nbformat": 4, "metadata": {"kernelspec": {"display_name": "Python 2", "name": "python2", "language": "python"}, "language_info": {"mimetype": "text/x-python", "nbconvert_exporter": "python", "version": "2.7.6", "name": "python", "file_extension": ".py", "pygments_lexer": "ipython2", "codemirror_mode": {"version": 2, "name": "ipython"}}, "celltoolbar": "Raw Cell Format"}}