{ "metadata": { "name": "" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Feature Engineering\n", "\n", "- based on the book \"Feature Extraction Foundations and Applications\"\n", "- motivation: Learn the best that human experts are doing in feature engineering, and try to automate it\n", "- another promising direction to look at is Deep Learning and Unsupervised Feature Engineering" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction\n", "\n", "- It is usually the data type of a problem that determines the suitable technology (e.g., vector data, list data and etc.)\n", "\n", "- **Feature Construction** is one of the key steps in data analysis process, largely conditioning the success of any machine learning endeavor. \n", "\n", "- One should beware of NOT losing information at the feature construction stage. It may be a good idea to add the RAW FEATURES to the preprocessed data - AND use FEATURE SELECTION (or REGULARIZATION) in the machine stage, This is simplely because ADDING all those features comes at a price - it increases the dimensionality of the patterns and thereby immerses the relevant information into a sea of possibly irrelevant, noisy or redundant features - and thus increase the difficulty for machine learning models to search for the optimal solution (in a bigger search space).\n", "\n", "- data value type can be binary, categorical or continous\n", "\n", "- To understand the organization of the tutorial, there are four aspects of feature extraction:\n", "\n", " - **feature contruction**\n", " - **feature subset generation (or search strategy)**\n", " - **evaluation criterion definition (e.g. relevance index or predictive power)**\n", " - **evaluation criterion estimation (or assessment method)**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Preprocessing\n", "\n", "- **Preprocessing** of features can include - most of them are about **removing dependences**\n", " - Standardization (each feature independently) - **UNIFYING**\n", " - why: Features can have different scales although they refer to comparable objects. This could cause troubles specially for distance-based models such as kernel methods (tree models are more robust in this case)\n", " - how: centering and scaling x_new = (x-mean)/std INDIVIDUALL for each feature\n", " - Normalization (across different features) - **COMPARING**\n", " - why: Consider the case where $x$ is an image and $x_{i}$ are the number of pixels with color i, it makes sense to normalize $x$ by dividing it by the total number of counts in order to encode the distribution and **remove the dependence on the size of the image.** \n", " - how: $x'=x / ||x||$\n", " - Signal Enhacement (signal / image processing) - **DOMAIN**\n", " - why: the signal-to-noise ratio may be improved by applying signal or image-prcocessing filters\n", " - how: basedline/background removal, de-noising, smoothing, or shapening. The Fourier transform, wavelet transforms and morphological image analysis are popular methods. \n", " - Extraction of Local Features (sequential spatial or other *structured* data) - **DOMAIN**\n", " - why: The technique essentially encode problem-specific knowledge into the features.\n", " - how: For sequential, spatial or other structured data, specific techniques such as *convolutional* methods using hand-crafted kernels of syntactic and structurual methods are used.\n", " - Linear/Nonlinear space embedding methods (dimensionality reduction) - **REDUCTION**\n", " - why: when dimensionality is very high, projecting or embedding the data into a lower dimensional space while retaining as much information as possible. \n", " - PCA, MDS (multidimensional scaling). The coordinates of the data points in the lower dimension might be used as features or simply as a means of data visualization.\n", " - Nonlinear expansions (linear expansions are not useful in practice, as linear models will be able to catpure them) - **EXTENSION**\n", " - why: To increase the dimensionality when the problem is very complex and first order interactions (e.g., linear model) are not enough to derive good results. \n", " - how: this consists for instance in computing products of the original features $x_{i}$ to create monomials $x_{k_{1}}x_{k_{2}}...x_{k_{p}}$\n", " - Feature Discretization - **DISCRETIZATION & EXTENSION**\n", " - why: Some algorithms do NOT handel well continuous data. And sometime discretization (e.g., by tri-clustering) actually performs a nonlinear expansion.\n", " - how: To discretize continuous values into a finite discrete set." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Feature Selection\n", "\n", "- *filters*: methods that select features without optimizing the performance of a predictor.\n", "\n", "- *wrappers* utilize a learning machine as a blackbox to score subsets of features according to their predictive power\n", "\n", "- *embedded* methods perform feature selection in the process of training and are usually specific to given learning machines\n", "\n", "- *ensemble of wrappers/embedded methods* : wrappers and embedded methods may be yield very different feature subsets under small perturbatioins of the dataset. One way of minimizing the effect is to use *ensemble* methods\n", "\n", "### Basic Concepts\n", "\n", "Some basic techniques and discussions about how feature selection can be done and why it is this way\n", "\n", "- Individual Relevance Ranking\n", " - pros: (1) fast (2) accurate when a lot features but only small set of data, where a full subset search could be prohibitive or overfitting (3) it works well when a feature is invidually relevant to target BY ITSELF and other feature does not help providing better prediction\n", " - cons: its assumption could be too limited \n", " - __features that are not individually relevant (to target) may become relevant in the context of others__\n", " - __features that are individually relevant (to target) may not all be useful because of possible redundancies__\n", " - techniques:\n", " - `Pearson correlation coefficient` \n", " - essentially the absolute value of cosine between x and y after being centered (mean substracted)\n", " - can be used for REGRESSION and BINARY-CLASSIFICATION\n", " - for MULTI-CLASSIFICATION problem, one can use instead `Fisher-coefficient`\n", " - **Rotations in feature space (PCA)** often simplify feature selection (e.g. 45 degree boundary becomes vertical and needed features number become from 2 to 1). `PCA` can be used to perform such linear transformation\n", " - [discussion](http://www.gigawiz.com/correlations.html) on several different types of correlations \n", "\n", "- Multivariate Feature Ranking\n", " - pros: remedy to the cons of \"Individual Relevance Ranking\"\n", " - justifications:\n", " - *a helpful feature may be irrelevant by itself*. e.g., two spear-shaped clusters aligned with 45-degree line, and the two clusters are fully/almost overlapped on each projection. A practical example: feature 1 represents a measurement in an image that is randomly offset by a local background change; feature 2 measures such local offset, which by itself is not informative but will improve separability if subtracted from feature 1.\n", " - *two individually irrelevant features may become relevant when used together*, e.g., XOR problem\n", " - multivariate methods can take into account feature redundancy and yield more compact subsets of features.\n", " - noise reduction can be achieved with features having identical projected distributions\n", " - **Correlation does NOT imply Redundancy**\n", " - techniques: most multivariate methods rank subsets of features rather than individual features, still there exist multivariate relevance criteria to **rank individual features according to their relevance in the context of others**.\n", " - *relief algorithm*: The Relief algorithm uses an approach based on the K-nearest-neighbor algorithm. To evaluate the index, we first identify in the original feature space, for each example xi, the K closest ex- amples of the same class $\\{xHk(i)\\}$,k = 1...K (nearest hits) and the K closest examples of a different class {xMk(i)} (nearest misses.) Then, in projection on feature j, the sum of the distances between the examples and their nearest misses is compared to the sum of distances to their nearest hits. , we use the ratio of these two quantities to create an index independent of feature scale variations. The Relief method works for multi-class problems.\n", " - forward/backward greedy search - **Forward selection yields better choice if we end up selecting only several most significant features. Backward elimination procedure may yield better performance but at the expense of possibly larger feature sets. However if the feature set is reduced too much, the performance may degrade abruptly. SO if you use backward search, make sure to use enough features (e.g., by cross validation)**\n", " - Gram Schmidt orthogonalization (Partial Least Square) - forward selection\n", " - Random Forest Based - forward selection (selecting features in the process of building a classification or regression tree\n", " - recursive feature elimination SVM - backward elimination" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Assessment Methods for Feature Relevance" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## NIPS 2003 Feature Selection Challenge" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Summary\n", "\n", "- Eliminating meaningless features is note critical\n", "\n", "- A filter as simple as the Pearson correlation coefficient proves to be very effective\n", "\n", "- " ] } ], "metadata": {} } ] }