{ "metadata": { "name": "09_pipelining_estimators" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "code", "collapsed": false, "input": [ "%pylab inline\n", "import numpy as np\n", "import pylab as pl" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "heading", "level": 1, "metadata": {}, "source": [ "Pipelining estimators" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this section we study how different estimators maybe be chained" ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "A simple example: feature extraction and selection before an estimator" ] }, { "cell_type": "heading", "level": 3, "metadata": {}, "source": [ "Feature extraction: vectorizer" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For some types of data, for instance text data, a feature extraction step must be applied to convert it to numerical features." ] }, { "cell_type": "code", "collapsed": false, "input": [ "from sklearn import datasets, feature_selection\n", "from sklearn.feature_extraction.text import TfidfVectorizer\n", "news = datasets.fetch_20newsgroups()\n", "X, y = news.data, news.target\n", "vectorizer = TfidfVectorizer()\n", "vectorizer.fit(X)\n", "vector_X = vectorizer.transform(X)\n", "print vector_X.shape" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The feature selection object is a \"transformer\": it has a \"fit\" method and a \"transform\" method.\n", "\n", "Importantly, the \"fit\" method of the transformer is applied on the training set, but the transform method can be applied on any data, including the test set.\n", "\n", "We can see that the vectorized data has a very large number of features, as it list the words of the document. Many of these are not relevant for the classification problem." ] }, { "cell_type": "heading", "level": 3, "metadata": {}, "source": [ "Feature selection" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Supervised feature selection can select features that seem relevent for a learning task based on a simple test. It is often a computationally cheap way of reducing the dimensionality.\n", "\n", "Scikit-learn has a variety of feature selection strategy. The univariate feature selection strategies, (FDR, FPR, FWER, k-best, percentile) apply a simple function to compute a test statistic on each feature. The choice of this function (the score_func parameter) is important:\n", "\n", "- f_regression for regression problems\n", "- f_classif for classification problems\n", "- chi2 for classification problems with sparse non-negative data (typically text data)." ] }, { "cell_type": "code", "collapsed": false, "input": [ "from sklearn import feature_selection\n", "\n", "selector = feature_selection.SelectPercentile(percentile=5, score_func=feature_selection.chi2)\n", "X_red = selector.fit_transform(vector_X, y)\n", "print \"Original data shape %s, reduced data shape %s\" % (vector_X.shape, X_red.shape)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "heading", "level": 3, "metadata": {}, "source": [ "Pipelining" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A transformer and a predictor can be combined to form a predictor using the pipeline object.\n", "\n", "The constructor of the pipeline object takes a list of (name, estimator) pairs, that are applied on the data in the order of the list. 
{ "cell_type": "heading", "level": 3, "metadata": {}, "source": [ "Pipelining" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A transformer and a predictor can be combined into a single predictor using the Pipeline object.\n", "\n", "The constructor of the Pipeline object takes a list of (name, estimator) pairs that are applied to the data in the order of the list. The pipeline object exposes fit, transform, predict and score methods: each applies the transformers one after the other to the data (fitting them first in the case of fit), and then calls the corresponding method of the last estimator.\n", "\n", "Using a pipeline we can combine our feature extraction, selection and final SVC in one step. This is convenient, as it enables clean cross-validation." ] }, { "cell_type": "code", "collapsed": false, "input": [ "from sklearn.pipeline import Pipeline\n", "from sklearn.svm import LinearSVC\n", "from sklearn.cross_validation import cross_val_score\n", "\n", "svc = LinearSVC()\n", "pipeline = Pipeline([('vectorize', vectorizer), ('select', selector), ('svc', svc)])\n", "cross_val_score(pipeline, X, y, verbose=3)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "heading", "level": 3, "metadata": {}, "source": [ "Parameter selection" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The resulting pipelined predictor object implicitly has many parameters. How do we set them in a principled way?\n", "\n", "As a reminder, the GridSearchCV object can be used to select the parameters of an estimator. We just need to know the names of the parameters to set.\n", "\n", "The pipeline object exposes the parameters of the estimators it wraps with the following convention: first the name of the estimator, as given in the constructor list, then the name of the parameter, separated by a double underscore. For instance, to set the SVC's 'C' parameter:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "pipeline.set_params(svc__C=10)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can then use the grid search to choose the best C among 3 values.\n", "\n", "**Performance tip**: choosing parameters by cross-validation may imply running the transformers many times on the same data with the same parameters. One way to avoid part of this overhead is to use memoization. In particular, we can use the version of joblib that is embedded in scikit-learn:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "from sklearn.externals import joblib\n", "memory = joblib.Memory(cachedir='.')\n", "memory.clear()\n", "# Cache the univariate scoring function so that repeated runs on the\n", "# same data reuse previously computed scores.\n", "selector.score_func = memory.cache(selector.score_func)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can proceed to run the grid search:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "from sklearn.grid_search import GridSearchCV\n", "grid = GridSearchCV(estimator=pipeline, param_grid=dict(svc__C=[1e-2, 1, 1e2]))\n", "grid.fit(X, y)\n", "print grid.best_estimator_.named_steps['svc']" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Exercise" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On the 'Labeled Faces in the Wild' dataset (datasets.fetch_lfw_people), chain a randomized PCA with an SVC for prediction. A possible solution sketch is given after the empty cell below." ] }, { "cell_type": "code", "collapsed": false, "input": [], "language": "python", "metadata": {}, "outputs": [] },
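{ "cell_type": "markdown", "metadata": {}, "source": [ "One possible way to approach the exercise is sketched below. The dataset filtering and the number of PCA components are illustrative choices; in practice, the number of components and the SVC's C and gamma would themselves be chosen with GridSearchCV, as in the previous section." ] }, { "cell_type": "code", "collapsed": false, "input": [ "# A possible solution sketch -- the parameter values below are illustrative, not tuned.\n", "from sklearn.datasets import fetch_lfw_people\n", "from sklearn.decomposition import RandomizedPCA\n", "from sklearn.svm import SVC\n", "from sklearn.pipeline import Pipeline\n", "from sklearn.cross_validation import cross_val_score\n", "\n", "# Keep only people with enough images so that each class is reasonably populated.\n", "lfw = fetch_lfw_people(min_faces_per_person=70)\n", "X_faces, y_faces = lfw.data, lfw.target\n", "\n", "# Chain a randomized PCA (dimensionality reduction) with an SVC classifier.\n", "face_pipeline = Pipeline([('pca', RandomizedPCA(n_components=150)),\n", "                          ('svc', SVC())])\n", "print cross_val_score(face_pipeline, X_faces, y_faces)" ], "language": "python", "metadata": {}, "outputs": [] } ], "metadata": {} } ] }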