{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Fitting old pipelines to new data\n", "\n", "In this brief notebook, you'll see how to load a saved pipeline and fit it to raw data again. This is a very useful way to retrain a classifier to new data (which you might do after observing data drift) or to change the hyperparameters you used while training the classifier. \n", "\n", "Machine learning pipelines work in two ways: for training, they allow you to precisely specify a sequence of steps (data cleaning, feature extraction, dimensionality reduction, model training, etc.) that start with raw data and result in a model, trying this sequence with different hyperparameters. For production, they allow you to reuse the exact sequence of steps that were used in training a model to make predictions from new raw data. \n", "\n", "We'll start by loading training and testing data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "from sklearn import model_selection\n", "import os.path\n", "\n", "df = pd.read_parquet(os.path.join(\"data\", \"training.parquet\"))\n", "\n", "# X_train and X_test are lists of strings, each \n", "# representing one document\n", "# y_train and y_test are vectors of labels\n", "\n", "train, test = model_selection.train_test_split(df, random_state=43)\n", "X_train = train[\"text\"]\n", "y_train = train[\"label\"]\n", "\n", "X_test = test[\"text\"]\n", "y_test = test[\"label\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next up, we'll load the two steps of the pipeline that we created in earlier notebooks:\n", "\n", "- `feature_pipeline.sav` from either the [simple summaries notebook](03-feature-engineering-summaries.ipynb) or the [TF-IDF notebook](03-feature-engineering-tfidf.ipynb), and\n", "- `model.sav` from either the [logistic regression notebook](04-model-logistic-regression.ipynb) or the [random forest notebook](04-model-random-forest.ipynb).\n", "\n", "(If you haven't worked through a feature engineering notebook and a model training notebook, the next cell won't work.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "## loading in feature extraction pipeline\n", "import pickle\n", "filename = 'feature_pipeline.sav'\n", "feat_pipeline = pickle.load(open(filename, 'rb'))\n", "\n", "## loading model\n", "filename = 'model.sav'\n", "model = pickle.load(open(filename, 'rb'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can combine the two stages together in a pipeline and fit it to new training data. 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.pipeline import Pipeline\n", "\n", "pipeline = Pipeline([\n", "    ('features', feat_pipeline),\n", "    ('model', model)\n", "])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A pipeline supports the same interface as a classifier, so we can use it to `fit` to raw data and then `predict` labels from raw data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pipeline.fit(X_train, y_train)\n", "y_preds = pipeline.predict(X_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can then evaluate the performance of our classifier using a confusion matrix:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from mlworkflows import plot\n", "_, chart = plot.confusion_matrix(y_test, y_preds)\n", "chart" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "...or an F1-score:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import f1_score\n", "\n", "# calculate the micro-averaged F1 score across all classes\n", "mean_f1 = f1_score(y_test, y_preds, average='micro')\n", "print(mean_f1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A [scikit-learn pipeline](https://scikit-learn.org/stable/modules/compose.html#pipeline) doesn't just make a particular workflow repeatable; it also lets you run repeatable experiments by evaluating the same pipeline with different hyperparameters on the same data set. To see this in action, we'll start by inspecting the pipeline stages and their hyperparameters:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pipeline.named_steps" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's experiment with a couple of different options for a hyperparameter. Since we have no way of knowing while we're writing this notebook whether you trained a logistic regression model or a random forest model, this notebook will try to figure it out on the fly (since these model types have different hyperparameters). The [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV) class in scikit-learn allows us to evaluate different combinations of hyperparameters; we'll use it with just a few options to quickly demonstrate its functionality.\n", "\n", "Since this can be an expensive operation, we'll also time it to see how long it takes." ] },
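{ "cell_type": "markdown", "metadata": {}, "source": [ "One thing worth spelling out before we build the parameter grid: its keys use scikit-learn's `step__parameter` naming convention, which is the name of a pipeline step, a double underscore, and then one of that step's hyperparameters (so `model__criterion` addresses the `criterion` hyperparameter of the step we named `model`). The next cell is an optional aside, a minimal sketch that lists every name our particular pipeline will accept; any of these keys could appear in a parameter grid or be set directly with `pipeline.set_params(...)`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional aside: list every hyperparameter name this pipeline accepts.\n", "# Nested steps are addressed with the step__parameter convention, so any\n", "# of these keys could be used in a GridSearchCV parameter grid (or set\n", "# directly with pipeline.set_params(...)).\n", "sorted(pipeline.get_params().keys())" ] },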
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "\n", "from sklearn.model_selection import GridSearchCV\n", "\n", "search = None\n", "param_grid = {}\n", "\n", "small_train, small_test = model_selection.train_test_split(df.sample(5000), random_state=43)\n", "\n", "if 'LogisticRegression' in str(pipeline.named_steps['model']):\n", " # we're dealing with a logistic regression model\n", " param_grid = { 'model__multi_class' : ['ovr', 'multinomial'], 'model__solver' : ['lbfgs', 'newton-cg']}\n", "else:\n", " # we're dealing with a random forest model\n", " param_grid = { 'model__min_samples_split' : [2, 8], \n", " 'model__n_estimators' : [10, 25, 50, 100], \n", " 'model__criterion' : ['gini', 'entropy']}\n", "\n", "search = GridSearchCV(pipeline, param_grid, iid=False, cv=3, return_train_score=False)\n", "search.fit(small_train[\"text\"], small_train[\"label\"])\n", "\n", "print(\"Best parameters were %s\" % str(search.best_params_))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`GridSearchCV` evaluates every hyperparameter combination from the supplied lists of values. So in the random forest case above, we'd consider the following hyperparameter mappings:\n", "\n", "- `multiclass == 'ovr'` and `solver == 'lbgfs'`,\n", "- `multiclass == 'ovr'` and `solver == 'newton-cg'`,\n", "- `multiclass == 'multinomial'` and `solver == 'lbgfs'`, and\n", "- `multiclass == 'multinomial'` and `solver == 'newton-cg'`.\n", "\n", "In our example, [we divide the training set into three subsets, or _folds_, instead of using train and test sets](https://en.wikipedia.org/wiki/Cross-validation_(statistics)): if we call the three subsets $a$, $b$, and $c$, we'll be \n", "\n", "- training against $a \\cup b$ and testing against $c$,\n", "- training against $a \\cup c$ and testing against $b$, and\n", "- training against $b \\cup c$ and testing against $a$\n", "\n", "for each hyperparameter combination before averaging the results of each test. Since we will train $k - 1$ models for $k$-fold validation, and since we'll validate every possible combination of hyperparameters, grid search can get computationally expensive very quickly. (If you were using grid search in a real application, you'd have more time than we do during this workshop, so you'd probably use more folds for cross-validation and also probably be working with a larger sample count.)\n", "\n", "One option for making grid search faster is to run individual experiments in parallel. We'll mention an option now: [Dask](https://dask.org) provides scale-out versions of many features in scikit-learn and Pandas so that you can run machine learning pipelines in parallel across multiple threads or multiple containers." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Exercises\n", "\n", "\n", "Later in this session, we'll see how pipeline abstractions can make it easier not just to experiment with variations on techniques, but also to put machine learning into production. For now, here are some things to try out and think about.\n", "\n", "1. Identify some ways in which machine learning pipelines are like CI/CD pipelines and some ways in which they're different.\n", "2. Experiment with values for some different hyperparameters in the text classification pipeline.\n", "3. Even if you aren't overfitting, you might not want to choose the absolute best-performing hyperparameter settings. What are some cases in which you might not?\n", "4. 
What are some considerations that grid search might not capture?" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" } }, "nbformat": 4, "nbformat_minor": 2 }