{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# [sl3](https://sl3.tlverse.org/): Simplifying machine learning in R using pipelines\n", "\n", "## [Nima Hejazi](https://nimahejazi.org) & Jeremy Coyle\n", "\n", "## Date: 18 April 2018\n", "\n", "### _Attribution:_ based on materials originally produced by Jeremy Coyle, Nima Hejazi, Ivana Malenica, and Oleg Sofrygin" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Common in the language of modern data science are words such as \"munging,\"\n", "\"massaging,\" \"mining\" -- terms denoting the interactive process by which the\n", "analyst extracts some form of deliverable inference from the data set at hand.\n", "These terms express, among other things, the often convoluted process by which a\n", "set of pre-processing and estimation procedures is applied to an input data set\n", "in order to transform it into a\n", "[\"tidy\"](http://vita.had.co.nz/papers/tidy-data.html) data set from which\n", "informative visualizations and summaries may be easily extracted. A formalism\n", "that captures this involved process is that of machine learning _pipelines_. A\n", "_pipeline_ -- popularized by the [method of the same\n", "name](http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html)\n", "in Python's [scikit-learn library](http://scikit-learn.org/stable/index.html) --\n", "may be thought of as an _ordered_ set of instructions corresponding to\n", "procedures to be applied to the input data set, with the ultimate goal of\n", "producing a tidy output data set.\n", "\n", "Recently, the new [`sl3` R package](https://github.com/tlverse/sl3) introduced\n", "the pipeline idiom into the R programming language. A concrete understanding of\n", "the utility of pipelines is best developed by example -- so, that's precisely\n", "what we'll aim to do here. In the following, we'll apply the concept of a\n", "machine learning pipeline to the canonical [iris data\n", "set](https://en.wikipedia.org/wiki/Iris_flower_data_set), combining a series\n", "of machine learning algorithms for classification with principal components\n", "analysis, a simple pre-processing step for dimensionality reduction." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "library(datasets)\n", "library(tidyverse)\n", "library(data.table)\n", "library(caret)\n", "library(sl3)\n", "set.seed(352)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "In the above, we simply load a few of the R packages that we'll rely on\n", "throughout this demonstration and set a seed in order to control any randomness\n", "in the estimation procedure that follows. Next, let's load the iris data set:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data(iris)\n", "iris <- iris %>%\n", "  as_tibble()\n", "head(iris)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "As we see above, the iris data set consists of a simple structure: numerical\n", "measurements of the length and width of sepals and petals, alongside the species\n", "of the observed flower (restricted to three: _Iris setosa_, _Iris versicolor_,\n", "_Iris virginica_).\n", "\n", "To create very simple training and testing splits, we'll rely on the popular\n", "[`caret` R package](https://topepo.github.io/caret/):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "trn_indx <- createDataPartition(iris$Species, p = .8, list = FALSE,\n", "                                times = 1) %>%\n", "  as.numeric()\n", "tst_indx <- which(!(seq_len(nrow(iris)) %in% trn_indx))" ] },
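{ "cell_type": "markdown", "metadata": {}, "source": [ "Because `createDataPartition` samples in a stratified manner within each level\n", "of `iris$Species`, the training split should roughly preserve the class\n", "proportions of the full data set. As a quick sanity check, we can tabulate the\n", "species in the training split:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# species counts in the training split -- about 80% of the 50 flowers per class\n", "table(iris$Species[trn_indx])" ] },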
{ "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have our training and testing splits, we can organize the data into\n", "tasks -- the central bookkeeping object in the `sl3` framework. Essentially,\n", "tasks represent a, well, data analytic _task_ that is to be solved by invoking\n", "the various machine learning algorithms made available through `sl3`." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# a task with the data from the training split\n", "iris_task_train <- sl3_Task$new(\n", "  data = iris[trn_indx, ],\n", "  covariates = colnames(iris)[-5],\n", "  outcome = colnames(iris)[5],\n", "  outcome_type = \"categorical\"\n", ")\n", "\n", "# a task with the data from the testing split\n", "iris_task_test <- sl3_Task$new(\n", "  data = iris[tst_indx, ],\n", "  covariates = colnames(iris)[-5],\n", "  outcome = colnames(iris)[5],\n", "  outcome_type = \"categorical\"\n", ")\n", "\n", "# let's take a look at the training data task\n", "iris_task_train" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Having set up the data properly, let's proceed to design _pipelines_ that we can\n", "rely on for processing and analyzing the data. A __pipeline__ simply represents\n", "a set of machine learning procedures to be invoked sequentially, with the\n", "results derived from earlier algorithms in the pipeline being used to train\n", "those later in the pipeline. Thus, a pipeline is a closed _end-to-end system_\n", "for resolving the problem posed by an `sl3` task.\n", "\n", "We'll rely on PCA for dimensionality reduction, retaining only the first two\n", "principal components for use in training our classification models. Since this\n", "is a quick experiment with a well-studied data set, we'll use just two\n", "classification procedures: (1) logistic regression with regularization (e.g.,\n", "the LASSO) and (2) random forests." ] },
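{ "cell_type": "markdown", "metadata": {}, "source": [ "Many such procedures are implemented as learners in `sl3`, and the package can\n", "enumerate them programmatically. For instance, we can list the learners that\n", "support categorical outcomes via the `sl3_list_learners()` utility (available\n", "in recent versions of the package):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# list the sl3 learners that support categorical (multi-class) outcomes\n", "sl3_list_learners(\"categorical\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "With those options in view, we instantiate the three learners we need:" ] },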
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pca_learner <- Lrnr_pca$new(n_comp = 2)\n", "glmnet_learner <- Lrnr_glmnet$new()\n", "rf_learner <- Lrnr_randomForest$new()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Above, we merely instantiate the learners by invoking the `$new()` method of\n", "each of the appropriate objects. We now have a machine learning object that\n", "invokes PCA to generate and extract just the first two principal components\n", "(per the argument `n_comp` above) derived from the design matrix.\n", "\n", "Other than our PCA learner, we've also instantiated a regularized logistic\n", "regression model (`glmnet_learner` above) based on the implementation available\n", "through the popular [`glmnet` R\n", "package](https://cran.r-project.org/package=glmnet), as well as a random forest\n", "model based on the canonical implementation available in the\n", "[`randomForest` R package](https://cran.r-project.org/package=randomForest).\n", "\n", "Now that our individual learners are set up, we can intuitively string them into\n", "pipelines by invoking the appropriate `$new()` method like so:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pca_to_glmnet <- Pipeline$new(pca_learner, glmnet_learner)\n", "pca_to_rf <- Pipeline$new(pca_learner, rf_learner)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The first pipeline above merely invokes our PCA learner, extracting the first\n", "two principal components of the design matrix from the input task and passing\n", "these as inputs to the regularized logistic regression model. Similarly, the\n", "second pipeline invokes PCA and passes the results to our random forest model.\n", "\n", "To streamline the training of our pipelines, we'll bundle them into a single\n", "_stack_, then train the model stack all at once. Similar in spirit to a\n", "pipeline, a __stack__ is a bundle of `sl3` learner objects that are to be\n", "trained together. The principal difference is that learners in a pipeline are\n", "trained sequentially, as described above, while those in a stack are trained\n", "simultaneously. Thus, the models in a stack are trained independently of one\n", "another.\n", "\n", "Now, forward -- let's generate a stack and train the two pipelines on our\n", "training split of the iris data set:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model_stack <- Stack$new(pca_to_glmnet, pca_to_rf)\n", "fit_model_stack <- model_stack$train(iris_task_train)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Having trained our stacked pipelines, we can now predict on the testing data set\n", "simply by feeding the object `iris_task_test` that we created above to our model\n", "stack using the `$predict()` method. After doing that, we'll simply do a bit of\n", "bookkeeping to extract the predicted class probabilities (of each observation)\n", "from the two pipelines in our stack." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "out_model_stack <- fit_model_stack$predict(iris_task_test)\n", "pipe_preds <- lapply(out_model_stack, unpack_predictions)" ] },
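{ "cell_type": "markdown", "metadata": {}, "source": [ "Each element of `pipe_preds` now holds the unpacked predictions of one\n", "pipeline -- assuming the format returned by `unpack_predictions()` for\n", "categorical outcomes, a matrix of predicted class probabilities with one row\n", "per observation in the testing split and one column per species. A quick peek\n", "at the first pipeline's output makes this structure concrete:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# predicted class probabilities from the PCA-to-glmnet pipeline\n", "head(pipe_preds[[1]])" ] },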
{ "cell_type": "markdown", "metadata": {}, "source": [ "After extracting the predicted class probabilities for each observation, we\n", "clean up the results a bit to make them easier to report, taking the species\n", "with the highest predicted probability as the predicted class. (The small base\n", "R idiom below assumes that the probability columns follow the factor-level\n", "ordering of `Species`.)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# get class predictions: the most likely species for each observation\n", "to_class <- function(p) factor(levels(iris$Species)[max.col(p)],\n", "                               levels = levels(iris$Species))\n", "pipe1_classes <- to_class(pipe_preds[[1]])\n", "pipe2_classes <- to_class(pipe_preds[[2]])" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "A standard way to summarize results in machine learning problems is the\n", "[confusion\n", "matrix](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html).\n", "Now, let's take a look at the results, using the handy\n", "[`confusionMatrix`\n", "function](https://rdrr.io/cran/caret/man/confusionMatrix.html) from\n", "[`caret`](https://topepo.github.io/caret/index.html) to compare our predicted\n", "classes to the true species in the holdout/testing data set." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "(cfmat_pipe1 <- confusionMatrix(pipe1_classes, iris_task_test$Y))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Let's find out whether our pipeline of PCA and random forest fared any better\n", "than the one pairing PCA with regularized logistic regression above:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "(cfmat_pipe2 <- confusionMatrix(pipe2_classes, iris_task_test$Y))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The predictions look good!\n", "\n", "---\n", "\n", "## Summary\n", "\n", "* We've taken a look at how to efficiently perform machine learning tasks with\n", "  the `sl3` R package.\n", "* We've examined standard machine learning idioms, including _tasks_,\n", "  _pipelines_, and _stacks_.\n", "* We've examined a simple case of training and predicting with pipelines on the\n", "  canonical Iris data set." ] } ], "metadata": { "kernelspec": { "display_name": "R", "language": "R", "name": "ir" }, "language_info": { "codemirror_mode": "r", "file_extension": ".r", "mimetype": "text/x-r-source", "name": "R", "pygments_lexer": "r", "version": "3.4.4" } }, "nbformat": 4, "nbformat_minor": 2 }