{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Optimal Ensemble Learning with the [`sl3`](https://jeremyrcoyle.github.io/sl3/) R package\n", "\n", "## Author: [Nima Hejazi](https://nimahejazi.org)\n", "\n", "## Date: 14 February 2018\n", "\n", "### _Attribution:_ based on materials by David Benkeser, Jeremy Coyle, Ivana Malenica, and Oleg Sofrygin" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction\n", "\n", "In this demonstration, we will illustrate the basic functionality of the `sl3` R package. Specifically, we will walk through the concept of machine learning pipelines, the construction of ensemble models, simple optimality properties of stacked regression. After this introduction we will be well prepared to discuss more advanced topics in ensemble learning, such as optimal kernel density estimation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Resources\n", "\n", "* The `sl3` R package homepage: https://jeremyrcoyle.github.io/sl3/\n", "* The `sl3` R package repository: https://github.com/jeremyrcoyle/sl3" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup\n", "\n", "First, we'll load the packages required for this exercise and load a simple data set (`cpp_imputed` below) that we'll use for demonstration purposes:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Loading required package: nnls\n", "Super Learner\n", "Version: 2.0-23-9000\n", "Package created on 2017-11-29\n", "\n", "origami: Generalized Cross-Validation Framework\n", "Version: 1.0.0\n" ] }, { "data": { "text/html": [ "\n", "\n", "\n", "\t\n", "\t\n", "\t\n", "\t\n", "\t\n", "\t\n", "\n", "
\n" ], "text/latex": [ "\\begin{tabular}{r|lllllllllllllllllllllllllllllllll}\n", " & subjid & agedays & wtkg & htcm & lencm & bmi & waz & haz & whz & baz & ⋯ & mmaritn & mmarit & meducyrs & sesn & ses & parity & gravida & smoked & mcignum & comprisk\\\\\n", "\\hline\n", "\t1 & 1 & 1 & 4.621 & 55 & 55 & 15.27603 & 2.380000 & 2.61000 & 0.19 & 1.35 & ⋯ & 1 & Married & 12 & 50 & Middle & 1 & 1 & 0 & 0 & none \\\\\n", "\t3 & 1 & 366 & 14.500 & 79 & 79 & 23.23346 & 3.840000 & 1.35000 & 4.02 & 3.89 & ⋯ & 1 & Married & 12 & 50 & Middle & 1 & 1 & 0 & 0 & none \\\\\n", "\t4 & 2 & 1 & 3.345 & 51 & 51 & 12.86044 & 0.060000 & 0.50000 & -0.64 & -0.43 & ⋯ & 1 & Married & 0 & 0 & . & 0 & 0 & 1 & 35 & none \\\\\n", "\t6 & 2 & 366 & 8.400 & 73 & 73 & 15.76281 & -1.270000 & -1.17000 & -0.96 & -0.80 & ⋯ & 1 & Married & 0 & 0 & . & 0 & 0 & 1 & 35 & none \\\\\n", "\t7 & 2 & 2558 & 19.100 & 114 & 0 & 14.69683 & -1.372732 & -1.46648 & 0.00 & 0.00 & ⋯ & 1 & Married & 0 & 0 & . & 0 & 0 & 1 & 35 & none \\\\\n", "\t8 & 3 & 1 & 3.827 & 54 & 54 & 13.12414 & 0.990000 & 2.08000 & -1.29 & -0.22 & ⋯ & 1 & Married & 0 & 0 & . & 1 & 1 & 1 & 20 & none \\\\\n", "\\end{tabular}\n" ], "text/markdown": [ "\n", "| | subjid | agedays | wtkg | htcm | lencm | bmi | waz | haz | whz | baz | ⋯ | mmaritn | mmarit | meducyrs | sesn | ses | parity | gravida | smoked | mcignum | comprisk | \n", "|---|---|---|---|---|---|\n", "| 1 | 1 | 1 | 4.621 | 55 | 55 | 15.27603 | 2.380000 | 2.61000 | 0.19 | 1.35 | ⋯ | 1 | Married | 12 | 50 | Middle | 1 | 1 | 0 | 0 | none | \n", "| 3 | 1 | 366 | 14.500 | 79 | 79 | 23.23346 | 3.840000 | 1.35000 | 4.02 | 3.89 | ⋯ | 1 | Married | 12 | 50 | Middle | 1 | 1 | 0 | 0 | none | \n", "| 4 | 2 | 1 | 3.345 | 51 | 51 | 12.86044 | 0.060000 | 0.50000 | -0.64 | -0.43 | ⋯ | 1 | Married | 0 | 0 | . | 0 | 0 | 1 | 35 | none | \n", "| 6 | 2 | 366 | 8.400 | 73 | 73 | 15.76281 | -1.270000 | -1.17000 | -0.96 | -0.80 | ⋯ | 1 | Married | 0 | 0 | . | 0 | 0 | 1 | 35 | none | \n", "| 7 | 2 | 2558 | 19.100 | 114 | 0 | 14.69683 | -1.372732 | -1.46648 | 0.00 | 0.00 | ⋯ | 1 | Married | 0 | 0 | . | 0 | 0 | 1 | 35 | none | \n", "| 8 | 3 | 1 | 3.827 | 54 | 54 | 13.12414 | 0.990000 | 2.08000 | -1.29 | -0.22 | ⋯ | 1 | Married | 0 | 0 | . | 1 | 1 | 1 | 20 | none | \n", "\n", "\n" ], "text/plain": [ " subjid agedays wtkg htcm lencm bmi waz haz whz baz ⋯\n", "1 1 1 4.621 55 55 15.27603 2.380000 2.61000 0.19 1.35 ⋯\n", "3 1 366 14.500 79 79 23.23346 3.840000 1.35000 4.02 3.89 ⋯\n", "4 2 1 3.345 51 51 12.86044 0.060000 0.50000 -0.64 -0.43 ⋯\n", "6 2 366 8.400 73 73 15.76281 -1.270000 -1.17000 -0.96 -0.80 ⋯\n", " mmaritn mmarit meducyrs sesn ses parity gravida smoked mcignum\n", "1 1 Married 12 50 Middle 1 1 0 0 \n", "3 1 Married 12 50 Middle 1 1 0 0 \n", "4 1 Married 0 0 . 0 0 1 35 \n", "6 1 Married 0 0 . 0 0 1 35 \n", " comprisk\n", "1 none \n", "3 none \n", "4 none \n", "6 none \n", " [ reached getOption(\"max.print\") -- omitted 2 rows ]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "set.seed(49753)\n", "\n", "# packages we'll be using\n", "library(data.table)\n", "library(SuperLearner)\n", "library(origami)\n", "library(sl3)\n", "\n", "# load example data set\n", "data(cpp_imputed)\n", "\n", "# take a peek at the data\n", "head(cpp_imputed)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To use this data set with `sl3`, the object must be wrapped in a customized `sl3` container, an __`sl3` \"Task\"__ object. 
A _task_ is an idiom for all of the elements of a prediction problem other than the learning algorithms themselves -- that is, a task delineates the structure of the data set of interest (e.g., which columns are covariates and which is the outcome) and any potential metadata (e.g., observation-level weights)." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "A sl3 Task with 1441 obs and these nodes:\n", "$covariates\n", "[1] \"apgar1\"   \"apgar5\"   \"parity\"   \"gagebrth\" \"mage\"     \"meducyrs\" \"sexn\"    \n", "\n", "$outcome\n", "[1] \"haz\"\n", "\n", "$id\n", "NULL\n", "\n", "$weights\n", "NULL\n", "\n", "$offset\n", "NULL\n" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# here are the covariates we are interested in and, of course, the outcome\n", "covars <- c(\"apgar1\", \"apgar5\", \"parity\", \"gagebrth\", \"mage\", \"meducyrs\",\n", "            \"sexn\")\n", "outcome <- \"haz\"\n", "\n", "# create the sl3 task and take a look at it\n", "task <- make_sl3_Task(data = cpp_imputed, covariates = covars,\n", "                      outcome = outcome, outcome_type = \"continuous\")\n", "\n", "# let's take a look at the sl3 task\n", "task" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Interlude: Object Oriented Programming in `R`\n", "\n", "`sl3` is designed using basic OOP principles and the `R6` OOP framework. While we've tried to make it easy to use `sl3` without worrying much about OOP, it is helpful to have some intuition about how `sl3` is structured. In this section, we briefly outline some key concepts from OOP. Readers familiar with OOP basics are invited to skip this section.\n", "\n", "The key concept of OOP is that of an object, a collection of data and functions that corresponds to some conceptual unit. Objects have two main types of elements: (1) _fields_, which can be thought of as nouns, are information about an object, and (2) _methods_, which can be thought of as verbs, are actions an object can perform. Objects are members of classes, which define what those specific fields and methods are. Classes can inherit elements from other classes (sometimes called base classes) -- accordingly, classes that are similar, but not exactly the same, can share some parts of their definitions.\n", "\n", "Many different implementations of OOP exist, with variations in how these concepts are implemented and used. R has several different implementations, including `S3`, `S4`, reference classes, and `R6`. `sl3` uses the `R6` implementation. In `R6`, methods and fields of a class object are accessed using the `$` operator. The next section explains how these concepts are used in `sl3` to model machine learning problems and algorithms." ] },
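{ "cell_type": "markdown", "metadata": {}, "source": [ "Before moving on, a minimal standalone `R6` sketch may help make these ideas concrete. The toy `Counter` class below is purely illustrative (it is not part of `sl3`): it defines one field and one method, and both are accessed with the `$` operator:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "library(R6)\n", "\n", "# a toy R6 class: one field (a noun) and one method (a verb)\n", "Counter <- R6Class(\"Counter\",\n", "  public = list(\n", "    count = 0,                # field: data stored in the object\n", "    add = function(x = 1) {   # method: an action the object performs\n", "      self$count <- self$count + x\n", "      invisible(self)\n", "    }\n", "  )\n", ")\n", "\n", "my_counter <- Counter$new()   # construct an object from its class\n", "my_counter$add(5)             # call a method using `$`\n", "my_counter$count              # access a field using `$`; returns 5" ] },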
{ "cell_type": "markdown", "metadata": {}, "source": [ "## `sl3` Learners\n", "\n", "`Lrnr_base` is the base class for defining machine learning algorithms, as well as fits of those algorithms to particular `sl3_Task`s. Different machine learning algorithms are defined in classes that inherit from `Lrnr_base`. For instance, the `Lrnr_glm` class inherits from `Lrnr_base` and defines a learner that fits generalized linear models. We will use the term _learners_ to refer to the family of classes that inherit from `Lrnr_base`. Learner objects can be constructed from their class definitions using the `make_learner` function:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# make learner object\n", "lrnr_glm <- make_learner(Lrnr_glm)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Because all learners inherit from `Lrnr_base`, they have many features in common and can be used interchangeably. All learners define three main methods: `train`, `predict`, and `chain`. The first, `train`, takes an `sl3_Task` object and returns a `learner_fit`, which has the same class as the learner that was trained:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[1] TRUE" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# fit learner to task data\n", "lrnr_glm_fit <- lrnr_glm$train(task)\n", "\n", "# verify that the learner is fit\n", "lrnr_glm_fit$is_trained" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, we fit the learner to the CPP task we defined above. Both `lrnr_glm` and `lrnr_glm_fit` are objects of class `Lrnr_glm`, although the former defines a learner and the latter defines a fit of that learner. We can distinguish between learners and learner fits using the `is_trained` field, which is `TRUE` for fits but not for learners.\n", "\n", "Now that we've fit a learner, we can generate predictions using the `predict` method:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[1] 0.36298498 0.36298498 0.25993072 0.25993072 0.25993072 0.05680264" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# get learner predictions\n", "preds <- lrnr_glm_fit$predict()\n", "head(preds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, we called `predict` without specifying a task, so it defaulted to the task provided to `train` (called the training task). Alternatively, we could have provided a different task for which we want to generate predictions.\n", "\n", "The final important learner method, `chain`, will be discussed below, in the section on learner composition. As with `sl3_Task`, learners have a variety of fields and methods we haven't discussed here. More information on these is available in the help for `Lrnr_base`." ] },
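{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sketch of that alternative (the row subset below is arbitrary and purely for illustration), we can construct a second task and pass it to `predict` explicitly:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# build a task from a subset of the data and predict on it explicitly\n", "task_subset <- make_sl3_Task(data = cpp_imputed[1:100, ], covariates = covars,\n", "                             outcome = outcome, outcome_type = \"continuous\")\n", "preds_subset <- lrnr_glm_fit$predict(task_subset)\n", "head(preds_subset)" ] },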
    \n", "\t
  1. 0.362984982737989
  2. \n", "\t
  3. 0.362984982737989
  4. \n", "\t
  5. 0.259930715399135
  6. \n", "\t
  7. 0.259930715399135
  8. \n", "\t
  9. 0.259930715399135
  10. \n", "\t
  11. 0.0568026361161085
  12. \n", "
\n" ], "text/latex": [ "\\begin{enumerate*}\n", "\\item 0.362984982737989\n", "\\item 0.362984982737989\n", "\\item 0.259930715399135\n", "\\item 0.259930715399135\n", "\\item 0.259930715399135\n", "\\item 0.0568026361161085\n", "\\end{enumerate*}\n" ], "text/markdown": [ "1. 0.362984982737989\n", "2. 0.362984982737989\n", "3. 0.259930715399135\n", "4. 0.259930715399135\n", "5. 0.259930715399135\n", "6. 0.0568026361161085\n", "\n", "\n" ], "text/plain": [ "[1] 0.36298498 0.36298498 0.25993072 0.25993072 0.25993072 0.05680264" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# get learner predictions\n", "preds <- lrnr_glm_fit$predict()\n", "head(preds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, we specified task as the task for which we wanted to generate predictions. If we had omitted this, we would have gotten the same predictions because predict defaults to using the task provided to train (called the training task). Alternatively, we could have provided a different task for which we want to generate predictions.\n", "\n", "The final important learner method, chain, will be discussed below, in the section on learner composition. As with `sl3_Task`, learners have a variety of fields and methods we haven’t discussed here. More information on these is available in the help for `Lrnr_base`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Pipelines\n", "\n", "A pipeline is a set of learners to be fit sequentially, where the fit from one learner is used to define the task for the next learner. There are many ways in which a learner can define the task for the downstream learner. The chain method defined by learners defines how this will work. Let’s look at the example of pre-screening variables. For now, we’ll rely on a screener from the `SuperLearner` package, although native `sl3` screening algorithms will be implemented soon.\n", "\n", "Below, we generate a screener object based on the `SuperLearner` function `screen.corP` and fit it to our task. Inspecting the fit, we see that it selected a subset of covariates:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[1] \"Lrnr_pkg_SuperLearner_screener_screen.corP\"\n", "$selected\n", "[1] \"parity\" \"gagebrth\"\n", "\n" ] } ], "source": [ "screen_cor <- Lrnr_pkg_SuperLearner_screener$new(\"screen.corP\")\n", "screen_fit <- screen_cor$train(task)\n", "print(screen_fit)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `Pipeline` class automates this process. It takes an arbitrary number of learners and fits them sequentially, training and chaining each one in turn. Since `Pipeline` is a learner like any other, it shares the same interface. We can define a pipeline using `make_learner`, and use `train` and `predict` just as we did before:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
    \n", "\t
  1. 0.380844719477376
  2. \n", "\t
  3. 0.380844719477376
  4. \n", "\t
  5. 0.298876230525396
  6. \n", "\t
  7. 0.298876230525396
  8. \n", "\t
  9. 0.298876230525396
  10. \n", "\t
  11. -0.00987783994257363
  12. \n", "
\n" ], "text/latex": [ "\\begin{enumerate*}\n", "\\item 0.380844719477376\n", "\\item 0.380844719477376\n", "\\item 0.298876230525396\n", "\\item 0.298876230525396\n", "\\item 0.298876230525396\n", "\\item -0.00987783994257363\n", "\\end{enumerate*}\n" ], "text/markdown": [ "1. 0.380844719477376\n", "2. 0.380844719477376\n", "3. 0.298876230525396\n", "4. 0.298876230525396\n", "5. 0.298876230525396\n", "6. -0.00987783994257363\n", "\n", "\n" ], "text/plain": [ "[1] 0.38084472 0.38084472 0.29887623 0.29887623 0.29887623 -0.00987784" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "sg_pipeline <- make_learner(Pipeline, screen_cor, lrnr_glm)\n", "sg_pipeline_fit <- sg_pipeline$train(task)\n", "sg_pipeline_preds <- sg_pipeline_fit$predict()\n", "head(sg_pipeline_preds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Stacks\n", "\n", "Like `Pipelines`, `Stacks` combine multiple learners. Stacks train learners simultaneously, so that their predictions can be either combined or compared. Again, `Stack` is just a special learner and so has the same interface as all other learners:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\t\n", "\t\n", "\t\n", "\t\n", "\t\n", "\t\n", "\n", "
\n" ], "text/latex": [ "\\begin{tabular}{r|ll}\n", " Lrnr\\_glm & Lrnr\\_pkg\\_SuperLearner\\_screener\\_screen.corP\\_\\_\\_Lrnr\\_glm\\\\\n", "\\hline\n", "\t 0.36298498 & 0.38084472\\\\\n", "\t 0.36298498 & 0.38084472\\\\\n", "\t 0.25993072 & 0.29887623\\\\\n", "\t 0.25993072 & 0.29887623\\\\\n", "\t 0.25993072 & 0.29887623\\\\\n", "\t 0.05680264 & -0.00987784\\\\\n", "\\end{tabular}\n" ], "text/markdown": [ "\n", "Lrnr_glm | Lrnr_pkg_SuperLearner_screener_screen.corP___Lrnr_glm | \n", "|---|---|---|---|---|---|\n", "| 0.36298498 | 0.38084472 | \n", "| 0.36298498 | 0.38084472 | \n", "| 0.25993072 | 0.29887623 | \n", "| 0.25993072 | 0.29887623 | \n", "| 0.25993072 | 0.29887623 | \n", "| 0.05680264 | -0.00987784 | \n", "\n", "\n" ], "text/plain": [ " Lrnr_glm Lrnr_pkg_SuperLearner_screener_screen.corP___Lrnr_glm\n", "1 0.36298498 0.38084472 \n", "2 0.36298498 0.38084472 \n", "3 0.25993072 0.29887623 \n", "4 0.25993072 0.29887623 \n", "5 0.25993072 0.29887623 \n", "6 0.05680264 -0.00987784 " ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "stack <- make_learner(Stack, lrnr_glm, sg_pipeline)\n", "stack_fit <- stack$train(task)\n", "stack_preds <- stack_fit$predict()\n", "head(stack_preds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Above, we’ve defined and fit a stack comprised of a simple `glm` learner as well as a pipeline that combines a screening algorithm with that same learner. We could have included any abitrary set of learners and pipelines, the latter of which are themselves just learners. We can see that the predict method now returns a matrix, with a column for each learner included in the stack." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Super Learner Algorithm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Having defined a stack, we might want to compare the performance of learners in the stack, which we may do using cross-validation. The `Lrnr_cv` learner wraps another learner and performs training and prediction in a cross-validated fashion, using separate training and validation splits as defined by `task$folds`.\n", "\n", "Below, we define a new `Lrnr_cv` object based on the previously defined stack and train it and generate predictions on the validation set:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": true }, "outputs": [], "source": [ "cv_stack <- Lrnr_cv$new(stack)\n", "cv_fit <- cv_stack$train(task)\n", "cv_preds <- cv_fit$predict()" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Lrnr_glm \n", " 1.604769 \n", "Lrnr_pkg_SuperLearner_screener_screen.corP___Lrnr_glm \n", " 1.604186 \n" ] } ], "source": [ "risks <- cv_fit$cv_risk(loss_squared_error)\n", "print(risks)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can combine all of the above elements, `Pipelines`, `Stacks`, and cross-validation using `Lrnr_cv`, to easily define a Super Learner. The Super Learner algorithm works by fitting a “meta-learner”, which combines predictions from multiple stacked learners. It does this while avoiding overfitting by training the meta-learner on validation-set predictions in a manner that is cross-validated. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "We can combine all of the above elements, `Pipelines`, `Stacks`, and cross-validation using `Lrnr_cv`, to easily define a Super Learner. The Super Learner algorithm works by fitting a \"meta-learner\", which combines predictions from multiple stacked learners. It avoids overfitting by training the meta-learner on the cross-validated (validation-set) predictions. Using some of the objects we defined in the above examples, this becomes a very simple operation:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": true }, "outputs": [], "source": [ "metalearner <- make_learner(Lrnr_nnls)\n", "cv_task <- cv_fit$chain()\n", "ml_fit <- metalearner$train(cv_task)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, we used a special learner, `Lrnr_nnls`, which fits a non-negative least squares regression, for the meta-learning step. It is important to note that any learner can be used as a meta-learner.\n", "\n", "The final Super Learner is defined as a pipeline, with the learner stack trained on the full data and the meta-learner trained on the validation-set predictions. Below, we use a special behavior of pipelines: if all objects passed to a pipeline are learner fits (i.e., `learner$is_trained` is `TRUE`), the result will also be a fit:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[1] 0.33503636 0.33503636 0.25112872 0.25112872 0.25112872 0.02264871" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "sl_pipeline <- make_learner(Pipeline, stack_fit, ml_fit)\n", "sl_preds <- sl_pipeline$predict()\n", "head(sl_preds)" ] },
    \n", "\t
  1. 0.335036364767833
  2. \n", "\t
  3. 0.335036364767833
  4. \n", "\t
  5. 0.251128722856295
  6. \n", "\t
  7. 0.251128722856295
  8. \n", "\t
  9. 0.251128722856295
  10. \n", "\t
  11. 0.022648706935869
  12. \n", "
\n" ], "text/latex": [ "\\begin{enumerate*}\n", "\\item 0.335036364767833\n", "\\item 0.335036364767833\n", "\\item 0.251128722856295\n", "\\item 0.251128722856295\n", "\\item 0.251128722856295\n", "\\item 0.022648706935869\n", "\\end{enumerate*}\n" ], "text/markdown": [ "1. 0.335036364767833\n", "2. 0.335036364767833\n", "3. 0.251128722856295\n", "4. 0.251128722856295\n", "5. 0.251128722856295\n", "6. 0.022648706935869\n", "\n", "\n" ], "text/plain": [ "[1] 0.33503636 0.33503636 0.25112872 0.25112872 0.25112872 0.02264871" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "sl_pipeline <- make_learner(Pipeline, stack_fit, ml_fit)\n", "sl_preds <- sl_pipeline$predict()\n", "head(sl_preds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "An optimal stacked regression model (or Super Learner) may be fit in a more streamlined manner using the `Lrnr_sl` learner. For simplicity, we will use the same set of learners and meta-learning algorithm as we did before:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
    \n", "\t
  1. 0.335036364767833
  2. \n", "\t
  3. 0.335036364767833
  4. \n", "\t
  5. 0.251128722856295
  6. \n", "\t
  7. 0.251128722856295
  8. \n", "\t
  9. 0.251128722856295
  10. \n", "\t
  11. 0.022648706935869
  12. \n", "
\n" ], "text/latex": [ "\\begin{enumerate*}\n", "\\item 0.335036364767833\n", "\\item 0.335036364767833\n", "\\item 0.251128722856295\n", "\\item 0.251128722856295\n", "\\item 0.251128722856295\n", "\\item 0.022648706935869\n", "\\end{enumerate*}\n" ], "text/markdown": [ "1. 0.335036364767833\n", "2. 0.335036364767833\n", "3. 0.251128722856295\n", "4. 0.251128722856295\n", "5. 0.251128722856295\n", "6. 0.022648706935869\n", "\n", "\n" ], "text/plain": [ "[1] 0.33503636 0.33503636 0.25112872 0.25112872 0.25112872 0.02264871" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "sl <- Lrnr_sl$new(learners = stack,\n", " metalearner = metalearner)\n", "sl_fit <- sl$train(task)\n", "lrnr_sl_preds <- sl_fit$predict()\n", "head(lrnr_sl_preds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that this generates the same predictions as the more hands-on definition above." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise\n", "\n", "* Construct a Super Learner using $5$ (or more) learning algorithms, fit it on the training data given below (`task_train`) , and obtain predictions on the held out set (`task_valid`).\n", "\n", "* At least $2$ of the learners that you choose should be variations of a single learner, differentiated from one another solely by the use of different values for $1$ (or more) hyperparameters.\n", "\n", "* After fitting the Super Learner, identify the \"discrete Super Learner\"." ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "A sl3 Task with 1080 obs and these nodes:\n", "$covariates\n", "[1] \"apgar1\" \"apgar5\" \"parity\" \"gagebrth\" \"mage\" \"meducyrs\" \"sexn\" \n", "\n", "$outcome\n", "[1] \"haz\"\n", "\n", "$id\n", "NULL\n", "\n", "$weights\n", "NULL\n", "\n", "$offset\n", "NULL\n" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# let's split the data into training and validation sets\n", "train_cpp_imputed <- as.data.table(cpp_imputed[sample(nrow(cpp_imputed), 0.75 * nrow(cpp_imputed)), ])\n", "valid_cpp_imputed <- as.data.table(cpp_imputed[!(seq_len(nrow(cpp_imputed)) %in% rownames(train_cpp_imputed)), ])\n", "\n", "# create the sl3 task and take a look at it\n", "task_train <- make_sl3_Task(data = train_cpp_imputed, covariates = covars,\n", " outcome = outcome, outcome_type = \"continuous\")\n", "task_train\n", "\n", "# we'll also create an sl3 task for the holdout set\n", "task_valid <- make_sl3_Task(data = valid_cpp_imputed, covariates = covars,\n", " outcome = outcome, outcome_type = \"continuous\")" ] } ], "metadata": { "kernelspec": { "display_name": "R", "language": "R", "name": "ir" }, "language_info": { "codemirror_mode": "r", "file_extension": ".r", "mimetype": "text/x-r-source", "name": "R", "pygments_lexer": "r", "version": "3.4.3" } }, "nbformat": 4, "nbformat_minor": 2 }