{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Random Forest" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* * *\n", "![Alt text](./images/Forest3.jpg \"Random Forest Drawing\")\n", "* * *\n", "Drawing by Phil Cutler." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction\n", "\n", "Any tutorial on [Random Forests](https://en.wikipedia.org/wiki/Random_forest) (RF) should also include a review of decision trees, as these are the models that are ensembled together to create the Random Forest model -- or, put another way, the \"trees that comprise the forest.\" Much of the complexity and detail of the Random Forest algorithm occurs within the individual decision trees, so understanding decision trees is essential to understanding the RF algorithm as a whole. Before proceeding, it is therefore recommended that you read through the accompanying [Classification and Regression Trees Tutorial](decision-trees.ipynb).\n", "\n", "\n", "## History\n", "\n", "The Random Forest algorithm is preceded by the [Random Subspace Method](https://en.wikipedia.org/wiki/Random_subspace_method) (aka \"attribute bagging\"), which accounts for one of the two sources of randomness in a Random Forest. The Random Subspace Method is an ensemble method that consists of several classifiers, each operating in a subspace of the original feature space. The outputs of the models are then combined, usually by a simple majority vote. Tin Kam Ho applied the random subspace method to decision trees in 1995.\n", "\n", "[Leo Breiman](https://en.wikipedia.org/wiki/Leo_Breiman) and [Adele Cutler](http://www.math.usu.edu/~adele/) combined Breiman's [bagging](https://en.wikipedia.org/wiki/Bootstrap_aggregating) idea with the random subspace method to create the \"Random Forest\", a name which is trademarked by the duo. Due to the trademark, the algorithm is sometimes called Random Decision Forests.\n", "\n", "The introduction of random forests proper was first made in a [paper](http://www.stat.berkeley.edu/~breiman/randomforest2001.pdf) by Leo Breiman [1]. This paper describes a method of building a forest of uncorrelated trees using a CART-like procedure, combined with randomized node optimization and bagging. In addition, the paper combines several ingredients, some previously known and some novel, which form the basis of the modern practice of random forests, in particular:\n", "\n", "- Using [out-of-bag error](https://en.wikipedia.org/wiki/Out-of-bag_error) as an estimate of the [generalization error](https://en.wikipedia.org/wiki/Generalization_error).\n", "- Measuring [variable importance](https://en.wikipedia.org/wiki/Random_forest#Properties) through permutation.\n", "\n", "The paper also offers the first theoretical result for random forests, in the form of a bound on the generalization error which depends on the strength of the individual trees in the forest and the correlation between them.\n", "\n", "Although Breiman's implementation of Random Forests used his CART algorithm to construct the decision trees, many modern implementations of Random Forest use entropy-based algorithms for constructing the trees." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Bagging\n", "\n", "Bagging (bootstrap aggregating) was proposed by Leo Breiman in 1994 to improve classification by combining the classifications of randomly generated training sets. Although it is usually applied to decision tree methods, it can be used with any type of method. 
Bagging is a special case of the [model averaging](https://en.wikipedia.org/wiki/Ensemble_learning) approach.\n", "\n", "- Bagging, or *bootstrap aggregation*, averages a noisy fitted function over many bootstrap samples of the data in order to reduce its variance.\n", "- Bagging can dramatically reduce the variance of unstable procedures (like trees), leading to improved prediction; however, any simple, interpretable model structure (like that of a single tree) is lost.\n", "- Bagging produces smoother decision boundaries than trees.\n", "\n", "The training algorithm for random forests applies the general technique of bootstrap aggregating, or bagging, to tree learners. Given a training set $X = x_1, ..., x_n$ with responses $Y = y_1, ..., y_n$, bagging repeatedly ($B$ times) selects a random sample with replacement of the training set and fits trees to these samples:\n", "\n", "For $b = 1, ..., B$:\n", " 1. Sample, with replacement, $n$ training examples from $X$, $Y$; call these $X_b$, $Y_b$.\n", " 2. Train a classification or regression tree, $f_b$, on $X_b$, $Y_b$.\n", "\n", "After training, predictions for an unseen sample $x'$ can be made by averaging the predictions from all the individual regression trees on $x'$:\n", "\n", "$$\\hat{f}(x') = \\frac{1}{B} \\sum_{b=1}^{B} \\hat{f}_b(x')$$\n", "\n", "or by taking the majority vote in the case of classification trees.\n", "\n" ] },
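{ "cell_type": "markdown", "metadata": {}, "source": [ "The procedure above is straightforward to sketch by hand. Below is a minimal, illustrative implementation of bagged classification trees in R -- plain bagging rather than a Random Forest, since every split may consider all of the features. It assumes only the `rpart` package and R's built-in `iris` data; the number of trees, `B`, is an arbitrary choice for illustration.\n", "\n", "```r\n", "library(rpart)\n", "\n", "set.seed(1)\n", "B <- 25  # number of bootstrap samples (trees)\n", "n <- nrow(iris)\n", "\n", "# Step 1: fit one tree per bootstrap sample (sample n rows with replacement)\n", "trees <- lapply(1:B, function(b) {\n", "  idx <- sample(n, n, replace = TRUE)\n", "  rpart(Species ~ ., data = iris[idx, ])\n", "})\n", "\n", "# Step 2: aggregate by majority vote across the B trees\n", "votes <- sapply(trees, function(t) as.character(predict(t, iris, type = \"class\")))\n", "bagged_pred <- apply(votes, 1, function(v) names(which.max(table(v))))\n", "mean(bagged_pred == iris$Species)  # training accuracy of the bagged ensemble\n", "```" ] },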
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Random Forest Algorithm\n", "\n", "The above procedure describes the original bagging algorithm for trees. [Random Forests](https://en.wikipedia.org/wiki/Random_forest) differ in only one way from this general scheme: they use a modified tree learning algorithm that selects, at each candidate split in the learning process, a random subset of the features. This process is sometimes called \"feature bagging\".\n", "\n", "- Random Forests correct for decision trees' habit of overfitting to their training set.\n", "- Random Forest is an improvement over bagged trees that \"de-correlates\" the trees even further, reducing the variance.\n", "- At each tree split, a random sample of $m$ features is drawn, and only those $m$ features are considered for splitting.\n", "- Typically $m = \\sqrt{p}$ or $m = \\log_2 p$, where $p$ is the original number of features.\n", "- For each tree grown on a bootstrap sample, the error rate for observations left out of the bootstrap sample is monitored. This is called the [\"out-of-bag\"](https://en.wikipedia.org/wiki/Out-of-bag_error) (OOB) error rate.\n", "- Each tree has the same (statistical) [expectation](https://en.wikipedia.org/wiki/Expected_value), so increasing the number of trees does not alter the bias of bagging or the Random Forest algorithm." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Decision Boundary\n", "\n", "This is an example of a two-dimensional decision boundary for a (binary) classification problem. The black circle is the Bayes-optimal decision boundary, and the blue, square-ish boundary is learned by the classification tree.\n", "\n", "![Alt text](./images/boundary_bagging.png \"Bagging Boundary\")\n", "Source: Elements of Statistical Learning" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Random Forest by Randomization (aka \"Extra-Trees\")\n", "\n", "In [Extremely Randomized Trees](http://link.springer.com/article/10.1007%2Fs10994-006-6226-1) (aka Extra-Trees) [2], randomness goes one step further in the way splits are computed. As in Random Forests, a random subset of candidate features is used, but instead of looking for the best split, thresholds (for the split) are drawn at random for each candidate feature, and the best of these randomly generated thresholds is picked as the splitting rule. This usually reduces the variance of the model a bit more, at the expense of a slightly larger increase in bias.\n", "\n", "Extremely Randomized Trees is implemented in the [extraTrees](https://cran.r-project.org/web/packages/extraTrees/index.html) R package and is also available in the [h2o](https://0xdata.atlassian.net/browse/PUBDEV-2837) R package as part of the `h2o.randomForest()` function via the `histogram_type = \"Random\"` argument." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Out-of-Bag (OOB) Estimates\n", "\n", "In random forests, there is no need for cross-validation or a separate test set to get an unbiased estimate of the test set error. It is estimated internally, during the run, as follows:\n", "\n", "- Each tree is constructed using a different bootstrap sample from the original data. About one-third of the cases are left out of the bootstrap sample and not used in the construction of the $k$th tree.\n", "- Put each case left out of the construction of the $k$th tree down the $k$th tree to get a classification. In this way, a test set classification is obtained for each case in about one-third of the trees.\n", "- At the end of the run, take $j$ to be the class that got most of the votes every time case $n$ was OOB. The proportion of times that $j$ is not equal to the true class of case $n$, averaged over all cases, is the OOB error estimate. This has proven to be unbiased in many tests." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Variable Importance\n", "\n", "In every tree grown in the forest, put down the OOB cases and count the number of votes cast for the correct class. Now randomly permute the values of variable $m$ in the OOB cases and put these cases down the tree. Subtract the number of votes for the correct class in the variable-$m$-permuted OOB data from the number of votes for the correct class in the untouched OOB data. The average of this number over all trees in the forest is the raw importance score for variable $m$.\n", "\n", "If the values of this score from tree to tree are independent, then the standard error can be computed by a standard computation. The correlations of these scores between trees have been computed for a number of data sets and proved to be quite low; therefore we compute standard errors in the classical way, divide the raw score by its standard error to get a $z$-score, and assign a significance level to the $z$-score assuming normality.\n", "\n", "If the number of variables is very large, forests can be run once with all the variables, then run again using only the most important variables from the first run.\n", "\n", "There is also a local (per-case) version of this measure: for each case, consider all the trees for which it is OOB. Subtract the percentage of votes for the correct class in the variable-$m$-permuted OOB data from the percentage of votes for the correct class in the untouched OOB data. This is the local importance score for variable $m$ for this case.\n", "\n", "Variable importance in Extremely Randomized Trees is explained [here](http://www.slideshare.net/glouppe/understanding-variable-importances-in-forests-of-randomized-trees)."
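, "\n", "\n", "The `randomForest` R package (covered later in this notebook) implements exactly this permutation measure. A minimal sketch, using R's built-in `iris` data so that it is self-contained:\n", "\n", "```r\n", "library(randomForest)\n", "\n", "set.seed(1)\n", "model <- randomForest(Species ~ ., data = iris, importance = TRUE)\n", "\n", "# type = 1: mean decrease in accuracy under permutation (the measure described above)\n", "importance(model, type = 1)\n", "varImpPlot(model)\n", "```"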
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overfitting\n", "\n", "Leo Breiman famously claimed that \"Random Forests do not overfit.\" This is perhaps not exactly the case; however, they are certainly more robust to overfitting than a Gradient Boosting Machine (GBM). A Random Forest can be overfit by growing trees that are \"too deep\", for example. However, it is hard to overfit a Random Forest by adding more trees to the forest -- typically that will increase accuracy (at the expense of computation time).\n", "\n",
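"One way to see this is to plot the OOB error as a function of the number of trees: the error flattens out as trees are added, rather than turning back upward. A minimal sketch using the `randomForest` package (introduced below) on R's built-in `iris` data:\n", "\n", "```r\n", "library(randomForest)\n", "\n", "set.seed(1)\n", "model <- randomForest(Species ~ ., data = iris, ntree = 500)\n", "plot(model)  # OOB (and per-class) error vs. number of trees\n", "```\n", "\n",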
"## Missing Data\n", "\n", "Missing values do not necessarily have to be imputed in a Random Forest implementation, although some software packages require it.\n", "\n", "## Practical Uses\n", "\n", "Here is a short article called [The Unreasonable Effectiveness of Random Forests](https://medium.com/rants-on-machine-learning/the-unreasonable-effectiveness-of-random-forests-f33c3ce28883#.r734znc9f), by Ahmed El Deeb, about the utility of Random Forests. It summarizes some of the algorithm's pros and cons nicely.\n", "\n", "## Resources\n", "\n", "- [Gilles Louppe - Understanding Random Forests (PhD Dissertation)](http://arxiv.org/abs/1407.7502) (pdf)\n", "- [Gilles Louppe - Understanding Random Forests: From Theory to Practice](http://www.slideshare.net/glouppe/understanding-random-forests-from-theory-to-practice) (slides)\n", "- [Trevor Hastie - Gradient Boosting & Random Forests at H2O World 2014](https://www.youtube.com/watch?v=wPqtzj5VZus&index=16&list=PLNtMya54qvOFQhSZ4IKKXRbMkyLMn0caa) (YouTube)\n", "- [Mark Landry - Gradient Boosting Method and Random Forest at H2O World 2015](https://www.youtube.com/watch?v=9wn1f-30_ZY) (YouTube)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Random Forest Software in R\n", "\n", "The oldest and most well-known implementation of the Random Forest algorithm in R is the [randomForest](https://cran.r-project.org/web/packages/randomForest/index.html) package. There are also a number of packages that implement variants of the algorithm, and in the past few years several \"big data\"-focused implementations have been contributed to the R ecosystem as well.\n", "\n", "Here is a non-comprehensive list:\n", "\n", "- [randomForest::randomForest](http://www.rdocumentation.org/packages/randomForest/functions/randomForest)\n", "- [h2o::h2o.randomForest](http://www.rdocumentation.org/packages/h2o/functions/h2o.randomForest)\n", "- [DistributedR::hpdRF_parallelForest](https://github.com/vertica/DistributedR/blob/master/algorithms/HPdclassifier/R/hpdRF_parallelForest.R)\n", "- [party::cforest](http://www.rdocumentation.org/packages/party/functions/cforest): A random forest variant for response variables measured at arbitrary scales, based on conditional inference trees.\n", "- [randomForestSRC](https://cran.r-project.org/web/packages/randomForestSRC/index.html) implements a unified treatment of Breiman's random forests for survival, regression and classification problems.\n", "- [quantregForest](https://cran.r-project.org/web/packages/quantregForest/index.html) can regress quantiles of a numeric response on explanatory variables via a random forest approach.\n", "- [ranger](https://cran.r-project.org/web/packages/ranger/index.html)\n", "- [Rborist](https://cran.r-project.org/web/packages/Rborist/index.html)\n", "- The [caret](https://topepo.github.io/caret/index.html) package wraps a number of different Random Forest packages in R ([full list here](https://topepo.github.io/caret/Random_Forest.html)):\n", " - Conditional Inference Random Forest (`party::cforest`)\n", " - Oblique Random Forest (`obliqueRF`)\n", " - Parallel Random Forest (`randomForest` + `foreach`)\n", " - Random Ferns (`rFerns`)\n", " - Random Forest (`randomForest`)\n", " - Random Forest (`ranger`)\n", " - Quantile Random Forest (`quantregForest`)\n", " - Random Forest by Randomization (`extraTrees`)\n", " - Random Forest Rule-Based Model (`inTrees`)\n", " - Random Forest with Additional Feature Selection (`Boruta`)\n", " - Regularized Random Forest (`RRF`)\n", " - Rotation Forest (`rotationForest`)\n", " - Weighted Subspace Random Forest (`wsrf`)\n", "- The [mlr](https://github.com/mlr-org/mlr) package wraps a number of different Random Forest packages in R:\n", " - Conditional Inference Random Forest (`party::cforest`)\n", " - Rotation Forest (`rotationForest`)\n", " - Parallel Forest (`ParallelForest`)\n", " - Survival Forest (`randomForestSRC`)\n", " - Random Ferns (`rFerns`)\n", " - Random Forest (`randomForest`)\n", " - Random Forest (`ranger`)\n", " - Synthetic Random Forest (`randomForestSRC`)\n", " - Random Uniform Forest (`randomUniformForest`)\n", "\n", "Since there are so many different Random Forest implementations available, there have been several benchmarks comparing the performance of popular implementations, including implementations outside of R. A few examples:\n", "1. [Benchmarking Random Forest Classification](http://www.wise.io/tech/benchmarking-random-forest-part-1) by Erin LeDell, 2013\n", "2. [Benchmarking Random Forest Implementations](http://datascience.la/benchmarking-random-forest-implementations/) by Szilard Pafka, 2015\n", "3. [Ranger](http://arxiv.org/pdf/1508.04409v1.pdf) publication by Marvin N. Wright and Andreas Ziegler, 2015\n", "\n", "\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## randomForest\n", "\n", "Authors: Fortran original by [Leo Breiman](http://www.stat.berkeley.edu/~breiman/) and [Adele Cutler](http://www.math.usu.edu/~adele/), R port by [Andy Liaw](https://www.linkedin.com/in/andy-liaw-1399347) and Matthew Wiener.\n", "\n", "Backend: Fortran\n", "\n", "Features:\n", "- This package wraps the original Fortran code by Leo Breiman and Adele Cutler and is probably the most widely known/used implementation in R.\n", "- Single-threaded.\n", "- Although it's single-threaded, smaller forests can be trained in parallel by writing custom [foreach](https://cran.r-project.org/web/packages/foreach/index.html) or [parallel](http://stat.ethz.ch/R-manual/R-devel/library/parallel/doc/parallel.pdf) code, then combined into a bigger forest using the [randomForest::combine()](http://www.rdocumentation.org/packages/randomForest/functions/combine) function (see the sketch below).\n", "- Row weights are unimplemented (they have been on the wishlist for a long time).\n", "- Uses CART trees split by Gini impurity.\n", "- Categorical predictors are allowed to have up to 53 categories.\n", "- A multinomial response can have no more than 32 categories.\n", "- Supports the R formula interface (though some reports claim that it is slower when the formula interface is used).\n", "- GPL-2/3 Licensed." ] },
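{ "cell_type": "markdown", "metadata": {}, "source": [ "As a sketch of the parallel-training trick mentioned in the feature list above (this is the canonical pattern from the `foreach` documentation), four sub-forests of 125 trees each can be grown on separate workers and merged with `randomForest::combine()`. The worker and tree counts are arbitrary choices for illustration, and R's built-in `iris` data is used so the sketch is self-contained:\n", "\n", "```r\n", "library(randomForest)\n", "library(foreach)\n", "library(doParallel)\n", "\n", "registerDoParallel(cores = 4)  # one worker per sub-forest\n", "model <- foreach(ntree = rep(125, 4), .combine = randomForest::combine,\n", "                 .packages = \"randomForest\") %dopar% {\n", "  randomForest(Species ~ ., data = iris, ntree = ntree)\n", "}\n", "model$ntree  # 500 trees total\n", "```" ] },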
{ "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# randomForest example\n", "#install.packages(\"randomForest\")\n", "#install.packages(\"cvAUC\")\n", "library(randomForest)\n", "library(cvAUC)" ] }, { "cell_type": "code", "execution_count": 31, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "<ol class=list-inline><li>10000</li><li>29</li></ol>
\n" ], "text/latex": [ "\\begin{enumerate*}\n", "\\item 10000\n", "\\item 29\n", "\\end{enumerate*}\n" ], "text/markdown": [ "1. 10000\n", "2. 29\n", "\n", "\n" ], "text/plain": [ "[1] 10000 29" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
<ol class=list-inline><li>5000</li><li>29</li></ol>
\n" ], "text/latex": [ "\\begin{enumerate*}\n", "\\item 5000\n", "\\item 29\n", "\\end{enumerate*}\n" ], "text/markdown": [ "1. 5000\n", "2. 29\n", "\n", "\n" ], "text/plain": [ "[1] 5000 29" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
<ol class=list-inline><li>'response'</li><li>'x1'</li><li>'x2'</li><li>'x3'</li><li>'x4'</li><li>'x5'</li><li>'x6'</li><li>'x7'</li><li>'x8'</li><li>'x9'</li><li>'x10'</li><li>'x11'</li><li>'x12'</li><li>'x13'</li><li>'x14'</li><li>'x15'</li><li>'x16'</li><li>'x17'</li><li>'x18'</li><li>'x19'</li><li>'x20'</li><li>'x21'</li><li>'x22'</li><li>'x23'</li><li>'x24'</li><li>'x25'</li><li>'x26'</li><li>'x27'</li><li>'x28'</li></ol>
\n" ], "text/latex": [ "\\begin{enumerate*}\n", "\\item 'response'\n", "\\item 'x1'\n", "\\item 'x2'\n", "\\item 'x3'\n", "\\item 'x4'\n", "\\item 'x5'\n", "\\item 'x6'\n", "\\item 'x7'\n", "\\item 'x8'\n", "\\item 'x9'\n", "\\item 'x10'\n", "\\item 'x11'\n", "\\item 'x12'\n", "\\item 'x13'\n", "\\item 'x14'\n", "\\item 'x15'\n", "\\item 'x16'\n", "\\item 'x17'\n", "\\item 'x18'\n", "\\item 'x19'\n", "\\item 'x20'\n", "\\item 'x21'\n", "\\item 'x22'\n", "\\item 'x23'\n", "\\item 'x24'\n", "\\item 'x25'\n", "\\item 'x26'\n", "\\item 'x27'\n", "\\item 'x28'\n", "\\end{enumerate*}\n" ], "text/markdown": [ "1. 'response'\n", "2. 'x1'\n", "3. 'x2'\n", "4. 'x3'\n", "5. 'x4'\n", "6. 'x5'\n", "7. 'x6'\n", "8. 'x7'\n", "9. 'x8'\n", "10. 'x9'\n", "11. 'x10'\n", "12. 'x11'\n", "13. 'x12'\n", "14. 'x13'\n", "15. 'x14'\n", "16. 'x15'\n", "17. 'x16'\n", "18. 'x17'\n", "19. 'x18'\n", "20. 'x19'\n", "21. 'x20'\n", "22. 'x21'\n", "23. 'x22'\n", "24. 'x23'\n", "25. 'x24'\n", "26. 'x25'\n", "27. 'x26'\n", "28. 'x27'\n", "29. 'x28'\n", "\n", "\n" ], "text/plain": [ " [1] \"response\" \"x1\" \"x2\" \"x3\" \"x4\" \"x5\" \n", " [7] \"x6\" \"x7\" \"x8\" \"x9\" \"x10\" \"x11\" \n", "[13] \"x12\" \"x13\" \"x14\" \"x15\" \"x16\" \"x17\" \n", "[19] \"x18\" \"x19\" \"x20\" \"x21\" \"x22\" \"x23\" \n", "[25] \"x24\" \"x25\" \"x26\" \"x27\" \"x28\" " ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Load binary-response dataset\n", "train <- read.csv(\"data/higgs_train_10k.csv\")\n", "test <- read.csv(\"data/higgs_test_5k.csv\")\n", "\n", "# Dimensions\n", "dim(train)\n", "dim(test)\n", "\n", "# Columns\n", "names(train)" ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Identify the response column\n", "ycol <- \"response\"\n", "\n", "# Identify the predictor columns\n", "xcols <- setdiff(names(train), ycol)\n", "\n", "# Convert response to factor (required by randomForest for classification)\n", "train[,ycol] <- as.factor(train[,ycol])\n", "test[,ycol] <- as.factor(test[,ycol])" ] }, { "cell_type": "code", "execution_count": 33, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ " user system elapsed \n", " 1.492 0.010 1.503 " ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Train an RF model with 50 trees\n", "set.seed(1)  # For reproducibility\n", "system.time(model <- randomForest(x = train[,xcols], \n", "                                  y = train[,ycol],\n", "                                  xtest = test[,xcols],\n", "                                  ntree = 50))" ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "0.76796103462561" ], "text/latex": [ "0.76796103462561" ], "text/markdown": [ "0.76796103462561" ], "text/plain": [ "[1] 0.767961" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Generate predictions on test dataset\n", "preds <- model$test$votes[, 2]\n", "labels <- test[,ycol]\n", "\n", "# Compute AUC on the test set\n", "cvAUC::AUC(predictions = preds, labels = labels)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## caret method \"parRF\"\n", "\n", "Authors: Max Kuhn\n", "\n", "Backend: Fortran (wraps the `randomForest` package)\n", "\n", "This is a wrapper for the `randomForest` package that parallelizes the tree building."
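, "\n", "\n", "After training, test set predictions and AUC can be computed much as in the `randomForest` example above. A minimal sketch (assuming the `train`/`test` data frames loaded earlier; `predict` on a caret model with `type = \"prob\"` returns a data frame of class probabilities, with the second column assumed here to correspond to the positive class):\n", "\n", "```r\n", "preds <- predict(model, newdata = test[, xcols], type = \"prob\")[, 2]\n", "cvAUC::AUC(predictions = preds, labels = test[, ycol])\n", "```"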
] }, { "cell_type": "code", "execution_count": 30, "metadata": { "collapsed": false }, "outputs": [], "source": [ "library(caret)\n", "library(doParallel)\n", "library(e1071)" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Train a \"parRF\" model using caret\n", "registerDoParallel(cores = 8)\n", "\n", "model <- caret::train(x = train[,xcols], \n", "                      y = train[,ycol], \n", "                      method = \"parRF\",\n", "                      preProcess = NULL,\n", "                      weights = NULL,\n", "                      metric = \"Accuracy\",\n", "                      maximize = TRUE,\n", "                      trControl = trainControl(), \n", "                      tuneGrid = NULL,\n", "                      tuneLength = 3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## h2o\n", "\n", "Authors: [Jan Vitek](http://www.cs.purdue.edu/homes/jv/), [Arno Candel](https://www.linkedin.com/in/candel), H2O.ai contributors\n", "\n", "Backend: Java\n", "\n", "Features:\n", "\n", "- Distributed and parallelized computation on either a single node or a multi-node cluster.\n", "- Automatic early stopping based on convergence of user-specified metrics to a user-specified relative tolerance.\n", "- Data-distributed, which means the entire dataset does not need to fit into memory on a single node.\n", "- Uses histogram approximations of continuous variables for speedup.\n", "- Uses squared error to determine optimal splits.\n", "- Support for exponential-family distributions (Poisson, Gamma, Tweedie) and additional loss functions beyond binomial (Bernoulli), Gaussian and multinomial, such as Quantile regression (including Laplace).\n", "- Grid search for hyperparameter optimization and model selection (see the sketch at the end of this section).\n", "- Apache 2.0 Licensed.\n", "- Model export in plain Java code for deployment in production environments.\n", "- GUI for training & model eval/viz (H2O Flow).\n", "\n", "Implementation details are presented in slidedecks by [Michal Malohlava](http://www.slideshare.net/0xdata/rf-brighttalk) and [Jan Vitek](http://www.slideshare.net/0xdata/jan-vitek-distributedrandomforest522013)." ] }, { "cell_type": "code", "execution_count": 38, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "H2O is not running yet, starting it now...\n", "\n", "Note: In case of errors look at the following log files:\n", "    /var/folders/2j/jg4sl53d5q53tc2_nzm9fz5h0000gn/T//RtmpZIIhCg/h2o_me_started_from_r.out\n", "    /var/folders/2j/jg4sl53d5q53tc2_nzm9fz5h0000gn/T//RtmpZIIhCg/h2o_me_started_from_r.err\n", "\n", "\n", "Starting H2O JVM and connecting: . 
Connection successful!\n", "\n", "R is connected to the H2O cluster: \n", " H2O cluster uptime: 1 seconds 107 milliseconds \n", " H2O cluster version: 3.8.2.6 \n", " H2O cluster name: H2O_started_from_R_me_pvt513 \n", " H2O cluster total nodes: 1 \n", " H2O cluster total memory: 3.56 GB \n", " H2O cluster total cores: 8 \n", " H2O cluster allowed cores: 8 \n", " H2O cluster healthy: TRUE \n", " H2O Connection ip: localhost \n", " H2O Connection port: 54321 \n", " H2O Connection proxy: NA \n", " R Version: R version 3.3.0 (2016-05-03) \n", "\n" ] } ], "source": [ "#install.packages(\"h2o\")\n", "library(h2o)\n", "#h2o.shutdown(prompt = FALSE)\n", "h2o.init(nthreads = -1) #Start a local H2O cluster using nthreads = num available cores" ] }, { "cell_type": "code", "execution_count": 39, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " |======================================================================| 100%\n", " |======================================================================| 100%\n" ] }, { "data": { "text/html": [ "
<ol class=list-inline><li>10000</li><li>29</li></ol>
\n" ], "text/latex": [ "\\begin{enumerate*}\n", "\\item 10000\n", "\\item 29\n", "\\end{enumerate*}\n" ], "text/markdown": [ "1. 10000\n", "2. 29\n", "\n", "\n" ], "text/plain": [ "[1] 10000 29" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
<ol class=list-inline><li>5000</li><li>29</li></ol>
\n" ], "text/latex": [ "\\begin{enumerate*}\n", "\\item 5000\n", "\\item 29\n", "\\end{enumerate*}\n" ], "text/markdown": [ "1. 5000\n", "2. 29\n", "\n", "\n" ], "text/plain": [ "[1] 5000 29" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
<ol class=list-inline><li>'response'</li><li>'x1'</li><li>'x2'</li><li>'x3'</li><li>'x4'</li><li>'x5'</li><li>'x6'</li><li>'x7'</li><li>'x8'</li><li>'x9'</li><li>'x10'</li><li>'x11'</li><li>'x12'</li><li>'x13'</li><li>'x14'</li><li>'x15'</li><li>'x16'</li><li>'x17'</li><li>'x18'</li><li>'x19'</li><li>'x20'</li><li>'x21'</li><li>'x22'</li><li>'x23'</li><li>'x24'</li><li>'x25'</li><li>'x26'</li><li>'x27'</li><li>'x28'</li></ol>
\n" ], "text/latex": [ "\\begin{enumerate*}\n", "\\item 'response'\n", "\\item 'x1'\n", "\\item 'x2'\n", "\\item 'x3'\n", "\\item 'x4'\n", "\\item 'x5'\n", "\\item 'x6'\n", "\\item 'x7'\n", "\\item 'x8'\n", "\\item 'x9'\n", "\\item 'x10'\n", "\\item 'x11'\n", "\\item 'x12'\n", "\\item 'x13'\n", "\\item 'x14'\n", "\\item 'x15'\n", "\\item 'x16'\n", "\\item 'x17'\n", "\\item 'x18'\n", "\\item 'x19'\n", "\\item 'x20'\n", "\\item 'x21'\n", "\\item 'x22'\n", "\\item 'x23'\n", "\\item 'x24'\n", "\\item 'x25'\n", "\\item 'x26'\n", "\\item 'x27'\n", "\\item 'x28'\n", "\\end{enumerate*}\n" ], "text/markdown": [ "1. 'response'\n", "2. 'x1'\n", "3. 'x2'\n", "4. 'x3'\n", "5. 'x4'\n", "6. 'x5'\n", "7. 'x6'\n", "8. 'x7'\n", "9. 'x8'\n", "10. 'x9'\n", "11. 'x10'\n", "12. 'x11'\n", "13. 'x12'\n", "14. 'x13'\n", "15. 'x14'\n", "16. 'x15'\n", "17. 'x16'\n", "18. 'x17'\n", "19. 'x18'\n", "20. 'x19'\n", "21. 'x20'\n", "22. 'x21'\n", "23. 'x22'\n", "24. 'x23'\n", "25. 'x24'\n", "26. 'x25'\n", "27. 'x26'\n", "28. 'x27'\n", "29. 'x28'\n", "\n", "\n" ], "text/plain": [ " [1] \"response\" \"x1\" \"x2\" \"x3\" \"x4\" \"x5\" \n", " [7] \"x6\" \"x7\" \"x8\" \"x9\" \"x10\" \"x11\" \n", "[13] \"x12\" \"x13\" \"x14\" \"x15\" \"x16\" \"x17\" \n", "[19] \"x18\" \"x19\" \"x20\" \"x21\" \"x22\" \"x23\" \n", "[25] \"x24\" \"x25\" \"x26\" \"x27\" \"x28\" " ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Load binary-response dataset\n", "train <- h2o.importFile(\"./data/higgs_train_10k.csv\")\n", "test <- h2o.importFile(\"./data/higgs_test_5k.csv\")\n", "\n", "# Dimensions\n", "dim(train)\n", "dim(test)\n", "\n", "# Columns\n", "names(train)" ] }, { "cell_type": "code", "execution_count": 40, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Identify the response column\n", "ycol <- \"response\"\n", "\n", "# Identify the predictor columns\n", "xcols <- setdiff(names(train), ycol)\n", "\n", "# Convert response to factor (required for classification)\n", "train[,ycol] <- as.factor(train[,ycol])\n", "test[,ycol] <- as.factor(test[,ycol])" ] }, { "cell_type": "code", "execution_count": 41, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\r", " | \r", " | | 0%\r", " | \r", " |========== | 14%\r", " | \r", " |======================== | 34%\r", " | \r", " |========================================= | 58%\r", " | \r", " |======================================================= | 78%\r", " | \r", " |======================================================================| 100%\n" ] }, { "data": { "text/plain": [ " user system elapsed \n", " 0.217 0.008 5.502 " ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Train an H2O RF model with 50 trees\n", "\n", "system.time(model <- h2o.randomForest(x = xcols,\n", "                                      y = ycol,\n", "                                      training_frame = train,\n", "                                      seed = 1,  # for reproducibility\n", "                                      ntrees = 50))" ] }, { "cell_type": "code", "execution_count": 42, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "0.769800386918767" ], "text/latex": [ "0.769800386918767" ], "text/markdown": [ "0.769800386918767" ], "text/plain": [ "[1] 0.7698004" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Compute AUC on test dataset\n", "# H2O computes many model performance metrics automatically, including AUC\n", "\n", "perf <- h2o.performance(model = model, newdata = test)\n", "h2o.auc(perf)" ] },
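{ "cell_type": "markdown", "metadata": {}, "source": [ "The grid search mentioned in the feature list above is exposed through `h2o.grid()`. A minimal sketch (the hyperparameter values are arbitrary choices for illustration; with no validation frame supplied, the models are sorted by their training AUC):\n", "\n", "```r\n", "hyper_params <- list(mtries = c(4, 6, 8), max_depth = c(10, 20))\n", "grid <- h2o.grid(algorithm = \"randomForest\",\n", "                 x = xcols, y = ycol,\n", "                 training_frame = train,\n", "                 ntrees = 50, seed = 1,\n", "                 hyper_params = hyper_params)\n", "h2o.getGrid(grid@grid_id, sort_by = \"auc\", decreasing = TRUE)\n", "```" ] },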
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Rborist\n", "\n", "Authors: Mark Seligman\n", "\n", "Backend: C++\n", "\n", "The [Arborist](https://github.com/suiji/Arborist) provides a fast, open-source implementation of the Random Forest algorithm. The Arborist achieves its speed through efficient C++ code and parallel, distributed tree construction. This [slidedeck](http://www.rinfinance.com/agenda/2015/talk/MarkSeligman.pdf) provides detail about the implementation and vision of the project.\n", "\n", "Features:\n", "- Began as a proprietary implementation, but was open-sourced and rewritten after the original venture dissolved.\n", "- The project is called \"Arborist\", but the R package is called \"Rborist\". A Python interface is in development.\n", "- CPU-based, but a GPU version called Curborist (Cuda Rborist) is in development (unclear if it will be open source).\n", "- Unlimited factor cardinality.\n", "- Emphasizes multi-core but not multi-node computation.\n", "- Both Python support and GPU support have been \"coming soon\" since summer 2015; the current status of those projects is unclear.\n", "- GPL-2/3 licensed.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## ranger\n", "\n", "Authors: [Marvin N. Wright](http://www.imbs-luebeck.de/imbs/node/323) and Andreas Ziegler\n", "\n", "Backend: C++\n", "\n", "[Ranger](http://arxiv.org/pdf/1508.04409v1.pdf) is a fast [implementation](https://github.com/imbs-hl/ranger) of random forests (Breiman 2001) or recursive partitioning, particularly suited for high-dimensional data. Classification, regression, probability estimation and survival forests are supported. Classification and regression forests are implemented as in the original Random Forest (Breiman 2001), survival forests as in Random Survival Forests (Ishwaran et al. 2008). For probability estimation forests, see Malley et al. (2012).\n", "\n", "Features:\n", "\n", "- Multi-threaded.\n", "- Direct support for [GWAS](https://en.wikipedia.org/wiki/Genome-wide_association_study) (Genome-wide association study) data.\n", "- Excellent speed and support for high-dimensional or wide data.\n", "- Not as fast for \"tall & skinny\" data (many rows, few columns).\n", "- GPL-3 licensed." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Alt text](./images/ranger_vs_arborist.png \"Ranger vs Rborist\")\n", "Plot from the [ranger article](http://arxiv.org/pdf/1508.04409v1.pdf)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## References\n", "\n", "[1] [L. Breiman, “Random Forests”, Machine Learning, 45(1), 5-32, 2001.](http://www.stat.berkeley.edu/~breiman/randomforest2001.pdf)\n", "\n", "[2] [P. Geurts, D. Ernst, and L. Wehenkel, “Extremely randomized trees”, Machine Learning, 63(1), 3-42, 2006.](http://link.springer.com/article/10.1007%2Fs10994-006-6226-1)\n", "\n", "[3] [X. Wu et al., “Top 10 Algorithms in Data Mining”, Knowledge and Information Systems, 14(1), 1-37, 2008.](http://www.cs.uvm.edu/~icdm/algorithms/10Algorithms-08.pdf)\n" ] } ], "metadata": { "kernelspec": { "display_name": "R", "language": "R", "name": "ir" }, "language_info": { "codemirror_mode": "r", "file_extension": ".r", "mimetype": "text/x-r-source", "name": "R", "pygments_lexer": "r", "version": "3.3.0" } }, "nbformat": 4, "nbformat_minor": 0 }