{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# H2O Tutorial: EEG Eye State Classification\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Author: Erin LeDell\n", "\n", "Contact: erin@h2o.ai\n", "\n", "This tutorial steps through a quick introduction to H2O's R API. The goal of this tutorial is to introduce through a complete example H2O's capabilities from R. \n", "\n", "Most of the functionality for R's `data.frame` is exactly the same syntax for an `H2OFrame`, so if you are comfortable with R, data frame manipulation will come naturally to you in H2O. The modeling syntax in the H2O R API may also remind you of other machine learning packages in R.\n", "\n", "References: [H2O R API documentation](http://h2o-release.s3.amazonaws.com/h2o/latest_stable_Rdoc.html), the [H2O Documentation landing page](http://www.h2o.ai/docs/) and [H2O general documentation](http://h2o-release.s3.amazonaws.com/h2o/latest_stable_doc.html)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Install H2O in R\n", "\n", "### Prerequisites\n", "\n", "This tutorial assumes you have R installed. The `h2o` R package has a few dependencies which can be installed using CRAN. The packages that are required (which also have their own dependencies) can be installed in R as follows:\n", "```r\n", "pkgs <- c(\"methods\",\"statmod\",\"stats\",\"graphics\",\"RCurl\",\"jsonlite\",\"tools\",\"utils\")\n", "for (pkg in pkgs) {\n", " if (! (pkg %in% rownames(installed.packages()))) { install.packages(pkg) }\n", "}\n", "```\n", "\n", "### Install h2o\n", "\n", "Once the dependencies are installed, you can install H2O. We will use the latest stable version of the `h2o` R package, which at the time of writing is H2O v3.8.0.4 (aka \"Tukey-4\"). The latest stable version can be installed using the commands on the [H2O R Installation](http://www.h2o.ai/download/h2o/r) page." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Start up an H2O cluster\n", "\n", "After the R package is installed, we can start up an H2O cluster. 
In an R terminal, we load the `h2o` package and start up an H2O cluster as follows:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Connection successful!\n", "\n", "R is connected to the H2O cluster: \n", " H2O cluster uptime: 22 seconds 115 milliseconds \n", " H2O cluster version: 3.10.0.3 \n", " H2O cluster version age: 9 days \n", " H2O cluster name: H2O_started_from_R_laurend_syo488 \n", " H2O cluster total nodes: 1 \n", " H2O cluster total memory: 3.28 GB \n", " H2O cluster total cores: 8 \n", " H2O cluster allowed cores: 8 \n", " H2O cluster healthy: TRUE \n", " H2O Connection ip: localhost \n", " H2O Connection port: 54321 \n", " H2O Connection proxy: NA \n", " R Version: R version 3.3.1 (2016-06-21) \n", "\n" ] } ], "source": [ "library(h2o)\n", "\n", "# Start an H2O Cluster on your local machine\n", "h2o.init(nthreads = -1) #nthreads = -1 uses all cores on your machine" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you already have an H2O cluster running that you'd like to connect to (for example, in a multi-node Hadoop environment), then you can specify the IP and port of that cluster as follows:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# This will not actually do anything since it's a fake IP address\n", "# h2o.init(ip=\"123.45.67.89\", port=54321)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Download EEG Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following code downloads a copy of the [EEG Eye State](http://archive.ics.uci.edu/ml/datasets/EEG+Eye+State#) dataset. All data is from one continuous EEG measurement with the [Emotiv EEG Neuroheadset](https://emotiv.com/epoc.php). The duration of the measurement was 117 seconds. The eye state was detected via a camera during the EEG measurement and added manually to the file later, after analysing the video frames. '1' indicates the eye-closed and '0' the eye-open state. All values are in chronological order with the first measured value at the top of the data.\n", "\n", "![Emotiv Headset](http://dissociatedpress.com/wp-content/uploads/2013/03/emotiv-490.jpg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can import the data directly into H2O using the `h2o.importFile` function in the R API. The import path can be a URL, a local path, a path to an HDFS file, or a file on Amazon S3." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\r", " | \r", " | | 0%\r", " | \r", " |======================================================================| 100%\n" ] } ], "source": [ "#csv_url <- \"http://www.stat.berkeley.edu/~ledell/data/eeg_eyestate_splits.csv\"\n", "csv_url <- \"https://h2o-public-test-data.s3.amazonaws.com/smalldata/eeg/eeg_eyestate_splits.csv\"\n", "data <- h2o.importFile(csv_url)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Explore Data\n", "Once we have loaded the data, let's take a quick look. First, the dimensions of the frame:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
    \n", "\t
  1. 14980
  2. \n", "\t
  3. 16
  4. \n", "
\n" ], "text/latex": [ "\\begin{enumerate*}\n", "\\item 14980\n", "\\item 16\n", "\\end{enumerate*}\n" ], "text/markdown": [ "1. 14980\n", "2. 16\n", "\n", "\n" ], "text/plain": [ "[1] 14980 16" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dim(data)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's take a look at the top of the frame:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\t\n", "\t\n", "\t\n", "\t\n", "\t\n", "\t\n", "\n", "
<table>\n", "<thead><tr><th></th><th>AF3</th><th>F7</th><th>F3</th><th>FC5</th><th>T7</th><th>P7</th><th>O1</th><th>O2</th><th>P8</th><th>T8</th><th>FC6</th><th>F4</th><th>F8</th><th>AF4</th><th>eyeDetection</th><th>split</th></tr></thead>\n", "<tbody>\n", "\t<tr><th>1</th><td>4329.23</td><td>4009.23</td><td>4289.23</td><td>4148.21</td><td>4350.26</td><td>4586.15</td><td>4096.92</td><td>4641.03</td><td>4222.05</td><td>4238.46</td><td>4211.28</td><td>4280.51</td><td>4635.9</td><td>4393.85</td><td>0</td><td>valid</td></tr>\n", "\t<tr><th>2</th><td>4324.62</td><td>4004.62</td><td>4293.85</td><td>4148.72</td><td>4342.05</td><td>4586.67</td><td>4097.44</td><td>4638.97</td><td>4210.77</td><td>4226.67</td><td>4207.69</td><td>4279.49</td><td>4632.82</td><td>4384.1</td><td>0</td><td>test</td></tr>\n", "\t<tr><th>3</th><td>4327.69</td><td>4006.67</td><td>4295.38</td><td>4156.41</td><td>4336.92</td><td>4583.59</td><td>4096.92</td><td>4630.26</td><td>4207.69</td><td>4222.05</td><td>4206.67</td><td>4282.05</td><td>4628.72</td><td>4389.23</td><td>0</td><td>train</td></tr>\n", "\t<tr><th>4</th><td>4328.72</td><td>4011.79</td><td>4296.41</td><td>4155.9</td><td>4343.59</td><td>4582.56</td><td>4097.44</td><td>4630.77</td><td>4217.44</td><td>4235.38</td><td>4210.77</td><td>4287.69</td><td>4632.31</td><td>4396.41</td><td>0</td><td>train</td></tr>\n", "\t<tr><th>5</th><td>4326.15</td><td>4011.79</td><td>4292.31</td><td>4151.28</td><td>4347.69</td><td>4586.67</td><td>4095.9</td><td>4627.69</td><td>4210.77</td><td>4244.1</td><td>4212.82</td><td>4288.21</td><td>4632.82</td><td>4398.46</td><td>0</td><td>train</td></tr>\n", "\t<tr><th>6</th><td>4321.03</td><td>4004.62</td><td>4284.1</td><td>4153.33</td><td>4345.64</td><td>4587.18</td><td>4093.33</td><td>4616.92</td><td>4202.56</td><td>4232.82</td><td>4209.74</td><td>4281.03</td><td>4628.21</td><td>4389.74</td><td>0</td><td>train</td></tr>\n", "</tbody>\n", "</table>
\n" ], "text/latex": [ "\\begin{tabular}{r|llllllllllllllll}\n", " & AF3 & F7 & F3 & FC5 & T7 & P7 & O1 & O2 & P8 & T8 & FC6 & F4 & F8 & AF4 & eyeDetection & split\\\\\n", "\\hline\n", "\t1 & 4329.23 & 4009.23 & 4289.23 & 4148.21 & 4350.26 & 4586.15 & 4096.92 & 4641.03 & 4222.05 & 4238.46 & 4211.28 & 4280.51 & 4635.9 & 4393.85 & 0 & valid\\\\\n", "\t2 & 4324.62 & 4004.62 & 4293.85 & 4148.72 & 4342.05 & 4586.67 & 4097.44 & 4638.97 & 4210.77 & 4226.67 & 4207.69 & 4279.49 & 4632.82 & 4384.1 & 0 & test\\\\\n", "\t3 & 4327.69 & 4006.67 & 4295.38 & 4156.41 & 4336.92 & 4583.59 & 4096.92 & 4630.26 & 4207.69 & 4222.05 & 4206.67 & 4282.05 & 4628.72 & 4389.23 & 0 & train\\\\\n", "\t4 & 4328.72 & 4011.79 & 4296.41 & 4155.9 & 4343.59 & 4582.56 & 4097.44 & 4630.77 & 4217.44 & 4235.38 & 4210.77 & 4287.69 & 4632.31 & 4396.41 & 0 & train\\\\\n", "\t5 & 4326.15 & 4011.79 & 4292.31 & 4151.28 & 4347.69 & 4586.67 & 4095.9 & 4627.69 & 4210.77 & 4244.1 & 4212.82 & 4288.21 & 4632.82 & 4398.46 & 0 & train\\\\\n", "\t6 & 4321.03 & 4004.62 & 4284.1 & 4153.33 & 4345.64 & 4587.18 & 4093.33 & 4616.92 & 4202.56 & 4232.82 & 4209.74 & 4281.03 & 4628.21 & 4389.74 & 0 & train\\\\\n", "\\end{tabular}\n" ], "text/plain": [ " AF3 F7 F3 FC5 T7 P7 O1 O2 P8\n", "1 4329.23 4009.23 4289.23 4148.21 4350.26 4586.15 4096.92 4641.03 4222.05\n", "2 4324.62 4004.62 4293.85 4148.72 4342.05 4586.67 4097.44 4638.97 4210.77\n", "3 4327.69 4006.67 4295.38 4156.41 4336.92 4583.59 4096.92 4630.26 4207.69\n", "4 4328.72 4011.79 4296.41 4155.90 4343.59 4582.56 4097.44 4630.77 4217.44\n", "5 4326.15 4011.79 4292.31 4151.28 4347.69 4586.67 4095.90 4627.69 4210.77\n", "6 4321.03 4004.62 4284.10 4153.33 4345.64 4587.18 4093.33 4616.92 4202.56\n", " T8 FC6 F4 F8 AF4 eyeDetection split\n", "1 4238.46 4211.28 4280.51 4635.90 4393.85 0 valid\n", "2 4226.67 4207.69 4279.49 4632.82 4384.10 0 test\n", "3 4222.05 4206.67 4282.05 4628.72 4389.23 0 train\n", "4 4235.38 4210.77 4287.69 4632.31 4396.41 0 train\n", "5 4244.10 4212.82 4288.21 4632.82 4398.46 0 train\n", "6 4232.82 4209.74 4281.03 4628.21 4389.74 0 train" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "head(data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The first 14 columns are numeric values that represent EEG measurements from the headset. The \"eyeDetection\" column is the response. There is an additional column called \"split\" that was added (by me) in order to specify partitions of the data (so we can easily benchmark against other tools outside of H2O using the same splits). I randomly divided the dataset into three partitions: train (60%), valid (%20) and test (20%) and marked which split each row belongs to in the \"split\" column.\n", "\n", "Let's take a look at the column names. The data contains derived features from the medical images of the tumors." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
    \n", "\t
  1. 'AF3'
  2. \n", "\t
  3. 'F7'
  4. \n", "\t
  5. 'F3'
  6. \n", "\t
  7. 'FC5'
  8. \n", "\t
  9. 'T7'
  10. \n", "\t
  11. 'P7'
  12. \n", "\t
  13. 'O1'
  14. \n", "\t
  15. 'O2'
  16. \n", "\t
  17. 'P8'
  18. \n", "\t
  19. 'T8'
  20. \n", "\t
  21. 'FC6'
  22. \n", "\t
  23. 'F4'
  24. \n", "\t
  25. 'F8'
  26. \n", "\t
  27. 'AF4'
  28. \n", "\t
  29. 'eyeDetection'
  30. \n", "\t
  31. 'split'
  32. \n", "
\n" ], "text/latex": [ "\\begin{enumerate*}\n", "\\item 'AF3'\n", "\\item 'F7'\n", "\\item 'F3'\n", "\\item 'FC5'\n", "\\item 'T7'\n", "\\item 'P7'\n", "\\item 'O1'\n", "\\item 'O2'\n", "\\item 'P8'\n", "\\item 'T8'\n", "\\item 'FC6'\n", "\\item 'F4'\n", "\\item 'F8'\n", "\\item 'AF4'\n", "\\item 'eyeDetection'\n", "\\item 'split'\n", "\\end{enumerate*}\n" ], "text/markdown": [ "1. 'AF3'\n", "2. 'F7'\n", "3. 'F3'\n", "4. 'FC5'\n", "5. 'T7'\n", "6. 'P7'\n", "7. 'O1'\n", "8. 'O2'\n", "9. 'P8'\n", "10. 'T8'\n", "11. 'FC6'\n", "12. 'F4'\n", "13. 'F8'\n", "14. 'AF4'\n", "15. 'eyeDetection'\n", "16. 'split'\n", "\n", "\n" ], "text/plain": [ " [1] \"AF3\" \"F7\" \"F3\" \"FC5\" \"T7\" \n", " [6] \"P7\" \"O1\" \"O2\" \"P8\" \"T8\" \n", "[11] \"FC6\" \"F4\" \"F8\" \"AF4\" \"eyeDetection\"\n", "[16] \"split\" " ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "names(data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To select a subset of the columns to look at, typical R data.frame indexing applies:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\t\n", "\t\n", "\t\n", "\t\n", "\t\n", "\t\n", "\n", "
<table>\n", "<thead><tr><th></th><th>AF3</th><th>eyeDetection</th><th>split</th></tr></thead>\n", "<tbody>\n", "\t<tr><th>1</th><td>4329.23</td><td>0</td><td>valid</td></tr>\n", "\t<tr><th>2</th><td>4324.62</td><td>0</td><td>test</td></tr>\n", "\t<tr><th>3</th><td>4327.69</td><td>0</td><td>train</td></tr>\n", "\t<tr><th>4</th><td>4328.72</td><td>0</td><td>train</td></tr>\n", "\t<tr><th>5</th><td>4326.15</td><td>0</td><td>train</td></tr>\n", "\t<tr><th>6</th><td>4321.03</td><td>0</td><td>train</td></tr>\n", "</tbody>\n", "</table>
\n" ], "text/latex": [ "\\begin{tabular}{r|lll}\n", " & AF3 & eyeDetection & split\\\\\n", "\\hline\n", "\t1 & 4329.23 & 0 & valid\\\\\n", "\t2 & 4324.62 & 0 & test\\\\\n", "\t3 & 4327.69 & 0 & train\\\\\n", "\t4 & 4328.72 & 0 & train\\\\\n", "\t5 & 4326.15 & 0 & train\\\\\n", "\t6 & 4321.03 & 0 & train\\\\\n", "\\end{tabular}\n" ], "text/plain": [ " AF3 eyeDetection split\n", "1 4329.23 0 valid\n", "2 4324.62 0 test\n", "3 4327.69 0 train\n", "4 4328.72 0 train\n", "5 4326.15 0 train\n", "6 4321.03 0 train" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "columns <- c('AF3', 'eyeDetection', 'split')\n", "head(data[columns])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's select a single column, for example -- the response column, and look at the data more closely:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ " eyeDetection\n", "1 0\n", "2 0\n", "3 0\n", "4 0\n", "5 0\n", "6 0\n", "\n", "[14980 rows x 1 column] " ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y <- 'eyeDetection'\n", "data[y]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It looks like a binary response, but let's validate that assumption:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ " C1\n", "1 0\n", "2 1\n", "\n", "[2 rows x 1 column] " ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "h2o.unique(data[y])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you don't specify the column types when you import the file, H2O makes a guess at what your column types are. If there are 0's and 1's in a column, H2O will automatically parse that as numeric by default. \n", "\n", "Therefore, we should convert the response column to a more efficient \"factor\" representation (called \"enum\" in Java) -- in this case it is a categorial variable with two levels, 0 and 1. If the only column in my data that is categorical is the response, I typically don't bother specifying the column type during the parse, and instead use this one-liner to convert it aftewards:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false }, "outputs": [], "source": [ "data[y] <- as.factor(data[y])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can check that there are two levels in our response column:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "2" ], "text/latex": [ "2" ], "text/markdown": [ "2" ], "text/plain": [ "[1] 2" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "h2o.nlevels(data[y])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can query the categorical \"levels\" as well ('0' and '1' stand for \"eye open\" and \"eye closed\") to see what they are:" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
    \n", "\t
  1. '0'
  2. \n", "\t
  3. '1'
  4. \n", "
\n" ], "text/latex": [ "\\begin{enumerate*}\n", "\\item '0'\n", "\\item '1'\n", "\\end{enumerate*}\n" ], "text/markdown": [ "1. '0'\n", "2. '1'\n", "\n", "\n" ], "text/plain": [ "[1] \"0\" \"1\"" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "h2o.levels(data[y])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We may want to check if there are any missing values, so let's look for NAs in our dataset. For all the supervised H2O algorithms, H2O will handle missing values automatically, so it's not a problem if we are missing certain feature values. However, it is always a good idea to check to make sure that you are not missing any of the training labels. \n", "\n", "To figure out which, if any, values are missing, we can use the `h2o.nacnt` (NA count) method on any H2OFrame (or column). The columns in an H2O Frame are also H2O Frames themselves, so all the methods that apply to an H2OFrame also apply to a single column." ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "0" ], "text/latex": [ "0" ], "text/markdown": [ "0" ], "text/plain": [ "[1] 0" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "h2o.nacnt(data[y])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Great, no missing labels. :-)\n", "\n", "Out of curiosity, let's see if there is any missing data in any of the columsn of this frame:" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
    \n", "\t
  1. 0
  2. \n", "\t
  3. 0
  4. \n", "\t
  5. 0
  6. \n", "\t
  7. 0
  8. \n", "\t
  9. 0
  10. \n", "\t
  11. 0
  12. \n", "\t
  13. 0
  14. \n", "\t
  15. 0
  16. \n", "\t
  17. 0
  18. \n", "\t
  19. 0
  20. \n", "\t
  21. 0
  22. \n", "\t
  23. 0
  24. \n", "\t
  25. 0
  26. \n", "\t
  27. 0
  28. \n", "\t
  29. 0
  30. \n", "\t
  31. 0
  32. \n", "
\n" ], "text/latex": [ "\\begin{enumerate*}\n", "\\item 0\n", "\\item 0\n", "\\item 0\n", "\\item 0\n", "\\item 0\n", "\\item 0\n", "\\item 0\n", "\\item 0\n", "\\item 0\n", "\\item 0\n", "\\item 0\n", "\\item 0\n", "\\item 0\n", "\\item 0\n", "\\item 0\n", "\\item 0\n", "\\end{enumerate*}\n" ], "text/markdown": [ "1. 0\n", "2. 0\n", "3. 0\n", "4. 0\n", "5. 0\n", "6. 0\n", "7. 0\n", "8. 0\n", "9. 0\n", "10. 0\n", "11. 0\n", "12. 0\n", "13. 0\n", "14. 0\n", "15. 0\n", "16. 0\n", "\n", "\n" ], "text/plain": [ " [1] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "h2o.nacnt(data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Each column returns a zero, so there are no missing values in any of the columns." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next thing I may wonder about in a binary classification problem is the distribution of the response in the training data. Is one of the two outcomes under-represented in the training set? Many real datasets have what's called an \"imbalanace\" problem, where one of the classes has far fewer training examples than the other class. Let's take a look at the distribution:" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ " eyeDetection Count\n", "1 0 8257\n", "2 1 6723\n", "\n", "[2 rows x 2 columns] " ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "h2o.table(data[y])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ok, the data is not exactly evenly distributed between the two classes -- there are more 0's than 1's in the dataset. However, this level of imbalance shouldn't be much of an issue for the machine learning algos. (We will revisit this later in the modeling section below).\n", "\n", "Let's calculate the percentage that each class represents:" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ " Count\n", "1 0.5512016\n", "2 0.4487984\n", "\n", "[2 rows x 1 column] " ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "n <- nrow(data) # Total number of training samples\n", "h2o.table(data[y])['Count']/n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Split H2O Frame into a train and test set\n", "\n", "So far we have explored the original dataset (all rows). For the machine learning portion of this tutorial, we will break the dataset into three parts: a training set, validation set and a test set.\n", "\n", "If you want H2O to do the splitting for you, you can use the `split_frame` method. However, we have explicit splits that we want (for reproducibility reasons), so we can just subset the Frame to get the partitions we want. 
\n", "\n", "Subset the `data` H2O Frame on the \"split\" column:" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "8988" ], "text/latex": [ "8988" ], "text/markdown": [ "8988" ], "text/plain": [ "[1] 8988" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train <- data[data['split']==\"train\",]\n", "nrow(train)" ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "2996" ], "text/latex": [ "2996" ], "text/markdown": [ "2996" ], "text/plain": [ "[1] 2996" ] }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ], "source": [ "valid <- data[data['split']==\"valid\",]\n", "nrow(valid)" ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "2996" ], "text/latex": [ "2996" ], "text/markdown": [ "2996" ], "text/plain": [ "[1] 2996" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "test <- data[data['split']==\"test\",]\n", "nrow(test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Machine Learning in H2O\n", "\n", "We will do a quick demo of the H2O software using a Gradient Boosting Machine (GBM). The goal of this problem is to train a model to predict eye state (open vs closed) from EEG data. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Train and Test a GBM model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the steps above, we have already created the training set and validation set, so the next step is to specify the predictor set and response variable.\n", "\n", "#### Specify the predictor set and response\n", "\n", "As with any machine learning algorithm, we need to specify the response and predictor columns in the training set. \n", "\n", "The `x` argument should be a vector of predictor names in the training frame, and `y` specifies the response column. We have already set `y <- \"eyeDetector\"` above, but we still need to specify `x`." ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
    \n", "\t
  1. 'AF3'
  2. \n", "\t
  3. 'F7'
  4. \n", "\t
  5. 'F3'
  6. \n", "\t
  7. 'FC5'
  8. \n", "\t
  9. 'T7'
  10. \n", "\t
  11. 'P7'
  12. \n", "\t
  13. 'O1'
  14. \n", "\t
  15. 'O2'
  16. \n", "\t
  17. 'P8'
  18. \n", "\t
  19. 'T8'
  20. \n", "\t
  21. 'FC6'
  22. \n", "\t
  23. 'F4'
  24. \n", "\t
  25. 'F8'
  26. \n", "\t
  27. 'AF4'
  28. \n", "\t
  29. 'eyeDetection'
  30. \n", "\t
  31. 'split'
  32. \n", "
\n" ], "text/latex": [ "\\begin{enumerate*}\n", "\\item 'AF3'\n", "\\item 'F7'\n", "\\item 'F3'\n", "\\item 'FC5'\n", "\\item 'T7'\n", "\\item 'P7'\n", "\\item 'O1'\n", "\\item 'O2'\n", "\\item 'P8'\n", "\\item 'T8'\n", "\\item 'FC6'\n", "\\item 'F4'\n", "\\item 'F8'\n", "\\item 'AF4'\n", "\\item 'eyeDetection'\n", "\\item 'split'\n", "\\end{enumerate*}\n" ], "text/markdown": [ "1. 'AF3'\n", "2. 'F7'\n", "3. 'F3'\n", "4. 'FC5'\n", "5. 'T7'\n", "6. 'P7'\n", "7. 'O1'\n", "8. 'O2'\n", "9. 'P8'\n", "10. 'T8'\n", "11. 'FC6'\n", "12. 'F4'\n", "13. 'F8'\n", "14. 'AF4'\n", "15. 'eyeDetection'\n", "16. 'split'\n", "\n", "\n" ], "text/plain": [ " [1] \"AF3\" \"F7\" \"F3\" \"FC5\" \"T7\" \n", " [6] \"P7\" \"O1\" \"O2\" \"P8\" \"T8\" \n", "[11] \"FC6\" \"F4\" \"F8\" \"AF4\" \"eyeDetection\"\n", "[16] \"split\" " ] }, "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [ "names(train)" ] }, { "cell_type": "code", "execution_count": 35, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
    \n", "\t
  1. 'AF3'
  2. \n", "\t
  3. 'F7'
  4. \n", "\t
  5. 'F3'
  6. \n", "\t
  7. 'FC5'
  8. \n", "\t
  9. 'T7'
  10. \n", "\t
  11. 'P7'
  12. \n", "\t
  13. 'O1'
  14. \n", "\t
  15. 'O2'
  16. \n", "\t
  17. 'P8'
  18. \n", "\t
  19. 'T8'
  20. \n", "\t
  21. 'FC6'
  22. \n", "\t
  23. 'F4'
  24. \n", "\t
  25. 'F8'
  26. \n", "\t
  27. 'AF4'
  28. \n", "
\n" ], "text/latex": [ "\\begin{enumerate*}\n", "\\item 'AF3'\n", "\\item 'F7'\n", "\\item 'F3'\n", "\\item 'FC5'\n", "\\item 'T7'\n", "\\item 'P7'\n", "\\item 'O1'\n", "\\item 'O2'\n", "\\item 'P8'\n", "\\item 'T8'\n", "\\item 'FC6'\n", "\\item 'F4'\n", "\\item 'F8'\n", "\\item 'AF4'\n", "\\end{enumerate*}\n" ], "text/markdown": [ "1. 'AF3'\n", "2. 'F7'\n", "3. 'F3'\n", "4. 'FC5'\n", "5. 'T7'\n", "6. 'P7'\n", "7. 'O1'\n", "8. 'O2'\n", "9. 'P8'\n", "10. 'T8'\n", "11. 'FC6'\n", "12. 'F4'\n", "13. 'F8'\n", "14. 'AF4'\n", "\n", "\n" ], "text/plain": [ " [1] \"AF3\" \"F7\" \"F3\" \"FC5\" \"T7\" \"P7\" \"O1\" \"O2\" \"P8\" \"T8\" \"FC6\" \"F4\" \n", "[13] \"F8\" \"AF4\"" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x <- setdiff(names(train), c(\"eyeDetection\", \"split\")) #Remove the 13th and 14th columns\n", "x" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have specified `x` and `y`, we can train the GBM model using a few non-default model parameters. Since we are predicting a binary response, we set `distribution = \"bernoulli\"`." ] }, { "cell_type": "code", "execution_count": 37, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\r", " | \r", " | | 0%\r", " | \r", " |============= | 19%\r", " | \r", " |==================== | 29%\r", " | \r", " |=========================== | 38%\r", " | \r", " |================================ | 45%\r", " | \r", " |======================================================================| 100%\n" ] } ], "source": [ "model <- h2o.gbm(x = x, y = y,\n", " training_frame = train,\n", " validation_frame = valid,\n", " distribution = \"bernoulli\",\n", " ntrees = 100,\n", " max_depth = 4,\n", " learn_rate = 0.1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Inspect Model\n", "\n", "The type of results shown when you print a model, are determined by the following:\n", "- Model class of the estimator (e.g. GBM, RF, GLM, DL)\n", "- The type of machine learning problem (e.g. binary classification, multiclass classification, regression)\n", "- The data you specify (e.g. `training_frame` only, `training_frame` and `validation_frame`, or `training_frame` and `nfolds`)\n", "\n", "Below, we see a GBM Model Summary, as well as training and validation metrics since we supplied a `validation_frame`. Since this a binary classification task, we are shown the relevant performance metrics, which inclues: MSE, R^2, LogLoss, AUC and Gini. Also, we are shown a Confusion Matrix, where the threshold for classification is chosen automatically (by H2O) as the threshold which maximizes the F1 score.\n", "\n", "The scoring history is also printed, which shows the performance metrics over some increment such as \"number of trees\" in the case of GBM and RF.\n", "\n", "Lastly, for tree-based methods (GBM and RF), we also print variable importance." ] }, { "cell_type": "code", "execution_count": 38, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Model Details:\n", "==============\n", "\n", "H2OBinomialModel: gbm\n", "Model ID: GBM_model_R_1456125581863_170 \n", "Model Summary: \n", " number_of_trees model_size_in_bytes min_depth max_depth mean_depth min_leaves\n", "1 100 23828 4 4 4.00000 12\n", " max_leaves mean_leaves\n", "1 16 15.17000\n", "\n", "\n", "H2OBinomialMetrics: gbm\n", "** Reported on training data. 
**\n", "\n", "MSE: 0.1076065\n", "R^2: 0.5657448\n", "LogLoss: 0.3600893\n", "AUC: 0.9464642\n", "Gini: 0.8929284\n", "\n", "Confusion Matrix for F1-optimal threshold:\n", " 0 1 Error Rate\n", "0 4281 635 0.129170 =635/4916\n", "1 537 3535 0.131876 =537/4072\n", "Totals 4818 4170 0.130396 =1172/8988\n", "\n", "Maximum Metrics: Maximum metrics at their respective thresholds\n", " metric threshold value idx\n", "1 max f1 0.450886 0.857802 206\n", "2 max f2 0.316901 0.899723 262\n", "3 max f0point5 0.582904 0.882212 158\n", "4 max accuracy 0.463161 0.870939 202\n", "5 max precision 0.990029 1.000000 0\n", "6 max recall 0.062219 1.000000 381\n", "7 max specificity 0.990029 1.000000 0\n", "8 max absolute_MCC 0.463161 0.739650 202\n", "9 max min_per_class_accuracy 0.448664 0.868999 207\n", "\n", "Gains/Lift Table: Extract with `h2o.gainsLift(, )` or `h2o.gainsLift(, valid=, xval=)`\n", "H2OBinomialMetrics: gbm\n", "** Reported on validation data. **\n", "\n", "MSE: 0.1200838\n", "R^2: 0.5156133\n", "LogLoss: 0.3894633\n", "AUC: 0.9238635\n", "Gini: 0.8477271\n", "\n", "Confusion Matrix for F1-optimal threshold:\n", " 0 1 Error Rate\n", "0 1328 307 0.187768 =307/1635\n", "1 176 1185 0.129317 =176/1361\n", "Totals 1504 1492 0.161215 =483/2996\n", "\n", "Maximum Metrics: Maximum metrics at their respective thresholds\n", " metric threshold value idx\n", "1 max f1 0.425963 0.830705 227\n", "2 max f2 0.329543 0.887175 268\n", "3 max f0point5 0.606576 0.850985 156\n", "4 max accuracy 0.482265 0.846796 206\n", "5 max precision 0.980397 1.000000 0\n", "6 max recall 0.084627 1.000000 374\n", "7 max specificity 0.980397 1.000000 0\n", "8 max absolute_MCC 0.482265 0.690786 206\n", "9 max min_per_class_accuracy 0.458183 0.839089 215\n", "\n", "Gains/Lift Table: Extract with `h2o.gainsLift(, )` or `h2o.gainsLift(, valid=, xval=)`\n" ] } ], "source": [ "print(model)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Model Performance on a Test Set\n", "\n", "Once a model has been trained, you can also use it to make predictions on a test set. In the case above, we just ran the model once, so our validation set (passed as `validation_frame`), could have also served as a \"test set.\" We technically have already created test set predictions and evaluated test set performance. \n", "\n", "However, when performing model selection over a variety of model parameters, it is common for users to train a variety of models (using different parameters) using the training set, `train`, and a validation set, `valid`. Once the user selects the best model (based on validation set performance), the true test of model performance is performed by making a final set of predictions on the held-out (never been used before) test set, `test`.\n", "\n", "You can use the `model_performance` method to generate predictions on a new dataset. The results are stored in an object of class, `\"H2OBinomialMetrics\"`. 
" ] }, { "cell_type": "code", "execution_count": 39, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "'H2OBinomialMetrics'" ], "text/latex": [ "'H2OBinomialMetrics'" ], "text/markdown": [ "'H2OBinomialMetrics'" ], "text/plain": [ "[1] \"H2OBinomialMetrics\"\n", "attr(,\"package\")\n", "[1] \"h2o\"" ] }, "execution_count": 39, "metadata": {}, "output_type": "execute_result" } ], "source": [ "perf <- h2o.performance(model = model, newdata = test)\n", "class(perf)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Individual model performance metrics can be extracted using methods like `r2`, `auc` and `mse`. In the case of binary classification, we may be most interested in evaluating test set Area Under the ROC Curve (AUC). " ] }, { "cell_type": "code", "execution_count": 40, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "0.522887989343577" ], "text/latex": [ "0.522887989343577" ], "text/markdown": [ "0.522887989343577" ], "text/plain": [ "[1] 0.522888" ] }, "execution_count": 40, "metadata": {}, "output_type": "execute_result" } ], "source": [ "h2o.r2(perf)" ] }, { "cell_type": "code", "execution_count": 41, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "0.92854017285095" ], "text/latex": [ "0.92854017285095" ], "text/markdown": [ "0.92854017285095" ], "text/plain": [ "[1] 0.9285402" ] }, "execution_count": 41, "metadata": {}, "output_type": "execute_result" } ], "source": [ "h2o.auc(perf)" ] }, { "cell_type": "code", "execution_count": 42, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "0.116978343881296" ], "text/latex": [ "0.116978343881296" ], "text/markdown": [ "0.116978343881296" ], "text/plain": [ "[1] 0.1169783" ] }, "execution_count": 42, "metadata": {}, "output_type": "execute_result" } ], "source": [ "h2o.mse(perf)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Cross-validated Performance\n", "\n", "To perform k-fold cross-validation, you use the same code as above, but you specify `nfolds` as an integer greater than 1, or add a \"fold_column\" to your H2O Frame which indicates a fold ID for each row.\n", "\n", "Unless you have a specific reason to manually assign the observations to folds, you will find it easiest to simply use the `nfolds` argument.\n", "\n", "When performing cross-validation, you can still pass a `validation_frame`, but you can also choose to use the original dataset that contains all the rows. We will cross-validate a model below using the original H2O Frame which is called `data`." 
] }, { "cell_type": "code", "execution_count": 43, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\r", " | \r", " | | 0%\r", " | \r", " |=== | 4%\r", " | \r", " |====== | 8%\r", " | \r", " |======== | 11%\r", " | \r", " |========= | 13%\r", " | \r", " |===================== | 30%\r", " | \r", " |======================== | 34%\r", " | \r", " |========================== | 38%\r", " | \r", " |============================ | 41%\r", " | \r", " |=============================== | 45%\r", " | \r", " |========================================= | 58%\r", " | \r", " |========================================== | 61%\r", " | \r", " |============================================ | 62%\r", " | \r", " |============================================= | 64%\r", " | \r", " |================================================= | 70%\r", " | \r", " |==================================================== | 74%\r", " | \r", " |===================================================== | 75%\r", " | \r", " |====================================================== | 77%\r", " | \r", " |====================================================== | 78%\r", " | \r", " |======================================================================| 100%\n" ] } ], "source": [ "cvmodel <- h2o.gbm(x = x, y = y,\n", " training_frame = train,\n", " validation_frame = valid,\n", " distribution = \"bernoulli\",\n", " ntrees = 100,\n", " max_depth = 4,\n", " learn_rate = 0.1,\n", " nfolds = 5)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This time around, we will simply pull the training and cross-validation metrics out of the model. To do so, you use the `auc` method again, and you can specify `train` or `xval` as `TRUE` to get the correct metric." ] }, { "cell_type": "code", "execution_count": 44, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[1] 0.9464642\n", "[1] 0.9218678\n" ] } ], "source": [ "print(h2o.auc(cvmodel, train = TRUE))\n", "print(h2o.auc(cvmodel, xval = TRUE))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Grid Search\n", "\n", "One way of evaluting models with different parameters is to perform a grid search over a set of parameter values. For example, in GBM, here are three model parameters that may be useful to search over:\n", "- `ntrees`: Number of trees\n", "- `max_depth`: Maximum depth of a tree\n", "- `learn_rate`: Learning rate in the GBM\n", "\n", "We will define a grid as follows:" ] }, { "cell_type": "code", "execution_count": 45, "metadata": { "collapsed": true }, "outputs": [], "source": [ "ntrees_opt <- c(5,50,100)\n", "max_depth_opt <- c(2,3,5)\n", "learn_rate_opt <- c(0.1,0.2)\n", "\n", "hyper_params = list('ntrees' = ntrees_opt,\n", " 'max_depth' = max_depth_opt,\n", " 'learn_rate' = learn_rate_opt)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `h2o.grid` function can be used to train a `\"H2OGrid\"` object for any of the H2O algorithms (specified by the `\"algorithm\"` argument." 
] }, { "cell_type": "code", "execution_count": 52, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\r", " | \r", " | | 0%\r", " | \r", " |== | 3%\r", " | \r", " |=== | 5%\r", " | \r", " |==== | 6%\r", " | \r", " |====== | 8%\r", " | \r", " |======= | 10%\r", " | \r", " |======== | 11%\r", " | \r", " |============ | 17%\r", " | \r", " |============== | 20%\r", " | \r", " |=============== | 21%\r", " | \r", " |=============== | 22%\r", " | \r", " |================ | 23%\r", " | \r", " |================== | 25%\r", " | \r", " |================== | 26%\r", " | \r", " |=================== | 27%\r", " | \r", " |==================== | 28%\r", " | \r", " |========================= | 35%\r", " | \r", " |========================== | 37%\r", " | \r", " |========================== | 38%\r", " | \r", " |=========================== | 39%\r", " | \r", " |============================ | 40%\r", " | \r", " |============================= | 42%\r", " | \r", " |============================== | 43%\r", " | \r", " |=============================== | 44%\r", " | \r", " |=============================== | 45%\r", " | \r", " |===================================== | 52%\r", " | \r", " |====================================== | 54%\r", " | \r", " |======================================= | 55%\r", " | \r", " |======================================== | 56%\r", " | \r", " |========================================= | 59%\r", " | \r", " |========================================== | 60%\r", " | \r", " |=========================================== | 61%\r", " | \r", " |============================================ | 62%\r", " | \r", " |=============================================== | 68%\r", " | \r", " |================================================= | 71%\r", " | \r", " |================================================== | 72%\r", " | \r", " |=================================================== | 73%\r", " | \r", " |=================================================== | 74%\r", " | \r", " |===================================================== | 76%\r", " | \r", " |====================================================== | 77%\r", " | \r", " |======================================================= | 78%\r", " | \r", " |======================================================= | 79%\r", " | \r", " |============================================================ | 85%\r", " | \r", " |============================================================= | 87%\r", " | \r", " |============================================================== | 88%\r", " | \r", " |=============================================================== | 89%\r", " | \r", " |=============================================================== | 91%\r", " | \r", " |================================================================= | 93%\r", " | \r", " |================================================================== | 94%\r", " | \r", " |================================================================== | 95%\r", " | \r", " |=================================================================== | 95%\r", " | \r", " |======================================================================| 100%\n" ] } ], "source": [ "gs <- h2o.grid(algorithm = \"gbm\", \n", " grid_id = \"eeg_demo_gbm_grid\",\n", " hyper_params = hyper_params,\n", " x = x, y = y, \n", " training_frame = train, \n", " validation_frame = valid)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Compare Models" ] }, { "cell_type": "code", 
"execution_count": 53, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "H2O Grid Details\n", "================\n", "\n", "Grid ID: eeg_demo_gbm_grid \n", "Used hyper parameters: \n", " - ntrees \n", " - max_depth \n", " - learn_rate \n", "Number of models: 18 \n", "Number of failed models: 0 \n", "\n", "Hyper-Parameter Search Summary: ordered by increasing logloss\n", " ntrees max_depth learn_rate model_ids logloss\n", "1 100 5 0.2 eeg_demo_gbm_grid_model_17 0.24919767209732\n", "2 50 5 0.2 eeg_demo_gbm_grid_model_16 0.321319350389403\n", "3 100 5 0.1 eeg_demo_gbm_grid_model_8 0.325041939824682\n", "4 100 3 0.2 eeg_demo_gbm_grid_model_14 0.398168927969941\n", "5 50 5 0.1 eeg_demo_gbm_grid_model_7 0.402409215186705\n", "6 50 3 0.2 eeg_demo_gbm_grid_model_13 0.455260965151754\n", "7 100 3 0.1 eeg_demo_gbm_grid_model_5 0.463893147947061\n", "8 50 3 0.1 eeg_demo_gbm_grid_model_4 0.51734929422505\n", "9 100 2 0.2 eeg_demo_gbm_grid_model_11 0.530497456235128\n", "10 5 5 0.2 eeg_demo_gbm_grid_model_15 0.548389974989351\n", "11 50 2 0.2 eeg_demo_gbm_grid_model_10 0.561668599565429\n", "12 100 2 0.1 eeg_demo_gbm_grid_model_2 0.564235794490373\n", "13 50 2 0.1 eeg_demo_gbm_grid_model_1 0.594214675563477\n", "14 5 5 0.1 eeg_demo_gbm_grid_model_6 0.600327168524549\n", "15 5 3 0.2 eeg_demo_gbm_grid_model_12 0.610367851324487\n", "16 5 3 0.1 eeg_demo_gbm_grid_model_3 0.642100038024138\n", "17 5 2 0.2 eeg_demo_gbm_grid_model_9 0.647268487315379\n", "18 5 2 0.1 eeg_demo_gbm_grid_model_0 0.663560995637836\n" ] } ], "source": [ "print(gs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By default, grids of models will return the grid results sorted by (increasing) logloss on the validation set. However, if we are interested in sorting on another model performance metric, we can do that using the `h2o.getGrid` function as follows:" ] }, { "cell_type": "code", "execution_count": 56, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "H2O Grid Details\n", "================\n", "\n", "Grid ID: eeg_demo_gbm_grid \n", "Used hyper parameters: \n", " - ntrees \n", " - max_depth \n", " - learn_rate \n", "Number of models: 18 \n", "Number of failed models: 0 \n", "\n", "Hyper-Parameter Search Summary: ordered by decreasing auc\n", " ntrees max_depth learn_rate model_ids auc\n", "1 100 5 0.2 eeg_demo_gbm_grid_model_17 0.967771493797284\n", "2 50 5 0.2 eeg_demo_gbm_grid_model_16 0.949609591795923\n", "3 100 5 0.1 eeg_demo_gbm_grid_model_8 0.94941792664595\n", "4 50 5 0.1 eeg_demo_gbm_grid_model_7 0.922075196552274\n", "5 100 3 0.2 eeg_demo_gbm_grid_model_14 0.913785959685157\n", "6 50 3 0.2 eeg_demo_gbm_grid_model_13 0.887706691652792\n", "7 100 3 0.1 eeg_demo_gbm_grid_model_5 0.884064379717198\n", "8 5 5 0.2 eeg_demo_gbm_grid_model_15 0.851187402678818\n", "9 50 3 0.1 eeg_demo_gbm_grid_model_4 0.848921799270639\n", "10 5 5 0.1 eeg_demo_gbm_grid_model_6 0.825662907513139\n", "11 100 2 0.2 eeg_demo_gbm_grid_model_11 0.812030639460551\n", "12 50 2 0.2 eeg_demo_gbm_grid_model_10 0.785379521713437\n", "13 100 2 0.1 eeg_demo_gbm_grid_model_2 0.78299280750123\n", "14 5 3 0.2 eeg_demo_gbm_grid_model_12 0.774673686150002\n", "15 50 2 0.1 eeg_demo_gbm_grid_model_1 0.754834657912535\n", "16 5 3 0.1 eeg_demo_gbm_grid_model_3 0.749285131682721\n", "17 5 2 0.2 eeg_demo_gbm_grid_model_9 0.692702793188135\n", "18 5 2 0.1 eeg_demo_gbm_grid_model_0 0.676144542037133\n" ] } ], "source": [ "# print out the auc for 
all of the models\n", "auc_table <- h2o.getGrid(grid_id = \"eeg_demo_gbm_grid\", sort_by = \"auc\", decreasing = TRUE)\n", "print(auc_table)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The \"best\" model in terms of validation set AUC is listed first in auc_table." ] }, { "cell_type": "code", "execution_count": 71, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "0.967771493797284" ], "text/latex": [ "0.967771493797284" ], "text/markdown": [ "0.967771493797284" ], "text/plain": [ "[1] 0.9677715" ] }, "execution_count": 71, "metadata": {}, "output_type": "execute_result" } ], "source": [ "best_model <- h2o.getModel(auc_table@model_ids[[1]])\n", "h2o.auc(best_model, valid = TRUE) #Validation AUC for best model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The last thing we may want to do is generate predictions on the test set using the \"best\" model, and evaluate the test set AUC." ] }, { "cell_type": "code", "execution_count": 72, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "0.97191172060307" ], "text/latex": [ "0.97191172060307" ], "text/markdown": [ "0.97191172060307" ], "text/plain": [ "[1] 0.9719117" ] }, "execution_count": 72, "metadata": {}, "output_type": "execute_result" } ], "source": [ "best_perf <- h2o.performance(model = best_model, newdata = test)\n", "h2o.auc(best_perf)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The test set AUC is approximately 0.97. Not bad!!" ] } ], "metadata": { "kernelspec": { "display_name": "R", "language": "R", "name": "ir" }, "language_info": { "codemirror_mode": "r", "file_extension": ".r", "mimetype": "text/x-r-source", "name": "R", "pygments_lexer": "r", "version": "3.3.1" } }, "nbformat": 4, "nbformat_minor": 0 }