{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# H2O GBM Tuning Tutorial for Python\n", "\n", "### Arno Candel, PhD, Chief Architect, H2O.ai\n", "### Ported to Python by Navdeep Gill, M.S., Hacker/Data Scientist, H2O.ai\n", "\n", "In this tutorial, we show how to build a well-tuned H2O GBM model for a supervised classification task. We specifically don't focus on feature engineering and use a small dataset to allow you to reproduce these results in a few minutes on a laptop. This script can be directly transferred to datasets that are hundreds of GBs large and H2O clusters with dozens of compute nodes.\n", "\n", "You can download the source [from H2O's github repository](https://github.com/h2oai/h2o-3/blob/master/h2o-docs/src/product/tutorials/gbm/gbmTuning.ipynb).\n", "\n", "Ports to [R Markdown](https://github.com/h2oai/h2o-3/blob/master/h2o-docs/src/product/tutorials/gbm/gbmTuning.Rmd) and [Flow UI](https://raw.githubusercontent.com/h2oai/h2o-3/master/h2o-docs/src/product/flow/packs/examples/GBM_TuningGuide.flow) (now part of Example Flows) are available as well.\n", "\n", "## Installation of the H2O Python Package\n", "Either download H2O from [H2O.ai's website](http://h2o.ai/download) or install the latest version of H2O into Python with the following set of commands:" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "Install dependencies from command line (prepending with `sudo` if needed):\n", "\n", "```\n", "[sudo] pip install -U requests\n", "[sudo] pip install -U tabulate\n", "```\n", "\n", "The following command removes the H2O module for Python.\n", "```\n", "[sudo] pip uninstall h2o\n", "```\n", "\n", "Next, use pip to install this version of the H2O Python module.\n", "```\n", "[sudo] pip install http://h2o-release.s3.amazonaws.com/h2o/rel-zahradnik/3/Python/h2o-3.30.0.3-py2.py3-none-any.whl\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Launch an H2O cluster on localhost" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Checking whether there is an H2O instance running at http://localhost:54321 ..... not found.\n", "Attempting to start a local H2O server...\n", " Java Version: java version \"1.8.0_231\"; Java(TM) SE Runtime Environment (build 1.8.0_231-b11); Java HotSpot(TM) 64-Bit Server VM (build 25.231-b11, mixed mode)\n", " Starting server from /Users/nmashayekhi/anaconda3/envs/py_36_new/lib/python3.6/site-packages/h2o/backend/bin/h2o.jar\n", " Ice root: /var/folders/pf/w6ctt7r5639fbfclslj7nw2c0000gp/T/tmp4c3rdmax\n", " JVM stdout: /var/folders/pf/w6ctt7r5639fbfclslj7nw2c0000gp/T/tmp4c3rdmax/h2o_nmashayekhi_started_from_python.out\n", " JVM stderr: /var/folders/pf/w6ctt7r5639fbfclslj7nw2c0000gp/T/tmp4c3rdmax/h2o_nmashayekhi_started_from_python.err\n", " Server is running at http://127.0.0.1:54321\n", "Connecting to H2O server at http://127.0.0.1:54321 ... successful.\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
H2O_cluster_uptime:01 secs
H2O_cluster_timezone:America/Los_Angeles
H2O_data_parsing_timezone:UTC
H2O_cluster_version:3.30.0.3
H2O_cluster_version_age:8 days
H2O_cluster_name:H2O_from_python_nmashayekhi_sfscj0
H2O_cluster_total_nodes:1
H2O_cluster_free_memory:3.556 Gb
H2O_cluster_total_cores:16
H2O_cluster_allowed_cores:16
H2O_cluster_status:accepting new members, healthy
H2O_connection_url:http://127.0.0.1:54321
H2O_connection_proxy:{\"http\": null, \"https\": null}
H2O_internal_security:False
H2O_API_Extensions:Amazon S3, XGBoost, Algos, AutoML, Core V3, TargetEncoder, Core V4
Python_version:3.6.9 final
" ], "text/plain": [ "-------------------------- ------------------------------------------------------------------\n", "H2O_cluster_uptime: 01 secs\n", "H2O_cluster_timezone: America/Los_Angeles\n", "H2O_data_parsing_timezone: UTC\n", "H2O_cluster_version: 3.30.0.3\n", "H2O_cluster_version_age: 8 days\n", "H2O_cluster_name: H2O_from_python_nmashayekhi_sfscj0\n", "H2O_cluster_total_nodes: 1\n", "H2O_cluster_free_memory: 3.556 Gb\n", "H2O_cluster_total_cores: 16\n", "H2O_cluster_allowed_cores: 16\n", "H2O_cluster_status: accepting new members, healthy\n", "H2O_connection_url: http://127.0.0.1:54321\n", "H2O_connection_proxy: {\"http\": null, \"https\": null}\n", "H2O_internal_security: False\n", "H2O_API_Extensions: Amazon S3, XGBoost, Algos, AutoML, Core V3, TargetEncoder, Core V4\n", "Python_version: 3.6.9 final\n", "-------------------------- ------------------------------------------------------------------" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "import h2o\n", "import numpy as np\n", "import math\n", "from h2o.estimators.gbm import H2OGradientBoostingEstimator\n", "from h2o.grid.grid_search import H2OGridSearch\n", "h2o.init(nthreads=-1, strict_version_check=True)\n", "## optional: connect to a running H2O cluster\n", "#h2o.init(ip=\"mycluster\", port=55555)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## Import the data into H2O \n", "Everything is scalable and distributed from now on. All processing is done on the fully multi-threaded and distributed H2O Java-based backend and can be scaled to large datasets on large compute clusters.\n", "Here, we use a small public dataset ([Titanic](https://stat.ethz.ch/R-manual/R-devel/library/datasets/html/Titanic.html)), but you can use datasets that are hundreds of GBs large." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Parse progress: |█████████████████████████████████████████████████████████| 100%\n", "[1309, 14]\n" ] }, { "data": { "text/html": [ "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
pclass survivedname sex age sibsp parch ticket farecabin embarked boat bodyhome.dest
1 1Allen Miss. Elisabeth Walton female29 0 0 24160211.338 B5 S 2 nanSt Louis MO
1 1Allison Master. Hudson Trevor male 0.9167 1 2 113781151.55 C22 C26S 11 nanMontreal PQ / Chesterville ON
1 0Allison Miss. Helen Loraine female 2 1 2 113781151.55 C22 C26S nan nanMontreal PQ / Chesterville ON
1 0Allison Mr. Hudson Joshua Creighton male 30 1 2 113781151.55 C22 C26S nan 135Montreal PQ / Chesterville ON
1 0Allison Mrs. Hudson J C (Bessie Waldo Daniels)female25 1 2 113781151.55 C22 C26S nan nanMontreal PQ / Chesterville ON
1 1Anderson Mr. Harry male 48 0 0 19952 26.55 E12 S 3 nanNew York NY
1 1Andrews Miss. Kornelia Theodosia female63 1 0 13502 77.9583D7 S 10 nanHudson NY
1 0Andrews Mr. Thomas Jr male 39 0 0 112050 0 A36 S nan nanBelfast NI
1 1Appleton Mrs. Edward Dale (Charlotte Lamson) female53 2 0 11769 51.4792C101 S nan nanBayside Queens NY
1 0Artagaveytia Mr. Ramon male 71 0 0 nan 49.5042 C nan 22Montevideo Uruguay
" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n" ] }, { "data": { "text/html": [ "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
pclass survivedname sex age sibsp parch ticket farecabin embarked boat bodyhome.dest
1 1Allen Miss. Elisabeth Walton female29 0 0 24160211.338 B5 S 2 nanSt Louis MO
1 1Allison Master. Hudson Trevor male 0.9167 1 2 113781151.55 C22 C26S 11 nanMontreal PQ / Chesterville ON
1 0Allison Miss. Helen Loraine female 2 1 2 113781151.55 C22 C26S nan nanMontreal PQ / Chesterville ON
1 0Allison Mr. Hudson Joshua Creighton male 30 1 2 113781151.55 C22 C26S nan 135Montreal PQ / Chesterville ON
1 0Allison Mrs. Hudson J C (Bessie Waldo Daniels)female25 1 2 113781151.55 C22 C26S nan nanMontreal PQ / Chesterville ON
1 1Anderson Mr. Harry male 48 0 0 19952 26.55 E12 S 3 nanNew York NY
1 1Andrews Miss. Kornelia Theodosia female63 1 0 13502 77.9583D7 S 10 nanHudson NY
1 0Andrews Mr. Thomas Jr male 39 0 0 112050 0 A36 S nan nanBelfast NI
1 1Appleton Mrs. Edward Dale (Charlotte Lamson) female53 2 0 11769 51.4792C101 S nan nanBayside Queens NY
1 0Artagaveytia Mr. Ramon male 71 0 0 nan 49.5042 C nan 22Montevideo Uruguay
" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n" ] }, { "data": { "text/html": [ "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
pclass survivedname sex age sibsp parch ticket farecabin embarked boat bodyhome.dest
1 1Allen Miss. Elisabeth Walton female29 0 0 24160211.338 B5 S 2 nanSt Louis MO
1 1Allison Master. Hudson Trevor male 0.9167 1 2 113781151.55 C22 C26S 11 nanMontreal PQ / Chesterville ON
1 0Allison Miss. Helen Loraine female 2 1 2 113781151.55 C22 C26S nan nanMontreal PQ / Chesterville ON
1 0Allison Mr. Hudson Joshua Creighton male 30 1 2 113781151.55 C22 C26S nan 135Montreal PQ / Chesterville ON
1 0Allison Mrs. Hudson J C (Bessie Waldo Daniels)female25 1 2 113781151.55 C22 C26S nan nanMontreal PQ / Chesterville ON
1 1Anderson Mr. Harry male 48 0 0 19952 26.55 E12 S 3 nanNew York NY
1 1Andrews Miss. Kornelia Theodosia female63 1 0 13502 77.9583D7 S 10 nanHudson NY
1 0Andrews Mr. Thomas Jr male 39 0 0 112050 0 A36 S nan nanBelfast NI
1 1Appleton Mrs. Edward Dale (Charlotte Lamson) female53 2 0 11769 51.4792C101 S nan nanBayside Queens NY
1 0Artagaveytia Mr. Ramon male 71 0 0 nan 49.5042 C nan 22Montevideo Uruguay
" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "['pclass', 'sex', 'age', 'sibsp', 'parch', 'ticket', 'fare', 'cabin', 'embarked', 'boat', 'body', 'home.dest']\n" ] } ], "source": [ "## 'path' can point to a local file, hdfs, s3, nfs, Hive, directories, etc.\n", "df = h2o.import_file(path = \"http://s3.amazonaws.com/h2o-public-test-data/smalldata/gbm_test/titanic.csv\")\n", "print(df.dim)\n", "print(df.head)\n", "print(df.tail)\n", "print(df.describe)\n", "\n", "## pick a response for the supervised problem\n", "response = \"survived\"\n", "\n", "## the response variable is an integer, we will turn it into a categorical/factor for binary classification\n", "df[response] = df[response].asfactor() \n", "\n", "## use all other columns (except for the name & the response column (\"survived\")) as predictors\n", "predictors = df.columns\n", "del predictors[1:3]\n", "print(predictors)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From now on, everything is generic and directly applies to most datasets. We assume that all feature engineering is done at this stage and focus on model tuning. For multi-class problems, you can use `h2o.logloss()` or `h2o.confusion_matrix()` instead of `h2o.auc()` and for regression problems, you can use `h2o.mean_residual_deviance()` or `h2o.mse()`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Split the data for Machine Learning\n", "We split the data into three pieces: 60% for training, 20% for validation, 20% for final testing. \n", "Here, we use random splitting, but this assumes i.i.d. data. If this is not the case (e.g., when events span across multiple rows or data has a time structure), you'll have to sample your data non-randomly." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "train, valid, test = df.split_frame(\n", " ratios=[0.6,0.2], \n", " seed=1234, \n", " destination_frames=['train.hex','valid.hex','test.hex']\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Establish baseline performance\n", "As the first step, we'll build some default models to see what accuracy we can expect. Let's use the [AUC metric](http://mlwiki.org/index.php/ROC_Analysis) for this demo, but you can use `h2o.logloss()` and `stopping_metric=\"logloss\"` as well. It ranges from 0.5 for random models to 1 for perfect models.\n", "\n", "\n", "The first model is a default GBM, trained on the 60% training split" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "gbm Model Build progress: |███████████████████████████████████████████████| 100%\n", "Model Details\n", "=============\n", "H2OGradientBoostingEstimator : Gradient Boosting Machine\n", "Model Key: GBM_model_python_1590166894817_1\n", "\n", "\n", "Model Summary: \n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
number_of_treesnumber_of_internal_treesmodel_size_in_bytesmin_depthmax_depthmean_depthmin_leavesmax_leavesmean_leaves
050.050.022644.02.05.04.943.021.013.02
\n", "
" ], "text/plain": [ " number_of_trees number_of_internal_trees model_size_in_bytes \\\n", "0 50.0 50.0 22644.0 \n", "\n", " min_depth max_depth mean_depth min_leaves max_leaves mean_leaves \n", "0 2.0 5.0 4.94 3.0 21.0 13.02 " ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "ModelMetricsBinomial: gbm\n", "** Reported on train data. **\n", "\n", "MSE: 0.020967191978133064\n", "RMSE: 0.1448005247854201\n", "LogLoss: 0.0878847344331042\n", "Mean Per-Class Error: 0.025960784857711583\n", "AUC: 0.9960535168089666\n", "AUCPR: 0.9948602636749849\n", "Gini: 0.9921070336179332\n", "\n", "Confusion Matrix (Act/Pred) for max f1 @ threshold = 0.49928839180236295: \n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
01ErrorRate
00478.01.00.0021(1.0/479.0)
1115.0286.00.0498(15.0/301.0)
2Total493.0287.00.0205(16.0/780.0)
\n", "
" ], "text/plain": [ " 0 1 Error Rate\n", "0 0 478.0 1.0 0.0021 (1.0/479.0)\n", "1 1 15.0 286.0 0.0498 (15.0/301.0)\n", "2 Total 493.0 287.0 0.0205 (16.0/780.0)" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "Maximum Metrics: Maximum metrics at their respective thresholds\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
metricthresholdvalueidx
0max f10.4992880.972789164.0
1max f20.1405740.970684190.0
2max f0point50.4992880.986888164.0
3max accuracy0.4992880.979487164.0
4max precision0.9963161.0000000.0
5max recall0.0562721.000000234.0
6max specificity0.9963161.0000000.0
7max absolute_mcc0.4992880.957042164.0
8max min_per_class_accuracy0.2758500.966777173.0
9max mean_per_class_accuracy0.4992880.974039164.0
10max tns0.996316479.0000000.0
11max fns0.996316300.0000000.0
12max fps0.009568479.000000399.0
13max tps0.056272301.000000234.0
14max tnr0.9963161.0000000.0
15max fnr0.9963160.9966780.0
16max fpr0.0095681.000000399.0
17max tpr0.0562721.000000234.0
\n", "
" ], "text/plain": [ " metric threshold value idx\n", "0 max f1 0.499288 0.972789 164.0\n", "1 max f2 0.140574 0.970684 190.0\n", "2 max f0point5 0.499288 0.986888 164.0\n", "3 max accuracy 0.499288 0.979487 164.0\n", "4 max precision 0.996316 1.000000 0.0\n", "5 max recall 0.056272 1.000000 234.0\n", "6 max specificity 0.996316 1.000000 0.0\n", "7 max absolute_mcc 0.499288 0.957042 164.0\n", "8 max min_per_class_accuracy 0.275850 0.966777 173.0\n", "9 max mean_per_class_accuracy 0.499288 0.974039 164.0\n", "10 max tns 0.996316 479.000000 0.0\n", "11 max fns 0.996316 300.000000 0.0\n", "12 max fps 0.009568 479.000000 399.0\n", "13 max tps 0.056272 301.000000 234.0\n", "14 max tnr 0.996316 1.000000 0.0\n", "15 max fnr 0.996316 0.996678 0.0\n", "16 max fpr 0.009568 1.000000 399.0\n", "17 max tpr 0.056272 1.000000 234.0" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "Gains/Lift Table: Avg response rate: 38.59 %, avg score: 38.61 %\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
groupcumulative_data_fractionlower_thresholdliftcumulative_liftresponse_ratescorecumulative_response_ratecumulative_scorecapture_ratecumulative_capture_rategaincumulative_gain
010.0102560.9934522.5913622.5913621.0000000.9946041.0000000.9946040.0265780.026578159.136213159.136213
120.0205130.9930002.5913622.5913621.0000000.9931561.0000000.9938800.0265780.053156159.136213159.136213
230.0320510.9927912.5913622.5913621.0000000.9928681.0000000.9935160.0299000.083056159.136213159.136213
340.0410260.9927012.5913622.5913621.0000000.9927481.0000000.9933480.0232560.106312159.136213159.136213
450.0500000.9926372.5913622.5913621.0000000.9926621.0000000.9932250.0232560.129568159.136213159.136213
560.1000000.9921172.5913622.5913621.0000000.9923821.0000000.9928030.1295680.259136159.136213159.136213
670.1500000.9915562.5913622.5913621.0000000.9917631.0000000.9924570.1295680.388704159.136213159.136213
780.2000000.9886652.5913622.5913621.0000000.9905351.0000000.9919760.1295680.518272159.136213159.136213
890.3000000.9661972.5913622.5913621.0000000.9845401.0000000.9894980.2591360.777409159.136213159.136213
9100.4000000.1968331.8936882.4169440.7307690.6396670.9326920.9020400.1893690.96677789.368771141.694352
10110.5025640.0741330.2267441.9699640.0875000.1138040.7602040.7411750.0232560.990033-77.32558196.996407
11120.6051280.0436220.0971761.6525420.0375000.0518640.6377120.6243430.0099671.000000-90.28239265.254237
12130.7000000.0300710.0000001.4285710.0000000.0371250.5512820.5447570.0000001.000000-100.00000042.857143
13140.8000000.0174630.0000001.2500000.0000000.0217120.4823720.4793760.0000001.000000-100.00000025.000000
14150.9192310.0125690.0000001.0878660.0000000.0140860.4198050.4190250.0000001.000000-100.0000008.786611
15161.0000000.0095680.0000001.0000000.0000000.0117300.3858970.3861280.0000001.000000-100.0000000.000000
\n", "
" ], "text/plain": [ " group cumulative_data_fraction lower_threshold lift \\\n", "0 1 0.010256 0.993452 2.591362 \n", "1 2 0.020513 0.993000 2.591362 \n", "2 3 0.032051 0.992791 2.591362 \n", "3 4 0.041026 0.992701 2.591362 \n", "4 5 0.050000 0.992637 2.591362 \n", "5 6 0.100000 0.992117 2.591362 \n", "6 7 0.150000 0.991556 2.591362 \n", "7 8 0.200000 0.988665 2.591362 \n", "8 9 0.300000 0.966197 2.591362 \n", "9 10 0.400000 0.196833 1.893688 \n", "10 11 0.502564 0.074133 0.226744 \n", "11 12 0.605128 0.043622 0.097176 \n", "12 13 0.700000 0.030071 0.000000 \n", "13 14 0.800000 0.017463 0.000000 \n", "14 15 0.919231 0.012569 0.000000 \n", "15 16 1.000000 0.009568 0.000000 \n", "\n", " cumulative_lift response_rate score cumulative_response_rate \\\n", "0 2.591362 1.000000 0.994604 1.000000 \n", "1 2.591362 1.000000 0.993156 1.000000 \n", "2 2.591362 1.000000 0.992868 1.000000 \n", "3 2.591362 1.000000 0.992748 1.000000 \n", "4 2.591362 1.000000 0.992662 1.000000 \n", "5 2.591362 1.000000 0.992382 1.000000 \n", "6 2.591362 1.000000 0.991763 1.000000 \n", "7 2.591362 1.000000 0.990535 1.000000 \n", "8 2.591362 1.000000 0.984540 1.000000 \n", "9 2.416944 0.730769 0.639667 0.932692 \n", "10 1.969964 0.087500 0.113804 0.760204 \n", "11 1.652542 0.037500 0.051864 0.637712 \n", "12 1.428571 0.000000 0.037125 0.551282 \n", "13 1.250000 0.000000 0.021712 0.482372 \n", "14 1.087866 0.000000 0.014086 0.419805 \n", "15 1.000000 0.000000 0.011730 0.385897 \n", "\n", " cumulative_score capture_rate cumulative_capture_rate gain \\\n", "0 0.994604 0.026578 0.026578 159.136213 \n", "1 0.993880 0.026578 0.053156 159.136213 \n", "2 0.993516 0.029900 0.083056 159.136213 \n", "3 0.993348 0.023256 0.106312 159.136213 \n", "4 0.993225 0.023256 0.129568 159.136213 \n", "5 0.992803 0.129568 0.259136 159.136213 \n", "6 0.992457 0.129568 0.388704 159.136213 \n", "7 0.991976 0.129568 0.518272 159.136213 \n", "8 0.989498 0.259136 0.777409 159.136213 \n", "9 0.902040 0.189369 0.966777 89.368771 \n", "10 0.741175 0.023256 0.990033 -77.325581 \n", "11 0.624343 0.009967 1.000000 -90.282392 \n", "12 0.544757 0.000000 1.000000 -100.000000 \n", "13 0.479376 0.000000 1.000000 -100.000000 \n", "14 0.419025 0.000000 1.000000 -100.000000 \n", "15 0.386128 0.000000 1.000000 -100.000000 \n", "\n", " cumulative_gain \n", "0 159.136213 \n", "1 159.136213 \n", "2 159.136213 \n", "3 159.136213 \n", "4 159.136213 \n", "5 159.136213 \n", "6 159.136213 \n", "7 159.136213 \n", "8 159.136213 \n", "9 141.694352 \n", "10 96.996407 \n", "11 65.254237 \n", "12 42.857143 \n", "13 25.000000 \n", "14 8.786611 \n", "15 0.000000 " ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "Scoring History: \n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
timestampdurationnumber_of_treestraining_rmsetraining_loglosstraining_auctraining_pr_auctraining_lifttraining_classification_error
02020-05-22 10:01:400.014 sec0.00.4868070.6668780.5000000.3858971.0000000.614103
12020-05-22 10:01:400.132 sec1.00.4544070.6033610.8851120.9022072.5913620.089744
22020-05-22 10:01:400.158 sec2.00.4267770.5530980.8851430.9022722.5913620.088462
32020-05-22 10:01:400.175 sec3.00.4032010.5122600.8851430.9022722.5913620.088462
42020-05-22 10:01:400.195 sec4.00.3831190.4785020.8851430.9022722.5913620.088462
52020-05-22 10:01:400.216 sec5.00.3660620.4502520.8851430.9022722.5913620.088462
62020-05-22 10:01:400.234 sec6.00.3516260.4263970.8851430.9022722.5913620.088462
72020-05-22 10:01:400.250 sec7.00.3394530.4061130.8851430.9022722.5913620.088462
82020-05-22 10:01:400.266 sec8.00.3292260.3887700.8851430.9022722.5913620.088462
92020-05-22 10:01:400.282 sec9.00.3206650.3738750.8851430.9022722.5913620.088462
102020-05-22 10:01:400.297 sec10.00.3135210.3610320.8851430.9022722.5913620.088462
112020-05-22 10:01:400.321 sec11.00.2989560.3356520.9363290.9462882.5913620.057692
122020-05-22 10:01:400.335 sec12.00.2813050.3063280.9856740.9825612.5913620.044872
132020-05-22 10:01:400.349 sec13.00.2662890.2824180.9864300.9834682.5913620.044872
142020-05-22 10:01:400.362 sec14.00.2534520.2623770.9870680.9843232.5913620.042308
152020-05-22 10:01:400.373 sec15.00.2507580.2557550.9870680.9843232.5913620.042308
162020-05-22 10:01:400.387 sec16.00.2401120.2391500.9872620.9845952.5913620.041026
172020-05-22 10:01:400.399 sec17.00.2309450.2247300.9872620.9845952.5913620.041026
182020-05-22 10:01:400.411 sec18.00.2232210.2123650.9875220.9847432.5913620.041026
192020-05-22 10:01:400.422 sec19.00.2162150.2016020.9880530.9854392.5913620.038462
\n", "
" ], "text/plain": [ " timestamp duration number_of_trees training_rmse \\\n", "0 2020-05-22 10:01:40 0.014 sec 0.0 0.486807 \n", "1 2020-05-22 10:01:40 0.132 sec 1.0 0.454407 \n", "2 2020-05-22 10:01:40 0.158 sec 2.0 0.426777 \n", "3 2020-05-22 10:01:40 0.175 sec 3.0 0.403201 \n", "4 2020-05-22 10:01:40 0.195 sec 4.0 0.383119 \n", "5 2020-05-22 10:01:40 0.216 sec 5.0 0.366062 \n", "6 2020-05-22 10:01:40 0.234 sec 6.0 0.351626 \n", "7 2020-05-22 10:01:40 0.250 sec 7.0 0.339453 \n", "8 2020-05-22 10:01:40 0.266 sec 8.0 0.329226 \n", "9 2020-05-22 10:01:40 0.282 sec 9.0 0.320665 \n", "10 2020-05-22 10:01:40 0.297 sec 10.0 0.313521 \n", "11 2020-05-22 10:01:40 0.321 sec 11.0 0.298956 \n", "12 2020-05-22 10:01:40 0.335 sec 12.0 0.281305 \n", "13 2020-05-22 10:01:40 0.349 sec 13.0 0.266289 \n", "14 2020-05-22 10:01:40 0.362 sec 14.0 0.253452 \n", "15 2020-05-22 10:01:40 0.373 sec 15.0 0.250758 \n", "16 2020-05-22 10:01:40 0.387 sec 16.0 0.240112 \n", "17 2020-05-22 10:01:40 0.399 sec 17.0 0.230945 \n", "18 2020-05-22 10:01:40 0.411 sec 18.0 0.223221 \n", "19 2020-05-22 10:01:40 0.422 sec 19.0 0.216215 \n", "\n", " training_logloss training_auc training_pr_auc training_lift \\\n", "0 0.666878 0.500000 0.385897 1.000000 \n", "1 0.603361 0.885112 0.902207 2.591362 \n", "2 0.553098 0.885143 0.902272 2.591362 \n", "3 0.512260 0.885143 0.902272 2.591362 \n", "4 0.478502 0.885143 0.902272 2.591362 \n", "5 0.450252 0.885143 0.902272 2.591362 \n", "6 0.426397 0.885143 0.902272 2.591362 \n", "7 0.406113 0.885143 0.902272 2.591362 \n", "8 0.388770 0.885143 0.902272 2.591362 \n", "9 0.373875 0.885143 0.902272 2.591362 \n", "10 0.361032 0.885143 0.902272 2.591362 \n", "11 0.335652 0.936329 0.946288 2.591362 \n", "12 0.306328 0.985674 0.982561 2.591362 \n", "13 0.282418 0.986430 0.983468 2.591362 \n", "14 0.262377 0.987068 0.984323 2.591362 \n", "15 0.255755 0.987068 0.984323 2.591362 \n", "16 0.239150 0.987262 0.984595 2.591362 \n", "17 0.224730 0.987262 0.984595 2.591362 \n", "18 0.212365 0.987522 0.984743 2.591362 \n", "19 0.201602 0.988053 0.985439 2.591362 \n", "\n", " training_classification_error \n", "0 0.614103 \n", "1 0.089744 \n", "2 0.088462 \n", "3 0.088462 \n", "4 0.088462 \n", "5 0.088462 \n", "6 0.088462 \n", "7 0.088462 \n", "8 0.088462 \n", "9 0.088462 \n", "10 0.088462 \n", "11 0.057692 \n", "12 0.044872 \n", "13 0.044872 \n", "14 0.042308 \n", "15 0.042308 \n", "16 0.041026 \n", "17 0.041026 \n", "18 0.041026 \n", "19 0.038462 " ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "See the whole table with table.as_data_frame()\n", "\n", "Variable Importances: \n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
variablerelative_importancescaled_importancepercentage
0boat630.0761111.0000000.722770
1home.dest118.9526900.1887910.136452
2sex64.1766280.1018550.073618
3ticket16.0904330.0255370.018458
4fare12.7288080.0202020.014601
5age11.5789690.0183770.013282
6cabin5.5596520.0088240.006378
7embarked3.7754840.0059920.004331
8parch3.2812730.0052080.003764
9body3.2746450.0051970.003756
10sibsp1.7255910.0027390.001979
11pclass0.5317370.0008440.000610
\n", "
" ], "text/plain": [ " variable relative_importance scaled_importance percentage\n", "0 boat 630.076111 1.000000 0.722770\n", "1 home.dest 118.952690 0.188791 0.136452\n", "2 sex 64.176628 0.101855 0.073618\n", "3 ticket 16.090433 0.025537 0.018458\n", "4 fare 12.728808 0.020202 0.014601\n", "5 age 11.578969 0.018377 0.013282\n", "6 cabin 5.559652 0.008824 0.006378\n", "7 embarked 3.775484 0.005992 0.004331\n", "8 parch 3.281273 0.005208 0.003764\n", "9 body 3.274645 0.005197 0.003756\n", "10 sibsp 1.725591 0.002739 0.001979\n", "11 pclass 0.531737 0.000844 0.000610" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n" ] } ], "source": [ "#We only provide the required parameters, everything else is default\n", "gbm = H2OGradientBoostingEstimator()\n", "gbm.train(x=predictors, y=response, training_frame=train)\n", "\n", "## Show a detailed model summary\n", "print(gbm)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.950014088475627\n" ] } ], "source": [ "## Get the AUC on the validation set\n", "perf = gbm.model_performance(valid)\n", "print(perf.auc())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The AUC is 95%, so this model is highly predictive!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The second model is another default GBM, but trained on 80% of the data (here, we combine the training and validation splits to get more training data), and cross-validated using 4 folds.\n", "Note that cross-validation takes longer and is not usually done for really large datasets." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "gbm Model Build progress: |███████████████████████████████████████████████| 100%\n" ] } ], "source": [ "## rbind() makes a copy here, so it's better to use split_frame with `ratios = c(0.8)` instead above\n", "cv_gbm = H2OGradientBoostingEstimator(nfolds = 4, seed = 0xDECAF)\n", "cv_gbm.train(x = predictors, y = response, training_frame = train.rbind(valid))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see that the cross-validated performance is similar to the validation set performance:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.9493705528188287\n" ] } ], "source": [ "## Show a detailed summary of the cross validation metrics\n", "## This gives you an idea of the variance between the folds\n", "cv_summary = cv_gbm.cross_validation_metrics_summary().as_data_frame()\n", "#print(cv_summary) ## Full summary of all metrics\n", "#print(cv_summary.iloc[4]) ## get the row with just the AUCs\n", "\n", "## Get the cross-validated AUC by scoring the combined holdout predictions.\n", "## (Instead of taking the average of the metrics across the folds)\n", "perf_cv = cv_gbm.model_performance(xval=True)\n", "print(perf_cv.auc())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we train a GBM with \"I feel lucky\" parameters.\n", "We'll use early stopping to automatically tune the number of trees using the validation AUC. \n", "We'll use a lower learning rate (lower is always better, just takes more trees to converge).\n", "We'll also use stochastic sampling of rows and columns to (hopefully) improve generalization." 
] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "gbm Model Build progress: |███████████████████████████████████████████████| 100%\n" ] } ], "source": [ "gbm_lucky = H2OGradientBoostingEstimator(\n", " ## more trees is better if the learning rate is small enough \n", " ## here, use \"more than enough\" trees - we have early stopping\n", " ntrees = 10000, \n", "\n", " ## smaller learning rate is better (this is a good value for most datasets, but see below for annealing)\n", " learn_rate = 0.01, \n", "\n", " ## early stopping once the validation AUC doesn't improve by at least 0.01% for 5 consecutive scoring events\n", " stopping_rounds = 5, stopping_tolerance = 1e-4, stopping_metric = \"AUC\", \n", "\n", " ## sample 80% of rows per tree\n", " sample_rate = 0.8, \n", "\n", " ## sample 80% of columns per split\n", " col_sample_rate = 0.8, \n", "\n", " ## fix a random number generator seed for reproducibility\n", " seed = 1234, \n", "\n", " ## score every 10 trees to make early stopping reproducible (it depends on the scoring interval)\n", " score_tree_interval = 10)\n", "\n", "gbm_lucky.train(x=predictors, y=response, training_frame=train, validation_frame=valid)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This model doesn't seem to be better than the previous models:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.9424908424908425\n" ] } ], "source": [ "perf_lucky = gbm_lucky.model_performance(valid)\n", "print(perf_lucky.auc())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For this small dataset, dropping 20% of observations per tree seems too aggressive in terms of adding regularization. For larger datasets, this is usually not a bad idea. But we'll let this parameter tune freshly below, so no worries." ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## Hyper-Parameter Search\n", "\n", "Next, we'll do real hyper-parameter optimization to see if we can beat the best AUC so far (around 94%).\n", "\n", "The key here is to start tuning some key parameters first (i.e., those that we expect to have the biggest impact on the results). From experience with gradient boosted trees across many datasets, we can state the following \"rules\":\n", "\n", "1. Build as many trees (`ntrees`) as it takes until the validation set error starts increasing.\n", "2. A lower learning rate (`learn_rate`) is generally better, but will require more trees. Using `learn_rate=0.02 `and `learn_rate_annealing=0.995` (reduction of learning rate with each additional tree) can help speed up convergence without sacrificing accuracy too much, and is great to hyper-parameter searches. For faster scans, use values of 0.05 and 0.99 instead.\n", "3. The optimum maximum allowed depth for the trees (`max_depth`) is data dependent, deeper trees take longer to train, especially at depths greater than 10.\n", "4. Row and column sampling (`sample_rate` and `col_sample_rate`) can improve generalization and lead to lower validation and test set errors. Good general values for large datasets are around 0.7 to 0.8 (sampling 70-80 percent of the data) for both parameters. Column sampling per tree (`col_sample_rate_per_tree`) can also be tuned. 
Note that it is multiplicative with `col_sample_rate`, so setting both parameters to 0.8 results in 64% of columns being considered at any given node to split.\n", "5. For highly imbalanced classification datasets (e.g., fewer buyers than non-buyers), stratified row sampling based on response class membership can help improve predictive accuracy. It is configured with `sample_rate_per_class` (array of ratios, one per response class in lexicographic order).\n", "6. Most other options only have a small impact on the model performance, but are worth tuning with a Random hyper-parameter search nonetheless, if highest performance is critical.\n", "\n", "First we want to know what value of `max_depth` to use because it has a big impact on the model training time and optimal values depend strongly on the dataset.\n", "We'll do a quick Cartesian grid search to get a rough idea of good candidate `max_depth` values. Each model in the grid search will use early stopping to tune the number of trees using the validation set AUC, as before.\n", "We'll use learning rate annealing to speed up convergence without sacrificing too much accuracy." ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "gbm Grid Build progress: |████████████████████████████████████████████████| 100%\n" ] } ], "source": [ "## Depth 10 is usually plenty of depth for most datasets, but you never know\n", "hyper_params = {'max_depth' : list(range(1,30,2))}\n", "#hyper_params = {max_depth = [4,6,8,12,16,20]} ##faster for larger datasets\n", "\n", "#Build initial GBM Model\n", "gbm_grid = H2OGradientBoostingEstimator(\n", " ## more trees is better if the learning rate is small enough \n", " ## here, use \"more than enough\" trees - we have early stopping\n", " ntrees=10000,\n", " ## smaller learning rate is better\n", " ## since we have learning_rate_annealing, we can afford to start with a \n", " #bigger learning rate\n", " learn_rate=0.05,\n", " ## learning rate annealing: learning_rate shrinks by 1% after every tree \n", " ## (use 1.00 to disable, but then lower the learning_rate)\n", " learn_rate_annealing = 0.99,\n", " ## sample 80% of rows per tree\n", " sample_rate = 0.8,\n", " ## sample 80% of columns per split\n", " col_sample_rate = 0.8,\n", " ## fix a random number generator seed for reproducibility\n", " seed = 1234,\n", " ## score every 10 trees to make early stopping reproducible \n", " #(it depends on the scoring interval)\n", " score_tree_interval = 10, \n", " ## early stopping once the validation AUC doesn't improve by at least 0.01% for \n", " #5 consecutive scoring events\n", " stopping_rounds = 5,\n", " stopping_metric = \"AUC\",\n", " stopping_tolerance = 1e-4)\n", "\n", "#Build grid search with previously made GBM and hyper parameters\n", "grid = H2OGridSearch(gbm_grid,hyper_params,\n", " grid_id = 'depth_grid',\n", " search_criteria = {'strategy': \"Cartesian\"})\n", "\n", "\n", "#Train grid search\n", "grid.train(x=predictors, \n", " y=response,\n", " training_frame = train,\n", " validation_frame = valid)" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " max_depth model_ids logloss\n", "0 13 depth_grid_model_7 0.20109637892392757\n", "1 9 depth_grid_model_5 0.20160720998146248\n", "2 7 depth_grid_model_4 0.20246242267462608\n", "3 5 depth_grid_model_3 0.20290080343982356\n", "4 11 depth_grid_model_6 0.2034349464898852\n", "5 
19 depth_grid_model_10 0.20446595941168919\n", "6 21 depth_grid_model_11 0.20446595941168919\n", "7 23 depth_grid_model_12 0.20446595941168919\n", "8 25 depth_grid_model_13 0.20446595941168919\n", "9 27 depth_grid_model_14 0.20446595941168919\n", "10 29 depth_grid_model_15 0.20446595941168919\n", "11 17 depth_grid_model_9 0.20446595968647824\n", "12 15 depth_grid_model_8 0.20463752833415866\n", "13 3 depth_grid_model_2 0.20971798928576332\n", "14 1 depth_grid_model_1 0.23401163708609643\n", "\n" ] } ], "source": [ "## by default, display the grid search results sorted by increasing logloss (since this is a classification task)\n", "print(grid)" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " max_depth model_ids auc\n", "0 13 depth_grid_model_7 0.9525218371372218\n", "1 9 depth_grid_model_5 0.9519019442096365\n", "2 11 depth_grid_model_6 0.9512820512820513\n", "3 7 depth_grid_model_4 0.9512256973795435\n", "4 5 depth_grid_model_3 0.9511411665257818\n", "5 19 depth_grid_model_10 0.9505494505494505\n", "6 21 depth_grid_model_11 0.9505494505494505\n", "7 23 depth_grid_model_12 0.9505494505494505\n", "8 25 depth_grid_model_13 0.9505494505494505\n", "9 27 depth_grid_model_14 0.9505494505494505\n", "10 29 depth_grid_model_15 0.9505494505494505\n", "11 17 depth_grid_model_9 0.9505494505494505\n", "12 15 depth_grid_model_8 0.9503240349394196\n", "13 1 depth_grid_model_1 0.9462383770076077\n", "14 3 depth_grid_model_2 0.9458157227387998\n", "\n" ] } ], "source": [ "## sort the grid models by decreasing AUC\n", "sorted_grid = grid.get_grid(sort_by='auc',decreasing=True)\n", "print(sorted_grid)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It appears that `max_depth` values of 5 to 13 are best suited for this dataset, which is unusally deep!" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "MaxDepth 13\n", "MinDepth 5\n" ] } ], "source": [ "max_depths = sorted_grid.sorted_metric_table()['max_depth'][0:5]\n", "new_max = int(max(max_depths, key=int))\n", "new_min = int(min(max_depths, key=int))\n", "\n", "print(\"MaxDepth\", new_max)\n", "print(\"MinDepth\", new_min)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we know a good range for max_depth, we can tune all other parameters in more detail. Since we don't know what combinations of hyper-parameters will result in the best model, we'll use random hyper-parameter search to \"let the machine get luckier than a best guess of any human\"." ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "# create hyperameter and search criteria lists (ranges are inclusive..exclusive))\n", "hyper_params_tune = {'max_depth' : list(range(new_min,new_max+1,1)),\n", " 'sample_rate': [x/100. for x in range(20,101)],\n", " 'col_sample_rate' : [x/100. for x in range(20,101)],\n", " 'col_sample_rate_per_tree': [x/100. for x in range(20,101)],\n", " 'col_sample_rate_change_per_level': [x/100. 
for x in range(90,111)],\n", " 'min_rows': [2**x for x in range(0,int(math.log(train.nrow,2)-1)+1)],\n", " 'nbins': [2**x for x in range(4,11)],\n", " 'nbins_cats': [2**x for x in range(4,13)],\n", " 'min_split_improvement': [0,1e-8,1e-6,1e-4],\n", " 'histogram_type': [\"UniformAdaptive\",\"QuantilesGlobal\",\"RoundRobin\"]}\n", "search_criteria_tune = {'strategy': \"RandomDiscrete\",\n", " 'max_runtime_secs': 3600, ## limit the runtime to 60 minutes\n", " 'max_models': 100, ## build no more than 100 models\n", " 'seed' : 1234,\n", " 'stopping_rounds' : 5,\n", " 'stopping_metric' : \"AUC\",\n", " 'stopping_tolerance': 1e-3\n", " }" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "gbm Grid Build progress: |████████████████████████████████████████████████| 100%\n", " col_sample_rate col_sample_rate_change_per_level \\\n", "0 0.49 1.04 \n", "1 0.92 0.93 \n", "2 0.35 1.09 \n", "3 0.5 0.94 \n", "4 0.97 0.96 \n", ".. .. ... ... \n", "95 0.5 1.03 \n", "96 0.96 0.94 \n", "97 0.61 0.97 \n", "98 0.87 1.0 \n", "99 0.24 1.08 \n", "\n", " col_sample_rate_per_tree histogram_type max_depth min_rows \\\n", "0 0.94 QuantilesGlobal 9 2.0 \n", "1 0.56 QuantilesGlobal 6 4.0 \n", "2 0.83 QuantilesGlobal 5 4.0 \n", "3 0.92 RoundRobin 13 2.0 \n", "4 0.96 QuantilesGlobal 6 1.0 \n", ".. ... ... ... ... \n", "95 0.45 RoundRobin 13 256.0 \n", "96 0.62 QuantilesGlobal 8 256.0 \n", "97 0.36 QuantilesGlobal 8 256.0 \n", "98 0.2 RoundRobin 12 256.0 \n", "99 0.3 UniformAdaptive 5 256.0 \n", "\n", " min_split_improvement nbins nbins_cats sample_rate model_ids \\\n", "0 0.0 32 256 0.86 final_grid_model_69 \n", "1 0.0 128 128 0.93 final_grid_model_97 \n", "2 1.0E-8 64 128 0.69 final_grid_model_39 \n", "3 0.0 128 2048 0.61 final_grid_model_15 \n", "4 1.0E-4 1024 64 0.32 final_grid_model_76 \n", ".. ... ... ... ... ... \n", "95 1.0E-8 512 16 0.28 final_grid_model_59 \n", "96 1.0E-6 64 4096 0.57 final_grid_model_96 \n", "97 1.0E-6 128 1024 0.65 final_grid_model_99 \n", "98 1.0E-6 512 1024 0.97 final_grid_model_52 \n", "99 1.0E-4 32 64 0.97 final_grid_model_45 \n", "\n", " logloss \n", "0 0.17067246483917042 \n", "1 0.17808872698061212 \n", "2 0.18137723622439125 \n", "3 0.18761536132107057 \n", "4 0.1888167753055619 \n", ".. ... 
\n", "95 0.5440442492091072 \n", "96 0.5450334515467662 \n", "97 0.5488192692893163 \n", "98 0.5501161246099107 \n", "99 0.5827120934746953 \n", "\n", "[100 rows x 13 columns]\n", "\n" ] } ], "source": [ "gbm_final_grid = H2OGradientBoostingEstimator(distribution='bernoulli',\n", " ## more trees is better if the learning rate is small enough \n", " ## here, use \"more than enough\" trees - we have early stopping\n", " ntrees=10000,\n", " ## smaller learning rate is better\n", " ## since we have learning_rate_annealing, we can afford to start with a \n", " #bigger learning rate\n", " learn_rate=0.05,\n", " ## learning rate annealing: learning_rate shrinks by 1% after every tree \n", " ## (use 1.00 to disable, but then lower the learning_rate)\n", " learn_rate_annealing = 0.99,\n", " ## score every 10 trees to make early stopping reproducible \n", " #(it depends on the scoring interval)\n", " score_tree_interval = 10,\n", " ## fix a random number generator seed for reproducibility\n", " seed = 1234,\n", " ## early stopping once the validation AUC doesn't improve by at least 0.01% for \n", " #5 consecutive scoring events\n", " stopping_rounds = 5,\n", " stopping_metric = \"AUC\",\n", " stopping_tolerance = 1e-4)\n", " \n", "#Build grid search with previously made GBM and hyper parameters\n", "final_grid = H2OGridSearch(gbm_final_grid, hyper_params = hyper_params_tune,\n", " grid_id = 'final_grid',\n", " search_criteria = search_criteria_tune)\n", "#Train grid search\n", "final_grid.train(x=predictors, \n", " y=response,\n", " ## early stopping based on timeout (no model should take more than 1 hour - modify as needed)\n", " max_runtime_secs = 3600, \n", " training_frame = train,\n", " validation_frame = valid)\n", "\n", "print(final_grid)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that the best models have even better validation AUCs than our previous best models, so the random grid search was successful!" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " col_sample_rate col_sample_rate_change_per_level \\\n", "0 0.92 0.93 \n", "1 0.49 1.04 \n", "2 0.35 1.09 \n", "3 0.61 1.04 \n", "4 0.81 0.94 \n", ".. .. ... ... \n", "95 0.5 1.03 \n", "96 0.87 1.0 \n", "97 0.24 1.08 \n", "98 0.57 1.1 \n", "99 0.96 0.94 \n", "\n", " col_sample_rate_per_tree histogram_type max_depth min_rows \\\n", "0 0.56 QuantilesGlobal 6 4.0 \n", "1 0.94 QuantilesGlobal 9 2.0 \n", "2 0.83 QuantilesGlobal 5 4.0 \n", "3 0.61 UniformAdaptive 11 1.0 \n", "4 0.89 QuantilesGlobal 8 16.0 \n", ".. ... ... ... ... \n", "95 0.45 RoundRobin 13 256.0 \n", "96 0.2 RoundRobin 12 256.0 \n", "97 0.3 UniformAdaptive 5 256.0 \n", "98 0.68 RoundRobin 12 256.0 \n", "99 0.62 QuantilesGlobal 8 256.0 \n", "\n", " min_split_improvement nbins nbins_cats sample_rate model_ids \\\n", "0 0.0 128 128 0.93 final_grid_model_97 \n", "1 0.0 32 256 0.86 final_grid_model_69 \n", "2 1.0E-8 64 128 0.69 final_grid_model_39 \n", "3 1.0E-4 64 16 0.69 final_grid_model_82 \n", "4 1.0E-8 1024 32 0.71 final_grid_model_70 \n", ".. ... ... ... ... ... \n", "95 1.0E-8 512 16 0.28 final_grid_model_59 \n", "96 1.0E-6 512 1024 0.97 final_grid_model_52 \n", "97 1.0E-4 32 64 0.97 final_grid_model_45 \n", "98 0.0 16 4096 0.58 final_grid_model_9 \n", "99 1.0E-6 64 4096 0.57 final_grid_model_96 \n", "\n", " auc \n", "0 0.974218089602705 \n", "1 0.9738799661876585 \n", "2 0.9698224852071006 \n", "3 0.9691462383770075 \n", "4 0.9684699915469147 \n", ".. ... 
\n", "95 0.7997464074387151 \n", "96 0.7965624119470274 \n", "97 0.7854888701042547 \n", "98 0.7836573682727528 \n", "99 0.7608058608058608 \n", "\n", "[100 rows x 13 columns]\n", "\n" ] } ], "source": [ "## Sort the grid models by AUC\n", "sorted_final_grid = final_grid.get_grid(sort_by='auc',decreasing=True)\n", "\n", "print(sorted_final_grid)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can also see the results of the grid search in [Flow](http://localhost:54321/):\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model Inspection and Final Test Set Scoring\n", "\n", "Let's see how well the best model of the grid search (as judged by validation set AUC) does on the held out test set:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.9824897581604334\n" ] } ], "source": [ "#Get the best model from the list (the model name listed at the top of the table)\n", "best_model = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][0])\n", "performance_best_model = best_model.model_performance(test)\n", "print(performance_best_model.auc())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Good news. It does as well on the test set as on the validation set, so it looks like our best GBM model generalizes well to the unseen test set:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can inspect the winning model's parameters:" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[\"model_id = {'__meta': {'schema_version': 3, 'schema_name': 'ModelKeyV3', 'schema_type': 'Key'}, 'name': 'final_grid_model_97', 'type': 'Key', 'URL': '/3/Models/final_grid_model_97'}\",\n", " \"training_frame = {'__meta': {'schema_version': 3, 'schema_name': 'FrameKeyV3', 'schema_type': 'Key'}, 'name': 'train.hex', 'type': 'Key', 'URL': '/3/Frames/train.hex'}\",\n", " \"validation_frame = {'__meta': {'schema_version': 3, 'schema_name': 'FrameKeyV3', 'schema_type': 'Key'}, 'name': 'valid.hex', 'type': 'Key', 'URL': '/3/Frames/valid.hex'}\",\n", " 'nfolds = 0',\n", " 'keep_cross_validation_models = True',\n", " 'keep_cross_validation_predictions = False',\n", " 'keep_cross_validation_fold_assignment = False',\n", " 'score_each_iteration = False',\n", " 'score_tree_interval = 10',\n", " 'fold_assignment = AUTO',\n", " 'fold_column = None',\n", " \"response_column = {'__meta': {'schema_version': 3, 'schema_name': 'ColSpecifierV3', 'schema_type': 'VecSpecifier'}, 'column_name': 'survived', 'is_member_of_frames': None}\",\n", " \"ignored_columns = ['name']\",\n", " 'ignore_const_cols = True',\n", " 'offset_column = None',\n", " 'weights_column = None',\n", " 'balance_classes = False',\n", " 'class_sampling_factors = None',\n", " 'max_after_balance_size = 5.0',\n", " 'max_confusion_matrix_size = 20',\n", " 'ntrees = 10000',\n", " 'max_depth = 6',\n", " 'min_rows = 4.0',\n", " 'nbins = 128',\n", " 'nbins_top_level = 1024',\n", " 'nbins_cats = 128',\n", " 'r2_stopping = 1.7976931348623157e+308',\n", " 'stopping_rounds = 5',\n", " 'stopping_metric = AUC',\n", " 'stopping_tolerance = 0.0001',\n", " 'max_runtime_secs = 3542.137',\n", " 'seed = 1234',\n", " 'build_tree_one_node = False',\n", " 'learn_rate = 0.05',\n", " 'learn_rate_annealing = 0.99',\n", " 'distribution = bernoulli',\n", " 'quantile_alpha = 0.5',\n", " 'tweedie_power = 1.5',\n", " 'huber_alpha = 0.9',\n", " 'checkpoint = None',\n", " 
'sample_rate = 0.93',\n", " 'sample_rate_per_class = None',\n", " 'col_sample_rate = 0.92',\n", " 'col_sample_rate_change_per_level = 0.93',\n", " 'col_sample_rate_per_tree = 0.56',\n", " 'min_split_improvement = 0.0',\n", " 'histogram_type = QuantilesGlobal',\n", " 'max_abs_leafnode_pred = 1.7976931348623157e+308',\n", " 'pred_noise_bandwidth = 0.0',\n", " 'categorical_encoding = AUTO',\n", " 'calibrate_model = False',\n", " 'calibration_frame = None',\n", " 'custom_metric_func = None',\n", " 'custom_distribution_func = None',\n", " 'export_checkpoints_dir = None',\n", " 'monotone_constraints = None',\n", " 'check_constant_response = True']" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "params_list = []\n", "for key, value in best_model.params.items():\n", " params_list.append(str(key)+\" = \"+str(value['actual']))\n", "params_list" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can confirm that these parameters are generally sound, by building a GBM model on the whole dataset (instead of the 60%) and using internal 5-fold cross-validation (re-using all other parameters including the seed):" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "gbm = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][0])\n", "#get the parameters from the Random grid search model and modify them slightly\n", "params = gbm.params\n", "new_params = {\"nfolds\":5, \"model_id\":None, \"training_frame\":None, \"validation_frame\":None, \n", " \"response_column\":None, \"ignored_columns\":None}\n", "for key in new_params.keys():\n", " params[key]['actual'] = new_params[key] \n", "gbm_best = H2OGradientBoostingEstimator()\n", "for key in params.keys():\n", " if key in dir(gbm_best) and getattr(gbm_best,key) != params[key]['actual']:\n", " setattr(gbm_best,key,params[key]['actual']) " ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "gbm Model Build progress: |███████████████████████████████████████████████| 100%\n" ] } ], "source": [ "gbm_best.train(x=predictors, y=response, training_frame=df)" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "Cross-Validation Metrics Summary: \n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
meansdcv_1_validcv_2_validcv_3_validcv_4_validcv_5_valid
0accuracy0.948099730.00631403130.94007490.948339460.94573640.94881890.95752895
1auc0.97434770.0095502970.96745390.96104170.97940050.98199270.98184973
2aucpr0.971583370.0086982360.968709470.95773260.97467780.97858390.978213
3err0.0519002640.00631403130.0599250940.0516605150.0542635660.0511811040.042471044
4err_count13.61.816590216.014.014.013.011.0
5f0point50.950915340.0177221480.96230160.954545440.9442060.924024640.96949893
6f10.92952870.0078248280.92380950.92307690.92631580.932642460.9417989
7f20.90960940.020972950.888278370.893617030.909090940.94142260.91563785
8lift_top_group2.62586880.157947392.38392852.82291672.6326532.67368412.6161616
9logloss0.195425150.0240048490.204803140.232149720.190312710.175944790.17391542
10max_per_class_error0.1029222160.0315531450.133928570.1250.102040820.052631580.1010101
11mcc0.890947040.0119987050.88008550.88739670.88454310.891668860.911041
12mean_per_class_accuracy0.93859440.0084722070.92980990.93178570.936479570.9485270.94636995
13mean_per_class_error0.0614055730.0084722070.0701900940.068214280.063520410.051473020.05363005
14mse0.0516550470.0069270980.056153560.0612374880.0499084440.046104450.0448713
15pr_auc0.971583370.0086982360.968709470.95773260.97467780.97858390.978213
16precision0.96606360.0298507590.98979590.97674420.956521750.91836730.98888886
17r20.78057820.0311569460.76940490.732301060.7881310.803080140.809974
18recall0.89707780.0315531450.86607140.8750.89795920.947368440.8989899
19rmse0.226875890.01509886050.236967410.24746210.223401980.214719470.21182847
\n", "
" ], "text/plain": [ " mean sd cv_1_valid \\\n", "0 accuracy 0.94809973 0.0063140313 0.9400749 \n", "1 auc 0.9743477 0.009550297 0.9674539 \n", "2 aucpr 0.97158337 0.008698236 0.96870947 \n", "3 err 0.051900264 0.0063140313 0.059925094 \n", "4 err_count 13.6 1.8165902 16.0 \n", "5 f0point5 0.95091534 0.017722148 0.9623016 \n", "6 f1 0.9295287 0.007824828 0.9238095 \n", "7 f2 0.9096094 0.02097295 0.88827837 \n", "8 lift_top_group 2.6258688 0.15794739 2.3839285 \n", "9 logloss 0.19542515 0.024004849 0.20480314 \n", "10 max_per_class_error 0.102922216 0.031553145 0.13392857 \n", "11 mcc 0.89094704 0.011998705 0.8800855 \n", "12 mean_per_class_accuracy 0.9385944 0.008472207 0.9298099 \n", "13 mean_per_class_error 0.061405573 0.008472207 0.070190094 \n", "14 mse 0.051655047 0.006927098 0.05615356 \n", "15 pr_auc 0.97158337 0.008698236 0.96870947 \n", "16 precision 0.9660636 0.029850759 0.9897959 \n", "17 r2 0.7805782 0.031156946 0.7694049 \n", "18 recall 0.8970778 0.031553145 0.8660714 \n", "19 rmse 0.22687589 0.0150988605 0.23696741 \n", "\n", " cv_2_valid cv_3_valid cv_4_valid cv_5_valid \n", "0 0.94833946 0.9457364 0.9488189 0.95752895 \n", "1 0.9610417 0.9794005 0.9819927 0.98184973 \n", "2 0.9577326 0.9746778 0.9785839 0.978213 \n", "3 0.051660515 0.054263566 0.051181104 0.042471044 \n", "4 14.0 14.0 13.0 11.0 \n", "5 0.95454544 0.944206 0.92402464 0.96949893 \n", "6 0.9230769 0.9263158 0.93264246 0.9417989 \n", "7 0.89361703 0.90909094 0.9414226 0.91563785 \n", "8 2.8229167 2.632653 2.6736841 2.6161616 \n", "9 0.23214972 0.19031271 0.17594479 0.17391542 \n", "10 0.125 0.10204082 0.05263158 0.1010101 \n", "11 0.8873967 0.8845431 0.89166886 0.911041 \n", "12 0.9317857 0.93647957 0.948527 0.94636995 \n", "13 0.06821428 0.06352041 0.05147302 0.05363005 \n", "14 0.061237488 0.049908444 0.04610445 0.0448713 \n", "15 0.9577326 0.9746778 0.9785839 0.978213 \n", "16 0.9767442 0.95652175 0.9183673 0.98888886 \n", "17 0.73230106 0.788131 0.80308014 0.809974 \n", "18 0.875 0.8979592 0.94736844 0.8989899 \n", "19 0.2474621 0.22340198 0.21471947 0.21182847 " ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "See the whole table with table.as_data_frame()\n", "\n" ] } ], "source": [ "print(gbm_best.cross_validation_metrics_summary())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It looks like the winning model performs slightly better on the validation and test sets than during cross-validation on the training set as the mean AUC on the 5 folds is estimated to be only 97.4%, but with a fairly large standard deviation of 0.9%. For small datasets, such a large variance is not unusual. To get a better estimate of model performance, the Random hyper-parameter search could have used `nfolds = 5` (or 10, or similar) in combination with 80% of the data for training (i.e., not holding out a validation set, but only the final test set). 
However, this would take more time, as `nfolds+1` models will be built for every set of parameters.\n", "\n", "Instead, to save time, let's just scan through the top 5 models and cross-validate their parameters with `nfolds=5` on the entire dataset:" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "gbm Model Build progress: |███████████████████████████████████████████████| 100%\n", "final_grid_model_97\n", " auc\n", "mean 0.9743477\n", "sd 0.009550297\n", "cv_1_valid 0.9674539\n", "cv_2_valid 0.9610417\n", "cv_3_valid 0.9794005\n", "cv_4_valid 0.9819927\n", "cv_5_valid 0.98184973\n", "Name: 1, dtype: object\n", "gbm Model Build progress: |███████████████████████████████████████████████| 100%\n", "final_grid_model_69\n", " auc\n", "mean 0.9741264\n", "sd 0.009261287\n", "cv_1_valid 0.96854836\n", "cv_2_valid 0.9610417\n", "cv_3_valid 0.97665817\n", "cv_4_valid 0.9807349\n", "cv_5_valid 0.983649\n", "Name: 1, dtype: object\n", "gbm Model Build progress: |███████████████████████████████████████████████| 100%\n", "final_grid_model_39\n", " auc\n", "mean 0.9724971\n", "sd 0.009157102\n", "cv_1_valid 0.9625576\n", "cv_2_valid 0.9624107\n", "cv_3_valid 0.97927296\n", "cv_4_valid 0.97835153\n", "cv_5_valid 0.9798927\n", "Name: 1, dtype: object\n", "gbm Model Build progress: |███████████████████████████████████████████████| 100%\n", "final_grid_model_82\n", " auc\n", "mean 0.9690046\n", "sd 0.010956372\n", "cv_1_valid 0.96209675\n", "cv_2_valid 0.9530357\n", "cv_3_valid 0.97793365\n", "cv_4_valid 0.9755048\n", "cv_5_valid 0.976452\n", "Name: 1, dtype: object\n", "gbm Model Build progress: |███████████████████████████████████████████████| 100%\n", "final_grid_model_70\n", " auc\n", "mean 0.97103506\n", "sd 0.008409648\n", "cv_1_valid 0.96313363\n", "cv_2_valid 0.96068454\n", "cv_3_valid 0.97589284\n", "cv_4_valid 0.9776233\n", "cv_5_valid 0.9778409\n", "Name: 1, dtype: object\n" ] } ], "source": [ "for i in range(5): \n", " gbm = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][i])\n", " #get the parameters from the Random grid search model and modify them slightly\n", " params = gbm.params\n", " new_params = {\"nfolds\":5, \"model_id\":None, \"training_frame\":None, \"validation_frame\":None, \n", " \"response_column\":None, \"ignored_columns\":None}\n", " for key in new_params.keys():\n", " params[key]['actual'] = new_params[key]\n", " new_model = H2OGradientBoostingEstimator()\n", " for key in params.keys():\n", " if key in dir(new_model) and getattr(new_model,key) != params[key]['actual']:\n", " setattr(new_model,key,params[key]['actual'])\n", " new_model.train(x = predictors, y = response, training_frame = df) \n", " cv_summary = new_model.cross_validation_metrics_summary().as_data_frame()\n", " print(gbm.model_id)\n", " print(cv_summary.iloc[1]) ## AUC" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The avid reader might have noticed that we just implicitly did further parameter tuning using the \"final\" test set (which is part of the entire dataset `df`), which is not good practice - one is not supposed to use the \"final\" test set more than once. Hence, we're not going to pick a different \"best\" model, but we're just learning about the variance in AUCs. 
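As a rough back-of-the-envelope check, we can compare the spread of the five CV mean AUCs printed above to their fold-to-fold standard deviations (the values below are copied, rounded, from that printout):\n",
"\n",
"```\n",
"import numpy as np\n",
"# 5-fold CV mean AUCs of the top 5 grid models (rounded, from the printout above)\n",
"cv_auc_means = [0.9743, 0.9741, 0.9725, 0.9690, 0.9710]\n",
"# their fold-to-fold standard deviations (rounded, from the printout above)\n",
"cv_auc_sds = [0.0096, 0.0093, 0.0092, 0.0110, 0.0084]\n",
"print('spread across the top-5 mean AUCs: %.4f' % np.ptp(cv_auc_means))  # roughly 0.005\n",
"print('typical fold-to-fold sd:           %.4f' % np.mean(cv_auc_sds))   # roughly 0.010\n",
"```\n",
"\n",
"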
It turns out, for this tiny dataset, that the variance is rather large, which is not surprising.\n", "\n", "Keeping the same \"best\" model, we can make test set predictions as follows:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "gbm prediction progress: |████████████████████████████████████████████████| 100%\n" ] }, { "data": { "text/html": [ "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
predict  p0         p1
0        0.942511   0.0574889
0        0.965239   0.0347607
0        0.837052   0.162948
1        0.0144778  0.985522
1        0.0111483  0.988852
0        0.818008   0.181992
1        0.0470225  0.952977
1        0.0242329  0.975767
1        0.0406579  0.959342
0        0.893662   0.106338
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/plain": [] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "preds = best_model.predict(test)\n", "preds.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the label (survived or not) is predicted as well (in the first predict column), and it uses the threshold with the highest F1 score (here: 0.528098) to make labels from the probabilities for survival (`p1`). The probability for death (`p0`) is given for convenience, as it is just `1-p1`." ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "ModelMetricsBinomial: gbm\n", "** Reported on test data. **\n", "\n", "MSE: 0.045961929072573966\n", "RMSE: 0.21438733421677217\n", "LogLoss: 0.17808872698061212\n", "Mean Per-Class Error: 0.06486334178641873\n", "AUC: 0.974218089602705\n", "AUCPR: 0.9723275034473811\n", "Gini: 0.9484361792054099\n", "\n", "Confusion Matrix (Act/Pred) for max f1 @ threshold = 0.41524673411844065: \n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
01ErrorRate
00168.01.00.0059(1.0/169.0)
1113.092.00.1238(13.0/105.0)
2Total181.093.00.0511(14.0/274.0)
\n", "
" ], "text/plain": [ " 0 1 Error Rate\n", "0 0 168.0 1.0 0.0059 (1.0/169.0)\n", "1 1 13.0 92.0 0.1238 (13.0/105.0)\n", "2 Total 181.0 93.0 0.0511 (14.0/274.0)" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "Maximum Metrics: Maximum metrics at their respective thresholds\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
metricthresholdvalueidx
0max f10.4152470.92929392.0
1max f20.2078640.924528109.0
2max f0point50.5233490.97014990.0
3max accuracy0.5233490.94890590.0
4max precision0.9902761.0000000.0
5max recall0.0579981.000000205.0
6max specificity0.9902761.0000000.0
7max absolute_mcc0.5233490.89463190.0
8max min_per_class_accuracy0.2078640.928994109.0
9max mean_per_class_accuracy0.4152470.93513792.0
10max tns0.990276169.0000000.0
11max fns0.990276104.0000000.0
12max fps0.023439169.000000267.0
13max tps0.057998105.000000205.0
14max tnr0.9902761.0000000.0
15max fnr0.9902760.9904760.0
16max fpr0.0234391.000000267.0
17max tpr0.0579981.000000205.0
\n", "
" ], "text/plain": [ " metric threshold value idx\n", "0 max f1 0.415247 0.929293 92.0\n", "1 max f2 0.207864 0.924528 109.0\n", "2 max f0point5 0.523349 0.970149 90.0\n", "3 max accuracy 0.523349 0.948905 90.0\n", "4 max precision 0.990276 1.000000 0.0\n", "5 max recall 0.057998 1.000000 205.0\n", "6 max specificity 0.990276 1.000000 0.0\n", "7 max absolute_mcc 0.523349 0.894631 90.0\n", "8 max min_per_class_accuracy 0.207864 0.928994 109.0\n", "9 max mean_per_class_accuracy 0.415247 0.935137 92.0\n", "10 max tns 0.990276 169.000000 0.0\n", "11 max fns 0.990276 104.000000 0.0\n", "12 max fps 0.023439 169.000000 267.0\n", "13 max tps 0.057998 105.000000 205.0\n", "14 max tnr 0.990276 1.000000 0.0\n", "15 max fnr 0.990276 0.990476 0.0\n", "16 max fpr 0.023439 1.000000 267.0\n", "17 max tpr 0.057998 1.000000 205.0" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "Gains/Lift Table: Avg response rate: 38.32 %, avg score: 38.27 %\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
groupcumulative_data_fractionlower_thresholdliftcumulative_liftresponse_ratescorecumulative_response_ratecumulative_scorecapture_ratecumulative_capture_rategaincumulative_gain
010.0109490.9881192.6095242.6095241.0000000.9899721.0000000.9899720.0285710.028571160.952381160.952381
120.0218980.9869382.6095242.6095241.0000000.9873641.0000000.9886680.0285710.057143160.952381160.952381
230.0328470.9860342.6095242.6095241.0000000.9864921.0000000.9879430.0285710.085714160.952381160.952381
340.0401460.9857322.6095242.6095241.0000000.9858491.0000000.9875620.0190480.104762160.952381160.952381
450.0510950.9845482.6095242.6095241.0000000.9852501.0000000.9870670.0285710.133333160.952381160.952381
560.1021900.9798172.6095242.6095241.0000000.9821131.0000000.9845900.1333330.266667160.952381160.952381
670.1496350.9731832.6095242.6095241.0000000.9761121.0000000.9819020.1238100.390476160.952381160.952381
780.2007300.9585812.6095242.6095241.0000000.9679631.0000000.9783540.1333330.523810160.952381160.952381
890.2992700.8829392.6095242.6095241.0000000.9292471.0000000.9621850.2571430.780952160.952381160.952381
9100.4014600.2073751.4911562.3248480.5714290.4618060.8909090.8348160.1523810.93333349.115646132.484848
10110.5000000.1206090.2899471.9238100.1111110.1650840.7372260.7028250.0285710.961905-71.00529192.380952
11120.5985400.0789340.0000001.6070850.0000000.0953670.6158540.6028160.0000000.961905-100.00000060.708479
12130.7007300.0618230.2795921.4134920.1071430.0694340.5416670.5250320.0285710.990476-72.04081641.349206
13140.8065690.0551470.0899841.2398190.0344830.0580270.4751130.4637500.0095241.000000-91.00164223.981900
14150.8978100.0476550.0000001.1138210.0000000.0508960.4268290.4217940.0000001.000000-100.00000011.382114
15161.0000000.0234390.0000001.0000000.0000000.0395180.3832120.3827290.0000001.000000-100.0000000.000000
\n", "
" ], "text/plain": [ " group cumulative_data_fraction lower_threshold lift \\\n", "0 1 0.010949 0.988119 2.609524 \n", "1 2 0.021898 0.986938 2.609524 \n", "2 3 0.032847 0.986034 2.609524 \n", "3 4 0.040146 0.985732 2.609524 \n", "4 5 0.051095 0.984548 2.609524 \n", "5 6 0.102190 0.979817 2.609524 \n", "6 7 0.149635 0.973183 2.609524 \n", "7 8 0.200730 0.958581 2.609524 \n", "8 9 0.299270 0.882939 2.609524 \n", "9 10 0.401460 0.207375 1.491156 \n", "10 11 0.500000 0.120609 0.289947 \n", "11 12 0.598540 0.078934 0.000000 \n", "12 13 0.700730 0.061823 0.279592 \n", "13 14 0.806569 0.055147 0.089984 \n", "14 15 0.897810 0.047655 0.000000 \n", "15 16 1.000000 0.023439 0.000000 \n", "\n", " cumulative_lift response_rate score cumulative_response_rate \\\n", "0 2.609524 1.000000 0.989972 1.000000 \n", "1 2.609524 1.000000 0.987364 1.000000 \n", "2 2.609524 1.000000 0.986492 1.000000 \n", "3 2.609524 1.000000 0.985849 1.000000 \n", "4 2.609524 1.000000 0.985250 1.000000 \n", "5 2.609524 1.000000 0.982113 1.000000 \n", "6 2.609524 1.000000 0.976112 1.000000 \n", "7 2.609524 1.000000 0.967963 1.000000 \n", "8 2.609524 1.000000 0.929247 1.000000 \n", "9 2.324848 0.571429 0.461806 0.890909 \n", "10 1.923810 0.111111 0.165084 0.737226 \n", "11 1.607085 0.000000 0.095367 0.615854 \n", "12 1.413492 0.107143 0.069434 0.541667 \n", "13 1.239819 0.034483 0.058027 0.475113 \n", "14 1.113821 0.000000 0.050896 0.426829 \n", "15 1.000000 0.000000 0.039518 0.383212 \n", "\n", " cumulative_score capture_rate cumulative_capture_rate gain \\\n", "0 0.989972 0.028571 0.028571 160.952381 \n", "1 0.988668 0.028571 0.057143 160.952381 \n", "2 0.987943 0.028571 0.085714 160.952381 \n", "3 0.987562 0.019048 0.104762 160.952381 \n", "4 0.987067 0.028571 0.133333 160.952381 \n", "5 0.984590 0.133333 0.266667 160.952381 \n", "6 0.981902 0.123810 0.390476 160.952381 \n", "7 0.978354 0.133333 0.523810 160.952381 \n", "8 0.962185 0.257143 0.780952 160.952381 \n", "9 0.834816 0.152381 0.933333 49.115646 \n", "10 0.702825 0.028571 0.961905 -71.005291 \n", "11 0.602816 0.000000 0.961905 -100.000000 \n", "12 0.525032 0.028571 0.990476 -72.040816 \n", "13 0.463750 0.009524 1.000000 -91.001642 \n", "14 0.421794 0.000000 1.000000 -100.000000 \n", "15 0.382729 0.000000 1.000000 -100.000000 \n", "\n", " cumulative_gain \n", "0 160.952381 \n", "1 160.952381 \n", "2 160.952381 \n", "3 160.952381 \n", "4 160.952381 \n", "5 160.952381 \n", "6 160.952381 \n", "7 160.952381 \n", "8 160.952381 \n", "9 132.484848 \n", "10 92.380952 \n", "11 60.708479 \n", "12 41.349206 \n", "13 23.981900 \n", "14 11.382114 \n", "15 0.000000 " ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n" ] }, { "data": { "text/plain": [] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "best_model.model_performance(valid)" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'final_grid_model_97'" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Key of best model:\n", "best_model.key" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can also see the \"best\" model in more detail in [Flow](http://localhost:54321/):\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model and the predictions can be saved to file as follows:" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "scrolled": true }, "outputs": [], 
"source": [ "# uncomment if you want to export the best model\n", "# h2o.save_model(best_model, \"/tmp/bestModel.csv\", force=True)\n", "# h2o.export_file(preds, \"/tmp/bestPreds.csv\", force=True)" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "# print pojo to screen, or provide path to download location\n", "# h2o.download_pojo(best_model)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model can also be exported as a plain old Java object (POJO) for H2O-independent (standalone/Storm/Kafka/UDF) scoring in any Java environment." ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "```\n", "/*\n", " Licensed under the Apache License, Version 2.0\n", " http://www.apache.org/licenses/LICENSE-2.0.html\n", "\n", " AUTOGENERATED BY H2O at 2016-07-17T18:38:50.337-07:00\n", " 3.8.3.3\n", "\n", " Standalone prediction code with sample test data for GBMModel named final_grid_model_45\n", "\n", " How to download, compile and execute:\n", " mkdir tmpdir\n", " cd tmpdir\n", " curl http://127.0.0.1:54321/3/h2o-genmodel.jar > h2o-genmodel.jar\n", " curl http://127.0.0.1:54321/3/Models.java/final_grid_model_45 > final_grid_model_45.java\n", " javac -cp h2o-genmodel.jar -J-Xmx2g -J-XX:MaxPermSize=128m final_grid_model_45.java\n", "\n", " (Note: Try java argument -XX:+PrintCompilation to show runtime JIT compiler behavior.)\n", "*/\n", "import java.util.Map;\n", "import hex.genmodel.GenModel;\n", "import hex.genmodel.annotations.ModelPojo;\n", "\n", "...\n", "class final_grid_model_45_Tree_0_class_0 {\n", " static final double score0(double[] data) {\n", " double pred = (Double.isNaN(data[1]) || !GenModel.bitSetContains(GRPSPLIT0, 0, data[1 /* sex */]) ? \n", " (Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT1, 13, data[7 /* cabin */]) ? \n", " (Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT2, 9, data[7 /* cabin */]) ? \n", " (Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT3, 9, data[7 /* cabin */]) ? \n", " (data[2 /* age */] <1.4174492f ? \n", " 0.13087687f : \n", " (Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT4, 9, data[7 /* cabin */]) ? \n", " (Double.isNaN(data[3]) || data[3 /* sibsp */] <1.000313f ? \n", " (data[6 /* fare */] <7.91251f ? \n", " (Double.isNaN(data[5]) || data[5 /* ticket */] <368744.5f ? \n", " -0.08224204f : \n", " (Double.isNaN(data[2]) || data[2 /* age */] <13.0f ? \n", " -0.028962314f : \n", " -0.08224204f)) : \n", " (Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT5, 9, data[7 /* cabin */]) ? \n", " (data[6 /* fare */] <7.989957f ? \n", " (Double.isNaN(data[3]) || data[3 /* sibsp */] <0.0017434144f ? \n", " 0.07759714f : \n", " 0.13087687f) : \n", " (data[6 /* fare */] <12.546303f ? \n", " -0.07371729f : \n", " (Double.isNaN(data[4]) || data[4 /* parch */] <1.0020853f ? \n", " -0.037374903f : \n", " -0.08224204f))) : \n", " 0.0f)) : \n", " -0.08224204f) : \n", " 0.0f)) : \n", " 0.0f) : \n", " -0.08224204f) : \n", " -0.08224204f) : \n", "...\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Ensembling Techniques\n", "\n", "After learning above that the variance of the test set AUC of the top few models was rather large, we might be able to turn this into our advantage by using ensembling techniques. 
The simplest one is taking the average of the predictions (survival probabilities) of the top `k` grid search model predictions (here, we use `k=10`):" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "gbm prediction progress: |████████████████████████████████████████████████| 100%\n", "gbm prediction progress: |████████████████████████████████████████████████| 100%\n", "gbm prediction progress: |████████████████████████████████████████████████| 100%\n", "gbm prediction progress: |████████████████████████████████████████████████| 100%\n", "gbm prediction progress: |████████████████████████████████████████████████| 100%\n", "gbm prediction progress: |████████████████████████████████████████████████| 100%\n", "gbm prediction progress: |████████████████████████████████████████████████| 100%\n", "gbm prediction progress: |████████████████████████████████████████████████| 100%\n", "gbm prediction progress: |████████████████████████████████████████████████| 100%\n", "gbm prediction progress: |████████████████████████████████████████████████| 100%\n" ] } ], "source": [ "prob = None\n", "k=10\n", "for i in range(0,k): \n", " gbm = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][i])\n", " if (prob is None):\n", " prob = gbm.predict(test)[\"p1\"]\n", " else:\n", " prob = prob + gbm.predict(test)[\"p1\"]\n", "prob = prob/k" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now have a blended probability of survival for each person on the Titanic." ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
p1
0.0555282
0.0382219
0.143723
0.978605
0.982394
0.230839
0.937021
0.978544
0.939877
0.138475
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/plain": [] }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ], "source": [ "prob.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can bring those ensemble predictions to our Python session's memory space and use other Python packages." ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.9827540636976345" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from sklearn.metrics import roc_auc_score\n", "# convert prob and test[response] h2oframes to pandas' frames and then convert them each to numpy array\n", "np_array_prob = prob.as_data_frame().values\n", "np_array_test = test[response].as_data_frame().values\n", "probInPy = np_array_prob\n", "labeInPy = np_array_test\n", "# compare true scores (test[response]) to probability scores (prob)\n", "roc_auc_score(labeInPy, probInPy)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This simple blended ensemble test set prediction has an even higher AUC than the best single model, but we need to do more validation studies, ideally using cross-validation. We leave this as an exercise for the reader - take the parameters of the top `10` models, retrain them with `nfolds=5` on the full dataset, set `keep_holdout_predictions=True` and sum up their predicted probabilities, then score that with sklearn's roc_auc_score as shown above.\n", "\n", "For more sophisticated ensembling approaches, such as stacking via a superlearner, we refer to the [H2O Ensemble](https://github.com/h2oai/h2o-3/tree/master/h2o-r/ensemble) github page." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summary\n", "We learned how to build H2O GBM models for a binary classification task on a small but realistic dataset with numerical and categorical variables, with the goal to maximize the AUC (ranges from 0.5 to 1). We first established a baseline with the default model, then carefully tuned the remaining hyper-parameters without \"too much\" human guess-work. We used both Cartesian and Random hyper-parameter searches to find good models. We were able to get the AUC on a holdout test set from 95% range with the default model to 97% range after tuning, and to above 98% with some simple ensembling technique known as blending. We performed simple cross-validation variance analysis to learn that results were slightly \"lucky\" due to the specific train/valid/test set splits, and settled to expect 97% AUCs instead.\n", "\n", "Note that this script and the findings therein are directly transferrable to large datasets on distributed clusters including Spark/Hadoop environments.\n", "\n", "More information can be found here [http://www.h2o.ai/docs/](http://www.h2o.ai/docs/)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.9" } }, "nbformat": 4, "nbformat_minor": 1 }