{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# About \n", "\n", "This notebook demonstrates the classifiers provided by the __Reproducible experiment platform (REP)__ package. <br /> REP contains the following classifiers:\n", "* __scikit-learn__\n", "* __TMVA__ \n", "* __XGBoost__ \n", "\n", "Classifiers from `hep_ml` (as any other `sklearn`-compatible classifiers) may be used as well.\n", "\n", "### In this notebook we show the simplest way to\n", "* train a classifier\n", "* build predictions \n", "* measure the quality\n", "* combine metaclassifiers" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Loading data" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy, pandas\n", "from rep.utils import train_test_split\n", "from sklearn.metrics import roc_auc_score\n", "\n", "sig_data = pandas.read_csv('toy_datasets/toyMC_sig_mass.csv', sep='\\t')\n", "bck_data = pandas.read_csv('toy_datasets/toyMC_bck_mass.csv', sep='\\t')\n", "\n", "labels = numpy.array([1] * len(sig_data) + [0] * len(bck_data))\n", "data = pandas.concat([sig_data, bck_data])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### First rows of our data" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "<div style=\"max-height:1000px;max-width:1500px;overflow:auto;\">\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>CDF1</th>\n", " <th>CDF2</th>\n", " <th>CDF3</th>\n", " <th>DOCAone</th>\n", " <th>DOCAthree</th>\n", " <th>DOCAtwo</th>\n", " <th>FlightDistance</th>\n", " <th>FlightDistanceError</th>\n", " <th>Hlt1Dec</th>\n", " <th>Hlt2Dec</th>\n", " <th>...</th>\n", " <th>p1_IP</th>\n", " <th>p1_IPSig</th>\n", " <th>p1_Laura_IsoBDT</th>\n", " <th>p1_pt</th>\n", " <th>p2_IP</th>\n", " <th>p2_IPSig</th>\n", " <th>p2_Laura_IsoBDT</th>\n", " 
<th>p2_pt</th>\n", " <th>peakingbkg</th>\n", " <th>pt</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td> 1.000000</td>\n", " <td> 1.000000</td>\n", " <td> 1.000000</td>\n", " <td> 0.111337</td>\n", " <td> 0.012695</td>\n", " <td> 0.123426</td>\n", " <td> 162.650955</td>\n", " <td> 0.870942</td>\n", " <td> 0</td>\n", " <td> 0</td>\n", " <td>...</td>\n", " <td> 11.314665</td>\n", " <td> 83.196968</td>\n", " <td>-0.223668</td>\n", " <td> 699.066467</td>\n", " <td> 9.799975</td>\n", " <td> 64.790207</td>\n", " <td>-0.121159</td>\n", " <td> 521.628174</td>\n", " <td>NaN</td>\n", " <td> 220.742111</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td> 0.759755</td>\n", " <td> 0.597375</td>\n", " <td> 0.389256</td>\n", " <td> 0.021781</td>\n", " <td> 0.094551</td>\n", " <td> 0.088421</td>\n", " <td> 4.193265</td>\n", " <td> 1.262280</td>\n", " <td> 0</td>\n", " <td> 0</td>\n", " <td>...</td>\n", " <td> 0.720070</td>\n", " <td> 7.237868</td>\n", " <td>-0.256142</td>\n", " <td> 587.628935</td>\n", " <td> 0.882111</td>\n", " <td> 8.834325</td>\n", " <td>-0.203220</td>\n", " <td> 532.679950</td>\n", " <td>NaN</td>\n", " <td> 661.208843</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td> 1.000000</td>\n", " <td> 0.796142</td>\n", " <td> 0.566286</td>\n", " <td> 0.011852</td>\n", " <td> 0.004400</td>\n", " <td> 0.009153</td>\n", " <td> 1.580610</td>\n", " <td> 0.261697</td>\n", " <td> 0</td>\n", " <td> 0</td>\n", " <td>...</td>\n", " <td> 0.362181</td>\n", " <td> 4.173097</td>\n", " <td>-0.252788</td>\n", " <td> 802.746495</td>\n", " <td> 0.427290</td>\n", " <td> 5.008959</td>\n", " <td>-0.409469</td>\n", " <td> 674.122342</td>\n", " <td>NaN</td>\n", " <td> 1290.963982</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td> 0.716397</td>\n", " <td> 0.524712</td>\n", " <td> 0.279033</td>\n", " <td> 0.015171</td>\n", " <td> 0.083900</td>\n", " <td> 0.069127</td>\n", " <td> 7.884569</td>\n", " <td> 1.310151</td>\n", " <td> 0</td>\n", " <td> 
0</td>\n", " <td>...</td>\n", " <td> 0.753449</td>\n", " <td> 6.615949</td>\n", " <td>-0.253550</td>\n", " <td> 564.203857</td>\n", " <td> 0.917409</td>\n", " <td> 8.695459</td>\n", " <td>-0.192284</td>\n", " <td> 537.791687</td>\n", " <td>NaN</td>\n", " <td> 692.654175</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " <td> 1.000000</td>\n", " <td> 0.996479</td>\n", " <td> 0.888159</td>\n", " <td> 0.005547</td>\n", " <td> 0.070438</td>\n", " <td> 0.064689</td>\n", " <td> -2.267649</td>\n", " <td> 0.139555</td>\n", " <td> 0</td>\n", " <td> 0</td>\n", " <td>...</td>\n", " <td> 0.589455</td>\n", " <td> 21.869143</td>\n", " <td>-0.254778</td>\n", " <td> 746.624928</td>\n", " <td> 0.388996</td>\n", " <td> 8.465344</td>\n", " <td>-0.217319</td>\n", " <td> 988.539221</td>\n", " <td>NaN</td>\n", " <td> 1328.837840</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "<p>5 rows × 40 columns</p>\n", "</div>" ], "text/plain": [ " CDF1 CDF2 CDF3 DOCAone DOCAthree DOCAtwo \\\n", "0 1.000000 1.000000 1.000000 0.111337 0.012695 0.123426 \n", "1 0.759755 0.597375 0.389256 0.021781 0.094551 0.088421 \n", "2 1.000000 0.796142 0.566286 0.011852 0.004400 0.009153 \n", "3 0.716397 0.524712 0.279033 0.015171 0.083900 0.069127 \n", "4 1.000000 0.996479 0.888159 0.005547 0.070438 0.064689 \n", "\n", " FlightDistance FlightDistanceError Hlt1Dec Hlt2Dec ... p1_IP \\\n", "0 162.650955 0.870942 0 0 ... 11.314665 \n", "1 4.193265 1.262280 0 0 ... 0.720070 \n", "2 1.580610 0.261697 0 0 ... 0.362181 \n", "3 7.884569 1.310151 0 0 ... 0.753449 \n", "4 -2.267649 0.139555 0 0 ... 
0.589455 \n", "\n", " p1_IPSig p1_Laura_IsoBDT p1_pt p2_IP p2_IPSig \\\n", "0 83.196968 -0.223668 699.066467 9.799975 64.790207 \n", "1 7.237868 -0.256142 587.628935 0.882111 8.834325 \n", "2 4.173097 -0.252788 802.746495 0.427290 5.008959 \n", "3 6.615949 -0.253550 564.203857 0.917409 8.695459 \n", "4 21.869143 -0.254778 746.624928 0.388996 8.465344 \n", "\n", " p2_Laura_IsoBDT p2_pt peakingbkg pt \n", "0 -0.121159 521.628174 NaN 220.742111 \n", "1 -0.203220 532.679950 NaN 661.208843 \n", "2 -0.409469 674.122342 NaN 1290.963982 \n", "3 -0.192284 537.791687 NaN 692.654175 \n", "4 -0.217319 988.539221 NaN 1328.837840 \n", "\n", "[5 rows x 40 columns]" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data[:5]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Splitting into train and test" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Get train and test data\n", "train_data, test_data, train_labels, test_labels = train_test_split(data, labels, train_size=0.5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Classifiers\n", "\n", "All classifiers inherit from __sklearn.BaseEstimator__ and have the following methods:\n", " \n", "* `classifier.fit(X, y, sample_weight=None)` - train the classifier\n", " \n", "* `classifier.predict_proba(X)` - return the probabilities for all classes\n", "\n", "* `classifier.predict(X)` - return predicted labels\n", "\n", "* `classifier.staged_predict_proba(X)` - return probabilities after each iteration (not supported by TMVA)\n", "\n", "* `classifier.get_feature_importances()`\n", "\n", "\n", "Here `X` denotes a matrix with data of shape `[n_samples, n_features]`, `y` is a vector of labels (0 or 1) of shape `[n_samples]`, <br /> and `sample_weight` is a vector of weights.\n", "\n", "\n", "## Difference from the default scikit-learn interface\n", "`X` should be* a `pandas.DataFrame`, not a `numpy.array`. 
<br />\n", "Provided this, you'll be able to choose the features used in training by setting e.g. `features=['FlightTime', 'p']` in the constructor.\n", "\n", "\\* it works fine with a `numpy.array` as well, but in that case all the features will be used." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Variables used in training" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [], "source": [ "variables = [\"FlightDistance\", \"FlightDistanceError\", \"IP\", \"VertexChi2\", \"pt\", \"p0_pt\", \"p1_pt\", \"p2_pt\", \"LifeTime\", \"dira\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Sklearn\n", "A wrapper for scikit-learn classifiers. In this example we use `GradientBoostingClassifier` with default settings." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "training complete\n" ] } ], "source": [ "from rep.estimators import SklearnClassifier\n", "from sklearn.ensemble import GradientBoostingClassifier\n", "# Using gradient boosting with default settings\n", "sk = SklearnClassifier(GradientBoostingClassifier(), features=variables)\n", "# Training classifier\n", "sk.fit(train_data, train_labels)\n", "print('training complete')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Predicting probabilities, measuring the quality" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[ 0.02570074 0.97429926]\n", " [ 0.4970443 0.5029557 ]\n", " [ 0.47570811 0.52429189]\n", " ..., \n", " [ 0.02284076 0.97715924]\n", " [ 0.01029867 0.98970133]\n", " [ 0.07697665 0.92302335]]\n" ] } ], "source": [ "# predict probabilities for each class\n", "prob = sk.predict_proba(test_data)\n", "print prob" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, "outputs": [ { "name": 
"stdout", "output_type": "stream", "text": [ "ROC AUC 0.911885293509\n" ] } ], "source": [ "print 'ROC AUC', roc_auc_score(test_labels, prob[:, 1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Predictions of classes" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "array([1, 1, 1, ..., 1, 1, 1])" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sk.predict(test_data)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "<div style=\"max-height:1000px;max-width:1500px;overflow:auto;\">\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>effect</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>FlightDistance</th>\n", " <td> 0.016774</td>\n", " </tr>\n", " <tr>\n", " <th>FlightDistanceError</th>\n", " <td> 0.056593</td>\n", " </tr>\n", " <tr>\n", " <th>IP</th>\n", " <td> 0.141333</td>\n", " </tr>\n", " <tr>\n", " <th>VertexChi2</th>\n", " <td> 0.116290</td>\n", " </tr>\n", " <tr>\n", " <th>pt</th>\n", " <td> 0.094450</td>\n", " </tr>\n", " <tr>\n", " <th>p0_pt</th>\n", " <td> 0.062632</td>\n", " </tr>\n", " <tr>\n", " <th>p1_pt</th>\n", " <td> 0.102965</td>\n", " </tr>\n", " <tr>\n", " <th>p2_pt</th>\n", " <td> 0.090848</td>\n", " </tr>\n", " <tr>\n", " <th>LifeTime</th>\n", " <td> 0.137255</td>\n", " </tr>\n", " <tr>\n", " <th>dira</th>\n", " <td> 0.180859</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " effect\n", "FlightDistance 0.016774\n", "FlightDistanceError 0.056593\n", "IP 0.141333\n", "VertexChi2 0.116290\n", "pt 0.094450\n", "p0_pt 0.062632\n", "p1_pt 0.102965\n", "p2_pt 0.090848\n", "LifeTime 0.137255\n", "dira 0.180859" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ 
"sk.get_feature_importances()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## TMVA" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", " TMVAClassifier wraps estimators from TMVA (CERN library for machine learning)\n", "\n", " Parameters:\n", " -----------\n", " :param str method: algorithm method (default='kBDT')\n", " :param features: features used in training\n", " :type features: list[str] or None\n", " :param str factory_options: options, for example::\n", "\n", " \"!V:!Silent:Color:Transformations=I;D;P;G,D\"\n", "\n", " :param str sigmoid_function: function which is used to convert TMVA output to probabilities;\n", "\n", " * *identity* (svm, mlp) --- the same output, use this for methods returning class probabilities\n", "\n", " * *sigmoid* --- sigmoid transformation, use it if output varies in range [-infinity, +infinity]\n", "\n", " * *bdt* (for bdt algorithms output varies in range [-1, 1])\n", "\n", " * *sig_eff=0.4* --- for rectangular cut optimization methods,\n", " and 0.4 will be used as signal efficiency to evaluate MVA,\n", " (put any float number from [0, 1])\n", "\n", " :param dict method_parameters: estimator options, example: NTrees=100, BoostType='Grad'\n", "\n", " .. warning::\n", " TMVA doesn't support *staged_predict_proba()* and *feature_importances__*\n", "\n", " .. 
warning::\n", " TMVA doesn't support multiclassification, only two-classes classification\n", " \n" ] } ], "source": [ "from rep.estimators import TMVAClassifier\n", "print TMVAClassifier.__doc__" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "training complete\n" ] } ], "source": [ "tmva = TMVAClassifier(method='kBDT', NTrees=50, Shrinkage=0.05, features=variables)\n", "tmva.fit(train_data, train_labels)\n", "print('training complete')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Predict probabilities and estimate quality" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[ 0.2967351 0.7032649 ]\n", " [ 0.77204245 0.22795755]\n", " [ 0.79106683 0.20893317]\n", " ..., \n", " [ 0.29994913 0.70005087]\n", " [ 0.12615543 0.87384457]\n", " [ 0.48732808 0.51267192]]\n" ] } ], "source": [ "# predict probabilities for each class\n", "prob = tmva.predict_proba(test_data)\n", "print prob" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ROC AUC 0.90249291305\n" ] } ], "source": [ "print 'ROC AUC', roc_auc_score(test_labels, prob[:, 1])" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "array([1, 0, 0, ..., 1, 1, 1])" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# predict labels\n", "tmva.predict(test_data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## XGBoost" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", " Implements classification (multiclassification) from XGBoost library.\n", 
"\n", " Parameters:\n", " -----------\n", " :param features: list of features to train model\n", " :type features: None or list(str)\n", " :param int n_estimators: the number of round for boosting.\n", " :param int nthreads: number of parallel threads used to run xgboost.\n", " :param num_feature: feature dimension used in boosting, set to maximum dimension of the feature\n", " (set automatically by xgboost, no need to be set by user).\n", " :type num_feature: None or int\n", " :param float gamma: minimum loss reduction required to make a further partition on a leaf node of the tree.\n", " The larger, the more conservative the algorithm will be.\n", " :type gamma: None or float\n", " :param float eta: step size shrinkage used in update to prevent overfitting.\n", " After each boosting step, we can directly get the weights of new features\n", " and eta actually shrinkage the feature weights to make the boosting process more conservative.\n", " :param int max_depth: maximum depth of a tree.\n", " :param float scale_pos_weight: ration of weights of the class 1 to the weights of the class 0.\n", " :param float min_child_weight: minimum sum of instance weight(hessian) needed in a child.\n", " If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight,\n", " then the building process will give up further partitioning.\n", "\n", " .. note:: weights are normalized so that mean=1 before fitting. 
Roughly min_child_weight is equal to the number of events.\n", " :param float subsample: subsample ratio of the training instance.\n", " Setting it to 0.5 means that XGBoost randomly collected half of the data instances to grow trees\n", " and this will prevent overfitting.\n", " :param float colsample: subsample ratio of columns when constructing each tree.\n", " :param float base_score: the initial prediction score of all instances, global bias.\n", " :param int random_state: random number seed.\n", " :param boot verbose: if 1, will print messages during training\n", " :param float missing: the number considered by xgboost as missing value.\n", " \n" ] } ], "source": [ "from rep.estimators import XGBoostClassifier\n", "print XGBoostClassifier.__doc__" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "training complete\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/Users/axelr/xgboost/xgboost/wrapper/xgboost.py:80: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.\n", " if label != None:\n", "/Users/axelr/xgboost/xgboost/wrapper/xgboost.py:82: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.\n", " if weight !=None:\n" ] } ], "source": [ "# XGBoost with default parameters\n", "xgb = XGBoostClassifier(features=variables)\n", "xgb.fit(train_data, train_labels, sample_weight=numpy.ones(len(train_labels)))\n", "print('training complete')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Predict probabilities and estimate quality" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ROC AUC: 0.926181519399\n" ] } ], "source": [ "prob = xgb.predict_proba(test_data)\n", "print 'ROC AUC:', roc_auc_score(test_labels, prob[:, 1])" ] }, { 
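"cell_type": "markdown", "metadata": {}, "source": [ "Every wrapper above follows the same `fit` / `predict_proba` pattern, so the quality check can be written once and reused with any estimator. The next cell is a minimal illustrative sketch of that pattern using plain scikit-learn on a synthetic dataset (the synthetic data and the names `X`, `y`, `clf` are assumptions for illustration, not part of the toy samples used in this notebook):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.datasets import make_classification\n", "from sklearn.ensemble import GradientBoostingClassifier\n", "from sklearn.metrics import roc_auc_score\n", "\n", "# synthetic stand-in for the toy datasets: 500 samples, half train / half test\n", "X, y = make_classification(n_samples=500, random_state=42)\n", "clf = GradientBoostingClassifier()\n", "clf.fit(X[:250], y[:250])\n", "prob = clf.predict_proba(X[250:])  # shape [n_samples, n_classes]\n", "print('ROC AUC:', roc_auc_score(y[250:], prob[:, 1]))" ] }, { 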
"cell_type": "markdown", "metadata": {}, "source": [ "### Predict labels" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "array([1, 0, 1, ..., 1, 1, 1])" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "xgb.predict(test_data)" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "<div style=\"max-height:1000px;max-width:1500px;overflow:auto;\">\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>effect</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>FlightDistance</th>\n", " <td> 994</td>\n", " </tr>\n", " <tr>\n", " <th>FlightDistanceError</th>\n", " <td> 1028</td>\n", " </tr>\n", " <tr>\n", " <th>IP</th>\n", " <td> 994</td>\n", " </tr>\n", " <tr>\n", " <th>VertexChi2</th>\n", " <td> 904</td>\n", " </tr>\n", " <tr>\n", " <th>pt</th>\n", " <td> 1036</td>\n", " </tr>\n", " <tr>\n", " <th>p0_pt</th>\n", " <td> 786</td>\n", " </tr>\n", " <tr>\n", " <th>p1_pt</th>\n", " <td> 1140</td>\n", " </tr>\n", " <tr>\n", " <th>p2_pt</th>\n", " <td> 1094</td>\n", " </tr>\n", " <tr>\n", " <th>LifeTime</th>\n", " <td> 420</td>\n", " </tr>\n", " <tr>\n", " <th>dira</th>\n", " <td> 264</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " effect\n", "FlightDistance 994\n", "FlightDistanceError 1028\n", "IP 994\n", "VertexChi2 904\n", "pt 1036\n", "p0_pt 786\n", "p1_pt 1140\n", "p2_pt 1094\n", "LifeTime 420\n", "dira 264" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "xgb.get_feature_importances()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Advantages of common interface" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As one can see above, all the classifiers implement the same interface, 
\n", "this simplifies work and the comparison of different classifiers, \n", "but this is not the only benefit. \n", "\n", "`sklearn` provides various tools to combine classifiers and transformers. \n", "One of these tools is `AdaBoost`, an abstract metaclassifier built on top of some other classifier (usually a decision tree).\n", "\n", "Let's show that you can now run AdaBoost over classifiers from other libraries! <br />\n", "_(isn't boosting over a neural network what you have been dreaming of all your life?)_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## AdaBoost over TMVA classifier" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "training complete\n" ] } ], "source": [ "from sklearn.ensemble import AdaBoostClassifier\n", "\n", "# Construct AdaBoost with TMVA as base estimator\n", "base_tmva = TMVAClassifier(method='kBDT', NTrees=15, Shrinkage=0.05)\n", "ada_tmva = SklearnClassifier(AdaBoostClassifier(base_estimator=base_tmva, n_estimators=5), features=variables)\n", "ada_tmva.fit(train_data, train_labels)\n", "print('training complete')" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "AUC 0.876710239759\n" ] } ], "source": [ "prob = ada_tmva.predict_proba(test_data)\n", "print 'AUC', roc_auc_score(test_labels, prob[:, 1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## AdaBoost over XGBoost" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "training complete!\n" ] } ], "source": [ "# Construct AdaBoost with xgboost base estimator\n", "base_xgb = XGBoostClassifier(n_estimators=50)\n", "# ada_xgb = SklearnClassifier(AdaBoostClassifier(base_estimator=base_xgb, n_estimators=1), 
features=variables)\n", "ada_xgb = AdaBoostClassifier(base_estimator=base_xgb, n_estimators=1)\n", "ada_xgb.fit(train_data[variables], train_labels)\n", "print('training complete!')" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "AUC 0.921637688032\n" ] } ], "source": [ "# predict probabilities for each class\n", "prob = ada_xgb.predict_proba(test_data[variables])\n", "print 'AUC', roc_auc_score(test_labels, prob[:, 1])" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "AUC 0.948215398032\n" ] } ], "source": [ "# predict probabilities for each class on the training sample\n", "prob = ada_xgb.predict_proba(train_data[variables])\n", "print 'AUC', roc_auc_score(train_labels, prob[:, 1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Other advantages of the common interface\n", "There are many things you can do with classifiers now: \n", "* cloning\n", "* getting / setting parameters as dictionaries \n", "* automatic hyperparameter optimization \n", "* building pipelines (`sklearn.pipeline`)\n", "* hierarchical training, training on subsets\n", "* passing over the internet / training classifiers on other machines\n", "\n", "And you can replace classifiers at any moment." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Exercises\n", "\n", "### Exercise 1. Play with the parameters of each type of classifier\n", "\n", "### Exercise 2. 
Add a weight column and train the models with weights." ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.5" } }, "nbformat": 4, "nbformat_minor": 0 }