{ "cells": [ { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# About \n", "\n", "This notebook demonstrates classifiers, which are provided by __Reproducible experiment platform (REP)__ package.
REP provides the following classifiers:\n", "* __scikit-learn__\n", "* __TMVA__ \n", "* __XGBoost__ \n", "* estimators from __hep_ml__\n", "* __theanets__\n", "* __PyBrain__\n", "* __Neurolab__\n", "\n", "(and any `sklearn`-compatible classifier may be used).\n", "\n", "Neural network libraries are introduced in a separate notebook.\n", "\n", "### In this notebook we show the simplest way to\n", "* train a classifier\n", "* build predictions \n", "* measure quality\n", "* combine classifiers into metaclassifiers\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Loading data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Download the particle identification dataset from UCI" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "File `MiniBooNE_PID.txt' already there; not retrieving.\r\n" ] } ], "source": [ "!cd toy_datasets; wget -O MiniBooNE_PID.txt -nc --no-check-certificate https://archive.ics.uci.edu/ml/machine-learning-databases/00199/MiniBooNE_PID.txt" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy, pandas\n", "from rep.utils import train_test_split\n", "from sklearn.metrics import roc_auc_score\n", "\n", "data = pandas.read_csv('toy_datasets/MiniBooNE_PID.txt', sep='\\s*', skiprows=[0], header=None, engine='python')\n", "labels = pandas.read_csv('toy_datasets/MiniBooNE_PID.txt', sep=' ', nrows=1, header=None)\n", "labels = [1] * labels[1].values[0] + [0] * labels[2].values[0]\n", "data.columns = ['feature_{}'.format(key) for key in data.columns]" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### First rows of our data" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
feature_0feature_1feature_2feature_3feature_4feature_5feature_6feature_7feature_8feature_9...feature_40feature_41feature_42feature_43feature_44feature_45feature_46feature_47feature_48feature_49
02.594130.46880320.69160.3226480.0096820.3743930.8034790.8965923.596650.249282...101.174-31.37300.4422595.864530.0000000.0905190.1769090.4575850.0717690.245996
13.863880.64578118.13750.2335290.0307330.3612391.0697400.8787143.592430.200793...186.51645.9597-0.4785076.111260.0011820.091800-0.4655720.9355230.3336130.230621
23.385841.19714036.08070.2008660.0173410.2608411.1089500.8844053.431590.177167...129.931-11.5608-0.2970088.272040.0038540.141721-0.2105591.0134500.2555120.180901
34.285240.510155674.20100.2819230.0091740.0000000.9988220.8233903.163820.171678...163.978-18.45860.4538862.481120.0000000.1809380.4079684.3412700.4730810.258990
45.936620.83299359.87960.2328530.0250660.2335561.3700400.7874243.665460.174862...229.55542.9600-0.9757522.661090.0000000.170836-0.8144034.6794901.9249900.253893
\n", "

5 rows × 50 columns

\n", "
" ], "text/plain": [ " feature_0 feature_1 feature_2 feature_3 feature_4 feature_5 \\\n", "0 2.59413 0.468803 20.6916 0.322648 0.009682 0.374393 \n", "1 3.86388 0.645781 18.1375 0.233529 0.030733 0.361239 \n", "2 3.38584 1.197140 36.0807 0.200866 0.017341 0.260841 \n", "3 4.28524 0.510155 674.2010 0.281923 0.009174 0.000000 \n", "4 5.93662 0.832993 59.8796 0.232853 0.025066 0.233556 \n", "\n", " feature_6 feature_7 feature_8 feature_9 ... feature_40 \\\n", "0 0.803479 0.896592 3.59665 0.249282 ... 101.174 \n", "1 1.069740 0.878714 3.59243 0.200793 ... 186.516 \n", "2 1.108950 0.884405 3.43159 0.177167 ... 129.931 \n", "3 0.998822 0.823390 3.16382 0.171678 ... 163.978 \n", "4 1.370040 0.787424 3.66546 0.174862 ... 229.555 \n", "\n", " feature_41 feature_42 feature_43 feature_44 feature_45 feature_46 \\\n", "0 -31.3730 0.442259 5.86453 0.000000 0.090519 0.176909 \n", "1 45.9597 -0.478507 6.11126 0.001182 0.091800 -0.465572 \n", "2 -11.5608 -0.297008 8.27204 0.003854 0.141721 -0.210559 \n", "3 -18.4586 0.453886 2.48112 0.000000 0.180938 0.407968 \n", "4 42.9600 -0.975752 2.66109 0.000000 0.170836 -0.814403 \n", "\n", " feature_47 feature_48 feature_49 \n", "0 0.457585 0.071769 0.245996 \n", "1 0.935523 0.333613 0.230621 \n", "2 1.013450 0.255512 0.180901 \n", "3 4.341270 0.473081 0.258990 \n", "4 4.679490 1.924990 0.253893 \n", "\n", "[5 rows x 50 columns]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data[:5]" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Splitting into train and test" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Get train and test data\n", "train_data, test_data, train_labels, test_labels = train_test_split(data, labels, train_size=0.25)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Classifiers\n", "\n", "All 
classifiers inherit from __sklearn.BaseEstimator__ and have the following methods:\n", " \n", "* `classifier.fit(X, y, sample_weight=None)` - train the classifier\n", " \n", "* `classifier.predict_proba(X)` - return a matrix of probabilities for all classes\n", "\n", "* `classifier.predict(X)` - return predicted labels\n", "\n", "* `classifier.staged_predict_proba(X)` - return probabilities after each iteration (not supported by TMVA)\n", "\n", "* `classifier.get_feature_importances()` - return the importance of each feature\n", "\n", "\n", "Here we use `X` to denote the data matrix of shape `[n_samples, n_features]`, `y` is the vector of labels (0 or 1) of shape `[n_samples]`, and 
`sample_weight` is the vector of sample weights.\n", "\n", "\n", "## Difference from the default scikit-learn interface\n", "`X` should be\\* a `pandas.DataFrame`, not a `numpy.array`. 
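

Schematically, the reason a `pandas.DataFrame` helps is that it carries column names, so an estimator can select its training features by name. A minimal sketch (the column names below are made up for illustration):

```python
import pandas

# a DataFrame carries column names, so training features
# can be selected by name (made-up columns, for illustration)
X = pandas.DataFrame({'FlightTime': [0.5, 0.7], 'p': [10.3, 8.9], 'pt': [1.1, 0.9]})
X_subset = X[['FlightTime', 'p']]  # the subset that features=['FlightTime', 'p'] would select
print(X_subset.shape)  # -> (2, 2)
```
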
\n", "Provided this, you'll be able to choose features used in training by setting e.g. `features=['FlightTime', 'p']` in constructor.\n", "\n", "\\* it works fine with `numpy.array` as well, but in this case all the features will be used." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Variables used in training" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [], "source": [ "variables = list(data.columns[:15])" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Sklearn\n", "wrapper for scikit-learn classifiers. In this example we use GradientBoosting with default settings" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "training complete\n" ] } ], "source": [ "from rep.estimators import SklearnClassifier\n", "from sklearn.ensemble import GradientBoostingClassifier\n", "# Using gradient boosting with default settings\n", "sk = SklearnClassifier(GradientBoostingClassifier(), features=variables)\n", "# Training classifier\n", "sk.fit(train_data, train_labels)\n", "print('training complete')" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "### Predicting probabilities, measuring the quality" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[ 0.99242263 0.00757737]\n", " [ 0.07570713 0.92429287]\n", " [ 0.9342327 0.0657673 ]\n", " ..., \n", " [ 0.95540457 0.04459543]\n", " [ 0.16007055 0.83992945]\n", " [ 0.99436947 0.00563053]]\n" ] } ], "source": [ "# predict probabilities for each class\n", "prob = sk.predict_proba(test_data)\n", "print prob" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false }, "outputs": [ { 
"name": "stdout", "output_type": "stream", "text": [ "ROC AUC 0.969997858413\n" ] } ], "source": [ "print 'ROC AUC', roc_auc_score(test_labels, prob[:, 1])" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "### Predictions of classes" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "array([0, 1, 0, ..., 0, 1, 0])" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sk.predict(test_data)" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
effect
feature_00.201533
feature_10.158487
feature_20.098564
feature_30.076836
feature_40.030222
feature_50.031973
feature_60.032771
feature_70.029889
feature_80.050288
feature_90.021392
feature_100.023092
feature_110.050503
feature_120.125505
feature_130.052451
feature_140.016493
\n", "
" ], "text/plain": [ " effect\n", "feature_0 0.201533\n", "feature_1 0.158487\n", "feature_2 0.098564\n", "feature_3 0.076836\n", "feature_4 0.030222\n", "feature_5 0.031973\n", "feature_6 0.032771\n", "feature_7 0.029889\n", "feature_8 0.050288\n", "feature_9 0.021392\n", "feature_10 0.023092\n", "feature_11 0.050503\n", "feature_12 0.125505\n", "feature_13 0.052451\n", "feature_14 0.016493" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sk.get_feature_importances()" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## TMVA" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", " TMVAClassifier wraps classifiers from TMVA (CERN library for machine learning)\n", "\n", " Parameters:\n", " -----------\n", " :param str method: algorithm method (default='kBDT')\n", " :param features: features used in training\n", " :type features: list[str] or None\n", " :param str factory_options: options, for example::\n", "\n", " \"!V:!Silent:Color:Transformations=I;D;P;G,D\"\n", "\n", " :param str sigmoid_function: function which is used to convert TMVA output to probabilities;\n", "\n", " * *identity* (use for svm, mlp) --- the same output, use this for methods returning class probabilities\n", "\n", " * *sigmoid* --- sigmoid transformation, use it if output varies in range [-infinity, +infinity]\n", "\n", " * *bdt* (for bdt algorithms output varies in range [-1, 1])\n", "\n", " * *sig_eff=0.4* --- for rectangular cut optimization methods,\n", " for instance, here 0.4 will be used as signal efficiency to evaluate MVA,\n", " (put any float number from [0, 1])\n", "\n", " :param dict method_parameters: estimator options, example: NTrees=100, BoostType='Grad'\n", "\n", " .. warning::\n", " TMVA doesn't support *staged_predict_proba()* and *feature_importances__*\n", "\n", " .. 
warning::\n", " TMVA doesn't support multiclassification, only two-class classification\n", "\n", " `TMVA guide `_\n", " \n" ] } ], "source": [ "from rep.estimators import TMVAClassifier\n", "print TMVAClassifier.__doc__" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "training complete\n" ] } ], "source": [ "tmva = TMVAClassifier(method='kBDT', NTrees=50, Shrinkage=0.05, features=variables)\n", "tmva.fit(train_data, train_labels)\n", "print('training complete')" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "### Predict probabilities and estimate quality" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[ 0.88198512 0.11801488]\n", " [ 0.22006905 0.77993095]\n", " [ 0.67851749 0.32148251]\n", " ..., \n", " [ 0.66461504 0.33538496]\n", " [ 0.42547775 0.57452225]\n", " [ 0.92236464 0.07763536]]\n" ] } ], "source": [ "# predict probabilities for each class\n", "prob = tmva.predict_proba(test_data)\n", "print prob" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ROC AUC 0.956336458008\n" ] } ], "source": [ "print 'ROC AUC', roc_auc_score(test_labels, prob[:, 1])" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "array([0, 1, 0, ..., 0, 1, 0])" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# predict labels\n", "tmva.predict(test_data)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## XGBoost" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", 
"output_type": "stream", "text": [ "Implements classification (and multiclassification) from XGBoost library. \n", " Base class for XGBoostClassifier and XGBoostRegressor. XGBoost tree booster is used.\n", "\n", " Parameters:\n", " -----------\n", " :param int n_estimators: the number of trees built.\n", " :param int nthreads: number of parallel threads used to run xgboost.\n", " :param num_feature: feature dimension used in boosting, set to maximum dimension of the feature\n", " (set automatically by xgboost, no need to be set by user).\n", " :type num_feature: None or int\n", " :param float gamma: minimum loss reduction required to make a further partition on a leaf node of the tree.\n", " The larger, the more conservative the algorithm will be.\n", " :type gamma: None or float\n", " :param float eta: step size shrinkage used in update to prevent overfitting.\n", " After each boosting step, we can directly get the weights of new features\n", " and eta actually shrinkage the feature weights to make the boosting process more conservative.\n", " :param int max_depth: maximum depth of a tree.\n", " :param float scale_pos_weight: ration of weights of the class 1 to the weights of the class 0.\n", " :param float min_child_weight: minimum sum of instance weight(hessian) needed in a child.\n", " If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight,\n", " then the building process will give up further partitioning.\n", "\n", " .. note:: weights are normalized so that mean=1 before fitting. 
Roughly min_child_weight is equal to the number of events.\n", " :param float subsample: subsample ratio of the training instance.\n", " Setting it to 0.5 means that XGBoost randomly collected half of the data instances to grow trees\n", " and this will prevent overfitting.\n", " :param float colsample: subsample ratio of columns when constructing each tree.\n", " :param float base_score: the initial prediction score of all instances, global bias.\n", " :param int random_state: random number seed.\n", " :param boot verbose: if 1, will print messages during training\n", " :param float missing: the number considered by xgboost as missing value.\n", " \n" ] } ], "source": [ "from rep.estimators import XGBoostClassifier\n", "print XGBoostClassifier.__doc__" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "training complete\n" ] } ], "source": [ "# XGBoost with default parameters\n", "xgb = XGBoostClassifier(features=variables)\n", "xgb.fit(train_data, train_labels)\n", "print('training complete')" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "### Predict probabilities and estimate quality" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ROC AUC: 0.975258121592\n" ] } ], "source": [ "prob = xgb.predict_proba(test_data)\n", "print 'ROC AUC:', roc_auc_score(test_labels, prob[:, 1])" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "### Predict labels" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "array([0, 1, 0, ..., 0, 1, 0])" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "xgb.predict(test_data)" ] }, { "cell_type": 
"code", "execution_count": 20, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
effect
feature_0656
feature_1800
feature_2740
feature_3608
feature_4378
feature_5480
feature_6462
feature_7522
feature_8638
feature_9556
feature_10540
feature_11598
feature_12772
feature_13592
feature_14486
\n", "
" ], "text/plain": [ " effect\n", "feature_0 656\n", "feature_1 800\n", "feature_2 740\n", "feature_3 608\n", "feature_4 378\n", "feature_5 480\n", "feature_6 462\n", "feature_7 522\n", "feature_8 638\n", "feature_9 556\n", "feature_10 540\n", "feature_11 598\n", "feature_12 772\n", "feature_13 592\n", "feature_14 486" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "xgb.get_feature_importances()" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Advantages of common interface" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As one can see above, all the classifiers implement the same interface, \n", "this simplifies work, simplifies comparison of different classifiers, \n", "but this is not the only profit. \n", "\n", "`Sklearn` provides different tools to combine different classifiers and transformers. \n", "One of this tools is `AdaBoost`, which is abstract metaclassifier built on the top of some other classifier (usually, decision dree). Also bagging is other frequently used ensembling meta-algorithm.\n", "\n", "Let's show that now you can run AdaBoost over classifiers from other libraries!
\n", "_(isn't boosting over neural network what you were dreaming of all your life?)_" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "## AdaBoost over XGBoost" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from sklearn.ensemble import AdaBoostClassifier" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "training complete!\n", "AUC 0.975709190087\n", "AUC 0.998466758443\n", "CPU times: user 34 s, sys: 266 ms, total: 34.3 s\n", "Wall time: 34.4 s\n" ] } ], "source": [ "%%time\n", "base_xgb = XGBoostClassifier(n_estimators=20)\n", "ada_xgb = SklearnClassifier(AdaBoostClassifier(base_estimator=base_xgb, n_estimators=5))\n", "ada_xgb.fit(train_data[variables], train_labels)\n", "print('training complete!')\n", "\n", "# predict probabilities for each class\n", "prob = ada_xgb.predict_proba(test_data[variables])\n", "print 'AUC', roc_auc_score(test_labels, prob[:, 1])\n", "\n", "# predict probabilities for each class\n", "prob = ada_xgb.predict_proba(train_data[variables])\n", "print 'AUC', roc_auc_score(train_labels, prob[:, 1])" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "## AdaBoost over TMVA classifier\n", "\n", "the following code shows that you can do the same with i.e. 
TMVA; uncomment it to try." ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# base_tmva = TMVAClassifier(method='kBDT', NTrees=20)\n", "# ada_tmva = SklearnClassifier(AdaBoostClassifier(base_estimator=base_tmva, n_estimators=5), features=variables)\n", "# ada_tmva.fit(train_data, train_labels)\n", "# print('training complete')\n", "\n", "# prob = ada_tmva.predict_proba(test_data)\n", "# print 'AUC', roc_auc_score(test_labels, prob[:, 1])" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Other advantages of the common interface\n", "There are many things you can do with classifiers now: \n", "* cloning\n", "* getting / setting parameters as dictionaries \n", "* automatic hyperparameter optimization \n", "* building pipelines (`sklearn.pipeline`)\n", "* hierarchical training, training on subsets\n", "* passing over the internet / training classifiers on other machines\n", "\n", "And you can replace classifiers at any moment." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "## Exercises\n", "\n", "Exercise 1. Play with the parameters of each type of classifier.\n", "\n", "Exercise 2. Add a weight column and train the models with weights." ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.11" } }, "nbformat": 4, "nbformat_minor": 0 }