{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Booking DNN and Training Methods\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [ { "data": { "application/javascript": [ "\n", "require(['notebook'],\n", " function() {\n", " IPython.CodeCell.config_defaults.highlight_modes['magic_text/x-c++src'] = {'reg':[/^%%cpp/]};\n", " console.log(\"JupyROOT - %%cpp magic configured\");\n", " }\n", ");\n" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Welcome to JupyROOT 6.07/07\n" ] } ], "source": [ "import ROOT\n", "from ROOT import TFile, TMVA, TCut" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Enable JS visualization" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "%jsmva on" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "## Declarations, building training and testing trees" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For more details, please see this notebook." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
DataSetInfo
Dataset: tmva_class_exampleAdded class \"Signal\"
Add Tree TreeS of type Signal with 6000 events
DataSetInfo
Dataset: tmva_class_exampleAdded class \"Background\"
Add Tree TreeB of type Background with 6000 events
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "outputFile = TFile( \"TMVA.root\", 'RECREATE' )\n", "\n", "TMVA.Tools.Instance()\n", "\n", "factory = TMVA.Factory(JobName=\"TMVAClassification\", TargetFile=outputFile,\n", " V=False, Color=True, DrawProgressBar=True, Transformations=[\"I\", \"D\", \"P\", \"G\",\"D\"],\n", " AnalysisType=\"Classification\")\n", "\n", "dataset = \"tmva_class_example\"\n", "loader = TMVA.DataLoader(dataset)\n", "\n", "loader.AddVariable( \"myvar1 := var1+var2\", 'F' )\n", "loader.AddVariable( \"myvar2 := var1-var2\", \"Expression 2\", 'F' )\n", "loader.AddVariable( \"var3\", \"Variable 3\", 'F' )\n", "loader.AddVariable( \"var4\", \"Variable 4\", 'F' )\n", "\n", "loader.AddSpectator( \"spec1:=var1*2\", \"Spectator 1\", 'F' )\n", "loader.AddSpectator( \"spec2:=var1*3\", \"Spectator 2\", 'F' )\n", "\n", "if ROOT.gSystem.AccessPathName( \"./tmva_class_example.root\" ) != 0: \n", " ROOT.gSystem.Exec( \"wget https://root.cern.ch/files/tmva_class_example.root\")\n", " \n", "input = TFile.Open( \"./tmva_class_example.root\" )\n", "\n", "# Get the signal and background trees for training\n", "signal = input.Get( \"TreeS\" )\n", "background = input.Get( \"TreeB\" )\n", " \n", "# Global event weights (see below for setting event-wise weights)\n", "signalWeight = 1.0\n", "backgroundWeight = 1.0\n", "\n", "mycuts = TCut(\"\")\n", "mycutb = TCut(\"\")\n", "\n", "loader.AddSignalTree(signal, signalWeight)\n", "loader.AddBackgroundTree(background, backgroundWeight)\n", "loader.fSignalWeight = signalWeight\n", "loader.fBackgroundWeight = backgroundWeight\n", "loader.fTreeS = signal\n", "loader.fTreeB = background\n", "\n", "loader.PrepareTrainingAndTestTree(SigCut=mycuts, BkgCut=mycutb,\n", " nTrain_Signal=0, nTrain_Background=0, SplitMode=\"Random\", NormMode=\"NumEvents\", V=False)" ] }, { "cell_type": 
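"markdown", "metadata": {}, "source": [ "As a quick sanity check (a sketch, not part of the original workflow; it only assumes the `signal` and `background` trees loaded above), the entry counts of the input trees can be printed before booking any method:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Optional sanity check (sketch): confirm both input trees were loaded\n", "print(\"Signal entries:     %d\" % signal.GetEntries())\n", "print(\"Background entries: %d\" % background.GetEntries())" ] }, { "cell_type": 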
"markdown", "metadata": {}, "source": [ "## Booking methods" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Booking a method works as shown in the notebook referred to above. The new feature introduced here is how the training strategy can be passed when booking a DNN: instead of one long strategy string, we can create a list of dictionaries, where each dictionary holds the options for the corresponding training phase." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false, "scrolled": false }, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" }, { "data": { "text/html": [ "
Factory Booking method: Cuts
Use optimization method: \"Monte Carlo\"
Use efficiency computation method: \"Event Selection\"
Use \"FSmart\" cuts for variable: 'myvar1'
Use \"FSmart\" cuts for variable: 'myvar2'
Use \"FSmart\" cuts for variable: 'var3'
Use \"FSmart\" cuts for variable: 'var4'
Factory Booking method: SVM
SVM
Dataset: tmva_class_exampleCreate Transformation \"Norm\" with events from all classes.
Norm Transformation, Variable selection :
Input : variable 'myvar1' <---> Output : variable 'myvar1'
Input : variable 'myvar2' <---> Output : variable 'myvar2'
Input : variable 'var3' <---> Output : variable 'var3'
Input : variable 'var4' <---> Output : variable 'var4'
Factory Booking method: MLP
MLP
Dataset: tmva_class_exampleCreate Transformation \"N\" with events from all classes.
Norm Transformation, Variable selection :
Input : variable 'myvar1' <---> Output : variable 'myvar1'
Input : variable 'myvar2' <---> Output : variable 'myvar2'
Input : variable 'var3' <---> Output : variable 'var3'
Input : variable 'var4' <---> Output : variable 'var4'
MLP Building Network.
Initializing weights
Factory Booking method: LD
DataSetFactory
Dataset: tmva_class_exampleNumber of events in input trees
Number of training and testing events
Signaltraining events3000
testing events3000
training and testing events6000
Backgroundtraining events3000
testing events3000
training and testing events6000
DataSetInfo Correlation matrix (Signal)
DataSetInfo Correlation matrix (Background)
DataSetFactory
Dataset: tmva_class_example
Factory Booking method: DNN
DNN
Dataset: tmva_class_exampleCreate Transformation \"Normalize\" with events from all classes.
Norm Transformation, Variable selection :
Input : variable 'myvar1' <---> Output : variable 'myvar1'
Input : variable 'myvar2' <---> Output : variable 'myvar2'
Input : variable 'var3' <---> Output : variable 'var3'
Input : variable 'var4' <---> Output : variable 'var4'
Factory Booking method: Likelihood
Factory Booking method: BDT
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "factory.BookMethod( DataLoader=loader, Method=TMVA.Types.kCuts, MethodTitle=\"Cuts\",\n", " H=False, V=False, FitMethod=\"MC\", EffSel=True, SampleSize=200000, VarProp=\"FSmart\" )\n", "\n", "factory.BookMethod( DataLoader=loader, Method=TMVA.Types.kSVM, MethodTitle=\"SVM\", \n", " Gamma=0.25, Tol=0.001, VarTransform=\"Norm\" )\n", "\n", "factory.BookMethod( DataLoader=loader, Method=TMVA.Types.kMLP, MethodTitle=\"MLP\", \n", " H=False, V=False, NeuronType=\"tanh\", VarTransform=\"N\", NCycles=600, HiddenLayers=\"N+5\",\n", " TestRate=5, UseRegulator=False )\n", "\n", "factory.BookMethod( DataLoader=loader, Method=TMVA.Types.kLD, MethodTitle=\"LD\", \n", " H=False, V=False, VarTransform=\"None\", CreateMVAPdfs=True, PDFInterpolMVAPdf=\"Spline2\",\n", " NbinsMVAPdf=50, NsmoothMVAPdf=10 )\n", "\n", "trainingStrategy = [{\n", " \"LearningRate\": 1e-1,\n", " \"Momentum\": 0.0,\n", " \"Repetitions\": 1,\n", " \"ConvergenceSteps\": 300,\n", " \"BatchSize\": 20,\n", " \"TestRepetitions\": 15,\n", " \"WeightDecay\": 0.001,\n", " \"Regularization\": \"NONE\",\n", " \"DropConfig\": \"0.0+0.5+0.5+0.5\",\n", " \"DropRepetitions\": 1,\n", " \"Multithreading\": True\n", " \n", " }, {\n", " \"LearningRate\": 1e-2,\n", " \"Momentum\": 0.5,\n", " \"Repetitions\": 1,\n", " \"ConvergenceSteps\": 300,\n", " \"BatchSize\": 30,\n", " \"TestRepetitions\": 7,\n", " \"WeightDecay\": 0.001,\n", " \"Regularization\": \"L2\",\n", " \"DropConfig\": \"0.0+0.1+0.1+0.1\",\n", " \"DropRepetitions\": 1,\n", " \"Multithreading\": True\n", " \n", " }, {\n", " \"LearningRate\": 1e-2,\n", " \"Momentum\": 0.3,\n", " \"Repetitions\": 1,\n", " \"ConvergenceSteps\": 300,\n", " \"BatchSize\": 40,\n", " \"TestRepetitions\": 7,\n", " \"WeightDecay\": 0.001,\n", " \"Regularization\": \"L2\",\n", " \"Multithreading\": 
True\n", "    \n", "    },{\n", "        \"LearningRate\": 1e-3,\n", "        \"Momentum\": 0.1,\n", "        \"Repetitions\": 1,\n", "        \"ConvergenceSteps\": 200,\n", "        \"BatchSize\": 70,\n", "        \"TestRepetitions\": 7,\n", "        \"WeightDecay\": 0.001,\n", "        \"Regularization\": \"NONE\",\n", "        \"Multithreading\": True\n", "        \n", "}]\n", "\n", "factory.BookMethod(DataLoader=loader, Method=TMVA.Types.kDNN, MethodTitle=\"DNN\", \n", "    H = False, V=False, VarTransform=\"Normalize\", ErrorStrategy=\"CROSSENTROPY\",\n", "    Layout=[\"TANH|100\", \"TANH|50\", \"TANH|10\", \"LINEAR\"],\n", "    TrainingStrategy=trainingStrategy)\n", "\n", "factory.BookMethod(loader, TMVA.Types.kLikelihood, \"Likelihood\", \n", "    \"NSmoothSig[0]=20:NSmoothBkg[0]=20:NSmoothBkg[1]=10:NSmooth=1:NAvEvtPerBin=50\",\n", "    H=True, V=False,TransformOutput=True,PDFInterpol=\"Spline2\")\n", "\n", "factory.BookMethod(DataLoader= loader, Method=TMVA.Types.kBDT, MethodTitle=\"BDT\",\n", "    H=False,V=False,NTrees=850,MinNodeSize=\"2.5%\",MaxDepth=3,BoostType=\"AdaBoost\", AdaBoostBeta=0.5,\n", "    UseBaggedBoost=True,BaggedSampleFraction=0.5, SeparationType=\"GiniIndex\", nCuts=20 )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Train Methods" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When you use the jsmva magic, the original C++ version of Factory::TrainAllMethods is replaced by a new training method that produces notebook-compatible output during training, so we can follow the process (progress bar, error plot). For some methods (MLP, DNN, BDT) a tracer plot is created: test and training error versus epoch for MLP and DNN, and error fraction and boost weight versus tree number for BDT. Some methods do not support interactive tracing; for these, only a short text message is printed, so we know which method TrainAllMethods is currently training.\n", "\n", "For methods whose training can be traced interactively, a stop button is available that interrupts the training. It stops only the training of the current method; it does not abort TrainAllMethods completely." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false, "scrolled": false }, "outputs": [ { "data": { "text/html": [ "

Dataset: tmva_class_example

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

Train method: Cuts

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " \n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", "
\n", "
\n", "
0%
\n", "
\n", "
\n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": 
"display_data" }, { "data": { "text/html": [ "

Train method: SVM

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " \n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", "
\n", "
\n", "
0%
\n", "
\n", "
\n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

Train method: MLP

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " \n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", "
\n", "
\n", "
0%
\n", "
\n", "
\n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", "
\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

Train method: LD

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "Training..." ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "End" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

Train method: DNN

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " \n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", "
\n", "
\n", "
0%
\n", "
\n", "
\n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", "
\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

Train method: Likelihood

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "Training..." ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "End" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

Train method: BDT

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " \n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", "
\n", "
\n", "
0%
\n", "
\n", "
\n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", "
\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
FitterBase Sampling, please be patient ...
Elapsed time : 7.18 sec
Cuts Cut values for requested signal efficiency: 0.1
Corresponding background efficiency : 0.0276667
Transformation applied to input variables : None
Dataset: 0-8.07609 < myvar1 <= 1e+30
-1e+30 < myvar2 <= -1.39928
-4.38911 < var3 <= 1e+30
-0.326104 < var4 <= 1e+30
Cuts Cut values for requested signal efficiency: 0.2
Corresponding background efficiency : 0.0703333
Transformation applied to input variables : None
Dataset: 0-1.5167 < myvar1 <= 1e+30
-1e+30 < myvar2 <= -0.779147
-1.11898 < var3 <= 1e+30
0.0215451 < var4 <= 1e+30
Cuts Cut values for requested signal efficiency: 0.3
Corresponding background efficiency : 0.112333
Transformation applied to input variables : None
Dataset: 0-5.98314 < myvar1 <= 1e+30
-1e+30 < myvar2 <= -0.524807
-3.02722 < var3 <= 1e+30
-0.447307 < var4 <= 1e+30
Cuts Cut values for requested signal efficiency: 0.4
Corresponding background efficiency : 0.162667
Transformation applied to input variables : None
Dataset: 0-2.69242 < myvar1 <= 1e+30
-1e+30 < myvar2 <= -0.184483
-1.36369 < var3 <= 1e+30
-0.365154 < var4 <= 1e+30
Cuts Cut values for requested signal efficiency: 0.5
Corresponding background efficiency : 0.218667
Transformation applied to input variables : None
Dataset: 0-7.85356 < myvar1 <= 1e+30
-1e+30 < myvar2 <= 0.123474
-2.65367 < var3 <= 1e+30
-0.448008 < var4 <= 1e+30
Cuts Cut values for requested signal efficiency: 0.6
Corresponding background efficiency : 0.274333
Transformation applied to input variables : None
Dataset: 0-3.97177 < myvar1 <= 1e+30
-1e+30 < myvar2 <= 0.40185
-5.12732 < var3 <= 1e+30
-0.557958 < var4 <= 1e+30
Cuts Cut values for requested signal efficiency: 0.7
Corresponding background efficiency : 0.34
Transformation applied to input variables : None
Dataset: 0-3.65468 < myvar1 <= 1e+30
-1e+30 < myvar2 <= 0.704599
-4.09057 < var3 <= 1e+30
-0.796842 < var4 <= 1e+30
Cuts Cut values for requested signal efficiency: 0.8
Corresponding background efficiency : 0.421
Transformation applied to input variables : None
Dataset: 0-7.13956 < myvar1 <= 1e+30
-1e+30 < myvar2 <= 1.07747
-4.51161 < var3 <= 1e+30
-0.984625 < var4 <= 1e+30
Cuts Cut values for requested signal efficiency: 0.9
Corresponding background efficiency : 0.507333
Transformation applied to input variables : None
Dataset: 0-7.23836 < myvar1 <= 1e+30
-1e+30 < myvar2 <= 2.11518
-4.85289 < var3 <= 1e+30
-0.765707 < var4 <= 1e+30
Elapsed time for training with 6000 events : 7.19 sec
Cuts
Dataset: tmva_class_exampleEvaluation of Cuts on training sample (6000 events)
Elapsed time for evaluation of 6000 events : 0.00187 sec
Creating xml weight file: tmva_class_example/weights/TMVAClassification_Cuts.weights.xml
Creating standalone class: tmva_class_example/weights/TMVAClassification_Cuts.class.C
Write monitoring histograms to file: TMVA.root:/tmva_class_example/Method_Cuts/Cuts
TFHandler_SVM
VariableMeanRMSMinMax
myvar10.0839890.36407-1.00001.0000
myvar20.00947780.27696-1.00001.0000
var30.0802790.36720-1.00001.0000
var40.129860.39603-1.00001.0000
Building SVM Working Set...with 6000 event instances
Elapsed time for Working Set build : 1.22 sec
Sorry, no computing time forecast available for SVM, please wait ...
Elapsed time : 1.88 sec
Elapsed time for training with 6000 events : 3.12 sec
SVM
Dataset: tmva_class_exampleEvaluation of SVM on training sample (6000 events)
Elapsed time for evaluation of 6000 events : 1 sec
Creating xml weight file: tmva_class_example/weights/TMVAClassification_SVM.weights.xml
Creating standalone class: tmva_class_example/weights/TMVAClassification_SVM.class.C
TFHandler_MLP
VariableMeanRMSMinMax
myvar10.0839890.36407-1.00001.0000
myvar20.00947780.27696-1.00001.0000
var30.0802790.36720-1.00001.0000
var40.129860.39603-1.00001.0000
Training Network
Elapsed time for training with 6000 events : 2.2 sec
MLP
Dataset: tmva_class_exampleEvaluation of MLP on training sample (6000 events)
Elapsed time for evaluation of 6000 events : 0.0112 sec
Creating xml weight file: tmva_class_example/weights/TMVAClassification_MLP.weights.xml
Creating standalone class: tmva_class_example/weights/TMVAClassification_MLP.class.C
Write special histos to file: TMVA.root:/tmva_class_example/Method_MLP/MLP
LD Results for LD coefficients:
Variable:Coefficient:
myvar1:-0.359
myvar2:-0.109
var3:-0.211
var4:+0.722
(offset):-0.054
Elapsed time for training with 6000 events : 0.00279 sec
LD
Dataset: tmva_class_exampleEvaluation of LD on training sample (6000 events)
Elapsed time for evaluation of 6000 events : 0.00224 sec
Dataset: tmva_class_example Separation from histogram (PDF): 0.452 (0.000)
Evaluation of LD on training sample
Creating xml weight file: tmva_class_example/weights/TMVAClassification_LD.weights.xml
Creating standalone class: tmva_class_example/weights/TMVAClassification_LD.class.C
TFHandler_DNN
VariableMeanRMSMinMax
myvar10.0839890.36407-1.00001.0000
myvar20.00947780.27696-1.00001.0000
var30.0802790.36720-1.00001.0000
var40.129860.39603-1.00001.0000
TFHandler_DNN
VariableMeanRMSMinMax
myvar10.0839890.36407-1.00001.0000
myvar20.00947780.27696-1.00001.0000
var30.0802790.36720-1.00001.0000
var40.129860.39603-1.00001.0000
TFHandler_DNN
VariableMeanRMSMinMax
myvar10.0751130.36776-1.10741.0251
myvar20.00755950.27349-0.906631.0008
var30.0702280.37106-1.06491.0602
var40.120900.39854-1.18711.0199
Add Layer with 100 nodes.
Add Layer with 50 nodes.
Add Layer with 10 nodes.
Add Layer with 1 nodes.
Total number of Synapses = 6010
Training with learning rate = 0.1, momentum = 0, repetitions = 1
Drop configuration
drop repetitions = 1
Layer 0 = 0
Layer 1 = 0.5
Layer 2 = 0.5
Layer 3 = 0.5
Training with learning rate = 0.01, momentum = 0.5, repetitions = 1
Drop configuration
drop repetitions = 1
Layer 0 = 0
Layer 1 = 0.1
Layer 2 = 0.1
Layer 3 = 0.1
Training with learning rate = 0.01, momentum = 0.3, repetitions = 1
Training with learning rate = 0.001, momentum = 0.1, repetitions = 1
Elapsed time for training with 6000 events : 5.08 sec
DNN
Dataset: tmva_class_exampleEvaluation of DNN on training sample (6000 events)
Elapsed time for evaluation of 6000 events : 0.21 sec
Creating xml weight file: tmva_class_example/weights/TMVAClassification_DNN.weights.xml
Creating standalone class: tmva_class_example/weights/TMVAClassification_DNN.class.C
Write monitoring histograms to file: TMVA.root:/tmva_class_example/Method_DNN/DNN
================================================================
Dataset: Likelihood
--- Short description:
The maximum-likelihood classifier models the data with probability
density functions (PDF) reproducing the signal and background
distributions of the input variables. Correlations among the
variables are ignored.
--- Performance optimisation:
Required for good performance are decorrelated input variables
(PCA transformation via the option \"VarTransform=Decorrelate\"
may be tried). Irreducible non-linear correlations may be reduced
by precombining strongly correlated input variables, or by simply
removing one of the variables.
--- Performance tuning via configuration options:
High fidelity PDF estimates are mandatory, i.e., sufficient training
statistics is required to populate the tails of the distributions
It would be a surprise if the default Spline or KDE kernel parameters
provide a satisfying fit to the data. The user is advised to properly
tune the events per bin and smooth options in the spline cases
individually per variable. If the KDE kernel is used, the adaptive
Gaussian kernel may lead to artefacts, so please always also try
the non-adaptive one.
All tuning parameters must be adjusted individually for each input
variable!
================================================================
Filling reference histograms
Building PDF out of reference histograms
Elapsed time for training with 6000 events : 0.0403 sec
Likelihood
Dataset: tmva_class_exampleEvaluation of Likelihood on training sample (6000 events)
Elapsed time for evaluation of 6000 events : 0.009 sec
Creating xml weight file: tmva_class_example/weights/TMVAClassification_Likelihood.weights.xml
Creating standalone class: tmva_class_example/weights/TMVAClassification_Likelihood.class.C
Write monitoring histograms to file: TMVA.root:/tmva_class_example/Method_Likelihood/Likelihood
BDT #events: (reweighted) sig: 3000 bkg: 3000
#events: (unweighted) sig: 3000 bkg: 3000
Training 850 Decision Trees ... patience please
Elapsed time for training with 6000 events : 2.28 sec
BDT
Dataset: tmva_class_exampleEvaluation of BDT on training sample (6000 events)
Elapsed time for evaluation of 6000 events : 0.673 sec
Creating xml weight file: tmva_class_example/weights/TMVAClassification_BDT.weights.xml
Creating standalone class: tmva_class_example/weights/TMVAClassification_BDT.class.C
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "factory.TrainAllMethods()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Close the factory's output file" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "outputFile.Close()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python [Root]", "language": "python", "name": "Python [Root]" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.11" } }, "nbformat": 4, "nbformat_minor": 0 }