{ "metadata": { "name": "jobman_integration" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Random hyperparameter search using Jobman" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prerequisites\n", "\n", "For this tutorial we assume the reader is familiar with Jobman and its `jobdispatch` helper script. For more information, see the [jobman documentation](http://deeplearning.net/software/jobman/).\n", "\n", "## Problem overview\n", "\n", "Suppose you have a yaml file describing an experiment for which you'd like to do hyperparameter optimization by random search:\n", "\n", "```\n", "!obj:pylearn2.train.Train {\n", " dataset: &train !obj:pylearn2.datasets.mnist.MNIST {\n", " which_set: 'train',\n", " start: 0,\n", " stop: 50000\n", " },\n", " model: !obj:pylearn2.models.mlp.MLP {\n", " layers: [\n", " !obj:pylearn2.models.mlp.Sigmoid {\n", " layer_name: 'h0',\n", " dim: 500,\n", " sparse_init: 15,\n", " }, !obj:pylearn2.models.mlp.Softmax {\n", " layer_name: 'y',\n", " n_classes: 10,\n", " irange: 0.\n", " }\n", " ],\n", " nvis: 784,\n", " },\n", " algorithm: !obj:pylearn2.training_algorithms.sgd.SGD {\n", " batch_size: 100,\n", " learning_rate: 1e-3,\n", " learning_rule: !obj:pylearn2.training_algorithms.learning_rule.Momentum {\n", " init_momentum: 0.5,\n", " },\n", " monitoring_batches: 10,\n", " monitoring_dataset : *train,\n", " termination_criterion: !obj:pylearn2.termination_criteria.EpochCounter {\n", " max_epochs: 1\n", " },\n", " },\n", " save_path: \"mlp.pkl\",\n", " save_freq : 5\n", "}\n", "```\n", "\n", "Here's how you can do it using Pylearn2 and Jobman:\n", "\n", "* Consider your yaml file as a string and adapt it by replacing hyperparameter values with statements of the form `%(hyperparameter_name)x`, just like you'd do for string substitution\n", "* Write an extraction method which takes a `Train` object as input and returns results extracted from it as output\n", "* Write a Jobman-compatible experiment method which will do string substitution on the yaml string using a dictionary of your hyperparameters, instantiate a `Train` object, train it and extract results by calling the extraction \n", " method on the `Train` object\n", "* Write a separate Jobman-compatible configuration file containing your yaml file in a string representation, your hyperparameters and the name of a method which will be used to extract results\n", "* Call the `jobman` executable with the experiment method and the configuration file\n", "\n", "Let's now break it down a little." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Adapting an existing yaml file\n", "\n", "Very little has to be done: just replace all desired hyperparameter values by a string substitution statement.\n", "\n", "For example, if you want to optimize the learning rate and the momentum coefficient, here's how your yaml file would look like:\n", "\n", "```\n", "!obj:pylearn2.train.Train {\n", " dataset: &train !obj:pylearn2.datasets.mnist.MNIST {\n", " which_set: 'train',\n", " start: 0,\n", " stop: 50000\n", " },\n", " model: !obj:pylearn2.models.mlp.MLP {\n", " layers: [\n", " !obj:pylearn2.models.mlp.Sigmoid {\n", " layer_name: 'h0',\n", " dim: 500,\n", " sparse_init: 15,\n", " }, !obj:pylearn2.models.mlp.Softmax {\n", " layer_name: 'y',\n", " n_classes: 10,\n", " irange: 0.\n", " }\n", " ],\n", " nvis: 784,\n", " },\n", " algorithm: !obj:pylearn2.training_algorithms.sgd.SGD {\n", " batch_size: 100,\n", " learning_rate: %(learning_rate)f,\n", " learning_rule: !obj:pylearn2.training_algorithms.learning_rule.Momentum {\n", " init_momentum: %(init_momentum)f,\n", " },\n", " monitoring_batches: 10,\n", " monitoring_dataset : *train,\n", " termination_criterion: !obj:pylearn2.termination_criteria.EpochCounter {\n", " max_epochs: 1\n", " },\n", " },\n", " save_path: \"mlp.pkl\",\n", " save_freq : 5\n", "}\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Writing the experiment method\n", "\n", "Luckily for you, there's already an experiment method built in Pylearn2: `pylearn2.scripts.jobman.experiment.train_experiment`.\n", "\n", "Like all methods compatible with Jobman, it expects two arguments (`state` and `channel`) and returns `channel.COMPLETE` when done. Furthermore, it expects `state` to contain at least:\n", "\n", "* a `yaml_template` key pointing to the (not yet complete) yaml string describing the experiment, \n", "* a `hyper_parameters` key pointing to a `DD` object containing all variable hyperparameters for the experiment, and\n", "* an `extract_results` key pointing to a string\n", "\n", "Here's the method's implementation:\n", "\n", " def train_experiment(state, channel):\n", " \"\"\"\n", " Train a model specified in state, and extract required results.\n", " \n", " This function builds a YAML string from ``state.yaml_template``, taking\n", " the values of hyper-parameters from ``state.hyper_parameters``, creates\n", " the corresponding object and trains it (like train.py), then run the\n", " function in ``state.extract_results`` on it, and store the returned values\n", " into ``state.results``.\n", " \n", " To know how to use this function, you can check the example in tester.py\n", " (in the same directory).\n", " \"\"\"\n", " yaml_template = state.yaml_template\n", " \n", " # Convert nested DD into nested ydict.\n", " hyper_parameters = expand(flatten(state.hyper_parameters), dict_type=ydict)\n", " \n", " # This will be the complete yaml string that should be executed\n", " final_yaml_str = yaml_template % hyper_parameters\n", " \n", " # Instantiate an object from YAML string\n", " train_obj = pylearn2.config.yaml_parse.load(final_yaml_str)\n", " \n", " try:\n", " iter(train_obj)\n", " iterable = True\n", " except TypeError:\n", " iterable = False\n", " if iterable:\n", " raise NotImplementedError(\n", " ('Current implementation does not support running multiple '\n", " 'models in one yaml string. 
Please change the yaml template '\n", " 'and parameters to contain only one single model.'))\n", " else:\n", " # print \"Executing the model.\"\n", " train_obj.main_loop()\n", " # This line will call a function defined by the user and pass train_obj\n", " # to it.\n", " state.results = jobman.tools.resolve(state.extract_results)(train_obj)\n", " return channel.COMPLETE\n", "\n", "It simply builds a dictionary out of `state.hyper_parameters` does string substitution on `state.yaml_template` with it.\n", "\n", "It then instantiates the `Train` object as described in the yaml string and calls its `main_loop` method.\n", "\n", "Finally, when the method returns, it calls the method referenced in the `state.extract_results` string by passing it the `Train` object as argument. This method is responsible to extract any relevant results from the `Train` object and returning them, either as is or in a `DD` object. The return value is stored in `state.results`.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Writing the extraction method\n", "\n", "Your extraction method should accept a `Train` object instance and return either a single value (`float`, `int`, `str`, etc.) or a `DD` object containing your values.\n", "\n", "For the purpose of this tutorial, let's write a simple method which extracts the misclassification rate and the NLL from the model's monitor:\n", "\n", " def results_extractor(train_obj):\n", " channels = train_obj.model.monitor.channels\n", " train_y_misclass = channels['y_misclass'].val_record[-1]\n", " train_y_nll = channels['y_nll'].val_record[-1]\n", " \n", " return DD(train_y_misclass=train_y_misclass,\n", " train_y_nll=train_y_nll)\n", "\n", "Here we extract misclassification rate and NLL values at the last training epoch from their respective channels of the model's monitor and return a `DD` object containing those values." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Writing the configuration file\n", "\n", "Your configuration file should contain\n", "\n", "* `yaml_template`: a yaml string representing your experiment\n", "* `hyper_parameters.[name]`: the value of the `[name]` hyperparameter. You must have at least one such item, but you can have as many as you want.\n", "* `extract_results`: a string of the `module.method` form representing the result extraction method which is to be used\n", "\n", "Here's how a configuration file could look for our experiment:\n", "\n", " yaml_template:=@__builtin__.open('mlp.yaml').read()\n", " \n", " hyper_parameters.learning_rate:=@utils.log_uniform(1e-5, 1e-1)\n", " hyper_parameters.init_momentum:=@utils.log_uniform(0.5, 1.0)\n", "\n", " extract_results = \"sheldon.code.pylearn2.scripts.jobman.extraction.trivial_extractor\"\n", "\n", "Notice how we're using the key:=@method statement. This serves two purposes:\n", "\n", "1. We don't have to copy the yaml file to the configuration file as a long, hard to edit string.\n", "2. 
We don't have to hard-code hyperparameter values, which means every time `jobman` is called with this configuration file, it'll get different hyperparameters.\n", "\n", "For reference, here's `utils.log_uniform`'s implementation:\n", "\n", " def log_uniform(low, high):\n", " \"\"\"\n", " Generates a number that's uniformly distributed in the log-space between\n", " `low` and `high`\n", " \n", " Parameters\n", " ----------\n", " low : float\n", " Lower bound of the randomly generated number\n", " high : float\n", " Upper bound of the randomly generated number\n", " \n", " Returns\n", " -------\n", " rval : float\n", " Random number uniformly distributed in the log-space specified by `low`\n", " and `high`\n", " \"\"\"\n", " log_low = numpy.log(low)\n", " log_high = numpy.log(high)\n", " \n", " log_rval = numpy.random.uniform(log_low, log_high)\n", " rval = float(numpy.exp(log_rval))\n", " \n", " return rval\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Running the whole thing\n", "\n", "Here's how you would call `jobman` to train your model:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "!jobman cmdline -g numbered pylearn2.scripts.jobman.experiment.train_experiment workid_prefix=jobman_demo jobman_demo/mlp.conf" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "/opt/lisa/os/epd-7.1.2/lib/python2.7/site-packages/scikits/__init__.py:1: UserWarning: Module jobman was already imported from /data/lisa/exp/dumouliv/jobman/__init__.pyc, but /opt/lisa/os/lib64/python2.7/site-packages is being added to sys.path\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "The working directory is: /data/lisa/exp/dumouliv/Pylearn2/pylearn2/scripts/tutorials/jobman1\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "/data/lisa/exp/dumouliv/Pylearn2/pylearn2/models/mlp.py:44: UserWarning: MLP changing the recursion limit.\r\n", " warnings.warn(\"MLP changing the recursion limit.\")\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "/data/lisa/exp/dumouliv/Pylearn2/pylearn2/space/__init__.py:272: UserWarning: It looks like the subclass of Space does not call the superclass __init__ method. Currently this is a warning. It will become an error on or after 2014-06-17.\r\n", " \"subclass of Space does not call the superclass __init__ \"\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "Parameter and initial learning rate summary:\r\n", "\th0_W: 0.000205\r\n", "\th0_b: 0.000205\r\n", "\tsoftmax_b: 0.000205\r\n", "\tsoftmax_W: 0.000205\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "Compiling sgd_update...\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "Compiling sgd_update done. Time elapsed: 3.680988 seconds\r\n", "compiling begin_record_entry...\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "compiling begin_record_entry done. 
Time elapsed: 0.094591 seconds\r\n", "Monitored channels: \r\n", "\th0_col_norms_max\r\n", "\th0_col_norms_mean\r\n", "\th0_col_norms_min\r\n", "\th0_max_x_max_u\r\n", "\th0_max_x_mean_u\r\n", "\th0_max_x_min_u\r\n", "\th0_mean_x_max_u\r\n", "\th0_mean_x_mean_u\r\n", "\th0_mean_x_min_u\r\n", "\th0_min_x_max_u\r\n", "\th0_min_x_mean_u\r\n", "\th0_min_x_min_u\r\n", "\th0_range_x_max_u\r\n", "\th0_range_x_mean_u\r\n", "\th0_range_x_min_u\r\n", "\th0_row_norms_max\r\n", "\th0_row_norms_mean\r\n", "\th0_row_norms_min\r\n", "\tlearning_rate\r\n", "\tmomentum\r\n", "\tmonitor_seconds_per_epoch\r\n", "\tobjective\r\n", "\ty_col_norms_max\r\n", "\ty_col_norms_mean\r\n", "\ty_col_norms_min\r\n", "\ty_max_max_class\r\n", "\ty_mean_max_class\r\n", "\ty_min_max_class\r\n", "\ty_misclass\r\n", "\ty_nll\r\n", "\ty_row_norms_max\r\n", "\ty_row_norms_mean\r\n", "\ty_row_norms_min\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "Compiling accum...\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "graph size: 116\r\n", "Compiling accum done. Time elapsed: 1.706080 seconds\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "Monitoring step:\r\n", "\tEpochs seen: 0\r\n", "\tBatches seen: 0\r\n", "\tExamples seen: 0\r\n", "\th0_col_norms_max: 6.23503405999\r\n", "\th0_col_norms_mean: 3.82355643971\r\n", "\th0_col_norms_min: 2.06193996111\r\n", "\th0_max_x_max_u: 0.999166448331\r\n", "\th0_max_x_mean_u: 0.835546521642\r\n", "\th0_max_x_min_u: 0.484522687851\r\n", "\th0_mean_x_max_u: 0.898498338615\r\n", "\th0_mean_x_mean_u: 0.477264282788\r\n", "\th0_mean_x_min_u: 0.14077357946\r\n", "\th0_min_x_max_u: 0.502200790533\r\n", "\th0_min_x_mean_u: 0.13427006672\r\n", "\th0_min_x_min_u: 0.000345345510862\r\n", "\th0_range_x_max_u: 0.982607199247\r\n", "\th0_range_x_mean_u: 0.701276454921\r\n", "\th0_range_x_min_u: 0.212314451878\r\n", "\th0_row_norms_max: 5.89326124667\r\n", "\th0_row_norms_mean: 2.98549156744\r\n", "\th0_row_norms_min: 0.0\r\n", "\tlearning_rate: 0.000205\r\n", "\tmomentum: 0.583961\r\n", "\tmonitor_seconds_per_epoch: 0.0\r\n", "\tobjective: 2.30258509299\r\n", "\ty_col_norms_max: 0.0\r\n", "\ty_col_norms_mean: 0.0\r\n", "\ty_col_norms_min: 0.0\r\n", "\ty_max_max_class: 0.1\r\n", "\ty_mean_max_class: 0.1\r\n", "\ty_min_max_class: 0.1\r\n", "\ty_misclass: 0.903\r\n", "\ty_nll: 2.30258509299\r\n", "\ty_row_norms_max: 0.0\r\n", "\ty_row_norms_mean: 0.0\r\n", "\ty_row_norms_min: 0.0\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "Time this epoch: 8.862564 seconds\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "Monitoring step:\r\n", "\tEpochs seen: 1\r\n", "\tBatches seen: 500\r\n", "\tExamples seen: 50000\r\n", "\th0_col_norms_max: 6.23503528894\r\n", "\th0_col_norms_mean: 3.82355957684\r\n", "\th0_col_norms_min: 2.0619418242\r\n", "\th0_max_x_max_u: 0.999166295949\r\n", "\th0_max_x_mean_u: 0.835562596825\r\n", "\th0_max_x_min_u: 0.484561123332\r\n", "\th0_mean_x_max_u: 0.898487726921\r\n", "\th0_mean_x_mean_u: 0.47726432182\r\n", "\th0_mean_x_min_u: 0.140795730801\r\n", "\th0_min_x_max_u: 0.502187137372\r\n", "\th0_min_x_mean_u: 0.134253984725\r\n", "\th0_min_x_min_u: 0.000345513621585\r\n", "\th0_range_x_max_u: 0.982615216558\r\n", "\th0_range_x_mean_u: 0.701308612099\r\n", "\th0_range_x_min_u: 0.212336533604\r\n", "\th0_row_norms_max: 5.89326388489\r\n", "\th0_row_norms_mean: 2.98549391807\r\n", "\th0_row_norms_min: 9.32520084457e-07\r\n", "\tlearning_rate: 0.000205\r\n", "\tmomentum: 0.583961\r\n", 
"\tmonitor_seconds_per_epoch: 8.862564\r\n", "\tobjective: 2.21611916125\r\n", "\ty_col_norms_max: 0.0675190825355\r\n", "\ty_col_norms_mean: 0.0446384068231\r\n", "\ty_col_norms_min: 0.0272975089866\r\n", "\ty_max_max_class: 0.134260376897\r\n", "\ty_mean_max_class: 0.114756053537\r\n", "\ty_min_max_class: 0.104770463666\r\n", "\ty_misclass: 0.57\r\n", "\ty_nll: 2.21611916125\r\n", "\ty_row_norms_max: 0.0153922340481\r\n", "\ty_row_norms_mean: 0.0060778646612\r\n", "\ty_row_norms_min: 0.00163779261751\r\n", "Saving to mlp.pkl...\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "Saving to mlp.pkl done. Time elapsed: 0.421483 seconds\r\n", "The experiment returned value is None\r\n" ] } ], "prompt_number": 2 }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Multiple runs using `jobdispatch`\n", "\n", "Launching 10 hyperoptimization jobs is as easy as" ] }, { "cell_type": "code", "collapsed": false, "input": [ "!jobdispatch --local --repeat_jobs=10 /opt/lisa/os/bin/jobman cmdline -g numbered pylearn2.scripts.jobman.experiment.train_experiment workdir_prefix=jobman_demo/ jobman_demo/mlp.conf" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "\r\n", "\r\n", "The jobs will be launched on the system: Local\r\n", "With options: ['repeat_jobs:10', \"tasks_filename:['nb0', 'compact']\", 'launch_cmd:Local']\r\n", "We generate the DBI object with 10 command\r\n", "Fri Jan 10 15:41:21 2014\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "[DBI] The Log file are under LOGS.NOBACKUP/jobman_cmdline_-g_numbered_pylearn2.scripts.jobman.experiment.train_experiment_workdir_prefix_jobman_demo__jobman_demo_mlp.conf_2014-01-10_15-41-21.548948\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "[DBI,1/10,Fri Jan 10 15:41:22 2014] /opt/lisa/os/bin/jobman cmdline -g numbered pylearn2.scripts.jobman.experiment.train_experiment workdir_prefix=jobman_demo/ jobman_demo/mlp.conf\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "[DBI,2/10,Fri Jan 10 15:41:43 2014] /opt/lisa/os/bin/jobman cmdline -g numbered pylearn2.scripts.jobman.experiment.train_experiment workdir_prefix=jobman_demo/ jobman_demo/mlp.conf\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "[DBI,3/10,Fri Jan 10 15:42:04 2014] /opt/lisa/os/bin/jobman cmdline -g numbered pylearn2.scripts.jobman.experiment.train_experiment workdir_prefix=jobman_demo/ jobman_demo/mlp.conf\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "[DBI,4/10,Fri Jan 10 15:42:25 2014] /opt/lisa/os/bin/jobman cmdline -g numbered pylearn2.scripts.jobman.experiment.train_experiment workdir_prefix=jobman_demo/ jobman_demo/mlp.conf\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "[DBI,5/10,Fri Jan 10 15:42:45 2014] /opt/lisa/os/bin/jobman cmdline -g numbered pylearn2.scripts.jobman.experiment.train_experiment workdir_prefix=jobman_demo/ jobman_demo/mlp.conf\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "[DBI,6/10,Fri Jan 10 15:43:06 2014] /opt/lisa/os/bin/jobman cmdline -g numbered pylearn2.scripts.jobman.experiment.train_experiment workdir_prefix=jobman_demo/ jobman_demo/mlp.conf\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "[DBI,7/10,Fri Jan 10 15:43:27 2014] /opt/lisa/os/bin/jobman cmdline -g numbered pylearn2.scripts.jobman.experiment.train_experiment workdir_prefix=jobman_demo/ jobman_demo/mlp.conf\r\n" ] }, { "output_type": "stream", "stream": 
"stdout", "text": [ "[DBI,8/10,Fri Jan 10 15:43:48 2014] /opt/lisa/os/bin/jobman cmdline -g numbered pylearn2.scripts.jobman.experiment.train_experiment workdir_prefix=jobman_demo/ jobman_demo/mlp.conf\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "[DBI,9/10,Fri Jan 10 15:44:08 2014] /opt/lisa/os/bin/jobman cmdline -g numbered pylearn2.scripts.jobman.experiment.train_experiment workdir_prefix=jobman_demo/ jobman_demo/mlp.conf\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "[DBI,10/10,Fri Jan 10 15:44:29 2014] /opt/lisa/os/bin/jobman cmdline -g numbered pylearn2.scripts.jobman.experiment.train_experiment workdir_prefix=jobman_demo/ jobman_demo/mlp.conf\r\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "[DBI,Fri Jan 10 15:44:50 2014] left running: 0/10\r\n", "[DBI] 10 jobs. finished: 10, running: 0, waiting: 0, init: 0\r\n", "[DBI] jobs unfinished (starting at 1): []\r\n", "[DBI] The Log file are under LOGS.NOBACKUP/jobman_cmdline_-g_numbered_pylearn2.scripts.jobman.experiment.train_experiment_workdir_prefix_jobman_demo__jobman_demo_mlp.conf_2014-01-10_15-41-21.548948\r\n" ] } ], "prompt_number": 3 }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Parsing results using `jobman.tools.find_conf_files`\n", "\n", "Once all relevant results have been extracted, you'll probably want to find the best set of hyperparameters.\n", "\n", "One way to do that is to call `jobman.tools.find_conf_files` on the directory containing your experiment directories; the method will return a list of `DD` objects for all experiment files present in that directory and in its subdirectories. You can then go through that list and quickly extract the best hyperparameters:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "import numpy\n", "from jobman import tools\n", "\n", "def parse_results(cwd):\n", " optimal_dd = None\n", " optimal_measure = numpy.inf\n", "\n", " for tup in tools.find_conf_files(cwd):\n", " dd = tup[1]\n", " if 'results.train_y_misclass' in dd:\n", " if dd['results.train_y_misclass'] < optimal_measure:\n", " optimal_measure = dd['results.train_y_misclass']\n", " optimal_dd = dd\n", " \n", " print \"Optimal \" + \"results.train_y_misclass\" + \": \" + str(optimal_measure)\n", " for key, value in optimal_dd.items():\n", " if 'hyper_parameters' in key:\n", " print key + \": \" + str(value)\n", "\n", "parse_results(\"jobman_demo/\")" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "WARNING: jobman_demo/__init__.pyc/current.conf file not found. Skipping it" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "\n", "WARNING: jobman_demo/mlp.conf/current.conf file not found. Skipping it\n", "WARNING: jobman_demo/__init__.py/current.conf file not found. Skipping it\n", "WARNING: jobman_demo/mlp.pkl/current.conf file not found. Skipping it\n", "WARNING: jobman_demo/utils.py/current.conf file not found. Skipping it\n", "WARNING: jobman_demo/utils.pyc/current.conf file not found. Skipping it\n", "WARNING: jobman_demo/mlp.yaml/current.conf file not found. Skipping it\n", "Optimal results.train_y_misclass: 0.217\n", "hyper_parameters.learning_rate: 0.00191878940445\n", "hyper_parameters.init_momentum: 0.782112604517\n" ] } ], "prompt_number": 4 }, { "cell_type": "code", "collapsed": false, "input": [], "language": "python", "metadata": {}, "outputs": [] } ], "metadata": {} } ] }