{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Boston Housing - A regression problem\n", "In this notebook we give a quick introduction working with pailab's repository and tools where we use the boston housing regression problem as use case." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:37.134347Z", "start_time": "2020-01-02T10:19:37.114374Z" }, "code_folding": [] }, "outputs": [], "source": [ "#import all things you need to get startes\n", "import time\n", "import pandas as pd\n", "import logging as logging\n", "import pprint\n", "import plotly\n", "plotly.offline.init_notebook_mode(connected=True)\n", "\n", "# Here start the repository specific imports\n", "import pailab.ml_repo.memory_handler as memory_handler\n", "from pailab import RepoInfoKey, MeasureConfiguration, MLRepo, DataSet, MLObjectType, FIRST_VERSION, LAST_VERSION\n", "from pailab.job_runner.job_runner import SimpleJobRunner, JobState, SQLiteJobRunner\n", "\n", "#You may set the loglevel and log-format here. \n", "logging.basicConfig(level=logging.FATAL)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Read the data\n", "As an example machine learning task to ilustrate the way of working with the repository we use the Boston housing data from the [UCI Machine Learning Repository](http://archive.ics.uci.edu/ml/index.php) where we have applied some preprocessing. The data consists of house prices together with the house features `'RM'`, `'LSTAT'`, and `'PTRATIO'`:\n", "- `'RM'` is the average number of rooms among homes in the neighborhood.\n", "- `'LSTAT'` is the percentage of homeowners in the neighborhood considered \"lower class\" (working poor).\n", "- `'PTRATIO'` is the ratio of students to teachers in primary and secondary schools in the neighborhood.\n", "We just read the csv-file containing the data (also in the repository) into a pandas dataframe." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:37.274190Z", "start_time": "2020-01-02T10:19:37.258181Z" }, "collapsed": true }, "outputs": [], "source": [ "data = pd.read_csv('housing.csv')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Create a new repository\n", "We first create a new repository for our task. The repository is the central key around all functionality is built. Similar to a repository used for source control in classical software development it contains all data and algorithms needed for the machine learning task. The repository needs storages for \n", "- scripts containing the machine learning algorithms and interfaces,\n", "- numerical objects such as arrays and matrices representing data, e.g. input data, data from the valuation of the models,\n", "- json documents representing parameters, e.g. training parameter, model parameter.\n", "\n", "To keep things simple, we just start using in memory storages. Note that the used memory interfaces are except for testing and playing around not be the first choice, since when ending the session, everything will be lost...\n", "\n", "In addition to the storages the repository needs a reference to a JobRunner which the platform can use to execute machine learning jobs. For this example we use the most simple one, executing everything sequential in the same thread, the repository runs in." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:37.413971Z", "start_time": "2020-01-02T10:19:37.386009Z" }, "code_folding": [], "collapsed": true }, "outputs": [], "source": [ "# setting up the repository\n", "config = {'user': 'test_user',\n", " 'workspace': 'c:/temp',\n", " 'repo_store': \n", " {\n", " 'type': 'memory_handler', \n", " 'config': {}\n", " },\n", " 'numpy_store':\n", " {\n", " 'type': 'memory_handler',\n", " 'config':{}\n", " },\n", " 'job_runner':\n", " {\n", " 'type': 'simple',\n", " 'config': {\n", " 'throw_job_error': True\n", " }\n", " }\n", " }\n", "ml_repo = MLRepo( user = 'test_user', config=config)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Add a tree\n", "To navigate in a simple way over all objects, one can add a so-called tree to the repository. The tree allows one to use auto completion to acces objcts and respectiv methods." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:37.509844Z", "start_time": "2020-01-02T10:19:37.501888Z" }, "collapsed": true }, "outputs": [], "source": [ "from pailab.tools.tree import MLTree\n", "MLTree.add_tree(ml_repo)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Adding data\n", "The data in the repository is handled by two different data objects:\n", "- RawData is the object containing real data.\n", "- DataSet is the object conaining the logical data, i.e. a reference to a RawData object together with a specification, which data from the RawData will be used. Here, one can specify a fixed version of the underlying RawData object (then changes to the RawData will not affect the derived DataSet) or a fixed or floating subset of the RawData by defininga start and endindex cutting the derived data just out of the original data.\n", "\n", "Normally one will add RawData and then define DataSets which are used to train or test a model which is exactly the way shown in the following." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:37.637674Z", "start_time": "2020-01-02T10:19:37.617728Z" }, "collapsed": true }, "outputs": [], "source": [ "# Add RawData. A convenient way to add RawData is simply to use the method add on the raw_data collection.\n", "# This method just takes a pandas dataframe and the specification, which columns belong to the input \n", "#and which to the targets.\n", "ml_repo.tree.raw_data.add('boston_housing', data, \n", " input_variables=['RM', 'LSTAT', 'PTRATIO'], \n", " target_variables = ['MEDV'])\n", "# based on the raw data we now define training and test sets\n", "ml_repo.tree.training_data.add('sample1', ml_repo.tree.raw_data.boston_housing(), 0, 300)\n", "ml_repo.tree.test_data.add('sample2', ml_repo.tree.raw_data.boston_housing(), 301, None)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When creating the DataSet we have to set two important informations for the repository, given as a dictionary:\n", "- The object name. Each object in the repository needs to have a unique name in the repository.\n", "- The object type which gives. In our example here we say that we specify that the DataSet are training and test data. 
Note that one can have only one training data object per repository, while the repository can contain many different test data sets.\n", "\n", "**When adding an object (no matter whether it is a data object or some other object such as a parameter), the object gets a version number, and no object is ever removed; adding just creates a new version.** The add method returns a dictionary of the object names together with their version numbers." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Adding a model\n", "The next step to do machine learning would be to define a model which will be used in the repository. A model consists of the following pieces:\n", "- a script where the code for the model evaluation is defined, together with the function name of the evaluation method,\n", "- a script where the code for the model training is defined, together with the function name of the training method,\n", "- a model parameter object defining the model parameters, which must implement the correct interface so that it can be used within the repository (see the documentation on integrating new objects; normally there is nothing more to do than simply adding *@repo_object_init()* in the line above your *__init__* method),\n", "- a training parameter object defining training parameters (such as the number of optimization steps etc.), if necessary for your algorithm (this one is optional).\n", "\n", "**SKLearn models as an example**\n", "\n", "We do not have to define the pieces described above if we use sklearn. Instead we can use the *pailab.externals.sklearn_interface* module interfacing \n", "the sklearn package so that it can be used within the repository. This interface provides a simple method (add_model) to add an arbitrary sklearn model as a model which can be handled by the repository. This method adds a bunch of repo objects to the repository (according to the pieces described above):\n", "- An object defining the function to be called to evaluate the model\n", "- An object defining the function to be called to train the model\n", "- An object defining the model\n", "- An object defining the model parameter\n", "\n", "For the following we just use a DecisionTree as our model." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:38.013172Z", "start_time": "2020-01-02T10:19:38.001187Z" }, "collapsed": true }, "outputs": [], "source": [ "import pailab.externals.sklearn_interface as sklearn_interface\n", "from sklearn.tree import DecisionTreeRegressor\n", "sklearn_interface.add_model(ml_repo, DecisionTreeRegressor(), model_param={'max_depth': 5})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Train the model\n", "Now, model training is very simple, since you have defined training and testing data as well as methods to evaluate and fit your model and the model parameters.\n", "So, you can just call *run_training* on the repository, and the training is performed automatically.\n", "The training job is executed via the JobRunner you specified when setting up the repository. All methods of the repository involving jobs return the job id when adding the job to the JobRunner, so that you can check the status of the task and see if it successfully finished."
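] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before training, we can quickly confirm that the repository knows its training data (a small optional sketch; it just reuses the *get_training_data* accessor and the *repo_info* attribute that also appear further below in this notebook)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# check that the training data defined above is registered in the repository\n", "train_data = ml_repo.get_training_data(full_object=False)\n", "print(train_data.repo_info[RepoInfoKey.NAME])"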
] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:38.141000Z", "start_time": "2020-01-02T10:19:38.121029Z" } }, "outputs": [], "source": [ "job_id = ml_repo.run_training() \n", "print(job_id)\n", "job_info = ml_repo._job_runner.get_info(job_id[0], job_id[1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Run evaluation\n", "To measure errors and to provide plots the model must be evaluated on all test and training datasets." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:38.264836Z", "start_time": "2020-01-02T10:19:38.240871Z" } }, "outputs": [], "source": [ "job_id = ml_repo.run_evaluation()\n", "# print information about the job\n", "info = ml_repo._job_runner.get_info(job_id[0][0], job_id[0][1]) \n", "print(str(info))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Add and compute measures" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:38.376686Z", "start_time": "2020-01-02T10:19:38.364702Z" }, "collapsed": true }, "outputs": [], "source": [ "ml_repo.add_measure(MeasureConfiguration.MAX)\n", "ml_repo.add_measure(MeasureConfiguration.R2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:38.444596Z", "start_time": "2020-01-02T10:19:38.376686Z" }, "collapsed": true }, "outputs": [], "source": [ "job_ids = ml_repo.run_measures()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:38.496556Z", "start_time": "2020-01-02T10:19:38.444596Z" } }, "outputs": [], "source": [ "ml_repo.tree.models.DecisionTreeRegressor.measures.sample1.max.load()\n", "print(str(ml_repo.tree.models.DecisionTreeRegressor.measures.sample1.max.obj.value))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the steps\n", "- *run_evaluation*\n", "- *run_measures*\n", "\n", "are not necessary if *run_training* is called with the keyword argument *run_descendants=True*. \n", "In This case the repository would have automatically triggered all evaluations and measurement calculations automatically." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Working with the repository\n", "This section shows how one can work with the audit and revision functionality of the repository." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:38.652316Z", "start_time": "2020-01-02T10:19:38.640334Z" } }, "outputs": [], "source": [ "for k in MLObjectType:\n", " names = ml_repo.get_names(k.value)\n", " for n in names: \n", " print(n + '\\t ' + k.value)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Repository information such as version number, author, date of change are attached to the repo objects and can simply be retrieved:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:38.792130Z", "start_time": "2020-01-02T10:19:38.772188Z" } }, "outputs": [], "source": [ "ml_repo.tree.models.DecisionTreeRegressor.measures.sample1.r2.load()\n", "pprint.pprint(ml_repo.tree.models.DecisionTreeRegressor.measures.sample1.r2.obj.repo_info.get_dictionary())" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:38.840067Z", "start_time": "2020-01-02T10:19:38.792130Z" } }, "outputs": [], "source": [ "ml_repo.tree.models.DecisionTreeRegressor.measures.sample1.r2.history()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The commits can also be queried and printed. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:38.987868Z", "start_time": "2020-01-02T10:19:38.927949Z" } }, "outputs": [], "source": [ "for k in ml_repo.get_commits():\n", " pprint.pprint(k.to_dict())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Labeling models\n", "There is the possibility to label a certain version of a model. The label can then be used to access the model instead of the version number. It is vry useful to\n", "compare e.g. the current productive model (labeld e.g. 'prod') against other model versions. abels are supported by many functions and tools and make life much easier. So the consistency checks only check for the latest and labeled models if there are changes make a rerun of training/evaluation/measures needed. Also som figures will automatically highlight the results belonging to labeled versions.\n", "\n", "Let us label the latest model version in the repo." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:39.087735Z", "start_time": "2020-01-02T10:19:39.079747Z" }, "collapsed": true }, "outputs": [], "source": [ "ml_repo.tree.models.DecisionTreeRegressor.set_label('prod',version = LAST_VERSION, message='we found our first production model')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Tests\n", "It is possible to define model tests.\n", "### Regressiontests\n", "Regressiontests compare measuresments on the repositories dataset of a model to th measurements of labeled reference model." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:39.227550Z", "start_time": "2020-01-02T10:19:39.211602Z" } }, "outputs": [], "source": [ "import pailab.tools.tests\n", "reg_test = pailab.tools.tests.RegressionTestDefinition(reference='prod', models=None, data=None, labels=None, measures=None, tol=1e-3)\n", "reg_test.repo_info.name='reg_test'\n", "#reg_test.repo_info.category = MLObjectType.TEST_DEFINITION\n", "ml_repo.add(reg_test, message='regression test definition')\n", "#ml_repo.tree.models.DecisionTreeRegressor.measures.sample1." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:39.295490Z", "start_time": "2020-01-02T10:19:39.227550Z" } }, "outputs": [], "source": [ "ml_repo.run_tests()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Consistency checks\n", "Pailab's *checks*-submodule provides functionality to check for consistency and quality issues as well as for outstanding tasks (such as rerunning a training after the training set has been changed).\n", "\n", "### Model consistency\n", "There are different checks to test model consistency such as if the tests of a model are up to date and succeeded or if the latest model is trained on the latest trainin data. All model tests are performed for **labeled** models and the latest model only.\n", "In our first example we change a model parameter but do not train for a new model version wih this parameter.\n", "\n", "The following checks are performed:\n", "- Is the latest model calibrated on the latest parameters and training data\n", "- Are all labeled models (including latest model) evaluated on the latest available training and test data\n", "- Are all measures of all labeled models computed on the latest data\n", "- Have all tests been run on the labeled models" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:39.383340Z", "start_time": "2020-01-02T10:19:39.371388Z" } }, "outputs": [], "source": [ "param = ml_repo.get('DecisionTreeRegressor/model_param')\n", "param.sklearn_params['max_depth'] = 2\n", "version = ml_repo.add(param)\n", "print(param.sklearn_params)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After we have changed the model parameter, we use the *tools* submodules *check_model* method for open tasks/inconsistencies. This method can be called for a certain model or also for a lbeled model. If nothing is specified, all labeled models will be checked.\n", "Applying the method to the latest model we see that the output shows that the models last version has been calibrated using a different model parameter version then the current version. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:39.539133Z", "start_time": "2020-01-02T10:19:39.519173Z" } }, "outputs": [], "source": [ "import pailab.tools.checker as checker\n", "results = checker.run(ml_repo)\n", "pprint.pprint(results)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the labeled model would not show an issue. Only the latest model is affected by this change (it is the definition of latest model that it has been calibrated on latest inputs).\n", "\n", "We can resolve this issue by simply training the model again (now on the new training data set)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:39.686966Z", "start_time": "2020-01-02T10:19:39.670973Z" } }, "outputs": [], "source": [ "ml_repo.run_training()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the solution to simply retrain introduced new issues: The model has not yet been evaluated and no measures have been computed. (If we would have set **run_descendants=False** as argument, the preceding steps would have also been performed and the issues would not have been present)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:39.846724Z", "start_time": "2020-01-02T10:19:39.830776Z" } }, "outputs": [], "source": [ "results = checker.run(ml_repo)\n", "pprint.pprint(results)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:39.930644Z", "start_time": "2020-01-02T10:19:39.846724Z" } }, "outputs": [], "source": [ "ml_repo.run_evaluation(run_descendants=True)# we use run_descendants so that the issues with th measures are resolved too" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:39.986535Z", "start_time": "2020-01-02T10:19:39.930644Z" } }, "outputs": [], "source": [ "results = checker.run(ml_repo)\n", "pprint.pprint(results)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Checking training and test data consistency\n", "There may be also data inconsistencies, so for example the training and test data overlap. The **check_data** methods performs such a test.\n", "\n", "Let us first add a new test data set which overlaps with the training data" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:40.046454Z", "start_time": "2020-01-02T10:19:40.038466Z" }, "collapsed": true }, "outputs": [], "source": [ "ml_repo.tree.test_data.add('sample3', ml_repo.tree.raw_data.boston_housing(), 0, 50)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:40.110369Z", "start_time": "2020-01-02T10:19:40.050450Z" } }, "outputs": [], "source": [ "checker.run(ml_repo)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Checking usage of RawData\n", "The check_data method includes also a check if all RawData is used in DataSets or if there is some unused data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Checking test status\n", "There is also a check if all latest tests have been applied to all labeled models and all latest models." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:40.541924Z", "start_time": "2020-01-02T10:19:40.525944Z" } }, "outputs": [], "source": [ "checker.Tests.run(ml_repo)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:40.681766Z", "start_time": "2020-01-02T10:19:40.545917Z" } }, "outputs": [], "source": [ "ml_repo.run_evaluation()\n", "ml_repo.run_measures()\n", "ml_repo.run_tests()\n", "pprint.pprint(checker.Tests.run(ml_repo))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Easy access to repo objects and names\n", "Since the names are chosen equally to a directory structure (with subdirectories) they may be long and difficult to remember (especially the order). And even if one remembers the long names, it is a lot of work to type them." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you now type the objects name followd by a '.' by pressing tab you will see the possible next solutions makin it very easy to 'browse' to the object's name you have in mind. By using '()' operator you get a list of names below th current naming level, and in addition you can filter the names by giving a string: Only names containing this string are put into the list." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:40.893455Z", "start_time": "2020-01-02T10:19:40.885494Z" }, "collapsed": true }, "outputs": [], "source": [ "ml_repo.tree.models.DecisionTreeRegressor.model_param.load()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:40.945384Z", "start_time": "2020-01-02T10:19:40.901441Z" } }, "outputs": [], "source": [ "ml_repo.tree.models.DecisionTreeRegressor.model_param.obj.repo_info.get_dictionary()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Append RawData\n", "One can append data to the RawData object. The repository manages which objects are affected by appending data and directly updates these objects." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:41.096817Z", "start_time": "2020-01-02T10:19:41.080837Z" } }, "outputs": [], "source": [ "train_data = ml_repo.get_training_data(full_object = False)\n", "print(train_data.repo_info[RepoInfoKey.NAME] +': ' +str(train_data))\n", "test_data = ml_repo.get_names(MLObjectType.TEST_DATA)\n", "for k in test_data:\n", " t = ml_repo.get(k)\n", " print(str(t)+ ' Version: ' + str(t.repo_info[RepoInfoKey.VERSION]))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:41.152741Z", "start_time": "2020-01-02T10:19:41.096817Z" }, "collapsed": true }, "outputs": [], "source": [ "from numpy import array\n", "ml_repo.tree.raw_data.boston_housing.append(x_data = array([[ 6.575, 4.98, 15.3]]), y_data =array([[504000.0]]))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:41.208699Z", "start_time": "2020-01-02T10:19:41.152741Z" } }, "outputs": [], "source": [ "print(train_data.repo_info[RepoInfoKey.NAME] +': ' +str(train_data))\n", "for k in test_data:\n", " t = ml_repo.get(k)\n", " print(str(t) + ' Version: ' + str(t.repo_info[RepoInfoKey.VERSION]))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:41.268588Z", "start_time": "2020-01-02T10:19:41.208699Z" } }, "outputs": [], "source": [ "results = checker.run(ml_repo)\n", "print(results)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Repo-Analysis\n", "Having parameters, evaluations, measures in one place enables out of the box analysis- and plotting functionality. The submodule *plot* provides automated, standardized plots." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:41.348656Z", "start_time": "2020-01-02T10:19:41.344659Z" } }, "outputs": [], "source": [ "import pailab.analysis.plot as plot" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Plotting\n", "### Plot errors and measures" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:42.314509Z", "start_time": "2020-01-02T10:19:41.540398Z" }, "code_folding": [] }, "outputs": [], "source": [ "# create more different model params etc. 
to make the repo more interesting ;-)\n", "for j in range(2):\n", "    training_data = ml_repo.get(ml_repo.tree.training_data.sample1())\n", "    training_data.end_index += 50\n", "    ml_repo.add(training_data, message='add 50 datapoints to end_index')\n", "    for i in range(6, 12):\n", "        param = ml_repo.get(ml_repo.tree.models.DecisionTreeRegressor.model_param())\n", "        param.sklearn_params['max_depth'] = i\n", "        version = ml_repo.add(param)\n", "        ml_repo.run_training()\n", "        ml_repo.run_evaluation()\n", "        ml_repo.run_measures()\n", "        if j == 1 and i == 6:\n", "            ml_repo.tree.models.DecisionTreeRegressor.set_label('prod', message='')\n", "        if j == 1 and i == 8:\n", "            ml_repo.tree.models.DecisionTreeRegressor.set_label('candidate', message='')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Plot error measure vs parameter" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:42.809846Z", "start_time": "2020-01-02T10:19:42.314509Z" } }, "outputs": [], "source": [ "import pailab.analysis.plot_helper as plt_helper\n", "import pailab.analysis.plot as plot\n", "plot.measure_by_parameter(ml_repo, ml_repo.tree.models.DecisionTreeRegressor.measures('max'), 'max_depth', data_versions=LAST_VERSION)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Plot error vs input variable" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:43.181318Z", "start_time": "2020-01-02T10:19:42.809846Z" } }, "outputs": [], "source": [ "plot.scatter_model_error(ml_repo, ml_repo.tree.models.DecisionTreeRegressor.model(), ml_repo.tree.test_data.sample2(), 'PTRATIO')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Plot histogram of model error" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:43.516871Z", "start_time": "2020-01-02T10:19:43.181318Z" } }, "outputs": [], "source": [ "plot.histogram_model_error(ml_repo, ml_repo.tree.models.DecisionTreeRegressor.model(), ml_repo.tree.test_data.sample2())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Plot data" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:43.960310Z", "start_time": "2020-01-02T10:19:43.516871Z" } }, "outputs": [], "source": [ "plot.histogram_data(ml_repo, {ml_repo.tree.test_data.sample2(): ['last'], ml_repo.tree.training_data.sample1(): ['first','last']}, x_coordinate = 'LSTAT') #, y_coordinate='MEDV')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Plot dependency graph" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:43.972263Z", "start_time": "2020-01-02T10:19:43.964278Z" }, "collapsed": true }, "outputs": [], "source": [ "# Uncomment the following lines to plot the dependencies. Note that this functionality needs graphviz to be installed \n", "#from pailab.tools.dependency_graph import get_dependency_graph\n", "#get_dependency_graph(ml_repo, node = 'DecisionTreeRegressor/measure/sample2/r2')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Model interpretation\n", "In this section we present some techniques to interpret/explain a model from the repository. All algorithms pailab provides are included in the *tools.interpretation* submodule. 
For an overview of algorithms to interpret machine learning models we refer to the very nice book [Interpretable Machine Learning](https://christophm.github.io/interpretable-ml-book/) by Christoph Molnar.\n", "\n", "**Note: All algorithms presented in the following provide caching functionality. Just set the respective parameter *cache* to *True* to activate caching and avoid redundant, time-consuming computations.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:44.036176Z", "start_time": "2020-01-02T10:19:43.972263Z" }, "collapsed": true }, "outputs": [], "source": [ "import pailab.tools.interpretation as interpretation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## ICE - Individual Conditional Expectation\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:44.152020Z", "start_time": "2020-01-02T10:19:44.036176Z" }, "collapsed": true }, "outputs": [], "source": [ "x_values = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]\n", "ice_results = interpretation.compute_ice(ml_repo, x_values, ml_repo.tree.test_data.sample2(), ml_repo.tree.models.DecisionTreeRegressor(), \n", "                                         x_coordinate = 'LSTAT', start_index = 1, cache = True, scale = '')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:44.799185Z", "start_time": "2020-01-02T10:19:44.152020Z" } }, "outputs": [], "source": [ "# we limit the number of graphs by selecting certain points via the ice_points parameter; remove the parameter to get a plot with all ICE curves\n", "plot.ice(ice_results, ice_points = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### ICE and functional clustering" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we have seen above, plotting a lot of ICE curves in one figure may be very confusing, maybe even obscuring certain artefacts. To get nice representatives of the different ICE graphs, pailab provides some kind of functional clustering. To activate the clustering, just call the method again, but this time define the parameter *clustering_param*. This parameter must be a dictionary of parameters to control the behaviour of the functional clustering. For a description of all available parameters, see the API documentation of the *tools.interpretation.functional_clustering* method. To apply functional clustering using all default settings, just use an empty dictionary. In our example below we set the number of clusters to ten."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:44.907013Z", "start_time": "2020-01-02T10:19:44.799185Z" }, "collapsed": true }, "outputs": [], "source": [ "ice_results=interpretation.compute_ice(ml_repo, x_values, ml_repo.tree.test_data.sample2(), ml_repo.tree.models.DecisionTreeRegressor(),\n", " x_coordinate = 'LSTAT', start_index = 1, clustering_param = {'n_clusters' : 10}, cache = True)\n", "#instead of defining the model name and version, we can also use a label as shown in th commented lines below\n", "#ice_results=interpretation.compute_ice(ml_repo, x_values, ml_repo.tree.test_data.sample2(), model_label ='prod',\n", "# x_coordinate = 'LSTAT', start_index = 1, clustering_param = {'n_clusters' : 20}, cache = True)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:45.713935Z", "start_time": "2020-01-02T10:19:44.907013Z" } }, "outputs": [], "source": [ "plot.ice_clusters(ice_results)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:46.592763Z", "start_time": "2020-01-02T10:19:45.713935Z" } }, "outputs": [], "source": [ "plot.ice(ice_results, clusters = [2])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Comparing two models\n", "ICE plots are a usefull tool to compare two models which can be simpyl accomplished by calling above methods with a second ICE_Result object computed for another model. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:46.672653Z", "start_time": "2020-01-02T10:19:46.596759Z" }, "collapsed": true }, "outputs": [], "source": [ "ice_results_2 = interpretation.compute_ice(ml_repo, x_values, ml_repo.tree.test_data.sample2(), ml_repo.tree.models.DecisionTreeRegressor(), #model_label ='prod',\n", " model_version = FIRST_VERSION,\n", " x_coordinate = 'LSTAT', start_index = 1, cache = True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:49.334318Z", "start_time": "2020-01-02T10:19:46.684638Z" } }, "outputs": [], "source": [ "plot.ice(ice_results, clusters = [2,4], ice_results_2 = ice_results_2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Calling the method *plot.ice_clusters* with the derived results for the second model computes the average of seconds models ICE curves on the clusters of the first model and plots them together with the first models clusters." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:50.001426Z", "start_time": "2020-01-02T10:19:49.334318Z" } }, "outputs": [], "source": [ "plot.ice_clusters(ice_results, ice_results_2 = ice_results_2, clusters=[0, 1,2])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:50.916205Z", "start_time": "2020-01-02T10:19:50.001426Z" } }, "outputs": [], "source": [ "plot.ice_diff(ice_results, ice_results_2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2020-01-02T10:19:50.928189Z", "start_time": "2020-01-02T10:19:50.920203Z" }, "collapsed": true }, "outputs": [], "source": [ "# plot decision tree\n", "#from graphviz import Source\n", "#from sklearn import tree\n", "#from IPython.display import SVG\n", "#model = ml_repo.get(ml_repo.tree.models.DecisionTreeRegressor.model())#, version = -1)\n", "\n", "#graph = Source(tree.export_graphviz(model.model, out_file=None\n", "# , filled = True))\n", "#display(SVG(graph.pipe(format='svg')))\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "celltoolbar": "Initialization Cell", "hide_input": false, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.1" }, "toc": { "base_numbering": 1, "colors": { "hover_highlight": "#DAA520", "navigate_num": "#000000", "navigate_text": "#333333", "running_highlight": "#FF0000", "selected_highlight": "#FFD700", "sidebar_border": "#EEEEEE", "wrapper_background": "#FFFFFF" }, "moveMenuLeft": true, "nav_menu": { "height": "12px", "width": "252px" }, "navigate_menu": true, "number_sections": true, "sideBar": true, "skip_h1_title": false, "threshold": 4, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": { "height": "786px", "left": "0px", "right": "1470.45px", "top": "65.9943px", "width": "324.774px" }, "toc_section_display": "block", "toc_window_display": true, "widenNotebook": false }, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "window_display": false } }, "nbformat": 4, "nbformat_minor": 2 }