{ "cells": [ { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "![NeuronUnit Logo](https://raw.githubusercontent.com/scidash/assets/master/logos/neuronunit-logo-text.png)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### NeuronUnit is based on SciUnit, a discpline-agnostic framework for data-driven unit testing of scientific models. \n", "\n", "# Chapter 1\n", "NeuronUnit can be used to test neuron and ion channel models against the experimental data that you think they should aim to recapitulate. \n", "- Have you read the [SciUnit documentation](https://github.com/scidash/sciunit/blob/master/docs/chapter1.ipynb)? It might be good to start there!
\n", "- A few common Python packages are used below, all of which are available from PyPI.
" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Any NeuronUnit test script will have the following actions: \n", "Most of these will be abstracted away in SciUnit or NeuronUnit modules that make things easier for the developer: \n", "1. Instantiate a model(s) from a model class, with parameters of interest to build a specific model. \n", "1. Instantiate a test(s) from a test class, with parameters of interest to build a specific test. \n", "2. Check that the model has the capabilities required to take the test. \n", "1. Make the model take the test. \n", "2. Generate a score from that test run. \n", "1. Bind the score to the specific model/test combination and any related data from test execution. \n", "1. Visualize the score (i.e. print or display the result of the test). " ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Here, we will break down how this is accomplished in NeuronUnit. \n", "Although NeuronUnit contains several model and test classes that make it easy to work with standards in neuron modeling and electrophysiology data reporting, here we will use toy model and test classes constructed on-the-fly so the process of model and test construction is fully transparent. \n", "\n", "Here is a toy model class:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false, "deletable": true, "editable": true, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "import sciunit\n", "import neuronunit\n", "from neuronunit.capabilities import ProducesMembranePotential # This class contains a set of methods \n", " # for the model to implement.\n", "from neo.core import AnalogSignal # Neo is a package for representing physiology data. \n", "import numpy as np # NumPy is how most numerical data is represented in Python. \n", "from quantities import ms # Quantities is a package for representing units. \n", "\n", "class ToyNeuronModel(sciunit.Model, # Each model must subclass SciUnit's base Model class. \n", " ProducesMembranePotential): # Each model will also subclass one of more capabilities. \n", " \"\"\"A toy neuron model that is always at it resting potential\"\"\"\n", " def __init__(self, v_rest, name=None): # Models are instantiated with their parameters. \n", " self.v_rest = v_rest\n", " sciunit.Model.__init__(self, name=name) # SciUnit takes care of the rest. \n", "\n", " def get_membrane_potential(self):\n", " \"\"\"Gets a 'neo.core.AnalogSignal' object encoding the membrane potential trace for this cell model\"\"\"\n", " array = np.ones(10000) * self.v_rest # A trivial membrane potential trace that is constant. \n", " dt = 1*ms # Time per sample in milliseconds. \n", " vm = AnalogSignal(array,units=mV,sampling_rate=1.0/dt) # Creating the required output. \n", " return vm" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Inheriting from a SciUnit `Capability` is a how a SciUnit `Model` lets potential tests know what the `Model` can and cannot do. \n", "- It tells tests that the model will be implement any method stubbed out in the definition of that `Capability`. \n", "- Tests can then check to make sure a `Model` has the required `Capabilities`, and then can interacts with the `Model` through those `Capabilities` (e.g. running a simulation and getting a membrane potential trace). 
\n", "- The `ToyNeuronModel` class inherits from `sciunit.Model` (as do all NeuronUnit models), and also from `ProducesMembranePotential`, which is a subclass of `sciunit.Capability`. \n", "\n", "Let's see what the `ProducesMembranePotential` capability looks like: " ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "```python\n", "class ProducesMembranePotential(sciunit.Capability):\n", " \"\"\"Indicates that the model produces a somatic membrane potential.\"\"\"\n", " \n", " def get_membrane_potential(self):\n", " \"\"\"Must return a neo.core.AnalogSignal.\"\"\"\n", " raise NotImplementedError()\n", "\n", " def get_median_vm(self):\n", " \"\"\"Returns the median of the membrane potential\"\"\"\n", " vm = self.get_membrane_potential() # Uses the method above to first get the trace. \n", " return np.median(vm) # Then returns the median of it. \n", "```" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "`ProducesMembranePotential` has two methods. \n", "- The first, `get_membrane_potential` is **unimplemented** by design. Since there is no way to know how each model will generate and return a membrane potential time series, the `get_membrane_potential` method in this capability is left unimplemented, while the docstring describes what the model must implement in order to satisfy that capability. In the `ToyNeuronModel` above, we see that the model implements it by simply creating a long array of resting potential values, and returning it as a `neo.core.AnalogSignal` object. A more useful model would probably implement it by running a simulation and extracting membrane potential values from memory or disk. \n", "- The second, `get_median_vm` is **already implemented**, which means there is no need for the model to do implement it again. For its implementation to work, however, the implementation of `get_membrane_potential` must be complete. Pre-implemented capability methods such as these allow the developer to focus on implementing only a few core interactions with the model, and then getting a lot of extra functionality for free. In the example above, once we know that the membrane potential is being returned as a `neo.core.AnalogSignal`, we can simply take the median using numpy. We know that the membrane potential isn't being returned as a list or a tuple or some other object on which numpy's median function won't necessarily work. \n", "\n", "Let's construct a single instance of this simple `model`, by choosing a value for the single membrane potential argument. This toy `model` will now have a -60 mV membrane potential at all times: " ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "deletable": true, "editable": true }, "outputs": [], "source": [ "from quantities import mV\n", "\n", "my_neuron_model = ToyNeuronModel(-60.0*mV, name='my_neuron_model')" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Now we can then construct a simple test to use on this model or any other that expresses the appropriate capabilities: " ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false, "deletable": true, "editable": true, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "from quantities import Quantity\n", "from sciunit.scores import BooleanScore # One of many score types. 
\n", "from sciunit.converters import RangeToBoolean\n", "\n", "class ToyAveragePotentialTest(sciunit.Test):\n", " \"\"\"Tests the average membrane potential of a neuron.\"\"\"\n", " \n", " def __init__(self,\n", " observation={'mean':None,'std':None},\n", " name=\"Average potential test\"):\n", " \"\"\"Takes the mean and standard deviation of reference membrane potentials.\"\"\"\n", "\n", " sciunit.Test.__init__(self,observation,name) # Call the base constructor. \n", " self.required_capabilities += (ProducesMembranePotential,) # This test will require a model to \n", " # express these capabilities\n", " self.converter = RangeToBoolean(-2,2)\n", "\n", " description = \"A test of the average membrane potential of a cell.\"\n", " score_type = BooleanScore # The test will return this kind of score. \n", "\n", " def validate_observation(self, observation):\n", " \"\"\"An optional method that makes sure an observation to be used as reference data has the right form\"\"\"\n", " try:\n", " assert type(observation['mean']) is Quantity # The mean reported value must be expressed with units. \n", " assert type(observation['std']) is Quantity # Ditto for the standard deviation. \n", " except Exception as e:\n", " raise sciunit.ObservationError((\"Observation must be of the form \"\n", " \"{'mean':float*mV,'std':float*mV}\")) # If not, raise this exception. \n", "\n", " def generate_prediction(self, model, verbose=True):\n", " \"\"\"Implementation of sciunit.Test.generate_prediction.\"\"\"\n", " vm = model.get_median_vm() # If the model has the capability 'ProducesMembranePotential', \n", " # then it implements this method\n", " prediction = {'mean':vm} # Once we have our prediction, wrap it in a form that matches the observation. \n", " return prediction\n", "\n", " \n", " def bind_score(self,score,model,observation,prediction):\n", " score.related_data['mean_vm'] = prediction['mean'] # Binds some related data about the test run. \n", " return score" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### The test constructor takes an *observation* to parameterize the test\n", "This observation could come from anywhere: \n", "- Data from your lab\n", "- Results you saw in a paper\n", "- Data from http://neuroelectro.org (see the `neuronunit.neuroelectro` module)\n", "- Or it could be made up for illustration purposes, as below: " ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false, "deletable": true, "editable": true, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "from quantities import mV\n", "my_observation = {'mean':-62.1*mV, \n", " 'std':3.5*mV}\n", "my_average_potential_test = ToyAveragePotentialTest(my_observation, name='my_average_potential_test')" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "A few things happen upon test instantiation, including validation of the observation. Since the observation has the correct format (for this test class), `ToyAveragePotentialTest.validate_observation` will complete successfully and the test will be ready for use. 
\n", "\n", "### Now we judge our toy model using the test we wrote above:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false, "deletable": true, "editable": true, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "score = my_average_potential_test.judge(my_neuron_model)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "The `sciunit.Test.judge` method does several things. \n", "1. It checks to makes sure that `my_neuron_model` expresses the capabilities required to take the test. It doesn't check to see if they are implemented correctly (how could it know?) but it does check to make sure the model at least claims (through inheritance) to express these capabilities. The required capabilities are none other than those in the test's `required_capabilities` attribute. Since `ProducesMembranePotential` is the only required capability, and the `ToyNeuronModel` class inherits from this capability class, that check passes. \n", "2. It calls the test's `generate_prediction` method, which uses the model's capabilities to make the model return some quantity of interest, in this case the median membrane potential. \n", "3. It calls the test's `compute_score` method, which compares the observation the test was instantiated with against the prediction returned in the previous step. This comparison of quantities is cast into a `score` (in this case, a Z Score), bounds to some model output of interest (in this case, the model's mean membrane potential), and that `score` object is returned. \n", "4. The `score` returned is checked to make sure it is of the type promised in the class definition, i.e. that a Z Score is returned if a `ZScore` is listed in the test's `score_type` attribute. \n", "5. The `score` is bound to the `test` that returned it, the `model` that took the `test`, and the `prediction` and `observation` that were used to compute it. \n", "\n", "If all these checks pass (and there are no runtime errors) then we will get back a `score` object. Usually this will be the kind of `score` we can use to evaluate `model`/`data` agreement. If one of the `capabilities` required by the test is not expressed by the `model`, `judge` returns a special `NAScore` score type, which can be thought of as a blank. It's not an error -- it just means that the model, as written, is not capable of taking that test. \n", "\n", "If there are runtime errors, they will be raised during test execution; however if the optional `stop_on_error` keyword argument is set to `False` when we call `judge`, then it we will return a special `ErrorScore` score type, encoding the error that was raised. This is useful when batch testing many model and test combinations, so that the whole script doesn't get halted. 
{ "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### On to [Chapter 2](chapter2.ipynb)!" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.3" } }, "nbformat": 4, "nbformat_minor": 4 }