{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## PIG 12 - High-level interface examples\n", "\n", "This notebook shows how the most common use cases for IACT analysis with Gammapy\n", "would be done with the proposed high-level interface for PIG 12.\n", "\n", "- Basic idea is to have a single config-driven analysis.\n", "- And a single `Analysis` class which is the \"driver\" or \"manager\" for one analysis.\n", "- Very similar to end-user interface of Fermipy or HAP in HESS.\n", "\n", "### What is an analysis?\n", "\n", "- An analysis is a single analysis and model, not parameterised variations.\n", "- An analysis is for a single region (e.g. 10 deg region with 5 sources), you can't run the GPS or all-sky pipeline via this solution.\n", "- Making an SED is still part of one Analysis, because many SED methods require the full model and all energy data.\n", "- Making a lightcurve could be part of one Analysis (like it is in Fermi), or could be a higher-level, creating on Analysis and running it for each time bin. This exercise of pro / con is left to the LC PIG. Christoph expects that datasets serialisation is much simpler if LC / time bins isn't part of serialisation for one analysis, i.e. it would be something higher-level.\n", "- Source detection with iteratively adding sources is not the responsibility of Analysis. It's higher-level, could use Analysis or not, not discussed here.\n", "\n", "# Who does what?\n", "\n", "The idea is to have a layer of \"Tools\" that do things:\n", "\n", "- Analysis is the boss, manager, driver\n", "- Then there's a middle management agents/tools (e.g. MapMaker, ReflectedBgEstimator, ...)\n", "- Then there's minions or data structures (e.g. DataSet)\n", "\n", "TODO: design how these interact, not clear who drives what.\n", "Very important: what's the responsibility of `Analysis` vs `Observations` vs `Datasets` vs `Fit`\n", "\n", "### Main analyses\n", "\n", "- 3D map analysis\n", "- 2d map analysis (same as 3D?)\n", "- 1D spec analysis\n", "\n", "Different cases:\n", "\n", "- different background models\n", "- joint vs stacked\n", "- on vs on/off\n", "- simulate or fit\n", "\n", "### After analysis or in between steps\n", "\n", "- spectral points (for any analysis)\n", "- lightcurve?\n", "- diagnostics (residuals, significance, TS)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Config file\n", "\n", "Example config file.\n", "We will develop a schema and validate / give good error messages on read.\n", "\n", "```\n", "analysis:\n", " process:\n", " # add options to allow either in-memory or disk-based processing\n", " out_folder: \".\" # default is current working directory\n", " store_per_obs: {true, false}\n", " reduce:\n", " type: {\"1D, \"3D\"}\n", " stacked: {true, false}\n", " background: {\"irf\", \"reflected\", \"ring\"}\n", " roi: max_offset\n", " exclusion: exclusion.fits\n", " fit:\n", " energy_min, energy_max\n", " logging:\n", " level: debug\n", "\n", "dataspace:\n", " spatial: center, width, binsz\n", " energy: min, max, nbins\n", " time: min, max\n", " # PSF RAD and EDISP MIGRA not exposed for now\n", " # Per-obs energy_safe and roi_max ?\n", "\n", "model:\n", " # Model configuration will mostly be designed in different PIG\n", " sources:\n", " source_1: spectrum: powerlaw, spatial: shell\n", " diffuse: gal_diffuse.fits\n", " background:\n", " IRF\n", "\n", "\n", "# If Gammapy has configurable maker classes, there could be a generic\n", "# way to overwrite parameters from them from the config file.\n", 
"# Make every \"maker\" class in Gammapy sub-class of a \"Tool\" or \"Maker\" base class\n", "# that does config handling in a uniform way?\n", "reflected_background_maker:\n", " n_regions: 99\n", "\n", "meta:\n", " # User comments, not used by framework\n", " target_name: \n", " description: \n", "```\n", "\n", "### User workflow\n", "\n", "Typically you would start like this:\n", "\n", "```\n", "$ mkdir analysis99\n", "$ cd analysis99\n", "$ edit gammapy_analysis_config.yaml\n", "```\n", "\n", "Then you would type `ipython` or `juypter` and put the code below.\n", "\n", "IF we change observation selection to be also config-file driven,\n", "we could add a ``gammapy analysis data_reduction`` or ``gammapy analysis optimise``\n", "which does the slow parts before going to ``ipython`` or ``jupyter``,\n", "using all the information from the config file.\n", "\n", "For example, ``gammapy analysis data_reduction`` would do this:\n", "\n", "```\n", "from gammapy import Analysis\n", "analysis = Analysis() # uses the default config filename, or could look for it or take it as an argument for the CLI\n", "analysis.data_reduction()\n", "analysis.write()\n", "```\n", "\n", "and ``gammapy analysis optimise`` would do this:\n", "\n", "```\n", "from gammapy import Analysis\n", "analysis.read() # recover the state after data reduction\n", "# Assumes we have serialisation and e.g. \"analysis_state.yaml\"\n", "analysis.optimise()\n", "analysis.write()\n", "```\n", "\n", "To generate the config file, we could add a ``gammapy analysis config``\n", "which dumps the config file with all lines commented out, and the user\n", "can then fill in the numbers they care about. Alternative: users would\n", "copy & paste from config file example in the docs." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data / observation selection\n", "\n", "For now, we don't put this in the config file.\n", "It's something the user does before, keep as-is for now.\n", "\n", "There are some advantages to do observation selection automatically, but it's hard to make if very flexible and satisfy all the ways users might want to select observations (e.g. \"near Crab nebula\" or \"from 2017\" or \"only good quality\" or by zenith angle, ...\n", "\n", "Some observation selection / discarding will happen automatically, e.g. we will not process runs that are 100 deg away from the target position." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from gammapy.data import DataStore\n", "\n", "data_store = DataStore.from_dir(\"$GAMMAPY_DATA/cta-1dc/index/gps/\")\n", "obs_ids = [110380, 111140, 111159]\n", "observations = data_store.get_observations(obs_ids)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Analysis class\n", "\n", "Basic idea is to have one high-level ``Analysis`` class for the end user." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from gammapy import Analysis\n", "\n", "analysis = Analysis(config, observations)\n", "\n", "# analysis.observations: gammapy.data.Observations\n", "\n", "analysis.reduce_data() # often slow, can be hours\n", "\n", "# analysis.datasets: gammapy.datasets.Datasets\n", "\n", "# If the user wants they can save all results from data reduction\n", "# and re-start later. This stores config, datasets, ... 
all the\n", "# analysis class state.\n", "# analysis.write()\n", "# analysis = Analysis.read())\n", "\n", "analysis.optimise() # often slow, can be hours\n", "\n", "# Again, we could write and read, do the slow things only once.\n", "# e.g. supervisor comes in and asks about significance of some\n", "# model component or whatever\n", "\n", "# Everything is accessible via the \"analysis\"\n", "# It's like the Analysis in Fermipy or Sherpa \"session\" or \"HESSArray\" in HAP\n", "# a global object that gives you access to everything.\n", "# Method calls modify data members (e.g. models), but in between method calls\n", "# advanced users can do a lot of custom processing.\n", "# Many advanced use cases can be done with the Analysis API.\n", "profile = analysis.model(\"source_42\").spectrum.plot()\n", "\n", "# Do we need energy_binning for the SED points in config or only here?\n", "sed = analysis.spectral_points(\"source_42\", energy_binning)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## How Analysis interacts with lower-level code\n", "\n", "One example of what ``analysis.reduce_data`` could do:\n", "\n", "```\n", "class Analysis:\n", " def reduce_data(self):\n", " maker = DataReduction.from_config(self.config)\n", " maker.run()\n", " self.datasets = maker.datasets\n", "```\n", "\n", "A different way would be like this:\n", "```\n", "class Analysis:\n", " def reduce_data(self):\n", " config = self.make_data_reduction_config(self.config)\n", " maker = self.make_data_recution_class(config)\n", " maker.run()\n", " self.datasets = maker.datasets\n", "```\n", "\n", "Either way, we will need \"registries\" mapping options to classes, e.g.:\n", "```\n", "DATA_REDUCERS = {\n", " \"1d\": \"gammapy.spectrum.SpectrumExtration\",\n", " \"3d\": \"gammapy.cube.MapMaker\",\n", "}\n", "```\n", "\n", "By having registries (i.e. Python dicts), both within Gammapy and users\n", "can add their own, we are a \"framework\" that is extensible even at runtime.\n", "\n", "For this to work, we need to have a limited number or data containers (e.g. DataSet subclasses),\n", "because \"makers\" require a certain structure of containers, modify them in-place or make\n", "new containers." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Example of a Tool\n", "\n", "As an example, let's see what the MapMaker would look like.\n", "\n", "TODO: how do we compose lower-level Tools into Analysis chains?\n", "Who creates the Tool objects when?\n", "\n", "Analysis needs to know how to turn ``config`` into Python objects.\n", "E.g. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Example of a Tool\n", "\n", "As an example, let's see what the MapMaker would look like.\n", "\n", "TODO: how do we compose lower-level Tools into Analysis chains?\n", "Who creates the Tool objects, and when?\n", "\n", "Analysis needs to know how to turn ``config`` into Python objects,\n", "e.g. for ``geom``, but also for ``exclusion_mask: exclusion.fits`` it has to make a Map object." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# There's some base class, similar to what ctapipe has.\n", "# Base class: uniform scheme (ABC?), parameter handling/validation, logging, provenance\n", "from gammapy import Tool\n", "\n", "class MapMaker(Tool):\n", "    config_spec = {\n", "        \"offset_max\": \"2 deg\",\n", "        \"geom\": \"???\",  # is this part of the parameters, or passed to run?\n", "    }\n", "\n", "    def __init__(self, config):\n", "        self.config = config\n", "\n", "    def run(self, observations):\n", "        # Make one dataset per observation and collect them\n", "        datasets = Datasets()\n", "        for observation in observations:\n", "            dataset = self.process(observation)\n", "            datasets.append(dataset)\n", "        return datasets" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Serialisation\n", "\n", "The read and write would work like this:\n", "\n", "- Every object in Gammapy knows how to serialise itself\n", "- Composite objects serialise each of their parts\n", "\n", "Example:\n", "\n", "```\n", "class Analysis:\n", "    def write(self):\n", "        out_path = self.out_path\n", "        self.config.write(out_path)\n", "        if self.datasets is not None:\n", "            self.datasets.write(out_path)\n", "        if self.the_other_thing is not None:\n", "            self.the_other_thing.write(out_path)\n", "\n", "        self.analysis_state.write(\"analysis_state.yaml\")\n", "\n", "    def read(self, out_path=\".\"):\n", "        state = AnalysisState.read(\"analysis_state.yaml\")\n", "        if state.has_config():\n", "            self.config = state.read_config()\n", "        if state.has_datasets():\n", "            self.datasets = state.read_datasets()\n", "        if state.has_the_other_thing():\n", "            self.the_other_thing = state.read_the_other_thing()\n", "```\n", "\n", "A lot of this is on `Dataset` and `Datasets`, and `Model`; serialisation would always call down to the parts of composite objects.\n", "\n", "Serialisation will be a mix of YAML (for models and \"index\" files) and FITS (for maps etc.).\n", "\n", "In v1.0, we will not have a framework supporting different serialisation backends; we will have one way." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## TODO\n", "\n", "- What about lightcurves?\n", "- What about simulate / fake?\n", "- What about flexible background modeling?\n", "- How would one write an iterative detection method on top of this?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.0" } }, "nbformat": 4, "nbformat_minor": 2 }