{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# HSC Re-Run: Making Forced Photometry Light Curves from Scratch\n",
    "\n",
    "<br>Owners: **Justin Myles** ([@jtmyles](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@jtmyles)), **Phil Marshall** ([@drphilmarshall](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@drphilmarshall))\n",
    "<br>Last Verified to Run: **2018-09-21**\n",
    "<br>Verified Stack Release: **16.0**\n",
    "\n",
    "This project addresses issue [#63: HSC Re-run](https://github.com/LSSTScienceCollaborations/StackClub/issues/63)\n",
    "\n",
    "This notebook demonstrates the pipeline described in the [LSST Science Piplines data processing tutorial](https://pipelines.lsst.io/), from ingesting images (using the [obs_subaru](https://github.com/lsst/obs_subaru) package) through image processing, coaddition, source detection and object measurement all the way through to measuring forced photometry light curves in a small patch of the HSC sky (in the [ci_hsc](https://github.com/lsst/ci_hsc/) repository). It does this by calling a `bash` script, having first identified a minimal data set for demonstration purposes.  \n",
    "\n",
    "### Learning Objectives:\n",
    "After working through and studying this notebook you should be able to understand how to use the DRP pipeline from image visualization through to a forced photometry light curve. Specific learning objectives include: \n",
    "   1. How the command line tasks are configured, executed and linked together in a complete pipeline.\n",
    "   2. [Configuring](https://pipelines.lsst.io/v/w-2018-12/modules/lsst.pipe.base/command-line-task-config-howto.html) and executing pipeline tasks in python as well as on the command line.\n",
    "   3. The actual sequence of steps involved in the DRP pipeline.\n",
    "\n",
    "### Logistics\n",
    "This notebook is intended to be runnable on `lsst-lsp-stable.ncsa.illinois.edu` from a local git clone of https://github.com/LSSTScienceCollaborations/StackClub.\n",
    "\n",
    "\n",
    "## Set Up"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import re\n",
    "import sys\n",
    "import glob\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "import astropy.io.fits as fitsio\n",
    "import warnings\n",
    "warnings.filterwarnings(\"ignore\", category=UserWarning)\n",
    "\n",
    "import matplotlib.pyplot as plt\n",
    "from matplotlib.ticker import ScalarFormatter, FormatStrFormatter\n",
    "%matplotlib inline\n",
    "\n",
    "import eups.setupcmd\n",
    "import lsst.daf.persistence as dafPersist\n",
    "\n",
    "HOME = os.environ['HOME']\n",
    "DATAREPO = \"{}/repositories/ci_hsc/\".format(HOME)\n",
    "DATADIR = \"{}/DATA/\".format(HOME)\n",
    "CI_HSC = \"/project/shared/data/ci_hsc/\"\n",
    "os.system(\"mkdir -p {}\".format(DATADIR));"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Pipeline Preview\n",
    "\n",
    "The pipeline described in the [LSST Science Piplines data processing tutorial](https://pipelines.lsst.io/) contains a complete set of command line tasks that can be assembled into an end-to-end data reduction pipeline script. Let's see what this script looks like."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "! cat Re-RunHSC.sh"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We'll come back to each step in turn throughout the rest of this notebook."
   ]
  },
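  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that the data products read back in Part 6 below assume this script has already been run against `$DATADIR`. Launching it from the notebook might look like the following (left commented out, since the full pipeline takes a long time to run):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Uncomment to run the full end-to-end pipeline (slow!):\n",
    "# ! bash Re-RunHSC.sh"
   ]
  },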
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Part I: Interacting with data. Introduction to the Butler\n",
    "https://pipelines.lsst.io/getting-started/data-setup.html\n",
    "\n",
    "Part I runs the following command-line tasks\n",
    "```\n",
    "source /opt/lsst/software/stack/loadLSST.bash\n",
    "eups list lsst_distrib\n",
    "setup lsst_distrib\n",
    "\n",
    "setup -j -r /project/shared/data/ci_hsc\n",
    "echo lsst.obs.hsc.HscMapper > $DATADIR/_mapper\n",
    "\n",
    "ingestImages.py $DATADIR $CI_HSC_DIR/raw/*.fits --mode=link\n",
    "installTransmissionCurves.py $DATADIR\n",
    "\n",
    "ln -s $CI_HSC_DIR/CALIB/ $DATADIR/CALIB\n",
    "mkdir -p $DATADIR/ref_cats\n",
    "ln -s $CI_HSC_DIR/ps1_pv3_3pi_20170110 $DATADIR/ref_cats/ps1_pv3_3pi_20170110\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In summary, these lines establish a directory with read/write permission where all data products made by this tutorial will be stored. In the underlying Python, this repository is represented by an instance of the Butler class (which inherits directly from Object). \n",
    "\n",
    "The first thing we do in our Butler repository is write a `_mapper` file, which tells the Butler which instrument was used to collect the data stored in the Butler. It needs this file to find and organize data in a format specific to the appropriate camera. This illustrates the Butler's motivating concept: *the LSST DM Stack should be capable of interacting with data collected from a variety of instruments*. The Butler facilitates this process by abstracting the data read/write process. \n",
    "\n",
    "Second, we use the `ingestImages.py` task to organize the data in a format specific to HSC. This task reads the raw FITS files and stores the schema associated with the data in the Butler repository.\n",
    "\n",
    "Third, we use the `installTransmissionCurves.py` task to install transmission curves for the data and link the calibration files associated with the raw data to the Butler.\n",
    "\n",
    "Last, we link a reference catalog to the Butler for use with astrometry.\n",
    "\n",
    "Doing some these steps in Python might look something like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "#!setup -j -r /home/jmyles/repositories/ci_hsc\n",
    "\n",
    "setup = eups.setupcmd.EupsSetup([\"-j\",\"-r\", DATAREPO])\n",
    "status = setup.run()\n",
    "print('setup exited with status {}'.format(status))\n",
    "\n",
    "with open(DATADIR + \"_mapper\", \"w\") as f:\n",
    "    f.write(\"lsst.obs.hsc.HscMapper\")\n",
    "    \n",
    "#!installTransmissionCurves.py /home/jmyles/DATA\n",
    "\n",
    "from lsst.obs.hsc import makeTransmissionCurves, HscMapper\n",
    "from lsst.daf.persistence import Butler\n",
    "\n",
    "butler = Butler(outputs={'root': datadir, 'mode': 'rw', 'mapper': HscMapper})\n",
    "\n",
    "for start, nested in makeTransmissionCurves.getFilterTransmission().items():\n",
    "    for name, curve in nested.items():\n",
    "        if curve is not None:\n",
    "            butler.put(curve, \"transmission_filter\", filter=name)\n",
    "for start, nested in makeTransmissionCurves.getSensorTransmission().items():\n",
    "    for ccd, curve in nested.items():\n",
    "        if curve is not None:\n",
    "            butler.put(curve, \"transmission_sensor\", ccd=ccd)\n",
    "for start, curve in makeTransmissionCurves.getOpticsTransmission().items():\n",
    "    if curve is not None:\n",
    "        butler.put(curve, \"transmission_optics\")\n",
    "for start, curve in makeTransmissionCurves.getAtmosphereTransmission().items():\n",
    "    if curve is not None:\n",
    "        butler.put(curve, \"transmission_atmosphere\")\n",
    "        \n",
    "# ingest calibration images into Butler repo\n",
    "os.system(\"ln -s {} {}\".format(datarepo + \"CALIB/\", datadir + \"CALIB\"))\n",
    "\n",
    "# ingest reference catalog into Butler repo\n",
    "os.system(\"mkdir -p {}\".format(DATADIR + \"ref_cats\"))\n",
    "os.system(\"ln -s {}ps1_pv3_3pi_20170110 {}ref_cats/ps1_pv3_3pi_20170110\".format(DATAREPO, DATADIR))        \n",
    "\"\"\""
   ]
  },
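  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check that ingestion worked, we can read one raw exposure back through the Butler. This is a minimal sketch: it assumes the ingestion steps above have already been run against `$DATADIR`, and the data ID values (`visit=903334`, `ccd=16`) are taken from the `ci_hsc` dataset used later in this notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Read one ingested raw exposure back from the repository.\n",
    "butler_raw = dafPersist.Butler(inputs=DATADIR)\n",
    "raw = butler_raw.get('raw', visit=903334, ccd=16)\n",
    "# Print the pixel dimensions of the exposure as a basic sanity check.\n",
    "print(raw.getDimensions())"
   ]
  },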
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Part 2: Calibrating single frames\n",
    "https://pipelines.lsst.io/getting-started/processccd.html\n",
    "\n",
    "Part II runs the following command-line task:\n",
    "\n",
    "    processCcd.py $DATADIR --rerun processCcdOutputs --id\n",
    "    \n",
    "This applies photometric and astrometric calibrations to the raw images. \n",
    "\n",
    "The id flag allows you to select data by data ID: an unspecified id selects all raw data. Other example arguments are raw, filter, visit, ccd, and field. \n",
    "\n",
    "All command-line tasks write output datasets to a Butler repository. The --rerun flag here tells the tasks to write to `processCcdOutputs`.\n",
    "\n",
    "For information on further unpacking, see Alex Drlica-Wagner's [Pipeline Tasks](https://github.com/LSSTScienceCollaborations/StackClub/blob/project/processccd/kadrlica/ImageProcessing/PipelineTasks.ipynb) notebook on unpacking command-line tasks."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "# Running this in python might look something like this:\n",
    "from stackclub import where_is\n",
    "processCcd.py\n",
    "from lsst.pipe.tasks.processCcd import ProcessCcdTask\n",
    "\n",
    "processCcdConfig = ProcessCcdConfig()\n",
    "processCcdTaskInstance = ProcessCcdTask(butler=butler)\n",
    "\n",
    "where_is(processCcdTaskInstance, in_the=\"source\")\n",
    "\n",
    "ProcessCcdTask.parseAndRun()\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Part 3: Displaying exposures and source tables output by processCcd.py\n",
    "https://pipelines.lsst.io/getting-started/display.html\n",
    "\n",
    "This part of the tutorial is omitted."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Part 4: Coadding images\n",
    "https://pipelines.lsst.io/getting-started/coaddition.html"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Part IV runs the following command-line tasks:\n",
    "\n",
    "    makeDiscreteSkyMap.py $DATADIR --id --rerun processCcdOutputs:coadd --config skyMap.projection=\"TAN\"\n",
    "\n",
    "    makeCoaddTempExp.py $DATADIR --rerun coadd \\\n",
    "        --selectId filter=HSC-R \\\n",
    "        --id filter=HSC-R tract=0 patch=0,0^0,1^0,2^1,0^1,1^1,2^2,0^2,1^2,2 \\\n",
    "        --config doApplyUberCal=False doApplySkyCorr=False\n",
    "\n",
    "    makeCoaddTempExp.py $DATADIR --rerun coadd \\\n",
    "        --selectId filter=HSC-I \\\n",
    "        --id filter=HSC-I tract=0 patch=0,0^0,1^0,2^1,0^1,1^1,2^2,0^2,1^2,2 \\\n",
    "        --config doApplyUberCal=False doApplySkyCorr=False\n",
    "\n",
    "    assembleCoadd.py $DATADIR --rerun coadd \\\n",
    "        --selectId filter=HSC-R \\\n",
    "        --id filter=HSC-R tract=0 patch=0,0^0,1^0,2^1,0^1,1^1,2^2,0^2,1^2,2\n",
    "\n",
    "    assembleCoadd.py $DATADIR --rerun coadd \\\n",
    "        --selectId filter=HSC-I \\\n",
    "        --id filter=HSC-I tract=0 patch=0,0^0,1^0,2^1,0^1,1^1,2^2,0^2,1^2,2\n",
    "\n",
    "Since we want the deepest possible image for source detection in this example, we construct coadded images of the exposures. A sky map is a tiling of the celestial sphere. It is composed of one or more tracts, where a tract is in turn composed of one or more overlapping patches of sky which share a single WCS. \n",
    "\n",
    "First, we define a skymap with `makeDiscreteSkyMap.py` so that we can warp all of the exposure to fit on a single coordinate system. \n",
    "Second, we warp the images and store them as temporary exposures with `makeCoaddTempExp`. \n",
    "Finally, once we have warped images, we perform coaddition with `assembleCoadd.py`\n",
    "\n",
    "the configuration field specifying the WCS Projection can be:\n",
    "    - STG: stereographic projection\n",
    "    - MOL: Molleweide's projection\n",
    "    - TAN: tangent-plane projection"
   ]
  },
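  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch (assuming the Part 4 tasks above have been run against `$DATADIR`), we can read the sky map written by `makeDiscreteSkyMap.py` back through a Butler pointed at the `coadd` rerun and inspect its tracts and patches:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inspect the discrete sky map produced by makeDiscreteSkyMap.py.\n",
    "butler_skymap = dafPersist.Butler(inputs=DATADIR + 'rerun/coadd/')\n",
    "skyMap = butler_skymap.get('deepCoadd_skyMap')\n",
    "for tractInfo in skyMap:\n",
    "    numPatches = tractInfo.getNumPatches()\n",
    "    print('tract {}: {} x {} patches'.format(tractInfo.getId(), numPatches[0], numPatches[1]))"
   ]
  },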
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Part 5: Source detection\n",
    "\n",
    "Part V runs the following command-line tasks:\n",
    "\n",
    "    detectCoaddSources.py $DATADIR --rerun coadd:coaddPhot \\\n",
    "        --id filter=HSC-R tract=0 patch=0,0^0,1^0,2^1,0^1,1^1,2^2,0^2,1^2,2\n",
    "\n",
    "    detectCoaddSources.py $DATADIR --rerun coaddPhot \\\n",
    "        --id filter=HSC-I tract=0 patch=0,0^0,1^0,2^1,0^1,1^1,2^2,0^2,1^2,2\n",
    "\n",
    "    mergeCoaddDetections.py $DATADIR --rerun coaddPhot --id filter=HSC-R^HSC-I\n",
    "\n",
    "    measureCoaddSources.py $DATADIR --rerun coaddPhot --id filter=HSC-R\n",
    "    measureCoaddSources.py $DATADIR --rerun coaddPhot --id filter=HSC-I\n",
    "\n",
    "    mergeCoaddMeasurements.py $DATADIR --rerun coaddPhot --id filter=HSC-R^HSC-I\n",
    "\n",
    "    forcedPhotCoadd.py $DATADIR --rerun coaddPhot:coaddForcedPhot --id filter=HSC-R\n",
    "    forcedPhotCoadd.py $DATADIR --rerun coaddForcedPhot --id filter=HSC-I\n",
    "\n",
    "    forcedPhotCcd.py $DATADIR --rerun coaddPhot:ccdForcedPhot --id filter=HSC-R \\ \n",
    "    --clobber-config --configfile=/project/shared/data/ci_hsc/forcedPhotCcdConfig.py &> ccd_r.txt\n",
    "    \n",
    "    forcedPhotCcd.py $DATADIR --rerun ccdForcedPhot --id filter=HSC-I \\ \n",
    "    --clobber-config --configfile=/project/shared/data/ci_hsc/forcedPhotCcdConfig.py &> ccd_i.txt\n",
    "\n",
    "The first pair of commands does source detection on the coadds in each band. \n",
    "\n",
    "The source catalogs are then merged so that we can measure photometry for a consistent table of sources across filters. \n",
    "\n",
    "`measureCoaddSources.py` does deblending with this complete catalog and measures regular photometry.\n",
    "\n",
    "We run `mergeCoaddMeasurements.py` to write a table that identifies the reference filter that has the best position measurement for each source in the tables you created with `measureCoaddSources.py`\n",
    "\n",
    "These accurate positions are used for forced photometry with `forcedPhotCoadd.py` as well as `forcedPhotCcd.py`"
   ]
  },
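  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch (assuming the detection step above has been run), we could count the detection footprints on one patch by reading the `deepCoadd_det` dataset from the `coaddPhot` rerun:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Count the detections made by detectCoaddSources.py on one coadd patch.\n",
    "butler_det = dafPersist.Butler(inputs=DATADIR + 'rerun/coaddPhot/')\n",
    "detections = butler_det.get('deepCoadd_det', {'filter': 'HSC-R', 'tract': 0, 'patch': '1,1'})\n",
    "print('{} detections in HSC-R, tract 0, patch 1,1'.format(len(detections)))"
   ]
  },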
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Part 6: Multi-band catalog analysis\n",
    "https://pipelines.lsst.io/getting-started/multiband-analysis.html\n",
    "\n",
    "We now turn to making plots of the photometry we've done. To start, we access the sources identified from the coadd images."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "butler_coadd = dafPersist.Butler(inputs=DATADIR + 'rerun/coaddForcedPhot/')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Grab their measured forced photometry."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# We pass a datasetRefOrType and a DataId (dict) to the butler\n",
    "# datasetRefOrType : deepCoadd_forced_src\n",
    "# see all options at\n",
    "# /opt/lsst/software/stack/stack/miniconda3-4.3.21-10a4fa6/Linux64/obs_subaru/16.0+1/python/lsst/obs/hsc\n",
    "\n",
    "rSources = butler_coadd.get('deepCoadd_forced_src', {'filter': 'HSC-R', 'tract': 0, 'patch': '1,1'})\n",
    "iSources = butler_coadd.get('deepCoadd_forced_src', {'filter': 'HSC-I', 'tract': 0, 'patch': '1,1'})\n",
    "print('{} sources with forced photometry measured from coadds'.format(len(rSources)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Discard sources with negative fluxes and convert fluxes to magnitudes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rCoaddCalib = butler_coadd.get('deepCoadd_calexp_calib',  {'filter': 'HSC-R', 'tract': 0, 'patch': '1,1'})\n",
    "iCoaddCalib = butler_coadd.get('deepCoadd_calexp_calib',  {'filter': 'HSC-I', 'tract': 0, 'patch': '1,1'})\n",
    "\n",
    "rCoaddCalib.setThrowOnNegativeFlux(False)\n",
    "iCoaddCalib.setThrowOnNegativeFlux(False)\n",
    "\n",
    "rMags_coadd = rCoaddCalib.getMagnitude(rSources['base_PsfFlux_flux'])\n",
    "iMags_coadd = iCoaddCalib.getMagnitude(iSources['base_PsfFlux_flux'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Make selection from catalog for stars only."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "deblended = rSources['deblend_nChild'] == 0\n",
    "\n",
    "refTable = butler_coadd.get('deepCoadd_ref', {'filter': 'HSC-R^HSC-I', 'tract': 0, 'patch': '1,1'})\n",
    "inInnerRegions = refTable['detect_isPatchInner'] & refTable['detect_isTractInner'] # define inner regions\n",
    "isSkyObject = refTable['merge_peak_sky'] # reject sky objects\n",
    "isPrimary = refTable['detect_isPrimary']\n",
    "\n",
    "isStellar = iSources['base_ClassificationExtendedness_value'] < 1.\n",
    "isGoodFlux = ~iSources['base_PsfFlux_flag']\n",
    "selected = isPrimary & isStellar & isGoodFlux"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Make color-magnitude diagram."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.style.use('seaborn-notebook')\n",
    "plt.figure(1, figsize=(4, 4), dpi=140)\n",
    "plt.title('Coadd Forced Photometry (Stars)')\n",
    "plt.scatter(rMags_coadd[selected] - iMags_coadd[selected],\n",
    "            iMags_coadd[selected],\n",
    "            edgecolors='None', s=2, c='k')\n",
    "\n",
    "plt.xlim(-0.5, 3)\n",
    "plt.ylim(25, 14)\n",
    "plt.xlabel('$r-i$')\n",
    "plt.ylabel('$i$')\n",
    "plt.subplots_adjust(left=0.125, bottom=0.1)\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Instantiate Butler with forced photometry measured for individual exposures"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "butler_ccd = dafPersist.Butler(inputs=DATADIR + 'rerun/ccdForcedPhot/')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In order to associate individual visits with the MJD of the exposure, we go back to the raw images stored in the Butler repository. This may be replaceable with cleaner code that takes advantage of some Butler feature that accomplishes the same goal.\n",
    "\n",
    "Note: this should be improved"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# get visitInfo (for exposure time, obs Date, coords, info on observatory)\n",
    "# also look into butler registry butler.query_metadata()\n",
    "\n",
    "# associate each visit ID with an MJD\n",
    "# store in lookup hashtable\n",
    "visit_to_mjd = {}   \n",
    "\n",
    "raw_files = glob.glob(CI_HSC + 'raw/HSCA*.fits')\n",
    "\n",
    "for infile in raw_files:\n",
    "    visit_id = infile[len(CI_HSC) + len(\"raw/HSCA\"):-7]\n",
    "    hdulist = fitsio.open(infile)\n",
    "    try:\n",
    "        mjd = hdulist[1].header['MJD']\n",
    "    except:\n",
    "        mjd = hdulist[0].header['MJD']\n",
    "    visit_to_mjd[visit_id] = mjd"
   ]
  },
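  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A possibly cleaner alternative (an untested sketch, quoted out here) would be to ask the Butler registry for visit metadata instead of re-reading the raw headers. Note that the registry exposes `dateObs` and `taiObs` rather than the MJD directly:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "# Sketch: list visit metadata from the Butler registry.\n",
    "# queryMetadata returns a tuple of the requested keys for each matching data ID.\n",
    "for visit, dateObs, taiObs in butler_ccd.queryMetadata('raw', ['visit', 'dateObs', 'taiObs']):\n",
    "    print('{} {} {}'.format(visit, dateObs, taiObs))\n",
    "\"\"\""
   ]
  },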
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Doing forced photometry on individual exposures saves the source tables in different files. Here we query the Butler for all the data and store the tables together. This may be replaceable with cleaner code that takes advantage of some Butler feature that accomplishes the same goal.\n",
    "\n",
    "Note: this should be improved"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "data_id_fields = [('filter', str), ('pointing', int), ('visit', int), \n",
    "                  ('ccd', int), ('field', str), ('dateObs', str), \n",
    "                  ('taiObs', str), ('expTime', float), ('tract', int)]\n",
    "\n",
    "i_tables = []\n",
    "r_tables = []\n",
    "\n",
    "for line in open(DATADIR + 'data_ids.txt'):\n",
    "    fields = line.split(\",\")\n",
    "    \n",
    "    data_id_dict = {data_id_fields[i][0] : data_id_fields[i][1](fields[i].split(':')[1]) for i in range(len(fields))}\n",
    "    print(data_id_dict)\n",
    "    \n",
    "    sources = butler_ccd.get('forced_src', data_id_dict)\n",
    "    source_table = sources.asAstropy().to_pandas()\n",
    "    source_table['visit'] = fields[2].split(':')[1]\n",
    "    source_table['mjd'] = [visit_to_mjd[key] if key in visit_to_mjd else 56598.2 for key in source_table['visit'] ] # TODO: this is problematic. fix this\n",
    "\n",
    "    if fields[0] == 'filter:HSC-R':\n",
    "        r_tables.append(source_table)\n",
    "    elif fields[0] == 'filter:HSC-I':\n",
    "        i_tables.append(source_table)\n",
    "    else:\n",
    "        print('Failed to read filter')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.style.use('seaborn-notebook')\n",
    "fig, axarr = plt.subplots(1, 2, figsize=(8, 4), dpi=140)\n",
    "\n",
    "fig.suptitle('Forced Photometry')\n",
    "\n",
    "axarr[0].set_title('Coadd (Stars Only)')\n",
    "axarr[0].scatter(rMags_coadd[selected] - iMags_coadd[selected],\n",
    "                 iMags_coadd[selected],\n",
    "                 edgecolors='None', s=2, c='k')\n",
    "\n",
    "axarr[0].set_xlim(-0.5, 3)\n",
    "axarr[0].set_ylim(25, 14)\n",
    "axarr[0].set_xlabel('$r-i$')\n",
    "axarr[0].set_ylabel('$i$')\n",
    "\n",
    "# datasetRefOrType : forced_src\n",
    "# see all options at\n",
    "# /opt/lsst/software/stack/stack/miniconda3-4.3.21-10a4fa6/Linux64/obs_subaru/16.0+1/python/lsst/obs/hsc\n",
    "iDataId = {'filter': 'HSC-I', 'pointing': 671, 'visit': 903986, 'ccd': 16, 'field': 'STRIPE82L', 'dateObs': '2013-11-02', \n",
    "           'taiObs': '2013-11-02', 'expTime': 30.0, 'tract': 0, 'patch' : '1,1'}\n",
    "rDataId = {'filter': 'HSC-R', 'pointing': 533, 'visit': 903334, 'ccd': 16, 'field': 'STRIPE82L', 'dateObs': '2013-06-17', \n",
    "          'taiObs': '2013-06-17', 'expTime': 30.0, 'tract': 0, 'patch' : '1,1'}\n",
    "\n",
    "iSources = butler_ccd.get('forced_src', iDataId)\n",
    "rSources = butler_ccd.get('forced_src', rDataId)\n",
    "\n",
    "iCcdCalib = butler_ccd.get('calexp_calib', iDataId)\n",
    "rCcdCalib = butler_ccd.get('calexp_calib', rDataId)\n",
    "\n",
    "rMags_ccd = rCcdCalib.getMagnitude(rSources['base_PsfFlux_flux'])\n",
    "iMags_ccd = iCcdCalib.getMagnitude(iSources['base_PsfFlux_flux'])\n",
    "\n",
    "axarr[1].set_title('Single Exposure (All Sources)')\n",
    "plt.scatter(rMags_ccd - iMags_ccd, iMags_ccd, edgecolors='None', s=2, c='k')\n",
    "axarr[1].set_xlim(-0.5, 3)\n",
    "axarr[1].set_ylim(25, 14)\n",
    "axarr[1].set_xlabel('$r-i$')\n",
    "axarr[1].set_ylabel('$i$')\n",
    "plt.subplots_adjust(left=0.125, bottom=0.1)\n",
    "\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we concatenate the concatenate the measured sources into two tables (one for each filter). Then group by object ID to draw a five epoch light curve."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "iSources = pd.concat(i_tables)    \n",
    "rSources = pd.concat(r_tables)\n",
    "\n",
    "iGrouped = iSources.groupby('objectId')\n",
    "rGrouped = rSources.groupby('objectId')\n",
    "\n",
    "print('{} i band objects, {} measurements'.format(len(np.unique(iSources['objectId'])), len(iSources['objectId'])))\n",
    "print('{} r band objects, {} measurements'.format(len(np.unique(rSources['objectId'])), len(rSources['objectId'])))\n",
    "\n",
    "objids = [name for name, group in iGrouped if len(group) == 5]\n",
    "print('{} objects with 5 epocs'.format(len(objids)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Choose an object to draw a light curve."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "obj = objids[1]\n",
    "print('e.g. objectId:', obj, '(showing select rows from table)')\n",
    "iSources[iSources['objectId'] == obj][['id','coord_ra','coord_dec','objectId','base_PsfFlux_flux','base_PsfFlux_fluxSigma', 'visit','mjd']]"
   ]
  },
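  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The magnitude uncertainty used below follows from propagating the flux error through $m = -2.5\\log_{10} F + \\mathrm{const}$, which gives $\\sigma_m = \\frac{2.5}{\\ln 10}\\,\\frac{\\sigma_F}{F} \\approx 1.09\\,\\frac{\\sigma_F}{F}$:"
   ]
  },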
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# http://slittlefair.staff.shef.ac.uk/teaching/phy217/lectures/stats/L18/index.html\n",
    "yerr = 1.09 * iSources[iSources['objectId'] == obj]['base_PsfFlux_fluxSigma'].values / iSources[iSources['objectId'] == obj]['base_PsfFlux_flux'].values\n",
    "yerr"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.style.use('seaborn-notebook')\n",
    "plt.figure(3, figsize=(4, 4), dpi=140)\n",
    "plt.title('objectId: {}'.format(obj))\n",
    "plt.errorbar(iSources[iSources['objectId'] == obj]['mjd'].values, \n",
    "            iCcdCalib.getMagnitude(iSources[iSources['objectId'] == obj]['base_PsfFlux_flux'].values),\n",
    "            yerr=yerr,\n",
    "            markersize=6, color='k', fmt='.')\n",
    "\n",
    "plt.ylabel('$i$' + ' mag')\n",
    "plt.xlabel('MJD')\n",
    "\n",
    "plt.subplots_adjust(left=0.125, bottom=0.1)\n",
    "ax = plt.gca()\n",
    "ax.ticklabel_format(useOffset=56598, style='plain', axis='x', useMathText=True)\n",
    "ax.invert_yaxis()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Summary\n",
    "\n",
    "As the plots above show, we have completed an end-to-end processing of the `ci_hsc` test dataset, following the steps in the http://pipelines.lsst.io \"getting started\" tutorial. We saw how the command line tasks take care of all teh book-keeping, and how they are configured and run. "
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 2",
   "language": "python",
   "name": "python2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}