{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Accessing DC2 forced source data in PostgreSQL at NERSC" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Owner: **Joanne Bogart [@jrbogart](https://github.com/LSSTDESC/DC2-analysis/issues/new?body=@jrbogart)** \n", "Last Verified to Run: **2020-08-03**\n", "\n", "This notebook demonstrates access to forced source data via the PostgreSQL database at NERSC. Currently the only forced source dataset available is the one for Run1.2p v4. Because of the size of forced source (not many columns but a lot of rows; over 680 million just for 1.2p v4) database access is likely to perform better than file access. It is also possible to efficiently correlate the information in the forced source dataset and the object catalog. A view has been provided so that the most useful quantities from the object catalog may be fetched easily along with forced source fields.\n", "\n", "__Learning objectives__:\n", "\n", "After going through this notebook, you should be able to:\n", " 1. Find out what Forced source information is available and query it.\n", " 2. Find out what information is kept per visit and query it.\n", " 3. Use the forcedsource view to get forced source information and most commonly needed fields from the object catalog and visit table associated with the forced source entries.\n", " 4. Make use of standard tools to, e.g., plot light curves\n", "\n", "__Logistics__: This notebook is intended to be run through the JupyterHub NERSC interface available here: https://jupyter.nersc.gov. 
To set up your NERSC environment, please follow the instructions available here: \n", "https://confluence.slac.stanford.edu/display/LSSTDESC/Using+Jupyter+at+NERSC\n", "### Prerequisites\n", "* You should work through the first PostgreSQL notebook, \"Accessing DC2 Data in PostgreSQL at NERSC\", before tackling this one.\n", " " ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import psycopg2\n", "import numpy as np\n", "%matplotlib inline \n", "import matplotlib.pyplot as plt\n", "import pandas as pd" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Make the database connection." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "dbname = 'desc_dc2_drp'\n", "dbuser = 'desc_dc2_drp_user'\n", "dbhost = 'nerscdb03.nersc.gov'\n", "dbconfig = {'dbname' : dbname, 'user' : dbuser, 'host' : dbhost}\n", "dbconn = psycopg2.connect(**dbconfig)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "schema = 'run12p_v4' # Currently (May 2019) the only dataset with forced source " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Display all tables and views belonging to the schema. 
Most of them are for the object catalog" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ccdvisit\n", "dpdd\n", "dpdd_forced\n", "dpdd_qserv\n", "dpdd_ref\n", "forced2\n", "forced3\n", "forced4\n", "forced5\n", "forcedsource\n", "forcedsourcenative\n", "forcedsource_qserv\n", "misc_ref\n", "position\n", "_temp:forced_bit\n", "_temp:forced_patch\n" ] } ], "source": [ "q1 = \"SELECT DISTINCT table_name FROM information_schema.columns WHERE table_schema='{schema}' ORDER BY table_name\".format(**locals())\n", "with dbconn.cursor() as cursor:\n", " # Could have several queries interspersed with other code in this block\n", " cursor.execute(q1)\n", " for record in cursor:\n", " print(record[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Where Forced Source data can be found\n", "\n", "**\\_temp:forced\\_patch** and **\\_temp:forced_bit** are artifacts of the ingest process and are of no interest here.\n", "\n", "forcedsourcenative has columns from the forced source data most likely to be of interest. These include objectid (identical in meaning to its use in the object catalog) and ccdvisitid. ccdvisitid uniquely identifies a row in the ccdvisit table and is computed from visit, raft and sensor ids. " ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "SELECT column_name, data_type FROM information_schema.columns WHERE table_schema='run12p_v4' AND table_name='forcedsourcenative' order by column_name \n", "There are 20 columns in table forcedsourcenative. 
They are:\n", "\n", "Name Data Type\n", "base_pixelflags_flag boolean \n", "base_pixelflags_flag_bad boolean \n", "base_pixelflags_flag_cr boolean \n", "base_pixelflags_flag_crcenter boolean \n", "base_pixelflags_flag_edge boolean \n", "base_pixelflags_flag_interpolated boolean \n", "base_pixelflags_flag_interpolatedcenter boolean \n", "base_pixelflags_flag_offimage boolean \n", "base_pixelflags_flag_saturated boolean \n", "base_pixelflags_flag_saturatedcenter boolean \n", "base_pixelflags_flag_suspect boolean \n", "base_pixelflags_flag_suspectcenter boolean \n", "base_psfflux_flag boolean \n", "base_psfflux_flag_badcentroid boolean \n", "base_psfflux_flag_edge boolean \n", "base_psfflux_flag_nogoodpixels boolean \n", "base_psfflux_instflux real \n", "base_psfflux_instfluxerr real \n", "ccdvisitid bigint \n", "objectid bigint \n" ] } ], "source": [ "tbl = 'forcedsourcenative'\n", "q2 = \"SELECT column_name, data_type FROM information_schema.columns WHERE table_schema='{schema}' AND table_name='{tbl}' order by column_name \".format(**locals())\n", "print(q2)\n", "with dbconn.cursor() as cursor:\n", " cursor.execute(q2)\n", " records = cursor.fetchall()\n", " print(\"There are {} columns in table {}. They are:\\n\".format(len(records), tbl))\n", " print(\"Name Data Type\")\n", " for record in records:\n", " print(\"{0!s:55} {1!s:20}\".format(record[0], record[1]) )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is a similar query for the **ccdvisit** table." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "There are 12 columns in table ccdvisit. 
They are:\n", "\n", "Name Data Type\n", "ccdvisitid bigint \n", "visitid integer \n", "ccdname character \n", "raftname character \n", "filtername character \n", "obsstart timestamp without time zone\n", "expmidpt double precision \n", "exptime double precision \n", "zeropoint real \n", "seeing real \n", "skybg real \n", "skynoise real \n" ] } ], "source": [ "tbl = 'ccdvisit'\n", "q2_pos = \"SELECT column_name, data_type FROM information_schema.columns WHERE table_schema='{schema}' AND table_name='{tbl}'\".format(**locals())\n", "with dbconn.cursor() as cursor:\n", " cursor.execute(q2_pos)\n", " records = cursor.fetchall()\n", " print(\"There are {} columns in table {}. They are:\\n\".format(len(records), tbl))\n", " print(\"Name Data Type\")\n", " for record in records:\n", " print(\"{0!s:55} {1!s:20}\".format(record[0], record[1]) )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Of these columns two (`expmidpt`, `exptime`) are, in this data set, always null. For many purposes `ccdname` and `raftname` are of no interest and `visitid` is encompassed by `ccdvisitid`. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is a query which finds all visits. The example only prints out the total number." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 1.9 ms, sys: 0 ns, total: 1.9 ms\n", "Wall time: 955 ms\n", "1991 visits found\n" ] } ], "source": [ "q3 = \"SELECT DISTINCT visitid FROM {schema}.ccdvisit\".format(**locals())\n", "with dbconn.cursor() as cursor:\n", " %time cursor.execute(q3)\n", " records = cursor.fetchall()\n", " print(\"{} visits found\".format(len(records)))\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The view `forcedsource` selects fields of interest from `forcedsourcenative`, `ccdvisit` and the object catalog. This query fetches all its fields." 
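Under the hood, a view like this joins its three sources on their shared keys. Here is a rough, hypothetical sketch of that kind of join (not the view's actual definition): it pulls raw fluxes from `forcedsourcenative` together with `filtername` and `obsstart` from `ccdvisit`, matched on `ccdvisitid`, for one object supplied as a bind parameter.

```python
# Sketch of the kind of join the forcedsource view encapsulates.
# Assumes the schema/dbconn setup from the cells above; the WHERE clause
# uses a bind parameter rather than a hard-coded objectid.
schema = 'run12p_v4'

q_join = (
    "SELECT fsn.objectid, fsn.base_psfflux_instflux, fsn.base_psfflux_instfluxerr, "
    "cv.filtername, cv.obsstart "
    "FROM {schema}.forcedsourcenative fsn "
    "JOIN {schema}.ccdvisit cv ON fsn.ccdvisitid = cv.ccdvisitid "
    "WHERE fsn.objectid = %s"
).format(schema=schema)
print(q_join)

# To execute it (requires the dbconn created earlier):
# with dbconn.cursor() as cursor:
#     cursor.execute(q_join, (some_objectid,))
#     records = cursor.fetchall()
```

Using the view instead means you get the same matched rows without writing the join yourself.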
] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "There are 47 columns in view forcedsource. They are:\n", "\n", "Name Data Type\n", "objectid bigint \n", "ccdvisitid bigint \n", "ra double precision \n", "dec double precision \n", "extendedness real \n", "blendedness real \n", "filtername character \n", "obsstart timestamp without time zone\n", "psflux_g real \n", "psflux_i real \n", "psflux_r real \n", "psflux_u real \n", "psflux_y real \n", "psflux_z real \n", "psflux_flag_g boolean \n", "psflux_flag_i boolean \n", "psflux_flag_r boolean \n", "psflux_flag_u boolean \n", "psflux_flag_y boolean \n", "psflux_flag_z boolean \n", "psflux real \n", "psflux_flag boolean \n", "psfluxerr_g real \n", "psfluxerr_i real \n", "psfluxerr_r real \n", "psfluxerr_u real \n", "psfluxerr_y real \n", "psfluxerr_z real \n", "mag_g real \n", "mag_i real \n", "mag_r real \n", "mag_u real \n", "mag_y real \n", "mag_z real \n", "magerr_g real \n", "magerr_i real \n", "magerr_r real \n", "magerr_u real \n", "magerr_y real \n", "magerr_z real \n", "psfluxerr real \n", "mag double precision \n", "magerr real \n", "good boolean \n", "forcedsourcevisit_good boolean \n", "clean boolean \n", "coord USER-DEFINED \n" ] } ], "source": [ "tbl = 'forcedsource'\n", "q4 = \"SELECT column_name, data_type FROM information_schema.columns WHERE table_schema='{schema}' AND table_name='{tbl}'\".format(**locals())\n", "with dbconn.cursor() as cursor:\n", " cursor.execute(q4)\n", " records = cursor.fetchall()\n", " print(\"There are {} columns in view {}. They are:\\n\".format(len(records), tbl))\n", " print(\"Name Data Type\")\n", " for record in records:\n", " print(\"{0!s:55} {1!s:20}\".format(record[0], record[1]) ) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Some explanation is in order. Most of these fields (e.g. 
`psflux_g`, `psflux_flag_g`, `psfluxerr_g`, `mag_g`, `magerr_g` and similar for the other bands) come from the corresponding object in the object catalog and have the same names they have in the object dpdd. `ra`, `dec` (as well as `coord`, which is a more convenient way to express location in some circumstances), `extendedness`, `blendedness`, `good`, and `clean` also come from the object catalog. (For information about dpdd quantities like `clean` and other fields mentioned above, see the [SCHEMA.md](https://github.com/LSSTDESC/gcr-catalogs/blob/master/GCRCatalogs/SCHEMA.md#schema-for-dc2-coadd-catalogs) file of the LSSTDESC/gcr-catalogs repository.)\n", "\n", "`filtername` and `obsstart` come from `ccdvisit`. The rest all come from `forcedsourcenative` one way or another. Some fields, such as `psflux`, come directly but have been renamed to match the names suggested for the forced source dpdd as defined in LSE-163. Others (e.g. `mag`, `magerr`) have been computed from one or more fields in `forcedsourcenative`. `forcedsourcevisit_good` is similar to `good` (meaning no flagged pixels) but uses flags from `forcedsourcenative` rather than the object catalog." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Light curves\n", "The first step is to find objects with plenty of visits. The query below takes 5 or 6 minutes to execute and is the only slow query in this notebook; the others all return essentially immediately. Then cut on various measures of goodness. For the remaining objects save objectid, ra, dec, extendedness and the g, r and i magnitudes. Since the query is slow, the function writes the results to a file. You can either read in such a file or recreate it yourself by uncommenting the call to the function,\n", "substituting something reasonable for `'some_path'`, and skipping the step which reads from a previously created file."
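One of the goodness cuts in the function below requires magnitude errors below 1/SNR for a minimum SNR of 25. That threshold comes from the approximate relation magerr ≈ 1/SNR; the exact coefficient is 2.5/ln 10 ≈ 1.086, so 1/SNR is a slightly stricter round-number stand-in. A minimal sketch of how that cut string is assembled, mirroring the function below:

```python
import math

# The cut approximates magerr ~ 1/SNR; the exact relation is
# magerr = (2.5 / ln 10) * (fluxerr / flux) ~ 1.086 / SNR.
min_SNR = 25
max_err = 1 / min_SNR                       # 0.04, the approximation used here
exact_err = 2.5 / math.log(10) / min_SNR    # ~0.0434, the exact value

band_cuts = ' (magerr_g < {max_err}) AND (magerr_i < {max_err}) AND (magerr_r < {max_err}) '.format(max_err=max_err)
print(band_cuts)
```

Raising `min_SNR` shrinks `max_err` and tightens the selection in all three bands at once.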
] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "def getGoodVisited(outpath, schema):\n", " '''\n", " Get and save information for good objects which show up in more than 400 visits\n", " \n", " Parameters\n", " ----------\n", " outpath : str Path to output csv file\n", " schema : str Database schema in which data are stored\n", " \n", " Returns\n", " -------\n", " Pandas data frame with certain properties for all objects making the cuts \n", " '''\n", " # First find visit count per object\n", " avisited_q = 'select objectid,count(objectid) as avisit_count from {}.forcedsourcenative ' \n", " avisited_q += 'group by (objectid) '\n", " avisited_q += 'order by avisit_count desc'\n", " avisited_qf = avisited_q.format(schema)\n", " print(avisited_qf) # this is the time-consuming query\n", " avisited_records = []\n", " with dbconn.cursor() as cursor:\n", " %time cursor.execute(avisited_qf)\n", " avisited_records = cursor.fetchall()\n", " avisited_df = pd.DataFrame(avisited_records, columns=['objectid', 'visit_count'])\n", " print(avisited_df.shape)\n", " \n", " # Keep the ones with more than 400 visits\n", " over400_df = avisited_df.query('visit_count > 400')\n", " print(over400_df.shape)\n", " \n", " # Now query for those objects in the set which also satisfy some cuts\n", " i_list = list(over400_df['objectid'])\n", " s_list = [str(i) for i in i_list]\n", " objectcut = ' AND objectid in (' + ','.join(s_list) + ')'\n", "\n", " global_cuts = 'clean '\n", "\n", " min_SNR = 25 \n", " max_err = 1/min_SNR\n", " band_cuts = ' (magerr_g < {max_err}) AND (magerr_i < {max_err}) AND (magerr_r < {max_err}) '.format(**locals())\n", " where = ' WHERE ' + global_cuts + ' AND ' + band_cuts \n", "\n", " goodobjects_q = \"SELECT objectid, extendedness, ra, dec, mag_i, mag_g, mag_r from {schema}.dpdd \".format(**locals()) + where + objectcut\n", " # Don't normally print out this query because objectcut can be a very long 
string.\n", " records = []\n", " with dbconn.cursor() as cursor:\n", " %time cursor.execute(goodobjects_q)\n", " records = cursor.fetchall()\n", " nObj = len(records)\n", " \n", " df = pd.DataFrame(records, columns=['objectid', 'extendedness', 'ra', 'dec', 'mag_i', 'mag_g', 'mag_r'])\n", " print(\"Total: \", nObj)\n", " df.to_csv(outpath) # pass the path directly so pandas closes the file itself\n", " return df\n" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "# recreate good objects table\n", "# df = getGoodVisited('some_path', schema)" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "# Read in good objects table\n", "retrieve_path = '../tutorials/assets/{}-over400visits.csv'.format(schema)\n", "df = pd.read_csv(retrieve_path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Get the data\n", "Query `forcedsource` for a single object and use the returned data to plot light curves. 
" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "# This object happens to be a star\n", "star_ix = 1045\n", "star_id = df['objectid'][star_ix]" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "\n", "def getlc(schema, objectid):\n", " q_template = 'select filtername as band,obsstart,mag,magerr from {schema}.forcedsource ' \n", " q_template += 'where objectid={objectid} and forcedsourcevisit_good and not psflux_flag order by filtername,obsstart'\n", " lc_q = q_template.format(**locals())\n", " print(lc_q)\n", " with dbconn.cursor() as cursor:\n", " %time cursor.execute(lc_q)\n", " records = cursor.fetchall()\n", " \n", " df = pd.DataFrame(records, \n", " columns=['filtername', 'obsstart', 'mag', 'magerr'])\n", " #print('Printing i-band data from getlc for object ', objectid)\n", " #iband = df[(df.filtername == 'i')]\n", " #for ival in list(iband['mag']): print(ival)\n", " \n", " return df" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "select filtername as band,obsstart,mag,magerr from run12p_v4.forcedsource where objectid=21326423885118366 and forcedsourcevisit_good and not psflux_flag order by filtername,obsstart\n", "CPU times: user 2.8 ms, sys: 0 ns, total: 2.8 ms\n", "Wall time: 6.58 s\n" ] } ], "source": [ "star_data = getlc(schema, star_id)" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "| | filtername | obsstart | mag | magerr |\n", "|---|---|---|---|---|\n", "| 0 | g | 2022-09-17 05:19:22.051 | 18.651415 | 0.003244 |\n", "| 1 | g | 2022-09-20 04:54:29.750 | 18.637620 | 0.002824 |\n", "| 2 | g | 2022-10-02 05:09:06.797 | 18.638224 | 0.002789 |\n", "| 3 | g | 2022-10-05 04:38:55.075 | 18.652247 | 0.003655 |\n", "| 4 | g | 2022-10-15 03:58:41.405 | 18.639867 | 0.003311 |\n", "| ... | ... | ... | ... | ... |\n", "| 415 | z | 2028-10-23 03:19:44.458 | 17.978046 | 0.004209 |\n", "| 416 | z | 2028-11-11 02:28:09.696 | 17.971891 | 0.004239 |\n", "| 417 | z | 2028-12-26 02:24:59.443 | 17.961123 | 0.004538 |\n", "| 418 | z | 2029-09-29 06:36:27.389 | 17.985021 | 0.004129 |\n", "| 419 | z | 2029-10-02 05:07:00.307 | 17.982122 | 0.004054 |\n", "\n", "420 rows × 4 columns"