{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Accessing DC2 data in PostgreSQL at NERSC" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Owner: **Joanne Bogart [@jrbogart](https://github.com/LSSTDESC/DC2-analysis/issues/new?body=@jrbogart)** \n", "Last Verified to Run: \n", "\n", "This notebook is an introduction to use of the PostgreSQL database at NERSC. Currently the only fully supported datasets are the object catalogs for Run1.2i and Run1.2p v4. Since object catalogs as well as other kinds of catalogs are also available via GCR one might question the need for another form of access. The justification (for those applications only using object catalogs) is performance. Typical queries such as the one labeled `q5` below run significantly faster. Ingest also tends to be faster. The Run1.2p v4 data was available via PostgreSQL within a day of the completion of stack processing and transfer to NERSC.\n", "\n", "__Learning objectives__:\n", "\n", "After going through this notebook, you should be able to:\n", " 1. Connect to the DESC DC2 PostgreSQL database at NERSC.\n", " 2. Find out what tables are in the database, what their constituent columns are, and how they relate to DPDD and native object catalog quantities.\n", " 3. Make queries, selecting columns of interest subject to typical cuts. \n", " 4. Make use of standard tools, such as a Pandas, for plotting or other analysis of query results.\n", "\n", "__Logistics__: This notebook is intended to be run through the JupyterHub NERSC interface available here: https://jupyter-dev.nersc.gov. To setup your NERSC environment, please follow the instructions available here: https://confluence.slac.stanford.edu/display/LSSTDESC/Using+Jupyter-dev+at+NERSC\n", "\n", "### Prerequisites\n", "* A file ~/.pgpass containing a line like this:\n", "\n", "`nerscdb03.nersc.gov:54432:desc_dc2_drp:desc_dc2_drp_user:`_password_\n", "\n", "This line allows you to use the desc_dc2_drp_user account, which has *SELECT* privileges on the database, without entering a password in plain text. There is a separate account for adding to or modifying the database. .pgpass must be protected so that only owner may read and write it.\n", " \n", "You can obtain the file by running the script `/global/common/software/lsst/dbaccess/postgres_reader.sh`. It will copy a suitable file to your home directory and set permissions. \n", "\n", "If you already have a `.pgpass` file in your home directory the script will stop without doing anything to avoid clobbering your file. In that case, see the file `reader.pgpass` in the same directory. You can merge it into your `.pgpass` file by hand. \n", "\n", "* Access to the psycopg2 package which provides a Python interface to PostgreSQL. The recommended way to achieve this is to use the desc-python kernel on jupyter-dev.\n", " \n", "This notebook uses psycopg2 directly for queries. It is also possible to use sqlalchemy but you will still need a PostgreSQL driver. 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import psycopg2\n", "\n", "import numpy as np\n", "\n", "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "\n", "import pandas as pd" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Make the database connection." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dbname = 'desc_dc2_drp'\n", "dbuser = 'desc_dc2_drp_user'\n", "dbhost = 'nerscdb03.nersc.gov'\n", "dbconfig = {'dbname' : dbname, 'user' : dbuser, 'host' : dbhost}\n", "dbconn = psycopg2.connect(**dbconfig)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "schema = 'run12i'   # change value to 'run12p_v4' for the Run 1.2p catalog" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Tables for the Run 1.2i data, as well as a view making DPDD quantities more easily accessible, are in the schema (which acts like a namespace) `run12i`. The value of `schema` above will change for other datasets. \n", "\n", "There is a special system schema, **information_schema**, which contains tables describing the structure of user tables. Of these, **information_schema.columns** is the most likely to be useful. The following lists all tables and views belonging to the schema run12i. (I will use the convention of writing SQL keywords in all caps in queries. It's not necessary; the SQL interpreter ignores case.)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "q1 = \"SELECT DISTINCT table_name FROM information_schema.columns WHERE table_schema='{schema}' ORDER BY table_name\".format(**locals())\n", "with dbconn.cursor() as cursor:\n", "    # Could have several queries interspersed with other code in this block\n", "    cursor.execute(q1)\n", "    for record in cursor:\n", "        print(record[0])" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**\\_temp:forced\\_patch** is an artifact of the ingest process and is of no interest here. All the remaining entries, with the exception of **dpdd**, are tables. Each table has rows of data, one per object in the catalog. The rows consist of \"native quantities\" as produced by running the DM stack on the simulated data. **dpdd** is a _view_* making the quantities identified in [GCRCatalogs/SCHEMA.md](https://github.com/LSSTDESC/gcr-catalogs/blob/master/GCRCatalogs/SCHEMA.md#schema-for-dc2-coadd-catalogs) available. Information is broken across several tables because there are too many columns for a single table. All tables have a field ```object_id```. In the **dpdd** view it's called ```objectId``` because that's the official name for it. The code below lists all quantities in the **dpdd** view. Note that PostgreSQL folds unquoted identifiers to lower case, so all quantities appear in lower case only.\n", "\n", "*A _view_ definition consists of references to quantities stored in the tables, or computable from them; the view has no data of its own. The view name is used in queries just like a table name." ] },
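{ "cell_type": "markdown", "metadata": {}, "source": [ "First, though, a quick check that **dpdd** really is a view: **information_schema.tables** has a `table_type` column whose value is `BASE TABLE` for an ordinary table and `VIEW` for a view. A minimal sketch:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Distinguish tables from views via the table_type column\n", "q_types = \"SELECT table_name, table_type FROM information_schema.tables WHERE table_schema='{schema}' ORDER BY table_name\".format(**locals())\n", "with dbconn.cursor() as cursor:\n", "    cursor.execute(q_types)\n", "    for name, kind in cursor:\n", "        print('{0!s:30} {1!s}'.format(name, kind))" ] },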
They are:\\n\".format(len(records), tbl))\n", " print(\"Name Data Type\")\n", " for record in records:\n", " print(\"{0!s:55} {1!s:20}\".format(record[0], record[1]) )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is a similar query for the **position** table." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tbl = 'position'\n", "q2_pos = \"SELECT column_name, data_type FROM information_schema.columns WHERE table_schema='{schema}' AND table_name='{tbl}'\".format(**locals())\n", "with dbconn.cursor() as cursor:\n", " cursor.execute(q2_pos)\n", " records = cursor.fetchall()\n", " print(\"There are {} columns in table {}. They are:\\n\".format(len(records), tbl))\n", " print(\"Name Data Type\")\n", " for record in records:\n", " print(\"{0!s:55} {1!s:20}\".format(record[0], record[1]) )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is a query which counts up objects per tract and stores the results (queries return a list of tuples) in a pandas DataFrame. It makes use of a user-defined function (UDF*) ```tract_from_object_id```, which is by far the fastest way to determine the tract. \n", "\n", "*The UDF `tract_from_object_id` is one of several which minimize query time by making optimal use of the structure of the database. See the second tutorial in this series for a discussion of some of the others. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "q3 = \"SELECT tract_from_object_id(object_id), COUNT(object_id) FROM {schema}.position WHERE detect_isprimary GROUP BY tract_from_object_id(object_id)\".format(**locals())\n", "with dbconn.cursor() as cursor:\n", " %time cursor.execute(q3)\n", " df = pd.DataFrame(cursor.fetchall(), columns=['tract', 'count'])\n", " print(df)\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is the same query, but made on the dpdd view rather than the position table. There is no need to specify \"is primary\" because the dpdd view already has this constraint." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "q4 = \"SELECT tract_from_object_id(objectid), COUNT(objectid) FROM {schema}.dpdd GROUP BY tract_from_object_id(objectid)\".format(**locals())\n", "with dbconn.cursor() as cursor:\n", " cursor.execute(q4)\n", " df = pd.DataFrame(cursor.fetchall(), columns=['tract', 'count'])\n", " print(df)\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following can be compared with a similar query in the GCR Intro notebook." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "q5 = \"SELECT ra,dec FROM {schema}.dpdd\".format(**locals())\n", "with dbconn.cursor() as cursor:\n", " %time cursor.execute(q5)\n", " %time records = cursor.fetchall()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Color-color\n", "Techniques are adapted from the notebook [object_pandas_stellar_locus](https://github.com/LSSTDESC/DC2-analysis/blob/master/tutorials/object_pandas_stellar_locus.ipynb).\n", "#### Using cuts\n", "Put some typical cuts in a WHERE clause: select `clean` objects (no flagged pixels; not skipped by deblender) for which signal to noise in bands of interest is acceptable.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "global_cuts = 'clean '\n", "t = None\n", "\n", "min_SNR = 25 \n", "max_err = 1/min_SNR\n", "band_cuts = ' (magerr_g < {}) AND (magerr_i < {}) AND (magerr_r < {}) '.format(max_err,max_err,max_err)\n", "\n", "where = ' WHERE ' + global_cuts + ' AND ' + band_cuts \n", "q6 = \"SELECT mag_g,mag_r,mag_i FROM {schema}.dpdd \".format(**locals()) + where\n", "print(q6)\n", "records = []\n", "with dbconn.cursor() as cursor:\n", " %time cursor.execute(q6)\n", " records = cursor.fetchall()\n", " nObj = len(records)\n", " \n", "df = pd.DataFrame(records, columns=['mag_g', 'mag_r', 'mag_i'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Plotting" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_stellar_locus_davenport(color1='gmr', color2='rmi',\n", " datafile='../tutorials/assets/Davenport_2014_MNRAS_440_3430_table1.txt'):\n", " #datafile='assets/Davenport_2014_MNRAS_440_3430_table1.txt'):\n", " data = pd.read_table(datafile, sep='\\s+', header=1)\n", " return data[color1], data[color2]\n", " \n", " \n", "def plot_stellar_locus(color1='gmr', color2='rmi',\n", " color='red', linestyle='--', linewidth=2.5):\n", " model_gmr, model_rmi = get_stellar_locus_davenport(color1, color2)\n", " plot_kwargs = {'linestyle': linestyle, 'linewidth': linewidth, 'color': color,\n", " 'scalex': False, 'scaley': False}\n", " plt.plot(model_gmr, model_rmi, **plot_kwargs)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def plot_color_color(z, color1, color2, range1=(-1, +2), range2=(-1, +2), bins=31, title=None):\n", " \"\"\"Plot a color-color diagram. Overlay stellar locus. 
\"\"\"\n", " band1, band2 = color1[0], color1[-1]\n", " band3, band4 = color2[0], color2[-1]\n", " H, xedges, yedges = np.histogram2d(\n", " z['mag_%s' % band1] - z['mag_%s' % band2],\n", " z['mag_%s' % band3] - z['mag_%s' % band4],\n", " range=(range1, range2), bins=bins)\n", " \n", " zi = H.T\n", " xi = (xedges[1:] + xedges[:-1])/2\n", " yi = (yedges[1:] + yedges[:-1])/2\n", "\n", " cmap = 'viridis_r'\n", " plt.figure(figsize=(8, 8))\n", " plt.pcolormesh(xi, yi, zi, cmap=cmap)\n", " plt.contour(xi, yi, zi)\n", " plt.xlabel('%s-%s' % (band1, band2))\n", " plt.ylabel('%s-%s' % (band3, band4))\n", " if not title == None:\n", " plt.suptitle(title, size='xx-large', y=0.92)\n", "\n", " plot_stellar_locus(color1, color2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_color_color(df, 'gmr', 'rmi')\n", "print('Using schema {}, cut on max err={}, found {} objects'.format(schema, max_err, nObj))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now make the same plot, but for Run 1.2p data. The query takes noticeably longer because the Run 1.2p catalog is about 5 times bigger than the one for Run 1.2i." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "schema = 'run12p_v4'\n", "\n", "global_cuts = 'clean '\n", "t = None\n", "\n", "min_SNR = 25 \n", "max_err = 1/min_SNR\n", "band_cuts = ' (magerr_g < {}) AND (magerr_i < {}) AND (magerr_r < {}) '.format(max_err,max_err,max_err)\n", "where = ' WHERE ' + global_cuts + ' AND ' + band_cuts \n", "q7 = \"SELECT mag_g,mag_r,mag_i FROM {schema}.dpdd \".format(**locals()) + where\n", "print(q7)\n", "records = []\n", "with dbconn.cursor() as cursor:\n", " %time cursor.execute(q7)\n", " records = cursor.fetchall()\n", " nObj = len(records)\n", " \n", "df = pd.DataFrame(records, columns=['mag_g', 'mag_r', 'mag_i'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_color_color(df, 'gmr', 'rmi', title=t)\n", "print(q5)\n", "print('Using schema {} , max err={}, found {} objects'.format(schema, max_err, nObj))" ] } ], "metadata": { "kernelspec": { "display_name": "desc-python", "language": "python", "name": "desc-python" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.6" } }, "nbformat": 4, "nbformat_minor": 2 }