{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# CosmoDC2: The Movie (feat. Apache Spark)\n", "\n", "
Kernel: **desc-pyspark**\n", "
Author: **LAL team**\n", "
Last Verified to Run: **2018-12-20**\n", " \n", "The goal of this notebook is to have a first 3D look at the latest DC2 extragalactic catalog (v1.1.4) using Apache Spark.\n", "Jupyter notebooks at NERSC are (currently) memory limited, so they are not necessarily the best-suited tool for this, but they can be used for local data visualization.\n", "It is also an opportunity to help people learn Apache Spark while enjoying the beauty of 3D displays.\n", "\n", "Finally, it can be seen as the making-of for the future award-winning CosmoDC2 movie!\n", "\n", "**Logistics:** \n", "\n", "This notebook is intended to be run through the JupyterHub NERSC interface with the desc-pyspark kernel. The kernel is automatically installed in your environment when you use the kernel setup script:\n", "\n", "```bash\n", "source /global/common/software/lsst/common/miniconda/kernels/setup.sh\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Spark in a nutshell\n", "### Why use (or even test) Spark?\n", "\n", "Spark is an industry-standard solution for processing large amounts of data on several nodes, i.e. **in parallel**, without ever writing a single line of MPI! And it is ***simple***. \n", "\n", "Even better, it lets you analyze your data **interactively** (in cache) as if you had a huge amount of RAM available just for you. \n", "\n", "You don't have to worry about complicated **optimisation**: it is performed automatically for you (and probably better than what you would achieve with sweat and tears).\n", "\n", "And yes, it is available as a set of Python functions, and you can even install it on your laptop to play with. Even better: you can develop on your laptop and your analysis will (most certainly) **scale** when you apply it at NERSC to the full dataset.\n", "\n", "By large we mean really large (typically at the terabyte scale). But before we reach that amount of data with LSST, why not start learning it now?\n", "\n", "\n", "### Useful documentation\n", "- If you want to understand better how this magic happens, see https://arxiv.org/abs/1807.03078 and its associated notebook https://github.com/astrolabsoftware/1807.03078/blob/master/Spark4Physicists.ipynb\n", "- The starting point is https://spark.apache.org/docs/2.3.1/ \n", "- Since we will essentially use the Python Spark SQL capabilities here, focus on\nhttps://spark.apache.org/docs/latest/api/python/pyspark.sql.html\n", "- In particular, have a look at the set of pyspark.sql functions:\nhttps://spark.apache.org/docs/latest/api/python/pyspark.sql.html#module-pyspark.sql.functions\n", "\n", "Now it is time to start: let's import some base packages." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from time import time\n", "\n", "# For plotting\n", "%matplotlib inline\n", "import matplotlib\n", "import matplotlib.pyplot as plt\n", "from mpl_toolkits.mplot3d import Axes3D\n", "\n", "# Spark imports\n", "from pyspark.sql import SparkSession\n", "from pyspark.sql.types import IntegerType\n", "from pyspark.sql import functions as F" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Reading parquet data\n", "\n", "### Initialisation\n", "\n", "Let's first initialise our Apache Spark session and set up a simple timer to benchmark performance."
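, "\n", "In case you want to tune the session, options can be passed to the builder before calling `getOrCreate()`; note that if a Spark session already exists, `getOrCreate()` simply returns it rather than creating a new one. The snippet below is only an illustrative sketch: the application name and the shuffle-partition count are assumptions, not values required by this notebook.\n", "\n", "```python\n", "from pyspark.sql import SparkSession\n", "\n", "# Hypothetical tuning: the name and the partition count are illustrative assumptions\n", "spark = (SparkSession.builder\n", "         .appName(\"CosmoDC2-3D\")\n", "         .config(\"spark.sql.shuffle.partitions\", 64)\n", "         .getOrCreate())\n", "```"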
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Initialise our Spark session\n", "spark = SparkSession.builder.getOrCreate()\n", "print(\"spark session started\")\n", "\n", "#usefull class to benchmark\n", "class Timer:\n", " \"\"\" A simple class for printing elapsed time (s) since last call\n", " \"\"\"\n", " def __init__(self):\n", " self.t0 = time()\n", " \n", " def start(self):\n", " self.t0 = time()\n", " \n", " def split(self):\n", " t1 = time()\n", " print(\"{:2.1f}s\".format(t1 - self.t0))\n", "\n", "timer = Timer()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Load the extra-galactic catalog\n", "read a file from a format called \"parquet\" (produced by the Data Access Task Force)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "timer.start()\n", "\n", "# Path to cosmoDC2 (parquet format)\n", "filename = \"/global/cscratch1/sd/plaszczy/CosmoDC2/xyz_v1.1.4.parquet\"\n", "\n", "# Spark DataFrame\n", "df_all = spark.read.parquet(filename)\n", "\n", "# Le's inspect the schema of the data\n", "df_all.printSchema()\n", "\n", "timer.split()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Fast isn't it? \n", "but be carefull there is a trick there... \n", "\n", "Spark uses **lazy evaluation** when reading data so that (at this level) nothing is actually loaded in memory: only headers are read. Real loading appears later, for instance when data are put in cache and an action is triggered.\n", "For more see e.g. https://arxiv.org/abs/1807.03078\n", "\n", "\n", "\n", "Note: Apache Spark has no efficient PySpark connector to read data in hdf5 file. Therefore we first converted the cosmoDC2 data set into parquet (similar to what DPDD tools offer). For the purpose of this notebook, we only convert a few columns of interest. The file is accessible at NERSC for DESC members." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Reject faint galaxies and put relevant quantities in cache\n", "\n", "for a nicer look we reject faint galaxies that do have negative `halo_id` and keep only spatial positions. Then we load and cache in memory some relavant quantities. Finally we count how many galaxies remain. Note that only the final _action_ (`count`) triggers the whole computation. The order of _transformations_ (`select`, `filter`, ...) does not matter as Spark is produces an optimized version of the whole chain prior to perform the action. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "timer.start()\n", "\n", "# Create a new dataframe selecting only interesting columns\n", "# and keep only entries with positive halo_id.\n", "cols = [\"halo_id\", \"position_x\", \"position_y\", \"position_z\", \"redshift\"]\n", "cond = \"halo_id > 0\"\n", "df = df_all.select(cols).filter(cond)\n", "\n", "# We do not need halo_id anymore, so le's drop it\n", "df = df.drop(\"halo_id\")\n", "\n", "# Put the data in cache for speeding up later use \n", "# and trigger an action (count). \n", "# This is the longer step but must be executed only once!\n", "N = df.cache().count()\n", "\n", "print(\"Number of galaxies={} M\".format(N / 1e6))\n", "timer.split()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Get some basic statistics\n", "\n", "DataFrames have built-in tools to quickly inspect the data distribution. 
, "\n", "Here we just print some summary statistics." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "timer.start()\n", "\n", "# describe() returns a DataFrame\n", "stat = df.describe()\n", "stat.show()\n", "\n", "timer.split()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Redshift histogram\n", "\n", "Let's look at the distribution of redshifts.\n", "This is slightly more technical (for details see https://arxiv.org/abs/1807.03078 ), but it is extremely efficient for large datasets and will directly scale when (much) more data become available, outperforming traditional tools.\n", "The idea for doing this efficiently (and in parallel) is to\n", "- add a column of bin numbers,\n", "- groupBy (i.e. index by) this bin number and count the number of elements in each group (that's histogramming, right?),\n", "- drop the bin number and add the bin-center locations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "matplotlib.rcParams.update({'font.size': 17})\n", "timer.start()\n", "\n", "# Get the min/max redshift values\n", "m = df.select(F.min(\"redshift\"), F.max(\"redshift\")).first()\n", "zmin, zmax = m[0], m[1]\n", "\n", "# Number of bins and bin width for our histogram\n", "Nbins = 50\n", "dz = (zmax - zmin) / Nbins\n", "\n", "# Add a column of bin numbers\n", "zbin = df.select(\n", "    df[\"redshift\"],\n", "    ((df[\"redshift\"] - zmin - dz / 2) / dz).cast(IntegerType()).alias('bin'))\n", "\n", "# GroupBy this column + count how many entries fall in each group (= histogram)\n", "h = zbin.groupBy(\"bin\").count().orderBy(F.asc(\"bin\"))\n", "\n", "# Add a column of bin-center locations + drop the bin number\n", "h = h.select(\"bin\", (zmin + dz / 2 + h['bin'] * dz).alias('loc'), \"count\").drop(\"bin\")\n", "\n", "# The data is now reduced to a histogram, so we can\n", "# safely transfer it back to the driver, into the pandas world\n", "hp = h.toPandas()\n", "\n", "plt.figure(figsize=(10, 7))\n", "\n", "plt.bar(hp['loc'].values, hp['count'].values, dz, color='w', edgecolor='k')\n", "plt.xlabel(\"redshift\", fontsize=20)\n", "\n", "timer.split()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Although you do not see it explicitly, this is really **distributed** computing: you send the algorithm to the data (on each node) and recover the reduced values (the histogram).\n", "\n", "You are probably more used to working the other way around: you pull the data back into memory and run your algorithm on it.\n", "\n", "This can be emulated with the following piece of code, where the `collect` function fills the driver memory and one then calls the classical `hist` function:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "timer.start()\n", "z_values = [_[0] for _ in df.select('redshift').collect()]\n", "plt.hist(z_values, range=(min(z_values), max(z_values)), bins=Nbins)\n", "plt.xlabel('redshift')\n", "timer.split()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Like any comparison it is somewhat unfair, since the driver here uses a single thread, but it is also unfair to use Spark on such a small amount of data (42 million galaxies)! Still, you get the idea of distributed (painless) computing, which will prove its superiority for > 100 GB of data."
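, "\n", "For the record, PySpark also provides an RDD-level `histogram` helper that performs the same kind of distributed reduction on the executors. The sketch below is only an illustration, not part of the original analysis; it reuses the `df`, `Nbins` and `plt` objects defined above and lets Spark choose evenly spaced buckets between the minimum and maximum redshift.\n", "\n", "```python\n", "# Sketch: distributed histogram via the RDD API (reuses df, Nbins and plt from above)\n", "rdd = df.select('redshift').rdd.map(lambda row: row[0])\n", "edges, counts = rdd.histogram(Nbins)  # Nbins evenly spaced buckets\n", "widths = [hi - lo for lo, hi in zip(edges[:-1], edges[1:])]\n", "plt.bar(edges[:-1], counts, widths, align='edge', color='w', edgecolor='k')\n", "plt.xlabel('redshift')\n", "```"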
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## A first 3D look\n", "\n", "Since jupyter notebooks are not well suited to both intensive computation (CPU!) + intensive visualization (GPU!), we illustrate how it can be used from matplotlib but there are definitely more powerfull tools to do so (that we will discuss next).\n", "\n", "In the following we make a 30 Mpc zoom centers on the mean position in the dataset, where we begin to see the cosmic web." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "matplotlib.rcParams.update({'font.size': 17})\n", "\n", "# Size of the box to display\n", "Mpc = 30.\n", "\n", "# Get the mean positions from the previous stat DataFrame\n", "means = stat.collect()[1]\n", "m_x = float(means['position_x'])\n", "m_y = float(means['position_y'])\n", "m_z = float(means['position_z'])\n", "\n", "dfcut = df.filter( \n", " (F.abs(df.position_x - m_x) < Mpc) &\n", " (F.abs(df.position_y - m_y) < Mpc) &\n", " (F.abs(df.position_z - m_z) < Mpc))\n", "\n", "# Visualise it!\n", "p = dfcut.toPandas()\n", "\n", "fig = plt.figure(figsize=(10, 10))\n", "ax = fig.add_subplot(111, projection='3d')\n", "\n", "ax.scatter(p.position_x, p.position_y, p.position_z, s=1)\n", "\n", "ax.set_title('A long time ago in a galaxy far, \\n far away... (z $\\sim$ {:.2f})'.format(p.redshift.mean()))\n", "ax.set_xlabel(r'x [Mpc/h]', labelpad=20)\n", "ax.set_ylabel(r'y [Mpc/h]', labelpad=20)\n", "ax.set_zlabel(r'z [Mpc/h]', labelpad=20)\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### CosmoDC2 the movie!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By copying the data to the LAL cluster, using more sophisticated tools for vizualization and projecting data on our wall-of-screens we obtain the \n", "following MOVIE.\n", "Note we also colored data according to their redshift (with a colormap that is held as secret as coca-cola receipe)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.display import HTML\n", "\n", "# Display video from Youtube\n", "HTML('')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "desc-pyspark", "language": "python", "name": "desc-pyspark" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.6" } }, "nbformat": 4, "nbformat_minor": 2 }