{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Analysis of Beam Simulator Images and Brighter-fatter Correction\n", "<br>Owner(s): **Andrew Bradshaw** ([@andrewkbradshaw](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@andrewkbradshaw))\n", "<br>Last Verified to Run: **2021-04-01**\n", "<br>Verified Stack Release: **21.0.0**\n", "\n", "This notebook demonstrates the [brighter-fatter systematic error](https://arxiv.org/abs/1402.0725) on images of stars and galaxies illuminated on an ITL-3800C-002 CCD at the [UC Davis LSST beam simulator laboratory](https://arxiv.org/abs/1411.5667). Using a series of images at increasing exposure times, we demonstrate the broadening of image profiles on DM stack shape measurements, and a [possible correction method](https://arxiv.org/abs/1711.06273) which iteratively applies a kernel to restore electrons to the pixels from which they were deflected. To keep things simple, for now we skip most DM stack instrument signature removal (ISR) and work on a subset of images which are already processed arrays (500x500) of electrons.\n", "\n", "### Learning Objectives:\n", "\n", "After working through this tutorial you should be able to: \n", "1. Characterize and measure objects (stars/galaxies) in LSST beam simulator images\n", "2. Test the Brighter-Fatter kernel correction method on those images\n", "3. Build your own tests of stack ISR algorithms\n", "\n", "### Logistics\n", "This notebook is intended to be runnable on `lsst-lsp-stable.ncsa.illinois.edu` from a local git clone of https://github.com/LSSTScienceCollaborations/StackClub.\n", "\n", "## Set-up" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# What version of the Stack are we using?\n", "! echo $HOSTNAME\n", "! eups list -s | grep lsst_distrib" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%pwd" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "import matplotlib.pyplot as plt\n", "from matplotlib.colors import LogNorm\n", "from itertools import cycle\n", "from astropy.io import fits\n", "import time,glob,os\n", "\n", "\n", "# if running stack v16.0, silence a long matplotlib Agg warning with:\n", "#import warnings\n", "#warnings.filterwarnings(\"ignore\", category=UserWarning)\n", "\n", "%matplotlib inline\n", "\n", "# What version of the Stack am I using?\n", "! echo $HOSTNAME\n", "! eups list -s lsst_distrib\n", "\n", "# make a directory to write the catalogs\n", "username=os.environ.get('USERNAME')\n", "cat_dir='/home/'+username+'/DATA/beamsim/'\n", "if not os.path.exists(cat_dir):\n", " ! mkdir /home/$USER/DATA/beamsim/" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 1: Read in an image\n", "Cut-outs of beam simulator star/galaxy images have been placed in the shared data directory at `/project/shared/data/beamsim/bfcorr/`. We skip (for now) most of the instrument signature removal (ISR) steps because these are preprocessed images (bias subtracted, gain corrected). We instead start by reading in one of those `.fits` files and making an image plane `afwImage.ExposureF` as well as a variance plane (based upon the image), which is then ready for characterization and calibration in the following cells." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import lsst.afw.image as afwImage\n", "from lsst.ip.isr.isrFunctions import updateVariance\n", "\n", "# where the data lives, choosing one image to start\n", "imnum=0 # for this dataset, choose 0-19 as an example\n", "fitsglob='/project/shared/data/beamsim/bfcorr/*part.fits'\n", "fitsfilename = np.sort(glob.glob(fitsglob))[imnum] \n", "#fitsfilename ='/home/sarujin/testdata/ITL-3800C-002_spot_spot_400_20171108114719whole.fits'\n", "# Read in a single image to an afwImage.ImageF object\n", "image_array=afwImage.ImageF.readFits(fitsfilename)\n", "image = afwImage.ImageF(image_array)\n", "exposure = afwImage.ExposureF(image.getBBox())\n", "exposure.setImage(image)\n", "hdr=fits.getheader(fitsfilename) # the header has some useful info in it\n", "print(\"Read in \",fitsfilename.split('/')[-1])\n", "\n", "# Set the variance plane using the image plane via updateVariance function\n", "gain = 1.0 # because these images are already gain corrected\n", "readNoise = 10.0 # in electrons\n", "updateVariance(exposure.maskedImage, gain, readNoise)\n", "\n", "# Another way of setting variance and/or masks?\n", "#mask = afwImage.makeMaskFromArray(np.zeros((4000,4072)).astype('int32'))\n", "#variance = afwImage.makeImageFromArray((readNoise**2 + image_array.array())\n", "#masked_image = afwImage.MaskedImageF(image, mask, variance)\n", "#exposure = afwImage.ExposureF(masked_image)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now visualize the image and its electron distribution using matplotlib. Things to note: 1) the array is (purposefully) tilted with respect to the pixel grid, 2) most pixel values are at the background/sky level (a function of the mask opacity and illumination), but there is a pileup of counts around ~200k electrons indicating full well and saturation in some of the brightest pixels of the image" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(12,5)),plt.subplots_adjust(wspace=.3)\n", "plt.suptitle('Star/galaxy beam sim image and histogram \\n'+fitsfilename.split('/')[-1])\n", "\n", "plt.subplot(121)\n", "plt.imshow(exposure.image.array,vmax=1e3,origin='lower')\n", "plt.colorbar(label='electrons')\n", "\n", "plt.subplot(122)\n", "plt.hist(exposure.image.array.flatten(),bins=1000,histtype='step')\n", "plt.yscale('log')#,plt.xscale('log')\n", "plt.xlabel('Number of electrons in pixel'),plt.ylabel('Number of pixels')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 2: Perform image characterization and initial measurement\n", "We now perform a base-level characterization of the image using the stack. We set some configuration settings which are specific to our sestup which has a very small optical PSF, setting a PSF size and turning off some other aspects such as cosmic ray rejection because of this." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from lsst.pipe.tasks.characterizeImage import CharacterizeImageTask, CharacterizeImageConfig\n", "import lsst.meas.extensions.shapeHSM\n", "\n", "# first set a few configs that are specific to our beam simulator data\n", "charConfig = CharacterizeImageConfig()\n", "#this set the fwhm of the simple PSF to that of optics\n", "charConfig.installSimplePsf.fwhm = .5\n", "charConfig.doMeasurePsf = False\n", "charConfig.doApCorr = False # necessary\n", "charConfig.repair.doCosmicRay = False \n", "# we do have some cosmic rays, but we also have subpixel mask features and an undersampled PSF\n", "charConfig.detection.background.binSize = 10 # worth playing around with\n", "#charConfig.background.binSize = 50\n", "charConfig.detection.minPixels = 5 # also worth playing around with\n", "\n", "# Add the HSM (Hirata/Seljak/Mandelbaum) adaptive moments shape measurement plugin\n", "charConfig.measurement.plugins.names |= [\"ext_shapeHSM_HsmSourceMoments\"]\n", "# to configure hsm you would do something like\n", "# charConfig.measurement.plugins[\"ext_shapeHSM_hsmSourceMoments\"].addFlux = True\n", "# (see sfm.py in meas_base for all the configuration options for the measurement task)\n", "\n", "charTask = CharacterizeImageTask(config=charConfig)\n", "\n", "charTask.run?\n", "# use charTask.run instead of characterize for v16.0+22\n", "# could also perform similar functions with processCcdTask.run()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Display which plugins are being used for measurement\n", "charConfig.measurement.plugins.active " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "tstart=time.time()\n", "charResult = charTask.run(exposure) # charTask.run(exposure) stack v16.0+22\n", "print(\"Characterization took \",str(time.time()-tstart)[:4],\" seconds\")\n", "print(\"Detected \",len(charResult.sourceCat),\" objects \")\n", "\n", "plt.title('X/Y locations of detections')\n", "plt.plot(charResult.sourceCat['base_SdssCentroid_x'],charResult.sourceCat['base_SdssCentroid_y'],'r.')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This figure illustrates the centroids of detections made during characterization. Note that not all objects have been detected in this first round." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# display some of the source catalog measurements filtered by searchword\n", "searchword='flux'\n", "for name in charResult.sourceCat.schema.getOrderedNames():\n", " if searchword in name.lower():\n", " print(name)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Looking at the mask plane, which started off as all zeros\n", "# and now has some values of 2^5\n", "maskfoo=exposure.mask\n", "print(\"Unique mask plane values: \",np.unique(maskfoo.array))\n", "print(\"Mask dictionary entries: \",maskfoo.getMaskPlaneDict())\n", "\n", "plt.figure(figsize=(12,5)),plt.subplots_adjust(wspace=.3)\n", "plt.subplot(121)\n", "plt.imshow(maskfoo.array,origin='lower'),plt.colorbar()\n", "plt.subplot(122)\n", "plt.hist(maskfoo.array.flatten()),plt.xlabel('Mask plane values')\n", "plt.yscale('log')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The above figures illustrate the new mask plane of the exposure object which was added and modified during characterization. Values of 0 and 5 are now seen, which correspond to unassociated pixels and those which are \"detected\"." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 3: Further image calibration and measurement\n", "This builds on the exposure output from characterization, using the new mask plane as well as the source catalog. Similar to the characterization, we turn off some processing which is suited to our particular setup. For this dataset a calibrate task is almost unncessary (as it is not on-sky data and we don't have a reference catalog), however, it does provide a background-subtracted image and for completeness it is included here. The steps in calibration that are turned on/off can be seen by printing the calibration config object." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from lsst.pipe.tasks.calibrate import CalibrateTask, CalibrateConfig\n", "\n", "calConfig = CalibrateConfig()\n", "calConfig.doAstrometry = False\n", "calConfig.doPhotoCal = False\n", "calConfig.doApCorr = False\n", "calConfig.doDeblend = False # these are well-separated objects, deblending adds time & trouble\n", "# these images should have a uniform background, so measure it\n", "# on scales which are larger than the objects\n", "calConfig.detection.background.binSize = 50\n", "calConfig.detection.minPixels = 5\n", "calConfig.measurement.plugins.names |= [\"ext_shapeHSM_HsmSourceMoments\"]\n", "# to configure hsm you would do something like\n", "#charConfig.measurement.plugins[\"ext_shapeHSM_hsmSourceMoments\"].addFlux = True\n", "\n", "calTask = CalibrateTask(config= calConfig, icSourceSchema=charResult.sourceCat.schema)\n", "\n", "#calTask.run? # for stack v16.0+22 \n", "calTask.run?" 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "tstart=time.time()\n", "# for stack v16.0+22, change to calTask.run(charResult.exposure)\n", "calResult = calTask.run(charResult.exposure, background=charResult.background,\n", " icSourceCat = charResult.sourceCat)\n", "\n", "print(\"Calibration took \",str(time.time()-tstart)[:4],\" seconds\")\n", "print(\"Detected \",len(calResult.sourceCat),\" objects \")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below we look at some of the measurements in the source catalog which has been attached to the calibration result. We also save the source catalog to `$fitsfilename.cat` in `/home/$USER/beamsim/`, which was created in the first cell. This will allow the results from each image to be read in after these measurements are performed on each image." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Catalogs will be saved to: \"+cat_dir)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "src=calResult.sourceCat #.copy(deep=True) ?\n", "#print(src.asAstropy)\n", "\n", "# catalog directory\n", "src.writeFits(cat_dir+fitsfilename.split('/')[-1].replace('.fits','.cat'))\n", "# read back in and access via:\n", "#catalog=fits.open(fitsfilename+'.cat')\n", "#catalog[1].data['base_SdssShape_xx'] etc.\n", "\n", "par_names=['base_SdssShape_xx','base_SdssShape_yy','base_SdssShape_instFlux']\n", "par_mins=[0,0,0]\n", "par_maxs=[5,5,1e6]\n", "n_par=len(par_names)\n", "\n", "\n", "plt.figure(figsize=(5*n_par,6)),plt.subplots_adjust(wspace=.25)\n", "for par_name,par_min,par_max,i in zip(par_names,par_mins,par_maxs,range(n_par)):\n", " plt.subplot(2,n_par,i+1)\n", " plt.scatter(src['base_SdssCentroid_x'],src['base_SdssCentroid_y'],c=src[par_name],marker='o',vmin=par_min,vmax=par_max)\n", " plt.xlabel('X'),plt.ylabel('Y'),plt.colorbar(label=par_name)\n", "\n", "\n", " plt.subplot(2,n_par,n_par+i+1)\n", " plt.hist(src[par_name],range=[par_min,par_max],bins=20,histtype='step')\n", " plt.xlabel(par_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The above figures show the 2-dimensional distribution of detected objects measured parameter values and their histogram. By default, two shape parameters (in pixels) and a flux measurement (in electrons) are shown, but this can be modified through the `par_names` variable in the cell above. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 4: Apply the brighter-fatter kernel correction to an image\n", "This brighter fatter correction method takes in a \"kernel\" (derived from theory or flat fields) which models the broadening of incident image profiles assuming the pixel boundary displacement can be represented as the gradient of a scalar field. Given a kernel and this assumption, the incident image profile can in theory be reconstructed using an iterative process, which we test here using our beam simulator images. See [this paper](https://arxiv.org/abs/1711.06273) and the IsrTask docstring below for more details about the theory and its assumptions. The kernel used here is not generated by the stack but rather through similar code which was written at UC Davis by Craig Lage. Future additions to the notebook will use a stack-generated kernel when available." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#from lsst.ip.isr.isrTask import IsrTask # brighterFatterCorrection lives here\n", "#isr=IsrTask()\n", "import lsst.ip.isr as isr\n", "\n", "pre_bfcorr_exposure=exposure.clone() #save a copy of the pre-bf corrected image\n", "\n", "isr.brighterFatterCorrection?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Read in the kernel (determined from e.g. simulations or flat fields)\n", "kernel=fits.getdata('/project/shared/data/beamsim/bfcorr/BF_kernel-ITL_3800C_002.fits')\n", "exposure=pre_bfcorr_exposure.clone() # save the pre-bf correction image\n", "\n", "# define the maximum number of iterations and threshold for differencing convergence (e-)\n", "bf_maxiter,bf_threshold=20,10\n", "\n", "# Perform the correction\n", "tstart=time.time()\n", "isr.brighterFatterCorrection(exposure,kernel,bf_maxiter,bf_threshold,False)\n", "print(\"Brighter-fatter correction took\",time.time()-tstart,\" seconds\")\n", "#takes 99 seconds for 4kx4k exposure, 21x21 kernel, 20 iterations, 10 thresh\n", "\n", "# Plot kernel and image differences\n", "plt.figure(),plt.title('BF kernel')\n", "plt.imshow(kernel),plt.colorbar()\n", "\n", "imagediff=(pre_bfcorr_exposure.image.array-exposure.image.array)\n", "imagediffpct=np.sum(imagediff)/np.sum(pre_bfcorr_exposure.image.array)*100.\n", "print(str(imagediffpct)[:5],' percent change in flux')\n", "\n", "plt.figure(figsize=(16,10))\n", "plt.subplot(231),plt.title('Before')\n", "plt.imshow(pre_bfcorr_exposure.image.array,vmin=0,vmax=1e3,origin='lower'),plt.colorbar()\n", "plt.subplot(232),plt.title('After')\n", "plt.imshow(exposure.image.array,vmin=0,vmax=1e3,origin='lower'),plt.colorbar()\n", "plt.subplot(233),plt.title('Before - After')\n", "vmin,vmax=-10,10\n", "plt.imshow(imagediff,vmin=vmin,vmax=vmax,origin='lower'),plt.colorbar()\n", "\n", "nbins=1000\n", "plt.subplot(234)\n", "plt.hist(pre_bfcorr_exposure.image.array.flatten(),bins=nbins,histtype='step',label='before')\n", "plt.yscale('log')\n", "plt.subplot(235)\n", "plt.hist(exposure.image.array.flatten(),bins=nbins,histtype='step',label='after')\n", "plt.yscale('log')\n", "plt.subplot(236)\n", "plt.hist(imagediff.flatten(),bins=nbins,histtype='step',label='difference')\n", "plt.yscale('log')\n", "plt.legend()\n", "plt.xlabel('Pixel values [e-]')\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "kernel.sum(),kernel.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The above figures illustrate the way that the brighter-fatter correction works: by iteratively convolving a physically-motivated kernel with the electron image to redistribute charge from the periphery to the center of objects." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 5: Run the above steps (with and without brighter-fatter correction) on 20 exposures of increasing exposure time\n", "Here we re-do all of the previous work, which was done with one image, on a series of images with increasing exposure times. We will generate this series of catalogs both with and without applying the brighter-fatter correction, allowing us to test the fidelity of the brighter-fatter correction with our beam simulator images. 
"To do this in a simple way, we create a function to perform all of the above tasks, called `make_bf_catalogs`, which takes in only a list of filenames but uses some of the same global configuration values (`charTask.config` and `calTask.config`) which we set above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fitsglob='/project/shared/data/beamsim/bfcorr/*part.fits' # fits filenames to read in\n", "fitsfilelist=np.sort(glob.glob(fitsglob))\n", "\n", "def make_bf_catalogs(fitsfilelist,do_bf_corr=False,do_verbose_print=True):\n", "    for fitsfilename in fitsfilelist:\n", "        tstart=time.time()\n", "        image = afwImage.ImageF.readFits(fitsfilename)\n", "\n", "        exposure = afwImage.ExposureF(image.getBBox())\n", "        exposure.setImage(image)\n", "\n", "        updateVariance(exposure.maskedImage, gain, readNoise)\n", "\n", "        # start the characterization and measurement,\n", "        # optionally beginning with the brighter-fatter correction\n", "        if do_bf_corr:\n", "            isr.brighterFatterCorrection(exposure,kernel,bf_maxiter,bf_threshold,False)\n", "        charResult = charTask.run(exposure)\n", "        calResult = calTask.run(charResult.exposure, background=charResult.background,\n", "                                icSourceCat = charResult.sourceCat)\n", "        src=calResult.sourceCat\n", "\n", "        # write out the source catalog, appending -bfcorr for the corrected catalogs\n", "        catfilename=cat_dir+fitsfilename.replace('.fits','.cat').split('/')[-1]\n", "        if do_bf_corr: catfilename=catfilename.replace('.cat','-bfcorr.cat')\n", "        src.writeFits(catfilename)\n", "\n", "        if do_verbose_print:\n", "            print(fitsfilename.split('/')[-1],\" char. & calib. took \",\n", "                  str(time.time()-tstart)[:4],\" seconds to measure \",\n", "                  len(calResult.sourceCat),\" objects \")\n", "\n", "# Run the catalog maker on the series of images, with and without the correction\n", "make_bf_catalogs(fitsfilelist,do_bf_corr=True,do_verbose_print=True)\n", "make_bf_catalogs(fitsfilelist,do_bf_corr=False,do_verbose_print=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now read in those catalogs, both corrected and uncorrected (this could be improved with e.g. pandas)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cat_arr = []\n", "catglob=cat_dir+'ITL*part.cat' # uncorrected catalogs\n", "for catfilename in np.sort(glob.glob(catglob)): cat_arr.append(fits.getdata(catfilename))\n", "\n", "bf_cat_arr = []\n", "catglob=cat_dir+'ITL*part-bfcorr.cat' # corrected catalogs\n", "for catfilename in np.sort(glob.glob(catglob)): bf_cat_arr.append(fits.getdata(catfilename))\n", "ncats=len(cat_arr)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Show issues with multiply detected sources, which we remedy with matching rejection\n", "for i in range(ncats):\n", "    xfoo,yfoo=cat_arr[i]['base_SdssCentroid_x'],cat_arr[i]['base_SdssCentroid_y']\n", "    plt.plot(xfoo,yfoo,'o',alpha=.4)\n", "plt.title('Centroids of sequential exposures')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The above image illustrates a problem with comparing images of different exposure times: different sets of objects may be detected in each frame.\n",
"To remedy this, we use a fiducial frame as reference and simply match the catalogs by looking for *single* object matches within a specified distance of those fiducial objects. We then collect a shape measurement (e.g. `base_SdssShape_xx/yy`) for each object as well as a brightness measurement (e.g. `base_SdssShape_instFlux`) to test for a trend in size vs. brightness." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fidframe=10 # frame number to compare to\n", "maxdist=.5 # max distance (pixels) to match objects between frames\n", "\n", "# choose which stack measurements to use for centroids and shape\n", "# +TODO use 'ext_shapeHSM_HsmSourceMoments_xx','ext_shapeHSM_HsmSourceMoments_yy'\n", "cen_param1,cen_param2='base_SdssCentroid_x','base_SdssCentroid_y'\n", "bf_param1,bf_param2='base_SdssShape_xx','base_SdssShape_yy'\n", "flux_param='base_CircularApertureFlux_25_0_instFlux' # or e.g. 'base_SdssShape_instFlux'\n", "\n", "# get the centroids (used for matching) from the fiducial frame\n", "x0s,y0s=cat_arr[fidframe][cen_param1],cat_arr[fidframe][cen_param2]\n", "nspots=len(x0s)\n", "\n", "# make an array to hold that number of objects; the 8 fields hold the x/y centroid,\n", "# x/y shape, x/y corrected shape, flux, and corrected flux of each object\n", "bf_dat=np.empty((ncats,nspots),\n", "                dtype=np.dtype([('x', float), ('y', float),('shapex', float), ('shapey', float),\n", "                                ('corrshapex', float), ('corrshapey', float),\n", "                                ('flux', float), ('corrflux', float)]))\n", "bf_dat[:]=np.nan # so that un-matched objects aren't plotted/used by default\n", "\n", "# loop over catalogs\n", "for i in range(ncats):\n", "    # get the centroids of objects in the bf-corrected and uncorrected images\n", "    x1,y1=cat_arr[i][cen_param1],cat_arr[i][cen_param2]\n", "    x1_bf,y1_bf=bf_cat_arr[i][cen_param1],bf_cat_arr[i][cen_param2]\n", "    # loop over fiducial frame centroids to find matches\n", "    for j in range(nspots):\n", "        x0,y0=x0s[j],y0s[j] # fiducial centroid to match\n", "        # find objects in both catalogs which are within maxdist\n", "        bf_gd=np.where(np.sqrt((x1_bf-x0)**2+(y1_bf-y0)**2)<maxdist)[0]\n", "        gd=np.where(np.sqrt((x1-x0)**2+(y1-y0)**2)<maxdist)[0]\n", "        if (len(bf_gd)==1) and (len(gd)==1): # only take single matches\n", "            xx,yy=cat_arr[i][bf_param1][gd],cat_arr[i][bf_param2][gd] # uncorrected sizes\n", "            xx_bf,yy_bf=bf_cat_arr[i][bf_param1][bf_gd],bf_cat_arr[i][bf_param2][bf_gd] # corrected sizes\n", "            flux,flux_bf=cat_arr[i][flux_param][gd],bf_cat_arr[i][flux_param][bf_gd] # fluxes\n", "            bf_dat[i,j]=x0,y0,xx,yy,xx_bf,yy_bf,flux,flux_bf # keep those measurements" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Plot the brighter-fatter effect on those shape measurements and the corrected version\n", "Alongside a stamp of each object, below we show the trend of X and Y sizes with flux, before and after brighter-fatter correction. By default this makes one figure per object chosen in `indexfoo` and saves each to the catalog directory." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sz=13 # stamp size\n", "\n", "# loop over some objects and save a summary figure\n", "# [0,6,12,18,23,35,44,46,52,56,69,71,114] are good indices to look at with default values\n", "# or e.g. np.random.choice(np.arange(nspots),size=10)\n",
"indexfoo=[12,14,29,34,41,56]\n", "\n", "for index in indexfoo:\n", "    plt.figure(figsize=(14,4)),plt.subplots_adjust(wspace=.3)\n", "\n", "    # grab a postage stamp, integerizing the centroid and skipping\n", "    # the object if it is near the edge (+TODO do this in a more stack-native way)\n", "    xc,yc=bf_dat['x'][fidframe,index].astype('int')+1,bf_dat['y'][fidframe,index].astype('int')+1\n", "    if ((np.abs(xc-250)>250 - sz ) | (np.abs(yc-250)>250 - sz )): continue\n", "    stamp=exposure.getImage().array[yc-sz:yc+sz,xc-sz:xc+sz]\n", "\n", "    # show the stamp with log scale (1,max)\n", "    plt.subplot(131),plt.title('stamp '+str(index).zfill(3)+' (log scale)')\n", "    plt.imshow(stamp,origin='lower',norm=LogNorm(1,stamp.max())),plt.colorbar()\n", "\n", "    # x size vs flux\n", "    plt.subplot(132),plt.title('x (row) size vs. flux')\n", "    plt.plot(bf_dat['flux'][:,index],bf_dat['shapex'][:,index],'r.',label='Uncorrected')\n", "    plt.plot(bf_dat['corrflux'][:,index],bf_dat['corrshapex'][:,index],'g.',label='Corrected')\n", "    plt.xlabel(flux_param),plt.ylabel(bf_param1),plt.xscale('log')\n", "    plt.legend(loc='upper left')\n", "\n", "    # y size vs flux\n", "    plt.subplot(133),plt.title('y (column) size vs. flux')\n", "    plt.plot(bf_dat['flux'][:,index],bf_dat['shapey'][:,index],'r.',label='Uncorrected')\n", "    plt.plot(bf_dat['corrflux'][:,index],bf_dat['corrshapey'][:,index],'g.',label='Corrected')\n", "    plt.xlabel(flux_param),plt.ylabel(bf_param2)\n", "    plt.xscale('log')\n", "    plt.savefig(cat_dir+str(index).zfill(5)+'bfcorr.png')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The above figures illustrate the brighter-fatter effect (a slight increase of size with flux) in the red dots, and the corrected image analysis in green. Curiously, some of the objects indicate that the default correction method properly corrects star-like objects but over- or under-corrects the effect in galaxy images. This could be due to a violation of some of the underlying assumptions of the method, including the small-pixel approximation or the linearity of the kernel correction. Some of the remaining trends could be related to the increase in signal-to-noise with flux; however, this is a general issue with shape measurement and is beyond the scope of this notebook. In some of the figures, a rapid increase in size can be seen at the highest fluxes, indicating saturation of pixel wells, which is unrelated to the brighter-fatter effect." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now plot the flux lost or gained in the process of brighter-fatter correction by subtracting the flux measured in the corrected images from that of the uncorrected ones. The flux measurement is the same as the one used in the above figures and is measured in electrons."
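] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before plotting the per-object curves, the cell below is a quick aggregate check: a minimal sketch computing the median and worst-case fractional flux change over all matched objects and frames from the `bf_dat` array built above (NaNs mark unmatched objects and are ignored by the nan-aware reductions)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# summary statistics of the fractional flux change due to the correction\n", "fluxchange=(bf_dat['flux']-bf_dat['corrflux'])/bf_dat['flux']*100.\n", "print(\"Median flux change: \",str(np.nanmedian(fluxchange))[:6],\" percent\")\n", "print(\"Largest |flux change|: \",str(np.nanmax(np.abs(fluxchange)))[:6],\" percent\")"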
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "colorpalette=cycle(plt.cm.viridis(np.linspace(0,1,len(indexfoo))))\n", "stylepalette=cycle(['s','*','o'])\n", "plt.figure(figsize=(14,8))\n", "for nfoo in indexfoo:\n", " flux_foo=bf_dat['flux'][:,nfoo]\n", " fluxdiffpct_foo=(bf_dat['flux'][:,nfoo]-bf_dat['corrflux'][:,nfoo])/bf_dat['flux'][:,nfoo]*100.\n", " plt.plot(flux_foo,fluxdiffpct_foo,label=str(nfoo).zfill(3),c=next(colorpalette),marker=next(stylepalette))\n", "plt.xscale('log')#,plt.yscale('symlog')\n", "plt.legend()\n", "plt.xlabel(flux_param,fontsize=20)\n", "plt.ylabel('Measured flux change due to correction \\n (before - after) [%]',fontsize=20)\n", "#plt.savefig(cat_dir+'BF_corr_flux_change.png',dpi=150)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# +TODO other ways of doing matching, catalog stacking" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "LSST", "language": "python", "name": "lsst" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.8" } }, "nbformat": 4, "nbformat_minor": 4 }