{ "cells": [ { "cell_type": "markdown", "metadata": { "hide_input": false }, "source": [ "![OpenSARlab notebook banner](NotebookAddons/blackboard-banner.png)\n", "\n", "# Exploring SAR Data and SAR Time Series Analysis using Jupyter Notebooks\n", "\n", "### Franz J Meyer; University of Alaska Fairbanks & Josef Kellndorfer, [Earth Big Data, LLC](http://earthbigdata.com/)\n", "\n", "\n", "\n", "This notebook will introduce you to the analysis of deep multi-temporal SAR image data stacks in the framework of *Jupyter Notebooks*. The Jupyter Notebook environment is easy to launch in any web browser for interactive data exploration with provided or new training data. Notebooks are comprised of text written in a combination of executable python code and markdown formatting including latex style mathematical equations. Another advantage of Jupyter Notebooks is that they can easily be expanded, changed, and shared with new data sets or newly available time series steps. Therefore, they provide an excellent basis for collaborative and repeatable data analysis.\n", "\n", "**We introduce the following data analysis concepts:**\n", "\n", "- How to load your own SAR data into Jupyter Notebooks and create a time series stack \n", "- How to apply calibration constants to covert initial digital number (DN) data into calibrated radar cross section information.\n", "- How to subset images and create a time series of your subset data.\n", "- How to explore the time-series information in SAR data stacks for environmental analysis." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "**Important Note about JupyterHub**\n", "\n", "Your JupyterHub server will automatically shutdown when left idle for more than 1 hour. Your notebooks will not be lost but you will have to restart their kernels and re-run them from the beginning. You will not be able to seamlessly continue running a partially run notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import url_widget as url_w\n", "notebookUrl = url_w.URLWidget()\n", "display(notebookUrl)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from IPython.display import Markdown\n", "from IPython.display import display\n", "\n", "notebookUrl = notebookUrl.value\n", "user = !echo $JUPYTERHUB_USER\n", "env = !echo $CONDA_PREFIX\n", "if env[0] == '':\n", " env[0] = 'Python 3 (base)'\n", "if env[0] != '/home/jovyan/.local/envs/rtc_analysis':\n", " display(Markdown(f'WARNING:'))\n", " display(Markdown(f'This notebook should be run using the \"rtc_analysis\" conda environment.'))\n", " display(Markdown(f'It is currently using the \"{env[0].split(\"/\")[-1]}\" environment.'))\n", " display(Markdown(f'Select the \"rtc_analysis\" from the \"Change Kernel\" submenu of the \"Kernel\" menu.'))\n", " display(Markdown(f'If the \"rtc_analysis\" environment is not present, use Create_OSL_Conda_Environments.ipynb to create it.'))\n", " display(Markdown(f'Note that you must restart your server after creating a new environment before it is usable by notebooks.'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## 0. Importing Relevant Python Packages\n", "\n", "In this notebook we will use the following scientific libraries:\n", "\n", "- [Pandas](https://pandas.pydata.org/) is a Python library that provides high-level data structures and a vast variety of tools for analysis. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## 0. Importing Relevant Python Packages\n", "\n", "In this notebook we will use the following scientific libraries:\n", "\n", "- [Pandas](https://pandas.pydata.org/) is a Python library that provides high-level data structures and a wide variety of analysis tools. Its great strength is its ability to condense rather complex data operations into one or two commands. Pandas offers many built-in methods for filtering and combining data, as well as time-series functionality.\n", "- [GDAL](https://www.gdal.org/) is a software library for reading and writing raster and vector geospatial data formats. It includes a collection of programs tailored for geospatial data processing. Most modern GIS systems (such as ArcGIS or QGIS) use GDAL in the background.\n", "- [NumPy](http://www.numpy.org/) is one of the principal packages for scientific applications of Python. It is designed for processing large multidimensional arrays and matrices, and its extensive collection of high-level mathematical functions makes it possible to perform a wide range of operations on these objects.\n", "- [Matplotlib](https://matplotlib.org/index.html) is a low-level library for creating two-dimensional diagrams and graphs. With its help, you can build diverse charts, from histograms and scatterplots to graphs in non-Cartesian coordinates. Moreover, many popular plotting libraries are designed to work in conjunction with matplotlib.\n", "- [SciPy](https://www.scipy.org/about.html) is a library that provides functions for numerical integration, interpolation, optimization, linear algebra, and statistics.\n", "\n", "**Our first step is to import them:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "from pathlib import Path\n", "import json\n", "import math # for ceil\n", "import re\n", "\n", "import pandas as pd # for DatetimeIndex\n", "from osgeo import gdal # for Info\n", "gdal.UseExceptions()\n", "import numpy as np # for copy, isnan, log10, ma.masked_where, max, mean, min, percentile, power, unique, var, where \n", "import scipy.signal\n", "\n", "%matplotlib inline\n", "import matplotlib.pylab as plb # for figure, grid, rcParams, savefig\n", "import matplotlib.pyplot as plt\n", "from matplotlib import animation\n", "from matplotlib import rc\n", "\n", "from ipyfilechooser import FileChooser\n", "\n", "from IPython.display import HTML\n", "\n", "import opensarlab_lib as asfn\n", "asfn.jupytertheme_matplotlib_format()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## 1. Load Your Prepared Data Stack Into the Notebook\n", "\n", "This notebook assumes that you have prepared your own data stack of **RTC image products** over your personal area of interest. This can be done using the **Prepare_Data_Stack_Hyp3** and **Subset_Data_Stack** notebooks.\n", " \n", "This notebook expects [Radiometric Terrain Corrected](https://media.asf.alaska.edu/uploads/RTC/rtc_atbd_v1.2_final.pdf) (RTC) image products as input, so be sure to select an RTC process when creating the subscription for your input data within HyP3. Prefer a **single orbit geometry** (ascending or descending) to keep geometric differences between images small.\n", "\n", "**Begin by writing a function to retrieve the absolute paths to each of our tiffs:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_tiff_paths(paths):\n", "    # sort by the acquisition timestamp, the 5th underscore-separated field of the filename\n", "    tiff_paths = !ls $paths | sort -t_ -k5,5\n", "    return tiff_paths" ] },
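{ "cell_type": "markdown", "metadata": {}, "source": [ "The function above leans on the shell. If you prefer to stay in pure Python, the sketch below does the same job with pathlib, assuming (like the `sort -t_ -k5,5` call above) that the acquisition timestamp is the 5th underscore-separated field of each filename:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_tiff_paths_pure_python(directory):\n", "    # sorting on the 5th underscore-separated field mirrors `sort -t_ -k5,5`\n", "    return sorted(Path(directory).glob('*.tif*'), key=lambda p: p.name.split('_')[4])" ] },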
\n", "\n", "**Begin by writing a function to retrieve and the absolute paths to each of our tiffs:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_tiff_paths(paths):\n", " tiff_paths = !ls $paths | sort -t_ -k5,5\n", " return tiff_paths" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Select the directory holding your tiffs**\n", "- Click the `Select` button\n", "- Navigate to your data directory\n", "- Click the `Select` button\n", "- Confirm that the desired path appears in green text\n", "- Click the `Change` button to alter your selection" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fc = FileChooser('/home/jovyan/notebooks')\n", "display(fc)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Determine the path to the analysis directory containing the tiff directory:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tiff_dir = Path(fc.selected_path)\n", "analysis_dir = tiff_dir.parent\n", "print(f\"analysis_dir: {analysis_dir}\")\n", "\n", "wildcard_path = tiff_dir/\"*.tif*\"\n", "tiff_paths = get_tiff_paths(wildcard_path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Write a function to extract the tiff dates from a wildcard path:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_date(pth):\n", " date_regex = r'(?<=_)\\d{8}(?=T\\d{6})'\n", " try:\n", " return re.search(date_regex, str(pth)).group(0)\n", " except AttributeError:\n", " raise Exception(f\"Date string not found in {pth}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Call get_dates() to collect the product acquisition dates:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dates = sorted([get_date(t) for t in tiff_dir.rglob(f'*.tif*')])\n", "print(dates)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Gather the upper-left and lower-right corner coordinates of the data stack:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "coords = [[], []]\n", "info = (gdal.Info(str(tiff_paths[0]), options = ['-json']))\n", "info = json.dumps(info)\n", "coords[0] = (json.loads(info))['cornerCoordinates']['upperLeft']\n", "coords[1] = (json.loads(info))['cornerCoordinates']['lowerRight']\n", "print(coords)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Grab the stack's UTM zone.** Note that any UTM zone conflicts should already have been handled in the Prepare_Data_Stack_Hyp3 notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "info = (gdal.Info(str(tiff_paths[0]), options = ['-json']))\n", "info = json.dumps(info)\n", "info = (json.loads(info))['coordinateSystem']['wkt']\n", "utm = info.split('ID')[-1].split(',')[1][0:-2]\n", "print(f\"UTM Zone: {utm}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "Now we stack up the data by creating a virtual raster table with links to all subset data files:\n", "\n", "**Create the virtual raster table for the subset GeoTiffs:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "image_file = Path(f\"{analysis_dir}/raster_stack.vrt\")\n", "!gdalbuildvrt -separate $image_file $wildcard_path" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## 3. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "Now we stack up the data by creating a virtual raster table with links to all subset data files:\n", "\n", "**Create the virtual raster table for the subset GeoTiffs:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "image_file = analysis_dir/\"raster_stack.vrt\"\n", "!gdalbuildvrt -separate $image_file $wildcard_path" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## 2. Now You Can Work With Your Data\n", "\n", "Now you are ready to perform time series analysis on your data stack.\n", "\n", "### 2.1 Create a Time Index\n", "\n", "**Create an index of datetime64 data with Pandas:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tindex = pd.DatetimeIndex(dates)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Print the bands and dates for all images in the virtual raster table (VRT):**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(f\"Bands and dates for {image_file}\")\n", "for band_num, date in enumerate(tindex, start=1):\n", "    print(\"{:4d} {}\".format(band_num, date.date()), end=' ')\n", "    if band_num % 5 == 0: # newline after every 5 entries\n", "        print()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 2.2 Open Your Data Stack and Visualize Some Layers\n", "\n", "We will **open your VRT** and visualize some layers using Matplotlib." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "img = gdal.Open(str(image_file))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Print the bands, pixels, and lines:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(f\"Number of bands: {img.RasterCount}\")\n", "print(f\"Number of pixels: {img.RasterXSize}\")\n", "print(f\"Number of lines: {img.RasterYSize}\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Read in raster data for the first two bands:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "raster_1 = img.GetRasterBand(1).ReadAsArray() # change the band number to read rasters from other bands\n", "where_are_NaNs = np.isnan(raster_1)\n", "raster_1[where_are_NaNs] = 0 # replace NaNs with zeros so they can be masked later\n", "\n", "raster_2 = img.GetRasterBand(2).ReadAsArray() # GetRasterBand() requires a valid band number\n", "where_are_NaNs = np.isnan(raster_2)\n", "raster_2[where_are_NaNs] = 0" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Plot images and histograms for bands 1 and 2:**\n", "\n", "Note: Depending on the histograms plotted by this cell, you may wish to adjust vmax when calling imshow() on ax1 and ax3. Increase vmax if the histogram is cut off well before the end of the peak, which makes your image too bright to see features well. Decrease vmax if the histogram extends far beyond the end of the peak, which makes your image appear dark." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Set up the pyplot plots\n", "fig = plb.figure(figsize=(18,10)) # Initialize figure with a size\n", "ax1 = fig.add_subplot(221) # 221 determines: 2 rows, 2 columns, first plot\n", "ax2 = fig.add_subplot(222) # 222 determines: 2 rows, 2 columns, second plot\n", "ax3 = fig.add_subplot(223) # 223 determines: 2 rows, 2 columns, third plot\n", "ax4 = fig.add_subplot(224) # 224 determines: 2 rows, 2 columns, fourth plot\n", "\n", "# Plot the band 1 image\n", "band_number = 1\n", "ax1.imshow(raster_1, cmap='gray', vmin=0, vmax=0.3) # see note above regarding vmax adjustments\n", "ax1.set_title('Image Band {} {}'.format(band_number, tindex[band_number-1].date()))\n", "\n", "# Flatten the band 1 image into a 1-dimensional vector and plot the histogram:\n", "h = ax2.hist(raster_1.flatten(), bins=200, range=(0, 0.3))\n", "ax2.xaxis.set_label_text('Amplitude (Uncalibrated DN Values)')\n", "ax2.set_title('Histogram Band {} {}'.format(band_number, tindex[band_number-1].date()))\n", "\n", "# Plot the band 2 image\n", "band_number = 2\n", "ax3.imshow(raster_2, cmap='gray', vmin=0, vmax=0.3) # see note above regarding vmax adjustments\n", "ax3.set_title('Image Band {} {}'.format(band_number, tindex[band_number-1].date()))\n", "\n", "# Flatten the band 2 image into a 1-dimensional vector and plot the histogram:\n", "h = ax4.hist(raster_2.flatten(), bins=200, range=(0, 0.3))\n", "ax4.xaxis.set_label_text('Amplitude (Uncalibrated DN Values)')\n", "ax4.set_title('Histogram Band {} {}'.format(band_number, tindex[band_number-1].date()))" ] },
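{ "cell_type": "markdown", "metadata": {}, "source": [ "Rather than hand-tuning vmax, you can derive a starting value from the data itself. A small sketch (using the 95th percentile of the nonzero pixels, the same idea the animation cell in section 2.4 uses):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# suggest display bounds from the data; the 95th percentile clips the bright tail\n", "vmax_auto = np.percentile(raster_1[raster_1 > 0], 95)\n", "print(f\"suggested vmax for band 1: {vmax_auto:.3f}\")" ] },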
{ "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 2.3 Calibration and Data Conversion between dB and Power Scales\n", "\n", "**Note that if your data were generated by HyP3, this step is not necessary!** HyP3 performs the full data calibration and provides you with calibrated data in power scale.\n", " \n", "If your data are from a different source, however, calibration may be necessary to ensure that image gray values correspond to proper radar cross section information.\n", "\n", "Calibration coefficients for SAR data are often defined in the decibel (dB) scale due to the high dynamic range of the imaging system. For the L-band ALOS PALSAR data at hand, the conversion from uncalibrated DN values to calibrated radar cross section values in dB scale is performed by applying a standard **calibration factor of -83 dB**:\n", "\n", "$\gamma^0_{dB} = 20 \cdot \log_{10}(DN) - 83$\n", "\n", "The data at hand are radiometrically terrain corrected images, which are often expressed as terrain-flattened $\gamma^0$ backscattering coefficients. For forest and land cover monitoring applications, $\gamma^0$ is the preferred metric.\n", "\n", "**To apply the calibration constant to your data and export it in *dB* scale, uncomment the following code cell:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# caldB = 20*np.log10(rasterstack) - 83 # note: rasterstack is created in section 2.4 below" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "While **dB**-scaled images are often \"visually pleasing\", they are not a good basis for mathematical operations on the data. For instance, when we compute the mean of observations, it makes a difference whether we do that in power or dB scale. Since dB is a logarithmic scale, we cannot simply average data in that scale.\n", " \n", "Please note that the **correct scale** in which operations need to be performed **is the power scale.** This is critical, e.g., when speckle filters are applied, spatial operations like block averaging are performed, or time series are analyzed.\n", "\n", "To **convert from dB to power**, apply: $\gamma^0_{pwr} = 10^{\frac{\gamma^0_{dB}}{10}}$" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# calPwr = np.power(10., caldB/10.)" ] },
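{ "cell_type": "markdown", "metadata": {}, "source": [ "A tiny numeric example (illustrative values only) makes the difference concrete: averaging two backscatter values in dB scale and averaging them in power scale give different answers, and only the power-scale average is physically meaningful:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "db_values = np.array([-3.0, -13.0]) # two illustrative gamma0 values in dB\n", "naive_db_mean = db_values.mean() # -8.0 dB: a log-domain average, which is wrong\n", "pwr = np.power(10., db_values/10.) # convert to power scale first\n", "correct_db_mean = 10.*np.log10(pwr.mean()) # ~ -5.6 dB: average in power, then convert back to dB\n", "print(f\"naive: {naive_db_mean:.1f} dB, correct: {correct_db_mean:.1f} dB\")" ] },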
{ "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 2.4 Create a Time Series Animation\n", "\n", "Now we are ready to create a time series animation from the calibrated SAR data.\n", "\n", "**First, read the first band into its own raster and read the full raster stack from all the images:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "band = img.GetRasterBand(1)\n", "raster0 = band.ReadAsArray() # first band, used as the initial animation frame\n", "band_number = 0 # zero-based band index, needed for frame updates\n", "rasterstack = img.ReadAsArray() # all bands as a 3D array (band, line, pixel)\n", "img = None # close the dataset" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Create a directory in which to store our plots and animations:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "product_path = analysis_dir/'plots_and_animations'\n", "if not product_path.exists():\n", "    product_path.mkdir()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Create a masked raster stack:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rs2 = np.ma.masked_where(rasterstack==0, rasterstack) # mask the zero-padded pixels" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Generate a matplotlib time-series animation:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "fig = plt.figure(figsize=(14, 8))\n", "ax = fig.subplots()\n", "ax.axis('off')\n", "vmin = np.percentile(rasterstack.flatten(), 5)\n", "vmax = np.percentile(rasterstack.flatten(), 95)\n", "\n", "# r0dB = 20 * np.log10(raster0) - 83 # dB-scaled version of the first band (not used by the animation below)\n", "\n", "im = ax.imshow(raster0, cmap='gray', vmin=vmin, vmax=vmax)\n", "ax.set_title(\"{}\".format(tindex[0].date()))\n", "\n", "def animate(i):\n", "    ax.set_title(\"{}\".format(tindex[i].date()))\n", "    im.set_data(rasterstack[i])\n", "\n", "# Interval is given in milliseconds\n", "ani = animation.FuncAnimation(fig, animate, frames=rasterstack.shape[0], interval=400)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Configure matplotlib's RC settings for the animation:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rc('animation', embed_limit=40971520.0) # raise the embed limit (bytes) so the full animation can be displayed inline" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Create a javascript animation of the time-series running inline in the notebook:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "HTML(ani.to_jshtml())" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Delete the dummy png** that was saved to the current working directory while generating the javascript animation in the last code cell." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "hide_input": false }, "outputs": [], "source": [ "try:\n", "    Path('None0000000.png').unlink() # to_jshtml() drops this file in the current working directory\n", "except FileNotFoundError:\n", "    pass" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Save the animation (animation.gif):**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ani.save(f\"{product_path}/animation.gif\", writer='pillow', fps=2)" ] },
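{ "cell_type": "markdown", "metadata": {}, "source": [ "If the ffmpeg writer is available in your environment (this is not guaranteed in every OpenSARlab image), the same animation can also be written as an mp4, which is usually far smaller than a gif:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if animation.writers.is_available('ffmpeg'):\n", "    ani.save(f\"{product_path}/animation.mp4\", writer='ffmpeg', fps=2)\n", "else:\n", "    print(\"ffmpeg writer not available; skipping mp4 export\")" ] },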
] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": false }, "outputs": [], "source": [ "try:\n", " product_path/Path('None0000000.png').unlink()\n", "except FileNotFoundError:\n", " pass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Save the animation (animation.gif):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ani.save(f\"{product_path}/animation.gif\", writer='pillow', fps=2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 3.5 Plot the Time Series of Means Calculated Across the Subset\n", "\n", "To create the time series of means, we will go through the following steps:\n", "1. Ensure that you use the data in **power scale** ($\\gamma^o_{pwr}$) for your mean calculations.\n", "1. compute means.\n", "1. convert the resulting mean values into dB scale for visualization.\n", "1. plot time series of means.\n", "\n", "**Compute the means:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rs_means_pwr = np.mean(rs2, axis=(1, 2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Convert resulting mean value time-series to dB scale for visualization:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rs_means_dB = 10.*np.log10(rs_means_pwr)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Plot and save the time series of means (RCSoverTime.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "try:\n", " plt.rcParams.update({'font.size': 14})\n", " fig = plt.figure(figsize=(16, 4))\n", " ax1 = fig.subplots()\n", " window_length = len(rs_means_pwr)-1\n", " if window_length % 2 == 0:\n", " window_length -= 1\n", " polyorder = math.ceil(window_length*0.1)\n", " yhat = scipy.signal.savgol_filter(rs_means_pwr, window_length, polyorder) \n", " ax1.plot(tindex, yhat, color='red', marker='o', markerfacecolor='white', linewidth=3, markersize=6)\n", " ax1.plot(tindex, rs_means_pwr, color='gray', linewidth=0.5)\n", " plt.grid()\n", " ax1.set_xlabel('Date')\n", " ax1.set_ylabel(r'$\\overline{\\gamma^o}$ [power]')\n", " plt.savefig(f\"{product_path}/RCSoverTime.png\", dpi=72, transparent='true')\n", "except ValueError as e:\n", " print(f\"Error: polyorder: {polyorder} >= window_length: {window_length}\")\n", " raise" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 3.6 Calculate Coefficient of Variance\n", "\n", "The coefficient of variance describes how much the $\\sigma_{0}$ or $\\gamma_{0}$ measurements in a pixel vary over time. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 2.6 Calculate Coefficient of Variance\n", "\n", "The coefficient of variance describes how much the $\sigma^0$ or $\gamma^0$ measurements in a pixel vary over time. Hence, the coefficient of variance can indicate different vegetation cover and soil moisture regimes in your area.\n", "\n", "**Write a function to convert our plots into GeoTiffs:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def geotiff_from_plot(source_image, out_filename, extent, utm, cmap=None, vmin=None, vmax=None, interpolation=None, dpi=300):\n", "    assert \".\" not in out_filename, 'Error: Do not include the file extension in out_filename'\n", "    assert type(extent) == list and len(extent) == 2 and len(extent[0]) == 2 and len(\n", "        extent[1]) == 2, 'Error: extent must be a list in the form [[upper_left_x, upper_left_y], [lower_right_x, lower_right_y]]'\n", "    \n", "    # render the array to a temporary png without axes or padding\n", "    plt.figure()\n", "    plt.axis('off')\n", "    plt.imshow(source_image, cmap=cmap, vmin=vmin, vmax=vmax, interpolation=interpolation)\n", "    temp = f\"{out_filename}_temp.png\"\n", "    plt.savefig(temp, dpi=dpi, transparent=True, bbox_inches='tight', pad_inches=0)\n", "\n", "    # georeference the png by assigning corner coordinates and a CRS, then write a GeoTiff\n", "    cmd = f\"gdal_translate -of Gtiff -a_ullr {extent[0][0]} {extent[0][1]} {extent[1][0]} {extent[1][1]} -a_srs EPSG:{utm} {temp} {out_filename}.tiff\"\n", "    !{cmd}\n", "    try:\n", "        Path(temp).unlink()\n", "    except FileNotFoundError:\n", "        pass" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Plot the Coefficient of Variance Map and save it as a png (Coeffvar.png):**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "variance = np.var(rasterstack, 0) # temporal variance for each pixel\n", "mean_nonzero = np.mean(rasterstack[rasterstack.nonzero()], 0) # mean over all nonzero samples\n", "coeffvar = variance/(mean_nonzero+0.001) # small offset avoids division by zero\n", "\n", "plt.rcParams.update({'font.size': 14})\n", "fig = plt.figure(figsize=(13, 10))\n", "ax = fig.subplots()\n", "ax.axis('off')\n", "vmin = np.percentile(coeffvar.flatten(), 5)\n", "vmax = np.percentile(coeffvar.flatten(), 95)\n", "ax.set_title('Coefficient of Variance Map')\n", "\n", "im = ax.imshow(coeffvar, cmap='jet', vmin=vmin, vmax=vmax)\n", "fig.colorbar(im, ax=ax)\n", "plt.savefig(f\"{product_path}/Coeffvar.png\", dpi=300, transparent=True)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Save the coefficient of variance map as a GeoTiff (Coeffvar.tiff):**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "geotiff_from_plot(coeffvar, f\"{product_path}/Coeffvar\", coords, utm, cmap='jet', vmin=vmin, vmax=vmax)" ] },
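{ "cell_type": "markdown", "metadata": {}, "source": [ "Note that the map above normalizes the temporal variance by a mean value. The classical coefficient of variation instead divides the standard deviation by the per-pixel mean; a sketch for comparison (this does not replace the map above, it only shows the alternative normalization):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "std_map = np.std(rasterstack, 0) # temporal standard deviation per pixel\n", "mean_map = np.mean(rasterstack, 0) # temporal mean per pixel\n", "cv_classic = std_map/(mean_map+0.001) # classical coefficient of variation\n", "print(f\"classical CV range: {cv_classic.min():.3f} to {cv_classic.max():.3f}\")" ] },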
{ "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 2.7 Threshold Coefficient of Variance Map\n", "\n", "This is an example of how to threshold the derived coefficient of variance map. This can be useful, e.g., to detect areas of active agriculture.\n", "\n", "**Plot and save the coefficient of variance histogram and CDF (thresh_coeff_var_histogram.png):**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.rcParams.update({'font.size': 14})\n", "fig = plt.figure(figsize=(14, 6)) # Initialize figure with a size\n", "ax1 = fig.add_subplot(121) # 121 determines: 1 row, 2 columns, first plot\n", "ax2 = fig.add_subplot(122) # 122 determines: 1 row, 2 columns, second plot\n", "\n", "# First plot: histogram\n", "# IMPORTANT: To get a histogram, we first need to *flatten* \n", "# the two-dimensional image into a one-dimensional vector.\n", "h = ax1.hist(coeffvar.flatten(), bins=200, range=(0, 0.03))\n", "ax1.xaxis.set_label_text('Coefficient of Variation')\n", "ax1.set_title('Coeffvar Histogram')\n", "ax1.grid()\n", "\n", "# Second plot: empirical cumulative distribution function (CDF)\n", "n, bins, patches = ax2.hist(coeffvar.flatten(), bins=200, range=(0, 0.03), cumulative=True, density=True, histtype='step', label='Empirical')\n", "ax2.xaxis.set_label_text('Coefficient of Variation')\n", "ax2.set_title('Coeffvar CDF')\n", "ax2.grid()\n", "plt.savefig(f\"{product_path}/thresh_coeff_var_histogram.png\", dpi=72, transparent=True)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Plot the Threshold Coefficient of Variance Map and save it as a png (Coeffvarthresh.png):**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.rcParams.update({'font.size': 14})\n", "outind = np.where(n > 0.80) # bins where the empirical CDF exceeds 80%\n", "threshind = np.min(outind) # index of the first such bin\n", "thresh = bins[threshind] # the 80th-percentile coefficient of variance\n", "coeffvarthresh = np.copy(coeffvar)\n", "coeffvarthresh[coeffvarthresh < thresh] = 0 # suppress values below the threshold\n", "coeffvarthresh[coeffvarthresh > 0.1] = 0 # suppress extreme outliers\n", "fig = plt.figure(figsize=(13, 10))\n", "ax = fig.subplots()\n", "ax.axis('off')\n", "vmin = np.percentile(coeffvar.flatten(), 5)\n", "vmax = np.percentile(coeffvar.flatten(), 95)\n", "ax.set_title(r'Thresholded Coeffvar Map [$\alpha=80\%$]')\n", "im = ax.imshow(coeffvarthresh, cmap='jet', vmin=vmin, vmax=vmax)\n", "bar = fig.colorbar(im, ax=ax)\n", "plt.savefig(f\"{product_path}/Coeffvarthresh.png\", dpi=300, transparent=True)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Save the Threshold Coefficient of Variance Map as a GeoTiff (Coeffvarthresh.tiff):**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "geotiff_from_plot(coeffvarthresh, f\"{product_path}/Coeffvarthresh\", coords, utm, cmap='jet', vmin=vmin, vmax=vmax)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "*Time_Series_From_Prepared_Stack.ipynb - Version 1.4.2 - February 2024*\n", "\n", "*Version Changes:*\n", "\n", "- *update get_date function*\n", "- *Use raw strings for LaTeX in matplotlib labels*" ] } ], "metadata": { "kernelspec": { "display_name": "rtc_analysis", "language": "python", "name": "conda-env-.local-rtc_analysis-py" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.1" } }, "nbformat": 4, "nbformat_minor": 4 }