{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "![SAR, InSAR, PolSAR, and banner](https://opensarlab-docs.asf.alaska.edu/opensarlab-notebook-assets/notebook_images/blackboard-banner.png)\n", "\n", "# Change Point Detection in SAR Amplitude Time Series Data\n", "\n", "### Franz J Meyer; University of Alaska Fairbanks & Josef Kellndorfer, [Earth Big Data, LLC](http://earthbigdata.com/)\n", "\n", "\n", "\n", "This notebook applies Change Point Detection on a deep multi-temporal SAR image data stack acquired by Sentinel-1. Specifically, the lab applies the method of *Cumulative Sums* to perform change detection on a 60 image deep Sentinel-1 data stack over Niamey, Niger. \n", "\n", "**In this notebook we introduce the following data analysis concepts:**\n", "\n", "- The concepts of time series slicing by month, year, and date.\n", "- The concepts and workflow of Cumulative Sum-based change point detection.\n", "- The identification of change dates for each identified change point." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "**Important Note about JupyterHub**\n", "\n", "Your JupyterHub server will automatically shutdown when left idle for more than 1 hour. Your notebooks will not be lost but you will have to restart their kernels and re-run them from the beginning. You will not be able to seamlessly continue running a partially run notebook.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "slideshow": { "slide_type": "slide" }, "tags": [] }, "outputs": [], "source": [ "import url_widget as url_w\n", "notebookUrl = url_w.URLWidget()\n", "display(notebookUrl)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" }, "slideshow": { "slide_type": "slide" }, "tags": [] }, "outputs": [], "source": [ "from IPython.display import Markdown\n", "from IPython.display import display\n", "\n", "notebookUrl = notebookUrl.value\n", "user = !echo $JUPYTERHUB_USER\n", "env = !echo $CONDA_PREFIX\n", "if env[0] == '':\n", " env[0] = 'Python 3 (base)'\n", "if env[0] != '/home/jovyan/.local/envs/rtc_analysis':\n", " display(Markdown(f'WARNING:'))\n", " display(Markdown(f'This notebook should be run using the \"rtc_analysis\" conda environment.'))\n", " display(Markdown(f'It is currently using the \"{env[0].split(\"/\")[-1]}\" environment.'))\n", " display(Markdown(f'Select the \"rtc_analysis\" from the \"Change Kernel\" submenu of the \"Kernel\" menu.'))\n", " display(Markdown(f'If the \"rtc_analysis\" environment is not present, use Create_OSL_Conda_Environments.ipynb to create it.'))\n", " display(Markdown(f'Note that you must restart your server after creating a new environment before it is usable by notebooks.'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 0. Importing Relevant Python Packages\n", "\n", "Our first step is to **import the necessary python libraries into your Jupyter Notebook:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "from pathlib import Path\n", "from copy import deepcopy\n", "\n", "import pandas as pd\n", "from osgeo import gdal # for Info\n", "gdal.UseExceptions()\n", "import numpy as np\n", "\n", "%matplotlib inline\n", "import matplotlib\n", "import matplotlib.pylab as plt\n", "import matplotlib.patches as patches\n", "\n", "import opensarlab_lib as asfn\n", "asfn.jupytertheme_matplotlib_format()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## 1. 
Load Data Stack for this Lab \n", "\n", " \n", "\n", "This notebook will be using a 60-image deep C-band Sentinel-1 SAR data stack over Niamey, Niger to demonstrate the concepts of time series change detection. The data are available to us through the services of the [Alaska Satellite Facility](https://www.asf.alaska.edu). \n", "\n", "Specifically, we will use a small image segment over the campus of [AGRHYMET Regional Centre](http://www.agrhymet.ne/eng/), a regional organization supporting West Africa in the use of remote sensing. \n", "\n", "This site was picked because we had information about construction going on at this site sometime in the 2015 - 2017 time frame. Land was cleared and a building was erected. In this notebook, we will see if we can detect the construction activity and if we are able to determine when construction began and when it ended.\n", "\n", "In this case, we will retrieve the relevant data from an [Amazon Web Services (AWS)](https://aws.amazon.com/) cloud storage bucket.\n", "\n", "---\n", "\n", "### 1.1 Download The Data:\n", "\n", "Before we download anything, **create a working directory for this analysis:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "path = Path(\"/home/jovyan/notebooks/SAR_Training/English/Master/data_Change_Detection_Amplitude_Time_Series_Example\")\n", "\n", "if not path.is_dir():\n", " path.mkdir()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Download the data from the AWS bucket:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!aws --region=us-west-2 --no-sign-request s3 cp s3://asf-jupyter-data-west/Niamey.zip $path/Niamey.zip" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Unzip the file and clean up:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "niamey_path = path/\"Niamey.zip\"\n", "asfn.asf_unzip(str(path), str(niamey_path))\n", "\n", "if niamey_path.is_file():\n", " niamey_path.unlink()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 1.2 Locate the Data Directory:\n", "\n", "The following lines set variables that capture the paths needed for data processing. **Define variables for the unzipped cra directory and for the names of the files containing the data and image information:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cra_path = path/'cra'\n", "date_file = None\n", "image_file = None\n", "\n", "if cra_path.exists():\n", " date_file = str(list(cra_path.rglob('S32631X402380Y1491460sS1_A_vv_0001_A_mtfil.dates')).pop())\n", " image_file = str(list(cra_path.rglob('S32631X402380Y1491460sS1_A_vv_0001_A_mtfil.vrt')).pop())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 1.3 Assess Image Acquisition Dates\n", "\n", "Before we start analyzing the available image data, we want to examine the content of our data stack. 
**To do so, we read the image acquisition dates for all files in the time series and create a *pandas* date index:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "with open(date_file, 'r') as d:\n", " dates = d.readlines()\n", "time_index = pd.DatetimeIndex(dates)\n", "j = 1\n", "print('Bands and dates for', image_file)\n", "for i in time_index:\n", " print(\"{:4d} {}\".format(j, i.date()), end=' ')\n", " j += 1\n", " if j%5 == 1: print()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 1.4 Read in the Data Stack\n", "\n", "**We read in the time series raster stack from the entire data set:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pixiedust": { "displayParams": {} } }, "outputs": [], "source": [ "raster_stack = gdal.Open(image_file).ReadAsArray()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 1.5 Lay the groundwork for saving plots and level-3 products.\n", "**Create a directory in which to store our output:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "product_path = path/'plots_and_products'\n", "\n", "if not product_path.exists():\n", " product_path.mkdir()\n", " print(f'Created {product_path}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will need the upper-left and lower-right corner coordinates when saving our products as GeoTiffs. In this situation, you have been given a pre-subset vrt image stack. \n", "\n", "**Retrieve the corner coordinates from the vrt using gdal.Info():**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vrt = gdal.Open(image_file)\n", "vrt_info = gdal.Info(vrt, format='json')\n", "coords = [vrt_info['cornerCoordinates']['upperLeft'], vrt_info['cornerCoordinates']['lowerRight']]\n", "print(coords)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Retrieve the UTM zone from the vrt:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "utm_zone = vrt_info['coordinateSystem']['wkt'].split(',')[-1][0:-2]\n", "print(f\"UTM zone: {utm_zone}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Write a function to convert our plots into GeoTiffs:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# do not include a file extension in out_filename\n", "# extent must be in the form of a list: [[upper_left_x, upper_left_y], [lower_right_x, lower_right_y]]\n", "def geotiff_from_plot(source_image, out_filename, extent, utm_zone, cmap=None, vmin=None, vmax=None, interpolation=None, dpi=300):\n", " plt.figure()\n", " plt.axis('off')\n", " plt.imshow(source_image, cmap=cmap, vmin=vmin, vmax=vmax, interpolation=interpolation)\n", " temp = Path(f\"{out_filename}_temp.png\")\n", " plt.savefig(temp, dpi=dpi, transparent=True, bbox_inches='tight', pad_inches=0)\n", "\n", " cmd = f\"gdal_translate -of Gtiff -a_ullr {extent[0][0]} {extent[0][1]} {extent[1][0]} {extent[1][1]} -a_srs EPSG:{utm_zone} {temp} {out_filename}.tiff\"\n", " !{cmd}\n", " try:\n", " temp.unlink()\n", " except FileNotFoundError:\n", " print('File Not Found')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## 2. 
Plot the Global Means of the Time Series\n", "\n", "To accomplish this task, complete the following steps:\n", "1. Conversion to power-scale\n", "1. Compute mean values\n", "1. Convert to dB-scale\n", "1. Create time series of means using Pandas\n", "1. Plot time series of means\n", "\n", "**Convert to Power-scale:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "caldB = -83\n", "calPwr = np.power(10.0, caldB/10.0)\n", "raster_stack_pwr = np.power(raster_stack, 2.0) * calPwr" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Compute means:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rs_means_pwr = np.mean(raster_stack_pwr, axis=(1, 2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Convert to dB-scale:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rs_means_dB = 10.0 * np.log10(rs_means_pwr)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Make a pandas time series object:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ts = pd.Series(rs_means_dB, index=time_index)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Use pandas to plot the time series object with band numbers as data point labels. Save the plot as a png (time_series_means.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.rcParams.update({'font.size': 14})\n", "plt.figure(figsize=(16, 8))\n", "plt.title(\"Time Series of Means\")\n", "ts.plot()\n", "xl = plt.xlabel('Date')\n", "yl = plt.ylabel('$\\overline{\\gamma^o}$ [dB]')\n", "for xyb in zip(ts.index, rs_means_dB, range(1, len(ts)+1)):\n", " plt.annotate(xyb[2], xy=xyb[0:2])\n", "plt.grid()\n", "\n", "plt.savefig(f'{product_path}/time_series_means.png', dpi=72)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## 3. Generate Time Series for Point Locations or Subsets\n", "\n", "In Python, we can use the matrix slicing tools (similar to those used in Matlab) to obtain subsets of the data. For example, to pick one pixel at a line/pixel location and obtain all band values, use:\n", "\n", "\[:, line, pixel\] notation. \n", "\n", "Or, if we are interested in a subset at an offset location we can use:\n", "\n", "\[:, yoffset:(yoffset+yrange), xoffset:(xoffset+xrange)\]\n", "\n", "In the section below we will learn how to generate time series plots for point locations (pixels) or areas (e.g. a 5x5 window region). 
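As a quick, illustrative sketch of this slicing notation (using the raster_stack_pwr array defined above), the following cell extracts a single-pixel time series and a 5x5 window:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative only: matrix slicing on the image stack\n", "single_pixel_ts = raster_stack_pwr[:, 20, 5] # all bands at line 20, pixel 5\n", "window_ts = raster_stack_pwr[:, 20:25, 5:10] # all bands of a 5x5 window\n", "print(single_pixel_ts.shape, window_ts.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "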
To show individual bands, we define a *show_image* function which incorporates the matrix slicing from above.\n", "\n", "---\n", "\n", "### 3.1 Plotting Time Series for Subset\n", "\n", "**Write a function to plot the calibrated time series for a pre-defined subset:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Preconditions:\n", "# raster_stack must be a stack of images in SAR power units\n", "# time_index must be a pandas date-time index\n", "# band_number must represent a valid bandnumber in the raster_stack\n", "def show_image(raster_stack, time_index, band_number, output_filename=None, subset=None, vmin=None, vmax=None):\n", " fig = plt.figure(figsize=(16, 8))\n", " ax1 = fig.add_subplot(121)\n", " ax2 = fig.add_subplot(122)\n", " \n", " # If vmin or vmax are None we use percentiles as limits:\n", " if vmin == None:\n", " vmin = np.percentile(raster_stack[band_number-1].flatten(), 5)\n", " if vmax == None:\n", " vmax = np.percentile(raster_stack[band_number-1].flatten(), 95)\n", "\n", " ax1.imshow(raster_stack[band_number-1], cmap='gray', vmin=vmin, vmax=vmax)\n", " ax1.set_title(f'Image Band {band_number} {time_index[band_number-1].date()}')\n", " if subset == None:\n", " bands, ydim, xdim = raster_stack.shape\n", " subset = (0, 0, xdim, ydim)\n", " \n", " ax1.add_patch(patches.Rectangle((subset[0], subset[1]), subset[2], subset[3], fill=False, edgecolor='red'))\n", " ax1.xaxis.set_label_text('Pixel')\n", " ax1.yaxis.set_label_text('Line')\n", " ax1.legend(['Subset AOI'], loc='best')\n", " \n", " ts_pwr = np.mean(raster_stack[:, subset[1]:(subset[1]+subset[3]), subset[0]:(subset[0]+subset[2])], axis=(1,2))\n", " ts_dB = 10.0 * np.log10(ts_pwr)\n", " ax2.plot(time_index, ts_dB)\n", " ax2.yaxis.set_label_text('$\\gamma^o$ [dB]')\n", " ax2.set_title('$\\gamma^o$ Backscatter Time Series')\n", " # Add a vertical line for the date where the image is displayed\n", " ax2.axvline(time_index[band_number-1], color='green')\n", " ax2.legend(['Time Series', f'Band {band_number} Date'], loc='best')\n", " plt.grid()\n", "\n", " fig.autofmt_xdate()\n", " \n", " if output_filename:\n", " plt.savefig(f'{product_path}/{output_filename}', dpi=72)\n", " print(f\"Saved plot: {output_filename}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Call show_image() on different bands to compare the information content of different time steps in our area of interest.\n", "\n", "**Call show_image() on band number 24:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "band_number = 24 \n", "subset = [5, 20, 3, 3]\n", "show_image(raster_stack_pwr, time_index, band_number, subset=subset, output_filename=f\"band_{band_number}.png\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Call show_image() on band number 43:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "band_number = 43\n", "show_image(raster_stack_pwr, time_index, band_number, subset=subset, output_filename=f\"band_{band_number}.png\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 3.2 Helper Function to Generate a Time Series Object\n", "\n", "**Write a function that creates an object representing the time series for an image subset:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Extract the means along the time series axes\n", "# raster shape is time steps, lines, pixels. 
\n", "# With axis=1,2, we average lines and pixels for each time step (axis 0)\n", "# returns pandas time series object\n", "def timeSeries(raster_stack_pwr, time_index, subset, ndv=0.0):\n", " raster = raster_stack_pwr.copy()\n", " if ndv != np.nan:\n", " raster[np.equal(raster, ndv)] = np.nan\n", " ts_pwr = np.nanmean(raster[:,subset[1]:(subset[1]+subset[3]), subset[0]:(subset[0]+subset[2])], axis=(1, 2))\n", " # convert the means to dB\n", " ts_dB = 10.0 * np.log10(ts_pwr)\n", " # make the pandas time series object\n", " ts = pd.Series(ts_dB, index=time_index)\n", " return ts" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Call timeSeries() to make a time series object for the chosen subset:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ts = timeSeries(raster_stack_pwr, time_index, subset)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Plot the time series object:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig = ts.plot(figsize=(16, 4))\n", "fig.yaxis.set_label_text('mean dB')\n", "fig.set_title('Time Series for Chosen Subset')\n", "plt.grid()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## 4. Create Seasonal Subsets of Time Series Records \n", "\n", "Let's expand upon SAR time series analysis. Often it is desirable to subset time series by season or months to compare data acquired under similar weather/growth/vegetation cover conditions. For example, in analyzing C-Band backscatter data, it might be useful to limit comparative analysis to dry season observations only as soil moisture might confuse signals during the wet seasons. To subset time series along the time axis we will make use of the following *Pandas* datatime index tools:\n", "\n", "- month\n", "- day of year\n", "\n", "**Extract a hectare-sized area around our subset location (5,20,5,5):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "subset = (5, 20, 5, 5)\n", "time_series_1 = timeSeries(raster_stack_pwr, time_index, subset)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Convert the time series to a pandas DataFrame** to allow for more processing options." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data_frame = pd.DataFrame(time_series_1, index=ts.index, columns=['g0'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Label the data value column as 'g0' for $\\gamma^0$ and plot the time series backscatter profile:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ylim = (-15, -5)\n", "data_frame.plot(figsize=(16, 4))\n", "plt.title('Sentinel-1 C-VV Time Series Backscatter Profile, Subset: 5,20,5,5 ')\n", "plt.ylabel('$\\gamma^o$ [dB]')\n", "plt.ylim(ylim)\n", "_ = plt.legend([\"C-VV Time Series\"])\n", "plt.grid()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 4.1 Change Start Date of Time Series to November 2015\n", "\n", "**Plot the cropped time series and save it as a png (time_series_backscatter_profile.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data_frame_sub1 = data_frame[data_frame.index>'2015-11-01']\n", "# Plot\n", "data_frame_sub1.plot(figsize=(16, 4))\n", "plt.title('Sentinel-1 C-VV Time Series Backscatter Profile, Subset: {}'.format(subset))\n", "plt.ylabel('$\\gamma^o$ [dB]')\n", "plt.ylim(ylim)\n", "_ = plt.legend([\"C-VV Time Series\"])\n", "plt.grid()\n", "plt.savefig(f'{product_path}/time_series_backscatter_profile', dpi=72)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 4.2 Subset Time Series by Months\n", "\n", "Using the Pandas *DateTimeIndex* object index.month and numpy's logical_and function, we can easily subset the time series by month:\n", "\n", "**Create subset data_frames. In one, replace the data from June-February with NaNs. In the other, replace the data from March-May with NaNs:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data_frame_sub2 = deepcopy(data_frame_sub1)\n", "for index, row in data_frame_sub2.iterrows():\n", " if index.month < 3 or index.month > 5:\n", " row['g0'] = np.nan\n", " \n", "data_frame_sub3 = deepcopy(data_frame_sub1)\n", "for index, row in data_frame_sub3.iterrows():\n", " if index.month > 2 and index.month < 6:\n", " row['g0'] = np.nan" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Plot the time series backscatter profile for March - May. Save the plot as a png (march2may_time_series_backscatter_profile.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Plot\n", "fig, ax = plt.subplots(figsize=(16, 4))\n", "data_frame_sub2.plot(ax=ax)\n", "plt.title(f'Sentinel-1 C-VV Time Series Backscatter Profile, Subset: {subset}')\n", "plt.ylabel('$\\gamma^o$ [dB]')\n", "plt.ylim(ylim)\n", "_ = plt.legend([\"C-VV Time Series (March - May)\"], loc='best')\n", "plt.grid()\n", "plt.savefig(f'{product_path}/march2may_time_series_backscatter_profile', dpi=72)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using numpy's **invert** function, we can invert a selection. In this example, we extract all other months from the time series.\n", "\n", "**Plot the time series backscatter profile for June - Feburary. 
Save the plot as a png (june2feb_time_series_backscatter_profile.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Plot\n", "fig, ax = plt.subplots(figsize=(16, 4))\n", "data_frame_sub3.plot(ax=ax)\n", "plt.title(f'Sentinel-1 C-VV Time Series Backscatter Profile, Subset: {subset}')\n", "plt.ylabel('$\gamma^o$ [dB]')\n", "plt.ylim(ylim)\n", "_ = plt.legend([\"C-VV Time Series (June-February)\"], loc='best')\n", "plt.grid()\n", "plt.savefig(f'{product_path}/june2feb_time_series_backscatter_profile', dpi=72)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 4.3 Split Time Series by Year to Compare Year-to-Year Patterns\n", "\n", "Sometimes it is useful to compare year-to-year $\gamma^0$ values to identify changes in backscatter characteristics. This helps to distinguish true change from seasonal variability.\n", "\n", "**Split the time series into different years:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data_frame_by_year = data_frame_sub1.groupby(pd.Grouper(freq=\"YE\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Plot the split time series. Save the plot as a png (yearly_time_series_backscatter_profile.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=(16, 4))\n", "for label, df in data_frame_by_year:\n", " df.g0.plot(ax=ax, label=label.year)\n", "plt.legend()\n", "plt.title('Sentinel-1 C-VV Time Series Backscatter Profile, Subset: {}'.format(subset))\n", "plt.ylabel('$\gamma^o$ [dB]')\n", "plt.ylim(ylim)\n", "plt.grid()\n", "plt.savefig(f'{product_path}/yearly_time_series_backscatter_profile', dpi=72)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 4.4 Create a Pivot Table to Group Years and Sort Data for Plotting Overlapping Time Series\n", "\n", "Pivot tables enable us to arrange and rearrange (or \"pivot\") statistics in order to draw attention to useful information. To do so, we first **add columns for day-of-year and year to the data frame:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Add day of year\n", "data_frame_sub1 = data_frame_sub1.assign(doy=data_frame_sub1.index.dayofyear)\n", "# Add year\n", "data_frame_sub1 = data_frame_sub1.assign(year=data_frame_sub1.index.year)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Create a pivot table which has day-of-year as the index and years as columns:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pivot_table = pd.pivot_table(data_frame_sub1, index=['doy'], columns=['year'], values=['g0'])\n", "# Set the names for the column indices\n", "pivot_table.columns.set_names(['g0', 'year'], inplace=True) \n", "print(pivot_table.head(10))\n", "print('...\\n', pivot_table.tail(10))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see, there are NaN values on the days in a year where no acquisition took place. Now we use time-weighted interpolation to fill these gaps between the observations in any given year.
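 As a toy illustration of pandas' time-based interpolation (illustrative only, separate from the workflow itself):\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Toy example (illustrative): time-weighted interpolation on a tiny series\n", "demo = pd.Series([0.0, np.nan, 3.0], index=pd.to_datetime(['2100-01-01', '2100-01-02', '2100-01-04']))\n", "print(demo.interpolate(method='time')) # Jan 2 is filled with 1.0 (one third of the gap)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "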
For **time weighted interpolation** to work we need to create a dummy year as a date index, perform the interpolation, and reset the index to the day of year.\n", "\n", "**Create a dummy year as a date index:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Add fake dates for year 2100 to enable time sensitive interpolation \n", "# of missing values in the pivot table\n", "year_doy = ['2100-{}'.format(x) for x in pivot_table.index]\n", "y100_doy = pd.DatetimeIndex(pd.to_datetime(year_doy, format='%Y-%j'))\n", "\n", "# make a copy of the pivot table and add two columns\n", "pivot_table_2 = pivot_table.copy()\n", "pivot_table_2 = pivot_table_2.assign(d100=y100_doy) # add the fake year dates\n", "pivot_table_2 = pivot_table_2.assign(doy=pivot_table_2.index) # add doy as a column to restore as the index later\n", "\n", "# Set the index to the dummy year\n", "pivot_table_2.set_index('d100', inplace=True, drop=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Perform the time-weighted interpolation:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pivot_table_2 = pivot_table_2.interpolate(method='time')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Reset the index to the day of year:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pivot_table_2.set_index('doy', inplace=True, drop=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Inspect the new pivot table and see whether we interpolated the NaN values where it made sense:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(pivot_table_2.head(10))\n", "print('...\\n', pivot_table_2.tail(10))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Plot the time series data with overlapping years. Save the plot as a png (overlapping_years_time_series_backscatter_profile.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pivot_table_2.plot(figsize=(16, 8))\n", "plt.title('Sentinel-1 C-VV Time Series Backscatter Profile,\\\n", "Subset: 5,20,5,5 ')\n", "plt.ylabel('$\gamma^o$ [dB]')\n", "plt.xlabel('Day of Year')\n", "_ = plt.ylim(ylim)\n", "plt.grid()\n", "plt.savefig(f'{product_path}/overlapping_years_time_series_backscatter_profile', dpi=72)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## 5. Time Series Change Detection\n", "\n", "Now we are ready to perform efficient change detection on the time series data. We will discuss two approaches:\n", "\n", "1. Year-to-year differencing of the subsetted time series\n", "1. Cumulative Sum-based change detection\n", "\n", "**Set a dB change threshold:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "threshold = 3" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Calculate the difference between years (2016 and 2017):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "diff_2017_2016 = pivot_table_2.g0[2017] - pivot_table_2.g0[2016]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 5.1 Change Detection based on Year-to-Year Differencing\n", "\n", "**Compute and plot the differences between the interpolated time series and look for change using a threshold value. 
Save the plot as a png (year2year_differencing_time_series.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "_ = diff_2017_2016.plot(kind='line', figsize=(16,8))\n", "plt.title('Year-to-Year Difference Time Series')\n", "plt.ylabel('$\Delta\gamma^o$ [dB]')\n", "plt.xlabel('Day of Year')\n", "plt.grid()\n", "plt.savefig(f'{product_path}/year2year_differencing_time_series', dpi=72)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Calculate the days-of-year on which the threshold was exceeded:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "threshold_exceeded = diff_2017_2016[abs(diff_2017_2016) > threshold]\n", "print(threshold_exceeded)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From the *threshold_exceeded* dataframe we can infer the first date at which the threshold was exceeded. We would label this date as a **change point**. As an additional criterion for labeling a change point, one can also consider the number of observations after an identified change point that also exceeded the threshold. If only one or two observations differed from the year before, this could be considered an outlier. Additional smoothing of the time series may sometimes be useful to avoid false detections.\n", "\n", "---\n", "### 5.2 Cumulative Sums for Change Detection\n", "\n", "Another approach to detect change in regularly acquired data is to employ the method of **cumulative sums**. Changes are determined by comparing the time series data against its mean. A full explanation and examples from the financial sector can be found at [http://www.variation.com/cpa/tech/changepoint.html](http://www.variation.com/cpa/tech/changepoint.html)\n", "\n", "---\n", "\n", "**5.2.A First let's consider a time series and its mean observation**: \n", "\n", "We look at two full years of observations from Sentinel-1 data for an area where we suspect change. In the following, we define $X$ as our time series\n", "\n", "\begin{equation}\n", "X = (X_1,X_2,...,X_n)\n", "\end{equation}\n", "\n", "with $X_i$ being the SAR backscatter values at times $i=1,...,n$ and $n$ the number of observations in the time series.\n", "\n", "**Create a time series of the subset and calculate the backscatter values:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "subset = (5, 20, 3, 3)\n", "time_series_1 = timeSeries(raster_stack_pwr, time_index, subset)\n", "backscatter_values = time_series_1[time_series_1.index>'2015-10-31']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "**5.2.B Filtering the time series for outliers**:\n", "\n", "It is advantageous in noisy SAR time series like those from C-Band Sentinel-1 data to reduce noise by **applying a filter along the time axis**. Pandas offers a *\"rolling\"* function for these purposes. Using the *rolling* function, we will apply a *median filter* to our data.\n", "\n", "**Calculate the median backscatter values and plot them against the original values. Save the plot as a png (original_vs_median_time_series.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "backscatter_values_median = backscatter_values.rolling(5, center=True).median()\n", "fig, ax = plt.subplots(figsize=(16, 4))\n", "backscatter_values_median.plot()\n", "backscatter_values.plot()\n", "plt.title('Original vs. 
Median Time Series')\n", "plt.ylabel('$\\gamma^o$ [dB]')\n", "plt.xlabel('Time')\n", "plt.grid()\n", "plt.savefig(f'{product_path}/original_vs_median_time_series', dpi=72)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Calculate the time series' mean value and plot it against the original values. Save the plot as a png (original_time_series_vs_mean_val.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=(16, 4))\n", "backscatter_values.plot()\n", "plt.title('Original Time Series vs. Mean Value')\n", "plt.ylabel('$\\gamma^o$ [dB]')\n", "ax.axhline(backscatter_values.mean(), color='red')\n", "_ = plt.legend(['$\\gamma^o$', '$\\overline{\\gamma^o}$'])\n", "plt.grid()\n", "plt.savefig(f'{product_path}/original_time_series_vs_mean_val', dpi=72)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Calculate the time series' mean value and plot it against the median values. Save the plot as a png (median_time_series_vs_mean_val.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "backscatter_values_mean = backscatter_values.mean()\n", "\n", "fig, ax = plt.subplots(figsize=(16, 4))\n", "backscatter_values_median.plot()\n", "plt.title('Median Time Series vs. Mean Value')\n", "plt.ylabel('$\\gamma^o$ [dB]')\n", "ax.axhline(backscatter_values.mean(), color='red')\n", "_ = plt.legend(['$\\gamma^o$', '$\\overline{\\gamma^o}$'])\n", "plt.grid()\n", "plt.savefig(f'{product_path}/median_time_series_vs_mean_val', dpi=72)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "**5.2.C Calculate the Residuals of the Time Series Against the Mean $\\overline{\\gamma^o}$**:\n", "\n", "To get to the residual, we calculate \n", "\n", "\\begin{equation}\n", "R = X_i - \\overline{X}\n", "\\end{equation}\n", "\n", "**Calculate the residuals:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "residuals = backscatter_values - backscatter_values_mean" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "**5.2.D Calculate Cumulative Sum of the Residuals**:\n", "The cumulative sum is defined as: \n", "\n", "\\begin{equation}\n", "S = \\displaystyle\\sum_1^n{R_i}\n", "\\end{equation}\n", "\n", "**Calculate and plot the cumulative sum of the residuals. Save the plot as a png (cumulative_sum_residuals.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sums = residuals.cumsum()\n", "\n", "_ = sums.plot(figsize=(16, 6))\n", "plt.title('Cumulative Sum of the Residuals')\n", "plt.ylabel('Cummulative Sum $S$ [dB]')\n", "plt.xlabel('Time')\n", "plt.grid()\n", "plt.savefig(f'{product_path}/cumulative_sum_residuals', dpi=72)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The **cumulative sum** is a good indicator of change in the time series. 
An estimator for the magnitude of change is given as the difference between the maximum and minimum value of the cumulative sum $S$: \n", "\n", "\begin{equation}\n", "S_{DIFF} = S_{MAX} - S_{MIN}\n", "\end{equation}\n", "\n", "**Calculate the magnitude of change:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "change_mag = sums.max() - sums.min()\n", "print('Change magnitude: %5.3f dB' % (change_mag))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "**5.2.E Identify Change Point in the Time Series**:\n", "A candidate change point is identified from $S$ at the time where $S_{MAX}$ is found:\n", "\n", "\begin{equation}\n", "T_{{CP}_{before}} = T(S_i = S_{MAX})\n", "\end{equation}\n", "\n", "with $T_{{CP}_{before}}$ being the timestamp of the last observation *before* the identified change point, $S_i$ the cumulative sum of $R$ with $i=1,...,n$, and $n$ the number of observations in the time series. \n", "\n", "The first observation *after* a change occurred ($T_{{CP}_{after}}$) is then found as the first observation in the time series following $T_{{CP}_{before}}$.\n", "\n", "**Calculate $T_{{CP}_{before}}$:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "change_point_before = sums[sums==sums.max()].index[0]\n", "print('Last date before change occurred: {}'.format(change_point_before.date()))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Calculate $T_{{CP}_{after}}$:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "change_point_after = sums[sums.index > change_point_before].index[0]\n", "print('First date after change occurred: {}'.format(change_point_after.date()))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "**5.2.F Determine our Confidence in the Identified Change Point using Bootstrapping:**\n", "We can determine if an identified change point is indeed a valid detection by **randomly reordering the time series** and **comparing the various $S$ curves**. During this **\"bootstrapping\"** approach, we count how many times $S_{DIFF}$ of the candidate change point is greater than $S_{{DIFF}_{random}}$ of the randomly reordered series. 
\n", " \n", "After bootstrapping, we define the **confidence level $CL$** in a detected change point according to:\n", "\n", "\\begin{equation}\n", "CL = \\frac{N_{GT}}{N_{bootstraps}}\n", "\\end{equation}\n", "\n", "where $N_{GT}$ is the number of times $S_{DIFF}$ > $S_{{DIFF}_{random}}$ and $N_{bootstraps}$ is the number of bootstraps randomizing $R$.\n", "\n", "As another quality metric we can also calculate the **significance $CP_{significance}$** of a change point according to: \n", "\n", "\\begin{equation}\n", "CP_{significance} = 1 - \\left( \\frac{\\sum_{b=1}^{N_{bootstraps}}{S_{{DIFF}_{{random}_i}}}}{N_{bootstraps}} \\middle/ S_{DIFF} \\right)\n", "\\end{equation}\n", "\n", "The closer $CP_{significance}$ is to 1, the more significant the change point.\n", "\n", "**Write a function that implements the bootstrapping algorithm:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# pyplot must be imported as plt\n", "import random\n", "def bootstrap(n_bootstraps, name, sums, residuals, output_file=False):\n", " fig, ax = plt.subplots(figsize=(16,6))\n", " ax.set_ylabel('Cumulative Sums of the Residuals')\n", " change_mag_random_sum = 0\n", " change_mag_random_max = 0 # to keep track of the maximum change magnitude of the bootstrapped sample\n", " qty_change_mag_above_random = 0 # to keep track of the maximum Sdiff of the bootstrapped sample\n", " print(\"Running Bootstrapping for %4.1f iterations ...\" % (n_bootstraps))\n", " colors = ['C0', 'C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8', 'C9', 'c', 'm', 'y', 'k', 'g']\n", " for i in range(n_bootstraps):\n", " r_random = residuals.sample(frac=1) # Randomize the time steps of the residuals\n", " residuals_random = pd.Series(r_random.values, index=residuals.index)\n", "\n", " sums_random = residuals_random.cumsum()\n", " change_mag_random = sums_random.max() - sums_random.min()\n", " change_mag_random_sum += change_mag_random\n", " if change_mag_random > change_mag_random_max:\n", " change_mag_random_max = change_mag_random\n", " if change_mag > change_mag_random:\n", " qty_change_mag_above_random += 1\n", "\n", " sums_random.plot(ax=ax, color=random.choice(colors), label='_nolegend_')\n", " \n", " if ((i+1)/n_bootstraps*100) % 10 == 0:\n", " print(\"\\r%4.1f percent completed ...\" % ((i+1)/n_bootstraps*100), end='\\r', flush=True)\n", " sums.plot(ax=ax, color='r', linewidth=3)\n", " fig.legend(['S Curve for Candidate Change Point'])\n", " print(f\"Bootstrapping Complete\")\n", " _ = ax.axhline(change_mag_random_sum/n_bootstraps, color='b')\n", " plt.grid()\n", " if output_file:\n", " plt.savefig(f\"{product_path}/bootstrap_{name}_{n_bootstraps}\", dpi=72)\n", " print(f\"Saved plot: bootstrap_{name}_{n_bootstraps}.png\")\n", " return [qty_change_mag_above_random, change_mag_random_sum] " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "n_bootstraps = 2000\n", "bootstrapped_change_mag = bootstrap(n_bootstraps, \"\", sums, residuals, output_file=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Call the bootstrap function with a sample size of 2000:**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "Based on the bootstrapping results, we can now calculate **Confidence Level $CL$** and **Significance $CP_{significance}$** for our candidate change point.\n", "\n", "**Calculate the confidence level:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ 
"confidence_level = 1.0 * bootstrapped_change_mag[0] / n_bootstraps\n", "print('Confidence Level for change point {} percent'.format(confidence_level*100.0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Calculate the change point significance:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "change_point_significance = 1.0 - (bootstrapped_change_mag[1]/n_bootstraps)/change_mag \n", "print('Change point significance metric: {}'.format(change_point_significance))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "**5.2.G TRICK: Detrending of Time Series Before Change Detection to Improve Robustness:**\n", "\n", "De-trending the time series with global image means improves the robustness of change point detection as global image time series anomalies stemming from calibration or seasonal trends are removed prior to time series analysis. This de-trending needs to be performed with large subsets so real change is not influencing the image statistics. \n", "\n", "NOTE: Due to the small size of our subset, we will see some distortions when we detrend the time series.\n", "\n", "**Let's start by building a global image means time series and plot the global means. Save the plot as a png (global_means_time_series.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "means_pwr = np.mean(raster_stack_pwr, axis=(1, 2))\n", "means_dB = 10.0 * np.log10(means_pwr)\n", "global_means_ts = pd.Series(means_dB, index=time_index)\n", "global_means_ts = global_means_ts[global_means_ts.index > '2015-10-31'] # filter dates\n", "global_means_ts = global_means_ts.rolling(5, center=True).median()\n", "global_means_ts.plot(figsize=(16, 6))\n", "plt.title('Time Series of Global Means')\n", "plt.ylabel('[dB]')\n", "plt.xlabel('Time')\n", "plt.grid()\n", "plt.savefig(f\"{product_path}/global_means_time_series\", dpi=72)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Compare the time series of global means (above) to the time series of our small subset (below):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "backscatter_values.plot(figsize=(16, 6))\n", "plt.title('Sentinel-1 C-VV Time Series Backscatter Profile,\\\n", "Subset: 5,20,5,5 ')\n", "plt.ylabel('[dB]')\n", "plt.xlabel('Time')\n", "plt.grid()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are some signatures of the global seasonal trend in our subset time series. To remove these signatures and get a cleaner time series of change, we subtract the global mean time series from our subset time series.\n", "\n", "**De-trend the subset and re-plot the backscatter profile. 
Save the plot (detrended_time_series.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "backscatter_minus_seasonal = backscatter_values - global_means_ts\n", "backscatter_minus_seasonal.plot(figsize=(16, 6))\n", "plt.title('De-trended Sentinel-1 C-VV Time Series Backscatter Profile, Subset: 5,20,5,5')\n", "plt.ylabel('[dB]')\n", "plt.xlabel('Time')\n", "plt.grid()\n", "plt.savefig(f\"{product_path}/detrended_time_series\", dpi=72)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Save a plot comparing the original, global means, and detrended time-series (globalMeans_original_detrended_time_series.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "means_pwr = np.mean(raster_stack_pwr, axis=(1, 2))\n", "means_dB = 10.0 * np.log10(means_pwr)\n", "global_means_ts = pd.Series(means_dB, index=time_index)\n", "global_means_ts = global_means_ts[global_means_ts.index > '2015-10-31'] # filter dates\n", "global_means_ts = global_means_ts.rolling(5, center=True).median()\n", "global_means_ts.plot(figsize=(16, 6))\n", "backscatter_values.plot(figsize=(16, 6))\n", "backscatter_minus_seasonal = (backscatter_values - global_means_ts)\n", "backscatter_minus_seasonal.plot(figsize=(16, 6))\n", "plt.title('Global Means, Original, and De-trended Time Series')\n", "plt.ylabel('[dB]')\n", "plt.xlabel('Time')\n", "plt.legend(['Global Means TS', 'Backscatter', 'Detrended Backscatter'], loc='best')\n", "plt.grid()\n", "plt.savefig(f\"{product_path}/globalMeans_original_detrended_time_series\", dpi=72)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Recalculate the residuals based on the de-trended data:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "residuals = backscatter_minus_seasonal - backscatter_values_mean" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Compute, plot, and save the cumulative sum of the detrended time series (cumulative_sum_detrended_time_series.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sums = residuals.cumsum()\n", "_ = sums.plot(figsize=(16, 6))\n", "plt.title(\"Cumulative Sum of the Detrended Time Series\")\n", "plt.ylabel('CumSum $S$ [dB]')\n", "plt.xlabel('Time')\n", "plt.grid()\n", "plt.savefig(f\"{product_path}/cumulative_sum_detrended_time_series\", dpi=72)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Detect Change Point and extract related change dates:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "detrended_change_point_before = sums[sums==sums.max()].index[0]\n", "print('Last date before change occurred: {}'.format(detrended_change_point_before.date()))\n", "\n", "detrended_change_point_after = sums[sums.index > detrended_change_point_before].index[0]\n", "print('First date after change occurred: {}'.format(detrended_change_point_after.date()))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Perform bootstrapping:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "n_bootstraps = 2000\n", "bootstrapped_change_mag = bootstrap(n_bootstraps, \"detrended\", sums, residuals, output_file=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Calculate the confidence level:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ 
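"# Same CL definition as before, now applied to the detrended series\n", 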
"detrended_confidence_level = bootstrapped_change_mag[0] / n_bootstraps\n", "print('Confidence Level for change point {} percent'.format(confidence_level*100.0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "Note how the **change point significance $CP_{significance}$** has increased in the detrended time series:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "detrended_change_point_significance = 1.0 - (bootstrapped_change_mag[1]/n_bootstraps) / change_mag \n", "print('Change point significance metric: {}'.format(change_point_significance))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## 6. Cumulative Sum-based Change Detection Across an Entire Image \n", "\n", "With numpy arrays we can apply the concept of **cumulative sum change detection** analysis effectively on the entire image stack. We take advantage of array slicing and axis-based computing in numpy. Axis 0 is the time domain in our raster stacks.\n", "\n", "---\n", " \n", "### 6.1 We first create our time series stack:\n", "\n", "**Filter out the first layer (Keep Dates >= '2015-11-17'):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "raster_stack = raster_stack_pwr\n", "raster_stack_sub = raster_stack_pwr[1:, :, :]\n", "time_index_sub = time_index[1:]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Run the following code cell if you wish to change to dB scale:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "raster_stack = 10.0 * np.log10(raster_stack_sub)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Plot and save Band-1 (band_1.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(12, 8))\n", "band_number = 0\n", "vmin = np.percentile(raster_stack[band_number], 5)\n", "vmax = np.percentile(raster_stack[band_number], 95)\n", "plt.title('Band  {} {}'.format(band_number+1, time_index_sub[band_number].date()))\n", "plt.imshow(raster_stack[0], cmap='gray', vmin=vmin, vmax=vmax)\n", "_ = plt.colorbar()\n", "plt.savefig(f'{product_path}/band_1.png', dpi=300, transparent='true')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Save the plot as a GeoTiff (band_1.tiff):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "geotiff_from_plot(raster_stack[0], f'{product_path}/band_1', coords, utm_zone, cmap='gray', dpi=600)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 6.2 Calculate Mean Across Time Series to Prepare for Calculation of Cumulative Sum $S$: \n", "\n", "**Plot and save the the raster stack mean (raster_stack_mean.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "raster_stack_mean = np.mean(raster_stack, axis=0)\n", "plt.figure(figsize=(12, 8))\n", "plt.imshow(raster_stack_mean, cmap='gray')\n", "_ = plt.colorbar()\n", "plt.savefig(f'{product_path}/raster_stack_mean.png', dpi=300, transparent='true')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Save the raster stack mean as a GeoTiff (raster_stack_mean.tiff):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "geotiff_from_plot(raster_stack_mean, f'{product_path}/raster_stack_mean', coords, utm_zone, cmap='gray', dpi=600)" ] }, { 
"cell_type": "markdown", "metadata": {}, "source": [ "**Calculate the residuals:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "residuals = raster_stack - raster_stack_mean" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Close img, as it is no longer needed in the notebook:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "radar_stack = None" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Plot and save the residuals for band 1 (residuals_band_1.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(12, 8))\n", "plt.imshow(residuals[0])\n", "plt.title('Residuals for Band  {} {}'.format(band_number+1, time_index_sub[band_number].date()))\n", "_ = plt.colorbar()\n", "plt.savefig(f'{product_path}/residuals_band_1', dpi=300, transparent='true')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Save the residuals for band 1 as a GeoTiff (residuals_band_1.tiff):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "geotiff_from_plot(residuals[0], f'{product_path}/residuals_band_1', coords, utm_zone, dpi=600)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 6.3 Calculate Cumulative Sum $S$ as well as Change Magnitude $S_{diff}$:\n", "\n", "**Plot and save the cumulative sum max, min, and change magnitude (Smax_Smin_Sdiff.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sums = np.cumsum(residuals, axis=0)\n", "sums_max = np.max(sums, axis=0)\n", "sums_min = np.min(sums, axis=0)\n", "change_mag = sums_max - sums_min\n", "fig, ax = plt.subplots(1, 3, figsize=(16, 4))\n", "vmin = sums_min.min()\n", "vmax = sums_max.max()\n", "sums_max_plot = ax[0].imshow(sums_max, vmin=vmin, vmax=vmax)\n", "ax[0].set_title('$S_{max}$')\n", "ax[1].imshow(sums_min, vmin=vmin, vmax=vmax)\n", "ax[1].set_title('$S_{min}$')\n", "ax[2].imshow(change_mag, vmin=vmin, vmax=vmax)\n", "ax[2].set_title('Change Magnitude')\n", "fig.subplots_adjust(right=0.8)\n", "cbar_ax = fig.add_axes([0.85, 0.15, 0.02, 0.7])\n", "_ = fig.colorbar(sums_max_plot, cax=cbar_ax)\n", "plt.savefig(f'{product_path}/Smax_Smin_Sdiff', dpi=300, transparent='true')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Save Smax as a GeoTiff (Smax.tiff):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "geotiff_from_plot(sums_max, f'{product_path}/Smax', coords, utm_zone, vmin=vmin, vmax=vmax)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Save Smin as a GeoTiff (Smin.tiff):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "geotiff_from_plot(sums_min, f'{product_path}/Smin', coords, utm_zone, vmin=vmin, vmax=vmax)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Save the change magnitude as a GeoTiff (Sdiff.tiff):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "geotiff_from_plot(change_mag, f'{product_path}/Sdiff', coords, utm_zone, vmin=vmin, vmax=vmax)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 6.4 Mask $S_{diff}$ With a-priori Threshold To Idenfity Change Candidate Pixels:\n", "\n", "To identified change candidate pixels, we can 
threshold $S_{diff}$ to reduce the computational cost of the bootstrapping. For land cover change we would not expect more than 5-10% change pixels in a landscape. So, if the test region is reasonably large, setting a threshold for expected change to 10% is appropriate. In our example we'll start out with a very conservative threshold of 20%.\n", "\n", "**Plot and save the histogram for the change magnitude and the change magnitude cumulative distribution function (Sdiff_histogram_CDF.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.rcParams.update({'font.size': 14})\n", "fig = plt.figure(figsize=(14, 6)) # Initialize figure with a size\n", "ax1 = fig.add_subplot(121) # 121 determines: 1 row, 2 plots, first plot\n", "ax2 = fig.add_subplot(122)\n", "# First plot: Histogram\n", "# IMPORTANT: To get a histogram, we first need to *flatten* \n", "# the two-dimensional image into a one-dimensional vector.\n", "histogram = ax1.hist(change_mag.flatten(), bins=200, range=(0, np.max(change_mag)))\n", "ax1.xaxis.set_label_text('Change Magnitude')\n", "ax1.set_title('Change Magnitude Histogram')\n", "plt.grid()\n", "n, bins, patches = ax2.hist(change_mag.flatten(), bins=200, range=(0, np.max(change_mag)), cumulative=True, density=True, histtype='step', label='Empirical')\n", "ax2.xaxis.set_label_text('Change Magnitude')\n", "ax2.set_title('Change Magnitude CDF')\n", "plt.grid()\n", "plt.savefig(f'{product_path}/Sdiff_histogram_CDF', dpi=72, transparent=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using this threshold, we can create a plot to **visualize our change candidate areas. Save the plot (change_candidate.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "percentile = 0.8\n", "out_indices = np.where(n > percentile)\n", "threshold_indices = np.min(out_indices)\n", "threshold = bins[threshold_indices]\n", "print('At the {:.0f}th percentile, the threshold value is {:2.2f}'.format(percentile*100, threshold))\n", "\n", "change_mag_mask = change_mag < threshold\n", "plt.figure(figsize=(12, 8))\n", "plt.title('Change Candidate Areas (black)')\n", "_ = plt.imshow(change_mag_mask, cmap='gray')\n", "plt.savefig(f'{product_path}/change_candidate', dpi=300, transparent=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Save the change candidate plot as a GeoTiff (change_candidate.tiff):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "geotiff_from_plot(change_mag_mask, f'{product_path}/change_candidate', coords, utm_zone, cmap='gray')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 6.5 Bootstrapping to Prepare for Change Point Selection:\n", "\n", "We can now perform bootstrapping over the candidate pixels. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 6.5 Bootstrapping to Prepare for Change Point Selection:\n", "\n", "We can now perform bootstrapping over the candidate pixels. The workflow is as follows:\n", "\n", "- Filter our residuals to the change candidate pixels\n", "- Perform bootstrapping over the candidate pixels\n", "\n", "**Mask the residuals of all pixels whose change magnitude falls below the candidate threshold:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# mask=True marks pixels that are *excluded* from further processing\n", "residuals_mask = np.broadcast_to(change_mag_mask, residuals.shape)\n", "residuals_masked = np.ma.array(residuals, mask=residuals_mask)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using the masked residuals, **re-compute the cumulative sums:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sums_masked = np.ma.cumsum(residuals_masked, axis=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Plot the min sums, max sums, and change magnitude of the masked subset (masked_Smax_Smin_Sdiff.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sums_masked_max = np.ma.max(sums_masked, axis=0)\n", "sums_masked_min = np.ma.min(sums_masked, axis=0)\n", "# note: change_mag now refers to the masked change magnitude\n", "change_mag = sums_masked_max - sums_masked_min\n", "fig, ax = plt.subplots(1, 3, figsize=(16, 4))\n", "vmin = sums_masked_min.min()\n", "vmax = sums_masked_max.max()\n", "sums_masked_max_plot = ax[0].imshow(sums_masked_max, vmin=vmin, vmax=vmax)\n", "ax[0].set_title('$S_{max}$')\n", "ax[1].imshow(sums_masked_min, vmin=vmin, vmax=vmax)\n", "ax[1].set_title('$S_{min}$')\n", "ax[2].imshow(change_mag, vmin=vmin, vmax=vmax)\n", "ax[2].set_title('Change Magnitude')\n", "fig.subplots_adjust(right=0.8)\n", "cbar_ax = fig.add_axes([0.85, 0.15, 0.02, 0.7])\n", "_ = fig.colorbar(sums_masked_max_plot, cax=cbar_ax)\n", "plt.savefig(f'{product_path}/masked_Smax_Smin_Sdiff', dpi=300, transparent=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Save the Smax of the masked subset as a GeoTiff (masked_Smax.tiff):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "geotiff_from_plot(sums_masked_max, f'{product_path}/masked_Smax', coords, utm_zone, vmin=vmin, vmax=vmax)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Save the Smin of the masked subset as a GeoTiff (masked_Smin.tiff):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "geotiff_from_plot(sums_masked_min, f'{product_path}/masked_Smin', coords, utm_zone, vmin=vmin, vmax=vmax)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Save the change magnitude of the masked subset as a GeoTiff (masked_Sdiff.tiff):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "geotiff_from_plot(change_mag, f'{product_path}/masked_Sdiff', coords, utm_zone, vmin=vmin, vmax=vmax)" ] },
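{ "cell_type": "markdown", "metadata": {}, "source": [ "The idea behind the bootstrap, demonstrated once more on the synthetic series from Section 6.3: randomly permuting the time axis destroys any temporal structure, so the change magnitude of a shuffled series is a sample from the no-change null distribution. The fraction of shuffles in which the observed $S_{diff}$ exceeds the shuffled $S_{diff}$ then serves as a confidence level for the detected change." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Shuffle the synthetic residuals many times; each shuffle yields one sample\n", "# of S_diff under the no-change null hypothesis (illustration only).\n", "observed = toy_sums.max() - toy_sums.min()\n", "exceed_count = 0\n", "for _ in range(1000):\n", "    shuffled_sums = np.cumsum(rng.permutation(toy_res))\n", "    if observed > shuffled_sums.max() - shuffled_sums.min():\n", "        exceed_count += 1\n", "print('Confidence level for the synthetic change: {:.2f}'.format(exceed_count/1000))" ] },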
"change_mag_random_sum[~change_mag_random_max.mask] = 0\n", "# to keep track of the count of the bootstrapped sample\n", "qty_change_mag_above_random = change_mag_random_sum\n", "qty_change_mag_above_random[~qty_change_mag_above_random.mask] = 0\n", "print(\"Running Bootstrapping for %4.1f iterations ...\" % (n_bootstraps))\n", "for i in range(n_bootstraps):\n", " # For efficiency, we shuffle the time axis index and use that \n", " #to randomize the masked array\n", " random_index = np.random.permutation(residuals_masked.shape[0])\n", " # Randomize the time step of the residuals\n", " residuals_random = residuals_masked[random_index, :, :] \n", " sums_random = np.ma.cumsum(residuals_random, axis=0)\n", " sums_random_max = np.ma.max(sums_random, axis=0)\n", " sums_random_min = np.ma.min(sums_random, axis=0)\n", " change_mag_random = sums_random_max - sums_random_min\n", " change_mag_random_sum += change_mag_random\n", " change_mag_random_max[np.ma.greater(change_mag_random, change_mag_random_max)] = \\\n", " change_mag_random[np.ma.greater(change_mag_random, change_mag_random_max)]\n", " qty_change_mag_above_random[np.ma.greater(change_mag, change_mag_random)] += 1\n", " if ((i+1)/n_bootstraps*100)%10 == 0:\n", " print(\"\\r%4.1f percent completed ...\" % ((i+1)/n_bootstraps*100), end='\\r', flush=True)\n", "print(f\"Bootstrapping Complete. \")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 6.6 Extract Confidence Metrics and Select Final Change Points:\n", "\n", "**We first compute for all pixels the confidence level $CL$, the change point significance metric $CP_{significance}$ and the product of the two as our confidence metric for identified change points. Plot and save them (confidence_change_point.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "confidence_level = qty_change_mag_above_random / n_bootstraps\n", "change_point_significance = 1.0 - (change_mag_random_sum/n_bootstraps) / change_mag \n", "#Plot\n", "fig, ax = plt.subplots(1, 3 ,figsize=(16, 4))\n", "a = ax[0].imshow(confidence_level*100)\n", "fig.colorbar(a, ax=ax[0])\n", "ax[0].set_title('Confidence Level %')\n", "a = ax[1].imshow(change_point_significance)\n", "fig.colorbar(a, ax=ax[1])\n", "ax[1].set_title('Change Point Significance')\n", "a = ax[2].imshow(confidence_level*change_point_significance)\n", "fig.colorbar(a, ax=ax[2])\n", "_ = ax[2].set_title('Confidence Level\\nx\\nChange Point Significance')\n", "plt.savefig(f'{product_path}/confidence_change_point', dpi=300, transparent='true')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Save the confidence level of the masked subset as a GeoTiff (confidence_level.tiff):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "geotiff_from_plot(confidence_level*100, f'{product_path}/confidence_level', coords, utm_zone)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Save the change point significance of the masked subset as a GeoTiff (change_point.tiff):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "geotiff_from_plot(change_point_significance, f'{product_path}/change_point', coords, utm_zone)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Save the confidence level x change point significance of the masked subset as a GeoTiff (CL_x_CP.tiff):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, 
"outputs": [], "source": [ "%%capture\n", "geotiff_from_plot(confidence_level*change_point_significance, f'{product_path}/CL_x_CP', coords, utm_zone)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Set a change point threshold of 5:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "change_point_threshold = 5" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Create and save a plot showing the final change points (change_point_thresh.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig = plt.figure(figsize=(12, 8))\n", "ax = fig.add_subplot(1, 1, 1)\n", "plt.title('Detected Change Pixels based on Threshold %2.1f' % (change_point_threshold))\n", "a = ax.imshow(confidence_level*change_point_significance < change_point_threshold, cmap='cool')\n", "plt.savefig(f'{product_path}/change_point_thresh', dpi=300, transparent='true')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Save the thresholded change point significance of the masked subset as a GeoTiff (change_point_thresh.tiff):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "geotiff_from_plot(confidence_level*change_point_significance < change_point_threshold, \n", " f'{product_path}/change_point_thresh', coords, utm_zone, cmap='cool')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 6.7 Derive Timing of Change for Each Change Pixel:\n", "\n", "Our last step in the identification of the change points is to extract the timing of the change. We will produce a raster layer that shows the band number of the first date after a change was detected. We will make use of the numpy indexing scheme.\n", "\n", "**Create a combined mask of the first threshold and the identified change points after the bootstrapping:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# make a mask of our change points from the new threhold and the previous mask\n", "change_point_mask = np.ma.mask_or(confidence_level*change_point_significance'2015-10-31']\n", "change_dates = [str(all_dates[x].date()) for x in change_indices]\n", "print(f\"\\nChange Dates:\\n{change_dates}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Plot the change dates using the change point index raster and save it as a png (change_dates.png):**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ticks = change_indices\n", "tick_labels = change_dates\n", "\n", "cmap = matplotlib.colormaps.get_cmap('tab20')\n", "fig, ax = plt.subplots(figsize=(12, 12))\n", "cax = ax.imshow(change_point_index, interpolation='nearest', cmap=cmap)\n", "\n", "ax.set_title('Dates of Change')\n", "cbar = fig.colorbar(cax, ticks=ticks, orientation='horizontal')\n", "_ = cbar.ax.set_xticklabels(tick_labels, size=10, rotation=45, ha='right')\n", "plt.savefig(f'{product_path}/change_dates', dpi=300, transparent='true')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Save the Dates of Change plot as a GeoTiff (change_dates.tiff):**\n", "\n", "Note: The GeoTiff does not include a colorbar. Date/color correlations can be identified in change_dates.png." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "geotiff_from_plot(change_point_index, f'{product_path}/change_dates', coords, utm_zone, interpolation='nearest', cmap=cmap)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*Change_Detection_Amplitude_Time_Series_Example.ipynb - Version 1.5.1 - February 2024*\n", "\n", "*Version Changes*\n", "\n", "- *Adjust how we create the residuals_random Pandas.Series in the bootstrapping function to support Pandas updates*\n", "- *Use matplotlib.colormaps.get_cmap*" ] } ], "metadata": { "kernelspec": { "display_name": "rtc_analysis [conda env:.local-rtc_analysis]", "language": "python", "name": "conda-env-.local-rtc_analysis-py" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.1" } }, "nbformat": 4, "nbformat_minor": 4 }