{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "![OpenSARlab notebook banner](NotebookAddons/blackboard-banner.png)\n", "\n", "# Prepare a SAR Data Stack\n", "### Joseph H Kennedy and Alex Lewandowski; Alaska Satellite Facility\n", "\n", "This notebook downloads an ASF-HyP3 RTC project and prepares a deep multi-temporal SAR image data stack for use in other notebooks.\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Important Note about JupyterHub\n", "\n", "**Your JupyterHub server will automatically shutdown when left idle for more than 1 hour. Your notebooks will not be lost but you will have to restart their kernels and re-run them from the beginning. You will not be able to seamlessly continue running a partially run notebook.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import url_widget as url_w\n", "notebookUrl = url_w.URLWidget()\n", "display(notebookUrl)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from IPython.display import Markdown\n", "from IPython.display import display\n", "\n", "notebookUrl = notebookUrl.value\n", "user = !echo $JUPYTERHUB_USER\n", "env = !echo $CONDA_PREFIX\n", "if env[0] == '':\n", " env[0] = 'Python 3 (base)'\n", "if env[0] != '/home/jovyan/.local/envs/rtc_analysis':\n", " display(Markdown(f'WARNING:'))\n", " display(Markdown(f'This notebook should be run using the \"rtc_analysis\" conda environment.'))\n", " display(Markdown(f'It is currently using the \"{env[0].split(\"/\")[-1]}\" environment.'))\n", " display(Markdown(f'Select the \"rtc_analysis\" from the \"Change Kernel\" submenu of the \"Kernel\" menu.'))\n", " display(Markdown(f'If the \"rtc_analysis\" environment is not present, use Create_OSL_Conda_Environments.ipynb to create it.'))\n", " display(Markdown(f'Note that you must restart your server after creating a new environment before it is usable by notebooks.'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 0. Importing Relevant Python Packages\n", "\n", "In this notebook we will use the following scientific libraries:\n", "\n", "1. [GDAL](https://www.gdal.org/) is a software library for reading and writing raster and vector geospatial data formats. It includes a collection of programs tailored for geospatial data processing. Most modern GIS systems (such as ArcGIS or QGIS) use GDAL in the background.\n", "1. [NumPy](http://www.numpy.org/) is one of the principal packages for scientific applications of Python. It is intended for processing large multidimensional arrays and matrices, and an extensive collection of high-level mathematical functions and implemented methods makes it possible to perform various operations with these objects.\n", "\n", "**Our first step is to import them:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%capture\n", "import copy\n", "from datetime import datetime, timedelta, timezone\n", "import json # for loads\n", "from pathlib import Path\n", "import re\n", "import shutil\n", "import warnings\n", "\n", "from osgeo import gdal\n", "gdal.UseExceptions()\n", "import numpy as np\n", "\n", "from IPython.display import display, clear_output, Markdown\n", "\n", "import opensarlab_lib as asfn\n", "\n", "from hyp3_sdk import Batch, HyP3" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Load Your Own Data Stack Into the Notebook\n", "\n", "This notebook assumes that you've created your own data stack over your personal area of interest using the [Alaska Satellite Facility's](https://www.asf.alaska.edu/) value-added product system HyP3, available via [ASF Data Search (Vertex)](https://search.asf.alaska.edu/#/). HyP3 is an ASF service used to prototype value-added products and provide them to users for feedback.\n", "\n", "We will retrieve HyP3 data via the hyp3_sdk. Because both HyP3 and the notebook environment sit in the [Amazon Web Services (AWS)](https://aws.amazon.com/) cloud, data transfer is quick and cost-effective.\n", "\n", "---\n", "\n", "Before we download anything, create a working directory for this analysis.\n", "\n", "**Select or create a working directory for the analysis:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "while True:\n", "    print(f\"Current working directory: {Path.cwd()}\")\n", "    data_dir = Path(input(\"\nPlease enter the name of a directory in which to store your data for this analysis. It should be in the same parent directory as this notebook: \"))\n", "    if data_dir == Path('.'):\n", "        continue\n", "    if data_dir.is_dir():\n", "        contents = data_dir.glob('*')\n", "        if len(list(contents)) > 0:\n", "            choice = asfn.handle_old_data(data_dir)\n", "            if choice == 1:\n", "                if data_dir.exists():\n", "                    shutil.rmtree(data_dir)\n", "                data_dir.mkdir()\n", "                break\n", "            elif choice == 2:\n", "                break\n", "            else:\n", "                clear_output()\n", "                continue\n", "        else:\n", "            break\n", "    else:\n", "        data_dir.mkdir()\n", "        break" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "**Define the absolute path to the analysis directory:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "analysis_directory = Path.cwd().joinpath(data_dir)\n", "print(f\"analysis_directory: {analysis_directory}\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Create a HyP3 object and authenticate:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "hyp3 = HyP3(prompt=True)" ] },
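{ "cell_type": "markdown", "metadata": {}, "source": [ "`HyP3(prompt=True)` asks for your NASA Earthdata Login credentials interactively. If you run this notebook unattended, the constructor also accepts explicit `username` and `password` arguments. Below is a minimal sketch assuming credentials stored in environment variables; the names `EDL_USERNAME` and `EDL_PASSWORD` are illustrative, not a convention required by hyp3_sdk:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional sketch: non-interactive authentication.\n", "# EDL_USERNAME / EDL_PASSWORD are illustrative environment variable names.\n", "import os\n", "\n", "edl_username = os.environ.get('EDL_USERNAME')\n", "edl_password = os.environ.get('EDL_PASSWORD')\n", "if edl_username and edl_password:\n", "    hyp3 = HyP3(username=edl_username, password=edl_password)" ] },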
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Select a product type to download:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "job_types = ['RTC_GAMMA', 'INSAR_GAMMA', 'AUTORIFT']\n", "job_type = asfn.select_parameter(job_types)\n", "job_type" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Decide whether to search for a HyP3 project or for jobs unattached to a project:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "options = ['project', 'projectless jobs']\n", "search_type = asfn.select_parameter(options, '')\n", "print(\"Select whether to search for a HyP3 project or for HyP3 jobs unattached to a project:\")\n", "display(search_type)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**List projects containing active products of the type chosen in the previous cell and select one:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "my_hyp3_info = hyp3.my_info()\n", "active_projects = dict()\n", "\n", "if search_type.value == 'project':\n", "    for project in my_hyp3_info['job_names']:\n", "        batch = hyp3.find_jobs(name=project, job_type=job_type.value).filter_jobs(running=False, include_expired=False)\n", "        if len(batch) > 0:\n", "            active_projects.update({batch.jobs[0].name: batch})\n", "\n", "    if len(active_projects) > 0:\n", "        display(Markdown(\"Note: After selecting a project, you must select the next cell before hitting the 'Run' button or typing Shift/Enter.\"))\n", "        display(Markdown(\"Otherwise, you will simply rerun this code cell.\"))\n", "        print('\nSelect a Project:')\n", "        project_select = asfn.select_parameter(active_projects.keys())\n", "        display(project_select)\n", "\n", "if search_type.value == 'projectless jobs' or len(active_projects) == 0:\n", "    project_select = False\n", "    if search_type.value == 'project':\n", "        print(f\"There were no {job_type.value} jobs found in any current projects.\n\")\n", "    jobs = hyp3.find_jobs(job_type=job_type.value).filter_jobs(running=False, include_expired=False)\n", "    orphaned_jobs = Batch()\n", "    for j in jobs:\n", "        if not j.name:\n", "            orphaned_jobs += j\n", "    jobs = orphaned_jobs\n", "\n", "    if len(jobs) > 0:\n", "        print(f\"Found {len(jobs)} {job_type.value} jobs that are not part of a project.\")\n", "        print(\"Select the jobs you wish to download:\")\n", "        jobs = {i.files[0]['filename']: i for i in jobs}\n", "        jobs_select = asfn.select_mult_parameters(jobs.keys(), '', width='500px')\n", "        display(jobs_select)\n", "    else:\n", "        print(f\"There were no {job_type.value} jobs found that are not part of a project either.\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Select a date range of products to download:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if project_select:\n", "    batch = active_projects[project_select.value]\n", "else:\n", "    batch = Batch()\n", "    for j in jobs_select.value:\n", "        batch += jobs[j]\n", "\n", "display(Markdown(\"Note: After selecting a date range, you must select the next cell before hitting the 'Run' button or typing Shift/Enter.\"))\n", "display(Markdown(\"Otherwise, you will simply rerun this code cell.\"))\n", "print('\nSelect a Date Range:')\n", "dates = asfn.get_job_dates(batch)\n", "date_picker = asfn.gui_date_picker(dates)\n", "date_picker" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Save the selected date range and remove products falling outside of it:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "date_range = asfn.get_slider_vals(date_picker)\n", "date_range[0] = date_range[0].date()\n", "date_range[1] = date_range[1].date()\n", "print(f\"Date Range: {str(date_range[0])} to {str(date_range[1])}\")\n", "batch = asfn.filter_jobs_by_date(batch, date_range)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Gather the available paths and orbit directions for the remaining products:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "display(Markdown(\"This may take some time for projects containing many jobs...\"))\n", "asfn.set_paths_orbits(batch)\n", "paths = set()\n", "orbit_directions = set()\n", "for p in batch:\n", "    paths.add(p.path)\n", "    orbit_directions.add(p.orbit_direction)\n", "paths.add('All Paths')\n", "display(Markdown(\"Done.\"))" ] },
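{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check before filtering, the attributes that `asfn.set_paths_orbits` attached can be inspected on a single job — a minimal sketch using the first job in the batch:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional sketch: inspect the attributes attached by set_paths_orbits.\n", "if len(batch) > 0:\n", "    job = batch.jobs[0]\n", "    print(f\"path: {job.path}, orbit direction: {job.orbit_direction}\")" ] },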
{ "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "**Select a path or paths (use shift or ctrl to select multiple paths):**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "display(Markdown(\"Note: After selecting a path, you must select the next cell before hitting the 'Run' button or typing Shift/Enter.\"))\n", "display(Markdown(\"Otherwise, you will simply rerun this code cell.\"))\n", "print('\nSelect a Path:')\n", "path_choice = asfn.select_mult_parameters(paths)\n", "path_choice" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Save the selected flight path(s):**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "flight_path = path_choice.value\n", "if flight_path:\n", "    if 'All Paths' in flight_path:\n", "        print('Flight Path: All Paths')\n", "    else:\n", "        print(f\"Flight Path: {flight_path}\")\n", "else:\n", "    print(\"WARNING: You must select a flight path in the previous cell, then rerun this cell.\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Select an orbit direction:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if len(orbit_directions) > 1:\n", "    display(Markdown(\"Note: After selecting a flight direction, you must select the next cell before hitting the 'Run' button or typing Shift/Enter.\"))\n", "    display(Markdown(\"Otherwise, you will simply rerun this code cell.\"))\n", "print('\nSelect a Flight Direction:')\n", "direction_choice = asfn.select_parameter(orbit_directions, 'Direction:')\n", "direction_choice" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Save the selected orbit direction:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "direction = direction_choice.value\n", "print(f\"Orbit Direction: {direction}\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Filter jobs by path and orbit direction:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "batch = asfn.filter_jobs_by_path(batch, flight_path)\n", "batch = asfn.filter_jobs_by_orbit(batch, direction)\n", "print(f\"There are {len(batch)} products to download.\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Download the products, unzip them into a directory named after the product type, and delete the zip files:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "products_path = analysis_directory.joinpath(job_type.value)\n", "print(products_path)\n", "if not products_path.is_dir():\n", "    products_path.mkdir()\n", "\n", "print(f\"\nProject: {batch.jobs[0].name}\")\n", "project_zips = batch.download_files(products_path)\n", "for z in project_zips:\n", "    # autoRIFT products are netCDFs, not zips; skip them\n", "    if z.suffix == '.nc':\n", "        continue\n", "\n", "    asfn.asf_unzip(str(products_path), str(z))\n", "    z.unlink()" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rtc = batch.jobs[0].job_type == 'RTC_GAMMA'\n", "insar = batch.jobs[0].job_type == 'INSAR_GAMMA'\n", "autorift = batch.jobs[0].job_type == 'AUTORIFT'" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Determine the available polarizations if downloading RTC products:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if rtc:\n", "    polarizations = asfn.get_RTC_polarizations(str(products_path))\n", "    polarization_power_set = asfn.get_power_set(polarizations)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Select a polarization:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if rtc:\n", "    polarization_choice = asfn.select_parameter(sorted(polarization_power_set), 'Polarizations:')\n", "else:\n", "    polarization_choice = None\n", "polarization_choice" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Create a regular expression matching the filenames of the tiffs or NetCDFs:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if rtc:\n", "    polarization = polarization_choice.value\n", "    print(polarization)\n", "    if len(polarization) == 2:\n", "        regex = r\"\w[\--~]{{5,300}}(_|-){}\.(tif|tiff)$\".format(polarization)\n", "        dbl_polar = False\n", "    else:\n", "        regex = r\"\w[\--~]{{5,300}}(_|-){}(v|V|h|H)\.(tif|tiff)$\".format(polarization[0])\n", "        dbl_polar = True\n", "elif insar:\n", "    regex = r\"\w*_ueF_\w*\.tif$\"\n", "elif autorift:\n", "    regex = r\"\w*ASF_OD.*$\"" ] },
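{ "cell_type": "markdown", "metadata": {}, "source": [ "To see what these patterns are intended to match, the cell below exercises a single-polarization pattern against a made-up RTC product filename. Note that the braces are doubled (`{{5,300}}`) in the cell above only because those strings pass through `.format()`; the literal pattern uses single braces:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustration only: exercise the RTC filename pattern on a hypothetical name.\n", "example_regex = r\"\w[\--~]{5,300}(_|-)VV\.(tif|tiff)$\"\n", "example_name = 'S1A_IW_20230101T123456_DVP_RTC30_G_gpuned_ABCD_VV.tif'  # made-up granule\n", "print(bool(re.search(example_regex, example_name)))  # expected: True" ] },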
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Write functions to collect and print the paths of the tiffs or NetCDFs:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_product_paths(regex, pths):\n", "    product_paths = list()\n", "    paths = Path().glob(pths)\n", "    for pth in paths:\n", "        tiff_path = re.search(regex, str(pth))\n", "        if tiff_path:\n", "            product_paths.append(pth)\n", "    return product_paths\n", "\n", "def print_product_paths(product_paths):\n", "    print(\"Tiff paths:\")\n", "    for p in product_paths:\n", "        print(f\"{p}\n\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Write a function to collect the product acquisition dates:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_dates(product_paths):\n", "    dates = []\n", "    for pth in product_paths:\n", "        dates.append(asfn.date_from_product_name(str(pth)).split('T')[0])\n", "    return dates" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Convert NetCDFs to GeoTIFFs:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if autorift:\n", "    import xarray as xr\n", "\n", "    def ncToGeoTiff(path):\n", "        layers = ['v', 'vx', 'vy', 'v_error', 'vr', 'va', 'M11', 'M12']\n", "\n", "        for p in path.rglob('*.nc'):\n", "            fname = p.stem\n", "\n", "            # Read both acquisition dates from the netCDF metadata, once per file\n", "            ds = xr.open_dataset(p)\n", "            t1 = re.findall(r'\d*', ds.img_pair_info.acquisition_date_img1)\n", "            acq_date_1 = f'{t1[0]}T' + ''.join(t1[2:7])\n", "            t2 = re.findall(r'\d*', ds.img_pair_info.acquisition_date_img2)\n", "            acq_date_2 = f'{t2[0]}T' + ''.join(t2[2:7])\n", "\n", "            # Translate each data layer into its own GeoTIFF\n", "            for layer in layers:\n", "                layer_dir = products_path/layer\n", "                if not layer_dir.exists():\n", "                    layer_dir.mkdir()\n", "\n", "                name = f'{fname[0:10]}_{acq_date_1}_{acq_date_2}_{fname[-6:]}_{layer}.tif'\n", "                outfile = layer_dir/name\n", "\n", "                if not outfile.exists():\n", "                    !gdal_translate NETCDF:{p}:{layer} {outfile}\n", "                    print('\n')\n", "\n", "    def removeNC(path):\n", "        for p in path.rglob('*.nc'):\n", "            p.unlink()" ] },
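{ "cell_type": "markdown", "metadata": {}, "source": [ "Before converting, it can be useful to peek at which variables an autoRIFT NetCDF actually contains. The sketch below runs only when autoRIFT products with `.nc` files are present:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional sketch: list the data variables in the first downloaded netCDF.\n", "if autorift:\n", "    import xarray as xr\n", "\n", "    nc_files = list(products_path.rglob('*.nc'))\n", "    if nc_files:\n", "        with xr.open_dataset(nc_files[0]) as ds:\n", "            print(list(ds.data_vars))  # layer names such as 'v', 'vx', 'vy'" ] },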
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Collect and print the paths of the tiffs or NetCDFs:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rel_prod_path = products_path.relative_to(Path.cwd())\n", "if rtc:\n", "    product_pth = f\"{str(rel_prod_path)}/*/*{polarization[0]}*.tif*\"\n", "elif insar:\n", "    product_pth = f\"{str(rel_prod_path)}/*/*.tif*\"\n", "elif autorift:\n", "    product_pth = f\"{str(rel_prod_path)}/*.tif*\"\n", "    ncToGeoTiff(products_path)\n", "    removeNC(products_path)\n", "\n", "if not autorift:\n", "    product_paths = get_product_paths(regex, product_pth)\n", "    print_product_paths(product_paths)\n", "else:\n", "    print('Tiff paths:\n')\n", "\n", "    for p in products_path.glob('*'):\n", "        print(f'{p.parts[-1]}:')\n", "\n", "        for p_tiff in p.rglob('*.tif'):\n", "            print(p_tiff)\n", "\n", "        print('\n')" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## 1.2 Fix multiple UTM Zone-related issues\n", "\n", "If multiple UTM zones exist in your data set, the following code cells will identify the predominant zone and reproject the remaining products into it. This step must be completed before merging frames or performing any analysis. AutoRIFT products do not come with projection metadata and so will not be reprojected.\n", "\n", "**Use gdal.Info to determine the UTM definition types and zones in each product:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if not autorift:\n", "    coord_choice = asfn.select_parameter([\"UTM\", \"Lat/Long\"], description='Coord Systems:')\n", "    display(coord_choice)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if not autorift:\n", "    utm_zones = []\n", "    utm_types = []\n", "    print('Checking UTM Zones in the data stack ...\n')\n", "\n", "    for k in range(0, len(product_paths)):\n", "        # gdal.Info with format='json' returns a dict we can index directly\n", "        info = gdal.Info(str(product_paths[k]), format='json')['coordinateSystem']['wkt']\n", "        zone = info.split('ID')[-1].split(',')[1][0:-2]\n", "        utm_zones.append(zone)\n", "        typ = info.split('ID')[-1].split('\"')[1]\n", "        utm_types.append(typ)\n", "\n", "    print(f\"UTM Zones:\n {utm_zones}\n\")\n", "    print(f\"UTM Types:\n {utm_types}\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Identify the most commonly used UTM Zone in the data:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if not autorift:\n", "    if coord_choice.value == 'UTM':\n", "        utm_unique, counts = np.unique(utm_zones, return_counts=True)\n", "        a = np.where(counts == np.max(counts))\n", "        predominant_utm = utm_unique[a][0]\n", "        print(f\"Predominant UTM Zone: {predominant_utm}\")\n", "    else:\n", "        predominant_utm = '4326'" ] },
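{ "cell_type": "markdown", "metadata": {}, "source": [ "The next cell shells out to the `gdalwarp` command line tool. The same reprojection can also be expressed through GDAL's Python bindings, which avoids shell quoting issues. A minimal sketch, for illustration only; `in.tif` and `out.tif` are hypothetical placeholders:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustration only: the Python-API equivalent of the gdalwarp call below.\n", "# 'in.tif' and 'out.tif' are hypothetical placeholders.\n", "src, dst = 'in.tif', 'out.tif'\n", "if not autorift and Path(src).exists():\n", "    gdal.Warp(dst, src, dstSRS=f'EPSG:{predominant_utm}')" ] },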
"\n", " cmd = f\"gdalwarp -overwrite {product_paths[k]} {temppath} -s_srs {utm_types[k]}:{utm_zones[k]} -t_srs EPSG:{predominant_utm}\"\n", " # print(cmd)\n", " !{cmd}\n", "\n", " product_paths[k].unlink()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Update product_paths with any new filenames created during reprojection:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if not autorift:\n", " product_paths = get_product_paths(regex, product_pth)\n", " print_product_paths(product_paths)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## 1.3 Merge multiple frames from the same date.\n", "\n", "You may notice duplicates in your acquisition dates. As HyP3 processes SAR data on a frame-by-frame basis, duplicates may occur if your area of interest is covered by two consecutive image frames. In this case, two separate images are generated that need to be merged together before time series processing can commence. Currently we only merge RTCs.\n", "\n", "**Create a directory in which to store the reprojected and merged RTCs:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if not autorift:\n", " output_dir_path = analysis_directory.joinpath(f\"{job_type.value}_tiffs\")\n", " print(output_dir_path)\n", " if not output_dir_path.is_dir():\n", " output_dir_path.mkdir()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Create a set from the date list, removing any duplicates:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if rtc:\n", " dates = get_dates(product_paths)\n", " print(dates)\n", " unique_dates = set(dates)\n", " print(unique_dates)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Determine which dates have multiple frames. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Determine which dates have multiple frames. Create a dictionary with each date as a key mapped to an empty string:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if rtc:\n", "    dup_date_batches = [{}]\n", "    for date in unique_dates:\n", "        count = 0\n", "        for d in dates:\n", "            if date == d:\n", "                count += 1\n", "        if (dbl_polar and count > 2) or (not dbl_polar and count > 1):\n", "            dup_date_batches[0].update({date: \"\"})\n", "    if dbl_polar:\n", "        dup_date_batches.append(copy.deepcopy(dup_date_batches[0]))\n", "    print(dup_date_batches)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Update the values in dup_date_batches with the string paths to all the tiffs for each date:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if rtc:\n", "    if dbl_polar:\n", "        polar_list = [polarization.split(' ')[0], polarization.split(' ')[2]]\n", "    else:\n", "        polar_list = [polarization]\n", "\n", "    for i, polar in enumerate(polar_list):\n", "        polar_path_regex = fr\"(\w|/)*_{polar}\.(tif|tiff)$\"\n", "        polar_paths = get_product_paths(polar_path_regex, product_pth)\n", "        for pth in polar_paths:\n", "            date = asfn.date_from_product_name(str(pth)).split('T')[0]\n", "            if date in dup_date_batches[i]:\n", "                dup_date_batches[i][date] = f\"{dup_date_batches[i][date]} {str(pth)}\"\n", "\n", "    for d in dup_date_batches:\n", "        print(d)\n", "        print(\"\n\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Merge all the frames for each date, save the results to the output directory, and delete the original tiffs:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if rtc and len(dup_date_batches[0]) > 0:\n", "    for i, dup_dates in enumerate(dup_date_batches):\n", "        polar_regex = r\"(?<=_)(vh|VH|vv|VV)(?=\.tif{1,2})\"\n", "        polar = re.search(polar_regex, dup_dates[list(dup_dates)[0]])\n", "        if polar:\n", "            polar = f'_{polar.group(0)}'\n", "        else:\n", "            polar = ''\n", "        for dup_date in dup_dates:\n", "            output = f\"{str(output_dir_path)}/merged_{dup_date}T999999{polar}{product_paths[0].suffix}\"\n", "            gdal_command = f\"gdal_merge.py -o {output} {dup_dates[dup_date]}\"\n", "            print(f\"\n\nCalling the command: {gdal_command}\n\")\n", "            !$gdal_command\n", "\n", "            for pth in dup_dates[dup_date].split(' '):\n", "                path = Path(pth)\n", "                if path and path.is_file():\n", "                    path.unlink()\n", "                    print(f\"Deleting: {str(pth)}\")" ] },
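{ "cell_type": "markdown", "metadata": {}, "source": [ "The previous cell invokes `gdal_merge.py` via the shell. For reference, `gdal.Warp` can also mosaic several inputs in a single Python call. A minimal sketch, for illustration only; the file names are hypothetical placeholders:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustration only: mosaicking with gdal.Warp instead of gdal_merge.py.\n", "# 'frame_a.tif' and 'frame_b.tif' are hypothetical placeholders.\n", "frames = ['frame_a.tif', 'frame_b.tif']\n", "if all(Path(f).exists() for f in frames):\n", "    gdal.Warp('merged_example.tif', frames)" ] },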
{ "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "**Verify that all duplicate dates were resolved:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if rtc:\n", "    product_paths = get_product_paths(regex, product_pth)\n", "    for polar in polar_list:\n", "        polar_product_pth = product_pth.replace('V*', polar)\n", "        polar_product_paths = get_product_paths(regex, polar_product_pth)\n", "        dates = get_dates(polar_product_paths)\n", "        if len(dates) != len(set(dates)):\n", "            print(\"Duplicate dates still present!\")\n", "        else:\n", "            print(f\"No duplicate dates are associated with {polar} polarization.\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Print the updated paths to all remaining unmerged tiffs:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if rtc:\n", "    print_product_paths(product_paths)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Move all remaining unmerged tiffs into the output directory, and choose whether to save or delete the directory holding the remaining downloaded product files. AutoRIFT GeoTIFFs will remain in their original directories:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if not autorift:\n", "    choices = ['save', 'delete']\n", "    print(\"Do you wish to save or delete the directory containing auxiliary product files?\")\n", "    save_or_del = asfn.select_parameter(choices)\n", "    display(save_or_del)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if not autorift:\n", "    for tiff in product_paths:\n", "        tiff.rename(f\"{output_dir_path}/{tiff.name}\")\n", "\n", "    if save_or_del.value == 'delete':\n", "        shutil.rmtree(products_path)\n", "\n", "    product_paths = get_product_paths(regex, product_pth)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Print the path where you saved your tiffs or NetCDFs:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if rtc or insar:\n", "    print(str(output_dir_path))\n", "elif autorift:\n", "    print(str(products_path))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Relevant notebooks:**" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Run this code to display notebook links\n", "from IPython.display import display, HTML\n", "\n", "current = Path.cwd()\n", "abs_path = [\n", "    Path('/home/jovyan/notebooks/SAR_Training/English/Master/Subset_Data_Stack.ipynb'),\n", "    Path('/home/jovyan/notebooks/SAR_Training/English/Master/100k_MGRS_Geotiff_Subsetter.ipynb')\n", "]\n", "details = [\n", "    'Subsets a tiff stack into MGRS tiles.',\n", "    'Crops a directory of tiffs to a subset area of interest using an interactive Matplotlib plot of an image in your data stack.'\n", "]\n", "\n", "# zip pairs each notebook with its matching description\n", "for a, detail in zip(abs_path, details):\n", "    name = a.stem\n", "    relative_path = a.relative_to(current)\n",
"    link_t = f\"<li><a href='{relative_path}'>{name}</a>: {detail}</li>\"\n", "    html = HTML(link_t)\n", "    display(html)" ] },
{ "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "*Prepare_RTC_Stack_HyP3_v2.ipynb - Version 2.0.1 - February 2023*\n", "\n", "*Version Changes:*\n", "- *fix breaking projectless jobs downloading*" ] } ], "metadata": { "kernelspec": { "display_name": "rtc_analysis [conda env:.local-rtc_analysis]", "language": "python", "name": "conda-env-.local-rtc_analysis-py" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.16" } }, "nbformat": 4, "nbformat_minor": 4 }