{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Using Leafmap and Earthaccess to Explore OPERA DSWx-HLS Products. \n", "\n", "## Example below showcases a May 2024 flooding event in the Porto Algre, Brazil." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The Leafmap library provides a suite of tools for interactive mapping and visualization in Jupyter Notebooks Leafmap version 0.30.0 and and later offer tools specifically for accessing NASA Earthdata by building on the newly developed NASA Earthaccess library. Earthaccess provides streamlined access to NASA Earthdata and simplifies the authentication and querying process over previously developed approaches. This notebook is designed to leverage tools within Earthaccess and Leafmap to facility easier access and vizualization of OPERA data products for a user-specified area of interest (AOI). \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## OPERA DSWx-HLS info\n", "see website https://www.jpl.nasa.gov/go/opera/products/dswx-product-suite" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Import Libraries" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import earthaccess\n", "import leafmap\n", "import pandas as pd\n", "import geopandas as gpd\n", "from shapely import box\n", "from datetime import datetime" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Authentication \n", "A [NASA Earthdata Login](https://urs.earthdata.nasa.gov/) account is required to download the data used in this tutorial. You can create an account at the link provided. After establishing an account, the code in the next cell will verify authentication. If this is your first time running the notebook, you will be prompted to enter your Earthdata login credentials, which will be saved in ~/.netrc." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "leafmap.nasa_data_login()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## View NASA Earthdata datasets\n", "A tab separated values (TSV) file, made available through the opengeos Github repository, catalogues metadata for more than 9,000 datasets available through NASA Earthdata. In the next cell we load the TSV into a pandas dataframe and view the metadata for the first five (5) Earthdata products" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "### View Earthdata datasets\n", "earthdata_url = 'https://github.com/opengeos/NASA-Earth-Data/raw/main/nasa_earth_data.tsv'\n", "earthdata_df = pd.read_csv(earthdata_url, sep='\\t')\n", "# earthdata_df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## View the available OPERA products\n", "Note above that the `earthdata_df` contains a number of columns with metadata about each available product. the `ShortName` column will be used to produce a new dataframe containing only OPERA products. Let's view the available products and their metadata." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "opera_df = earthdata_df[earthdata_df['ShortName'].str.contains('OPERA', case=False)]\n", "# opera_df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Define an area of interest (AOI) and time period of interest (TOI)\n", "Define an area of interest (AOI) for the flood event" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "### This cell initializes the AOI and TOI.\n", "\n", "AOI = (-54.215, -30.766,-50.814, -28.938) #W, S, E, N; Floods near Porto Alegre, Brazil\n", "\n", "#Here we have selected two dates. This could expand to include date ranges but then image mosaic rules should be considered (not included here)\n", "StartDate_PreFlood=\"2024-04-21T00:00:00\" #Pre-flood image start date\n", "EndDate_PreFlood=\"2024-04-21T23:59:59\" #Pre-flood image end date\n", "\n", "StartDate_SynFlood=\"2024-05-06T00:00:00\" #Syn-flood image start date\n", "EndDate_SynFlood=\"2024-05-06T23:59:59\" #Syn-flood image end date\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Query Earthdata and return metadata for OPERA products within the AOI\n", "The `earthaccess` library makes it simple to quickly query NASA's Common Metadata Repository (CMR) and return the associated metadata as a Geodataframe. `Leafmap` has recently added functionality that builds on `earthaccess` to enable interactive viewing of this data. \n", "In the next cell, the user should specify which OPERA product and the date range of interest. The AOI defined previously is used as the boundary in the query." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### View OPERA Product Shortnames" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "### Print the available OPERA datasets \n", "print('Available OPERA datasets:', opera_df['ShortName'].values)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Query the OPERA DSWx-HLS dataset for the AOI\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dswx_results_PreFlood, dswx_gdf_PreFlood = leafmap.nasa_data_search(\n", " short_name='OPERA_L3_DSWX-HLS_V1',\n", " cloud_hosted=True,\n", " bounding_box= AOI,\n", " temporal=(StartDate_PreFlood, EndDate_PreFlood),\n", " count=-1, # use -1 to return all datasets\n", " return_gdf=True,\n", ")\n", "\n", "dswx_results_SynFlood, dswx_gdf_SynFlood = leafmap.nasa_data_search(\n", " short_name='OPERA_L3_DSWX-HLS_V1',\n", " cloud_hosted=True,\n", " bounding_box= AOI,\n", " temporal=(StartDate_SynFlood, EndDate_SynFlood),\n", " count=-1, # use -1 to return all datasets\n", " return_gdf=True,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### See the available DSWx-HLS layers\n", "Functionality within earthaccess enables more more asthetic views of the available layers, as well as displaying the thumbnail. These links are clickable and will download in the browser when clicked. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dswx_results_PreFlood[0] #Note this just shows a single MGRS/HLS tile" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dswx_results_SynFlood[0] #Note this just shows a single MGRS/HLS tile" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### View the DSWx-HLS metadata and footprints" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dswx_gdf_PreFlood.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "### Plot the location of the tiles \n", "dswx_gdf_PreFlood.explore(fill=False)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "### Plot the location of the tiles \n", "dswx_gdf_SynFlood.explore(fill=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Download data with leafmap\n", "Let's download the data from one of our above queries. In the cell below we specify data from the DSWx-HLS." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create a subdirectory\n", "This will be where the files are downloaded. It will be a subdirectory inside of a directory called `data`, and the directory name will be the date that it was created." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "from datetime import datetime\n", "\n", "def create_data_directory():\n", " # Get the current date and time\n", " # current_datetime = datetime.now().strftime(\"%m_%d_%Y_%H_%M_%S\")\n", " current_datetime = datetime.now().strftime(\"%m_%d_%Y\")\n", "\n", " # Define the base directory\n", " base_directory = \"data\"\n", "\n", " # Create the full path for the new directory\n", " new_directory_path_PreFlood = os.path.join(base_directory, f\"data_{current_datetime}/PreFlood\")\n", " # Create the new directory\n", " os.makedirs(new_directory_path_PreFlood, exist_ok=True)\n", "\n", " print(f\"Directory '{new_directory_path_PreFlood}' created successfully.\")\n", "\n", " return new_directory_path_PreFlood \n", "\n", "directory_path_PreFlood = create_data_directory()\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def create_data_directory():\n", " # Get the current date and time\n", " # current_datetime = datetime.now().strftime(\"%m_%d_%Y_%H_%M_%S\")\n", " current_datetime = datetime.now().strftime(\"%m_%d_%Y\")\n", "\n", " # Define the base directory\n", " base_directory = \"data\"\n", "\n", " # Create the full path for the new directory\n", " new_directory_path_SynFlood = os.path.join(base_directory, f\"data_{current_datetime}/SynFlood\")\n", " # Create the new directory\n", " os.makedirs(new_directory_path_SynFlood, exist_ok=True)\n", "\n", " print(f\"Directory '{new_directory_path_SynFlood}' created successfully.\")\n", "\n", " return new_directory_path_SynFlood \n", "\n", "directory_path_SynFlood = create_data_directory()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Download the data\n", "The below will download the data to your newly created subdirectory. Look on your file system for a directory `/data/date/` where `date` is the date the directory was created." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dswx_data_PreFlood = leafmap.nasa_data_download(dswx_results_PreFlood, out_dir=directory_path_PreFlood) " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dswx_data_SynFlood = leafmap.nasa_data_download(dswx_results_SynFlood, out_dir=directory_path_SynFlood) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## View the files using Leafmap" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Load in images from data folder\n", "We load in data from only the DSWx-WTR layer below. If you'd like load data from a different layer change the `B01` to suit your needs. \n", "Included layers:\n", "\n", "\n", "OPERA_L3_DSWx-HLS_*B01_WTR.tif\n", "\n", "\n", "OPERA_L3_DSWx-HLS_*B02_BWTR.tif\n", "\n", "\n", "OPERA_L3_DSWx-HLS_*B03_CONF.tif\n", "\n", "\n", "OPERA_L3_DSWx-HLS_*B04_DIAG.tif\n", "\n", "\n", "OPERA_L3_DSWx-HLS_*B05_WTR-1.tif\n", "\n", "\n", "OPERA_L3_DSWx-HLS_*B06_WTR-2.tif\n", "\n", "\n", "OPERA_L3_DSWx-HLS_*B07_LAND.tif\n", "\n", "\n", "OPERA_L3_DSWx-HLS_*B08_SHAD.tif\n", "\n", "\n", "OPERA_L3_DSWx-HLS_*B09_CLOUD.tif\n", "\n", "\n", "OPERA_L3_DSWx-HLS_*B10_DEM.tif" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "\n", "ImageLayer='B01' #B01 corresponds to WTR (see above)\n", "\n", "# Get the current directory\n", "current_directory = os.getcwd()\n", "\n", "# Construct the path to the data directory\n", "data_directory_PreFlood = os.path.join(current_directory, directory_path_PreFlood)\n", "data_directory_SynFlood = os.path.join(current_directory, directory_path_SynFlood)\n", "\n", "# Create a list of file paths and a list of corresponding dates\n", "images_PreFlood = [os.path.join(data_directory_PreFlood, filename) for filename in os.listdir(data_directory_PreFlood) if os.path.isfile(os.path.join(data_directory_PreFlood, filename)) and ImageLayer in filename]\n", "image_dates_PreFlood = [image[25:33] for image in os.listdir(data_directory_PreFlood) if ImageLayer in image]\n", "\n", "images_SynFlood = [os.path.join(data_directory_SynFlood, filename) for filename in os.listdir(data_directory_SynFlood) if os.path.isfile(os.path.join(data_directory_SynFlood, filename)) and ImageLayer in filename]\n", "image_dates_SynFlood = [image[25:33] for image in os.listdir(data_directory_SynFlood) if ImageLayer in image]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Merge individual tiles into a single image" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "filename_merged_PreFlood='PreFlood_Merged.tif'\n", "merged_raster_PreFlood = leafmap.merge_rasters(data_directory_PreFlood,os.path.join(data_directory_PreFlood, filename_merged_PreFlood),input_pattern='*' + ImageLayer +'*.tif',output_format='GTiff',output_nodata=None)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "filename_merged_SynFlood='SynFlood_Merged.tif'\n", "compression = \"lzw\" # Choose the compression method here\n", "merged_raster_SynFlood = leafmap.merge_rasters(data_directory_SynFlood,os.path.join(data_directory_SynFlood, filename_merged_SynFlood),input_pattern= '*' + ImageLayer +'*.tif',output_format='GTiff',output_nodata=None)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Display the merged images\n", "##### *** The Harmonized Landsat Sentinel-2 (HLS) mask layer (Fmask) 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m = leafmap.Map(basemap=\"Esri.WorldImagery\")\n", "# m.add_raster(os.path.join(data_directory_PreFlood, filename_merged_PreFlood), opacity=1)\n", "m.add_raster(os.path.join(data_directory_SynFlood, filename_merged_SynFlood), opacity=1)\n", "legend_dict = {\n", "    'Not Water': '#ffffff',\n", "    'Open Surface Water': '#0000ff',\n", "    'Partial Surface Water': '#b4d5f4',\n", "    'HLS snow/ice mask': '#00ffff',\n", "    'HLS cloud/cloud shadow mask': '#afafaf'\n", "}\n", "# Add the legend to the map\n", "m.add_legend(legend_title=\"DSWx-HLS WTR\", legend_dict=legend_dict)\n", "\n", "m" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create a split map to show changes\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "raster_PreFlood_path = os.path.join(data_directory_PreFlood, filename_merged_PreFlood)\n", "raster_SynFlood_path = os.path.join(data_directory_SynFlood, filename_merged_SynFlood)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "raster_path_2merged = [raster_PreFlood_path, raster_SynFlood_path]\n", "# For only two dates; this will need updating if more dates are used in the merge\n", "image_dates_merged = [image_dates_PreFlood[0], image_dates_SynFlood[0]]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "leafmap.split_map(\n", "    left_layer=raster_PreFlood_path,\n", "    right_layer=raster_SynFlood_path,\n", "    # opacity=0.5,\n", "    left_label=\"Pre-Flood\",\n", "    right_label=\"Syn-Flood\",\n", "    label_position=\"bottom\",\n", "    basemap=\"Esri.WorldImagery\",\n", "    zoom=10,\n", ")\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Conclusions\n", "This is a first `earthaccess` and `leafmap` notebook for flood applications. More work is needed to expand features for sophisticated filtering (cloud cover, spatial overlap) and analysis." ] } ], "metadata": { "kernelspec": { "display_name": "opera_app", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.13" } }, "nbformat": 4, "nbformat_minor": 2 }