{ "cells": [ { "cell_type": "markdown", "id": "d74b0e53-3f0f-4e76-8099-e307f03c4d6e", "metadata": {}, "source": [ "# Using the HyP3 SDK to generate RTC products for given search parameters\n", "\n", "Before running this notebook for the first time, please read [Using the HyP3 SDK to process new granules for given search parameters](https://hyp3-docs.asf.alaska.edu/tutorials/process-new-granules-for-search-parameters/) for a complete introduction to this tutorial.\n", "\n", "You can run this notebook to submit On Demand RTC jobs for all granules that match a particular set of search parameters (date range, area of interest, etc.). After you run the notebook, more granules may become available for your search parameters over the following days (because there is a delay between data being acquired and becoming available in the archive), or you may decide to modify your search parameters. In either case, you can simply run the notebook again to submit RTC jobs for all granules that have not yet been processed.\n", "\n", "This workflow is particularly useful for ongoing monitoring of a geographic area of interest, but it can be used whenever you want to augment your project with additional products without generating duplicates.\n", "\n", "First, install dependencies:" ] }, { "cell_type": "code", "execution_count": null, "id": "9f5b47c3-86db-4574-b44e-09952106360d", "metadata": {}, "outputs": [], "source": [ "!pip install 'asf-search>=6.6.2' hyp3-sdk\n", "\n", "import asf_search\n", "from hyp3_sdk import HyP3" ] }, { "cell_type": "markdown", "id": "334cf891-8ad6-4f9a-b7c7-4f5c7fd7ddc7", "metadata": {}, "source": [ "Next, define your search parameters and job specification as shown below. The search parameters become keyword arguments to the `asf_search.search` function. See [here](https://docs.asf.alaska.edu/asf_search/searching/#keywords) for a full list of available keywords." 
] }, { "cell_type": "code", "execution_count": null, "id": "b6c5257b-31f2-42f8-b42a-147e88873f94", "metadata": {}, "outputs": [], "source": [ "search_parameters = {\n", " \"start\": \"2023-06-01T00:00:00Z\",\n", " \"end\": \"2023-06-30T00:00:00Z\",\n", " \"intersectsWith\":\n", " \"POLYGON((-110.7759 44.8543,-101.3998 44.8543,-101.3998 50.8183,-110.7759 50.8183,-110.7759 44.8543))\",\n", " \"platform\": \"S1\",\n", " \"processingLevel\": \"SLC\",\n", "}\n", "job_specification = {\n", " \"job_parameters\": {\n", " \"resolution\": 30,\n", " },\n", " \"job_type\": \"RTC_GAMMA\",\n", " \"name\": \"Project Name\"\n", "}" ] }, { "cell_type": "markdown", "id": "425a5bb4-497a-4de6-b841-f739d9cb37f1", "metadata": {}, "source": [ "Next, construct a list of unprocessed granules:" ] }, { "cell_type": "code", "execution_count": null, "id": "bff7fe5d-2d0a-4360-8c6d-55de10d45c2e", "metadata": {}, "outputs": [], "source": [ "hyp3 = HyP3()\n", "\n", "previous_jobs = hyp3.find_jobs(\n", " name=job_specification['name'],\n", " job_type=job_specification['job_type'],\n", ")\n", "processed_granules = [job.job_parameters['granules'][0] for job in previous_jobs]\n", "print(f'Found {len(processed_granules)} previously processed granules')\n", "\n", "search_results = asf_search.search(**search_parameters)\n", "search_results.raise_if_incomplete()\n", "\n", "unprocessed_granules = [\n", " result for result in search_results if result.properties['sceneName'] not in processed_granules\n", "]\n", "print(f'Found {len(unprocessed_granules)} unprocessed granules')" ] }, { "cell_type": "markdown", "id": "23a0a176-5829-4a97-b6cc-ff7f0e36243c", "metadata": {}, "source": [ "Finally, submit a new RTC job for each unprocessed granule. Note that unprocessed granules are handled in batches. You can adjust the batch size by changing the value of the `batch_size` variable." ] }, { "cell_type": "code", "execution_count": null, "id": "1d57114a-e652-4b04-ae1e-db7f727941dc", "metadata": {}, "outputs": [], "source": [ "from copy import deepcopy\n", "\n", "def get_job_for_granule(granule: asf_search.ASFProduct) -> dict:\n", " job = deepcopy(job_specification)\n", " job['job_parameters']['granules'] = [granule.properties['sceneName']]\n", " return job\n", "\n", "\n", "batch_size = 20\n", "for i in range(0, len(unprocessed_granules), batch_size):\n", " new_jobs = [\n", " get_job_for_granule(granule) for granule in unprocessed_granules[i:i+batch_size]\n", " ]\n", " print(f'Submitting {len(new_jobs)} jobs')\n", " hyp3.submit_prepared_jobs(new_jobs)\n", "\n", "print('Done.')" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.17" } }, "nbformat": 4, "nbformat_minor": 5 }