{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Analysing huge output files\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import os\nfrom datetime import datetime, timedelta\nimport opendrift\nfrom opendrift.models.oceandrift import OceanDrift\nfrom opendrift.readers import reader_oscillating" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First make a simulation with two seedings, marked by *origin_marker*\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "o = OceanDrift(loglevel=50)\no.set_config('drift:horizontal_diffusivity', 10)\nt1 = datetime.now()\nt2 = t1 + timedelta(hours=6)\nnumber = 10000\noutfile = 'simulation.nc' # Raw simulation output\nreader_x = reader_oscillating.Reader('x_sea_water_velocity',\n amplitude=1, zero_time=t1)\nreader_y = reader_oscillating.Reader('y_sea_water_velocity',\n amplitude=1, zero_time=t2)\no.add_reader([reader_x, reader_y])\no.seed_elements(time=t1, lon=4, lat=60, number=number,\n origin_marker=0)\no.seed_elements(time=[t1, t2], lon=4.2, lat=60.4, number=number,\n origin_marker=1)\n\no.run(duration=timedelta(hours=24),\n time_step=900, time_step_output=1800, outfile=outfile)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Opening the output file lazily with Xarray.\nThis will work even if the file is too large to fit in memory, as it\nwill read and process data chuck-by-chunk directly from file using Dask.\n(See also [example_river_runoff.py](https://opendrift.github.io/gallery/example_river_runoff.html))\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "oa = opendrift.open_xarray(outfile)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Calculating histogram\nThe histogram may be stored/cached to a netCDF file for later re-use,\nas the calculation 
may be time-consuming for huge output files.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "h = oa.get_histogram(pixelsize_m=500)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Plot the cumulative coverage of the first seeding (origin_marker=0).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "b = h.isel(origin_marker=0).sum(dim='time')\noa.plot(background=b.where(b > 0), fast=True, show_elements=False,\n        vmin=0, vmax=1000, clabel='First seeding')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Making two animations, one for each of the two seedings/origin_markers.\nThe calculated density fields may be stored/cached to a netCDF file\nfor later re-use, as their calculation may be time-consuming\nfor huge output files.\nNote that other analysis/plotting methods are not yet adapted\nto datasets opened lazily with open_xarray.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "for om in [0, 1]:\n    background = h.isel(origin_marker=om)\n    oa.animation(background=background.where(background > 0), bgalpha=1,\n                 corners=[4.0, 6, 59.5, 61], fast=False, show_elements=False,\n                 vmin=0, vmax=200)\n\n# Cleaning up\nos.remove(outfile)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First seeding\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Second seeding\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.6" } }, "nbformat": 4, "nbformat_minor": 0 }