{ "cells": [ { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Tutorial showing how to use the Parcels `FieldSet.advancetime` method" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "In many real-world applications, particles are run for long times, using many snapshots of the hydrographic data. If these files are large, having to read them all into memory can take a significant amount of resources" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "The `FieldSet.advancetime` method allows a simulation where only three snapshots of the hydrodynamic fields are in memory at any time, and they can be cycled through. This brief tutorial shows how to use the `FieldSet.advancetime` method to read in only a sebset of all the time slices available at once" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "We start with importing the relevant modules" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "from parcels import FieldSet, ParticleSet, JITParticle, AdvectionRK4\n", "from datetime import timedelta as delta\n", "import numpy as np\n", "from glob import glob\n", "from os import path" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Now define a function that loads the Globcurrent fields from the `GlobCurrent_example_data` directory" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "def loadglobcurrentfile(filenames):\n", " filenames = {'U': filenames,\n", " 'V': filenames}\n", " variables = {'U': 'eastward_eulerian_current_velocity',\n", " 'V': 'northward_eulerian_current_velocity'}\n", " dimensions = {'lat': 'lat',\n", " 'lon': 'lon',\n", 
 " 'time': 'time'}\n", " return FieldSet.from_netcdf(filenames, variables, dimensions)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "We can create a sorted list of all the files available in the `GlobCurrent_example_data` directory using:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "files = sorted(glob(str(path.join('GlobCurrent_example_data','20*.nc'))))" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Now we read the first three files into the `fieldset` (by using `files[0:3]`)." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "WARNING: Casting lon data to np.float32\n", "WARNING: Casting lat data to np.float32\n", "WARNING: Casting depth data to np.float32\n" ] } ], "source": [ "fieldset = loadglobcurrentfile(files[0:3])" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Now create a `ParticleSet` object." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "pset = ParticleSet(fieldset=fieldset, pclass=JITParticle, lon=[20], lat=[-35])" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Now we can advect the particles for ten days. Normally, since we only have three daily snapshots in memory, we could not advect that long. But in this case we can use a custom `for`-loop to continually update the `fieldset` with the latest snapshot. 
" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO: Compiled JITParticleAdvectionRK4 ==> /var/folders/r2/8593q8z93kd7t4j9kbb_f7p00000gn/T/parcels-501/27805ff3aa34ba12ddb373f3f2cb1d1b.so\n" ] } ], "source": [ "for i in range(10):\n", " pset.execute(AdvectionRK4, # First advect the particles\n", " runtime=delta(days=1), # runtime needs to be equal to the time between snapshots\n", " dt=delta(minutes=5))\n", "\n", " # Then update the fieldset using the advancetime method\n", " fieldset.advancetime(loadglobcurrentfile(files[i+3]))" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "With this relatively simple setup, Parcels can be run on hydrodynamic datasets that are potentially hundreds of gigabytes in size; just as long as any single snapshot isn't too big." ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.13" } }, "nbformat": 4, "nbformat_minor": 2 }