{ "cells": [ { "cell_type": "markdown", "metadata": { "internals": { "slide_type": "subslide" }, "slideshow": { "slide_type": "slide" } }, "source": [ "# Writing netCDF data\n", "\n", "**Important Note**: when running this notebook interactively in a browser, you probably will not be able to execute individual cells out of order without getting an error. Instead, choose \"Run All\" from the Cell menu after you modify a cell." ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "collapsed": false, "internals": { "frag_number": 1, "slide_helper": "subslide_end" }, "slide_helper": "slide_end", "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "import netCDF4 # Note: python is case-sensitive!\n", "import numpy as np" ] }, { "cell_type": "markdown", "metadata": { "internals": { "frag_helper": "fragment_end", "frag_number": 1, "slide_type": "subslide" }, "slideshow": { "slide_type": "slide" } }, "source": [ "## Opening a file, creating a new Dataset\n", "\n", "Let's create a new, empty netCDF file named 'data/new.nc', opened for writing.\n", "\n", "Be careful, opening a file with 'w' will clobber any existing data (unless `clobber=False` is used, in which case an exception is raised if the file already exists).\n", "\n", "- `mode='r'` is the default.\n", "- `mode='a'` opens an existing file and allows for appending (does not clobber existing data)\n", "- `format` can be one of `NETCDF3_CLASSIC`, `NETCDF3_64BIT`, `NETCDF4_CLASSIC` or `NETCDF4` (default). `NETCDF4_CLASSIC` uses HDF5 for the underlying storage layer (as does `NETCDF4`) but enforces the classic netCDF 3 data model so data can be read with older clients. " ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "collapsed": false, "internals": { "frag_helper": "fragment_end", "frag_number": 3, "slide_helper": "subslide_end" }, "slide_helper": "slide_end", "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "root group (NETCDF4_CLASSIC data model, file format HDF5):\n", " dimensions(sizes): \n", " variables(dimensions): \n", " groups: \n", "\n" ] } ], "source": [ "try: ncfile.close() # just to be safe, make sure dataset is not already open.\n", "except: pass\n", "ncfile = netCDF4.Dataset('data/new.nc',mode='w',format='NETCDF4_CLASSIC') \n", "print(ncfile)" ] }, { "cell_type": "markdown", "metadata": { "internals": { "frag_helper": "fragment_end", "frag_number": 3, "slide_type": "subslide" }, "slideshow": { "slide_type": "slide" } }, "source": [ "## Creating dimensions\n", "\n", "The **ncfile** object we created is a container for _dimensions_, _variables_, and _attributes_. First, let's create some dimensions using the [`createDimension`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.html#createDimension) method. \n", "\n", "- Every dimension has a name and a length. \n", "- The name is a string that is used to specify the dimension to be used when creating a variable, and as a key to access the dimension object in the `ncfile.dimensions` dictionary.\n", "\n", "Setting the dimension length to `0` or `None` makes it unlimited, so it can grow. \n", "\n", "- For `NETCDF4` files, any variable's dimension can be unlimited. \n", "- For `NETCDF4_CLASSIC` and `NETCDF3*` files, only one per variable can be unlimited, and it must be the leftmost (fastest varying) dimension." 
] }, { "cell_type": "code", "execution_count": 27, "metadata": { "collapsed": false, "internals": { "frag_helper": "fragment_end", "frag_number": 5, "slide_helper": "subslide_end" }, "slide_helper": "slide_end", "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "('lat', : name = 'lat', size = 73\n", ")\n", "('lon', : name = 'lon', size = 144\n", ")\n", "('time', (unlimited): name = 'time', size = 0\n", ")\n" ] } ], "source": [ "lat_dim = ncfile.createDimension('lat', 73) # latitude axis\n", "lon_dim = ncfile.createDimension('lon', 144) # longitude axis\n", "time_dim = ncfile.createDimension('time', None) # unlimited axis (can be appended to).\n", "for dim in ncfile.dimensions.items():\n", " print(dim)" ] }, { "cell_type": "markdown", "metadata": { "internals": { "frag_helper": "fragment_end", "frag_number": 5, "slide_type": "subslide" }, "slideshow": { "slide_type": "slide" } }, "source": [ "## Creating attributes\n", "\n", "netCDF attributes can be created just like you would for any python object. \n", "\n", "- Best to adhere to established conventions (like the [CF](http://cfconventions.org/) conventions)\n", "- We won't try to adhere to any specific convention here though." ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "collapsed": false, "internals": { "frag_helper": "fragment_end", "frag_number": 7 }, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "My model data\n" ] } ], "source": [ "ncfile.title='My model data'\n", "print(ncfile.title)" ] }, { "cell_type": "markdown", "metadata": { "internals": { "frag_helper": "fragment_end", "frag_number": 8, "slide_helper": "subslide_end" }, "slide_helper": "slide_end", "slideshow": { "slide_type": "fragment" } }, "source": [ "Try adding some more attributes..." ] }, { "cell_type": "markdown", "metadata": { "internals": { "frag_helper": "fragment_end", "frag_number": 8, "slide_type": "subslide" }, "slideshow": { "slide_type": "slide" } }, "source": [ "## Creating variables\n", "\n", "Now let's add some variables and store some data in them. \n", "\n", "- A variable has a name, a type, a shape, and some data values. \n", "- The shape of a variable is specified by a tuple of dimension names. \n", "- A variable should also have some named attributes, such as 'units', that describe the data.\n", "\n", "The [`createVariable`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.html#createVariable) method takes 3 mandatory args.\n", "\n", "- the 1st argument is the variable name (a string). This is used as the key to access the variable object from the `variables` dictionary.\n", "- the 2nd argument is the datatype (most numpy datatypes supported). \n", "- the third argument is a tuple containing the dimension names (the dimensions must be created first). 
Unless this is a `NETCDF4` file, any unlimited dimension must be the leftmost one.\n", "- there are lots of optional arguments (many of which are only relevant when `format='NETCDF4'`) to control compression, chunking, fill_value, etc.\n" ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "collapsed": false, "internals": { "frag_helper": "fragment_end", "frag_number": 10, "slide_helper": "subslide_end" }, "slide_helper": "slide_end", "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "float64 temp(time, lat, lon)\n", " units: K\n", " standard_name: air_temperature\n", "unlimited dimensions: time\n", "current shape = (0, 73, 144)\n", "filling on, default _FillValue of 9.96920996839e+36 used\n", "\n" ] } ], "source": [ "# Define two variables with the same names as dimensions,\n", "# a conventional way to define \"coordinate variables\".\n", "lat = ncfile.createVariable('lat', np.float32, ('lat',))\n", "lat.units = 'degrees_north'\n", "lat.long_name = 'latitude'\n", "lon = ncfile.createVariable('lon', np.float32, ('lon',))\n", "lon.units = 'degrees_east'\n", "lon.long_name = 'longitude'\n", "time = ncfile.createVariable('time', np.float64, ('time',))\n", "time.units = 'hours since 1800-01-01'\n", "time.long_name = 'time'\n", "# Define a 3D variable to hold the data\n", "temp = ncfile.createVariable('temp',np.float64,('time','lat','lon')) # note: unlimited dimension is leftmost\n", "temp.units = 'K' # degrees Kelvin\n", "temp.standard_name = 'air_temperature' # this is a CF standard name\n", "print(temp)" ] }, { "cell_type": "markdown", "metadata": { "internals": { "frag_helper": "fragment_end", "frag_number": 10, "slide_type": "subslide" }, "slideshow": { "slide_type": "slide" } }, "source": [ "## Pre-defined variable attributes (read only)\n", "\n", "The netCDF4 module provides some useful pre-defined Python attributes for netCDF variables, such as dimensions, shape, dtype, ndim. \n", "\n", "Note: since no data has been written yet, the length of the 'time' dimension is 0." ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "collapsed": false, "internals": { "frag_helper": "fragment_end", "frag_number": 12, "slide_helper": "subslide_end" }, "slide_helper": "slide_end", "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "-- Some pre-defined attributes for variable temp:\n", "('temp.dimensions:', (u'time', u'lat', u'lon'))\n", "('temp.shape:', (0, 73, 144))\n", "('temp.dtype:', dtype('float64'))\n", "('temp.ndim:', 3)\n" ] } ], "source": [ "print(\"-- Some pre-defined attributes for variable temp:\")\n", "print(\"temp.dimensions:\", temp.dimensions)\n", "print(\"temp.shape:\", temp.shape)\n", "print(\"temp.dtype:\", temp.dtype)\n", "print(\"temp.ndim:\", temp.ndim)" ] }, { "cell_type": "markdown", "metadata": { "internals": { "frag_helper": "fragment_end", "frag_number": 12, "slide_type": "subslide" }, "slideshow": { "slide_type": "slide" } }, "source": [ "## Writing data\n", "\n", "To write data to a netCDF variable object, just treat it like a numpy array and assign values to a slice."
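] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, a quick check that the name passed to `createVariable` earlier really is the key used to retrieve each variable object from the Dataset's `variables` dictionary (nothing is modified here):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# The variable name doubles as the key in the 'variables' dictionary.\n", "print(ncfile.variables.keys())\n", "print(ncfile.variables['temp'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now write some data into the variables by assigning to slices."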
] }, { "cell_type": "code", "execution_count": 31, "metadata": { "collapsed": false, "internals": { "frag_helper": "fragment_end", "frag_number": 14 }, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "('-- Wrote data, temp.shape is now ', (3, 73, 144))\n", "('-- Min/Max values:', 280.00283562143028, 329.99987991477548)\n" ] } ], "source": [ "nlats = len(lat_dim); nlons = len(lon_dim); ntimes = 3\n", "# Write latitudes, longitudes.\n", "# Note: the \":\" is necessary in these \"write\" statements\n", "lat[:] = -90. + (180./nlats)*np.arange(nlats) # south pole to north pole\n", "lon[:] = (180./nlats)*np.arange(nlons) # Greenwich meridian eastward\n", "# create a 3D array of random numbers\n", "data_arr = np.random.uniform(low=280,high=330,size=(ntimes,nlats,nlons))\n", "# Write the data. This writes the whole 3D netCDF variable all at once.\n", "temp[:,:,:] = data_arr # Appends data along unlimited dimension\n", "print(\"-- Wrote data, temp.shape is now \", temp.shape)\n", "# read data back from variable (by slicing it), print min and max\n", "print(\"-- Min/Max values:\", temp[:,:,:].min(), temp[:,:,:].max())" ] }, { "cell_type": "markdown", "metadata": { "internals": { "frag_helper": "fragment_end", "frag_number": 15, "slide_helper": "subslide_end" }, "slide_helper": "slide_end", "slideshow": { "slide_type": "fragment" } }, "source": [ "- You can just treat a netCDF Variable object like a numpy array and assign values to it.\n", "- Variables automatically grow along unlimited dimensions (unlike numpy arrays)\n", "- The above writes the whole 3D variable all at once, but you can write it a slice at a time instead.\n", "\n", "Let's add another time slice....\n" ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "collapsed": false, "internals": { "frag_helper": "fragment_end", "frag_number": 15, "slide_type": "subslide" }, "slideshow": { "slide_type": "slide" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "('-- Wrote more data, temp.shape is now ', (4, 73, 144))\n" ] } ], "source": [ "# create a 2D array of random numbers\n", "data_slice = np.random.uniform(low=280,high=330,size=(nlats,nlons))\n", "temp[3,:,:] = data_slice # Appends the 4th time slice\n", "print(\"-- Wrote more data, temp.shape is now \", temp.shape)" ] }, { "cell_type": "markdown", "metadata": { "internals": { "frag_helper": "fragment_end", "frag_number": 17 }, "slideshow": { "slide_type": "fragment" } }, "source": [ "Note that we have not yet written any data to the time variable. It automatically grew as we appended data along the time dimension to the variable `temp`, but the data is missing." 
] }, { "cell_type": "code", "execution_count": 33, "metadata": { "collapsed": false, "internals": { "frag_helper": "fragment_end", "frag_number": 18, "slide_helper": "subslide_end" }, "slide_helper": "slide_end", "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "float64 time(time)\n", " units: hours since 1800-01-01\n", " long_name: time\n", "unlimited dimensions: time\n", "current shape = (4,)\n", "filling on, default _FillValue of 9.96920996839e+36 used\n", "\n", "(, masked_array(data = [-- -- -- --],\n", " mask = [ True True True True],\n", " fill_value = 9.96920996839e+36)\n", ")\n" ] } ], "source": [ "print(time)\n", "times_arr = time[:]\n", "print(type(times_arr),times_arr) # dashes indicate masked values (where data has not yet been written)" ] }, { "cell_type": "markdown", "metadata": { "internals": { "frag_helper": "fragment_end", "frag_number": 18, "slide_type": "subslide" }, "slideshow": { "slide_type": "slide" } }, "source": [ "Let's add write some data into the time variable. \n", "\n", "- Given a set of datetime instances, use date2num to convert to numeric time values and then write that data to the variable." ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "collapsed": false, "internals": { "frag_helper": "fragment_end", "frag_number": 20, "slide_helper": "subslide_end" }, "slide_helper": "slide_end", "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[datetime.datetime(2014, 10, 1, 0, 0), datetime.datetime(2014, 10, 2, 0, 0), datetime.datetime(2014, 10, 3, 0, 0), datetime.datetime(2014, 10, 4, 0, 0)]\n", "(array([ 1882440., 1882464., 1882488., 1882512.]), u'hours since 1800-01-01')\n", "[datetime.datetime(2014, 10, 1, 0, 0) datetime.datetime(2014, 10, 2, 0, 0)\n", " datetime.datetime(2014, 10, 3, 0, 0) datetime.datetime(2014, 10, 4, 0, 0)]\n" ] } ], "source": [ "from datetime import datetime\n", "from netCDF4 import date2num,num2date\n", "# 1st 4 days of October.\n", "dates = [datetime(2014,10,1,0),datetime(2014,10,2,0),datetime(2014,10,3,0),datetime(2014,10,4,0)]\n", "print(dates)\n", "times = date2num(dates, time.units)\n", "print(times, time.units) # numeric values\n", "time[:] = times\n", "# read time data back, convert to datetime instances, check values.\n", "print(num2date(time[:],time.units))" ] }, { "cell_type": "markdown", "metadata": { "internals": { "frag_helper": "fragment_end", "frag_number": 20, "slide_type": "subslide" }, "slideshow": { "slide_type": "slide" } }, "source": [ "## Closing a netCDF file\n", "\n", "It's **important** to close a netCDF file you opened for writing:\n", "\n", "- flushes buffers to make sure all data gets written\n", "- releases memory resources used by open netCDF files" ] }, { "cell_type": "code", "execution_count": 35, "metadata": { "collapsed": false, "internals": { "frag_helper": "fragment_end", "frag_number": 22, "slide_helper": "subslide_end" }, "slide_helper": "slide_end", "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "root group (NETCDF4_CLASSIC data model, file format HDF5):\n", " title: My model data\n", " dimensions(sizes): lat(73), lon(144), time(4)\n", " variables(dimensions): float32 \u001b[4mlat\u001b[0m(lat), float32 \u001b[4mlon\u001b[0m(lon), float64 \u001b[4mtime\u001b[0m(time), float64 \u001b[4mtemp\u001b[0m(time,lat,lon)\n", " groups: \n", "\n", "Dataset is closed!\n" ] } ], "source": [ "# 
first print the Dataset object to see what we've got\n", "print(ncfile)\n", "# close the Dataset.\n", "ncfile.close(); print('Dataset is closed!')" ] }, { "cell_type": "markdown", "metadata": { "internals": { "frag_helper": "fragment_end", "frag_number": 22, "slide_type": "subslide" }, "slideshow": { "slide_type": "slide" } }, "source": [ "# Advanced features\n", "\n", "So far we've only exercised features associated with the old netCDF version 3 data model. netCDF version 4 adds a lot of new functionality that comes with the more flexible HDF5 storage layer. \n", "\n", "Let's create a new file with `format='NETCDF4'` so we can try out some of these features." ] }, { "cell_type": "code", "execution_count": 36, "metadata": { "collapsed": false, "internals": { "frag_helper": "fragment_end", "frag_number": 25, "slide_helper": "subslide_end" }, "slide_helper": "slide_end", "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "root group (NETCDF4 data model, file format HDF5):\n", " dimensions(sizes): \n", " variables(dimensions): \n", " groups: \n", "\n" ] } ], "source": [ "ncfile = netCDF4.Dataset('data/new2.nc','w',format='NETCDF4')\n", "print(ncfile)" ] }, { "cell_type": "markdown", "metadata": { "internals": { "frag_helper": "fragment_end", "frag_number": 25, "slide_type": "subslide" }, "slideshow": { "slide_type": "slide" } }, "source": [ "## Creating Groups\n", "\n", "netCDF version 4 added support for organizing data in hierarchical groups.\n", "\n", "- analogous to directories in a filesystem. \n", "- Groups serve as containers for variables, dimensions and attributes, as well as other groups. \n", "- A `netCDF4.Dataset` creates a special group, called the 'root group', which is similar to the root directory in a unix filesystem. \n", "\n", "- groups are created using the [`createGroup`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.html#createGroup) method.\n", "- takes a single argument (a string, which is the name of the Group instance). This string is used as a key to access the group instances in the `groups` dictionary.\n", "\n", "Here we create two groups to hold data for two different model runs." ] }, { "cell_type": "code", "execution_count": 37, "metadata": { "collapsed": false, "internals": { "frag_helper": "fragment_end", "frag_number": 27, "slide_helper": "subslide_end" }, "slide_helper": "slide_end", "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "('model_run1', \n", "group /model_run1:\n", " dimensions(sizes): \n", " variables(dimensions): \n", " groups: \n", ")\n", "('model_run2', \n", "group /model_run2:\n", " dimensions(sizes): \n", " variables(dimensions): \n", " groups: \n", ")\n" ] } ], "source": [ "grp1 = ncfile.createGroup('model_run1')\n", "grp2 = ncfile.createGroup('model_run2')\n", "for grp in ncfile.groups.items():\n", " print(grp)" ] }, { "cell_type": "markdown", "metadata": { "internals": { "frag_helper": "fragment_end", "frag_number": 27, "slide_type": "subslide" }, "slideshow": { "slide_type": "slide" } }, "source": [ "Create some dimensions in the root group."
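] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A quick aside before that: a group can be fetched back from the `groups` dictionary by its name, just like dimensions and variables (nothing is modified here):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Group names are the keys of the 'groups' dictionary.\n", "print(ncfile.groups.keys())\n", "# Looking a group up by name returns the Group object that createGroup gave us.\n", "print(ncfile.groups['model_run1'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now the dimensions. They are defined once in the root group, so variables in either group can use them."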
] }, { "cell_type": "code", "execution_count": 38, "metadata": { "collapsed": false, "internals": { "frag_helper": "fragment_end", "frag_number": 29 }, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "lat_dim = ncfile.createDimension('lat', 73) # latitude axis\n", "lon_dim = ncfile.createDimension('lon', 144) # longitude axis\n", "time_dim = ncfile.createDimension('time', None) # unlimited axis (can be appended to)." ] }, { "cell_type": "markdown", "metadata": { "internals": { "frag_helper": "fragment_end", "frag_number": 30 }, "slideshow": { "slide_type": "fragment" } }, "source": [ "Now create a variable in grp1 and grp2. The library will search recursively upwards in the group tree to find the dimensions (which in this case are defined one level up).\n", "\n", "- These variables are create with **zlib compression**, another nifty feature of netCDF 4. \n", "- The data are automatically compressed when data is written to the file, and uncompressed when the data is read. \n", "- This can really save disk space, especially when used in conjunction with the [**least_significant_digit**](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.html#createVariable) keyword argument, which causes the data to be quantized (truncated) before compression. This makes the compression lossy, but more efficient." ] }, { "cell_type": "code", "execution_count": 39, "metadata": { "collapsed": false, "internals": { "frag_helper": "fragment_end", "frag_number": 31, "slide_helper": "subslide_end" }, "slide_helper": "slide_end", "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "('model_run1', \n", "group /model_run1:\n", " dimensions(sizes): \n", " variables(dimensions): float64 \u001b[4mtemp\u001b[0m(time,lat,lon)\n", " groups: \n", ")\n", "('model_run2', \n", "group /model_run2:\n", " dimensions(sizes): \n", " variables(dimensions): float64 \u001b[4mtemp\u001b[0m(time,lat,lon)\n", " groups: \n", ")\n" ] } ], "source": [ "temp1 = grp1.createVariable('temp',np.float64,('time','lat','lon'),zlib=True)\n", "temp2 = grp2.createVariable('temp',np.float64,('time','lat','lon'),zlib=True)\n", "for grp in ncfile.groups.items(): # shows that each group now contains 1 variable\n", " print(grp)" ] }, { "cell_type": "markdown", "metadata": { "internals": { "frag_helper": "fragment_end", "frag_number": 31, "slide_type": "subslide" }, "slideshow": { "slide_type": "slide" } }, "source": [ "##Creating a variable with a compound data type\n", "\n", "- Compound data types map directly to numpy structured (a.k.a 'record' arrays). \n", "- Structured arrays are akin to C structs, or derived types in Fortran. \n", "- They allow for the construction of table-like structures composed of combinations of other data types, including other compound types. \n", "- Might be useful for representing multiple parameter values at each point on a grid, or at each time and space location for scattered (point) data. \n", "\n", "Here we create a variable with a compound data type to represent complex data (there is no native complex data type in netCDF). \n", "\n", "- The compound data type is created with the [`createCompoundType`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.html#createCompoundType) method." 
] }, { "cell_type": "code", "execution_count": 40, "metadata": { "collapsed": false, "internals": { "frag_helper": "fragment_end", "frag_number": 33, "slide_helper": "subslide_end" }, "slide_helper": "slide_end", "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "compound cmplx_var(time, lat, lon)\n", "compound data type: [('real', '\n", "vlen phony_vlen_var(time, lat, lon)\n", "vlen data type: int64\n", "path = /model_run2\n", "unlimited dimensions: time\n", "current shape = (1, 73, 144)\n", "\n", "('data =\\n', array([[[array([0, 4, 0, 9, 2, 2, 2, 4, 2]), array([7, 5, 4, 4, 9, 8, 0]),\n", " array([3, 6, 6, 8, 2, 7]), ..., array([5, 0, 0, 8, 8, 1, 5, 3]),\n", " array([4, 2, 7]), array([0])],\n", " [array([5, 6, 6, 6, 1, 0, 7]), array([7]),\n", " array([7, 5, 8, 9, 6, 9, 3]), ..., array([0, 6, 5, 4]),\n", " array([7, 1, 9, 7, 7, 2]), array([1, 4, 0])],\n", " [array([4, 3, 1]), array([6, 3, 9, 7, 8]), array([8]), ...,\n", " array([6, 5, 8, 0]), array([0]), array([0, 9, 6, 2, 4])],\n", " ..., \n", " [array([8, 4, 4]), array([4, 1, 6]), array([1, 4, 2, 3, 9]), ...,\n", " array([9, 1]), array([7, 2, 5, 1, 5, 8, 2]),\n", " array([2, 9, 9, 1, 4, 6, 3, 5, 2])],\n", " [array([4, 7, 9, 8, 2, 3, 6, 6]),\n", " array([1, 4, 1, 6, 1, 1, 2, 3, 9]),\n", " array([9, 5, 6, 2, 4, 3, 8, 2, 9]), ..., array([9, 5, 7]),\n", " array([3, 9]), array([4, 2, 6, 9])],\n", " [array([8, 9, 9, 2, 2, 8, 8, 5]), array([3]),\n", " array([8, 8, 0, 2, 9, 2, 3, 0, 9]), ..., array([7]),\n", " array([5, 1, 0, 6, 8, 6]), array([8, 6, 3, 6, 9, 8, 4, 2, 5])]]], dtype=object))\n" ] } ], "source": [ "vlen_data = np.empty((nlats,nlons),object)\n", "for i in range(nlons):\n", " for j in range(nlats):\n", " size = np.random.randint(1,10,size=1) # random length of sequence\n", " vlen_data[j,i] = np.random.randint(0,10,size=size)# generate random sequence\n", "vlvar[0] = vlen_data # append along unlimited dimension (time)\n", "print(vlvar)\n", "print('data =\\n',vlvar[:])" ] }, { "cell_type": "markdown", "metadata": { "internals": { "frag_helper": "fragment_end", "frag_number": 39, "slide_type": "subslide" }, "slideshow": { "slide_type": "slide" } }, "source": [ "Close the Dataset and examine the contents with ncdump." 
] }, { "cell_type": "code", "execution_count": 44, "metadata": { "collapsed": false, "internals": { "frag_helper": "fragment_end", "frag_number": 41, "slide_helper": "subslide_end" }, "slide_helper": "slide_end", "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "netcdf new2 {\r\n", "types:\r\n", " compound complex128 {\r\n", " double real ;\r\n", " double imag ;\r\n", " }; // complex128\r\n", " int64(*) phony_vlen ;\r\n", "dimensions:\r\n", "\tlat = 73 ;\r\n", "\tlon = 144 ;\r\n", "\ttime = UNLIMITED ; // (1 currently)\r\n", "\r\n", "group: model_run1 {\r\n", " variables:\r\n", " \tdouble temp(time, lat, lon) ;\r\n", " \tcomplex128 cmplx_var(time, lat, lon) ;\r\n", " } // group model_run1\r\n", "\r\n", "group: model_run2 {\r\n", " variables:\r\n", " \tdouble temp(time, lat, lon) ;\r\n", " \tphony_vlen phony_vlen_var(time, lat, lon) ;\r\n", " } // group model_run2\r\n", "}\r\n" ] } ], "source": [ "ncfile.close()\n", "!ncdump -h data/new2.nc" ] }, { "cell_type": "markdown", "metadata": { "internals": { "frag_helper": "fragment_end", "frag_number": 41, "slide_helper": "subslide_end", "slide_type": "subslide" }, "slide_helper": "slide_end", "slideshow": { "slide_type": "slide" } }, "source": [ "##Other interesting and useful projects using netcdf4-python\n", "\n", "- [Xray](http://xray.readthedocs.org/en/stable/): N-dimensional variant of the core [pandas](http://pandas.pydata.org) data structure that can operate on netcdf variables.\n", "- [Iris](http://scitools.org.uk/iris/): a data model to create a data abstraction layer which isolates analysis and visualisation code from data format specifics. Uses netcdf4-python to access netcdf data (can also handle GRIB).\n", "- [Biggus](https://github.com/SciTools/biggus): Virtual large arrays (from netcdf variables) with lazy evaluation.\n", "- [cf-python](http://cfpython.bitbucket.org/): Implements the [CF](http://cfconventions.org) data model for the reading, writing and processing of data and metadata. " ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.9" } }, "nbformat": 4, "nbformat_minor": 0 }