{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## handling units for ASPECT output \n", "\n", "The ASPECT notebooks in this repository have not yet touched on units. So in this notebook, we outline a general methodology for handling units and ASPECT data with an emphasis on user configuration. \n", "\n", "The general approach is to use a default mapping of fieldnames to units that the user can override with a configuration json that they create in the directory containing the `vtu` output. \n", "\n", "Some notes and thoughts on ASPECT output:\n", "\n", "* The `vtu` files output by ASPECT contain fieldnames, but no information on units. \n", "* Many of the fields output by ASPECT have standard units (e.g., `T` has units of `K`) but many ASPECT C++ plugins introduce new fields, so any attempt to make a comprehensive list will soon be outdated. \n", "* All ASPECT calculations are done in the MKS system of units, but the units for time can be output in `s` or `Myrs`.\n", "* ASPECT users should be able to easily load existing output, so rather than considering a modification of the ASPECT write routines to include metadata for units, we focus on how to load ASPECT output as it is now. \n", "\n", "## data\n", "\n", "We'll be using ASPECT output from: Data source: DOI 10.17605/OSF.IO/TEJM4 Title: Edge-Driven Convection and Mantle Wind Models Beneath Madagascar Authors: Tahiry Rajaonarison, Sarah Stamps, Stewart Fishwick, Sascha Brune, Anne Glerum, Jiashun Hu.\n", "\n", "To run notebook: download and unpack, set environment variable \"ASPECTdatadir\" to the directory containing the unzipped directory\n", "\n", "## overview \n", "\n", "we store a default mapping for units in a json, `aspect_default_units.json`. This mapping could be stored in the future `yt` frontend for ASPECT. 
The benefit to using a json file rather than a python dict is that the user can edit or extend the unit mapping without modifying any python code." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import yt\n", "import unyt\n", "import os\n", "import json \n", "import numpy as np\n", "import xmltodict\n", "import meshio" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The default units json contains the following:\n", "```\n", "{\n", " \"T\": \"K\",\n", " \"p\": \"Pa\",\n", " \"viscosity\": \"Pa*s\",\n", " \"strain_rate\": \"1 / s\",\n", " \"velocity\": \"m / s\",\n", " \"time\" : \"s\", \n", " \"length\" : \"m\" \n", "}\n", "```\n", "\n", "For each fieldname, we have a declaration of the units in a `unyt`-compatible string. When loading the field data from the vtu files, we'll first check this json for the units of each field. If `aspect_units.json` exists in the data directory, then any fields defined there will override the default values. If a field is not found in either file, we'll fall back to dimensionless units (`'1'`). 
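\n", "\n", "For example, a user-created `aspect_units.json` containing only `{'time': 'Myr'}` (the `'Myr'` value here is just a hypothetical override) would change the `time` units while leaving all of the other defaults intact. The override is a plain dictionary update, which can be sketched minimally as:\n", "\n", "```python\n", "defaults = {'T': 'K', 'time': 's'}  # subset of aspect_default_units.json\n", "user_units = {'time': 'Myr'}  # hypothetical contents of a user's aspect_units.json\n", "units = {**defaults, **user_units}  # user values win on key collisions\n", "print(units)\n", "```\n", "\n", "so that `units` comes out as `{'T': 'K', 'time': 'Myr'}`. 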
\n", "\n", "The default units can be loaded into a python dictionary as follows:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "with open('aspect_default_units.json') as jstream:\n", " default_units = json.load(jstream)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'T': 'K', 'p': 'Pa', 'viscosity': 'Pa*s', 'strain_rate': '1 / s', 'velocity': 'm / s', 'time': 's', 'length': 'm'}\n" ] } ], "source": [ "print(default_units)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So if we had a numpy array of `T` values, we could build a `unyt` array as follows:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "unyt_array([ 500., 825., 1150., 1475., 1800.], 'K')" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "T = np.linspace(500,1800,5)\n", "T = unyt.unyt_array(T,default_units['T'])\n", "T" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "unyt_array([ 226.85, 551.85, 876.85, 1201.85, 1526.85], 'degC')" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "T.to(\"Celsius\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "and now `YTArray` is a `unyt_array`: " ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "unyt_array([1.e+18, 1.e+19, 1.e+20, 1.e+21, 1.e+22, 1.e+23, 1.e+24,\n", " 1.e+25], 'Pa*s')" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "eta = np.logspace(18,25,8)\n", "eta = yt.YTArray(eta,default_units['viscosity'])\n", "eta" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "unyt_quantity(1, 'Pa*s')" ] }, "execution_count": 7, "metadata": {}, 
"output_type": "execute_result" } ], "source": [ "eta.unit_quantity" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### a data loader\n", "\n", "same as in other notebooks, but with the addition of a `units_lookup` dict that gets populated by the `_get_units_lookup` function. And `parseNodeData` now populates the `node_data` dictionary with `YTArray` objects in order to pass the units. " ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "class pvuFile(object):\n", " def __init__(self,file,**kwargs):\n", " self.file=file \n", " self.dataDir=kwargs.get('dataDir',os.path.split(file)[0])\n", " with open(file) as data:\n", " self.pXML = xmltodict.parse(data.read())\n", " \n", " # store fields for convenience \n", " self.fields=self.pXML['VTKFile']['PUnstructuredGrid']['PPointData']['PDataArray'] \n", " self.units_lookup = self._get_units_lookup()\n", " \n", " self.connectivity = None\n", " self.coordinates = None\n", " self.node_data = None \n", " \n", " def _get_units_lookup(self):\n", " with open('aspect_default_units.json') as jstream:\n", " units = json.load(jstream)\n", " \n", " # look for the user file \n", " filename = os.path.join(self.dataDir,'aspect_units.json')\n", " \n", " user_units = {}\n", " if os.path.isfile(filename):\n", " with open(filename) as jstream:\n", " user_units = json.load(jstream)\n", " \n", " # set the units dict (will override defaults with user values)\n", " for key,val in user_units.items():\n", " units[key] = val \n", " \n", " return units \n", " \n", " def load(self): \n", " \n", " conlist=[] # list of 2D connectivity arrays \n", " coordlist=[] # global, concatenated coordinate array \n", " nodeDictList=[] # list of node_data dicts, same length as conlist \n", "\n", " con_offset=-1\n", " for mesh_id,src in enumerate(self.pXML['VTKFile']['PUnstructuredGrid']['Piece']): \n", " mesh_name=\"connect{meshnum}\".format(meshnum=mesh_id+1) # connect1, connect2, etc. 
\n", " srcFi=os.path.join(self.dataDir,src['@Source']) # full path to .vtu file \n", " \n", " [con,coord,node_d]=self.loadPiece(srcFi,mesh_name,con_offset+1) \n", " con_offset=con.max() \n", " \n", " conlist.append(con.astype(\"i8\"))\n", " coordlist.extend(coord.astype(\"f8\"))\n", " nodeDictList.append(node_d)\n", " \n", " self.connectivity=conlist\n", " self.coordinates=np.array(coordlist)\n", " self.node_data=nodeDictList\n", " \n", " def loadPiece(self,srcFi,mesh_name,connectivity_offset=0): \n", "\n", " meshPiece=meshio.read(srcFi) # read it in with meshio \n", " coords=meshPiece.points # coords and node_data are already global\n", " connectivity=meshPiece.cells_dict['hexahedron'] # 2D connectivity array \n", "\n", " # parse node data \n", " node_data=self.parseNodeData(meshPiece.point_data,connectivity,mesh_name)\n", "\n", " # offset the connectivity matrix to global value \n", " connectivity=np.array(connectivity)+connectivity_offset\n", "\n", " return [connectivity,coords,node_data]\n", " \n", " def parseNodeData(self,point_data,connectivity,mesh_name):\n", " \n", " # for each field, evaluate field data by index, reshape to match connectivity \n", " con1d=connectivity.ravel() \n", " conn_shp=connectivity.shape \n", " \n", " comp_hash={0:'cx',1:'cy',2:'cz'}\n", " def rshpData(data1d):\n", " return np.reshape(data1d[con1d],conn_shp)\n", " \n", " node_data={} \n", " for fld in self.fields: \n", " nm=fld['@Name']\n", " if nm in point_data.keys():\n", " \n", " units = self.units_lookup.get(nm,'1')\n", " if '@NumberOfComponents' in fld.keys() and int(fld['@NumberOfComponents'])>1:\n", " # we have a vector, deal with components\n", " for component in range(int(fld['@NumberOfComponents'])): \n", " comp_name=nm+'_'+comp_hash[component] # e.g., velocity_cx \n", " m_F=(mesh_name,comp_name) # e.g., ('connect1','velocity_cx')\n", " node_data[m_F]=yt.YTArray(rshpData(point_data[nm][:,component]),units)\n", " else:\n", " # just a scalar! 
\n", " m_F=(mesh_name,nm) # e.g., ('connect1','T')\n", " node_data[m_F]=yt.YTArray(rshpData(point_data[nm]),units)\n", " \n", " return node_data \n" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "DataDir=os.path.join(os.environ.get('ASPECTdatadir','../'),'edc_driven_convection_madagascar',\n", " 'edc_driven_convection_madagascar','solution')\n", "pFile=os.path.join(DataDir,'solution-00015.pvtu')\n", "pvuData=pvuFile(pFile)\n", "pvuData.load()" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "yt : [INFO ] 2020-08-26 08:47:11,187 Parameters: current_time = 0.0\n", "yt : [INFO ] 2020-08-26 08:47:11,187 Parameters: domain_dimensions = [2 2 2]\n", "yt : [INFO ] 2020-08-26 08:47:11,188 Parameters: domain_left_edge = [ 2679220.40116885 2679220.40116884 -3272029.00399254]\n", "yt : [INFO ] 2020-08-26 08:47:11,189 Parameters: domain_right_edge = [5653600.91489411 5653600.91489411 -736270.69874229]\n", "yt : [INFO ] 2020-08-26 08:47:11,189 Parameters: cosmological_simulation = 0.0\n" ] } ], "source": [ "ds = yt.load_unstructured_mesh(\n", " pvuData.connectivity,\n", " pvuData.coordinates,\n", " node_data = pvuData.node_data,\n", " length_unit = pvuData.units_lookup['length'],\n", " time_unit = pvuData.units_lookup['time']\n", ") \n" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "dd = ds.all_data()" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "yt : [INFO ] 2020-08-26 08:47:11,555 xlim = 2679220.401169 5653600.914894\n", "yt : [INFO ] 2020-08-26 08:47:11,555 ylim = -3272029.003993 -736270.698742\n", "yt : [INFO ] 2020-08-26 08:47:11,556 xlim = 2679220.401169 5653600.914894\n", "yt : [INFO ] 2020-08-26 08:47:11,557 ylim = -3272029.003993 -736270.698742\n", "yt : [INFO ] 2020-08-26 08:47:11,558 Making a fixed 
resolution buffer of (('connect1', 'T')) 800 by 800\n" ] }, { "data": { "text/html": [ "
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "p=yt.SlicePlot(ds, \"x\", (\"connect1\", \"T\"))\n", "p.set_log(\"T\",False)\n", "p.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "so units are there! but that's slicing just a single mesh element. " ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "yt : [INFO ] 2020-08-26 08:47:12,337 xlim = 2679220.401169 5653600.914894\n", "yt : [INFO ] 2020-08-26 08:47:12,337 ylim = -3272029.003993 -736270.698742\n", "yt : [INFO ] 2020-08-26 08:47:12,338 xlim = 2679220.401169 5653600.914894\n", "yt : [INFO ] 2020-08-26 08:47:12,338 ylim = -3272029.003993 -736270.698742\n", "yt : [INFO ] 2020-08-26 08:47:12,339 Making a fixed resolution buffer of (('all', 'T')) 800 by 800\n" ] }, { "data": { "text/html": [ "
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "p=yt.SlicePlot(ds, \"x\", (\"all\", \"T\"))\n", "p.set_log(\"T\",False)\n", "p.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "hmm. `fluid_type` of `all` doesn't seem to keep the units??? need to map out what `all` actually is... " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.3" } }, "nbformat": 4, "nbformat_minor": 4 }