{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "<script async src=\"https://www.googletagmanager.com/gtag/js?id=UA-59152712-8\"></script>\n", "<script>\n", " window.dataLayer = window.dataLayer || [];\n", " function gtag(){dataLayer.push(arguments);}\n", " gtag('js', new Date());\n", "\n", " gtag('config', 'UA-59152712-8');\n", "</script>\n", "\n", "# `BaikalETK`: NRPy+-Based BSSN Solvers for the Einstein Toolkit\n", "\n", "## Author: Zach Etienne\n", "\n", "#### Special thanks to Roland Haas for help in answering many implementation questions\n", "\n", "## This notebook generates `Baikal` and `BaikalVacuum`, [Einstein Toolkit](https://einsteintoolkit.org) thorns for solving Einstein's equations in the BSSN formalism, in Cartesian coordinates. These thorns are highly optimized for modern CPU architectures, featuring SIMD intrinsics and OpenMP support.\n", "\n", "**Notebook Status:** <font color='orange'><b> Validated against the Einstein Toolkit `McLachlan` BSSN thorn, both in the context of black hole binary simulations (excellent gravitational waveform agreement) as well as binary neutron star simulations (when parameter `enable_stress_energy_source_terms` below is set to `True`). Once plots demonstrating this agreement are added to this tutorial notebook, the validation status will be set to</b></font> <font color='green'><b>Validated</b></font>.\n", "\n", "**Validation Notes:** This tutorial notebook has been validated against a trusted Einstein Toolkit thorn, but plots demonstrating its validation have yet to be included in this notebook.\n", "\n", "## Introduction\n", "\n", "```\n", "How often did my soul cry out:\n", "Come back to Baikal once again?\n", "I still do not know this lake:\n", "To see does not mean to know.\n", "```\n", "[Igor Severyanin](https://en.wikipedia.org/wiki/Igor_Severyanin), [[1]](https://1baikal.ru/en/istoriya/let’s-turn-to-baikal-a-poetic-view).\n", "\n", "[Lake Baikal](https://en.wikipedia.org/wiki/Lake_Baikal) is home to the [nerpa seal](https://en.wikipedia.org/wiki/Baikal_seal), NRPy+'s mascot.\n", "\n", "This notebook generates two thorns: `Baikal` and `BaikalVacuum`. `Baikal` contains stress-energy source terms (i.e., the $8\\pi T^{\\mu\\nu}$ part of Einstein's equations) for general relativistic hydrodynamics (GRHD) and magnetohydrodynamics (GRMHD), and the `BaikalVacuum` contains no such source terms, so can be used for e.g., black hole and binary black hole calculations in which no self-gravitating matter is considered.\n", "\n", "`Baikal` and `BaikalVacuum` are meant to reproduce the functionality of the `McLachlan` thorn, generated by the [Mathematica](https://www.wolfram.com/mathematica/)-based [Kranc](http://kranccode.org/) code, but using the fully open-source NRPy+/SymPy infrastructure.\n", "\n", "Unlike `McLachlan`, `Baikal` and `BaikalVacuum` enable the user at runtime to select only the most popular options, like finite-difference order and Kreiss-Oliger dissipation strength. Further, neither thorn yet supports symmetry options or Jacobians needed for `Llama` grids. Support for these options might be provided in a future update. As for BSSN gauge choice, `Baikal` and `BaikalVacuum` by default support only the BSSN moving-puncture gauge conditions, though this can be changed by selecting one of the [other gauge choices implemented within the NRPy+ infrastructure](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb). \n", "\n", "**Regarding the structure of this notebook:**\n", "\n", "As generating the optimized C code kernels needed for these thorns can *individually* take roughly 10 minutes per finite-difference order, there is a strong motivation to parallelize the code generation process. \n", "\n", "To this end, this Jupyter notebook does not itself run Python code in most code blocks directly. Instead \n", "* code blocks needed to generate the C code kernels (i.e., the C-code representations of the basic equations) are simply output to the `$BaikalETKdir/BaikalETK_C_kernels_codegen.py` file using the `%%writefile` [IPython magic command](https://ipython.readthedocs.io/en/stable/interactive/magics.html);\n", "* code blocks needed to generate the Einstein Toolkit (ETK) CCL files are output to `$BaikalETKdir/BaikalETK_ETK_ccl_files_codegen.py` file using the same IPython magic; and\n", "* code blocks needed to generate the C driver routines that call the C code kernels and interface with the rest of the ETK are output to `$BaikalETKdir/BaikalETK_C_drivers_codegen.py`.\n", "\n", "The only non-`%%writefile` IPython magic codeblocks used in this notebook act as glue so that the above three Python modules can be called in sequence, with particular emphasis on generating the C code kernels in parallel (by calling multiple instances of the Python module `$BaikalETKdir/BaikalETK_C_kernels_codegen.py`). On a modern, multi-core CPU, this greatly cuts down the time needed to generate the thorns (sometimes by 10x or more), all while ensuring a convenient user interface for adjusting the thorns to suit one's needs.\n", "\n", "### Associated NRPy+ Source Code & Tutorial Modules for this module: \n", "* [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\\[**tutorial**\\]](Tutorial-ADM_Initial_Data-Converting_Exact_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): Spherical/Cartesian ADM$\\to$Curvilinear BSSN converter function, for which ADM quantities are assumed given at each gridpoint (i.e., exact, closed-form expressions are not given). This is used to generate BaikalETK's ADM$\\to$BSSN function, as in the ETK spacetime evolution modules are to assume that initial data are given as ADM quantities in the Cartesian basis at each gridpoint.\n", "* [BSSN/ADM_in_terms_of_BSSN.py](BSSN/ADM_in_terms_of_BSSN.py); [\\[**tutorial**\\]](Tutorial-ADM_in_terms_of_BSSN.ipynb): Constructs ADM quantities in terms of BSSN quantities (in arbitrary curvilinear coordinates, though we use Cartesian here). This is used to generate BaikalETK's BSSN$\\to$ADM function, which make ADM variables available to diagnostic thorns within the ETK.\n", "* [BSSN/BSSN_constraints.py](BSSN/BSSN_constraints.py); [\\[**tutorial**\\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates\n", "* [BSSN/BSSN_RHSs.py](BSSN/BSSN_RHSs.py); [\\[**tutorial**\\]](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb): Generates the right-hand sides for the BSSN evolution equations in singular, curvilinear coordinates\n", "* [BSSN/BSSN_gauge_RHSs.py](BSSN/BSSN_gauge_RHSs.py); [\\[**tutorial**\\]](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb): Generates the right-hand sides for the BSSN gauge evolution equations in singular, curvilinear coordinates" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='toc'></a>\n", "\n", "# Table of Contents\n", "$$\\label{toc}$$\n", "\n", "This notebook is organized as follows\n", "\n", "1. [Step 1](#initializenrpy): Initialize needed Python/NRPy+ modules\n", "1. [Step 2](#bssn): NRPy+-generated C code kernels for BSSN spacetime solve\n", " 1. [Step 2.a](#bssnrhs): BSSN RHS expressions\n", " 1. [Step 2.b](#hammomconstraints): Hamiltonian & momentum constraints\n", " 1. [Step 2.c](#gamconstraint): Enforce conformal 3-metric $\\det{\\bar{\\gamma}_{ij}}=\\det{\\hat{\\gamma}_{ij}}$ constraint (in Cartesian coordinates, $\\det{\\hat{\\gamma}_{ij}}=1$)\n", " 1. [Step 2.d](#driver_call_codegen_funcs): Given `WhichPart` parameter choice, generate algorithm for calling corresponding function within `BaikalETK_C_kernels_codegen_onepart()` to generate C code kernel \n", " 1. [Step 2.e](#kernel_codegen): Generate C code kernels for `Baikal` and `BaikalETK`\n", " 1. [Step 2.e.i](#feature_choice): Set compile-time and runtime parameters for `Baikal` and `BaikalVacuum`\n", " 1. [Step 2.e.ii](#parallel_codegen): Generate all C-code kernels for `Baikal` and `BaikalVacuum`, in parallel if possible\n", "1. [Step 3](#cclfiles): CCL files - Define how this module interacts and interfaces with the wider Einstein Toolkit infrastructure\n", " 1. [Step 3.a](#paramccl): `param.ccl`: specify free parameters within `BaikalETK`\n", " 1. [Step 3.b](#interfaceccl): `interface.ccl`: define needed gridfunctions; provide keywords denoting what this thorn provides and what it should inherit from other thorns\n", " 1. [Step 3.c](#scheduleccl): `schedule.ccl`:schedule all functions used within `BaikalETK`, specify data dependencies within said functions, and allocate memory for gridfunctions\n", "1. [Step 4](#cdrivers): C driver functions for ETK registration & NRPy+-generated kernels\n", " 1. [Step 4.a](#etkfunctions): Needed ETK functions: Banner, Symmetry registration, Parameter sanity check, Method of Lines (`MoL`) registration, Boundary condition\n", " 1. [Step 4.b](#bssnadmconversions): BSSN $\\leftrightarrow$ ADM conversions\n", " 1. [Step 4.b.i](#admtobssn): ADM $\\to$ BSSN\n", " 1. [Step 4.b.ii](#bssntoadm): BSSN $\\to$ ADM\n", " 1. [Step 4.c](#bssnrhss) Evaluate BSSN right-hand-sides (RHSs)\n", " 1. [Step 4.c.i](#ricci): Evaluate Ricci tensor\n", " 1. [Step 4.c.ii](#bssnrhssricciinput): Evaluate BSSN RHSs, using Ricci tensor as input \n", " 1. [Step 4.d](#enforcegammahatconstraint): Enforcing conformal 3-metric $\\det{\\bar{\\gamma}_{ij}}=\\det{\\hat{\\gamma}_{ij}}$ constraint (in Cartesian coordinates, $\\det{\\hat{\\gamma}_{ij}}=1$)\n", " 1. [Step 4.e](#diagnostics): Diagnostics: Computing Hamiltonian & momentum constraints\n", " 1. [Step 4.f](#t4uu): `driver_BSSN_T4UU()`: Compute $T^{\\mu\\nu}$ from `TmunuBase`'s $T_{\\mu\\nu}$\n", " 1. [Step 4.g](#outcdrivers): Output all C driver functions needed for `Baikal`/`BaikalVacuum`\n", " 1. [Step 4.h](#makecodedefn): `make.code.defn`: List of all C driver functions needed to compile `BaikalETK`\n", "1. [Step 5](#code_validation): Code Validation\n", " 1. [Step 5.a](#self_validation): Validation against [BaikalETK/](BaikalETK/) Python modules\n", "1. [Step 6](#latex_pdf_output): Output this notebook to $\\LaTeX$-formatted PDF file" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='initializenrpy'></a>\n", "\n", "# Step 1: Initialize needed Python/NRPy+ modules \\[Back to [top](#toc)\\]\n", "\n", "$$\\label{initializenrpy}$$" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# Compile-time (i.e., NRPy+-time) parameters for both Baikal & BaikalVacuum:\n", "LapseCondition = \"OnePlusLog\"\n", "ShiftCondition = \"GammaDriving2ndOrder_NoCovariant\"" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "# We output to an BaikalETK_validate so as not to overwrite the \"trusted\"\n", "# BaikalETK Python modules directory (generated by an earlier/equivalent\n", "# version of this notebook, and to validate against these trusted Python\n", "# modules. To this end, at the bottom of this notebook we perform a diff\n", "# between this notebook's output against the trusted BaikalETK Python\n", "# modules and error out if any difference exists. The proper approach\n", "# for updating BaikalETK Python modules is to first update this notebook\n", "# then once the updated Baikal/BaikalVacuum ETK thorns have been\n", "# revalidated, copy the Python modules from BaikalETK_validate to\n", "# BaikalETK.\n", "BaikalETKdir = \"BaikalETK_validate\"\n", "import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface\n", "import os, sys, shutil # Standard Python modules for multiplatform OS-level functions\n", "\n", "# Output finite difference stencils as functions instead of inlined expressions.\n", "# Dramatically speeds up compile times (esp with higher-order finite differences\n", "# and GCC 9.3+)\n", "import finite_difference as fin # NRPy+: Finite difference C code generation module\n", "import NRPy_param_funcs as par # NRPy+: Parameter interface\n", "par.set_parval_from_str(\"finite_difference::enable_FD_functions\", True)\n", "\n", "cmd.mkdir(os.path.join(BaikalETKdir))" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Overwriting BaikalETK_validate/BaikalETK_C_kernels_codegen.py\n" ] } ], "source": [ "%%writefile $BaikalETKdir/BaikalETK_C_kernels_codegen.py\n", "\n", "# Step 1: Import needed core NRPy+ modules\n", "from outputC import lhrh # NRPy+: Core C code output module\n", "import finite_difference as fin # NRPy+: Finite difference C code generation module\n", "import NRPy_param_funcs as par # NRPy+: Parameter interface\n", "import grid as gri # NRPy+: Functions having to do with numerical grids\n", "import loop as lp # NRPy+: Generate C code loops\n", "import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support\n", "import reference_metric as rfm # NRPy+: Reference metric support\n", "import os, sys # Standard Python modules for multiplatform OS-level functions\n", "import time # Standard Python module; useful for benchmarking\n", "import BSSN.BSSN_RHSs as rhs\n", "import BSSN.BSSN_gauge_RHSs as gaugerhs\n", "\n", "def BaikalETK_C_kernels_codegen_onepart(params=\n", " \"WhichPart=BSSN_RHSs,ThornName=Baikal,FD_order=4,enable_stress_energy_source_terms=True\"):\n", " # Set default parameters\n", " WhichPart = \"BSSN_RHSs\"\n", " ThornName = \"Baikal\"\n", " FD_order = 4\n", " enable_stress_energy_source_terms = True\n", " LapseCondition = \"OnePlusLog\" # Set the standard 1+log lapse condition\n", " # Set the standard, second-order advecting-shift, Gamma-driving shift condition:\n", " ShiftCondition = \"GammaDriving2ndOrder_NoCovariant\"\n", " # Default Kreiss-Oliger dissipation strength\n", " default_KO_strength = 0.1\n", "\n", " import re\n", " if params != \"\":\n", " params2 = re.sub(\"^,\",\"\",params)\n", " params = params2.strip()\n", " splitstring = re.split(\"=|,\", params)\n", "\n", " if len(splitstring) % 2 != 0:\n", " print(\"outputC: Invalid params string: \"+params)\n", " sys.exit(1)\n", "\n", " parnm = []\n", " value = []\n", " for i in range(int(len(splitstring)/2)):\n", " parnm.append(splitstring[2*i])\n", " value.append(splitstring[2*i+1])\n", "\n", " for i in range(len(parnm)):\n", " parnm.append(splitstring[2*i])\n", " value.append(splitstring[2*i+1])\n", "\n", " for i in range(len(parnm)):\n", " if parnm[i] == \"WhichPart\":\n", " WhichPart = value[i]\n", " elif parnm[i] == \"ThornName\":\n", " ThornName = value[i]\n", " elif parnm[i] == \"FD_order\":\n", " FD_order = int(value[i])\n", " elif parnm[i] == \"enable_stress_energy_source_terms\":\n", " enable_stress_energy_source_terms = False\n", " if value[i] == \"True\":\n", " enable_stress_energy_source_terms = True\n", " elif parnm[i] == \"LapseCondition\":\n", " LapseCondition = value[i]\n", " elif parnm[i] == \"ShiftCondition\":\n", " ShiftCondition = value[i]\n", " elif parnm[i] == \"default_KO_strength\":\n", " default_KO_strength = float(value[i])\n", " else:\n", " print(\"BaikalETK Error: Could not parse input param: \"+parnm[i])\n", " sys.exit(1)\n", "\n", " # Set output directory for C kernels\n", " outdir = os.path.join(ThornName, \"src\") # Main C code output directory\n", "\n", " # Set spatial dimension (must be 3 for BSSN)\n", " par.set_parval_from_str(\"grid::DIM\", 3)\n", "\n", " # Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm,\n", " # FD order, floating point precision, and CFL factor:\n", " # Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,\n", " # SymTP, SinhSymTP\n", " # NOTE: Only CoordSystem == Cartesian makes sense here; new\n", " # boundary conditions are needed within the ETK for\n", " # Spherical, etc. coordinates.\n", " CoordSystem = \"Cartesian\"\n", "\n", " par.set_parval_from_str(\"reference_metric::CoordSystem\", CoordSystem)\n", " rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.\n", "\n", " # Set the gridfunction memory access type to ETK-like, so that finite_difference\n", " # knows how to read and write gridfunctions from/to memory.\n", " par.set_parval_from_str(\"grid::GridFuncMemAccess\", \"ETK\")\n", "\n", " par.set_parval_from_str(\"BSSN.BSSN_gauge_RHSs::ShiftEvolutionOption\", ShiftCondition)\n", " par.set_parval_from_str(\"BSSN.BSSN_gauge_RHSs::LapseEvolutionOption\", LapseCondition)\n", "\n", " T4UU = None\n", " if enable_stress_energy_source_terms == True:\n", " registered_already = False\n", " for i in range(len(gri.glb_gridfcs_list)):\n", " if gri.glb_gridfcs_list[i].name == \"T4UU00\":\n", " registered_already = True\n", " if not registered_already:\n", " T4UU = ixp.register_gridfunctions_for_single_rank2(\"AUXEVOL\", \"T4UU\", \"sym01\", DIM=4)\n", " else:\n", " T4UU = ixp.declarerank2(\"T4UU\", \"sym01\", DIM=4)\n", "\n", " # Register the BSSN constraints (Hamiltonian & momentum constraints) as gridfunctions.\n", " registered_already = False\n", " for i in range(len(gri.glb_gridfcs_list)):\n", " if gri.glb_gridfcs_list[i].name == \"H\":\n", " registered_already = True\n", " if not registered_already:\n", " # We ignore return values for register_gridfunctions...() calls below\n", " # as they are unused.\n", " gri.register_gridfunctions(\"AUX\", \"H\")\n", " ixp.register_gridfunctions_for_single_rank1(\"AUX\", \"MU\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='bssn'></a>\n", "\n", "# Step 2: Output C code for BSSN spacetime solve \\[Back to [top](#toc)\\]\n", "$$\\label{bssn}$$\n", "\n", "<a id='bssnrhs'></a>\n", "\n", "## Step 2.a: BSSN RHS expressions \\[Back to [top](#toc)\\]\n", "$$\\label{bssnrhs}$$\n", "\n", "`BaikalETK` implements a fully covariant version of the BSSN 3+1 decomposition of Einstein's equations of general relativity, which is fully documented within NRPy+ ([start here](Tutorial-BSSN_formulation.ipynb)). However, especially if you are a newcomer to the field of numerical relativity, you may also find the following lectures and papers useful for understanding the adopted formalism:\n", "\n", "* Mathematical foundations of BSSN and 3+1 initial value problem decompositions of Einstein's equations:\n", " * [Thomas Baumgarte's lectures on mathematical formulation of numerical relativity](https://www.youtube.com/watch?v=t3uo2R-yu4o&list=PLRVOWML3TL_djTd_nsTlq5aJjJET42Qke)\n", " * [Yuichiro Sekiguchi's introduction to BSSN](http://www2.yukawa.kyoto-u.ac.jp/~yuichiro.sekiguchi/3+1.pdf) \n", "* Extensions to the standard BSSN approach used in NRPy+\n", " * [Brown's covariant \"Lagrangian\" formalism of BSSN](https://arxiv.org/abs/0902.3652)\n", " * [BSSN in spherical coordinates, using the reference-metric approach of Baumgarte, Montero, Cordero-Carrión, and Müller (2012)](https://arxiv.org/abs/1211.6632)\n", " * [BSSN in generic curvilinear coordinates, using the extended reference-metric approach of Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)\n", "\n", "Here, we simply call the [BSSN.BSSN_RHSs](BSSN/BSSN_RHSs.py); [\\[**tutorial**\\]](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb) and [BSSN.BSSN_gauge_RHSs](BSSN/BSSN_gauge_RHSs.py); [\\[**tutorial**\\]](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb) NRPy+ Python modules to generate the symbolic expressions, add Kreiss-Oliger dissipation, and then output the finite-difference C code form of the equations using NRPy+'s [finite_difference](finite_difference.py) ([**tutorial**](Tutorial-Finite_Difference_Derivatives.ipynb)) C code generation module." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_C_kernels_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_C_kernels_codegen.py\n", "\n", " def BSSN_RHSs__generate_symbolic_expressions():\n", " ######################################\n", " # START: GENERATE SYMBOLIC EXPRESSIONS\n", " print(\"Generating symbolic expressions for BSSN RHSs...\")\n", " start = time.time()\n", " # Enable rfm_precompute infrastructure, which results in\n", " # BSSN RHSs that are free of transcendental functions,\n", " # even in curvilinear coordinates, so long as\n", " # ConformalFactor is set to \"W\" (default).\n", " par.set_parval_from_str(\"reference_metric::enable_rfm_precompute\",\"True\")\n", " par.set_parval_from_str(\"reference_metric::rfm_precompute_Ccode_outdir\",os.path.join(outdir,\"rfm_files/\"))\n", "\n", " # Evaluate BSSN + BSSN gauge RHSs with rfm_precompute enabled:\n", " import BSSN.BSSN_quantities as Bq\n", " par.set_parval_from_str(\"BSSN.BSSN_quantities::LeaveRicciSymbolic\",\"True\")\n", "\n", " rhs.BSSN_RHSs()\n", "\n", " if T4UU != None:\n", " import BSSN.BSSN_stress_energy_source_terms as Bsest\n", " Bsest.BSSN_source_terms_for_BSSN_RHSs(T4UU)\n", " rhs.trK_rhs += Bsest.sourceterm_trK_rhs\n", " for i in range(3):\n", " # Needed for Gamma-driving shift RHSs:\n", " rhs.Lambdabar_rhsU[i] += Bsest.sourceterm_Lambdabar_rhsU[i]\n", " # Needed for BSSN RHSs:\n", " rhs.lambda_rhsU[i] += Bsest.sourceterm_lambda_rhsU[i]\n", " for j in range(3):\n", " rhs.a_rhsDD[i][j] += Bsest.sourceterm_a_rhsDD[i][j]\n", "\n", " gaugerhs.BSSN_gauge_RHSs()\n", "\n", " # Add Kreiss-Oliger dissipation to the BSSN RHSs:\n", " thismodule = \"KO_Dissipation\"\n", " diss_strength = par.Cparameters(\"REAL\", thismodule, \"diss_strength\", default_KO_strength)\n", "\n", " alpha_dKOD = ixp.declarerank1(\"alpha_dKOD\")\n", " cf_dKOD = ixp.declarerank1(\"cf_dKOD\")\n", " trK_dKOD = ixp.declarerank1(\"trK_dKOD\")\n", " betU_dKOD = ixp.declarerank2(\"betU_dKOD\",\"nosym\")\n", " vetU_dKOD = ixp.declarerank2(\"vetU_dKOD\",\"nosym\")\n", " lambdaU_dKOD = ixp.declarerank2(\"lambdaU_dKOD\",\"nosym\")\n", " aDD_dKOD = ixp.declarerank3(\"aDD_dKOD\",\"sym01\")\n", " hDD_dKOD = ixp.declarerank3(\"hDD_dKOD\",\"sym01\")\n", " for k in range(3):\n", " gaugerhs.alpha_rhs += diss_strength*alpha_dKOD[k]*rfm.ReU[k] # ReU[k] = 1/scalefactor_orthog_funcform[k]\n", " rhs.cf_rhs += diss_strength* cf_dKOD[k]*rfm.ReU[k] # ReU[k] = 1/scalefactor_orthog_funcform[k]\n", " rhs.trK_rhs += diss_strength* trK_dKOD[k]*rfm.ReU[k] # ReU[k] = 1/scalefactor_orthog_funcform[k]\n", " for i in range(3):\n", " if \"2ndOrder\" in ShiftCondition:\n", " gaugerhs.bet_rhsU[i] += diss_strength* betU_dKOD[i][k]*rfm.ReU[k] # ReU[k] = 1/scalefactor_orthog_funcform[k]\n", " gaugerhs.vet_rhsU[i] += diss_strength* vetU_dKOD[i][k]*rfm.ReU[k] # ReU[k] = 1/scalefactor_orthog_funcform[k]\n", " rhs.lambda_rhsU[i] += diss_strength*lambdaU_dKOD[i][k]*rfm.ReU[k] # ReU[k] = 1/scalefactor_orthog_funcform[k]\n", " for j in range(3):\n", " rhs.a_rhsDD[i][j] += diss_strength*aDD_dKOD[i][j][k]*rfm.ReU[k] # ReU[k] = 1/scalefactor_orthog_funcform[k]\n", " rhs.h_rhsDD[i][j] += diss_strength*hDD_dKOD[i][j][k]*rfm.ReU[k] # ReU[k] = 1/scalefactor_orthog_funcform[k]\n", "\n", " # We use betaU as our upwinding control vector:\n", " Bq.BSSN_basic_tensors()\n", " betaU = Bq.betaU\n", "\n", " # Now that we are finished with all the rfm hatted\n", " # quantities in generic precomputed functional\n", " # form, let's restore them to their closed-\n", " # form expressions.\n", " par.set_parval_from_str(\"reference_metric::enable_rfm_precompute\",\"False\") # Reset to False to disable rfm_precompute.\n", " rfm.ref_metric__hatted_quantities()\n", " par.set_parval_from_str(\"BSSN.BSSN_quantities::LeaveRicciSymbolic\",\"False\")\n", " end = time.time()\n", " print(\"(BENCH) Finished BSSN RHS symbolic expressions in \"+str(end-start)+\" seconds.\")\n", " # END: GENERATE SYMBOLIC EXPRESSIONS\n", " ######################################\n", "\n", " BSSN_RHSs_SymbExpressions = [lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"aDD00\"), rhs=rhs.a_rhsDD[0][0]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"aDD01\"), rhs=rhs.a_rhsDD[0][1]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"aDD02\"), rhs=rhs.a_rhsDD[0][2]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"aDD11\"), rhs=rhs.a_rhsDD[1][1]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"aDD12\"), rhs=rhs.a_rhsDD[1][2]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"aDD22\"), rhs=rhs.a_rhsDD[2][2]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"alpha\"), rhs=gaugerhs.alpha_rhs),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"betU0\"), rhs=gaugerhs.bet_rhsU[0]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"betU1\"), rhs=gaugerhs.bet_rhsU[1]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"betU2\"), rhs=gaugerhs.bet_rhsU[2]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"cf\"), rhs=rhs.cf_rhs),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"hDD00\"), rhs=rhs.h_rhsDD[0][0]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"hDD01\") ,rhs=rhs.h_rhsDD[0][1]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"hDD02\"), rhs=rhs.h_rhsDD[0][2]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"hDD11\"), rhs=rhs.h_rhsDD[1][1]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"hDD12\"), rhs=rhs.h_rhsDD[1][2]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"hDD22\"), rhs=rhs.h_rhsDD[2][2]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"lambdaU0\"),rhs=rhs.lambda_rhsU[0]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"lambdaU1\"),rhs=rhs.lambda_rhsU[1]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"lambdaU2\"),rhs=rhs.lambda_rhsU[2]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"trK\"), rhs=rhs.trK_rhs),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"vetU0\"), rhs=gaugerhs.vet_rhsU[0]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"vetU1\"), rhs=gaugerhs.vet_rhsU[1]),\n", " lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"vetU2\"), rhs=gaugerhs.vet_rhsU[2]) ]\n", "\n", " return [betaU,BSSN_RHSs_SymbExpressions]\n", "\n", " def Ricci__generate_symbolic_expressions():\n", " ######################################\n", " # START: GENERATE SYMBOLIC EXPRESSIONS\n", " print(\"Generating symbolic expressions for Ricci tensor...\")\n", " start = time.time()\n", " # Enable rfm_precompute infrastructure, which results in\n", " # BSSN RHSs that are free of transcendental functions,\n", " # even in curvilinear coordinates, so long as\n", " # ConformalFactor is set to \"W\" (default).\n", " par.set_parval_from_str(\"reference_metric::enable_rfm_precompute\",\"True\")\n", " par.set_parval_from_str(\"reference_metric::rfm_precompute_Ccode_outdir\",os.path.join(outdir,\"rfm_files/\"))\n", "\n", " # Evaluate BSSN + BSSN gauge RHSs with rfm_precompute enabled:\n", " import BSSN.BSSN_quantities as Bq\n", "\n", " # Next compute Ricci tensor\n", " # par.set_parval_from_str(\"BSSN.BSSN_quantities::LeaveRicciSymbolic\",\"False\")\n", " RbarDD_already_registered = False\n", " for i in range(len(gri.glb_gridfcs_list)):\n", " if \"RbarDD00\" in gri.glb_gridfcs_list[i].name:\n", " RbarDD_already_registered = True\n", " if not RbarDD_already_registered:\n", " # We ignore the return value of ixp.register_gridfunctions_for_single_rank2() below\n", " # as it is unused.\n", " ixp.register_gridfunctions_for_single_rank2(\"AUXEVOL\",\"RbarDD\",\"sym01\")\n", " rhs.BSSN_RHSs()\n", " Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()\n", "\n", " # Now that we are finished with all the rfm hatted\n", " # quantities in generic precomputed functional\n", " # form, let's restore them to their closed-\n", " # form expressions.\n", " par.set_parval_from_str(\"reference_metric::enable_rfm_precompute\",\"False\") # Reset to False to disable rfm_precompute.\n", " rfm.ref_metric__hatted_quantities()\n", " end = time.time()\n", " print(\"(BENCH) Finished Ricci symbolic expressions in \"+str(end-start)+\" seconds.\")\n", " # END: GENERATE SYMBOLIC EXPRESSIONS\n", " ######################################\n", "\n", " Ricci_SymbExpressions = [lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"RbarDD00\"),rhs=Bq.RbarDD[0][0]),\n", " lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"RbarDD01\"),rhs=Bq.RbarDD[0][1]),\n", " lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"RbarDD02\"),rhs=Bq.RbarDD[0][2]),\n", " lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"RbarDD11\"),rhs=Bq.RbarDD[1][1]),\n", " lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"RbarDD12\"),rhs=Bq.RbarDD[1][2]),\n", " lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"RbarDD22\"),rhs=Bq.RbarDD[2][2])]\n", "\n", " return [Ricci_SymbExpressions]\n", "\n", " def BSSN_RHSs__generate_Ccode(all_RHSs_Ricci_exprs_list):\n", " betaU = all_RHSs_Ricci_exprs_list[0]\n", " BSSN_RHSs_SymbExpressions = all_RHSs_Ricci_exprs_list[1]\n", "\n", " print(\"Generating C code for BSSN RHSs (FD_order=\"+str(FD_order)+\",Tmunu=\"+str(enable_stress_energy_source_terms)+\") in \"+par.parval_from_str(\"reference_metric::CoordSystem\")+\" coordinates.\")\n", " start = time.time()\n", "\n", " # Store original finite-differencing order:\n", " FD_order_orig = par.parval_from_str(\"finite_difference::FD_CENTDERIVS_ORDER\")\n", " # Set new finite-differencing order:\n", " par.set_parval_from_str(\"finite_difference::FD_CENTDERIVS_ORDER\", FD_order)\n", "\n", " BSSN_RHSs_string = fin.FD_outputC(\"returnstring\",BSSN_RHSs_SymbExpressions,\n", " params=\"outCverbose=False,enable_SIMD=True,GoldenKernelsEnable=True\",\n", " upwindcontrolvec=betaU)\n", "\n", " filename = \"BSSN_RHSs_enable_Tmunu_\"+str(enable_stress_energy_source_terms)+\"_FD_order_\"+str(FD_order)+\".h\"\n", " with open(os.path.join(outdir,filename), \"w\") as file:\n", " file.write(lp.loop([\"i2\",\"i1\",\"i0\"],[\"cctk_nghostzones[2]\",\"cctk_nghostzones[1]\",\"cctk_nghostzones[0]\"],\n", " [\"cctk_lsh[2]-cctk_nghostzones[2]\",\"cctk_lsh[1]-cctk_nghostzones[1]\",\"cctk_lsh[0]-cctk_nghostzones[0]\"],\n", " [\"1\",\"1\",\"SIMD_width\"],\n", " [\"#pragma omp parallel for\",\n", " \"#include \\\"rfm_files/rfm_struct__SIMD_outer_read2.h\\\"\",\n", " r\"\"\" #include \"rfm_files/rfm_struct__SIMD_outer_read1.h\"\n", " #if (defined __INTEL_COMPILER && __INTEL_COMPILER_BUILD_DATE >= 20180804)\n", " #pragma ivdep // Forces Intel compiler (if Intel compiler used) to ignore certain SIMD vector dependencies\n", " #pragma vector always // Forces Intel compiler (if Intel compiler used) to vectorize\n", " #endif\"\"\"],\"\",\n", " \"#include \\\"rfm_files/rfm_struct__SIMD_inner_read0.h\\\"\\n\"+BSSN_RHSs_string))\n", "\n", " # Restore original finite-differencing order:\n", " par.set_parval_from_str(\"finite_difference::FD_CENTDERIVS_ORDER\", FD_order_orig)\n", "\n", " end = time.time()\n", " print(\"(BENCH) Finished BSSN_RHS C codegen (FD_order=\"+str(FD_order)+\",Tmunu=\"+str(enable_stress_energy_source_terms)+\") in \" + str(end - start) + \" seconds.\")\n", "\n", "\n", " def Ricci__generate_Ccode(Ricci_exprs_list):\n", " Ricci_SymbExpressions = Ricci_exprs_list[0]\n", "\n", " print(\"Generating C code for Ricci tensor (FD_order=\"+str(FD_order)+\") in \"+par.parval_from_str(\"reference_metric::CoordSystem\")+\" coordinates.\")\n", " start = time.time()\n", "\n", " # Store original finite-differencing order:\n", " FD_order_orig = par.parval_from_str(\"finite_difference::FD_CENTDERIVS_ORDER\")\n", " # Set new finite-differencing order:\n", " par.set_parval_from_str(\"finite_difference::FD_CENTDERIVS_ORDER\", FD_order)\n", "\n", " Ricci_string = fin.FD_outputC(\"returnstring\", Ricci_SymbExpressions,\n", " params=\"outCverbose=False,enable_SIMD=True,GoldenKernelsEnable=True\")\n", "\n", " filename = \"BSSN_Ricci_FD_order_\"+str(FD_order)+\".h\"\n", " with open(os.path.join(outdir, filename), \"w\") as file:\n", " file.write(lp.loop([\"i2\",\"i1\",\"i0\"],[\"cctk_nghostzones[2]\",\"cctk_nghostzones[1]\",\"cctk_nghostzones[0]\"],\n", " [\"cctk_lsh[2]-cctk_nghostzones[2]\",\"cctk_lsh[1]-cctk_nghostzones[1]\",\"cctk_lsh[0]-cctk_nghostzones[0]\"],\n", " [\"1\",\"1\",\"SIMD_width\"],\n", " [\"#pragma omp parallel for\",\n", " \"#include \\\"rfm_files/rfm_struct__SIMD_outer_read2.h\\\"\",\n", " r\"\"\" #include \"rfm_files/rfm_struct__SIMD_outer_read1.h\"\n", " #if (defined __INTEL_COMPILER && __INTEL_COMPILER_BUILD_DATE >= 20180804)\n", " #pragma ivdep // Forces Intel compiler (if Intel compiler used) to ignore certain SIMD vector dependencies\n", " #pragma vector always // Forces Intel compiler (if Intel compiler used) to vectorize\n", " #endif\"\"\"],\"\",\n", " \"#include \\\"rfm_files/rfm_struct__SIMD_inner_read0.h\\\"\\n\"+Ricci_string))\n", "\n", " # Restore original finite-differencing order:\n", " par.set_parval_from_str(\"finite_difference::FD_CENTDERIVS_ORDER\", FD_order_orig)\n", "\n", " end = time.time()\n", " print(\"(BENCH) Finished Ricci C codegen (FD_order=\"+str(FD_order)+\") in \" + str(end - start) + \" seconds.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='hammomconstraints'></a>\n", "\n", "## Step 2.b: Hamiltonian & momentum constraints \\[Back to [top](#toc)\\]\n", "$$\\label{hammomconstraints}$$\n", "\n", "Next output the C code for evaluating the Hamiltonian & momentum constraints [(**Tutorial**)](Tutorial-BSSN_constraints.ipynb). In the absence of numerical error, this constraint should evaluate to zero. However it does not due to numerical (typically truncation and roundoff) error. Therefore it is useful to measure the Hamiltonian & momentum constraint violation to gauge the accuracy of our simulation, and, ultimately determine whether errors are dominated by numerical finite differencing (truncation) error as expected." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_C_kernels_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_C_kernels_codegen.py\n", "\n", " def BSSN_constraints__generate_symbolic_expressions_and_C_code():\n", " ######################################\n", " # START: GENERATE SYMBOLIC EXPRESSIONS\n", " # Define the Hamiltonian constraint and output the optimized C code.\n", " import BSSN.BSSN_constraints as bssncon\n", "\n", " bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False)\n", " if T4UU != None:\n", " import BSSN.BSSN_stress_energy_source_terms as Bsest\n", " Bsest.BSSN_source_terms_for_BSSN_RHSs(T4UU)\n", " Bsest.BSSN_source_terms_for_BSSN_constraints(T4UU)\n", " bssncon.H += Bsest.sourceterm_H\n", " for i in range(3):\n", " bssncon.MU[i] += Bsest.sourceterm_MU[i]\n", " # END: GENERATE SYMBOLIC EXPRESSIONS\n", " ######################################\n", "\n", " # Store original finite-differencing order:\n", " FD_order_orig = par.parval_from_str(\"finite_difference::FD_CENTDERIVS_ORDER\")\n", " # Set new finite-differencing order:\n", " par.set_parval_from_str(\"finite_difference::FD_CENTDERIVS_ORDER\", FD_order)\n", "\n", " start = time.time()\n", " print(\"Generating optimized C code for Ham. & mom. constraints. May take a while, depending on CoordSystem.\")\n", " Ham_mom_string = fin.FD_outputC(\"returnstring\",\n", " [lhrh(lhs=gri.gfaccess(\"aux_gfs\", \"H\"), rhs=bssncon.H),\n", " lhrh(lhs=gri.gfaccess(\"aux_gfs\", \"MU0\"), rhs=bssncon.MU[0]),\n", " lhrh(lhs=gri.gfaccess(\"aux_gfs\", \"MU1\"), rhs=bssncon.MU[1]),\n", " lhrh(lhs=gri.gfaccess(\"aux_gfs\", \"MU2\"), rhs=bssncon.MU[2])],\n", " params=\"outCverbose=False\")\n", "\n", " with open(os.path.join(outdir,\"BSSN_constraints_enable_Tmunu_\"+str(enable_stress_energy_source_terms)+\"_FD_order_\"+str(FD_order)+\".h\"), \"w\") as file:\n", " file.write(lp.loop([\"i2\",\"i1\",\"i0\"],[\"cctk_nghostzones[2]\",\"cctk_nghostzones[1]\",\"cctk_nghostzones[0]\"],\n", " [\"cctk_lsh[2]-cctk_nghostzones[2]\",\"cctk_lsh[1]-cctk_nghostzones[1]\",\"cctk_lsh[0]-cctk_nghostzones[0]\"],\n", " [\"1\",\"1\",\"1\"],[\"#pragma omp parallel for\",\"\",\"\"], \"\", Ham_mom_string))\n", "\n", " # Restore original finite-differencing order:\n", " par.set_parval_from_str(\"finite_difference::FD_CENTDERIVS_ORDER\", FD_order_orig)\n", "\n", " end = time.time()\n", " print(\"(BENCH) Finished Hamiltonian & momentum constraint C codegen (FD_order=\"+str(FD_order)+\",Tmunu=\"+str(enable_stress_energy_source_terms)+\") in \" + str(end - start) + \" seconds.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='gamconstraint'></a>\n", "\n", "## Step 2.c: Enforce conformal 3-metric $\\det{\\bar{\\gamma}_{ij}}=\\det{\\hat{\\gamma}_{ij}}$ constraint (in Cartesian coordinates, $\\det{\\hat{\\gamma}_{ij}}=1$) \\[Back to [top](#toc)\\]\n", "$$\\label{gamconstraint}$$\n", "\n", "Then enforce conformal 3-metric $\\det{\\bar{\\gamma}_{ij}}=\\det{\\hat{\\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN_enforcing_determinant_gammabar_equals_gammahat_constraint.ipynb)\n", "\n", "Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\\det{\\bar{\\gamma}_{ij}}=\\det{\\hat{\\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\\det{\\bar{\\gamma}_{ij}}=\\det{\\hat{\\gamma}_{ij}}$ constraint:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_C_kernels_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_C_kernels_codegen.py\n", "\n", " def enforce_detgammabar_eq_detgammahat__generate_symbolic_expressions_and_C_code():\n", " ######################################\n", " # START: GENERATE SYMBOLIC EXPRESSIONS\n", "\n", " # Enable rfm_precompute infrastructure, which results in\n", " # BSSN RHSs that are free of transcendental functions,\n", " # even in curvilinear coordinates, so long as\n", " # ConformalFactor is set to \"W\" (default).\n", " par.set_parval_from_str(\"reference_metric::enable_rfm_precompute\",\"True\")\n", " par.set_parval_from_str(\"reference_metric::rfm_precompute_Ccode_outdir\",os.path.join(outdir,\"rfm_files/\"))\n", "\n", " import BSSN.Enforce_Detgammahat_Constraint as EGC\n", " enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammahat_Constraint_symb_expressions()\n", "\n", " # Now that we are finished with all the rfm hatted\n", " # quantities in generic precomputed functional\n", " # form, let's restore them to their closed-\n", " # form expressions.\n", " par.set_parval_from_str(\"reference_metric::enable_rfm_precompute\",\"False\") # Reset to False to disable rfm_precompute.\n", " rfm.ref_metric__hatted_quantities()\n", " # END: GENERATE SYMBOLIC EXPRESSIONS\n", " ######################################\n", "\n", " start = time.time()\n", " print(\"Generating optimized C code (FD_order=\"+str(FD_order)+\") for gamma constraint. May take a while, depending on CoordSystem.\")\n", " enforce_gammadet_string = fin.FD_outputC(\"returnstring\", enforce_detg_constraint_symb_expressions,\n", " params=\"outCverbose=False,preindent=0,includebraces=False\")\n", "\n", " with open(os.path.join(outdir,\"enforcedetgammabar_constraint.h\"), \"w\") as file:\n", " file.write(lp.loop([\"i2\",\"i1\",\"i0\"],[\"0\", \"0\", \"0\"],\n", " [\"cctk_lsh[2]\",\"cctk_lsh[1]\",\"cctk_lsh[0]\"],\n", " [\"1\",\"1\",\"1\"],\n", " [\"#pragma omp parallel for\",\n", " \"#include \\\"rfm_files/rfm_struct__read2.h\\\"\",\n", " \"#include \\\"rfm_files/rfm_struct__read1.h\\\"\"],\"\",\n", " \"#include \\\"rfm_files/rfm_struct__read0.h\\\"\\n\"+enforce_gammadet_string))\n", " end = time.time()\n", " print(\"(BENCH) Finished gamma constraint C codegen (FD_order=\"+str(FD_order)+\") in \" + str(end - start) + \" seconds.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='driver_call_codegen_funcs'></a>\n", "\n", "## Step 2.d: Given `WhichPart` parameter choice, generate algorithm for calling corresponding function within `BaikalETK_C_kernels_codegen_onepart()` to generate C code kernel \\[Back to [top](#toc)\\]\n", "$$\\label{driver_call_codegen_funcs}$$" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_C_kernels_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_C_kernels_codegen.py\n", "\n", " if WhichPart==\"BSSN_RHSs\":\n", " BSSN_RHSs__generate_Ccode(BSSN_RHSs__generate_symbolic_expressions())\n", " elif WhichPart==\"Ricci\":\n", " Ricci__generate_Ccode(Ricci__generate_symbolic_expressions())\n", " elif WhichPart==\"BSSN_constraints\":\n", " BSSN_constraints__generate_symbolic_expressions_and_C_code()\n", " elif WhichPart==\"detgammabar_constraint\":\n", " enforce_detgammabar_eq_detgammahat__generate_symbolic_expressions_and_C_code()\n", " else:\n", " print(\"Error: WhichPart = \"+WhichPart+\" is not recognized.\")\n", " sys.exit(1)\n", "\n", " # Store all NRPy+ environment variables to an output string so NRPy+ environment from within this subprocess can be easily restored\n", " import pickle # Standard Python module for converting arbitrary data structures to a uniform format.\n", " # https://www.pythonforthelab.com/blog/storing-binary-data-and-serializing/\n", " outstr = []\n", " outstr.append(pickle.dumps(len(gri.glb_gridfcs_list)))\n", " for lst in gri.glb_gridfcs_list:\n", " outstr.append(pickle.dumps(lst.gftype))\n", " outstr.append(pickle.dumps(lst.name))\n", " outstr.append(pickle.dumps(lst.rank))\n", " outstr.append(pickle.dumps(lst.DIM))\n", "\n", " outstr.append(pickle.dumps(len(par.glb_params_list)))\n", " for lst in par.glb_params_list:\n", " outstr.append(pickle.dumps(lst.type))\n", " outstr.append(pickle.dumps(lst.module))\n", " outstr.append(pickle.dumps(lst.parname))\n", " outstr.append(pickle.dumps(lst.defaultval))\n", "\n", " outstr.append(pickle.dumps(len(par.glb_Cparams_list)))\n", " for lst in par.glb_Cparams_list:\n", " outstr.append(pickle.dumps(lst.type))\n", " outstr.append(pickle.dumps(lst.module))\n", " outstr.append(pickle.dumps(lst.parname))\n", " outstr.append(pickle.dumps(lst.defaultval))\n", "\n", " outstr.append(pickle.dumps(len(outC_function_dict)))\n", " for Cfuncname, Cfunc in outC_function_dict.items():\n", " outstr.append(pickle.dumps(Cfuncname))\n", " outstr.append(pickle.dumps(Cfunc))\n", "\n", " return outstr" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='kernel_codegen'></a>\n", "\n", "## Step 2.e: Generate C code kernels for `Baikal` and `BaikalETK` \\[Back to [top](#toc)\\]\n", "$$\\label{kernel_codegen}$$\n", "\n", "Here we generate the C code kernels (i.e., the C-code representation of the equations needed) for `Baikal` and `BaikalETK`.\n", "\n", "<a id='feature_choice'></a>\n", "\n", "### Step 2.e.i: Set compile-time and runtime parameters for `Baikal` and `BaikalVacuum` \\[Back to [top](#toc)\\]\n", "$$\\label{feature_choice}$$\n", "\n", "NRPy+ is a code generation package that is designed to offer maximum flexibility *at the time of C code generation*. As a result, although NRPy+ can in principle output an infinite variety of C code kernels for solving Einstein's equations, generally free parameters in each kernel steerable at *runtime* are restricted to simple scalars. This leads to more optimized kernels (i.e., significantly improved performance as compared to `McLachlan`), but at the expense of flexibility in choosing e.g., different gauges at runtime (currently only the most commonly used gauge is supported), as well as the need to generate multiple kernels (one per finite-differencing order). Reducing the number of kernels and adding more flexibility at runtime will be a focus of future work.\n", "\n", "For now, `Baikal` and `BaikalVacuum` support the following runtime options:\n", "\n", "* `Baikal`: Enables $T^{\\mu\\nu}$ source terms; useful for general relativistic hydrodynamics (GRHD) and general relativistic magnetohydrodynamics (GRMHD) simulations.\n", " * Finite differencing of order 2 and 4, via runtime parameter `FD_order`\n", " * BSSN RHSs, Ricci tensor, Hamiltonian constraint, and $\\hat{\\gamma}=\\bar{\\gamma}$ constraint\n", " * Kreiss-Oliger dissipation strength via runtime parameter `diss_strength`, which is the exact analogue of the `eps_diss` parameter of the `Dissipation` thorn\n", "* `BaikalVacuum`: Disables $T^{\\mu\\nu}$ source terms; useful for generating dynamical black hole and binary black hole spacetimes, in which matter is not present.\n", " * Finite differencing of orders 4, 6, and 8 via runtime parameter `FD_order`\n", " * BSSN RHSs, Ricci tensor, Hamiltonian constraint, and $\\hat{\\gamma}=\\bar{\\gamma}$ constraint\n", " * Kreiss-Oliger dissipation strength via runtime parameter `diss_strength`, which is the exact analogue of the `eps_diss` parameter of the `Dissipation` thorn\n", "\n", "Both thorns adopt the standard moving-puncture gauge conditions, which include the 1+log slicing condition for the lapse and the non-covariant $\\Gamma$-driving shift condition, as documented [in this NRPy+ Tutorial notebook](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb), and implemented in [the corresponding NRPy+ Python module](BSSN/BSSN_gauge_RHSs.py). In case you'd like to make another gauge choice, you need only choose the desired gauge from [the NRPy+ Python module](BSSN/BSSN_gauge_RHSs.py) and add it as parameters `ShiftCondition` and `LapseCondition` to the main BaikalETK code generation function `BaikalETK_C_kernels_codegen_onepart()`. You will find that adding user-defined gauge choices is a straightforward process, as the module is easily extended.\n", "\n", "Next we set up the default parameter lists for `BaikalETK_C_kernels_codegen_onepart()` for `Baikal` and `BaikalVacuum` thorns. We set these parameter lists as strings to make parallelizing the code generation far easier (easier to pass a list of strings than a list of function arguments to Python's `multiprocessing.Pool()`)." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "# Step 2.e.i: Set compile-time and runtime parameters for Baikal and BaikalVacuum\n", "# Runtime parameters for\n", "# Baikal: FD_orders = [2,4] ; Gamma-driving eta parameter; Kreiss-Oliger dissipation strength\n", "# BaikalVacuum: FD_orders = [4,6,8]; Gamma-driving eta parameter; Kreiss-Oliger dissipation strength\n", "\n", "paramslist = []\n", "FD_orders = [2,4,6,8]\n", "WhichParamSet = 0\n", "for WhichPart in [\"BSSN_RHSs\",\"Ricci\",\"BSSN_constraints\",\"detgammabar_constraint\"]:\n", " for FD_order in FD_orders:\n", " enable_stress_energy_list = [True,False]\n", " if FD_order == 2:\n", " enable_stress_energy_list = [True]\n", " elif FD_order >= 6:\n", " enable_stress_energy_list = [False]\n", " for enable_stress_energy in enable_stress_energy_list:\n", " ThornName = \"Baikal\"\n", " if enable_stress_energy == False:\n", " ThornName = \"BaikalVacuum\"\n", " paramstr = \"WhichPart=\"+WhichPart+\",\"\n", " paramstr+= \"ThornName=\"+ThornName+\",\"\n", " paramstr+= \"FD_order=\"+str(FD_order)+\",\"\n", " paramstr+= \"LapseCondition=\"+LapseCondition+\",\"\n", " paramstr+= \"ShiftCondition=\"+ShiftCondition+\",\"\n", " paramstr+= \"enable_stress_energy_source_terms=\"+str(enable_stress_energy)\n", " if (WhichPart != \"detgammabar_constraint\") \\\n", " or (WhichPart == \"detgammabar_constraint\" and FD_order==4):\n", " # Do not output detgammabar_constraint code more than once for each thorn, as\n", " # it does not depend on FD_order\n", " paramslist.append(paramstr)\n", " WhichParamSet = WhichParamSet + 1\n", "\n", "paramslist.sort() # Sort the list alphabetically." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='parallel_codegen'></a>\n", "\n", "### Step 2.e.ii: Generate all C-code kernels for `Baikal` and `BaikalVacuum`, in parallel if possible \\[Back to [top](#toc)\\]\n", "$$\\label{parallel_codegen}$$" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "***************************************\n", "Starting parallel C kernel codegen...\n", "***************************************\n", "Generating symbolic expressions for Ricci tensor...Generating symbolic expressions for BSSN RHSs...Generating symbolic expressions for Ricci tensor...\n", "\n", "Generating symbolic expressions for Ricci tensor...Generating symbolic expressions for BSSN RHSs...\n", "Generating symbolic expressions for BSSN RHSs...\n", "Generating symbolic expressions for BSSN RHSs...Generating symbolic expressions for Ricci tensor...\n", "\n", "\n", "\n", "Generating symbolic expressions for BSSN RHSs...Generating symbolic expressions for Ricci tensor...\n", "\n", "Generating optimized C code (FD_order=4) for gamma constraint. May take a while, depending on CoordSystem.Generating optimized C code (FD_order=4) for gamma constraint. May take a while, depending on CoordSystem.\n", "\n", "(BENCH) Finished gamma constraint C codegen (FD_order=4) in 0.10375690460205078 seconds.\n", "(BENCH) Finished gamma constraint C codegen (FD_order=4) in 0.12864923477172852 seconds.\n", "(BENCH) Finished BSSN RHS symbolic expressions in 0.9982852935791016 seconds.\n", "Generating C code for BSSN RHSs (FD_order=8,Tmunu=False) in Cartesian coordinates.\n", "(BENCH) Finished BSSN RHS symbolic expressions in 1.0139648914337158 seconds.\n", "Generating C code for BSSN RHSs (FD_order=6,Tmunu=False) in Cartesian coordinates.\n", "(BENCH) Finished BSSN RHS symbolic expressions in 1.0944099426269531 seconds.\n", "Generating C code for BSSN RHSs (FD_order=4,Tmunu=False) in Cartesian coordinates.\n", "Generating optimized C code for Ham. & mom. constraints. May take a while, depending on CoordSystem.\n", "Generating optimized C code for Ham. & mom. constraints. May take a while, depending on CoordSystem.Generating optimized C code for Ham. & mom. constraints. May take a while, depending on CoordSystem.\n", "\n", "(BENCH) Finished Ricci symbolic expressions in 1.9740955829620361 seconds.\n", "Generating C code for Ricci tensor (FD_order=4) in Cartesian coordinates.\n", "(BENCH) Finished Ricci symbolic expressions in 1.9901673793792725 seconds.\n", "Generating C code for Ricci tensor (FD_order=6) in Cartesian coordinates.(BENCH) Finished Ricci symbolic expressions in 1.951263427734375 seconds.\n", "\n", "Generating C code for Ricci tensor (FD_order=4) in Cartesian coordinates.\n", "(BENCH) Finished Ricci symbolic expressions in 2.0254504680633545 seconds.\n", "Generating C code for Ricci tensor (FD_order=8) in Cartesian coordinates.\n", "(BENCH) Finished Ricci symbolic expressions in 2.0379810333251953 seconds.\n", "Generating C code for Ricci tensor (FD_order=2) in Cartesian coordinates.\n", "(BENCH) Finished BSSN RHS symbolic expressions in 2.175140857696533 seconds.\n", "Generating C code for BSSN RHSs (FD_order=4,Tmunu=True) in Cartesian coordinates.\n", "(BENCH) Finished BSSN RHS symbolic expressions in 2.1774909496307373 seconds.\n", "Generating C code for BSSN RHSs (FD_order=2,Tmunu=True) in Cartesian coordinates.\n", "Generating optimized C code for Ham. & mom. constraints. May take a while, depending on CoordSystem.\n", "Generating optimized C code for Ham. & mom. constraints. May take a while, depending on CoordSystem.\n", "(BENCH) Finished BSSN_RHS C codegen (FD_order=4,Tmunu=False) in 14.319687128067017 seconds.\n", "(BENCH) Finished BSSN_RHS C codegen (FD_order=6,Tmunu=False) in 15.422996997833252 seconds.\n", "(BENCH) Finished BSSN_RHS C codegen (FD_order=2,Tmunu=True) in 15.380635499954224 seconds.\n", "(BENCH) Finished BSSN_RHS C codegen (FD_order=4,Tmunu=True) in 16.61129069328308 seconds.\n", "(BENCH) Finished BSSN_RHS C codegen (FD_order=8,Tmunu=False) in 18.059557676315308 seconds.\n", "(BENCH) Finished Ricci C codegen (FD_order=4) in 21.51375389099121 seconds.\n", "(BENCH) Finished Ricci C codegen (FD_order=4) in 22.40598464012146 seconds.\n", "(BENCH) Finished Ricci C codegen (FD_order=6) in 22.7694993019104 seconds.\n", "(BENCH) Finished Ricci C codegen (FD_order=2) in 23.129387378692627 seconds.\n", "(BENCH) Finished Hamiltonian & momentum constraint C codegen (FD_order=4,Tmunu=False) in 24.090815782546997 seconds.\n", "(BENCH) Finished Hamiltonian & momentum constraint C codegen (FD_order=6,Tmunu=False) in 24.394925117492676 seconds.\n", "(BENCH) Finished Ricci C codegen (FD_order=8) in 23.86538815498352 seconds.\n", "(BENCH) Finished Hamiltonian & momentum constraint C codegen (FD_order=4,Tmunu=True) in 22.553589582443237 seconds.\n", "(BENCH) Finished Hamiltonian & momentum constraint C codegen (FD_order=2,Tmunu=True) in 22.88631319999695 seconds.\n", "(BENCH) Finished Hamiltonian & momentum constraint C codegen (FD_order=8,Tmunu=False) in 26.489030838012695 seconds.\n", "(BENCH) Finished C kernel codegen for Baikal and BaikalVacuum in 27.915515661239624 seconds.\n" ] } ], "source": [ "nrpy_dir_path = os.path.join(\".\")\n", "if nrpy_dir_path not in sys.path:\n", " sys.path.append(nrpy_dir_path)\n", "\n", "# Create all output directories if they do not yet exist\n", "import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface\n", "import shutil # Standard Python module for multiplatform OS-level functions\n", "for ThornName in [\"Baikal\",\"BaikalVacuum\"]:\n", " outrootdir = ThornName\n", " cmd.mkdir(os.path.join(outrootdir))\n", " outdir = os.path.join(outrootdir,\"src\") # Main C code output directory\n", "\n", " # Copy SIMD/SIMD_intrinsics.h to $outdir/SIMD/SIMD_intrinsics.h, replacing\n", " # the line \"#define REAL_SIMD_ARRAY REAL\" with \"#define REAL_SIMD_ARRAY CCTK_REAL\"\n", " # (since REAL is undefined in the ETK, but CCTK_REAL takes its place)\n", " cmd.mkdir(os.path.join(outdir,\"SIMD\"))\n", " import fileinput\n", " f = fileinput.input(os.path.join(nrpy_dir_path,\"SIMD\",\"SIMD_intrinsics.h\"))\n", " with open(os.path.join(outdir,\"SIMD\",\"SIMD_intrinsics.h\"),\"w\") as outfile:\n", " for line in f:\n", " outfile.write(line.replace(\"#define REAL_SIMD_ARRAY REAL\", \"#define REAL_SIMD_ARRAY CCTK_REAL\"))\n", "\n", " # Create directory for rfm_files output\n", " cmd.mkdir(os.path.join(outdir,\"rfm_files\"))\n", "\n", "# Start parallel C code generation (codegen)\n", "# NRPyEnvVars stores the NRPy+ environment from all the subprocesses in the following\n", "# parallel codegen\n", "NRPyEnvVars = []\n", "\n", "import time # Standard Python module for benchmarking\n", "import logging\n", "start = time.time()\n", "if __name__ == \"__main__\":\n", " try:\n", " if os.name == 'nt':\n", " # Windows & Jupyter multiprocessing do not mix, so we run in serial on Windows.\n", " # Here's why: https://stackoverflow.com/questions/45719956/python-multiprocessing-in-jupyter-on-windows-attributeerror-cant-get-attribut\n", " raise Exception(\"Parallel codegen currently not available in Windows\")\n", " # Step 3.d.ii: Import the multiprocessing module.\n", " import multiprocessing\n", " print(\"***************************************\")\n", " print(\"Starting parallel C kernel codegen...\")\n", " print(\"***************************************\")\n", "\n", " # Step 3.d.iii: Define master function for parallelization.\n", " # Note that lambdifying this doesn't work in Python 3\n", " def master_func(i):\n", " import BaikalETK.BaikalETK_C_kernels_codegen as BCk # lgtm [py/repeated-import]\n", " return BCk.BaikalETK_C_kernels_codegen_onepart(params=paramslist[i])\n", "\n", " # Step 3.d.iv: Evaluate list of functions in parallel if possible;\n", " # otherwise fallback to serial evaluation:\n", " pool = multiprocessing.Pool() #processes=len(paramslist))\n", " NRPyEnvVars.append(pool.map(master_func,range(len(paramslist))))\n", " pool.terminate()\n", " pool.join()\n", " except:\n", " logging.exception(\"Ignore this warning/backtrace if on a system in which serial codegen is necessary:\")\n", " print(\"***************************************\")\n", " print(\"Starting serial C kernel codegen...\")\n", " print(\"(If you were running in parallel before,\")\n", " print(\" this means parallel codegen failed)\")\n", " print(\"***************************************\")\n", " import BaikalETK.BaikalETK_C_kernels_codegen as BCk # lgtm [py/repeated-import]\n", " # No need to pickle if doing serial codegen.\n", " for param in paramslist:\n", " BCk.BaikalETK_C_kernels_codegen_onepart(params=param)\n", " NRPyEnvVars = [] # Reset NRPyEnvVars in case multiprocessing wrote to it and failed.\n", "\n", "print(\"(BENCH) Finished C kernel codegen for Baikal and BaikalVacuum in \"+str(time.time()-start)+\" seconds.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='cclfiles'></a>\n", "\n", "# Step 3: ETK `ccl` file generation \\[Back to [top](#toc)\\]\n", "$$\\label{cclfiles}$$\n", "\n", "The Einstein Toolkit (ETK) ccl files contain runtime parameters (`param.ccl`), registered gridfunctions (`interface.ccl`), and function scheduling (`schedule.ccl`). As parameters and gridfunctions are registered with NRPy+ when the C-code kernels are generated, and this generation occurs on separate processes in parallel, we store the entire NRPy+ environment for *each* process. This results in a tremendous amount of duplication, which is sorted out next. Once all duplicated environment variables (e.g., registered gridfunctions) are removed, we replace the current NRPy+ environment with the new one, by setting `gri.glb_gridfcs_list[],par.glb_params_list[],par.glb_Cparams_list[]`." ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Store all NRPy+ environment variables to file so NRPy+ environment from within this subprocess can be easily restored\n", "import pickle # Standard Python module for converting arbitrary data structures to a uniform format.\n", "from outputC import outC_function_dict # NRPy: core C code output module\n", "import grid as gri # NRPy+: Functions having to do with numerical grids\n", "import finite_difference as fin # NRPy+: Finite difference C code generation module\n", "import NRPy_param_funcs as par # NRPy+: Parameter interface\n", "\n", "# https://www.pythonforthelab.com/blog/storing-binary-data-and-serializing/\n", "if len(NRPyEnvVars) > 0:\n", " grfcs_list = []\n", " param_list = []\n", " Cparm_list = []\n", "\n", " outCfunc_dict = {}\n", "\n", " for WhichParamSet in NRPyEnvVars[0]:\n", " # gridfunctions\n", " i=0\n", " num_elements = pickle.loads(WhichParamSet[i]); i+=1\n", " for lst in range(num_elements):\n", " grfcs_list.append(gri.glb_gridfc(gftype=pickle.loads(WhichParamSet[i+0]),\n", " name =pickle.loads(WhichParamSet[i+1]),\n", " rank =pickle.loads(WhichParamSet[i+2]),\n", " DIM =pickle.loads(WhichParamSet[i+3]))) ; i+=4\n", " # parameters\n", " num_elements = pickle.loads(WhichParamSet[i]); i+=1\n", " for lst in range(num_elements):\n", " param_list.append(par.glb_param(type =pickle.loads(WhichParamSet[i+0]),\n", " module =pickle.loads(WhichParamSet[i+1]),\n", " parname =pickle.loads(WhichParamSet[i+2]),\n", " defaultval=pickle.loads(WhichParamSet[i+3]))) ; i+=4\n", " # Cparameters\n", " num_elements = pickle.loads(WhichParamSet[i]); i+=1\n", " for lst in range(num_elements):\n", " Cparm_list.append(par.glb_Cparam(type =pickle.loads(WhichParamSet[i+0]),\n", " module =pickle.loads(WhichParamSet[i+1]),\n", " parname =pickle.loads(WhichParamSet[i+2]),\n", " defaultval=pickle.loads(WhichParamSet[i+3]))) ; i+=4\n", " # outC_func_dict\n", " num_elements = pickle.loads(WhichParamSet[i]); i+=1\n", " for lst in range(num_elements):\n", " funcname = pickle.loads(WhichParamSet[i+0])\n", " funcbody = pickle.loads(WhichParamSet[i+1]) ; i+=2\n", " outCfunc_dict[funcname] = funcbody\n", "\n", " grfcs_list_uniq = []\n", " for gf_ntuple_stored in grfcs_list:\n", " found_gf = False\n", " for gf_ntuple_new in grfcs_list_uniq:\n", " if gf_ntuple_new == gf_ntuple_stored:\n", " found_gf = True\n", " if found_gf == False:\n", " grfcs_list_uniq.append(gf_ntuple_stored)\n", "\n", " param_list_uniq = []\n", " for pr_ntuple_stored in param_list:\n", " found_pr = False\n", " for pr_ntuple_new in param_list_uniq:\n", " if pr_ntuple_new == pr_ntuple_stored:\n", " found_pr = True\n", " if found_pr == False:\n", " param_list_uniq.append(pr_ntuple_stored)\n", "\n", " # Set glb_paramsvals_list:\n", " # Step 1: Reset all paramsvals to their defaults\n", " par.glb_paramsvals_list = []\n", " for parm in param_list_uniq:\n", " par.glb_paramsvals_list.append(parm.defaultval)\n", "\n", " Cparm_list_uniq = []\n", " for Cp_ntuple_stored in Cparm_list:\n", " found_Cp = False\n", " for Cp_ntuple_new in Cparm_list_uniq:\n", " if Cp_ntuple_new == Cp_ntuple_stored:\n", " found_Cp = True\n", " if found_Cp == False:\n", " Cparm_list_uniq.append(Cp_ntuple_stored)\n", "\n", " # Dictionary outCfunc_dict (by the nature of Python dictionaries) will not have duplicates!\n", "\n", " gri.glb_gridfcs_list = []\n", " par.glb_params_list = []\n", " par.glb_Cparams_list = []\n", "\n", " gri.glb_gridfcs_list = grfcs_list_uniq\n", " par.glb_params_list = param_list_uniq\n", " par.glb_Cparams_list = Cparm_list_uniq\n", " for key, item in outCfunc_dict.items():\n", " outC_function_dict[key] = item\n", "\n", "# Set lapse_floor to default to 1e-15\n", "lap_floor = par.Cparameters(\"REAL\", \"BaikalETK\", \"lapse_floor\", 1e-15)\n", "\n", "# Step 2: Override defaults with values used here.\n", "# Note that almost no NRPy+ parameters\n", "# are used after this point (DIM is definitely\n", "# one exception), so most of these lines have no\n", "# effect.\n", "# import reference_metric as rfm\n", "# import grid as gri\n", "# import BSSN.BSSN_gauge_RHSs\n", "par.set_parval_from_str(\"grid::DIM\", 3)\n", "import reference_metric as rfm\n", "par.set_parval_from_str(\"reference_metric::CoordSystem\", \"Cartesian\")\n", "rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.\n", "par.set_parval_from_str(\"grid::GridFuncMemAccess\", \"ETK\")\n", "par.set_parval_from_str(\"BSSN.BSSN_gauge_RHSs::ShiftEvolutionOption\", ShiftCondition)\n", "par.set_parval_from_str(\"BSSN.BSSN_gauge_RHSs::LapseEvolutionOption\", LapseCondition)\n", "\n", "# Finally, output all functions needed for computing finite-difference stencils\n", "# to thornname/src/finite_difference_functions.h\n", "for thornname in [\"Baikal\", \"BaikalVacuum\"]:\n", " fin.output_finite_difference_functions_h(os.path.join(thornname,\"src\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='paramccl'></a>\n", "\n", "## Step 3.a: `param.ccl`: specify free parameters within `BaikalETK` \\[Back to [top](#toc)\\]\n", "$$\\label{paramccl}$$\n", "\n", "All parameters necessary for the computation of the BSSN right-hand side (RHS) expressions are registered within NRPy+; we use this information to automatically generate `param.ccl`. NRPy+ also specifies default values for each parameter. \n", "\n", "More information on `param.ccl` syntax can be found in the [official Einstein Toolkit documentation](http://einsteintoolkit.org/usersguide/UsersGuidech12.html#x17-183000D2.3)." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Overwriting BaikalETK_validate/BaikalETK_ETK_ccl_files_codegen.py\n" ] } ], "source": [ "%%writefile $BaikalETKdir/BaikalETK_ETK_ccl_files_codegen.py\n", "\n", "# Step 1: Import needed core NRPy+ modules\n", "import NRPy_param_funcs as par # NRPy+: Parameter interface\n", "import grid as gri # NRPy+: Functions having to do with numerical grids\n", "import os, sys # Standard Python modules for multiplatform OS-level functions\n", "\n", "def keep_param__return_type(paramtuple):\n", " keep_param = True # We'll not set some parameters in param.ccl;\n", " # e.g., those that should be #define'd like M_PI.\n", " typestring = \"\"\n", " # Separate thorns within the ETK take care of grid/coordinate parameters;\n", " # thus we ignore NRPy+ grid/coordinate parameters:\n", " if paramtuple.module == \"grid\" or paramtuple.module == \"reference_metric\":\n", " keep_param = False\n", "\n", " partype = paramtuple.type\n", " if partype == \"bool\":\n", " typestring += \"BOOLEAN \"\n", " elif partype == \"REAL\":\n", " if paramtuple.defaultval != 1e300: # 1e300 is a magic value indicating that the C parameter should be mutable\n", " typestring += \"CCTK_REAL \"\n", " else:\n", " keep_param = False\n", " elif partype == \"int\":\n", " typestring += \"CCTK_INT \"\n", " elif partype == \"#define\":\n", " keep_param = False\n", " elif partype == \"char\":\n", " # FIXME: char/string parameter types should in principle be supported\n", " print(\"Error: parameter \"+paramtuple.module+\"::\"+paramtuple.parname+\n", " \" has unsupported type: \\\"\"+ paramtuple.type + \"\\\"\")\n", " sys.exit(1)\n", " else:\n", " print(\"Error: parameter \"+paramtuple.module+\"::\"+paramtuple.parname+\n", " \" has unsupported type: \\\"\"+ paramtuple.type + \"\\\"\")\n", " sys.exit(1)\n", " return keep_param, typestring\n", "\n", "def output_param_ccl(ThornName=\"BaikalETK\"):\n", " with open(os.path.join(ThornName,\"param.ccl\"), \"w\") as file:\n", " file.write(\"\"\"\n", "# This param.ccl file was automatically generated by NRPy+.\n", "# You are advised against modifying it directly; instead\n", "# modify the Python code that generates it.\n", "\n", "shares: ADMBase # Extends multiple ADMBase variables:\n", "\n", "EXTENDS CCTK_KEYWORD evolution_method \"evolution_method\"\n", "{\n", " \"BaikalETK\" :: \"\"\n", "}\n", "\n", "EXTENDS CCTK_KEYWORD lapse_evolution_method \"lapse_evolution_method\"\n", "{\n", " \"BaikalETK\" :: \"\"\n", "}\n", "\n", "EXTENDS CCTK_KEYWORD shift_evolution_method \"shift_evolution_method\"\n", "{\n", " \"BaikalETK\" :: \"\"\n", "}\n", "\n", "EXTENDS CCTK_KEYWORD dtshift_evolution_method \"dtshift_evolution_method\"\n", "{\n", " \"BaikalETK\" :: \"\"\n", "}\n", "\n", "EXTENDS CCTK_KEYWORD dtlapse_evolution_method \"dtlapse_evolution_method\"\n", "{\n", " \"BaikalETK\" :: \"\"\n", "}\n", "\n", "restricted:\n", "\n", "CCTK_INT FD_order \"Finite-differencing order\"\n", "{\\n\"\"\".replace(\"BaikalETK\",ThornName))\n", " FDorders = []\n", " for _root, _dirs, files in os.walk(os.path.join(ThornName,\"src\")): # _root,_dirs unused.\n", " for Cfilename in files:\n", " if (\"BSSN_Ricci_FD_order\" in Cfilename) and (\".h\" in Cfilename):\n", " array = Cfilename.replace(\".\",\"_\").split(\"_\")\n", " FDorders.append(int(array[-2]))\n", " FDorders.sort()\n", " for order in FDorders:\n", " file.write(\" \"+str(order)+\":\"+str(order)+\" :: \\\"finite-differencing order = \"+str(order)+\"\\\"\\n\")\n", " FDorder_default = 4\n", " if FDorder_default not in FDorders:\n", " print(\"WARNING: 4th-order FD kernel was not output!?! Changing default FD order to \"+str(FDorders[0]))\n", " FDorder_default = FDorders[0]\n", " file.write(\"} \"+str(FDorder_default)+ \"\\n\\n\") # choose 4th order by default, consistent with ML_BSSN\n", " paramccl_str = \"\"\n", " for i in range(len(par.glb_Cparams_list)):\n", " # keep_param is a boolean indicating whether we should accept or reject\n", " # the parameter. singleparstring will contain the string indicating\n", " # the variable type.\n", " keep_param, singleparstring = keep_param__return_type(par.glb_Cparams_list[i])\n", "\n", " if keep_param:\n", " parname = par.glb_Cparams_list[i].parname\n", " partype = par.glb_Cparams_list[i].type\n", " singleparstring += parname + \" \\\"\"+ parname +\" (see NRPy+ for parameter definition)\\\"\\n\"\n", " singleparstring += \"{\\n\"\n", " if partype != \"bool\":\n", " singleparstring += \" *:* :: \\\"All values accepted. NRPy+ does not restrict the allowed ranges of parameters yet.\\\"\\n\"\n", " singleparstring += \"} \"+str(par.glb_Cparams_list[i].defaultval)+\"\\n\\n\"\n", "\n", " paramccl_str += singleparstring\n", " file.write(paramccl_str)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='interfaceccl'></a>\n", "\n", "## Step 3.b: `interface.ccl`: define needed gridfunctions; provide keywords denoting what this thorn provides and what it should inherit from other thorns \\[Back to [top](#toc)\\]\n", "$$\\label{interfaceccl}$$\n", "\n", "`interface.ccl` declares all gridfunctions and determines how `BaikalETK` interacts with other Einstein Toolkit thorns.\n", "\n", "The [official Einstein Toolkit (Cactus) documentation](http://einsteintoolkit.org/usersguide/UsersGuide.html) defines what must/should be included in an `interface.ccl` file [**here**](http://einsteintoolkit.org/usersguide/UsersGuidech12.html#x17-178000D2.2). " ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_ETK_ccl_files_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_ETK_ccl_files_codegen.py\n", "\n", "# First construct lists of the basic gridfunctions used in NRPy+.\n", "# Each type will be its own group in BaikalETK.\n", "evol_gfs_list = []\n", "auxevol_gfs_list = []\n", "aux_gfs_list = []\n", "for i in range(len(gri.glb_gridfcs_list)):\n", " if gri.glb_gridfcs_list[i].gftype == \"EVOL\":\n", " evol_gfs_list.append( gri.glb_gridfcs_list[i].name+\"GF\")\n", "\n", " if gri.glb_gridfcs_list[i].gftype == \"AUX\":\n", " aux_gfs_list.append( gri.glb_gridfcs_list[i].name+\"GF\")\n", "\n", " if gri.glb_gridfcs_list[i].gftype == \"AUXEVOL\":\n", " auxevol_gfs_list.append(gri.glb_gridfcs_list[i].name+\"GF\")\n", "\n", "# NRPy+'s finite-difference code generator assumes gridfunctions\n", "# are alphabetized; not sorting may result in unnecessary\n", "# cache misses.\n", "evol_gfs_list.sort()\n", "aux_gfs_list.sort()\n", "auxevol_gfs_list.sort()\n", "rhs_list = []\n", "for gf in evol_gfs_list:\n", " rhs_list.append(gf.replace(\"GF\",\"\")+\"_rhsGF\")\n", "\n", "def output_interface_ccl(ThornName=\"BaikalETK\",enable_stress_energy_source_terms=False):\n", " outstr = \"\"\"\n", "# This interface.ccl file was automatically generated by NRPy+.\n", "# You are advised against modifying it directly; instead\n", "# modify the Python code that generates it.\n", "\n", "# With \"implements\", we give our thorn its unique name.\n", "implements: BaikalETK\n", "\n", "# By \"inheriting\" other thorns, we tell the Toolkit that we\n", "# will rely on variables/function that exist within those\n", "# functions.\n", "inherits: ADMBase Boundary Grid MethodofLines\\n\"\"\"\n", " if enable_stress_energy_source_terms == True:\n", " outstr += \"inherits: TmunuBase\\n\"\n", " # Need a raw string here due to all the backslashes:\n", " outstr += r\"\"\"\n", "# Needed functions and #include's:\n", "USES INCLUDE: Symmetry.h\n", "USES INCLUDE: Boundary.h\n", "\n", "# Needed Method of Lines function\n", "CCTK_INT FUNCTION MoLRegisterEvolvedGroup(CCTK_INT IN EvolvedIndex, \\\n", " CCTK_INT IN RHSIndex)\n", "REQUIRES FUNCTION MoLRegisterEvolvedGroup\n", "\n", "# Needed Boundary Conditions function\n", "CCTK_INT FUNCTION GetBoundarySpecification(CCTK_INT IN size, CCTK_INT OUT ARRAY nboundaryzones, CCTK_INT OUT ARRAY is_internal, CCTK_INT OUT ARRAY is_staggered, CCTK_INT OUT ARRAY shiftout)\n", "USES FUNCTION GetBoundarySpecification\n", "\n", "CCTK_INT FUNCTION SymmetryTableHandleForGrid(CCTK_POINTER_TO_CONST IN cctkGH)\n", "USES FUNCTION SymmetryTableHandleForGrid\n", "\n", "CCTK_INT FUNCTION Boundary_SelectVarForBC(CCTK_POINTER_TO_CONST IN GH, CCTK_INT IN faces, CCTK_INT IN boundary_width, CCTK_INT IN table_handle, CCTK_STRING IN var_name, CCTK_STRING IN bc_name)\n", "USES FUNCTION Boundary_SelectVarForBC\n", "\n", "# Needed for EinsteinEvolve/NewRad outer boundary condition driver:\n", "CCTK_INT FUNCTION \\\n", " NewRad_Apply \\\n", " (CCTK_POINTER_TO_CONST IN cctkGH, \\\n", " CCTK_REAL ARRAY IN var, \\\n", " CCTK_REAL ARRAY INOUT rhs, \\\n", " CCTK_REAL IN var0, \\\n", " CCTK_REAL IN v0, \\\n", " CCTK_INT IN radpower)\n", "REQUIRES FUNCTION NewRad_Apply\n", "\n", "# Needed to convert ADM initial data into BSSN initial data (gamma extrapolation)\n", "CCTK_INT FUNCTION \\\n", " ExtrapolateGammas \\\n", " (CCTK_POINTER_TO_CONST IN cctkGH, \\\n", " CCTK_REAL ARRAY INOUT var)\n", "REQUIRES FUNCTION ExtrapolateGammas\n", "\n", "# Tell the Toolkit that we want all gridfunctions\n", "# to be visible to other thorns by using\n", "# the keyword \"public\". Note that declaring these\n", "# gridfunctions *does not* allocate memory for them;\n", "# that is done by the schedule.ccl file.\n", "\n", "# FIXME: add info for symmetry conditions:\n", "# https://einsteintoolkit.org/thornguide/CactusBase/SymBase/documentation.html\n", "public:\n", "\"\"\"\n", "\n", " # Next we declare gridfunctions based on their corresponding gridfunction groups as registered within NRPy+\n", " def output_list_of_gfs(gfs_list, description=\"User did not provide description\"):\n", " gfs_list_parsed = []\n", " for j in range(len(gfs_list)):\n", " # Add all gridfunctions in the list...\n", " gfs_list_parsed.append(gfs_list[j])\n", " # Then remove T4UU gridfunction from list if enable_stress_energy_source_terms==False:\n", " if \"T4UU\" in gfs_list_parsed[-1] and enable_stress_energy_source_terms==False:\n", " del gfs_list_parsed[-1]\n", " gfsstr = \" \"\n", " for j in range(len(gfs_list_parsed)):\n", " gfsstr += gfs_list_parsed[j]\n", " if j != len(gfs_list_parsed)-1:\n", " gfsstr += \",\" # This is a comma-separated list of gridfunctions\n", " else:\n", " gfsstr += \"\\n} \\\"\"+description+\"\\\"\\n\\n\"\n", " return gfsstr\n", " # First EVOL type:\n", " outstr += \"CCTK_REAL evol_variables type = GF Timelevels=3\\n{\\n\"\n", " outstr += output_list_of_gfs(evol_gfs_list, \"BSSN evolved gridfunctions\")\n", " # Second EVOL right-hand-sides\n", " outstr += \"CCTK_REAL evol_variables_rhs type = GF Timelevels=1 TAGS=\\'InterpNumTimelevels=1 prolongation=\\\"none\\\"\\'\\n{\\n\"\n", " outstr += output_list_of_gfs(rhs_list, \"right-hand-side storage for BSSN evolved gridfunctions\")\n", " # Then AUX type:\n", " outstr += \"CCTK_REAL aux_variables type = GF Timelevels=3\\n{\\n\"\n", " outstr += output_list_of_gfs(aux_gfs_list, \"Auxiliary gridfunctions for BSSN diagnostics\")\n", " # Finally, AUXEVOL type:\n", " outstr += \"CCTK_REAL auxevol_variables type = GF Timelevels=1 TAGS=\\'InterpNumTimelevels=1 prolongation=\\\"none\\\"\\'\\n{\\n\"\n", " outstr += output_list_of_gfs(auxevol_gfs_list, \"Auxiliary gridfunctions needed for evaluating the BSSN RHSs\")\n", "\n", " with open(os.path.join(ThornName, \"interface.ccl\"), \"w\") as file:\n", " file.write(outstr.replace(\"BaikalETK\", ThornName))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='scheduleccl'></a>\n", "\n", "## Step 3.c: `schedule.ccl`: schedule all functions used within `BaikalETK`, specify data dependencies within said functions, and allocate memory for gridfunctions \\[Back to [top](#toc)\\]\n", "$$\\label{scheduleccl}$$\n", "\n", "Official documentation on constructing ETK `schedule.ccl` files is found [here](http://einsteintoolkit.org/usersguide/UsersGuidech12.html#x17-186000D2.4)." ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_ETK_ccl_files_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_ETK_ccl_files_codegen.py\n", "\n", "def output_schedule_ccl(ThornName=\"BaikalETK\",enable_stress_energy_source_terms=False):\n", " outstr = \"\"\"\n", "# This schedule.ccl file was automatically generated by NRPy+.\n", "# You are advised against modifying it directly; instead\n", "# modify the Python code that generates it.\n", "\n", "# First allocate storage for one timelevel of ADMBase gridfunctions, which is the\n", "# bare minimum needed by NRPy+. If another thorn (e.g., ADMBase itself) requests\n", "# more timelevels of storage, Cactus automatically allocates the maximum requested.\n", "STORAGE: ADMBase::metric[1], ADMBase::curv[1], ADMBase::lapse[1], ADMBase::shift[1]\n", "\n", "# Next allocate storage for all 3 gridfunction groups used in BaikalETK\n", "STORAGE: evol_variables[3] # Evolution variables\n", "STORAGE: evol_variables_rhs[1] # Variables storing right-hand-sides\n", "STORAGE: aux_variables[3] # Diagnostics variables\n", "STORAGE: auxevol_variables[1] # Single-timelevel storage of variables needed for evolutions.\n", "\n", "# The following scheduler is based on Lean/LeanBSSNMoL/schedule.ccl\n", "\n", "schedule BaikalETK_Banner at STARTUP\n", "{\n", " LANG: C\n", " OPTIONS: meta\n", "} \"Output ASCII art banner\"\n", "\n", "schedule BaikalETK_RegisterSlicing at STARTUP after BaikalETK_Banner\n", "{\n", " LANG: C\n", " OPTIONS: meta\n", "} \"Register 3+1 slicing condition\"\n", "\n", "schedule BaikalETK_Symmetry_registration at BASEGRID\n", "{\n", " LANG: C\n", " OPTIONS: Global\n", "} \"Register symmetries, the CartGrid3D way.\"\n", "\n", "schedule BaikalETK_zero_rhss at BASEGRID after BaikalETK_Symmetry_registration\n", "{\n", " LANG: C\n", "} \"Idea from Lean: set all rhs functions to zero to prevent spurious nans\"\n", "\n", "schedule BaikalETK_ADM_to_BSSN at CCTK_INITIAL after ADMBase_PostInitial\n", "{\n", " LANG: C\n", " OPTIONS: Local\n", " SYNC: evol_variables\n", "} \"Convert initial data into BSSN variables\"\n", "\n", "schedule GROUP ApplyBCs as BaikalETK_ApplyBCs at CCTK_INITIAL after BaikalETK_ADM_to_BSSN\n", "{\n", "} \"Apply boundary conditions\"\n", "\n", "\n", "# MoL: registration\n", "\n", "schedule BaikalETK_MoL_registration in MoL_Register\n", "{\n", " LANG: C\n", " OPTIONS: META\n", "} \"Register variables for MoL\"\n", "\n", "\n", "# MoL: compute RHSs, etc\n", "\"\"\"\n", " if enable_stress_energy_source_terms == True:\n", " outstr += \"\"\"\n", "schedule BaikalETK_driver_BSSN_T4UU in MoL_CalcRHS as BaikalETK_T4UU before BaikalETK_BSSN_to_ADM\n", "{\n", " LANG: C\n", "} \"MoL: Compute T4UU, needed for BSSN RHSs.\"\n", "\"\"\"\n", " outstr += \"\"\"\n", "schedule BaikalETK_driver_pt1_BSSN_Ricci in MoL_CalcRHS as BaikalETK_Ricci before BaikalETK_RHS\n", "{\n", " LANG: C\n", "} \"MoL: Compute Ricci tensor\"\n", "\n", "schedule BaikalETK_driver_pt2_BSSN_RHSs in MoL_CalcRHS as BaikalETK_RHS after BaikalETK_Ricci\n", "{\n", " LANG: C\n", "} \"MoL: Evaluate BSSN RHSs\"\n", "\n", "schedule BaikalETK_NewRad in MoL_CalcRHS after BaikalETK_RHS\n", "{\n", " LANG: C\n", "} \"NewRad boundary conditions, scheduled right after RHS eval.\"\n", "\n", "schedule BaikalETK_floor_the_lapse in MoL_PostStep before BaikalETK_enforce_detgammahat_constraint before BC_Update\n", "{\n", " LANG: C\n", "} \"Set lapse = max(lapse_floor, lapse)\"\n", "\n", "schedule BaikalETK_enforce_detgammahat_constraint in MoL_PostStep before BC_Update\n", "{\n", " LANG: C\n", "} \"Enforce detgammabar = detgammahat (= 1 in Cartesian)\"\n", "\n", "schedule BaikalETK_BoundaryConditions_evolved_gfs in MoL_PostStep\n", "{\n", " LANG: C\n", " OPTIONS: LEVEL\n", " SYNC: evol_variables\n", "} \"Apply boundary conditions and perform AMR+interprocessor synchronization\"\n", "\n", "schedule GROUP ApplyBCs as BaikalETK_ApplyBCs in MoL_PostStep after BaikalETK_BoundaryConditions_evolved_gfs\n", "{\n", "} \"Group for applying boundary conditions\"\n", "\n", "\n", "# Next update ADM quantities\n", "\n", "schedule BaikalETK_BSSN_to_ADM in MoL_PostStep after BaikalETK_ApplyBCs before ADMBase_SetADMVars\n", "{\n", " LANG: C\n", " OPTIONS: Local\n", "} \"Perform BSSN-to-ADM conversion. Useful for diagnostics.\"\n", "\n", "# Compute Hamiltonian & momentum constraints\n", "\"\"\"\n", " if enable_stress_energy_source_terms == True:\n", " outstr += \"\"\"\n", "schedule BaikalETK_driver_BSSN_T4UU in MoL_PseudoEvolution before BaikalETK_BSSN_constraints\n", "{\n", " LANG: C\n", " OPTIONS: Local\n", "} \"MoL_PseudoEvolution: Compute T4UU, needed for BSSN constraints\"\n", "\"\"\"\n", " outstr += \"\"\"\n", "\n", "schedule BaikalETK_BSSN_constraints in MoL_PseudoEvolution\n", "{\n", " LANG: C\n", " OPTIONS: Local\n", "} \"Compute BSSN (Hamiltonian and momentum) constraints\"\n", "\n", "schedule BaikalETK_BoundaryConditions_aux_gfs in MoL_PseudoEvolution after BaikalETK_BSSN_constraints\n", "{\n", " LANG: C\n", " OPTIONS: LEVEL\n", " SYNC: aux_variables\n", "} \"Enforce symmetry BCs in constraint computation\"\n", "\n", "\"\"\"\n", " if enable_stress_energy_source_terms == True:\n", " outstr += \"\"\"\n", "schedule BaikalETK_BSSN_to_ADM in MoL_PseudoEvolution after BaikalETK_BoundaryConditions_aux_gfs\n", "{\n", " LANG: C\n", " OPTIONS: Local\n", "} \"Perform BSSN-to-ADM conversion in MoL_PseudoEvolution. Needed for proper HydroBase integration.\"\n", "\"\"\"\n", " outstr += \"\"\"\n", "schedule GROUP ApplyBCs as BaikalETK_auxgfs_ApplyBCs in MoL_PseudoEvolution after BaikalETK_BoundaryConditions_aux_gfs\n", "{\n", "} \"Apply boundary conditions\"\n", "\"\"\"\n", " with open(os.path.join(ThornName,\"schedule.ccl\"), \"w\") as file:\n", " file.write(outstr.replace(\"BaikalETK\",ThornName))" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "scrolled": true }, "outputs": [], "source": [ "import BaikalETK_validate.BaikalETK_ETK_ccl_files_codegen as cclgen\n", "\n", "for enable_stress_energy_source_terms in [True,False]:\n", " ThornName=\"Baikal\"\n", " if enable_stress_energy_source_terms==False:\n", " ThornName=\"BaikalVacuum\"\n", " cclgen.output_param_ccl(ThornName)\n", " cclgen.output_interface_ccl(ThornName,enable_stress_energy_source_terms)\n", " cclgen.output_schedule_ccl(ThornName,enable_stress_energy_source_terms)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='cdrivers'></a>\n", "\n", "# Step 4: C driver functions for ETK registration & NRPy+-generated kernels \\[Back to [top](#toc)\\]\n", "$$\\label{cdrivers}$$\n", "\n", "Now that we have constructed the basic C code kernels and the needed Einstein Toolkit `ccl` files, we next write the driver functions for registering `BaikalETK` within the Toolkit and the C code kernels. Each of these driver functions will be called directly from the thorn's [`schedule.ccl`](#scheduleccl) in the ETK.\n", "\n", "<a id='etkfunctions'></a>\n", "## Step 4.a: Needed ETK functions: Banner, Symmetry registration, Parameter sanity check, Method of Lines (`MoL`) registration, Boundary condition \\[Back to [top](#toc)\\]\n", "$$\\label{etkfunctions}$$\n", "\n", "### To-do: Parameter sanity check function. E.g., error should be thrown if `cctk_nghostzones[]` is set too small for the chosen finite-differencing order within NRPy+." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Overwriting BaikalETK_validate/BaikalETK_C_drivers_codegen.py\n" ] } ], "source": [ "%%writefile $BaikalETKdir/BaikalETK_C_drivers_codegen.py\n", "\n", "# Step 1: Import needed core NRPy+ and Python modules\n", "from outputC import lhrh # NRPy+: Core C code output module\n", "import NRPy_param_funcs as par # NRPy+: Parameter interface\n", "import finite_difference as fin # NRPy+: Finite difference C code generation module\n", "import grid as gri # NRPy+: Functions having to do with numerical grids\n", "import loop as lp # NRPy+: Generate C code loops\n", "import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support\n", "import os, sys # Standard Python modules for multiplatform OS-level functions\n", "# We need the function keep_param__return_type() from this module:\n", "import BaikalETK_validate.BaikalETK_ETK_ccl_files_codegen as ccl\n", "\n", "make_code_defn_list = []\n", "def append_to_make_code_defn_list(filename):\n", " if filename not in make_code_defn_list:\n", " make_code_defn_list.append(filename)\n", " return filename" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_C_drivers_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_C_drivers_codegen.py\n", "\n", "def driver_C_codes(Csrcdict, ThornName,\n", " rhs_list,evol_gfs_list,aux_gfs_list,auxevol_gfs_list,\n", " LapseCondition = \"OnePlusLog\", enable_stress_energy_source_terms=False):\n", " # First the ETK banner code, proudly showing the NRPy+ banner\n", " import NRPy_logo as logo\n", " outstr = \"\"\"\n", "#include <stdio.h>\n", "\n", "void BaikalETK_Banner()\n", "{\n", " \"\"\"\n", " logostr = logo.print_logo(print_to_stdout=False)\n", " outstr += \"printf(\\\"BaikalETK: another Einstein Toolkit thorn generated by\\\\n\\\");\\n\"\n", " for line in logostr.splitlines():\n", " outstr += \" printf(\\\"\"+line+\"\\\\n\\\");\\n\"\n", " outstr += \"}\\n\"\n", "\n", " # Finally add C code string to dictionaries (Python dictionaries are immutable)\n", " # Add C code string to dictionary (Python dictionaries are immutable)\n", " Csrcdict[append_to_make_code_defn_list(\"Banner.c\")] = outstr.replace(\"BaikalETK\",ThornName)" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_C_drivers_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_C_drivers_codegen.py\n", "\n", " # Then the RegisterSlicing() function, needed for other ETK thorns\n", " outstr = \"\"\"\n", "#include \"cctk.h\"\n", "\n", "#include \"Slicing.h\"\n", "\n", "int BaikalETK_RegisterSlicing (void)\n", "{\n", " Einstein_RegisterSlicing (\"BaikalETK\");\n", " return 0;\n", "}\"\"\"\n", "\n", " # Add C code string to dictionary (Python dictionaries are immutable)\n", " Csrcdict[append_to_make_code_defn_list(\"RegisterSlicing.c\")] = outstr.replace(\"BaikalETK\",ThornName)" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_C_drivers_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_C_drivers_codegen.py\n", "\n", " # Next BaikalETK_Symmetry_registration(): Register symmetries\n", "\n", " full_gfs_list = []\n", " full_gfs_list.extend(evol_gfs_list)\n", " full_gfs_list.extend(auxevol_gfs_list)\n", " full_gfs_list.extend(aux_gfs_list)\n", "\n", " outstr = \"\"\"\n", "#include \"cctk.h\"\n", "#include \"cctk_Arguments.h\"\n", "#include \"cctk_Parameters.h\"\n", "#include \"Symmetry.h\"\n", "\n", "void BaikalETK_Symmetry_registration(CCTK_ARGUMENTS)\n", "{\n", " DECLARE_CCTK_ARGUMENTS;\n", " DECLARE_CCTK_PARAMETERS;\n", "\n", " // Stores gridfunction parity across x=0, y=0, and z=0 planes, respectively\n", " int sym[3];\n", "\n", " // Next register parities for each gridfunction based on its name\n", " // (to ensure this algorithm is robust, gridfunctions with integers\n", " // in their base names are forbidden in NRPy+).\n", "\"\"\"\n", " outstr += \"\"\n", " for gfname in full_gfs_list:\n", " gfname_without_GFsuffix = gfname[:-2]\n", " # Do not add T4UU gridfunctions if enable_stress_energy_source_terms==False:\n", " if not (enable_stress_energy_source_terms == False and \"T4UU\" in gfname_without_GFsuffix):\n", " outstr += \"\"\"\n", " // Default to scalar symmetry:\n", " sym[0] = 1; sym[1] = 1; sym[2] = 1;\n", " // Now modify sym[0], sym[1], and/or sym[2] as needed\n", " // to account for gridfunction parity across\n", " // x=0, y=0, and/or z=0 planes, respectively\n", "\"\"\"\n", " # If gridfunction name does not end in a digit, by NRPy+ syntax, it must be a scalar\n", " if gfname_without_GFsuffix[len(gfname_without_GFsuffix) - 1].isdigit() == False:\n", " outstr += \" // (this gridfunction is a scalar -- no need to change default sym[]'s!)\\n\"\n", " elif len(gfname_without_GFsuffix) > 2:\n", " # Rank-1 indexed expression (e.g., vector)\n", " if gfname_without_GFsuffix[len(gfname_without_GFsuffix) - 2].isdigit() == False:\n", " if int(gfname_without_GFsuffix[-1]) > 2:\n", " print(\"Error: Found invalid gridfunction name: \"+gfname)\n", " sys.exit(1)\n", " symidx = gfname_without_GFsuffix[-1]\n", " if int(symidx) < 3: outstr += \" sym[\" + symidx + \"] = -1;\\n\"\n", " # Rank-2 indexed expression\n", " elif gfname_without_GFsuffix[len(gfname_without_GFsuffix) - 2].isdigit() == True:\n", " if len(gfname_without_GFsuffix) > 3 and gfname_without_GFsuffix[len(gfname_without_GFsuffix) - 3].isdigit() == True:\n", " print(\"Error: Found a Rank-3 or above gridfunction: \"+gfname+\", which is at the moment unsupported.\")\n", " print(\"It should be easy to support this if desired.\")\n", " sys.exit(1)\n", " symidx0 = gfname_without_GFsuffix[-2]\n", " if \"T4UU\" in gfname: symidx0 = str(int(symidx0)-1) # E.g., T4UU23 is T4UUyz, corresponding to directions 1,2\n", " if int(symidx0) >= 0: outstr += \" sym[\" + symidx0 + \"] *= -1;\\n\"\n", " symidx1 = gfname_without_GFsuffix[-1]\n", " if \"T4UU\" in gfname: symidx1 = str(int(symidx1)-1) # E.g., T4UU23 is T4UUyz, corresponding to directions 1,2\n", " if int(symidx1) >= 0: outstr += \" sym[\" + symidx1 + \"] *= -1;\\n\"\n", " else:\n", " print(\"Don't know how you got this far with a gridfunction named \"+gfname+\", but I'll take no more of this nonsense.\")\n", " print(\" Please follow best-practices and rename your gridfunction to be more descriptive\")\n", " sys.exit(1)\n", " outstr += \" SetCartSymVN(cctkGH, sym, \\\"BaikalETK::\" + gfname + \"\\\");\\n\"\n", " outstr += \"}\\n\"\n", "\n", " # Add C code string to dictionary (Python dictionaries are immutable)\n", " Csrcdict[append_to_make_code_defn_list(\"Symmetry_registration_oldCartGrid3D.c\")] = \\\n", " outstr.replace(\"BaikalETK\",ThornName)" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_C_drivers_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_C_drivers_codegen.py\n", "\n", " # Next set RHSs to zero\n", " outstr = \"\"\"\n", "#include \"cctk.h\"\n", "#include \"cctk_Arguments.h\"\n", "#include \"cctk_Parameters.h\"\n", "#include \"Symmetry.h\"\n", "\n", "void BaikalETK_zero_rhss(CCTK_ARGUMENTS)\n", "{\n", " DECLARE_CCTK_ARGUMENTS;\n", " DECLARE_CCTK_PARAMETERS;\n", "\"\"\"\n", " set_rhss_to_zero = \"\"\n", " for gf in rhs_list:\n", " set_rhss_to_zero += gf+\"[CCTK_GFINDEX3D(cctkGH,i0,i1,i2)] = 0.0;\\n\"\n", "\n", " outstr += lp.loop([\"i2\",\"i1\",\"i0\"],[\"0\", \"0\", \"0\"],\n", " [\"cctk_lsh[2]\",\"cctk_lsh[1]\",\"cctk_lsh[0]\"],\n", " [\"1\",\"1\",\"1\"],\n", " [\"#pragma omp parallel for\",\"\",\"\",],\"\",set_rhss_to_zero)\n", " outstr += \"}\\n\"\n", " # Add C code string to dictionary (Python dictionaries are immutable)\n", " Csrcdict[append_to_make_code_defn_list(\"zero_rhss.c\")] = outstr.replace(\"BaikalETK\",ThornName)" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_C_drivers_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_C_drivers_codegen.py\n", "\n", " # Next floor the lapse\n", " outstr = \"\"\"\n", "#include \"cctk.h\"\n", "#include \"cctk_Arguments.h\"\n", "#include \"cctk_Parameters.h\"\n", "#include \"Symmetry.h\"\n", "\n", "#ifndef MAX\n", "#define MAX(a,b) ( ((a) > (b)) ? (a) : (b) )\n", "\n", "void BaikalETK_floor_the_lapse(CCTK_ARGUMENTS)\n", "{\n", " DECLARE_CCTK_ARGUMENTS;\n", " DECLARE_CCTK_PARAMETERS;\n", "\"\"\"\n", " outstr += lp.loop([\"i2\",\"i1\",\"i0\"],[\"0\", \"0\", \"0\"],\n", " [\"cctk_lsh[2]\",\"cctk_lsh[1]\",\"cctk_lsh[0]\"],\n", " [\"1\",\"1\",\"1\"],\n", " [\"#pragma omp parallel for\",\"\",\"\",],\"\",\"\"\"\n", "alphaGF[CCTK_GFINDEX3D(cctkGH, i0,i1,i2)] = MAX(alphaGF[CCTK_GFINDEX3D(cctkGH, i0,i1,i2)], lapse_floor);\n", "\"\"\")\n", " outstr += \"\"\"\n", "}\n", "#undef MAX\n", "#endif\\n\"\"\"\n", " # Add C code string to dictionary (Python dictionaries are immutable)\n", " Csrcdict[append_to_make_code_defn_list(\"floor_the_lapse.c\")] = outstr.replace(\"BaikalETK\",ThornName)" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_C_drivers_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_C_drivers_codegen.py\n", "\n", " # Next registration with the Method of Lines thorn\n", " outstr = \"\"\"\n", "//--------------------------------------------------------------------------\n", "// Register with the Method of Lines time stepper\n", "// (MoL thorn, found in arrangements/CactusBase/MoL)\n", "// MoL documentation located in arrangements/CactusBase/MoL/doc\n", "//--------------------------------------------------------------------------\n", "#include <stdio.h>\n", "\n", "#include \"cctk.h\"\n", "#include \"cctk_Parameters.h\"\n", "#include \"cctk_Arguments.h\"\n", "\n", "#include \"Symmetry.h\"\n", "\n", "void BaikalETK_MoL_registration(CCTK_ARGUMENTS)\n", "{\n", " DECLARE_CCTK_ARGUMENTS;\n", " DECLARE_CCTK_PARAMETERS;\n", "\n", " CCTK_INT ierr = 0, group, rhs;\n", "\n", " // Register evolution & RHS gridfunction groups with MoL, so it knows\n", "\n", " group = CCTK_GroupIndex(\"BaikalETK::evol_variables\");\n", " rhs = CCTK_GroupIndex(\"BaikalETK::evol_variables_rhs\");\n", " ierr += MoLRegisterEvolvedGroup(group, rhs);\n", "\n", " if (ierr) CCTK_ERROR(\"Problems registering with MoL\");\n", "}\n", "\"\"\"\n", " # Add C code string to dictionary (Python dictionaries are immutable)\n", " Csrcdict[append_to_make_code_defn_list(\"MoL_registration.c\")] = outstr.replace(\"BaikalETK\",ThornName)" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_C_drivers_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_C_drivers_codegen.py\n", "\n", " # Next register with the boundary conditions thorns.\n", " # PART 1: Set BC type to \"none\" for all variables\n", " # Since we choose NewRad boundary conditions, we must register all\n", " # gridfunctions to have boundary type \"none\". This is because\n", " # NewRad is seen by the rest of the Toolkit as a modification to the\n", " # RHSs.\n", "\n", " # This code is based on Kranc's McLachlan/ML_BSSN/src/Boundaries.cc code.\n", " outstr = \"\"\"\n", "#include \"cctk.h\"\n", "#include \"cctk_Arguments.h\"\n", "#include \"cctk_Parameters.h\"\n", "#include \"cctk_Faces.h\"\n", "#include \"util_Table.h\"\n", "#include \"Symmetry.h\"\n", "\n", "// Set `none` boundary conditions on BSSN RHSs, as these are set via NewRad.\n", "void BaikalETK_BoundaryConditions_evolved_gfs(CCTK_ARGUMENTS)\n", "{\n", " DECLARE_CCTK_ARGUMENTS;\n", " DECLARE_CCTK_PARAMETERS;\n", "\n", " CCTK_INT ierr CCTK_ATTRIBUTE_UNUSED = 0;\n", "\"\"\"\n", " for gf in evol_gfs_list:\n", " outstr += \"\"\"\n", " ierr = Boundary_SelectVarForBC(cctkGH, CCTK_ALL_FACES, 1, -1, \"BaikalETK::\"\"\"+gf+\"\"\"\", \"none\");\n", " if (ierr < 0) CCTK_ERROR(\"Failed to register BC for BaikalETK::\"\"\"+gf+\"\"\"!\");\n", "\"\"\"\n", " outstr += \"\"\"\n", "}\n", "\n", "// Set `flat` boundary conditions on BSSN constraints, similar to what Lean does.\n", "void BaikalETK_BoundaryConditions_aux_gfs(CCTK_ARGUMENTS) {\n", " DECLARE_CCTK_ARGUMENTS;\n", " DECLARE_CCTK_PARAMETERS;\n", "\n", " CCTK_INT ierr CCTK_ATTRIBUTE_UNUSED = 0;\n", "\n", " CCTK_INT bndsize[6];\n", " CCTK_INT is_internal[6];\n", " CCTK_INT is_staggered[6];\n", " CCTK_INT shiftout[6];\n", "\n", " GetBoundarySpecification(6, bndsize, is_internal, is_staggered, shiftout);\n", "\n", "\"\"\"\n", " for gf in aux_gfs_list:\n", " outstr += \"\"\"\n", " ierr = Boundary_SelectVarForBC(cctkGH, CCTK_ALL_FACES, bndsize[0], -1, \"BaikalETK::\"\"\"+gf+\"\"\"\", \"flat\");\n", " if (ierr < 0) CCTK_ERROR(\"Failed to register BC for BaikalETK::\"\"\"+gf+\"\"\"!\");\n", "\"\"\"\n", " outstr += \"}\\n\"\n", "\n", " # Add C code string to dictionary (Python dictionaries are immutable)\n", " Csrcdict[append_to_make_code_defn_list(\"BoundaryConditions.c\")] = outstr.replace(\"BaikalETK\",ThornName)\n", "\n", " # PART 2: Set C code for calling NewRad BCs\n", " # As explained in lean_public/LeanBSSNMoL/src/calc_bssn_rhs.F90,\n", " # the function NewRad_Apply takes the following arguments:\n", " # NewRad_Apply(cctkGH, var, rhs, var0, v0, radpower),\n", " # which implement the boundary condition:\n", " # var = var_at_infinite_r + u(r-var_char_speed*t)/r^var_radpower\n", " # Obviously for var_radpower>0, var_at_infinite_r is the value of\n", " # the variable at r->infinity. var_char_speed is the propagation\n", " # speed at the outer boundary, and var_radpower is the radial\n", " # falloff rate.\n", "\n", " outstr = \"\"\"\n", "#include <math.h>\n", "\n", "#include \"cctk.h\"\n", "#include \"cctk_Arguments.h\"\n", "#include \"cctk_Parameters.h\"\n", "\n", "void BaikalETK_NewRad(CCTK_ARGUMENTS) {\n", " DECLARE_CCTK_ARGUMENTS;\n", " DECLARE_CCTK_PARAMETERS;\n", "\n", "\"\"\"\n", " for gf in evol_gfs_list:\n", " var_at_infinite_r = \"0.0\"\n", " var_char_speed = \"1.0\"\n", " var_radpower = \"1.0\"\n", "\n", " if \"alpha\" in gf:\n", " var_at_infinite_r = \"1.0\"\n", " if LapseCondition == \"OnePlusLog\":\n", " var_char_speed = \"sqrt(2.0)\"\n", " else:\n", " pass # 1.0 (default) is fine\n", " if \"aDD\" in gf or \"trK\" in gf: # consistent with Lean code.\n", " var_radpower = \"2.0\"\n", "\n", " outstr += \" NewRad_Apply(cctkGH, \"+gf+\", \"+gf.replace(\"GF\",\"\")+\"_rhsGF, \"+var_at_infinite_r+\", \"+var_char_speed+\", \"+var_radpower+\");\\n\"\n", " outstr += \"}\\n\"\n", "\n", " # Add C code string to dictionary (Python dictionaries are immutable)\n", " Csrcdict[append_to_make_code_defn_list(\"BoundaryCondition_NewRad.c\")] = outstr.replace(\"BaikalETK\",ThornName)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='bssnadmconversions'></a>\n", "\n", "## Step 4.b: BSSN $\\leftrightarrow$ ADM conversions \\[Back to [top](#toc)\\]\n", "$$\\label{bssnadmconversions}$$\n", "\n", "<a id='admtobssn'></a>\n", "\n", "### Step 4.b.i: ADM $\\to$ BSSN \\[Back to [top](#toc)\\]\n", "$$\\label{admtobssn}$$\n", "\n", "Initial data in the Einstein Toolkit are given in terms of [ADM quantities](https://en.wikipedia.org/wiki/ADM_formalism), so a conversion is necessary so the quantities are in terms of BSSN variables used for evolving the initial data forward in time." ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_C_drivers_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_C_drivers_codegen.py\n", "\n", " # First we convert from ADM to BSSN, as is required to convert initial data\n", " # (given using) ADM quantities, to the BSSN evolved variables\n", " import BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear as atob\n", " IDhDD,IDaDD,IDtrK,IDvetU,IDbetU,IDalpha,IDcf,IDlambdaU = \\\n", " atob.Convert_Spherical_or_Cartesian_ADM_to_BSSN_curvilinear(\"Cartesian\",\"DoNotOutputADMInputFunction\",os.path.join(ThornName,\"src\"))\n", "\n", " # Store the original list of registered gridfunctions; we'll want to unregister\n", " # all the *SphorCart* gridfunctions after we're finished with them below.\n", " orig_glb_gridfcs_list = []\n", " for gf in gri.glb_gridfcs_list:\n", " orig_glb_gridfcs_list.append(gf)\n", "\n", " # We ignore the return values for the following register_gridfunctions...() calls,\n", " # as they are unused.\n", " gri.register_gridfunctions( \"AUXEVOL\", \"alphaSphorCart\")\n", " ixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\", \"betaSphorCartU\")\n", " ixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\", \"BSphorCartU\")\n", " ixp.register_gridfunctions_for_single_rank2(\"AUXEVOL\", \"gammaSphorCartDD\", \"sym01\")\n", " ixp.register_gridfunctions_for_single_rank2(\"AUXEVOL\", \"KSphorCartDD\", \"sym01\")\n", "\n", " # ADM to BSSN conversion, used for converting ADM initial data into a form readable by this thorn.\n", " # ADM to BSSN, Part 1: Set up function call and pointers to ADM gridfunctions\n", " outstr = \"\"\"\n", "#include <math.h>\n", "\n", "#include \"cctk.h\"\n", "#include \"cctk_Arguments.h\"\n", "#include \"cctk_Parameters.h\"\n", "\n", "void BaikalETK_ADM_to_BSSN(CCTK_ARGUMENTS) {\n", " DECLARE_CCTK_ARGUMENTS;\n", " DECLARE_CCTK_PARAMETERS;\n", "\n", " CCTK_REAL *alphaSphorCartGF = alp;\n", "\"\"\"\n", " # It's ugly if we output code in the following ordering, so we'll first\n", " # output to a string and then sort the string to beautify the code a bit.\n", " outstrtmp = []\n", " for i in range(3):\n", " outstrtmp.append(\" CCTK_REAL *betaSphorCartU\"+str(i)+\"GF = beta\"+chr(ord('x')+i)+\";\\n\")\n", " outstrtmp.append(\" CCTK_REAL *BSphorCartU\"+str(i)+\"GF = dtbeta\"+chr(ord('x')+i)+\";\\n\")\n", " for j in range(i,3):\n", " outstrtmp.append(\" CCTK_REAL *gammaSphorCartDD\"+str(i)+str(j)+\"GF = g\"+chr(ord('x')+i)+chr(ord('x')+j)+\";\\n\")\n", " outstrtmp.append(\" CCTK_REAL *KSphorCartDD\"+str(i)+str(j)+\"GF = k\"+chr(ord('x')+i)+chr(ord('x')+j)+\";\\n\")\n", " outstrtmp.sort()\n", " for line in outstrtmp:\n", " outstr += line\n", "\n", " # ADM to BSSN, Part 2: Set up ADM to BSSN conversions for BSSN gridfunctions that do not require\n", " # finite-difference derivatives (i.e., all gridfunctions except lambda^i (=Gamma^i\n", " # in non-covariant BSSN)):\n", " # h_{ij}, a_{ij}, trK, vet^i=beta^i,bet^i=B^i, cf (conformal factor), and alpha\n", "\n", " # Output finite difference stencils as inlined expressions.\n", " # We do this instead of outputting as FD functions, as this function\n", " # does not take long to compile, and we have already output all the\n", " # FD functions to file, so if this one contains new FD functions,\n", " # the compile will fail.\n", " par.set_parval_from_str(\"finite_difference::enable_FD_functions\", False)\n", "\n", " all_but_lambdaU_expressions = [\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"hDD00\"), rhs=IDhDD[0][0]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"hDD01\"), rhs=IDhDD[0][1]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"hDD02\"), rhs=IDhDD[0][2]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"hDD11\"), rhs=IDhDD[1][1]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"hDD12\"), rhs=IDhDD[1][2]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"hDD22\"), rhs=IDhDD[2][2]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"aDD00\"), rhs=IDaDD[0][0]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"aDD01\"), rhs=IDaDD[0][1]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"aDD02\"), rhs=IDaDD[0][2]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"aDD11\"), rhs=IDaDD[1][1]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"aDD12\"), rhs=IDaDD[1][2]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"aDD22\"), rhs=IDaDD[2][2]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"trK\"), rhs=IDtrK),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"vetU0\"), rhs=IDvetU[0]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"vetU1\"), rhs=IDvetU[1]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"vetU2\"), rhs=IDvetU[2]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"betU0\"), rhs=IDbetU[0]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"betU1\"), rhs=IDbetU[1]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"betU2\"), rhs=IDbetU[2]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"alpha\"), rhs=IDalpha),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\", \"cf\"), rhs=IDcf)]\n", "\n", " outCparams = \"preindent=1,outCfileaccess=a,outCverbose=False,includebraces=False\"\n", " all_but_lambdaU_outC = fin.FD_outputC(\"returnstring\", all_but_lambdaU_expressions, outCparams)\n", " outstr += lp.loop([\"i2\", \"i1\", \"i0\"], [\"0\", \"0\", \"0\"], [\"cctk_lsh[2]\", \"cctk_lsh[1]\", \"cctk_lsh[0]\"],\n", " [\"1\", \"1\", \"1\"], [\"#pragma omp parallel for\", \"\", \"\"], \" \", all_but_lambdaU_outC)\n", "\n", " # ADM to BSSN, Part 3: Set up ADM to BSSN conversions for BSSN gridfunctions defined from\n", " # finite-difference derivatives: lambda^i, which is Gamma^i in non-covariant BSSN):\n", " outstr += \"\"\"\n", " const CCTK_REAL invdx0 = 1.0/CCTK_DELTA_SPACE(0);\n", " const CCTK_REAL invdx1 = 1.0/CCTK_DELTA_SPACE(1);\n", " const CCTK_REAL invdx2 = 1.0/CCTK_DELTA_SPACE(2);\n", "\"\"\"\n", "\n", " path = os.path.join(ThornName,\"src\")\n", " BaikalETK_src_filelist = []\n", " for _root, _dirs, files in os.walk(path): # _root, _dirs unused.\n", " for filename in files:\n", " BaikalETK_src_filelist.append(filename)\n", " BaikalETK_src_filelist.sort() # Sort the list in place.\n", "\n", " BSSN_FD_orders_output = []\n", " for filename in BaikalETK_src_filelist:\n", " if \"BSSN_RHSs_\" in filename:\n", " array = filename.replace(\".\",\"_\").split(\"_\")\n", " FDorder = int(array[-2])\n", " if FDorder not in BSSN_FD_orders_output:\n", " BSSN_FD_orders_output.append(FDorder)\n", " BSSN_FD_orders_output.sort()\n", "\n", " for current_FD_order in BSSN_FD_orders_output:\n", " # Store original finite-differencing order:\n", " orig_FD_order = par.parval_from_str(\"finite_difference::FD_CENTDERIVS_ORDER\")\n", " # Set new finite-differencing order:\n", " par.set_parval_from_str(\"finite_difference::FD_CENTDERIVS_ORDER\", current_FD_order)\n", "\n", " outCparams = \"preindent=1,outCfileaccess=a,outCverbose=False,includebraces=False\"\n", " lambdaU_expressions = [lhrh(lhs=gri.gfaccess(\"in_gfs\",\"lambdaU0\"),rhs=IDlambdaU[0]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\",\"lambdaU1\"),rhs=IDlambdaU[1]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\",\"lambdaU2\"),rhs=IDlambdaU[2])]\n", " lambdaU_expressions_FDout = fin.FD_outputC(\"returnstring\",lambdaU_expressions, outCparams)\n", "\n", " lambdafile = \"ADM_to_BSSN__compute_lambdaU_FD_order_\"+str(current_FD_order)+\".h\"\n", " with open(os.path.join(ThornName,\"src\",lambdafile),\"w\") as file:\n", " file.write(lp.loop([\"i2\",\"i1\",\"i0\"],[\"cctk_nghostzones[2]\",\"cctk_nghostzones[1]\",\"cctk_nghostzones[0]\"],\n", " [\"cctk_lsh[2]-cctk_nghostzones[2]\",\"cctk_lsh[1]-cctk_nghostzones[1]\",\"cctk_lsh[0]-cctk_nghostzones[0]\"],\n", " [\"1\",\"1\",\"1\"],[\"#pragma omp parallel for\",\"\",\"\"],\"\",lambdaU_expressions_FDout))\n", "\n", " outstr += \" if(FD_order == \"+str(current_FD_order)+\") {\\n\"\n", " outstr += \" #include \\\"\"+lambdafile+\"\\\"\\n\"\n", " outstr += \" }\\n\"\n", " # Restore original FD order\n", " par.set_parval_from_str(\"finite_difference::FD_CENTDERIVS_ORDER\", orig_FD_order)\n", "\n", " outstr += \"\"\"\n", " ExtrapolateGammas(cctkGH,lambdaU0GF);\n", " ExtrapolateGammas(cctkGH,lambdaU1GF);\n", " ExtrapolateGammas(cctkGH,lambdaU2GF);\n", "}\n", "\"\"\"\n", "\n", " # Unregister the *SphorCartGF's.\n", " gri.glb_gridfcs_list = orig_glb_gridfcs_list\n", "\n", " # Add C code string to dictionary (Python dictionaries are immutable)\n", " Csrcdict[append_to_make_code_defn_list(\"ADM_to_BSSN.c\")] = outstr.replace(\"BaikalETK\",ThornName)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='bssntoadm'></a>\n", "\n", "### Step 4.b.ii: BSSN $\\to$ ADM \\[Back to [top](#toc)\\]\n", "$$\\label{bssntoadm}$$\n", "\n", "All modules (thorns) in the Einstein Toolkit that deal with spacetime quantities do so via the core `ADMBase` module, which assumes variables are written in ADM form. Therefore, in order for `BaikalETK` to interface properly with the rest of the Toolkit, its native BSSN variables must be converted to ADM quantities." ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_C_drivers_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_C_drivers_codegen.py\n", "\n", " import BSSN.ADM_in_terms_of_BSSN as btoa\n", " import BSSN.BSSN_quantities as Bq\n", "\n", " btoa.ADM_in_terms_of_BSSN()\n", " Bq.BSSN_basic_tensors() # Gives us betaU & BU\n", "\n", " outstr = \"\"\"\n", "#include <math.h>\n", "\n", "#include \"cctk.h\"\n", "#include \"cctk_Arguments.h\"\n", "#include \"cctk_Parameters.h\"\n", "\n", "void BaikalETK_BSSN_to_ADM(CCTK_ARGUMENTS) {\n", " DECLARE_CCTK_ARGUMENTS;\n", " DECLARE_CCTK_PARAMETERS;\n", "\n", "\"\"\"\n", " btoa_lhrh = []\n", " for i in range(3):\n", " for j in range(i,3):\n", " btoa_lhrh.append(lhrh(lhs=\"g\"+chr(ord('x')+i)+chr(ord('x')+j)+\"[CCTK_GFINDEX3D(cctkGH,i0,i1,i2)]\",\n", " rhs=btoa.gammaDD[i][j]))\n", " for i in range(3):\n", " for j in range(i,3):\n", " btoa_lhrh.append(lhrh(lhs=\"k\"+chr(ord('x')+i)+chr(ord('x')+j)+\"[CCTK_GFINDEX3D(cctkGH,i0,i1,i2)]\",\n", " rhs=btoa.KDD[i][j]))\n", " btoa_lhrh.append(lhrh(lhs=\"alp[CCTK_GFINDEX3D(cctkGH,i0,i1,i2)]\",rhs=Bq.alpha))\n", "\n", " for i in range(3):\n", " btoa_lhrh.append(lhrh(lhs=\"beta\"+chr(ord('x')+i)+\"[CCTK_GFINDEX3D(cctkGH,i0,i1,i2)]\",\n", " rhs=Bq.betaU[i]))\n", "\n", " for i in range(3):\n", " btoa_lhrh.append(lhrh(lhs=\"dtbeta\"+chr(ord('x')+i)+\"[CCTK_GFINDEX3D(cctkGH,i0,i1,i2)]\",\n", " rhs=Bq.BU[i]))\n", "\n", " outCparams = \"preindent=1,outCfileaccess=a,outCverbose=False,includebraces=False\"\n", " bssn_to_adm_Ccode = fin.FD_outputC(\"returnstring\",btoa_lhrh, outCparams)\n", " outstr += lp.loop([\"i2\",\"i1\",\"i0\"],[\"0\",\"0\",\"0\"],[\"cctk_lsh[2]\",\"cctk_lsh[1]\",\"cctk_lsh[0]\"],\n", " [\"1\",\"1\",\"1\"],[\"#pragma omp parallel for\",\"\",\"\"],\"\",bssn_to_adm_Ccode)\n", "\n", " outstr += \"}\\n\"\n", "\n", " # Add C code string to dictionary (Python dictionaries are immutable)\n", " Csrcdict[append_to_make_code_defn_list(\"BSSN_to_ADM.c\")] = outstr.replace(\"BaikalETK\",ThornName)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='bssnrhss'></a>\n", "\n", "## Step 4.c: Evaluate BSSN right-hand-sides (RHSs) \\[Back to [top](#toc)\\]\n", "$$\\label{bssnrhss}$$\n", "\n", "<a id='ricci'></a>\n", "\n", "### Step 4.c.i: Evaluate Ricci tensor \\[Back to [top](#toc)\\]\n", "$$\\label{ricci}$$\n", "\n", "To slightly optimize the performance of `BaikalETK`'s BSSN solver, we split the computation of the [complicated expressions for the Ricci tensor $\\\\bar{R}_{ij}$](Tutorial-BSSN_quantities.ipynb#rbar) into its own function, and then use the result when evaluating the BSSN right-hand-side (RHS) expressions." ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_C_drivers_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_C_drivers_codegen.py\n", "\n", " ###########################\n", " ###########################\n", " # BSSN_RHSs and Ricci driver functions\n", " ###########################\n", " common_includes = \"\"\"\n", "#include <math.h>\n", "#include \"cctk.h\"\n", "#include \"cctk_Arguments.h\"\n", "#include \"cctk_Parameters.h\"\n", "#include \"SIMD/SIMD_intrinsics.h\"\n", "#include \"finite_difference_functions.h\"\n", "\"\"\"\n", " common_preloop = \"\"\"\n", " DECLARE_CCTK_ARGUMENTS;\n", " const CCTK_REAL NOSIMDinvdx0 = 1.0/CCTK_DELTA_SPACE(0);\n", " const REAL_SIMD_ARRAY invdx0 = ConstSIMD(NOSIMDinvdx0);\n", " const CCTK_REAL NOSIMDinvdx1 = 1.0/CCTK_DELTA_SPACE(1);\n", " const REAL_SIMD_ARRAY invdx1 = ConstSIMD(NOSIMDinvdx1);\n", " const CCTK_REAL NOSIMDinvdx2 = 1.0/CCTK_DELTA_SPACE(2);\n", " const REAL_SIMD_ARRAY invdx2 = ConstSIMD(NOSIMDinvdx2);\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_C_drivers_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_C_drivers_codegen.py\n", "\n", " # Output the driver code for computing the Ricci tensor:\n", " outstr = common_includes\n", " for order in BSSN_FD_orders_output:\n", " outstr += \"\"\"extern void \"\"\"+ThornName+\"_\"+\"BSSN_Ricci_FD_order_\"+str(order)+\"(CCTK_ARGUMENTS);\\n\"\n", " outstr += \"\"\"\n", "void BaikalETK_driver_pt1_BSSN_Ricci(CCTK_ARGUMENTS) {\n", " DECLARE_CCTK_ARGUMENTS;\n", " const CCTK_INT *FD_order = CCTK_ParameterGet(\"FD_order\",\"BaikalETK\",NULL);\n", "\"\"\"\n", " for order in BSSN_FD_orders_output:\n", " outstr += \" if(*FD_order == \"+str(order)+\") {\\n\"\n", " outstr += \" \"+ThornName+\"_\"+\"BSSN_Ricci_FD_order_\"+str(order)+\"(CCTK_PASS_CTOC);\\n\"\n", " outstr += \" }\\n\"\n", " outstr += \"} // END FUNCTION\\n\"\n", " # Add C code string to dictionary (Python dictionaries are immutable)\n", " Csrcdict[append_to_make_code_defn_list(\"driver_pt1_BSSN_Ricci.c\")] = outstr.replace(\"BaikalETK\",ThornName)\n", "\n", " # Create functions for the largest C kernels (BSSN RHSs and Ricci) and output\n", " # the .h files to .c files with function wrappers; delete original .h files\n", " for filename in BaikalETK_src_filelist:\n", " if (\"BSSN_Ricci_FD_order_\") in filename and (\".h\" in filename):\n", " outstr = common_includes + \"void BaikalETK_\"+filename.replace(\".h\",\"\")+\"(CCTK_ARGUMENTS) {\\n\"\n", " outstr += common_preloop\n", " with open(os.path.join(path,filename), \"r\") as currfile:\n", " outstr += currfile.read()\n", " # Now that we've inserted the contents of the kernel into this file,\n", " # we delete the file containing the kernel\n", " os.remove(os.path.join(path,filename))\n", " outstr += \"} // END FUNCTION\\n\"\n", " # Add C code string to dictionary (Python dictionaries are immutable)\n", " Csrcdict[append_to_make_code_defn_list(filename.replace(\".h\",\".c\"))] = outstr.replace(\"BaikalETK\",ThornName)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='bssnrhssricciinput'></a>\n", "\n", "### Step 4.c.ii: Evaluate BSSN RHSs, using Ricci tensor as input \\[Back to [top](#toc)\\]\n", "$$\\label{bssnrhssricciinput}$$\n", "\n", "Next we construct the driver function for evaluating the BSSN RHSs, which make use of the Ricci tensor $\\bar{R}_{ij}$, which has just been computed." ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_C_drivers_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_C_drivers_codegen.py\n", "\n", " ###########################\n", " # Output BSSN RHSs driver function\n", " outstr = common_includes\n", " for filename in BaikalETK_src_filelist:\n", " if (\"BSSN_RHSs_\" in filename) and (\".h\" in filename):\n", " outstr += \"\"\"extern void \"\"\" + ThornName+\"_\"+filename.replace(\".h\", \"(CCTK_ARGUMENTS);\") + \"\\n\"\n", "\n", " outstr += \"\"\"\n", "void BaikalETK_driver_pt2_BSSN_RHSs(CCTK_ARGUMENTS) {\n", " DECLARE_CCTK_ARGUMENTS;\n", "\n", " const CCTK_INT *FD_order = CCTK_ParameterGet(\"FD_order\",\"BaikalETK\",NULL);\n", "\n", "\"\"\"\n", " for filename in BaikalETK_src_filelist:\n", " if (\"BSSN_RHSs_\" in filename) and (\".h\" in filename):\n", " array = filename.replace(\".\", \"_\").split(\"_\")\n", " outstr += \" if(*FD_order == \" + str(array[-2]) + \") {\\n\"\n", " outstr += \" \" + ThornName+\"_\"+filename.replace(\".h\", \"(CCTK_PASS_CTOC);\") + \"\\n\"\n", " outstr += \" }\\n\"\n", " outstr += \"} // END FUNCTION\\n\"\n", " # Add C code string to dictionary (Python dictionaries are immutable)\n", " Csrcdict[append_to_make_code_defn_list(\"driver_pt2_BSSN_RHSs.c\")] = outstr.replace(\"BaikalETK\", ThornName)\n", "\n", " def SIMD_declare_C_params():\n", " SIMD_declare_C_params_str = \"\"\n", " for i in range(len(par.glb_Cparams_list)):\n", " # keep_param is a boolean indicating whether we should accept or reject\n", " # the parameter. singleparstring will contain the string indicating\n", " # the variable type.\n", " keep_param, singleparstring = ccl.keep_param__return_type(par.glb_Cparams_list[i])\n", "\n", " if (keep_param) and (\"CCTK_REAL\" in singleparstring):\n", " parname = par.glb_Cparams_list[i].parname\n", " SIMD_declare_C_params_str += \" const \"+singleparstring + \"*NOSIMD\"+parname+\\\n", " \" = CCTK_ParameterGet(\\\"\"+parname+\"\\\",\\\"BaikalETK\\\",NULL);\\n\"\n", " SIMD_declare_C_params_str += \" const REAL_SIMD_ARRAY \"+parname+\" = ConstSIMD(*NOSIMD\"+parname+\");\\n\"\n", " return SIMD_declare_C_params_str\n", "\n", " # Create functions for the largest C kernels (BSSN RHSs and Ricci) and output\n", " # the .h files to .c files with function wrappers; delete original .h files\n", " path = os.path.join(ThornName, \"src\")\n", " for filename in BaikalETK_src_filelist:\n", " if (\"BSSN_RHSs_\" in filename) and (\".h\" in filename):\n", " outstr = common_includes + \"void BaikalETK_\"+filename.replace(\".h\",\"\")+\"(CCTK_ARGUMENTS) {\\n\"\n", " outstr += common_preloop+SIMD_declare_C_params()\n", " with open(os.path.join(path,filename), \"r\") as currfile:\n", " outstr += currfile.read()\n", " # Now that we've inserted the contents of the kernel into this file,\n", " # we delete the file containing the kernel\n", " os.remove(os.path.join(path,filename))\n", " outstr += \"} // END FUNCTION\\n\"\n", " # Add C code string to dictionary (Python dictionaries are immutable)\n", " Csrcdict[append_to_make_code_defn_list(filename.replace(\".h\",\".c\"))] = outstr.replace(\"BaikalETK\",ThornName)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='enforcegammahatconstraint'></a>\n", "\n", "## Step 4.d: Enforcing conformal 3-metric $\\det{\\bar{\\gamma}_{ij}}=\\det{\\hat{\\gamma}_{ij}}$ constraint (in Cartesian coordinates, $\\det{\\hat{\\gamma}_{ij}}=1$) \\[Back to [top](#toc)\\]\n", "$$\\label{enforcegammahatconstraint}$$\n", "\n", "Here we construct the driver function for enforcing the conformal 3-metric $\\det{\\bar{\\gamma}_{ij}}=\\det{\\hat{\\gamma}_{ij}}$ constraint. The BSSN equations are not strongly hyperbolic if this condition is not set." ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_C_drivers_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_C_drivers_codegen.py\n", "\n", " # Next, the driver for enforcing detgammabar = detgammahat constraint:\n", " outstr = common_includes + \"\"\"\n", "void BaikalETK_enforce_detgammahat_constraint(CCTK_ARGUMENTS) {\n", " DECLARE_CCTK_ARGUMENTS;\n", " DECLARE_CCTK_PARAMETERS;\n", "\n", " #include \"enforcedetgammabar_constraint.h\"\n", "}\n", "\"\"\"\n", " # Add C code string to dictionary (Python dictionaries are immutable)\n", " Csrcdict[append_to_make_code_defn_list(\"driver_enforcedetgammabar_constraint.c\")] = \\\n", " outstr.replace(\"BaikalETK\", ThornName)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='diagnostics'></a>\n", "\n", "## Step 4.e: Diagnostics: Computing Hamiltonian & momentum constraints \\[Back to [top](#toc)\\]\n", "$$\\label{diagnostics}$$\n", "\n", "The BSSN Hamiltonian & momentum constraints are useful diagnostics of a numerical-relativity calculation's health, as both should converge to zero with increasing numerical resolution. Here we construct the driver function." ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_C_drivers_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_C_drivers_codegen.py\n", "\n", " # Next, the driver for computing the BSSN Hamiltonian & momentum constraints\n", " outstr = \"\"\"\n", "#include <math.h>\n", "\n", "#include \"cctk.h\"\n", "#include \"cctk_Arguments.h\"\n", "#include \"cctk_Parameters.h\"\n", "#include \"SIMD/SIMD_intrinsics.h\" // Contains needed definition of REAL_SIMD_ARRAY\n", "#include \"finite_difference_functions.h\"\n", "\n", "void BaikalETK_BSSN_constraints(CCTK_ARGUMENTS) {\n", " DECLARE_CCTK_ARGUMENTS;\n", " DECLARE_CCTK_PARAMETERS;\n", "\n", " const CCTK_REAL invdx0 = 1.0/CCTK_DELTA_SPACE(0);\n", " const CCTK_REAL invdx1 = 1.0/CCTK_DELTA_SPACE(1);\n", " const CCTK_REAL invdx2 = 1.0/CCTK_DELTA_SPACE(2);\n", "\"\"\"\n", " for filename in BaikalETK_src_filelist:\n", " if \"BSSN_constraints_\" in filename:\n", " array = filename.replace(\".\",\"_\").split(\"_\")\n", " outstr += \" if(FD_order == \"+str(array[-2])+\") {\\n\"\n", " outstr += \" #include \\\"\"+filename+\"\\\"\\n\"\n", " outstr += \" }\\n\"\n", " outstr += \"}\\n\"\n", "\n", " # Add C code string to dictionary (Python dictionaries are immutable)\n", " Csrcdict[append_to_make_code_defn_list(\"driver_BSSN_constraints.c\")] = outstr.replace(\"BaikalETK\",ThornName)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='t4uu'></a>\n", "\n", "## Step 4.f: `driver_BSSN_T4UU()`: Compute $T^{\\mu\\nu}$ from `TmunuBase`'s $T_{\\mu\\nu}$ \\[Back to [top](#toc)\\]\n", "$$\\label{t4uu}$$\n", "\n", "Here we implement $T^{\\mu\\nu} = g^{\\mu \\delta} g^{\\nu \\gamma} T_{\\delta \\gamma}.$" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Appending to BaikalETK_validate/BaikalETK_C_drivers_codegen.py\n" ] } ], "source": [ "%%writefile -a $BaikalETKdir/BaikalETK_C_drivers_codegen.py\n", "\n", " if enable_stress_energy_source_terms == True:\n", " # Declare T4DD as a set of gridfunctions. These won't\n", " # actually appear in interface.ccl, as interface.ccl\n", " # was set above. Thus before calling the code output\n", " # by FD_outputC(), we'll have to set pointers\n", " # to the actual gridfunctions they reference.\n", " # (In this case the eTab's.)\n", " T4DD = ixp.register_gridfunctions_for_single_rank2(\"AUXEVOL\",\"T4DD\",\"sym01\",DIM=4)\n", " import BSSN.ADMBSSN_tofrom_4metric as AB4m\n", " AB4m.g4UU_ito_BSSN_or_ADM(\"BSSN\")\n", "\n", " T4UUraised = ixp.zerorank2(DIM=4)\n", " for mu in range(4):\n", " for nu in range(4):\n", " for delta in range(4):\n", " for gamma in range(4):\n", " T4UUraised[mu][nu] += AB4m.g4UU[mu][delta]*AB4m.g4UU[nu][gamma]*T4DD[delta][gamma]\n", "\n", " T4UU_expressions = [\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\",\"T4UU00\"),rhs=T4UUraised[0][0]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\",\"T4UU01\"),rhs=T4UUraised[0][1]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\",\"T4UU02\"),rhs=T4UUraised[0][2]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\",\"T4UU03\"),rhs=T4UUraised[0][3]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\",\"T4UU11\"),rhs=T4UUraised[1][1]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\",\"T4UU12\"),rhs=T4UUraised[1][2]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\",\"T4UU13\"),rhs=T4UUraised[1][3]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\",\"T4UU22\"),rhs=T4UUraised[2][2]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\",\"T4UU23\"),rhs=T4UUraised[2][3]),\n", " lhrh(lhs=gri.gfaccess(\"in_gfs\",\"T4UU33\"),rhs=T4UUraised[3][3])]\n", "\n", " outCparams = \"outCverbose=False,includebraces=False,preindent=2,enable_SIMD=True\"\n", " T4UUstr = fin.FD_outputC(\"returnstring\",T4UU_expressions, outCparams)\n", " T4UUstr_loop = lp.loop([\"i2\",\"i1\",\"i0\"],[\"0\",\"0\",\"0\"],[\"cctk_lsh[2]\",\"cctk_lsh[1]\",\"cctk_lsh[0]\"],\n", " [\"1\",\"1\",\"SIMD_width\"],[\"#pragma omp parallel for\",\"\",\"\"],\"\",T4UUstr)\n", "\n", " outstr = \"\"\"\n", "#include <math.h>\n", "\n", "#include \"cctk.h\"\n", "#include \"cctk_Arguments.h\"\n", "#include \"cctk_Parameters.h\"\n", "\n", "#include \"SIMD/SIMD_intrinsics.h\"\n", "\n", "void BaikalETK_driver_BSSN_T4UU(CCTK_ARGUMENTS) {\n", " DECLARE_CCTK_ARGUMENTS;\n", " DECLARE_CCTK_PARAMETERS;\n", "\n", " const CCTK_REAL *restrict T4DD00GF = eTtt;\n", " const CCTK_REAL *restrict T4DD01GF = eTtx;\n", " const CCTK_REAL *restrict T4DD02GF = eTty;\n", " const CCTK_REAL *restrict T4DD03GF = eTtz;\n", " const CCTK_REAL *restrict T4DD11GF = eTxx;\n", " const CCTK_REAL *restrict T4DD12GF = eTxy;\n", " const CCTK_REAL *restrict T4DD13GF = eTxz;\n", " const CCTK_REAL *restrict T4DD22GF = eTyy;\n", " const CCTK_REAL *restrict T4DD23GF = eTyz;\n", " const CCTK_REAL *restrict T4DD33GF = eTzz;\n", "\"\"\"+T4UUstr_loop+\"\"\"\n", "}\\n\"\"\"\n", "\n", " # Add C code string to dictionary (Python dictionaries are immutable)\n", " Csrcdict[append_to_make_code_defn_list(\"driver_BSSN_T4UU.c\")] = outstr.replace(\"BaikalETK\",ThornName)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='outcdrivers'></a>\n", "\n", "## Step 4.g: Output all C driver functions needed for `Baikal`/`BaikalVacuum` \\[Back to [top](#toc)\\]\n", "$$\\label{outcdrivers}$$\n", "\n", "First we call the above functions (output above to the `BaikalETK_validate.BaikalETK_C_drivers_codegen` Python module) to store all needed driver C files to a Python dictionary, then we simply outputs the dictionary to the appropriate files." ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [], "source": [ "import BaikalETK_validate.BaikalETK_C_drivers_codegen as driver\n", "\n", "# The following Python dictionaries consist of a key, which is the filename\n", "# in the thorn's src/ directory (e.g., \"driver_BSSN_constraints.c\"),\n", "# and a value, which is the corresponding source code, stored as a\n", "# Python string.\n", "Vac_Csrcdict = {}\n", "Reg_Csrcdict = {}\n", "\n", "# We'll need lists of gridfunctions for these driver functions\n", "evol_gfs_list = cclgen.evol_gfs_list\n", "aux_gfs_list = cclgen.aux_gfs_list\n", "auxevol_gfs_list = cclgen.auxevol_gfs_list\n", "\n", "# Generate driver codes for Baikal thorn (i.e., populate the Reg_Csrcdict dictionary)\n", "driver.driver_C_codes(Reg_Csrcdict, \"Baikal\",\n", " cclgen.rhs_list,cclgen.evol_gfs_list,cclgen.aux_gfs_list,cclgen.auxevol_gfs_list,\n", " LapseCondition = LapseCondition, enable_stress_energy_source_terms=True)\n", "\n", "# Generate driver codes for BaikalVacuum thorn (i.e., populate the Vac_Csrcdict dictionary)\n", "driver.driver_C_codes(Vac_Csrcdict, \"BaikalVacuum\",\n", " cclgen.rhs_list,cclgen.evol_gfs_list,cclgen.aux_gfs_list,cclgen.auxevol_gfs_list,\n", " LapseCondition = LapseCondition, enable_stress_energy_source_terms=False)\n", "\n", "# Next we output the contents of the Reg_Csrcdict and\n", "# Vac_Csrcdict dictionaries to files in the respective\n", "# thorns' directories.\n", "for key,val in Reg_Csrcdict.items():\n", " with open(os.path.join(\"Baikal\",\"src\",key),\"w\") as file:\n", " file.write(val)\n", "for key,val in Vac_Csrcdict.items():\n", " with open(os.path.join(\"BaikalVacuum\",\"src\",key),\"w\") as file:\n", " file.write(val)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='makecodedefn'></a>\n", "\n", "## Step 4.h: `make.code.defn`: List of all C driver functions needed to compile `BaikalETK` \\[Back to [top](#toc)\\]\n", "$$\\label{makecodedefn}$$\n", "\n", "When constructing each C code driver function above, we called the `append_to_make_code_defn_list()` function, which built a list of each C code driver file. We'll now add each of those files to the `make.code.defn` file, used by the Einstein Toolkit's build system." ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Finished generating Baikal and BaikalVacuum Einstein Toolkit thorns!\n" ] } ], "source": [ "# Finally output the thorns' make.code.defn files, consisting of\n", "# a list of all C codes in the above dictionaries. This is\n", "# part of the ETK build system so that these files are output.\n", "\n", "def output_make_code_defn(dictionary, ThornName):\n", " with open(os.path.join(ThornName, \"src\", \"make.code.defn\"), \"w\") as file:\n", " file.write(\"\"\"\n", "# Main make.code.defn file for thorn \"\"\"+ThornName+\"\"\"\n", "\n", "# Source files in this directory\n", "SRCS =\"\"\")\n", " filestring = \"\"\n", "\n", " list_of_C_driver_files = list(dictionary.keys())\n", " for i in range(len(list_of_C_driver_files)):\n", " filestring += \" \"+list_of_C_driver_files[i]\n", " if i != len(list_of_C_driver_files)-1:\n", " filestring += \" \\\\\\n\"\n", " else:\n", " filestring += \"\\n\"\n", " file.write(filestring)\n", "\n", "output_make_code_defn(Reg_Csrcdict, \"Baikal\")\n", "output_make_code_defn(Vac_Csrcdict, \"BaikalVacuum\")\n", "\n", "print(\"Finished generating Baikal and BaikalVacuum Einstein Toolkit thorns!\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='code_validation'></a>\n", "\n", "# Step 5: Code validation \\[Back to [top](#toc)\\]\n", "$$\\label{code_validation}$$\n", "\n", "Here we will show plots demonstrating good agreement between `BaikalETK` and, e.g., `McLachlan` (another, trusted ETK thorn)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='self_validation'></a>\n", "\n", "## Step 5.a: Validation against [BaikalETK/](BaikalETK/) Python modules \\[Back to [top](#toc)\\]\n", "$$\\label{self_validation}$$\n", "\n", "As a self-validation check, we verify that the ETK thorns just generated here perfectly match those generated by the `BaikalETK.BaikalETK_main_codegen_driver` Python module." ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "***************************************\n", "Starting parallel C kernel codegen...\n", "***************************************\n", "Generating symbolic expressions for BSSN RHSs...\n", "Generating symbolic expressions for BSSN RHSs...\n", "Generating symbolic expressions for BSSN RHSs...\n", "Generating symbolic expressions for BSSN RHSs...\n", "Generating symbolic expressions for Ricci tensor...\n", "Generating symbolic expressions for Ricci tensor...\n", "Generating symbolic expressions for BSSN RHSs...\n", "Generating symbolic expressions for Ricci tensor...\n", "Generating symbolic expressions for Ricci tensor...\n", "Generating symbolic expressions for Ricci tensor...\n", "Generating optimized C code (FD_order=4) for gamma constraint. May take a while, depending on CoordSystem.\n", "Generating optimized C code (FD_order=4) for gamma constraint. May take a while, depending on CoordSystem.\n", "(BENCH) Finished gamma constraint C codegen (FD_order=4) in 0.07429814338684082 seconds.\n", "(BENCH) Finished gamma constraint C codegen (FD_order=4) in 0.07741808891296387 seconds.\n", "(BENCH) Finished BSSN RHS symbolic expressions in 0.7707383632659912 seconds.\n", "Generating C code for BSSN RHSs (FD_order=6,Tmunu=False) in Cartesian coordinates.\n", "(BENCH) Finished BSSN RHS symbolic expressions in 0.7784299850463867 seconds.\n", "Generating C code for BSSN RHSs (FD_order=4,Tmunu=False) in Cartesian coordinates.\n", "(BENCH) Finished BSSN RHS symbolic expressions in 0.7759640216827393 seconds.\n", "Generating C code for BSSN RHSs (FD_order=8,Tmunu=False) in Cartesian coordinates.\n", "Generating optimized C code for Ham. & mom. constraints. May take a while, depending on CoordSystem.\n", "Generating optimized C code for Ham. & mom. constraints. May take a while, depending on CoordSystem.\n", "Generating optimized C code for Ham. & mom. constraints. May take a while, depending on CoordSystem.\n", "(BENCH) Finished Ricci symbolic expressions in 1.3086082935333252 seconds.\n", "Generating C code for Ricci tensor (FD_order=6) in Cartesian coordinates.\n", "(BENCH) Finished Ricci symbolic expressions in 1.2989954948425293 seconds.\n", "Generating C code for Ricci tensor (FD_order=4) in Cartesian coordinates.\n", "(BENCH) Finished Ricci symbolic expressions in 1.3120527267456055 seconds.\n", "Generating C code for Ricci tensor (FD_order=8) in Cartesian coordinates.\n", "(BENCH) Finished Ricci symbolic expressions in 1.3443379402160645 seconds.\n", "Generating C code for Ricci tensor (FD_order=4) in Cartesian coordinates.\n", "(BENCH) Finished BSSN RHS symbolic expressions in 1.3933684825897217 seconds.\n", "Generating C code for BSSN RHSs (FD_order=4,Tmunu=True) in Cartesian coordinates.\n", "(BENCH) Finished BSSN RHS symbolic expressions in 1.3973097801208496 seconds.\n", "Generating C code for BSSN RHSs (FD_order=2,Tmunu=True) in Cartesian coordinates.\n", "(BENCH) Finished Ricci symbolic expressions in 1.5551927089691162 seconds.\n", "Generating C code for Ricci tensor (FD_order=2) in Cartesian coordinates.\n", "Generating optimized C code for Ham. & mom. constraints. May take a while, depending on CoordSystem.\n", "Generating optimized C code for Ham. & mom. constraints. May take a while, depending on CoordSystem.\n", "(BENCH) Finished BSSN_RHS C codegen (FD_order=4,Tmunu=False) in 9.570033073425293 seconds.\n", "(BENCH) Finished BSSN_RHS C codegen (FD_order=6,Tmunu=False) in 10.808182716369629 seconds.\n", "(BENCH) Finished BSSN_RHS C codegen (FD_order=2,Tmunu=True) in 11.086544752120972 seconds.\n", "(BENCH) Finished BSSN_RHS C codegen (FD_order=4,Tmunu=True) in 11.621188163757324 seconds.\n", "(BENCH) Finished BSSN_RHS C codegen (FD_order=8,Tmunu=False) in 13.593563318252563 seconds.\n", "(BENCH) Finished Ricci C codegen (FD_order=4) in 16.840373754501343 seconds.\n", "(BENCH) Finished Ricci C codegen (FD_order=4) in 16.979881763458252 seconds.\n", "(BENCH) Finished Ricci C codegen (FD_order=6) in 17.595633506774902 seconds.\n", "(BENCH) Finished Ricci C codegen (FD_order=2) in 18.98304510116577 seconds.\n", "(BENCH) Finished Ricci C codegen (FD_order=8) in 19.302734851837158 seconds.\n", "(BENCH) Finished Hamiltonian & momentum constraint C codegen (FD_order=6,Tmunu=False) in 20.034168004989624 seconds.\n", "(BENCH) Finished Hamiltonian & momentum constraint C codegen (FD_order=2,Tmunu=True) in 19.196338176727295 seconds.\n", "(BENCH) Finished Hamiltonian & momentum constraint C codegen (FD_order=4,Tmunu=True) in 18.98688793182373 seconds.\n", "(BENCH) Finished Hamiltonian & momentum constraint C codegen (FD_order=4,Tmunu=False) in 21.114547729492188 seconds.\n", "(BENCH) Finished Hamiltonian & momentum constraint C codegen (FD_order=8,Tmunu=False) in 21.257333755493164 seconds.\n", "Finished C kernel codegen for Baikal and BaikalVacuum in 22.234663009643555 seconds.\n", "Finished generating Baikal and BaikalVacuum Einstein Toolkit thorns!\n" ] } ], "source": [ "# First move the Baikal and Baikal_vacuum thorns just generated by this notebook to\n", "# the validate/ subdirectory:\n", "shutil.rmtree(\"validate\", ignore_errors=True)\n", "cmd.mkdir(\"validate\")\n", "shutil.move(\"Baikal\",\"validate\")\n", "shutil.move(\"BaikalVacuum\",\"validate\")\n", "\n", "# Next generate Baikal and BaikalVacuum directly from the Python module.\n", "# These will output to the Baikal and BaikalVacuum directories.\n", "!python3 BaikalETK/BaikalETK_main_codegen_driver.py" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "PASSED: validate/BaikalVacuum/interface.ccl matches trusted version\n", "PASSED: validate/BaikalVacuum/param.ccl matches trusted version\n", "PASSED: validate/BaikalVacuum/schedule.ccl matches trusted version\n", "PASSED: validate/BaikalVacuum/src/ADM_to_BSSN.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/ADM_to_BSSN__compute_lambdaU_FD_order_4.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/ADM_to_BSSN__compute_lambdaU_FD_order_6.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/ADM_to_BSSN__compute_lambdaU_FD_order_8.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/BSSN_RHSs_enable_Tmunu_False_FD_order_4.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/BSSN_RHSs_enable_Tmunu_False_FD_order_6.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/BSSN_RHSs_enable_Tmunu_False_FD_order_8.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/BSSN_Ricci_FD_order_4.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/BSSN_Ricci_FD_order_6.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/BSSN_Ricci_FD_order_8.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/BSSN_constraints_enable_Tmunu_False_FD_order_4.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/BSSN_constraints_enable_Tmunu_False_FD_order_6.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/BSSN_constraints_enable_Tmunu_False_FD_order_8.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/BSSN_to_ADM.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/Banner.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/BoundaryCondition_NewRad.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/BoundaryConditions.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/MoL_registration.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/RegisterSlicing.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/Symmetry_registration_oldCartGrid3D.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/driver_BSSN_constraints.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/driver_enforcedetgammabar_constraint.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/driver_pt1_BSSN_Ricci.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/driver_pt2_BSSN_RHSs.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/enforcedetgammabar_constraint.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/finite_difference_functions.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/floor_the_lapse.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/make.code.defn matches trusted version\n", "PASSED: validate/BaikalVacuum/src/zero_rhss.c matches trusted version\n", "PASSED: validate/BaikalVacuum/src/rfm_files/rfm_struct__SIMD_inner_read0.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/rfm_files/rfm_struct__SIMD_inner_read1.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/rfm_files/rfm_struct__SIMD_inner_read2.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/rfm_files/rfm_struct__SIMD_outer_read0.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/rfm_files/rfm_struct__SIMD_outer_read1.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/rfm_files/rfm_struct__SIMD_outer_read2.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/rfm_files/rfm_struct__declare.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/rfm_files/rfm_struct__define.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/rfm_files/rfm_struct__freemem.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/rfm_files/rfm_struct__malloc.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/rfm_files/rfm_struct__read0.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/rfm_files/rfm_struct__read1.h matches trusted version\n", "PASSED: validate/BaikalVacuum/src/rfm_files/rfm_struct__read2.h matches trusted version\n", "PASSED: validate/Baikal/interface.ccl matches trusted version\n", "PASSED: validate/Baikal/param.ccl matches trusted version\n", "PASSED: validate/Baikal/schedule.ccl matches trusted version\n", "PASSED: validate/Baikal/src/ADM_to_BSSN.c matches trusted version\n", "PASSED: validate/Baikal/src/ADM_to_BSSN__compute_lambdaU_FD_order_2.h matches trusted version\n", "PASSED: validate/Baikal/src/ADM_to_BSSN__compute_lambdaU_FD_order_4.h matches trusted version\n", "PASSED: validate/Baikal/src/BSSN_RHSs_enable_Tmunu_True_FD_order_2.c matches trusted version\n", "PASSED: validate/Baikal/src/BSSN_RHSs_enable_Tmunu_True_FD_order_4.c matches trusted version\n", "PASSED: validate/Baikal/src/BSSN_Ricci_FD_order_2.c matches trusted version\n", "PASSED: validate/Baikal/src/BSSN_Ricci_FD_order_4.c matches trusted version\n", "PASSED: validate/Baikal/src/BSSN_constraints_enable_Tmunu_True_FD_order_2.h matches trusted version\n", "PASSED: validate/Baikal/src/BSSN_constraints_enable_Tmunu_True_FD_order_4.h matches trusted version\n", "PASSED: validate/Baikal/src/BSSN_to_ADM.c matches trusted version\n", "PASSED: validate/Baikal/src/Banner.c matches trusted version\n", "PASSED: validate/Baikal/src/BoundaryCondition_NewRad.c matches trusted version\n", "PASSED: validate/Baikal/src/BoundaryConditions.c matches trusted version\n", "PASSED: validate/Baikal/src/MoL_registration.c matches trusted version\n", "PASSED: validate/Baikal/src/RegisterSlicing.c matches trusted version\n", "PASSED: validate/Baikal/src/Symmetry_registration_oldCartGrid3D.c matches trusted version\n", "PASSED: validate/Baikal/src/driver_BSSN_T4UU.c matches trusted version\n", "PASSED: validate/Baikal/src/driver_BSSN_constraints.c matches trusted version\n", "PASSED: validate/Baikal/src/driver_enforcedetgammabar_constraint.c matches trusted version\n", "PASSED: validate/Baikal/src/driver_pt1_BSSN_Ricci.c matches trusted version\n", "PASSED: validate/Baikal/src/driver_pt2_BSSN_RHSs.c matches trusted version\n", "PASSED: validate/Baikal/src/enforcedetgammabar_constraint.h matches trusted version\n", "PASSED: validate/Baikal/src/finite_difference_functions.h matches trusted version\n", "PASSED: validate/Baikal/src/floor_the_lapse.c matches trusted version\n", "PASSED: validate/Baikal/src/make.code.defn matches trusted version\n", "PASSED: validate/Baikal/src/zero_rhss.c matches trusted version\n", "PASSED: validate/Baikal/src/rfm_files/rfm_struct__SIMD_inner_read0.h matches trusted version\n", "PASSED: validate/Baikal/src/rfm_files/rfm_struct__SIMD_inner_read1.h matches trusted version\n", "PASSED: validate/Baikal/src/rfm_files/rfm_struct__SIMD_inner_read2.h matches trusted version\n", "PASSED: validate/Baikal/src/rfm_files/rfm_struct__SIMD_outer_read0.h matches trusted version\n", "PASSED: validate/Baikal/src/rfm_files/rfm_struct__SIMD_outer_read1.h matches trusted version\n", "PASSED: validate/Baikal/src/rfm_files/rfm_struct__SIMD_outer_read2.h matches trusted version\n", "PASSED: validate/Baikal/src/rfm_files/rfm_struct__declare.h matches trusted version\n", "PASSED: validate/Baikal/src/rfm_files/rfm_struct__define.h matches trusted version\n", "PASSED: validate/Baikal/src/rfm_files/rfm_struct__freemem.h matches trusted version\n", "PASSED: validate/Baikal/src/rfm_files/rfm_struct__malloc.h matches trusted version\n", "PASSED: validate/Baikal/src/rfm_files/rfm_struct__read0.h matches trusted version\n", "PASSED: validate/Baikal/src/rfm_files/rfm_struct__read1.h matches trusted version\n", "PASSED: validate/Baikal/src/rfm_files/rfm_struct__read2.h matches trusted version\n" ] } ], "source": [ "# Then compare all files generated by this notebook\n", "# (output moved in previous code cell to validate/)\n", "# and the separate Python module (output to Baikal\n", "# and BaikalVacuum).\n", "import difflib\n", "\n", "def compare_two_files(filepath1,filepath2):\n", " with open(filepath1) as file1, open(filepath2) as file2:\n", " # Read the lines of each file\n", " file1_lines = file1.readlines()\n", " file2_lines = file2.readlines()\n", " num_diffs = 0\n", " file1_lines_noleadingwhitespace = []\n", " for line in file1_lines:\n", " if line.strip() == \"\": # If the line contains only whitespace, remove all leading whitespace\n", " file1_lines_noleadingwhitespace.append(line.lstrip())\n", " else:\n", " file1_lines_noleadingwhitespace.append(line)\n", " file2_lines_noleadingwhitespace = []\n", " for line in file2_lines:\n", " if line.strip() == \"\": # If the line contains only whitespace, remove all leading whitespace\n", " file2_lines_noleadingwhitespace.append(line.lstrip())\n", " else:\n", " file2_lines_noleadingwhitespace.append(line)\n", " for line in difflib.unified_diff(file1_lines_noleadingwhitespace, file2_lines_noleadingwhitespace,\n", " fromfile=filepath1,\n", " tofile =filepath2):\n", " sys.stdout.writelines(line)\n", " num_diffs = num_diffs + 1\n", " if num_diffs == 0:\n", " print(\"PASSED: \"+filepath1+\" matches trusted version\")\n", " else:\n", " print(\"FAILED (see diff above): \"+filepath1+\" does NOT match trusted version\")\n", "\n", "import os\n", "for dirpath, dirnames, filenames in os.walk(\"validate\"):\n", " if len(filenames) > 1:\n", " filenames.sort()\n", " for file in [os.path.join(dirpath, file) for file in filenames]:\n", " compare_two_files(file,file.replace(os.path.join(\"validate/\"),\"\"))\n", "\n", "# print(\"Ignoring lines with only whitespace:\")\n", "# for file in [\"BaikalETK_C_drivers_codegen.py\",\"BaikalETK_C_kernels_codegen.py\",\"BaikalETK_ETK_ccl_files_codegen.py\"]:\n", "# compare_two_files(BaikalETKdir,\"BaikalETK\",file)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='latex_pdf_output'></a>\n", "\n", "# Step 6: Output this notebook to $\\LaTeX$-formatted PDF file \\[Back to [top](#toc)\\]\n", "$$\\label{latex_pdf_output}$$\n", "\n", "The following code cell converts this Jupyter notebook into a proper, clickable $\\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename\n", "[Tutorial-BaikalETK.pdf](Tutorial-BaikalETK.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Created Tutorial-BaikalETK.tex, and compiled LaTeX file to PDF file\n", " Tutorial-BaikalETK.pdf\n" ] } ], "source": [ "import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface\n", "cmd.output_Jupyter_notebook_to_LaTeXed_PDF(\"Tutorial-BaikalETK\")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.0rc2" } }, "nbformat": 4, "nbformat_minor": 2 }