{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "<script async src=\"https://www.googletagmanager.com/gtag/js?id=UA-59152712-8\"></script>\n", "<script>\n", " window.dataLayer = window.dataLayer || [];\n", " function gtag(){dataLayer.push(arguments);}\n", " gtag('js', new Date());\n", "\n", " gtag('config', 'UA-59152712-8');\n", "</script>\n", "\n", "# Common Functions for `GiRaFFEfood` Initial Data for `GiRaFFE`\n", "\n", "### NRPy+ Source Code for this module: [GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Common_Functions.py](../../edit/in_progress/GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Common_Functions.py)\n", "\n", "**Notebook Status:** <font color='red'><b> In Progress </b></font>\n", "\n", "**Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module through the main initial data modules that depend on it.\n", "\n", "## Introduction: \n", "We will need to \"feed\" our giraffe with initial data to evolve. There are several different choices of initial data we can use here; while each represents different physical systems, they all have some steps in common with each other. To avoid code duplication, we will first write several functions that we will use for all of them." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='toc'></a>\n", "\n", "# Table of Contents:\n", "$$\\label{toc}$$\n", "\n", "This notebook is organized as follows\n", "\n", "1. [Step 1](#initializenrpy): Import core NRPy+ modules and set NRPy+ parameters\n", "1. [Step 2](#vectorpotential): Set the vector potential from input functions \n", "1. [Step 3](#velocity): Compute $v^i_{(n)}$ from $E^i$ and $B^i$\n", "1. 
[Step 4](#setall): Generate specified initial data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='initializenrpy'></a>\n", "\n", "# Step 1: Import core NRPy+ modules and set NRPy+ parameters \\[Back to [top](#toc)\\]\n", "$$\\label{initializenrpy}$$\n", "\n", "Here, we will import the NRPy+ core modules, set the reference metric to Cartesian, and set commonly used NRPy+ parameters. We will also set up a parameter to determine what initial data is set up, although it won't do much yet." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Step 0: Import the NRPy+ core modules and set the reference metric to Cartesian\n", "import NRPy_param_funcs as par\n", "import grid as gri # NRPy+: Functions having to do with numerical grids\n", "import indexedexp as ixp\n", "import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends\n", "import reference_metric as rfm\n", "par.set_parval_from_str(\"reference_metric::CoordSystem\",\"Cartesian\")\n", "rfm.reference_metric()\n", "\n", "# Step 1a: Set commonly used parameters.\n", "thismodule = __name__" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='vectorpotential'></a>\n", "\n", "# Step 2: Set the vector potential from input functions \\[Back to [top](#toc)\\]\n", "$$\\label{vectorpotential}$$\n", "\n", "First, we will write a function to generate the vector potential from input functions for each component. This function will also apply the correct coordinate staggering if the input is set as such. That is, in the staggered prescription, $A_x$ is sampled at $(i,j+1/2,k+1/2)$, $A_y$ at $(i+1/2,j,k+1/2)$, and $A_z$ at $(i+1/2,j+1/2,k)$.\n", "\n", "We will first do this for initial data that are given with Cartesian vector components." 
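] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before writing that function, it may help to see the staggering trick in isolation. The following minimal sketch uses generic SymPy symbols in place of NRPy+'s grid infrastructure (`x`, `y`, `z`, `dy`, and `dz` are stand-ins for the grid coordinates and spacings, and `Ax_func` is a made-up component function): shifting a coordinate symbol by half a grid spacing before evaluating a function samples that function at the staggered point.\n", "\n", "```python\n", "import sympy as sp\n", "\n", "# Stand-ins for cell-centered coordinates and grid spacings:\n", "x, y, z, dy, dz = sp.symbols(\"x y z dy dz\", real=True)\n", "\n", "def Ax_func(x, y, z):\n", "    # A made-up example x-component of the vector potential.\n", "    return y*z\n", "\n", "# A_x is sampled at (i, j+1/2, k+1/2): shift y and z by half a grid spacing.\n", "y_stag = y + sp.Rational(1,2)*dy\n", "z_stag = z + sp.Rational(1,2)*dz\n", "Ax_staggered = Ax_func(x, y_stag, z_stag)\n", "```\n", "\n", "Here `Ax_staggered` equals $(y+dy/2)(z+dz/2)$: the same symbolic expression, evaluated at the half-shifted point."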
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Generic function for all 1D tests: Compute Ax,Ay,Az\n", "def Axyz_func_Cartesian(Ax_func,Ay_func,Az_func, stagger_enable, **params):\n", " x = rfm.xx_to_Cart[0]\n", " y = rfm.xx_to_Cart[1]\n", " z = rfm.xx_to_Cart[2]\n", " AD = ixp.zerorank1()\n", " # First Ax\n", " if stagger_enable:\n", " y += sp.Rational(1,2)*gri.dxx[1]\n", " z += sp.Rational(1,2)*gri.dxx[2]\n", " AD[0] = Ax_func(x,y,z, **params)\n", " # Then Ay\n", " if stagger_enable:\n", " x += sp.Rational(1,2)*gri.dxx[0]\n", " y -= sp.Rational(1,2)*gri.dxx[1]\n", " z += sp.Rational(1,2)*gri.dxx[2]\n", " AD[1] = Ay_func(x,y,z, **params)\n", " # Finally Az\n", " if stagger_enable:\n", " x += sp.Rational(1,2)*gri.dxx[0]\n", " y += sp.Rational(1,2)*gri.dxx[1]\n", " z -= sp.Rational(1,2)*gri.dxx[2]\n", " AD[2] = Az_func(x,y,z, **params)\n", "\n", " return AD" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "However, some initial data are given with spherical vector components. These will need to be handled slightly differently. We will need to perform a change of basis; fortunately, sympy provides us with powerful tools to do so. We will use these to first write a generic function to transform vectors to a Cartesian basis." 
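] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a standalone illustration of this Jacobian-based transformation (using plain SymPy symbols rather than NRPy+'s reference-metric machinery), consider the covariant vector whose spherical components are $(1,0,0)$, i.e., the gradient of $r$; its Cartesian components should be $(x/r, y/r, z/r)$:\n", "\n", "```python\n", "import sympy as sp\n", "\n", "# Spherical coordinates written out explicitly in terms of Cartesian ones:\n", "x, y, z = sp.symbols(\"x y z\", positive=True)\n", "r     = sp.sqrt(x**2 + y**2 + z**2)\n", "theta = sp.acos(z/r)\n", "phi   = sp.atan2(y, x)\n", "\n", "# Jacobian matrix d(r,theta,phi)/d(x,y,z)\n", "Jac = sp.Matrix([[sp.diff(s, c) for c in (x, y, z)] for s in (r, theta, phi)])\n", "\n", "# Transform the covariant components (1,0,0) to the Cartesian basis:\n", "v_sphD  = [sp.sympify(1), sp.sympify(0), sp.sympify(0)]\n", "v_CartD = [sum(Jac[j, i]*v_sphD[j] for j in range(3)) for i in range(3)]\n", "```\n", "\n", "Simplifying each component of `v_CartD` recovers $(x/r, y/r, z/r)$, as expected for $\\partial_i r$."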
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Generic function to convert contravariant vectors from a spherical to Cartesian basis.\n", "def change_basis_spherical_to_Cartesian_D(somevector_sphD):\n", " # Use the Jacobian matrix to transform the vectors to Cartesian coordinates.\n", " drrefmetric__dx_0UDmatrix = sp.Matrix([[sp.diff(rfm.xxSph[0],rfm.xx[0]), sp.diff(rfm.xxSph[0],rfm.xx[1]), sp.diff(rfm.xxSph[0],rfm.xx[2])],\n", " [sp.diff(rfm.xxSph[1],rfm.xx[0]), sp.diff(rfm.xxSph[1],rfm.xx[1]), sp.diff(rfm.xxSph[1],rfm.xx[2])],\n", " [sp.diff(rfm.xxSph[2],rfm.xx[0]), sp.diff(rfm.xxSph[2],rfm.xx[1]), sp.diff(rfm.xxSph[2],rfm.xx[2])]])\n", "\n", " somevectorD = ixp.zerorank1()\n", "\n", " for i in range(3):\n", " for j in range(3):\n", " somevectorD[i] += drrefmetric__dx_0UDmatrix[(j,i)]*somevector_sphD[j]\n", "\n", " return somevectorD\n", "\n", "# Generic function to convert covariant vectors from a spherical to Cartesian basis.\n", "def change_basis_spherical_to_Cartesian_U(somevector_sphU):\n", " # Use the Jacobian matrix to transform the vectors to Cartesian coordinates.\n", " drrefmetric__dx_0UDmatrix = sp.Matrix([[sp.diff(rfm.xxSph[0],rfm.xx[0]), sp.diff(rfm.xxSph[0],rfm.xx[1]), sp.diff(rfm.xxSph[0],rfm.xx[2])],\n", " [sp.diff(rfm.xxSph[1],rfm.xx[0]), sp.diff(rfm.xxSph[1],rfm.xx[1]), sp.diff(rfm.xxSph[1],rfm.xx[2])],\n", " [sp.diff(rfm.xxSph[2],rfm.xx[0]), sp.diff(rfm.xxSph[2],rfm.xx[1]), sp.diff(rfm.xxSph[2],rfm.xx[2])]])\n", " dx__drrefmetric_0UDmatrix = drrefmetric__dx_0UDmatrix.inv()\n", "\n", " somevectorU = ixp.zerorank1()\n", "\n", " for i in range(3):\n", " for j in range(3):\n", " somevectorU[i] += dx__drrefmetric_0UDmatrix[(j,i)]*somevector_sphU[j]\n", "\n", " return somevectorU" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Generic function for all 1D tests: Compute Ax,Ay,Az\n", "def Axyz_func_spherical(Ar_func,At_func,Ap_func, stagger_enable, 
**params):\n", " if \"KerrSchild_radial_shift\" in params:\n", " KerrSchild_radial_shift = params[\"KerrSchild_radial_shift\"]\n", " r = rfm.xxSph[0] + KerrSchild_radial_shift # We are setting the data up in Shifted Kerr-Schild coordinates\n", " else:\n", " r = rfm.xxSph[0] # Some other coordinate system\n", " theta = rfm.xxSph[1]\n", " phi = rfm.xxSph[2]\n", " AsphD = ixp.zerorank1()\n", " # First Ax\n", " if stagger_enable:\n", " y += sp.Rational(1,2)*gri.dxx[1]\n", " z += sp.Rational(1,2)*gri.dxx[2]\n", " AsphD[0] = Ar_func(r,theta,phi, **params)\n", " # Then Ay\n", " if stagger_enable:\n", " x += sp.Rational(1,2)*gri.dxx[0]\n", " y -= sp.Rational(1,2)*gri.dxx[1]\n", " z += sp.Rational(1,2)*gri.dxx[2]\n", " AsphD[1] = At_func(r,theta,phi, **params)\n", " # Finally Az\n", " if stagger_enable:\n", " x += sp.Rational(1,2)*gri.dxx[0]\n", " y += sp.Rational(1,2)*gri.dxx[1]\n", " z -= sp.Rational(1,2)*gri.dxx[2]\n", " AsphD[2] = Ap_func(r,theta,phi, **params)\n", "\n", " # Use the Jacobian matrix to transform the vectors to Cartesian coordinates.\n", " AD = change_basis_spherical_to_Cartesian(AsphD)\n", " return AD" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='velocity'></a>\n", "\n", "# Step 3: Compute $v^i_{(n)}$ from $E^i$ and $B^i$ \\[Back to [top](#toc)\\]\n", "$$\\label{velocity}$$\n", "\n", "This function computes the Valenciea 3-velocity from input electric and magnetic fields. It can also take the three-metric $\\gamma_{ij}$ as an optional input; if this is not set, the function defaults to flat spacetime." 
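] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick flat-space sanity check of this formula (where $\\gamma_{ij}$ is the identity, so index placement is immaterial), the following standalone SymPy sketch computes $\\vec{E}\\times\\vec{B}/B^2$ for simple made-up fields of strength `E0` and `B0`, using Matrix cross/dot products in place of the indexed loops used below:\n", "\n", "```python\n", "import sympy as sp\n", "\n", "E0, B0 = sp.symbols(\"E0 B0\", positive=True)\n", "E = sp.Matrix([0, 0, E0])  # electric field along z\n", "B = sp.Matrix([0, B0, 0])  # magnetic field along y\n", "v = E.cross(B)/B.dot(B)    # drift velocity E x B / B^2\n", "```\n", "\n", "The resulting drift velocity points along $-x$ with magnitude $E_0/B_0$, and is orthogonal to both $\\vec{E}$ and $\\vec{B}$, as it must be."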
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Generic function for all 1D tests: Valencia 3-velocity from ED and BU\n", "def compute_ValenciavU_from_ED_and_BU(ED, BU, gammaDD=None):\n", " # Now, we calculate v^i = ([ijk] E_j B_k) / B^2,\n", " # where [ijk] is the Levi-Civita symbol and B^2 = \\gamma_{ij} B^i B^j$ is a trivial dot product in flat space.\n", " LeviCivitaSymbolDDD = ixp.LeviCivitaSymbol_dim3_rank3()\n", "\n", " B2 = sp.sympify(0)\n", " # In flat spacetime, use the Minkowski metric; otherwise, use the input metric.\n", " if gammaDD is None:\n", " gammaDD = ixp.zerorank2()\n", " for i in range(3):\n", " gammaDD[i][i] = sp.sympify(1)\n", " for i in range(3):\n", " for j in range(3):\n", " B2 += gammaDD[i][j] * BU[i] * BU[j]\n", "\n", " BD = ixp.zerorank1()\n", " for i in range(3):\n", " for j in range(3):\n", " BD[i] = gammaDD[i][j]*BU[j]\n", " ValenciavU = ixp.zerorank1()\n", " for i in range(3):\n", " for j in range(3):\n", " for k in range(3):\n", " ValenciavU[i] += LeviCivitaSymbolDDD[i][j][k] * ED[j] * BD[k] / B2\n", "\n", " return ValenciavU" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='setall'></a>\n", "\n", "# Step 4: Generate specified initial data \\[Back to [top](#toc)\\]\n", "$$\\label{setall}$$\n", "\n", "This is the main function that users can call to generate the initial data by passing the name of the initial data as a string and specifying if they want to enable staggering." 
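] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The function below dispatches on the `ID_type` string through an if/elif chain. A toy, self-contained sketch of that calling pattern (with a made-up `make_initial_data` and placeholder recipes standing in for the actual GiRaFFEfood_NRPy modules) looks like this:\n", "\n", "```python\n", "def make_initial_data(ID_type=\"DegenAlfvenWave\", stagger_enable=False, **params):\n", "    # Placeholder recipes standing in for the per-initial-data A_i and v^i setups:\n", "    recipes = {\n", "        \"AlfvenWave\":      lambda: (\"A_i for AlfvenWave\",      \"v^i for AlfvenWave\"),\n", "        \"DegenAlfvenWave\": lambda: (\"A_i for DegenAlfvenWave\", \"v^i for DegenAlfvenWave\"),\n", "    }\n", "    if ID_type not in recipes:\n", "        raise ValueError(\"ID_type = \" + ID_type + \" is not supported!\")\n", "    return recipes[ID_type]()\n", "\n", "AD, ValenciavU = make_initial_data(\"AlfvenWave\")\n", "```\n", "\n", "The real function follows the same pattern, but sets the module-level globals `AD` and `ValenciavU` rather than returning them, so that depending modules can read them after the call."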
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def GiRaFFEfood_NRPy_generate_initial_data(ID_type = \"DegenAlfvenWave\", stagger_enable = False,**params):\n", " global AD, ValenciavU\n", " if ID_type == \"ExactWald\":\n", " AD = gfcf.Axyz_func_spherical(gfew.Ar_EW,gfew.Ath_EW,gfew.Aph_EW,stagger_enable,**params)\n", " ValenciavU = gfew.ValenciavU_func_EW(**params)\n", " elif ID_type == \"MagnetosphericWald\":\n", " AD = gfcf.Axyz_func_spherical(gfmw.Ar_MW,gfmw.Ath_MW,gfmw.Aph_MW,stagger_enable,**params)\n", " ValenciavU = gfmw.ValenciavU_func_MW(**params)\n", " elif ID_type == \"SplitMonopole\":\n", " AD = gfcf.Axyz_func_spherical(gfsm.Ar_SM,gfsm.Ath_SM,gfsm.Aph_SM,stagger_enable,**params)\n", " ValenciavU = gfsm.ValenciavU_func_SM(**params)\n", " elif ID_type == \"AlfvenWave\":\n", " AD = gfcf.Axyz_func_Cartesian(gfaw.Ax_AW,gfaw.Ay_AW,gfaw.Az_AW, stagger_enable, **params)\n", " ValenciavU = gfaw.ValenciavU_func_AW(**params)\n", " elif ID_type == \"FastWave\":\n", " AD = gfcf.Axyz_func_Cartesian(gffw.Ax_FW,gffw.Ay_FW,gffw.Az_FW, stagger_enable, **params)\n", " ValenciavU = gffw.ValenciavU_func_FW(**params)\n", " elif ID_type == \"DegenAlfvenWave\":\n", " AD = gfcf.Axyz_func_Cartesian(gfdaw.Ax_DAW,gfdaw.Ay_DAW,gfdaw.Az_DAW, stagger_enable, **params)\n", " ValenciavU = gfdaw.ValenciavU_func_DAW(**params)\n", " elif ID_type == \"ThreeWaves\":\n", " AD = gfcf.Axyz_func_Cartesian(gftw.Ax_TW,gftw.Ay_TW,gftw.Az_TW, stagger_enable, **params)\n", " ValenciavU = gftw.ValenciavU_func_TW(**params)\n", " elif ID_type == \"FFE_Breakdown\":\n", " AD = gfcf.Axyz_func_Cartesian(gffb.Ax_FB,gffb.Ay_FB,gffb.Az_FB, stagger_enable, **params)\n", " ValenciavU = gffb.ValenciavU_func_FB(**params)\n", " elif ID_type == \"AlignedRotator\":\n", " AD = gfcf.Axyz_func_spherical(gfar.Ar_AR,gfar.Ath_AR,gfar.Aph_AR, stagger_enable, **params)\n", " ValenciavU = gfar.ValenciavU_func_AR(**params)" ] } ], "metadata": { "kernelspec": { "display_name": 
"Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 4 }