{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "

# Introduction

\n", "\n", "Here you will find a step by step guide to downloading, configuring, and running the Einstein Toolkit. You may use this tutorial on a workstation or laptop, or on a supported cluster. Configuring the Einstein Toolkit on an unsupported cluster is beyond the scope of this tutorial. If you find something that does not work, please feel free to email users@einsteintoolkit.org.\n", "\n", "This tutorial is very basic and does not describe the internal workings of the Einstein Toolkit. For a more detailed introduction, please have a look a the [text](https://arxiv.org/abs/1305.5299) provided by Miguel Zilhão and Frank Löffler and the [one](https://arxiv.org/abs/2011.13314) by Nicholas Choustikov." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

# Prerequisites

\n", "When using the Einstein Toolkit on a laptop or workstation you will want a number of packages installed in order to download, compile and use the Einstein Toolkit components. If this is a machine which you control (i.e. you have root), you can install using one of the recipes that follow:\n", "\n", "On Mac, please first \n", "- Install [Xcode](https://itunes.apple.com/us/app/xcode/id497799835) from the Apple [App Store](https://itunes.apple.com/us/app/xcode/id497799835). In *addition* agree to Xcode license and install the Xcode Command Line Tools in Terminal \n", "```bash\n", "sudo xcodebuild -license\n", "sudo xcode-select --install\n", "```\n", "- when using MacPorts\n", " - install MacPorts for your version of the Mac operating system, if you have not already installed it (https://www.macports.org/install.php). \n", " - Next, please install the following packages, using the commands:\n", "```bash\n", "sudo port -N install pkgconfig gcc12 openmpi-gcc12 fftw-3 gsl zlib openssl subversion ld64 hdf5 +fortran +gfortran\n", "sudo port select mpi openmpi-gcc12-fortran\n", "sudo port select gcc mp-gcc12\n", "```\n", "- when using HomeBrew\n", " - install HomeBrew for your version of the Mac operating system, if you have not already installed it (https://brew.sh/). \n", " - make sure to add /opt/homebrew/bin to your PATH as instructed by the installation script. You may need to restart your terminal. 
\n", " - Next, please install the following packages, using the commands:\n", "```bash\n", "brew install fftw gcc gsl hdf5 hwloc jpeg open-mpi openssl zlib pkg-config bash subversion\n", "brew link --force jpeg openssl zlib\n", "```\n", "\n", "On Debian/Ubuntu/Mint use this command (the strange syntax is to suport all three of them):\n", "```bash\n", "$(sudo -l sudo) su -c 'apt-get update'\n", "$(sudo -l sudo) su -c 'apt-get install -y subversion gcc git numactl libgsl-dev libpapi-dev python3 python-is-python3 python3-pip libhwloc-dev libudev-dev make libopenmpi-dev libhdf5-openmpi-dev libfftw3-dev libssl-dev liblapack-dev g++ curl gfortran patch pkg-config libhdf5-dev libjpeg-turbo?-dev'\n", "```\n", "\n", "On Fedora use this command:\n", "```bash\n", "sudo dnf install -y libjpeg-turbo-devel gcc git lapack-devel make subversion gcc-c++ which papi-devel perl python3 python3-pip hwloc-devel openmpi-devel hdf5-openmpi-devel openssl-devel libtool-ltdl-devel numactl-devel gcc-gfortran findutils hdf5-devel fftw-devel patch gsl-devel pkgconfig\n", "module load mpi/openmpi-x86_64\n", "```\n", "You will have to repeat the `module load` command once in each new shell each time you would like to compile or run the code. You may have to log out and back in for the module command to become visible.\n", "\n", "On Centos use this command:\n", "```bash\n", "su -c 'yum install -y epel-release'\n", "su -c 'yum install --enablerepo=crb -y libjpeg-turbo-devel gcc git lapack-devel make subversion gcc-c++ which python3 python3-pip papi-devel hwloc-devel openmpi-devel openssl-devel libtool-ltdl-devel numactl-devel gcc-gfortran fftw-devel patch gsl-devel perl'\n", "module load mpi/openmpi-x86_64\n", "```\n", "You will have to repeat the `module load` command once in each new shell each time you would like to compile or run the code. 
You may have to log out and back in for the module command to become visible.\n", "\n", "On openSUSE use this command:\n", "```bash\n", "sudo zypper install -y curl gcc git lapack-devel make subversion gcc-c++ which python3 python3-pip papi-devel hwloc-devel openmpi-devel libopenssl-devel libnuma-devel gcc-fortran hdf5-devel libfftw3-3 patch gsl-devel pkg-config\n", "mpi-selector --set $(mpi-selector --list | head -n1)\n", "```\n", "You will only have to execute the `mpi-selector` command once; after that, log out and back in to make the `mpirun` and `mpicc` commands visible, without which Cactus will compile very slowly and fail to run.\n", "\n", "On Windows 10/11 please install the Ubuntu Linux subsystem, then follow the instructions for an Ubuntu system. [These](https://docs.microsoft.com/en-us/windows/wsl/install) are Microsoft's official instructions on how to do so; [Ubuntu](https://ubuntu.com/wsl#install-ubuntu-on-wsl) provides an alternative version. You may also want to install a native ssh client like [Putty](https://www.chiark.greenend.org.uk/~sgtatham/putty/) and an X11 server like [VcXsrv](https://sourceforge.net/projects/vcxsrv/), [XMing](https://sourceforge.net/projects/xming/), or an all-in-one solution for X11 server and ssh client like [MobaXterm](https://mobaxterm.mobatek.net/)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Notebook setup\n", "This notebook is intended to be used online on the Einstein Toolkit tutorial server, offline as a read-only document, as a Jupyter notebook that you can download, and in your own Docker container using `nds-org/jupyter-et`. To make all of these work, some settings need to be tweaked, which we do in the next cell." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# this allows you to use \"cd\" in cells to change directories instead of requiring \"%cd\"\n", "%automagic on\n", "# override IPython's default %%bash to not buffer all output\n", "from IPython.core.magic import register_cell_magic\n", "@register_cell_magic\n", "def bash(line, cell): get_ipython().system(cell)\n", "\n", "# this (non-default package) keeps the end of shell output in view\n", "try: import scrolldown\n", "except ModuleNotFoundError: pass\n", "\n", "# We are going to install kuibit, a Python package to post-process Cactus simulations.\n", "# We will install kuibit inside the Cactus directory. The main reason for this is to\n", "# have a make easier to uninstall kuibit (you can just remove the Cactus folder). \n", "import os, sys\n", "os.environ[\"PYTHONUSERBASE\"] = os.environ['HOME'] + \"/Cactus/python\"\n", "sys.path.insert(1, f\"{os.environ['PYTHONUSERBASE']}/lib/python{sys.version_info[0]}.{sys.version_info[1]}/site-packages\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: By default, the cells in this notebook are Python commands. However, cells that start with %%bash are executed in a bash shell. If you wish to run these commands outside the notebook and in a bash shell, cut and paste only the part after the initial %%bash. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Optimized Download/Build Experience\n", "Downloading the source code from github in a classroom setting, where lots of users are doing the same thing at the same time, can create network problems, and compiling the complete ET from scratch can take up to half an hour.\n", "\n", "The next cell will create a complete pre-built ET checkout in your home directory, speeding up subsequent cells. 
This step is optional, but should allow you to execute the notebook in less time.\n", "\n", "**Note:** This will only work in the docker image or on the tutorial server." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "[ -r ~etuser/Cactus.tar.gz ] && ! [ -d ~/Cactus ] && tar -xzf ~etuser/Cactus.tar.gz -C ~/" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

# Download

\n", "\n", "A script called GetComponents is used to fetch the components of the Einstein Toolkit. GetComponents serves as convenient wrapper around lower level tools like git and svn to download the codes that make up the Einstein toolkit from their individual repositories. You may download and make it executable as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cd ~/" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "curl -kLO https://raw.githubusercontent.com/gridaphobe/CRL/ET_2022_11/GetComponents\n", "chmod a+x GetComponents" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "GetComponents accepts a thorn list as an argument. To check out the needed components:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "./GetComponents https://bitbucket.org/einsteintoolkit/manifest/raw/ET_2022_11/einsteintoolkit.th" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cd ~/Cactus" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

# Configure and build

\n", "\n", "The recommended way to compile the Einstein Toolkit is to use the Simulation Factory (\"SimFactory\").\n", "

## Configuring SimFactory for your machine

\n", "\n", "The ET depends on various libraries, and needs to interact with machine-specific queueing systems and MPI implementations. As such, it needs to be configured for a given machine. For this, it uses SimFactory. Generally, configuring SimFactory means providing an optionlist, for specifying library locations and build options, a submit script for using the batch queueing system, and a runscript, for specifying how Cactus should be run, e.g. which mpirun command to use." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "./simfactory/bin/sim setup-silent" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After this step is complete you will find your machine's default setup under ./simfactory/mdb/machines/<hostname >.ini\n", "You can edit some of these settings freely, such as \"description\", \"basedir\" etc. Some entry edits could result in simulation start-up warnings and/or errors such as \"ppn\" (processor-per-node meaning number of cores on your machine), \"num-threads\" (number of threads per core) so such edits must be done with some care." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

## Building the Einstein Toolkit

\n", "\n", "Assuming that SimFactory has been successfully set up on your machine, you should be able to build the Einstein Toolkit with the command below. The option \"-j2\" sets the make command that will be used by the script. The number used is the number of processes used when building. Even in parallel, this step may take a while, as it compiles all the thorns specified in thornlists/einsteintoolkit.th.\n", "\n", "Note: Using too many processes to compile on the test machine may result in compiler failures, if the system runs out of memory." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "./simfactory/bin/sim build -j2 --thornlist thornlists/einsteintoolkit.th" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

## Running a simple example

\n", "\n", "You can now run the Einstein Toolkit with a simple test parameter file." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "./simfactory/bin/sim create-run helloworld \\\n", " --parfile arrangements/CactusExamples/HelloWorld/par/HelloWorld.par" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The above command will run the simulation naming it \"helloworld\" and display its log output to screen." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you see
`INFO (HelloWorld): Hello World!`
anywhere in the above output, then congratulations, you have successfully downloaded, compiled and run the Einstein Toolkit! You may now want to try some of the other tutorials to explore some interesting physics examples." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

# Running a single star simulation

\n", "\n", "What follows is the much more computationally intensive example of simulating a static TOV star. Just below this cell you can see the contents of a Cactus parameter file to simulate a single, spherical symmetric star using the Einstein Toolkit. The parameter file has been set up to run to completion in about 10 minutes, making it useful for a tutorial but too coarsely resolved to do science with it.\n", "\n", "Run the cell to write its content to `par/tov_ET.par` so that it can be used for a short simulation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "cat >par/tov_ET.par <<\"#EOF\"\n", "# Example parameter file for a static TOV star. Everything is evolved, but\n", "# because this is a solution to the GR and hydro equations, nothing changes\n", "# much. What can be seen is the initial perturbation (due to numerical errors)\n", "# ringing down (look at the density maximum), and later numerical errors\n", "# governing the solution. 
Try higher resolutions to decrease this error.\n", "\n", "# Some basic stuff\n", "ActiveThorns = \"Time MoL\"\n", "ActiveThorns = \"Coordbase CartGrid3d Boundary StaticConformal\"\n", "ActiveThorns = \"SymBase ADMBase TmunuBase HydroBase InitBase ADMCoupling ADMMacros\"\n", "ActiveThorns = \"IOUtil\"\n", "ActiveThorns = \"Formaline\"\n", "ActiveThorns = \"SpaceMask CoordGauge Constants LocalReduce aeilocalinterp LoopControl\"\n", "ActiveThorns = \"Carpet CarpetLib CarpetReduce CarpetRegrid2 CarpetInterp\"\n", "ActiveThorns = \"CarpetIOASCII CarpetIOScalar CarpetIOHDF5 CarpetIOBasic\"\n", "\n", "# Finalize\n", "Cactus::terminate = \"time\"\n", "Cactus::cctk_final_time = 400 #800 # divide by ~203 to get ms\n", "\n", "# Termination Trigger\n", "ActiveThorns = \"TerminationTrigger\"\n", "TerminationTrigger::max_walltime = 24 # hours\n", "TerminationTrigger::on_remaining_walltime = 0 # minutes\n", "TerminationTrigger::check_file_every = 512\n", "TerminationTrigger::termination_file = \"TerminationTrigger.txt\"\n", "TerminationTrigger::termination_from_file = \"yes\"\n", "TerminationTrigger::create_termination_file = \"yes\"\n", "\n", "# grid parameters\n", "Carpet::domain_from_coordbase = \"yes\"\n", "CartGrid3D::type = \"coordbase\"\n", "CartGrid3D::domain = \"full\"\n", "CartGrid3D::avoid_origin = \"no\"\n", "CoordBase::xmin = 0.0\n", "CoordBase::ymin = 0.0\n", "CoordBase::zmin = 0.0\n", "CoordBase::xmax = 24.0\n", "CoordBase::ymax = 24.0\n", "CoordBase::zmax = 24.0\n", "# Change these parameters to change resolution. The ?max settings above\n", "# have to be multiples of these. 
'dx' is the size of one cell in x-direction.\n", "# Making this smaller means using higher resolution, because more points will\n", "# be used to cover the same space.\n", "CoordBase::dx = 2.0\n", "CoordBase::dy = 2.0\n", "CoordBase::dz = 2.0\n", "\n", "CarpetRegrid2::regrid_every = 0\n", "CarpetRegrid2::num_centres = 1\n", "CarpetRegrid2::num_levels_1 = 2\n", "CarpetRegrid2::radius_1[1] = 12.0\n", "\n", "\n", "CoordBase::boundary_size_x_lower = 3\n", "CoordBase::boundary_size_y_lower = 3\n", "CoordBase::boundary_size_z_lower = 3\n", "CoordBase::boundary_size_x_upper = 3\n", "CoordBase::boundary_size_y_upper = 3\n", "CoordBase::boundary_size_z_upper = 3\n", "CoordBase::boundary_shiftout_x_lower = 1\n", "CoordBase::boundary_shiftout_y_lower = 1\n", "CoordBase::boundary_shiftout_z_lower = 1\n", "CoordBase::boundary_shiftout_x_upper = 0\n", "CoordBase::boundary_shiftout_y_upper = 0\n", "CoordBase::boundary_shiftout_z_upper = 0\n", "\n", "\n", "ActiveThorns = \"ReflectionSymmetry\"\n", "\n", "ReflectionSymmetry::reflection_x = \"yes\"\n", "ReflectionSymmetry::reflection_y = \"yes\"\n", "ReflectionSymmetry::reflection_z = \"yes\"\n", "ReflectionSymmetry::avoid_origin_x = \"no\"\n", "ReflectionSymmetry::avoid_origin_y = \"no\"\n", "ReflectionSymmetry::avoid_origin_z = \"no\"\n", "\n", "# storage and coupling\n", "TmunuBase::stress_energy_storage = yes\n", "TmunuBase::stress_energy_at_RHS = yes\n", "TmunuBase::timelevels = 1\n", "TmunuBase::prolongation_type = none\n", "\n", "\n", "HydroBase::timelevels = 3\n", "\n", "ADMMacros::spatial_order = 4\n", "\n", "SpaceMask::use_mask = \"yes\"\n", "\n", "Carpet::enable_all_storage = no\n", "Carpet::use_buffer_zones = \"yes\"\n", "\n", "Carpet::poison_new_timelevels = \"yes\"\n", "Carpet::check_for_poison = \"no\"\n", "\n", "Carpet::init_3_timelevels = no\n", "Carpet::init_fill_timelevels = \"yes\"\n", "\n", "CarpetLib::poison_new_memory = \"yes\"\n", "CarpetLib::poison_value = 114\n", "\n", "# system specific Carpet 
parameters\n", "Carpet::max_refinement_levels = 10\n", "driver::ghost_size = 3\n", "Carpet::prolongation_order_space = 3\n", "Carpet::prolongation_order_time = 2\n", "\n", "# Time integration\n", "time::dtfac = 0.25\n", "\n", "MoL::ODE_Method = \"rk4\"\n", "MoL::MoL_Intermediate_Steps = 4\n", "MoL::MoL_Num_Scratch_Levels = 1\n", "\n", "# check all physical variables for NaNs\n", "# This can save you computing time, so it's not a bad idea to do this\n", "# once in a while.\n", "ActiveThorns = \"NaNChecker\"\n", "NaNChecker::check_every = 16384\n", "NaNChecker::action_if_found = \"terminate\" #\"terminate\", \"just warn\", \"abort\"\n", "NaNChecker::check_vars = \"ADMBase::metric ADMBase::lapse ADMBase::shift HydroBase::rho HydroBase::eps HydroBase::press HydroBase::vel\"\n", "\n", "# Hydro parameters\n", "\n", "ActiveThorns = \"EOS_Omni GRHydro\"\n", "\n", "HydroBase::evolution_method = \"GRHydro\"\n", "\n", "GRHydro::riemann_solver = \"Marquina\"\n", "GRHydro::GRHydro_eos_type = \"Polytype\"\n", "GRHydro::GRHydro_eos_table = \"2D_Polytrope\"\n", "GRHydro::recon_method = \"ppm\"\n", "GRHydro::GRHydro_stencil = 3\n", "GRHydro::bound = \"none\"\n", "GRHydro::rho_abs_min = 1.e-10\n", "# Parameter controlling finite difference order of the Christoffel symbols in GRHydro\n", "GRHydro::sources_spatial_order = 4\n", "\n", "# Curvature evolution parameters\n", "\n", "ActiveThorns = \"GenericFD NewRad\"\n", "ActiveThorns = \"ML_BSSN ML_BSSN_Helper\"\n", "ADMBase::evolution_method = \"ML_BSSN\"\n", "ADMBase::lapse_evolution_method = \"ML_BSSN\"\n", "ADMBase::shift_evolution_method = \"ML_BSSN\"\n", "ADMBase::dtlapse_evolution_method= \"ML_BSSN\"\n", "ADMBase::dtshift_evolution_method= \"ML_BSSN\"\n", "\n", "ML_BSSN::timelevels = 3\n", "\n", "ML_BSSN::harmonicN = 1 # 1+log\n", "ML_BSSN::harmonicF = 2.0 # 1+log\n", "ML_BSSN::evolveA = 1\n", "ML_BSSN::evolveB = 1\n", "# NOTE: The following parameters select geodesic slicing. 
This choice only enables you to evolve stationary spacetimes.\n", "# They will not allow you to simulate a collapsing TOV star.\n", "ML_BSSN::ShiftGammaCoeff = 0.0\n", "ML_BSSN::AlphaDriver = 0.0\n", "ML_BSSN::BetaDriver = 0.0\n", "ML_BSSN::advectLapse = 0\n", "ML_BSSN::advectShift = 0\n", "ML_BSSN::MinimumLapse = 1.0e-8\n", "\n", "ML_BSSN::my_initial_boundary_condition = \"extrapolate-gammas\"\n", "ML_BSSN::my_rhs_boundary_condition = \"NewRad\"\n", "\n", "# Some dissipation to get rid of high-frequency noise\n", "ActiveThorns = \"SphericalSurface Dissipation\"\n", "Dissipation::verbose = \"no\"\n", "Dissipation::epsdis = 0.01\n", "Dissipation::vars = \"\n", " ML_BSSN::ML_log_confac\n", " ML_BSSN::ML_metric\n", " ML_BSSN::ML_curv\n", " ML_BSSN::ML_trace_curv\n", " ML_BSSN::ML_Gamma\n", " ML_BSSN::ML_lapse\n", " ML_BSSN::ML_shift\n", "\"\n", "\n", "\n", "# init parameters\n", "InitBase::initial_data_setup_method = \"init_some_levels\"\n", "\n", "# Use TOV as initial data\n", "ActiveThorns = \"TOVSolver\"\n", "\n", "HydroBase::initial_hydro = \"tov\"\n", "ADMBase::initial_data = \"tov\"\n", "ADMBase::initial_lapse = \"tov\"\n", "ADMBase::initial_shift = \"tov\"\n", "ADMBase::initial_dtlapse = \"zero\"\n", "ADMBase::initial_dtshift = \"zero\"\n", "\n", "# Parameters for initial star\n", "TOVSolver::TOV_Rho_Central[0] = 1.28e-3\n", "TOVSolver::TOV_Gamma = 2\n", "TOVSolver::TOV_K = 100\n", "\n", "# Set equation of state for evolution\n", "EOS_Omni::poly_gamma = 2\n", "EOS_Omni::poly_k = 100\n", "EOS_Omni::gl_gamma = 2\n", "EOS_Omni::gl_k = 100\n", "\n", "\n", "# I/O\n", "\n", "# Use (create if necessary) an output directory named like the\n", "# parameter file (minus the .par)\n", "IO::out_dir = ${parfile}\n", "\n", "# Write one file overall per output (variable/group)\n", "# In production runs, comment this or set to \"proc\" to get one file\n", "# per MPI process\n", "IO::out_mode = \"onefile\"\n", "\n", "# Some screen output\n", "IOBasic::outInfo_every = 512\n", 
"IOBasic::outInfo_vars = \"Carpet::physical_time_per_hour HydroBase::rho{reductions='maximum'}\"\n", "\n", "# Scalar output\n", "IOScalar::outScalar_every = 512\n", "IOScalar::one_file_per_group = \"yes\"\n", "IOScalar::outScalar_reductions = \"norm1 norm2 norm_inf sum maximum minimum\"\n", "IOScalar::outScalar_vars = \"\n", " HydroBase::rho{reductions='maximum'}\n", " HydroBase::press{reductions='maximum'}\n", " HydroBase::eps{reductions='minimum maximum'}\n", " HydroBase::vel{reductions='minimum maximum'}\n", " HydroBase::w_lorentz{reductions='minimum maximum'}\n", " ADMBase::lapse{reductions='minimum maximum'}\n", " ADMBase::shift{reductions='minimum maximum'}\n", " ML_BSSN::ML_Ham{reductions='norm1 norm2 maximum minimum norm_inf'}\n", " ML_BSSN::ML_mom{reductions='norm1 norm2 maximum minimum norm_inf'}\n", " GRHydro::dens{reductions='minimum maximum sum'}\n", " Carpet::timing{reductions='average'}\n", "\"\n", "\n", "# 1D ASCII output. Disable for production runs!\n", "IOASCII::out1D_every = 2048\n", "IOASCII::one_file_per_group = yes\n", "IOASCII::output_symmetry_points = no\n", "IOASCII::out1D_vars = \"\n", " HydroBase::rho\n", " HydroBase::press\n", " HydroBase::eps\n", " HydroBase::vel\n", " ADMBase::lapse\n", " ADMBase::metric\n", " ADMBase::curv\n", " ML_BSSN::ML_Ham\n", " ML_BSSN::ML_mom\n", "\"\n", "\n", "# 2D HDF5 output\n", "CarpetIOHDF5::output_buffer_points = \"no\"\n", "\n", "CarpetIOHDF5::out2D_every = 2048\n", "CarpetIOHDF5::out2D_vars = \"\n", " HydroBase::rho\n", " HydroBase::eps\n", " HydroBase::vel\n", " HydroBase::w_lorentz\n", " ADMBase::lapse\n", " ADMBase::shift\n", " ADMBase::metric\n", " ML_BSSN::ML_Ham\n", " ML_BSSN::ML_mom\n", " \"\n", "#EOF" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Simfactory maintain the concept of a self-contained \"simulation\" which is identified by a name and stores its parameter file, executable and other related files. 
Once a simulation has been created, individual simulation segments can be submitted using the stored executable and parameter file." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "# create simulation directory structure\n", "./simfactory/bin/sim create tov_ET --configuration sim --parfile=par/tov_ET.par" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `create` command sets up the simulation directory skeleton. It copies the executable and parameter file, as well as Simfactory's queuing scripts." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "# start simulation segment\n", "./simfactory/bin/sim submit tov_ET --cores=2 --num-threads=1 --walltime=0:20:00" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `submit` command submitted a new segment for the simulation `tov_ET` to the queueing system to run in the background, asking for a maximum runtime of 20 minutes, using a total of 2 compute cores and 1 thread per MPI rank. On your laptop it will start right away; on a cluster the queuing system will wait until a sufficient number of nodes is available to start your simulation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can check the status of the simulation with the command below. You can run this command repeatedly until the job shows
`[ACTIVE (FINISHED)...`
as its state. Prior to that, it may show up as QUEUED or RUNNING." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "%%bash\n", "./simfactory/bin/sim list-simulations tov_ET" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " To watch a simulation's log output use the `show-output` command of simfactory. **Interrupt the kernel** (or press `CTRL-C` if copying & pasting these commands to a terminal) if you wish to stop watching." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "# watch log output, following along as new output is produced\n", "./simfactory/bin/sim show-output --follow tov_ET" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can leave out the `--follow` option if you would like to see all output up to this point." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Managing submitted simulations\n", "\n", "Since the `submit` command was used to start the simulation, it is running in the background and you have to use simfactory commands to interact with it. The next cell shows how to list simulations." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Remember** that you have to interrupt the kernel to stop `show-output` and be able to execute the cells below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "./simfactory/bin/sim list-simulations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Simfactory offers a `stop` command to abort a running simulation. The next cell has the command intentionally commented out to prevent accidental stopping of your very first simulation." 
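If you prefer to drive simfactory from Python rather than from `%%bash` cells, you can wrap the commands with the standard `subprocess` module. This is a minimal sketch: the helper names `sim_command` and `sim` are our own, not part of simfactory, and it assumes your working directory is the Cactus checkout as elsewhere in this tutorial.

```python
# Sketch: invoke simfactory from Python instead of %%bash cells.
# The helpers below are illustrative, not part of simfactory itself.
import subprocess

def sim_command(*args):
    """Build the argument vector for a simfactory invocation."""
    return ["./simfactory/bin/sim", *args]

def sim(*args):
    """Run a simfactory command and return its output as text."""
    result = subprocess.run(sim_command(*args), capture_output=True, text=True)
    return result.stdout

# For example: sim("list-simulations"), sim("stop", "tov_ET"), or
# sim("get-output-dir", "tov_ET")
print(sim_command("stop", "tov_ET"))
```

Any of the simfactory commands shown in this section can be passed through the same wrapper.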
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "#./simfactory/bin/sim stop tov_ET" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "after this the simulation changes to the \"FINISHED\" state indicating it is no longer running." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Simulations that are no longer needed are removed using the `purge` command. The next cell has the command intentionally commented out to prevent accidental removing of your very first simulation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "#./simfactory/bin/sim purge tov_ET" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Simfactory creates all output for a simulation in a set of output directories, one for each restart from a checkpoint. You can find out its location using `get-output-dir`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "./simfactory/bin/sim get-output-dir tov_ET" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

# Plotting the Output

\n", "\n", "The simplest way to analyze a simulation is using [kuibit](https://sbozzolo.github.io/kuibit/). a Python package for post-processing and visualization. `kuibit` is part of the Einstein Toolkit, but has to be installed separately. You can install `kuibit` with `pip3`. This will take care of all the required dependencies." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "# We install kuibit inside the Cactus folder so that if you wish to\n", "# remove kuibit you can simply remove the folder. You can ignore the\n", "# following line if you want to install kuibit along your other python\n", "# packages.\n", "export PYTHONUSERBASE=\"$HOME/Cactus/python\"\n", "pip3 install -U --user kuibit==1.3.5" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`kuibit` is a Python library: you can use it in Python scripts/notebooks to inspect and post-process your simulations. `kuibit` has rich [documentation](https://sbozzolo.github.io/kuibit) and tutorials to get use the package. Here, we are going to show only a few features.\n", "\n", "The Einstein Toolkit comes with several [kuibit examples](https://sbozzolo.github.io/kuibit/#examples) that are ready to be used. The examples are available in the `utils/Analysis/kuibit/examples` folder. Most of these codes are command-line scripts that produce plots. \n", "\n", "For example, two common plots are for timeseries and for grid variables. 
For those, we can use the `plot_timeseries.py` and `plot_grid_var.py` codes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "# Plot a timeseries with the maximum of the density\n", "./utils/Analysis/kuibit/examples/bins/plot_timeseries.py \\\n", "--datadir $HOME/simulations/tov_ET --variable \"rho\" --reduction \"maximum\" \\\n", "--outdir $HOME/simulations/tov_ET\n", "\n", "# Plot a 2D slice of the rest-mass density (which does not support reflection \n", "# symmetry, see below). \n", "#`-x0` and `-x1` define the region we want to plot: the coordinates of the \n", "# lower left and top right corners. \n", "#When not specified, the latest available iteration is plotted. \n", "./utils/Analysis/kuibit/examples/bins/plot_grid_var.py \\\n", "--datadir $HOME/simulations/tov_ET --variable \"rho\" -x0 0 0 -x1 10 10 \\\n", "--outdir $HOME/simulations/tov_ET --logscale --multilinear-interpolate \\\n", "--colorbar" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The outputs are two images named `rho_maximum.png` and `rho_xy.png` saved in the folder of the simulation.\n", "Let's have a look at them." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.display import Image\n", "\n", "# datadir is the top-level directory that contains the data for\n", "# a given simulation.\n", "datadir = os.environ[\"HOME\"]+\"/simulations/tov_ET\"\n", "Image(filename=os.path.join(datadir, \"rho_maximum.png\"))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Image(filename=os.path.join(datadir, \"rho_xy.png\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While the examples are handy and have lots of options, often we want full control. For that, we use `kuibit` as a library. Let's see how we can make plots similar to the ones we generated with the examples. 
First, we import what we need:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# This cell enables inline plotting in the notebook\n", "%matplotlib inline\n", "\n", "import matplotlib.pyplot as plt\n", "from kuibit.simdir import SimDir\n", "import kuibit.visualize_matplotlib as viz" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The main interface to simulation data in `kuibit` is the `SimDir`. It contains all the information that `kuibit` can extract from the output. `SimDir` takes as input the top level folder of the output. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sim = SimDir(datadir)\n", "# This will print a list with all the data available\n", "print(sim)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's plot the maximum pressure and compare it with the pressure as computed from the maximum density." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rho = sim.timeseries.maximum['rho']\n", "press = sim.timeseries.maximum['press']\n", "\n", "# Polytropic constants\n", "K, Gamma = 100, 2\n", "\n", "# Timeseries in kuibit support all the algebraic operations\n", "cold_press = K * rho**Gamma\n", "\n", "plt.ylabel(\"Pressure\")\n", "plt.xlabel(\"Time\")\n", "plt.plot(cold_press, label=\"cold_press\")\n", "plt.plot(press, label=\"press\", ls=\"dashed\")\n", "plt.legend()\n", "\n", "max_diff = abs(press - cold_press).max()\n", "\n", "print(f\"Max difference {max_diff:.3e}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's plot an equatorial slice of the rest-mass density at the initial iteration." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "iteration_number = 0\n", "rho_xy = sim.gridfunctions.xy[\"rho\"][iteration_number]\n", "\n", "# We cannot plot rho_xy directly because it contains all\n", "# the information for the various refinement levels.\n", "# We need to resample the data onto a uniform grid.\n", "\n", "# shape is the resolution at which we resample\n", "# x0, x1 are the bottom left and top right coordinates\n", "# that we want to consider\n", "\n", "# Here we choose x0=[0,0] because we have reflection\n", "# symmetry\n", "\n", "# resample=True activates multilinear resampling\n", "\n", "rho_xy_unif = rho_xy.to_UniformGridData(shape=[100, 100], \n", " x0=[0,0],\n", " x1=[10, 10],\n", " resample=True)\n", "\n", "# Undo reflection symmetry on the x axis\n", "rho_xy_unif.reflection_symmetry_undo(dimension=0)\n", "# Undo reflection symmetry on the y axis\n", "rho_xy_unif.reflection_symmetry_undo(dimension=1)\n", "\n", "viz.plot_color(rho_xy_unif,\n", " logscale=True,\n", " colorbar=True,\n", " label=\"rho\",\n", " xlabel=\"x\",\n", " ylabel=\"y\",\n", " )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The image is pixelated because the resolution of our simulation is very low.\n", "\n", "Finally, we can compare our data with some reference values." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# This cell shows the expected plot using previously stored data\n", "import numpy as np\n", "\n", "# reconstruct plot data from the saved strings\n", "(quant_diff_s, minval, maxval, delta_t) = \\\n", " (\"ff8baee2e5d2ac70320c0007182c404f5b656f7b8897a8bbcddde8eeede8ddcfc0b0a29589817b777473757a8189929ca6b0bac4cbd0d3d4d4d2cfcbc7c2bdb8b4b0adaaa9a8a9abaeb3b8bcc1c5c8cccf\",\n", " 1.235e-03, 1.280e-03, 5.000e+00)\n", "quant_diff = np.array(bytearray.fromhex(quant_diff_s))\n", "rec_vals = quant_diff / 255. 
* (maxval - minval) + minval\n", "rec_time = np.arange(0, len(quant_diff)) * delta_t\n", "\n", "# plot them, including your results if you have them\n", "plt.plot(rec_time, rec_vals/rec_vals[0],\n", " label=\"central density (stored values)\")\n", "try: plt.plot(rho/rho(0), label=\"central density (your results)\")\n", "except: pass # rho is only defined if you ran the cells above\n", "plt.xlabel(r'$t$ [$M_{\\odot}$]');\n", "plt.ylabel(r'$\\rho_c / \\rho_c(0)$');\n", "plt.legend(loc='lower right');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Running the cell above will produce a plot of the expected results as well as your own results.\n", "![Central density (stored values)](https://github.com/nds-org/jupyter-et/raw/master/data/tov_ET.png)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# create a small dataset to show what the plot should look like\n", "def sparsify(lin_data, sparsity):\n", " # drop unwanted datapoints\n", " sparse_data = lin_data[::sparsity,:]\n", " \n", " # compute min and max of the dataset, then take differences to the minimum\n", " # and quantize them to 8-bit precision\n", " minval = np.amin(sparse_data[:,2])\n", " maxval = np.amax(sparse_data[:,2])\n", " print(\"minval:\", minval)\n", " print(\"maxval:\", maxval)\n", " diff = sparse_data[:,2] - minval\n", " quant_diff = np.minimum(np.maximum(np.round(diff / (maxval - minval) * 255.5), 0), 255).astype('int')\n", "\n", " # timesteps are equidistant and start at 0, so we only need the stepsize\n", " delta_t = sparse_data[1,1] - sparse_data[0,1]\n", "\n", " # string representation of the 8-bit differences\n", " quant_diff_s = \"\"\n", " for i in quant_diff: quant_diff_s += \"%02x\" % i\n", " \n", " print('\"%s\", %.3e, %.3e, %.3e' % (quant_diff_s, minval, maxval, delta_t))\n", "\n", "# create a low-fidelity representation of every 10th datapoint and output all data as a string\n", "sparsify(np.array([rho.t,rho.x,rho.y]).transpose(), 10) " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } 
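, { "cell_type": "markdown", "metadata": {}, "source": [ "As a side note, the comparison above relies on a simple 8-bit quantization scheme: `sparsify` stores each value's offset from the minimum as a two-digit hex byte, and the plotting cell reverses that encoding. A minimal, self-contained sketch of the round trip, using made-up sample data in place of the actual density timeseries, looks like this:\n", "```python\n", "import numpy as np\n", "\n", "# Hypothetical sample data standing in for the density timeseries\n", "vals = np.linspace(1.0, 2.0, 17)\n", "\n", "# Encode: quantize offsets from the minimum to 8 bits, store as hex\n", "minval, maxval = vals.min(), vals.max()\n", "quant = np.clip(np.round((vals - minval) / (maxval - minval) * 255.5), 0, 255).astype(int)\n", "quant_s = ''.join('%02x' % q for q in quant)\n", "\n", "# Decode: recover approximate values from the hex string\n", "rec = np.array(bytearray.fromhex(quant_s)) / 255. * (maxval - minval) + minval\n", "\n", "# The reconstruction error is bounded by one quantization step\n", "assert np.max(np.abs(rec - vals)) <= (maxval - minval) / 255.\n", "```\n", "The stored string is therefore lossy, but accurate enough for an eyeball comparison of plots." ] }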
], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 2 }