{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "qT_RWmTEugu9"
},
"source": [
"# Reducing Numerical Errors with Deep Learning\n",
"\n",
"First, we'll target numerical errors that arise in the discretization of a continuous PDE $\\mathcal P^*$, i.e. when we formulate $\\mathcal P$. This approach will demonstrate that, despite the lack of closed-form descriptions, discretization errors are often functions with regular and repeating structures and, thus, can be learned by a neural network. Once the network is trained, it can be evaluated locally to improve the solution of a PDE-solver, i.e., to reduce its numerical error. The resulting method is a hybrid one: it will always run a (coarse) PDE solver, and then improve it at runtime with corrections inferred by an NN.\n",
"\n",
" \n",
"Pretty much all numerical methods contain some form of iterative process: repeated updates over time for explicit solvers, or within a single update step for implicit solvers. \n",
"An example for the second case can be found [here](https://github.com/tum-pbs/CG-Solver-in-the-Loop),\n",
"but below we'll target the first case, i.e. iterations over time.\n",
"[[run in colab]](https://colab.research.google.com/github/tum-pbs/pbdl-book/blob/main/diffphys-code-sol.ipynb)\n",
"\n",
"\n",
"## Problem formulation\n",
"\n",
"In the context of reducing errors, it's crucial to have a _differentiable physics solver_, so that the learning process can take the reaction of the solver into account. This interaction is not possible with supervised learning or PINN training. Even small inference errors of a supervised NN can accumulate over time, and lead to a data distribution that differs from the distribution of the pre-computed data. This distribution shift can lead to sub-optimal results, or even cause blow-ups of the solver.\n",
"\n",
"In order to learn the error function, we'll consider two different discretizations of the same PDE $\\mathcal P^*$: \n",
"a _reference_ version, which we assume to be accurate, with a discretized version \n",
"$\\mathcal P_r$, and solutions $\\mathbf r \\in \\mathscr R$, where $\\mathscr R$ denotes the manifold of solutions of $\\mathcal P_r$.\n",
"In parallel to this, we have a less accurate approximation of the same PDE, which we'll refer to as the _source_ version, as this will be the solver that our NN should later on interact with. Analogously,\n",
"we have $\\mathcal P_s$ with solutions $\\mathbf s \\in \\mathscr S$.\n",
"After training, we'll obtain a _hybrid_ solver that uses $\\mathcal P_s$ in conjunction with a trained network to obtain improved solutions, i.e., solutions that are closer to the ones produced by $\\mathcal P_r$.\n",
"\n",
"```{figure} resources/diffphys-sol-manifolds.jpeg\n",
"---\n",
"height: 150px\n",
"name: diffphys-sol-manifolds\n",
"---\n",
"Visual overview of coarse and reference manifolds\n",
"```\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tayrJa7_ZzS_"
},
"source": [
"\n",
"Let's assume $\\mathcal{P}$ advances a solution by a time step $\\Delta t$, and let's denote $n$ consecutive steps by a superscript:\n",
"$\n",
"\\newcommand{\\pde}{\\mathcal{P}}\n",
"\\newcommand{\\pdec}{\\pde_{s}}\n",
"\\newcommand{\\vc}[1]{\\mathbf{s}_{#1}} \n",
"\\newcommand{\\vr}[1]{\\mathbf{r}_{#1}} \n",
"\\newcommand{\\vcN}{\\mathbf{s}} \n",
"\\newcommand{\\project}{\\mathcal{T}} \n",
"\\pdec^n ( \\mathcal{T} \\vr{t} ) = \\pdec(\\pdec(\\cdots \\pdec( \\mathcal{T} \\vr{t} )\\cdots)) .\n",
"$ \n",
"The corresponding state of the simulation is\n",
"$\n",
"\\mathbf{s}_{t+n} = \\mathcal{P}_{s}^n ( \\mathcal{T} \\mathbf{r}_{t} ) .\n",
"$\n",
"Here we assume a mapping operator $\\mathcal{T}$ exists that transfers a reference solution to the source manifold. This could, e.g., be a simple downsampling operation.\n",
"Especially for longer sequences, i.e. larger $n$, the source state \n",
"$\\newcommand{\\vc}[1]{\\mathbf{s}_{#1}} \\vc{t+n}$\n",
"will deviate from a corresponding reference state\n",
"$\\newcommand{\\vr}[1]{\\mathbf{r}_{#1}} \\vr{t+n}$. \n",
"This is what we will address with an NN in the following.\n",
"\n",
"As before, we'll use an $L^2$-norm to quantify the deviations, i.e., \n",
"an error function $\\newcommand{\\loss}{e} \n",
"\\newcommand{\\corr}{\\mathcal{C}} \n",
"\\newcommand{\\vc}[1]{\\mathbf{s}_{#1}} \n",
"\\newcommand{\\vr}[1]{\\mathbf{r}_{#1}} \n",
"\\loss (\\vc{t},\\mathcal{T} \\vr{t})=\\Vert\\vc{t}-\\mathcal{T} \\vr{t}\\Vert_2$. \n",
"Our learning goal is to train a correction operator \n",
"$\\mathcal{C} ( \\mathbf{s} )$ such that \n",
"a solution to which the correction is applied has a lower error than the original unmodified (source) \n",
"solution: $\\newcommand{\\loss}{e} \n",
"\\newcommand{\\corr}{\\mathcal{C}} \n",
"\\newcommand{\\vr}[1]{\\mathbf{r}_{#1}} \n",
"\\loss ( \\mathcal{P}_{s}( \\corr (\\mathcal{T} \\vr{t}) ) , \\mathcal{T} \\vr{t+1}) < \\loss ( \\mathcal{P}_{s}( \\mathcal{T} \\vr{t} ), \\mathcal{T} \\vr{t+1})$. \n",
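"\n",
"As a tiny numerical illustration of this error metric, here's a sketch with made-up arrays standing in for $\\mathbf{s}_t$ and $\\mathbf{r}_t$, and a hypothetical `downsample` function in place of $\\mathcal{T}$:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"s = np.array([0.9, 1.1, 2.2, 3.8])                      # coarse source state\n",
"r = np.array([1.0, 1.0, 1.0, 1.0, 2.0, 2.0, 4.0, 4.0])  # fine reference state\n",
"\n",
"def downsample(x):  # hypothetical stand-in for T: average pairs of cells\n",
"    return x.reshape(-1, 2).mean(axis=1)\n",
"\n",
"err = np.linalg.norm(s - downsample(r))  # e(s, T r) = || s - T r ||_2\n",
"```\n",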
"\n",
"The correction function \n",
"$\\newcommand{\\vcN}{\\mathbf{s}} \\newcommand{\\corr}{\\mathcal{C}} \\corr (\\vcN | \\theta)$ \n",
"is represented as a deep neural network with weights $\\theta$\n",
"and receives the state $\\mathbf{s}$ to infer an additive correction field with the same dimension.\n",
"To distinguish the original states $\\mathbf{s}$ from the corrected ones, we'll denote the latter with an added tilde $\\tilde{\\mathbf{s}}$.\n",
"The overall learning goal now becomes\n",
"\n",
"$$\n",
"\\newcommand{\\corr}{\\mathcal{C}} \n",
"\\newcommand{\\vr}[1]{\\mathbf{r}_{#1}} \n",
"\\text{arg min}_\\theta \\big( ( \\mathcal{P}_{s} \\corr )^n ( \\mathcal{T} \\vr{t} ) - \\mathcal{T} \\vr{t+n} \\big)^2\n",
"$$\n",
"\n",
"To simplify the notation, we've dropped the sum over different samples here (the $i$ from previous versions).\n",
"A crucial bit that's easy to overlook in the equation above is that the correction depends on the modified states, i.e.\n",
"it is a function of\n",
"$\\tilde{\\mathbf{s}}$, so we have \n",
"$\\newcommand{\\vctN}{\\tilde{\\mathbf{s}}} \\newcommand{\\corr}{\\mathcal{C}} \\corr (\\vctN | \\theta)$.\n",
"These states evolve over the course of training - they don't exist beforehand.\n",
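"\n",
"Conceptually, the corrected unrolling $( \\mathcal{P}_{s} \\mathcal{C} )^n$ can be sketched as follows - a minimal example in which a toy damping step stands in for $\\mathcal{P}_s$ and a hypothetical linear `correction` function replaces the NN:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def source_step(s):  # toy stand-in for the source solver P_s\n",
"    return 0.9 * s\n",
"\n",
"def correction(s, theta=0.05):  # hypothetical stand-in for C(s|theta)\n",
"    return theta * s            # additive field with the same shape as s\n",
"\n",
"def hybrid_rollout(s0, n):\n",
"    # each state passed on to the next step is already corrected (tilde-s)\n",
"    states = [s0]\n",
"    for _ in range(n):\n",
"        s = source_step(states[-1])\n",
"        states.append(s + correction(s))\n",
"    return states\n",
"\n",
"states = hybrid_rollout(np.ones(4), n=3)\n",
"```\n",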
"\n",
"**TL;DR**:\n",
"We'll train a network $\\mathcal{C}$ to reduce the numerical errors of a simulator relative to a more accurate reference. It's crucial to have the _source_ solver realized as a differentiable physics operator, such that it can provide gradients for an improved training of $\\mathcal{C}$.\n",
"\n",
" \n",
"\n",
"---\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "hPgwGkzYdIww"
},
"source": [
"## Getting started with the implementation\n",
"\n",
"The following replicates an experiment from [Solver-in-the-loop: learning from differentiable physics to interact with iterative pde-solvers](https://ge.in.tum.de/publications/2020-um-solver-in-the-loop/) {cite}`um2020sol`; further details can be found in section B.1 of the [appendix](https://arxiv.org/pdf/2007.00016.pdf) of the paper.\n",
"\n",
"First, let's download the prepared data set (for details on generation & loading cf. https://github.com/tum-pbs/Solver-in-the-Loop), and let's get the data handling out of the way, so that we can focus on the _interesting_ parts..."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "JwZudtWauiGa",
"outputId": "30ab90f0-4b0c-4451-81da-f85887aeb7b9"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading training data (73MB), this can take a moment the first time...\n",
"Loaded data, 6 training sims\n"
]
}
],
"source": [
"import os, sys, logging, argparse, pickle, glob, random, distutils.dir_util, urllib.request\n",
"\n",
"fname_train = 'sol-karman-2d-train.pickle'\n",
"if not os.path.isfile(fname_train):\n",
" print(\"Downloading training data (73MB), this can take a moment the first time...\")\n",
" urllib.request.urlretrieve(\"https://physicsbaseddeeplearning.org/data/\"+fname_train, fname_train)\n",
"\n",
"with open(fname_train, 'rb') as f: data_preloaded = pickle.load(f)\n",
"print(\"Loaded data, {} training sims\".format(len(data_preloaded)) )\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "RY1F4kdWPLNG"
},
"source": [
"Let's also get the installation and imports of all the necessary libraries out of the way. And while we're at it, we can set the random seed - obviously, 42 is the ultimate choice here \ud83d\ude42"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "BGN4GqxkIueM",
"outputId": "d934bf06-b6b9-41ce-be11-d1d6d3561c89"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": []
}
],
"source": [
"#!pip install --upgrade --quiet phiflow\n",
"#!pip uninstall phiflow\n",
"!pip install --upgrade --quiet git+https://github.com/tum-pbs/PhiFlow@develop\n",
"\n",
"from phi.tf.flow import *\n",
"import tensorflow as tf\n",
"from tensorflow import keras\n",
"\n",
"random.seed(42) \n",
"np.random.seed(42)\n",
"tf.random.set_seed(42)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "OhnzPdoww11P"
},
"source": [
"## Simulation setup\n",
"\n",
"Now we can set up the _source_ simulation $\\mathcal{P}_{s}$. \n",
"Note that we won't deal with \n",
"$\\mathcal{P}_{r}$\n",
"below: the downsampled reference data is contained in the training data set. It was generated with a four times finer discretization. Below we're focusing on the interaction of the source solver and the NN. \n",
"\n",
"This code block and the next ones will define a number of functions that will be used for training later on.\n",
"\n",
"The `KarmanFlow` solver below simulates a relatively standard wake flow case with a spherical obstacle in a rectangular domain, and an explicit viscosity solve to obtain different Reynolds numbers. This is the geometry of the setup:\n",
"\n",
"```{figure} resources/diffphys-sol-domain.png\n",
"---\n",
"height: 200px\n",
"name: diffphys-sol-domain\n",
"---\n",
"Domain setup for the wake flow case (sizes in the implementation use an additional factor of 100).\n",
"```\n",
"\n",
"The solver applies inflow boundary conditions for the y-velocity with a pre-multiplied mask (`vel_BcMask`), to set the y components at the bottom of the domain during the simulation step. This mask is created with the `HardGeometryMask` from phiflow, which initializes the spatially shifted entries for the components of a staggered grid correctly. The simulation step is quite straightforward: it computes contributions for viscosity, inflow, advection, and finally makes the resulting motion divergence-free via an implicit pressure solve:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "6WNMcdWUw4EP"
},
"outputs": [],
"source": [
"class KarmanFlow():\n",
" def __init__(self, domain):\n",
" self.domain = domain\n",
"\n",
" self.vel_BcMask = self.domain.staggered_grid(HardGeometryMask(Box[:5, :]) )\n",
" \n",
" self.inflow = self.domain.scalar_grid(Box[5:10, 25:75]) # scale with domain if necessary!\n",
" self.obstacles = [Obstacle(Sphere(center=[50, 50], radius=10))] \n",
"\n",
" def step(self, density_in, velocity_in, re, res, buoyancy_factor=0, dt=1.0):\n",
" velocity = velocity_in\n",
" density = density_in\n",
"\n",
" # viscosity\n",
" velocity = phi.flow.diffuse.explicit(field=velocity, diffusivity=1.0/re*dt*res*res, dt=dt)\n",
" \n",
" # inflow boundary conditions\n",
" velocity = velocity*(1.0 - self.vel_BcMask) + self.vel_BcMask * (1,0)\n",
"\n",
" # advection \n",
" density = advect.semi_lagrangian(density+self.inflow, velocity, dt=dt)\n",
" velocity = advected_velocity = advect.semi_lagrangian(velocity, velocity, dt=dt)\n",
"\n",
" # mass conservation (pressure solve)\n",
" pressure = None\n",
" velocity, pressure = fluid.make_incompressible(velocity, self.obstacles)\n",
" self.solve_info = { 'pressure': pressure, 'advected_velocity': advected_velocity }\n",
" \n",
" return [density, velocity]\n",
"\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "RYFUGICgxk0K"
},
"source": [
"## Network architecture\n",
"\n",
"We'll also define two alternative neural network architectures to represent \n",
"$\\newcommand{\\vcN}{\\mathbf{s}} \\newcommand{\\corr}{\\mathcal{C}} \\corr$. In both cases we'll use fully convolutional networks, i.e. networks without any fully-connected layers. We'll use Keras within tensorflow to define the layers of the network (mostly via `Conv2D`), activated via LeakyReLU functions.\n",
"The inputs to the network are: \n",
"- 2 fields with x,y velocity\n",
"- the Reynolds number as constant channel.\n",
"\n",
"The output is: \n",
"- a 2 component field containing the x,y velocity.\n",
"\n",
"First, let's define a small network consisting only of four convolutional layers with LeakyReLU activations (we're also using keras here for simplicity). The input dimensions are determined from the input tensor in the `inputs_dict` (it has three channels: u, v, and Re). We then process the data via three conv layers with 32 features each, before reducing to 2 channels in the output. "
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"id": "qIrWYTy6xscA"
},
"outputs": [],
"source": [
"def network_small(inputs_dict):\n",
" l_input = keras.layers.Input(**inputs_dict)\n",
" block_0 = keras.layers.Conv2D(filters=32, kernel_size=5, padding='same')(l_input)\n",
" block_0 = keras.layers.LeakyReLU()(block_0)\n",
"\n",
" l_conv1 = keras.layers.Conv2D(filters=32, kernel_size=5, padding='same')(block_0)\n",
" l_conv1 = keras.layers.LeakyReLU()(l_conv1)\n",
" l_conv2 = keras.layers.Conv2D(filters=32, kernel_size=5, padding='same')(l_conv1)\n",
" block_1 = keras.layers.LeakyReLU()(l_conv2)\n",
"\n",
" l_output = keras.layers.Conv2D(filters=2, kernel_size=5, padding='same')(block_1) # u, v\n",
" return keras.models.Model(inputs=l_input, outputs=l_output)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YfHvdI7yxtdj"
},
"source": [
"For flexibility (and larger-scale tests later on), let's also define a _proper_ ResNet with a few more layers. This architecture is the one from the original paper, and will give a fairly good performance (`network_small` above will train faster, but give a sub-optimal performance at inference time)."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"id": "TyfpA7Fbx0ro"
},
"outputs": [],
"source": [
"def network_medium(inputs_dict):\n",
" l_input = keras.layers.Input(**inputs_dict)\n",
" block_0 = keras.layers.Conv2D(filters=32, kernel_size=5, padding='same')(l_input)\n",
" block_0 = keras.layers.LeakyReLU()(block_0)\n",
"\n",
" l_conv1 = keras.layers.Conv2D(filters=32, kernel_size=5, padding='same')(block_0)\n",
" l_conv1 = keras.layers.LeakyReLU()(l_conv1)\n",
" l_conv2 = keras.layers.Conv2D(filters=32, kernel_size=5, padding='same')(l_conv1)\n",
" l_skip1 = keras.layers.add([block_0, l_conv2])\n",
" block_1 = keras.layers.LeakyReLU()(l_skip1)\n",
"\n",
" l_conv3 = keras.layers.Conv2D(filters=32, kernel_size=5, padding='same')(block_1)\n",
" l_conv3 = keras.layers.LeakyReLU()(l_conv3)\n",
" l_conv4 = keras.layers.Conv2D(filters=32, kernel_size=5, padding='same')(l_conv3)\n",
" l_skip2 = keras.layers.add([block_1, l_conv4])\n",
" block_2 = keras.layers.LeakyReLU()(l_skip2)\n",
"\n",
" l_conv5 = keras.layers.Conv2D(filters=32, kernel_size=5, padding='same')(block_2)\n",
" l_conv5 = keras.layers.LeakyReLU()(l_conv5)\n",
" l_conv6 = keras.layers.Conv2D(filters=32, kernel_size=5, padding='same')(l_conv5)\n",
" l_skip3 = keras.layers.add([block_2, l_conv6])\n",
" block_3 = keras.layers.LeakyReLU()(l_skip3)\n",
"\n",
" l_conv7 = keras.layers.Conv2D(filters=32, kernel_size=5, padding='same')(block_3)\n",
" l_conv7 = keras.layers.LeakyReLU()(l_conv7)\n",
" l_conv8 = keras.layers.Conv2D(filters=32, kernel_size=5, padding='same')(l_conv7)\n",
" l_skip4 = keras.layers.add([block_3, l_conv8])\n",
" block_4 = keras.layers.LeakyReLU()(l_skip4)\n",
"\n",
" l_conv9 = keras.layers.Conv2D(filters=32, kernel_size=5, padding='same')(block_4)\n",
" l_conv9 = keras.layers.LeakyReLU()(l_conv9)\n",
" l_convA = keras.layers.Conv2D(filters=32, kernel_size=5, padding='same')(l_conv9)\n",
" l_skip5 = keras.layers.add([block_4, l_convA])\n",
" block_5 = keras.layers.LeakyReLU()(l_skip5)\n",
"\n",
" l_output = keras.layers.Conv2D(filters=2, kernel_size=5, padding='same')(block_5)\n",
" return keras.models.Model(inputs=l_input, outputs=l_output)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ew-MgPSlyLW-"
},
"source": [
"Next, we're coming to two functions which are pretty important: they transform the simulation state into an input tensor for the network, and vice versa. Hence, they're the interface between _keras/tensorflow_ and _phiflow_.\n",
"\n",
"The `to_keras` function uses the two vector components via `vector['x']` and `vector['y']` to discard the outermost layer of the velocity field grids. This gives two tensors of equal size that can be combined. \n",
"It then adds a constant channel via `math.ones` that is multiplied by the desired Reynolds number in `ext_const_channel`. The resulting grids are stacked along the `channels` dimension, and represent an input to the neural network. \n",
"\n",
"After network evaluation, we transform the output tensor back into a phiflow grid via the `to_phiflow` function. \n",
"It converts the 2-component tensor that is returned by the network into a phiflow staggered grid object, so that it is compatible with the velocity field of the fluid simulation.\n",
"(Note: these are two _centered_ grids with different sizes, so we leave the work to the `domain.staggered_grid` function, which also sets physical size and boundary conditions as given by the domain object)."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"id": "hhGFpTjGyRyg"
},
"outputs": [],
"source": [
"\n",
"def to_keras(dens_vel_grid_array, ext_const_channel):\n",
" # align the sides the staggered velocity grid making its size the same as the centered grid\n",
" return math.stack(\n",
" [\n",
" math.pad( dens_vel_grid_array[1].vector['x'].values, {'x':(0,1)} , math.extrapolation.ZERO),\n",
" dens_vel_grid_array[1].vector['y'].y[:-1].values, # v\n",
" math.ones(dens_vel_grid_array[0].shape)*ext_const_channel # Re\n",
" ],\n",
" math.channel('channels')\n",
" )\n",
"\n",
"def to_phiflow(tf_tensor, domain):\n",
" return domain.staggered_grid(\n",
" math.stack(\n",
" [\n",
" math.tensor(tf.pad(tf_tensor[..., 1], [(0,0), (0,1), (0,0)]), math.batch('batch'), math.spatial('y, x')), # v\n",
" math.tensor( tf_tensor[...,:-1, 0], math.batch('batch'), math.spatial('y, x')), # u \n",
" ], math.channel('vector')\n",
" )\n",
" )\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "VngMwN_9y00S"
},
"source": [
"---\n",
"\n",
"## Data handling\n",
"\n",
"So far so good - we also need to take care of a few more mundane tasks, e.g., some data handling and randomization. Below we define a `Dataset` class that stores all \"ground truth\" reference data (already downsampled).\n",
"\n",
"We actually have a lot of data dimensions: multiple simulations, with many time steps, each with different fields. This makes the code below a bit more difficult to read.\n",
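"\n",
"To make the nesting concrete, here's a mock version of this layout, assembled with zero-filled arrays (a sketch only - the real fields come from the downloaded pickle, and the actual velocity grids are staggered, hence slightly differing in size):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"frames, ysize, xsize = 4, 64, 32\n",
"mock_preloaded = {  # {'sim_name': [ [density, u, v] per frame ]}\n",
"    'karman-mock/sim_000000': [\n",
"        [np.zeros([1, ysize, xsize, 1]),  # marker density\n",
"         np.zeros([1, ysize, xsize, 1]),  # x-velocity\n",
"         np.zeros([1, ysize, xsize, 1])]  # y-velocity\n",
"        for _ in range(frames)\n",
"    ]\n",
"}\n",
"dens0 = mock_preloaded['karman-mock/sim_000000'][0][0]  # sim, frame, field\n",
"```\n",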
"\n",
"The data format of the numpy arrays in `dataPreloaded` is `['sim_name', frame, field (dens & vel)]`, where each field has dimension `[batch-size, y-size, x-size, channels]` (this is the standard for a phiflow export)."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"id": "tjywcdD2y20t"
},
"outputs": [],
"source": [
"class Dataset():\n",
" def __init__(self, data_preloaded, num_frames, num_sims=None, batch_size=1, is_testset=False):\n",
" self.epoch = None\n",
" self.epochIdx = 0\n",
" self.batch = None\n",
" self.batchIdx = 0\n",
" self.step = None\n",
" self.stepIdx = 0\n",
"\n",
" self.dataPreloaded = data_preloaded\n",
" self.batchSize = batch_size\n",
"\n",
" self.numSims = num_sims\n",
" self.numBatches = num_sims//batch_size\n",
" self.numFrames = num_frames\n",
" self.numSteps = num_frames\n",
" \n",
" # initialize directory keys (using naming scheme from SoL codebase)\n",
" # constant additional per-sim channel: Reynolds numbers from data generation\n",
" # hard coded for training and test data here\n",
" if not is_testset:\n",
" self.dataSims = ['karman-fdt-hires-set/sim_%06d'%i for i in range(num_sims) ]\n",
" ReNrs = [160000.0, 320000.0, 640000.0, 1280000.0, 2560000.0, 5120000.0]\n",
" self.extConstChannelPerSim = { self.dataSims[i]:[ReNrs[i]] for i in range(num_sims) }\n",
" else:\n",
" self.dataSims = ['karman-fdt-hires-testset/sim_%06d'%i for i in range(num_sims) ]\n",
" ReNrs = [120000.0, 480000.0, 1920000.0, 7680000.0] \n",
" self.extConstChannelPerSim = { self.dataSims[i]:[ReNrs[i]] for i in range(num_sims) }\n",
"\n",
" self.dataFrames = [ np.arange(num_frames) for _ in self.dataSims ] \n",
"\n",
" # debugging example, check shape of a single marker density field:\n",
" #print(format(self.dataPreloaded[self.dataSims[0]][0][0].shape )) \n",
" \n",
" # the data has the following shape ['sim', frame, field (dens/vel)] where each field is [batch-size, y-size, x-size, channels]\n",
" self.resolution = self.dataPreloaded[self.dataSims[0]][0][0].shape[1:3] \n",
"\n",
" # compute data statistics for normalization\n",
" self.dataStats = {\n",
" 'std': (\n",
" np.std(np.concatenate([np.absolute(self.dataPreloaded[asim][i][0].reshape(-1)) for asim in self.dataSims for i in range(num_frames)], axis=-1)), # density\n",
" np.std(np.concatenate([np.absolute(self.dataPreloaded[asim][i][1].reshape(-1)) for asim in self.dataSims for i in range(num_frames)], axis=-1)), # x-velocity\n",
" np.std(np.concatenate([np.absolute(self.dataPreloaded[asim][i][2].reshape(-1)) for asim in self.dataSims for i in range(num_frames)], axis=-1)), # y-velocity\n",
" )\n",
" }\n",
" self.dataStats.update({\n",
" 'ext.std': [ np.std([np.absolute(self.extConstChannelPerSim[asim][0]) for asim in self.dataSims]) ] # Reynolds Nr\n",
" })\n",
"\n",
" \n",
" if not is_testset:\n",
" print(\"Data stats: \"+format(self.dataStats))\n",
"\n",
"\n",
" # re-shuffle data for next epoch\n",
" def newEpoch(self, exclude_tail=0, shuffle_data=True):\n",
" self.numSteps = self.numFrames - exclude_tail\n",
" simSteps = [ (asim, self.dataFrames[i][0:(len(self.dataFrames[i])-exclude_tail)]) for i,asim in enumerate(self.dataSims) ]\n",
" sim_step_pair = []\n",
" for i,_ in enumerate(simSteps):\n",
" sim_step_pair += [ (i, astep) for astep in simSteps[i][1] ] # (sim_idx, step) ...\n",
"\n",
" if shuffle_data: random.shuffle(sim_step_pair)\n",
" self.epoch = [ list(sim_step_pair[i*self.numSteps:(i+1)*self.numSteps]) for i in range(self.batchSize*self.numBatches) ]\n",
" self.epochIdx += 1\n",
" self.batchIdx = 0\n",
" self.stepIdx = 0\n",
"\n",
" def nextBatch(self): \n",
" self.batchIdx += self.batchSize\n",
" self.stepIdx = 0\n",
"\n",
" def nextStep(self):\n",
" self.stepIdx += 1\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "twIMJ3V0N1FX"
},
"source": [
"The `newEpoch`, `nextBatch`, and `nextStep` functions will be called at training time to randomize the order of the training data.\n",
"\n",
"Now we need one more function that compiles the data for a mini batch to train with, called `getData` below. It returns batches of the desired size in terms of marker density, velocity, and Reynolds number.\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"id": "Dfwd4TnqN1Tn"
},
"outputs": [],
"source": [
"# for class Dataset():\n",
"def getData(self, consecutive_frames):\n",
" d_hi = [\n",
" np.concatenate([\n",
" self.dataPreloaded[\n",
" self.dataSims[self.epoch[self.batchIdx+i][self.stepIdx][0]] # sim_key\n",
" ][\n",
" self.epoch[self.batchIdx+i][self.stepIdx][1]+j # frames\n",
" ][0]\n",
" for i in range(self.batchSize)\n",
" ], axis=0) for j in range(consecutive_frames+1)\n",
" ]\n",
" u_hi = [\n",
" np.concatenate([\n",
" self.dataPreloaded[\n",
" self.dataSims[self.epoch[self.batchIdx+i][self.stepIdx][0]] # sim_key\n",
" ][\n",
" self.epoch[self.batchIdx+i][self.stepIdx][1]+j # frames\n",
" ][1]\n",
" for i in range(self.batchSize)\n",
" ], axis=0) for j in range(consecutive_frames+1)\n",
" ]\n",
" v_hi = [\n",
" np.concatenate([\n",
" self.dataPreloaded[\n",
" self.dataSims[self.epoch[self.batchIdx+i][self.stepIdx][0]] # sim_key\n",
" ][\n",
" self.epoch[self.batchIdx+i][self.stepIdx][1]+j # frames\n",
" ][2]\n",
" for i in range(self.batchSize)\n",
" ], axis=0) for j in range(consecutive_frames+1)\n",
" ]\n",
" ext = [\n",
" self.extConstChannelPerSim[\n",
" self.dataSims[self.epoch[self.batchIdx+i][self.stepIdx][0]]\n",
" ][0] for i in range(self.batchSize)\n",
" ]\n",
" return [d_hi, u_hi, v_hi, ext]\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "bIWnyPYlz8q7"
},
"source": [
"Note that the `density` here denotes a passively advected marker field, and not the density of the fluid. Below we'll be focusing on the velocity only; the marker density is tracked purely for visualization purposes.\n",
"\n",
"After all the definitions we can finally run some code. We can define the dataset object with the downloaded data from the first cell."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "59EBdEdj0QR2",
"outputId": "d9282614-d514-47d8-b911-c262c81c252e"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Data stats: {'std': (2.6542656, 0.23155601, 0.3066732), 'ext.std': [1732512.6262166172]}\n"
]
}
],
"source": [
"nsims = 6\n",
"batch_size = 3\n",
"simsteps = 500\n",
"\n",
"dataset = Dataset( data_preloaded=data_preloaded, num_frames=simsteps, num_sims=nsims, batch_size=batch_size )"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0N92RooWPzeA"
},
"source": [
"Additionally, we've defined several global variables to control the training and the simulation in the next code cells.\n",
"\n",
"The most important and interesting one is `msteps`. It defines the number of simulation steps that are unrolled at each training iteration. This directly influences the runtime of each training step, as we first have to simulate all steps forward, and then backpropagate the gradient through all `msteps` simulation steps interleaved with the NN evaluations. However, this is where we receive important feedback, in terms of gradients, about how the inferred corrections actually influence a running simulation. Hence, larger `msteps` are typically better.\n",
"\n",
"In addition we define the resolution of the simulation in `source_res`, and allocate the fluid solver object called `simulator`. In order to create grids, it requires access to a `Domain` object, which mostly exists for convenience purposes: it stores resolution, physical size in `bounds`, and boundary conditions of the domain. This information needs to be passed to every grid, and hence it's convenient to have it in one place in the form of the `Domain`. For the setup described above, we need different boundary conditions along x and y: closed walls, and free flow in and out of the domain, respectively.\n",
"\n",
"We also instantiate the actual NN `network` in the next cell. "
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "EjgkdCzKP2Ip",
"outputId": "2d4b34f6-2d40-4273-fc2c-1dac7fe786cb"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Model: \"model\"\n",
"_________________________________________________________________\n",
"Layer (type) Output Shape Param # \n",
"=================================================================\n",
"input_1 (InputLayer) [(None, 64, 32, 3)] 0 \n",
"_________________________________________________________________\n",
"conv2d (Conv2D) (None, 64, 32, 32) 2432 \n",
"_________________________________________________________________\n",
"leaky_re_lu (LeakyReLU) (None, 64, 32, 32) 0 \n",
"_________________________________________________________________\n",
"conv2d_1 (Conv2D) (None, 64, 32, 32) 25632 \n",
"_________________________________________________________________\n",
"leaky_re_lu_1 (LeakyReLU) (None, 64, 32, 32) 0 \n",
"_________________________________________________________________\n",
"conv2d_2 (Conv2D) (None, 64, 32, 32) 25632 \n",
"_________________________________________________________________\n",
"leaky_re_lu_2 (LeakyReLU) (None, 64, 32, 32) 0 \n",
"_________________________________________________________________\n",
"conv2d_3 (Conv2D) (None, 64, 32, 2) 1602 \n",
"=================================================================\n",
"Total params: 55,298\n",
"Trainable params: 55,298\n",
"Non-trainable params: 0\n",
"_________________________________________________________________\n"
]
}
],
"source": [
"# one of the most crucial! how many simulation steps to look into the future while training\n",
"msteps = 4\n",
"\n",
"# this is the actual resolution in terms of cells\n",
"source_res = list(dataset.resolution)\n",
"# this is a virtual size, in terms of abstract units for the bounding box of the domain (it's important for conversions or when rescaling to physical units)\n",
"simulation_length = 100.\n",
"\n",
"# for readability\n",
"from phi.physics._boundaries import Domain, OPEN, STICKY as CLOSED\n",
"\n",
"boundary_conditions = {\n",
" 'x':(phi.physics._boundaries.STICKY,phi.physics._boundaries.STICKY), \n",
" 'y':(phi.physics._boundaries.OPEN, phi.physics._boundaries.OPEN) }\n",
"\n",
"domain = Domain(y=source_res[0], x=source_res[1], bounds=Box[0:2*simulation_length, 0:simulation_length], boundaries=boundary_conditions)\n",
"simulator = KarmanFlow(domain=domain)\n",
"\n",
"network = network_small(dict(shape=(source_res[0],source_res[1], 3)))\n",
"network.summary()\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "AbpNPzplQZMF"
},
"source": [
"## Interleaving simulation and NN\n",
"\n",
"Now comes the **most crucial** step in the whole setup: we define a function that encapsulates the chain of simulation steps and network evaluations in each training step. After all the work defining helper functions, it's actually pretty simple: we create a gradient tape via `tf.GradientTape()` such that we can backpropagate later on. We then loop over `msteps`, call the simulator via `simulator.step` for an input state, and afterwards evaluate the correction via `network(to_keras(...))`. The NN correction is then added to the last simulation state in the `prediction` list (we're actually simply overwriting the last simulated velocity `prediction[-1][1]` with `prediction[-1][1] + correction[-1]`).\n",
"\n",
"One other important thing that's happening here is normalization: the inputs to the network are divided by the standard deviations in `dataset.dataStats`. After evaluating the `network`, we only have a velocity left, so we can simply multiply by the standard deviation of the velocity again (via `* dataset.dataStats['std'][1]` and `[2]`).\n",
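"\n",
"In isolation, this normalization round-trip looks as follows (a sketch with made-up statistics in place of `dataset.dataStats`):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"std_u, std_v, std_re = 0.23, 0.31, 1.7e6  # made-up standard deviations\n",
"net_in = np.ones([1, 64, 32, 3])          # u, v, Re channels\n",
"\n",
"# divide the inputs by the per-channel standard deviations...\n",
"net_in = net_in / np.array([std_u, std_v, std_re])\n",
"\n",
"# ...and scale the 2-channel velocity output back up afterwards\n",
"net_out = np.ones([1, 64, 32, 2])         # stand-in for network(...)\n",
"correction = net_out * np.array([std_u, std_v])\n",
"```\n",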
"\n",
"The `training_step` function also directly evaluates and returns the loss. Here, we can simply use an $L^2$ loss over the whole sequence, i.e. the iteration over `msteps`. This requires a few lines of code because we loop over the 'x' and 'y' components separately, in order to normalize them and compare to the ground truth values from the training data set.\n",
"\n",
"The \"learning\" happens in the last two lines via `tape.gradient()` and `opt.apply_gradients()`, which then contain the aggregated information about how to change the NN weights to nudge the simulation closer to the reference for the full chain of simulation steps."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"id": "D5NeMcLGQaxh",
"scrolled": true
},
"outputs": [],
"source": [
"def training_step(dens_gt, vel_gt, Re, i_step):\n",
" with tf.GradientTape() as tape:\n",
" prediction, correction = [ [dens_gt[0],vel_gt[0]] ], [0] # predicted states with correction, inferred velocity corrections\n",
"\n",
" for i in range(msteps):\n",
" prediction += [\n",
" simulator.step(\n",
" density_in=prediction[-1][0],\n",
" velocity_in=prediction[-1][1],\n",
" re=Re, res=source_res[1],\n",
" )\n",
" ] # prediction: [[density1, velocity1], [density2, velocity2], ...]\n",
"\n",
" model_input = to_keras(prediction[-1], Re)\n",
" model_input /= math.tensor([dataset.dataStats['std'][1], dataset.dataStats['std'][2], dataset.dataStats['ext.std'][0]], channel('channels')) # [u, v, Re]\n",
" model_out = network(model_input.native(['batch', 'y', 'x', 'channels']), training=True)\n",
" model_out *= [dataset.dataStats['std'][1], dataset.dataStats['std'][2]] # [u, v]\n",
" correction += [ to_phiflow(model_out, domain) ] # [velocity_correction1, velocity_correction2, ...]\n",
"\n",
" prediction[-1][1] = prediction[-1][1] + correction[-1]\n",
" #prediction[-1][1] = correction[-1]\n",
"\n",
" # evaluate loss\n",
" loss_steps_x = [\n",
" tf.nn.l2_loss(\n",
" (\n",
" vel_gt[i].vector['x'].values.native(('batch', 'y', 'x'))\n",
" - prediction[i][1].vector['x'].values.native(('batch', 'y', 'x'))\n",
" )/dataset.dataStats['std'][1]\n",
" )\n",
" for i in range(1,msteps+1)\n",
" ]\n",
" loss_steps_x_sum = tf.math.reduce_sum(loss_steps_x)\n",
"\n",
" loss_steps_y = [\n",
" tf.nn.l2_loss(\n",
" (\n",
" vel_gt[i].vector['y'].values.native(('batch', 'y', 'x'))\n",
" - prediction[i][1].vector['y'].values.native(('batch', 'y', 'x'))\n",
" )/dataset.dataStats['std'][2]\n",
" )\n",
" for i in range(1,msteps+1)\n",
" ]\n",
" loss_steps_y_sum = tf.math.reduce_sum(loss_steps_y)\n",
"\n",
" loss = (loss_steps_x_sum + loss_steps_y_sum)/msteps\n",
"\n",
" gradients = tape.gradient(loss, network.trainable_variables)\n",
" opt.apply_gradients(zip(gradients, network.trainable_variables))\n",
"\n",
" return math.tensor(loss) \n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "c4yLlDM3QfUR"
},
"source": [
"Once defined, we can prepare this function for executing the training step by calling phiflow's `math.jit_compile()` function. It automatically maps to the correct pre-compilation step of the chosen backend. E.g., for TF this internally creates a computational graph, and optimizes the chain of operations. For JAX, it can even compile optimized GPU code (if JAX is set up correctly). Using the jit compilation can make a huge difference in terms of runtime."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"id": "K2JcO3-QQgC9"
},
"outputs": [],
"source": [
"\n",
"training_step_jit = math.jit_compile(training_step)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "E6Vly1_0QhZ1"
},
"source": [
"## Training\n",
"\n",
"For the training, we use a standard Adam optimizer, and run 15 epochs by default. This should be increased for the larger network or to obtain more accurate results. For longer training runs, it would also be beneficial to decrease the learning rate over the course of the epochs, but for simplicity, we'll keep `LR` constant here.\n",
"\n",
"Optionally, this is also the right point to load a network state to resume training."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"id": "PuljFamYQksW"
},
"outputs": [],
"source": [
"LR = 1e-4\n",
"EPOCHS = 15\n",
"\n",
"opt = tf.keras.optimizers.Adam(learning_rate=LR) \n",
"\n",
"# optional, load existing network...\n",
"# set to epoch nr. to load existing network from there\n",
"resume = 0\n",
"if resume>0: \n",
" ld_network = keras.models.load_model('./nn_epoch{:04d}.h5'.format(resume)) \n",
" #ld_network = keras.models.load_model('./nn_final.h5') # or the last one\n",
" network.set_weights(ld_network.get_weights())\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "lrALctV1RWBO"
},
"source": [
"Finally, we can start training the NN! This is very straight forward now, we simply loop over the desired number of iterations, get a batch each time via `getData`, feed it into the source simulation input `source_in`, and compare it in the loss with the `reference` data for the batch.\n",
"\n",
"The setup above will automatically take care that the differentiable physics solver used here provides the right gradient information, and provides it to the tensorflow network. Be warned: due to the complexity of the setup, this training run can take a while... (If you have a saved `nn_final.h5` network from a previous run, you can potentially skip this block and load the previously trained model instead via the cell above.)"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "m3Nd8YyHRVFQ",
"outputId": "686a3419-d022-4889-c0de-66e4e02953d1",
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch 001/015, batch 001/002, step 0001/0496: loss=2605.340576171875\n",
"epoch 001/015, batch 001/002, step 0002/0496: loss=1485.1646728515625\n",
"epoch 001/015, batch 001/002, step 0003/0496: loss=790.8267211914062\n",
"epoch 001/015, batch 001/002, step 0129/0496: loss=98.64994049072266\n",
"epoch 001/015, batch 001/002, step 0257/0496: loss=75.3546142578125\n",
"epoch 001/015, batch 001/002, step 0385/0496: loss=70.05519104003906\n",
"epoch 002/015, batch 001/002, step 0401/0496: loss=19.126527786254883\n",
"epoch 003/015, batch 001/002, step 0401/0496: loss=9.628664016723633\n",
"epoch 004/015, batch 001/002, step 0401/0496: loss=7.898053169250488\n",
"epoch 005/015, batch 001/002, step 0401/0496: loss=3.6936004161834717\n",
"epoch 006/015, batch 001/002, step 0401/0496: loss=3.172729730606079\n",
"epoch 007/015, batch 001/002, step 0401/0496: loss=2.8511123657226562\n",
"epoch 008/015, batch 001/002, step 0401/0496: loss=3.4968295097351074\n",
"epoch 009/015, batch 001/002, step 0401/0496: loss=1.6942076683044434\n",
"epoch 010/015, batch 001/002, step 0401/0496: loss=1.6551270484924316\n",
"epoch 011/015, batch 001/002, step 0401/0496: loss=1.9383186101913452\n",
"epoch 012/015, batch 001/002, step 0401/0496: loss=2.0140795707702637\n",
"epoch 013/015, batch 001/002, step 0401/0496: loss=1.4174892902374268\n",
"epoch 014/015, batch 001/002, step 0401/0496: loss=1.2593278884887695\n",
"epoch 015/015, batch 001/002, step 0401/0496: loss=1.250532627105713\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Training done, saved NN\n"
]
}
],
"source": [
"steps = 0\n",
"for j in range(EPOCHS): # training\n",
" dataset.newEpoch(exclude_tail=msteps)\n",
" if j0 and ib==0 and i==400): # reduce output \n",
" print('epoch {:03d}/{:03d}, batch {:03d}/{:03d}, step {:04d}/{:04d}: loss={}'.format( j+1, EPOCHS, ib+1, dataset.numBatches, i+1, dataset.numSteps, loss ))\n",
" \n",
" dataset.nextStep()\n",
"\n",
" dataset.nextBatch()\n",
"\n",
" if j%10==9: network.save('./nn_epoch{:04d}.h5'.format(j+1))\n",
"\n",
"# all done! save final version\n",
"network.save('./nn_final.h5'); print(\"Training done, saved NN\")\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "swG7GeDpWT_Z"
},
"source": [
"The loss should go down from above 1000 initially to below 10. This is a good sign, but of course it's even more important to see how the NN-solver combination fares on new inputs. With this training approach we've realized a hybrid solver, consisting of a regular _source_ simulator, and a network that was trained to specifically interact with this simulator for a chosen domain of simulation cases.\n",
"\n",
"Let's see how well this works by applying it to a set of test data inputs with new Reynolds numbers that were not part of the training data.\n",
"\n",
"To keep things somewhat simple, we won't aim for a high-performance version of our hybrid solver. For performance, please check out the external code base: the network trained here should be directly useable in [this apply script](https://github.com/tum-pbs/Solver-in-the-Loop/blob/master/karman-2d/karman_apply.py).\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0c38ne0UdIxV"
},
"source": [
"## Evaluation \n",
"\n",
"In order to evaluate the performance of our DL-powered solver, we essentially only need to repeat the inner loop of each training iteration for more steps. While we were limited to `msteps` evaluations at training time, we can now run our solver for arbitrary lengths. This is a good test for how well our solver has learned to keep the data within the desired distribution, and represents a generalization test for longer rollouts.\n",
"\n",
"We can reuse the solver code from above, but in the following, we will consider two simulated versions: for comparison, we'll run one reference simulation in the _source_ space (i.e., without any modifications). This version receives the regular outputs of each evaluation of the simulator, and ignores the learned correction (stored in `steps_source` below). The second version, repeatedly computes the source solver plus the learned correction, and advances this state in the solver (`steps_hybrid`).\n",
"\n",
"We also need a set of new data. Below, we'll download a new set of Reynolds numbers (in between the ones used for training), so that we can later on run the unmodified simulator and the DL-powered one on these cases.\n"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "RumKebW_05xp",
"outputId": "b119bc05-2f9d-4289-c951-f9f12627c7fb"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading test data (38MB), this can take a moment the first time...\n",
"Loaded test data, 4 training sims\n"
]
}
],
"source": [
"fname_test = 'sol-karman-2d-test.pickle'\n",
"if not os.path.isfile(fname_test):\n",
" print(\"Downloading test data (38MB), this can take a moment the first time...\")\n",
" urllib.request.urlretrieve(\"https://physicsbaseddeeplearning.org/data/\"+fname_test, fname_test)\n",
"\n",
"with open(fname_test, 'rb') as f: data_test_preloaded = pickle.load(f)\n",
"print(\"Loaded test data, {} training sims\".format(len(data_test_preloaded)) )"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "rZ9h-gRddIxb"
},
"source": [
"Next we create a new dataset object `dataset_test` that organizes the data. We're simply using the first batch of the unshuffled dataset, though.\n",
"\n",
"A subtle but important point: we still have to use the normalization from the original training data set: `dataset.dataStats['std']` values. The test data set has it's own mean and standard deviation, and so the trained NN never saw this data before. The NN was trained with the data in `dataset` above, and hence we have to use the constants from there for normalization to make sure the network receives values that it can relate to the data it was trained with."
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "9OPruTGMdIxe",
"outputId": "254e71e0-c471-4fec-df6f-f116227d12f3"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Reynolds numbers in test data set: (120000.0, 480000.0, 1920000.0, 7680000.0) along batch\u1d47\n"
]
}
],
"source": [
"dataset_test = Dataset( data_preloaded=data_test_preloaded, is_testset=True, num_frames=simsteps, num_sims=4, batch_size=4 )\n",
"\n",
"# we only need 1 batch with t=0 states to initialize the test simulations with\n",
"dataset_test.newEpoch(shuffle_data=False)\n",
"batch = getData(dataset_test, consecutive_frames=0) \n",
"\n",
"re_nr_test = math.tensor(batch[3], math.batch('batch')) # Reynolds numbers\n",
"print(\"Reynolds numbers in test data set: \"+format(re_nr_test))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "sMqRPg2pdIxh"
},
"source": [
"Next we construct a `math.tensor` as initial state for the centered marker fields, and a staggered grid from the next two indices of the test set batch. Similar to `to_phiflow` above, we can use `phi.math.stack()` to combine two fields of appropriate size as a staggered grid."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"id": "xK1MEaPqdIxi"
},
"outputs": [],
"source": [
"source_dens_initial = math.tensor( batch[0][0], math.batch('batch'), math.spatial('y, x'))\n",
"\n",
"source_vel_initial = domain.staggered_grid(phi.math.stack([\n",
" math.tensor(batch[2][0], math.batch('batch'),math.spatial('y, x')),\n",
" math.tensor(batch[1][0], math.batch('batch'),math.spatial('y, x'))], channel('vector')))\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KhGVceo6dIxl"
},
"source": [
"Now we can first run the _source_ simulation for 120 steps as baseline:"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "nbTTl15kdIxl",
"outputId": "14521920-1966-41d6-e3a2-41db9fb2f69d"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Source simulation steps 121\n"
]
}
],
"source": [
"source_dens_test, source_vel_test = source_dens_initial, source_vel_initial\n",
"steps_source = [[source_dens_test,source_vel_test]]\n",
"\n",
"# note - math.jit_compile() not useful for numpy solve... hence not necessary\n",
"for i in range(120):\n",
" [source_dens_test,source_vel_test] = simulator.step(\n",
" density_in=source_dens_test,\n",
" velocity_in=source_vel_test,\n",
" re=re_nr_test,\n",
" res=source_res[1],\n",
" )\n",
" steps_source.append( [source_dens_test,source_vel_test] )\n",
"\n",
"print(\"Source simulation steps \"+format(len(steps_source)))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "vQV0qV5pdIxm"
},
"source": [
"Next, we compute the corresponding states of our learned hybrid solver. Here, we closely follow the training code, however, now without any gradient tapes or loss computations. We only evaluate the NN in a forward pass for each simulated state to compute a correction field:\n"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "fH5tFfh9dIxn",
"outputId": "1a3c76f6-e401-479e-911d-4bd58f69dab1"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Steps with hybrid solver 121\n"
]
}
],
"source": [
"source_dens_test, source_vel_test = source_dens_initial, source_vel_initial\n",
"steps_hybrid = [[source_dens_test,source_vel_test]]\n",
" \n",
"for i in range(120):\n",
" [source_dens_test,source_vel_test] = simulator.step(\n",
" density_in=source_dens_test,\n",
" velocity_in=source_vel_test,\n",
" re=math.tensor(re_nr_test),\n",
" res=source_res[1],\n",
" )\n",
" model_input = to_keras([source_dens_test,source_vel_test], re_nr_test )\n",
" model_input /= math.tensor([dataset.dataStats['std'][1], dataset.dataStats['std'][2], dataset.dataStats['ext.std'][0]], channel('channels')) # [u, v, Re]\n",
" model_out = network(model_input.native(['batch', 'y', 'x', 'channels']), training=False)\n",
" model_out *= [dataset.dataStats['std'][1], dataset.dataStats['std'][2]] # [u, v]\n",
" correction = to_phiflow(model_out, domain) \n",
" source_vel_test = source_vel_test+correction\n",
"\n",
" steps_hybrid.append( [source_dens_test,source_vel_test+correction] )\n",
" \n",
"print(\"Steps with hybrid solver \"+format(len(steps_hybrid)))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tnHYeOfldIxp"
},
"source": [
"Given the stored states, we quantify the improvements that the NN yields, and visualize the results. \n",
"\n",
"In the following cells, the index `b` chooses one of the four test simulations (by default index 0, the lowest Re outside the training data range), and computes the accumulated mean absolute error (MAE) over all time steps.\n"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 318
},
"id": "bU-PwcCCdIxq",
"outputId": "66956540-891f-4af7-bafe-22fbd11d8b47"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"MAE for source: 0.1363069713115692 , and hybrid: 0.05150971934199333\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAYgAAAEICAYAAABF82P+AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAgAElEQVR4nO3deXxU1f3/8dcnAZJAQshC9oQECPuSQEBFK7hg9euCW4tWBVfcd1BQBFlFAS0ooljBhVqt1lYettZfRdFSEQmrEEQgBEjIvgLZZ87vjzvEAAECZDLJzOf5eOSRuXPvnTmXG+Y95557zhFjDEoppdSxvFxdAKWUUi2TBoRSSqkGaUAopZRqkAaEUkqpBmlAKKWUapAGhFJKqQY5NSBE5HIR2SEiu0RkYgPrnxCRNBHZIiIrRaRLvXU2Ednk+FnhzHIqpZQ6njirH4SIeAO/ACOBTGAdcLMxJq3eNhcBa40x5SJyPzDCGDPase6QMca/se8XGhpq4uPjm/IQlFLK7a1fv77AGNO5oXVtnPi+Q4Fdxph0ABH5EBgF1AWEMeabetv/ANx6pm8WHx9Pamrqme6ulFIeSUT2nmidMy8xRQP76y1nOp47kbuAL+ot+4pIqoj8ICLXOqOASimlTsyZNYhGE5FbgRRgeL2nuxhjskSkK/C1iPxkjNl9zH7jgHEAcXFxzVZepZTyBM6sQWQBsfWWYxzPHUVELgWeBa4xxlQded4Yk+X4nQ6sApKP3dcYs8QYk2KMSencucFLaEoppc6QM2sQ64BEEUnACoabgD/U30BEkoE3gcuNMXn1ng8Cyo0xVSISCpwPvHS6BaipqSEzM5PKysqzOAz34+vrS0xMDG3btnV1UZRSLZjTAsIYUysiDwFfAt7AUmPMNhGZDqQaY1YAcwF/4GMRAdhnjLkG6A28KSJ2rFrOnPp3PzVWZmYmAQEBxMfH43h9j2eMobCwkMzMTBISElxdHKVUC+bUNghjzL+Afx3z3JR6jy89wX7fA/3P9v0rKys1HI4hIoSEhJCfn+/qoiilWji370mt4XA8/TdRyr3YbM7pz+b2AaGUUu7q558rufPOvQwf/gvO6PSsAdEMZs2aRd++fRkwYABJSUmsXbvW1UVSSrVSxhjWrDnEddftpk+fNJYtK2TNmsOkpTX9zTgtoh+EO1uzZg2ff/45GzZswMfHh4KCAqqrq8/qNWtra2nTRk+dUp7k8GEbH3xQzOLF+WzcWAGAj49w++0hjB8fRvfuvk3+nlqDcLLs7GxCQ0Px8fEBIDQ0lKioKFauXElycjL9+/fnzjvvpKrK6gISHx9PQUEBAKmpqYwYMQKA559/nttuu43zzz+f2267jdzcXK677joGDhzIwIED+f777wFYvnw5Q4cOJSkpiXvvvRebzdb8B62UajL79lXz1FOZxMRsZdy4fWzcWEFIiDeTJoWTkdGPN96Ic0o4gAfVIEQ2OOV1jRl00vWXXXYZ06dPp0ePHlx66aWMHj2ac845h9tvv52VK1fSo0cPxowZw+LFi3nsscdO+lppaWmsXr0aPz8/Ro8ezfDhw/n73/+OzWbj0KFDbN++nY8++oj//e9/tG3blgceeIA///nPjBkzpikPWSnVDLZsKWf27Fw++aSYI9/zzj23Aw8+GMqNNwbh6+v87/dag3Ayf39/1q9fz5IlS+jcuTOjR4/mzTffJCEhgR49egAwduxYvvvuu1O+1jXXXIOfnx8AX3/9Nffffz8A3t7eBAYGsnLlStavX8+QIUNISkpi5cqVpKenO+/glFJNbuPGcq6/fjcDB/7MRx8VIwI33xzE2rU9WbOmJ7feGtIs4QAeVIM41Td9Z/L29mbEiBGMGDGC/v37s2jRohNu26ZNG+x2O8BxPcA7dOhw0vcxxjB27FheeOGFsy+0UqrZGGP4f//vIPPm5fLVVwcB8PUVxo0LZcKEcGJi2rmkXF
qDcLIdO3awc+fOuuVNmzbRrVs3MjIy2LVrFwDvv/8+w4db4xTGx8ezfv16AP72t7+d8HUvueQSFi9eDIDNZqO0tJRLLrmETz75hLw8a9SSoqIi9u494Ui+SikXq6qys2xZIQMGbOfyy3fx1VcH6dDBiyeeCGPPnn4sWBDrsnAADQinO3ToEGPHjqVPnz4MGDCAtLQ05syZw7Jly/jd735H//798fLy4r777gNg6tSpPProo6SkpODt7X3C112wYAHffPMN/fv3Z/DgwaSlpdGnTx9mzpzJZZddxoABAxg5ciTZ2dnNdahKqUYqKqplxoxsunTZyp137mXr1koiI9syZ04U+/f3Y/78GCIiXD9WmtNmlGtuKSkp5tgJg7Zv307v3r1dVKKWTf9tlGp+hYW1vPJKHgsX5nHwoHUpeeBAP554IoybbgqiXbvm/84uIuuNMSkNrfOYNgillHKVkpJa5s/PY8GCX4Ph0ksDmDQpgosu8m+xw99oQCillJOUldlYuDCPefPyKC217lW97LIApk6NZNgwfxeX7tQ0IJRSqonl5dWwYEEeixYV1AXDxRcHMGNG6wiGIzQglFKqieTn1/DCC7ksXpxPZaXVvjt8uD9Tp0Zy0UUBLi7d6dOAUEqps1RaamP+/FxeeSWPQ4esNoZrrglk0qQIzj335P2XWjINCKWUOkNVVXYWLy5g5sxsCgutS0lXXtmRmTOjSEpq7+LSnT3tB+Fk/v5HX2985513eOihh066z4gRIzj2lt2GpKam8sgjjzS4rv6gf0qpplVdbeeddwrp1SuNxx/PpLDQxm9+48/q1T34/PPubhEOoDWIVqu2tpaUlBRSUhq8fVkp5QRlZTaWLCngj3/MIyurBoC+fX2ZMyeaK6/s2GJvVz1TWoNwkYMHD5KQkEBNjfVHVlZWdtTy+++/T1JSEv369ePHH38Ejh/ye9WqVVx11VUAFBYWctlll9G3b1/uvvtup8wupZSnOnTIxuzZOcTHb2XChCyysmro29eXd9/twubNvbnqqkC3CwfwoBqETHPOyTNTT/5BXFFRQVJSUt1yUVER11xzDQEBAYwYMYJ//vOfXHvttXz44Ydcf/31tG1rda8vLy9n06ZNfPfdd9x5551s3boVOHrI71WrVtW97rRp07jggguYMmUK//znP3n77beb/mCV8jCHDtl4/fV85s7No6CgFoDf/MafiRPDueIK96sxHMtjAsJV/Pz82LRpU93yO++8U9e+cPfdd/PSSy9x7bXXsmzZMt5666267W6++WYALrzwQsrKyigpKQGOHvK7vu+++45PP/0UgCuvvJKgoCCnHZNS7q601Marr+bxyit5FBVZjc/nndeBGTMiufjiALcPhiM8JiBO9U3fFc4//3wyMjJYtWoVNpuNfv361a079g/wyPKphvxWSp25rKxqFizI4403CuqGxBg2rAPPPRfBb3/r/jWGY2kbhIuNGTOGP/zhD9xxxx1HPf/RRx8BsHr1agIDAwkMDDzp61x44YV88MEHAHzxxRcUFxc7p8BKuaH09CruumsvCQnbmDvXGi/poov8WbkykdWre3D55e7ZxnAqHlODaKluueUWJk+eXHdJ6QhfX1+Sk5Opqalh6dKlp3ydqVOncvPNN9O3b1+GDRtGXFycs4qslNvIyKhi1qwc3nmnkNpaEIEbb+zEhAnhDB2qtXUd7tvFPvnkEz777DPef//9Zn3f1vBvo5SzZGVVM2tWDn/6UyE1NQYvL7j11mAmT44gMdHX1cVrVjrcdwv18MMP88UXX/Cvf/3L1UVRyiPk5dUwZ86vYyWJwC23BDFlSiQ9enhWMDSGBoQLvfrqq64uglIeoaCglrlzc3nttXzKy63G5xtv7MS0aZH06XP8XYHK4vYBYYzxyMalk3GXy4pKnUpDg+hdfXUg06ZFkpzsHsNhOJNbB4Svry+FhYWEhIRoSDgYYygsLMTXV6vTyn1VVNhZsCCPl17KpbjY6s
fw2992ZPr0SG18Pg1uHRAxMTFkZmaSn5/v6qK0KL6+vsTExLi6GEo1OWMMn3xSwvjxWezbVw1Y8zHMnBnFBRe0nol6Wgq3Doi2bduSkJDg6mIopZrBli3lPPJIJt9+ewiAAQP8mDcvmksv9Zyez03NqR3lRORyEdkhIrtEZGID658QkTQR2SIiK0WkS711Y0Vkp+NnrDPLqZRqvUpKannkkf0kJ//Mt98eIiTEmzfeiGXDhl6MHOl5vZ+bktNqECLiDSwCRgKZwDoRWWGMSau32UYgxRhTLiL3Ay8Bo0UkGJgKpAAGWO/YV7sHK6UAsNsN771XxFNPZZGfX4uXFzz8cGemTYskKMitL440G2fWIIYCu4wx6caYauBDYFT9DYwx3xhjyh2LPwBHLoz/FviPMabIEQr/AS53YlmVUq3ITz9VMHz4L9xxx17y82v5zW/82bixFwsXxmo4NCFnBkQ0sL/ecqbjuRO5C/jiDPdVSnmAyko7kyZlkZy8ndWrDxMW1ob33uvCt98mMmCA3rba1FpE1IrIrViXk4af5n7jgHGAjj2klJtbu/Ywd9yxl+3bKxGBBx/szMyZkXTq1CI+xtySM2sQWUBsveUYx3NHEZFLgWeBa4wxVaezrzFmiTEmxRiT0rlz5yYruFKq5aistPP001kMG7aD7dsr6dnTh9Wre/Daa7EaDk7mzIBYBySKSIKItANuAlbU30BEkoE3scIhr96qL4HLRCRIRIKAyxzPKaU8yJo1h0hO3s5LL+UCMGFCGBs39mbYMO3T0BycFr/GmFoReQjrg90bWGqM2SYi04FUY8wKYC7gD3zsuBVtnzHmGmNMkYjMwAoZgOnGmCJnlVUp1bKUl9uZMuUAr7ySh90OvXv7smxZF845R3tBNye3Hu5bKdX6rFp1kLvv3sfu3VV4ecFTT4UzdWokvr46v5kz6HDfSqkWr7zczoQJmbz+egEA/fv7snRpF1JStNbgKhoQSimX27ChnFtu2cPPP1fRtq3w7LMRTJoUTrt2WmtwJQ0IpZTL2GyG+fNzmTw5m5oaQ+/evnzwQTxJSdqnoSXQgFBKuUR6ehVjx2awevVhAB56qDMvvRSNn5/WGloKDQilVLMyxvCnPxXy+OOZHD5sJzKyLW+/HccVVwS6umjqGBoQSqlms39/Nffcs48vvywD4Pe/78Trr8cREqIfRS2RnhWllNMZY3jnnSIee2w/ZWV2goO9WbQolptuCnZ10dRJaEAopZyqsLCWceP28emnJQCMGhXIG2/EERHR1sUlU6eiAaGUcpqvvz7ImDEZZGXVEBDgxWuvxXLbbcE6iU8roQGhlGpylZV2nn32AC+/bA2xNmxYB5YvjychwcfFJVOnQwNCKdWkNm0q59ZbM9i2rRJvb5gyJZJnnomgTRutNbQ2GhBKqSZhtxsWLMjj6acPUFNj6NHDh+XL4xkyRIfKaK00IJRSZy0vr4bbb9/LF19Yt6/ef38oc+dG06GDt4tLps6GBoRS6qysXFnGrbdmkJNTS3CwN0uXdmHUqE6uLpZqAtqnXSl1RmprDZMnH2DkyF3k5NQyfLg/mzf31nBwI1qDUEqdtr17q7j1VmscJS8vmDo1ksmTI/D21oZod6IBoZRqNGMM779fxMMPWz2io6La8sEH8QwfHuDqoikn0IBQSjVKYWEt9923j08+sXpEX3ddIEuWdCE0VD9G3JWeWaXUKf3736Xceec+srOtHtELF8Yydqz2iHZ3GhBKqRMqL7fz1FNZLFqUD8AFF3Tgvfe0R7Sn0IBQSjVo7drDjBmTwS+/WNOAzpgRyfjx4doQ7UE0IJRSR6mutjNjRg6zZ+dgt0Pfvr68/348yck6Dain0YBQStXZsKGcO+7Yy5YtFYjAhAlhTJ8eha+vdpnyRBoQSimqqqxaw5w5Odhs0LVrO955J57f/Mbf1UVTLqQBoZSHS009zO2372XbtkpE4NFHOzNrVpSOo6Q0IJTyVFVVdqZPz+
bFF3Ox2SAx0Ydly7pw/vlaa1AWDQilPNDXXx/kgQf2sWNHFSLwxBNhzJwZhZ+ftjWoX2lAKOVBcnJqGD8+kz//uRiAnj19ePttrTWohmlAKOUBamoMr76ax7Rp2ZSV2fH1FSZPjmD8+HB8fLTWoBqmAaGUm1u5soyHH85k+/ZKAK68siMLF8bStav2hlYnpwGhlJvKza3hiScy+eAD63JS9+4+LFgQw//9X6CLS6ZaCw0IpdyMzWZYsqSASZMOUFpqw9dXeO65SJ58MkwvJ6nT4tS/FhG5XER2iMguEZnYwPoLRWSDiNSKyI3HrLOJyCbHzwpnllMpd7F69SFSUn7mgQf2U1pq44orOrJtWx+eeSZCw0GdNqfVIETEG1gEjAQygXUissIYk1Zvs33A7cD4Bl6iwhiT5KzyKeVOsrKqefrprLq7k+Li2jF/fjQ33NBJh+RWZ8yZl5iGAruMMekAIvIhMAqoCwhjTIZjnd2J5VDKbVVU2Hn55Vxmz86lvNyOj4/w1FPhTJwYQfv2WmNQZ8eZAREN7K+3nAmccxr7+4pIKlALzDHG/KMpC6dUa2aM4eOPS3j66SwyMqoBuP76TsydG613J6km05IbqbsYY7JEpCvwtYj8ZIzZXX8DERkHjAOIi4tzRRmVanY//niYxx/P5PvvDwPQv78vCxbEctFFOi+0alrOrINmAbH1lmMczzWKMSbL8TsdWAUkN7DNEmNMijEmpXPnzmdXWqVauJ07K/n979M555wdfP/9YcLC2rBkSRwbN/bWcFBO4cyAWAckikiCiLQDbgIadTeSiASJiI/jcShwPvXaLpTyJDk5NTzwwD56907j449L8PUVJk4MZ+fOvtxzT6jO8KacxmmXmIwxtSLyEPAl4A0sNcZsE5HpQKoxZoWIDAH+DgQBV4vINGNMX6A38Kaj8doLqw1CA0J5lIMHbcybl8v8+XkcPmzHywvuvjuEqVMjiYlp5+riKQ8gxhhXl6FJpKSkmNTUVFcXQ6mzVl1tZ8mSAmbMyCEvrxaAUaMCeeGFKHr39nNx6ZS7EZH1xpiUhta15EZqpTyKzWb44IMipkzJrrsz6dxzOzB3bjQXXKCjrarmpwGhlIsZY/j730t47rls0tKsAfX69PFl1qwoRo0K1I5uymU0IJRyoa++KmPixAOsX18OQJcu7Xj++Uhuuy1YG5+Vy2lAKOUC69eXM3FiFl99dRCAiIg2TJ4cyd13h+iYSarF0IBQqhn99FMF06dn88knJQAEBnozaVI4Dz8cpkNjqBZHA0KpZnBsMPj4CA8/3JlJkyIIDtb/hqpl0r9MpZwoLa2CadOy+etffw2Ge+8N5emnw4mK0r4MqmXTgFDKCTIyqpgyJZvly4swxgqGceNCmThRg0G1HhoQSjWhgoJaXnghh9dey6e62tC2rXDPPSFMmhShvZ9Vq3PKgBARL+BcY8z3zVAepVqlvLwa5s/PY9GifA4ftiMCt94azIwZkcTH6/DbqnU6ZUAYY+wisogGRlNVytPl5dXw0ku5vP56PhUV1rA1V1zRkdmzo0hKau/i0il1dhp7iWmliNwAfGrcZfAmpc5CYWEtc+fm8uqr+ZSXWxMiXn11IM89F8GQIR1cXDqlmkZjA+Je4AnAJiIVgADGGNPRaSVTqgWqqLCzcGEes2fnUFZmBcNVV3Vk2rQoBg3SGoNyL40KCGOMzkaiPFpNjWH58iKmTj3A/v01AIwcGcCMGVGcc47WGJR7avRdTCJyDXChY3GVMeZz5xRJqZbj8GEbb79dyLx5uXXBMHCgH3PnRjNypFaglXtrVECIyBxgCPBnx1OPisj5xphJTiuZUi508KCNRYvymT8/j4ICa06G3r19eeaZcP7wh2C8vHQgPeX+GluD+D8gyRhjBxCRd4GNgAaEcisHD9pYsCCPl1/Oo7jYBsDQoe2ZNCmCa64J1GBQHuV0Osp1AoocjwOdUBalXKaqys6bbxYwc2YO+flWjeGCCz
owZUokl14aoHMyKI/U2ICYDWwUkW+w7mC6EJjotFIp1UzKy+28/XYB8+blsW+fNYvbeed1YNasKEaM8NdgUB6tsT2p7cC5WO0QAE8bY3KcWTClnKmkpJZXX81n4cL8ujaGvn19mT07iquv1lnclILG96R+yhjzV2BFM5RJKacpLbXaGF55JY+SEquNYciQ9kycGM6oUZ10Fjel6mnsJaavRGQ88BFw+MiTxpiiE++iVMtRUlLLwoX5RwXDRRf5M3lyJBddpJeSlGpIYwNitOP3g/WeM0DXpi2OUk2rsLCWBQvyWLAgr67n8/Dh/kybFsnw4dr/U6mTaWwbxERjzEfNUB6lmsSePVW8/HIeS5cW1o2VdPHFAUyZEqHBoFQjNbYNYgLW5SWlWrStWyuYMyeHDz8sxmZdSeKKKzryzDMRXHCBv2sLp1Qro20Qyi2sW3eY2bNz+Mc/SgFo0wbGjAlm/Phw+vf3c3HplGqdtA1CtVrGGFatOsTs2Tl89dVBwJra8+67Q5kwIYwuXXSiHqXORmNHc01wdkGUaiybzfDZZyW8+GIuP/5YDkBAgBf339+Zxx8PIyKirYtLqJR78DrZShF5qt7j3x2zbrazCqVUQyor7bz1VgF9+qRxww17+PHHckJD2zBjRiR79/bjxRejNRyUakKnqkHcBLzkeDwJ+LjeusuBZ5xRKKXqKy6uZfHiAhYuzCM31+r1HB/fjvHjw7njjhDatz/p9xyl1Bk6VUDICR43tKxUkzp0yOr1PHduHqWl1i1Jycl+TJgQzu9+F0SbNvonqJQznSogzAkeN7SsVJOorLTzxhsFzJ7968iqF18cwKRJ4VxyiY6sqlRzOVXdfKCIlInIQWCA4/GR5f6nenERuVxEdojILhE5bvRXEblQRDaISK2I3HjMurEistPxM/a0jkq1SrW1hrffLqBHj208/ngm+fm1nHdeB77+OpGVKxO59NKOGg5KNaOT1iCMMd5n+sIi4g0sAkYCmcA6EVlhjEmrt9k+4HZg/DH7BgNTgRSsmsp6x77FZ1oe1XLZ7Ya//a2E5547wI4dVQAMGODHrFlRXHmlhoJSrnI6EwadrqHALmNMOoCIfAiMAuoCwhiT4VhnP2bf3wL/OdIRT0T+g9Uo/hcnlle5wH/+U8bEiVls2FABQLduPkyfHslNNwXp7G1KuZgzAyIa2F9vORM45yz2jW6icqkW4JdfKnnyyUw+/7wMgMjItkyZEsFdd4XStq0Gg1ItgTMDwulEZBwwDiAuLs7FpVGNUVJSy8yZOSxcmE9NjSEgwItnnongkUfC9HZVpVoYZ/6PzAJi6y3HOJ5rsn2NMUuMMSnGmJTOnTufcUGV89XWGhYvzicxMY358/OorTXcdVcIO3f2ZeLECA0HpVogZ9Yg1gGJIpKA9eF+E/CHRu77JTBbRIIcy5dhddRTrdBXX5Xx2GOZbNtWCcCFF/rzyisxDBrU3sUlU0qdjNMCwhhTKyIPYX3YewNLjTHbRGQ6kGqMWSEiQ4C/A0HA1SIyzRjT1xhTJCIzsEIGYLqOHNv67N5dxZNPZvLZZ9YIqwkJ7Zg3L5rrruukdyYp1QqIMe7R3y0lJcWkpqa6uhgKOHzYxgsv5DJ3bi7V1QZ/fy8mT47gscfC8PHRS0lKtSQist4Yk9LQulbdSK1aFmMMn35awuOPZ7J/fw0AY8cG88IL0URG6iB6SrU2GhCqSaSlVfDII5msXGnNyzBokB+vvRbLeefpLG5KtVYaEOqslJXZmDYtm4UL86itheBgb2bOjGLcuFC8vbWdQanWTANCnRFjDB9/bF1OOnCgBhG4775QZs6MIiRE/6yUcgf6P1mdtt27q3jwwf18+aXVC/qcc9rz+utxetuqUm5GA0I1WlWVnblzc5k1K4fKSkOnTt7MmRPFPfeE6rhJSrkhDQjVKN9+e5D77tvHzz9bo63edlsw8+ZFExamdycp5a40INRJFRTUMn58Ju++a/VT7NHDhz
feiOOiiwJcXDKllLNpQKgGGWN4550ixo/PpKjIho+P8MwzETz9dLh2dlPKQ2hAqOPs3FnJvffu45tvDgFwySUBLF4cS2Kir4tLppQ6oqq2it3Fu9lZuJMDBw9w/5D7m/w9NCBUnepqqxF6xowcqqoMoaFtePnlaG69NVjHTlLKRcprytmev51t+dtIy0+r+51RkoHdWHOtCcLtSbfj19avSd9bA0IB8N13B7nvvv1s326NuDp2bDDz5sUQGqp/Iko5W7WtmvTidHYU7OCXwl/YXbzb+inaTUZJBobjx8zzFm+6BXUjMSSRxOBEKmsrNSBU0yoqqmXChCyWLi0EIDHRh8WLY7nkko4uLplS7sMYQ97hPNKL09lbupe9JXvZW7qXXUW72FW0i72le+tqA8dq49WGHiE96Nu5r/UT1pc+nfvQPbg77bzbObXcGhAeyhjDX/5SzGOPZZKfX0u7dsKkSeFMnBiBr682Qit1OuzGTu6hXPaX7Wd/6X72l+1nT/EeMkoz2FO8h/TidA7XHD7h/l7iRdegrvQI6UGP4B4khiTSNahr3Y+zg+BENCA8UEZGFffd92tP6OHD/XnjjTh69dJGaKVOpKKmgsyyTDJKMthTsoc9xXvYXbybXwp/YWfRTspryk+6f5BvEF2DuhLfKZ64wDi6BHahW3A3ugd3J6FTAj5tfJrpSBpPA8KD2GyGhQvzmDw5m/JyO0FB3sybF80dd4RoI7TyeMYY8svzrQ/8wp3sKtrF7uLdpBenk1GSQX55/kn3D/ELIS4wjtjAWGI7xhLfKZ6ETgnEd4qna1BXgvyCTrp/S6QB4SE2by7nnnv2sW6d9S1n9OggFiyIITxce0Irz2Gz28gsy6z78N9dtLsuBHYX76asquyE+7bxakNsx1jiAuNICEogoVNC3WWhxODEVhkAp6IB4ebKy+1Mn57NvHm52GwQE9OW11+P5eqrO7m6aEo5RUVNBenF6Uf/lKSzq2gX6cXpVNuqT7hvoE+g1Q4Q0oPuwd3pFtSNrkFdSQhKILxDON5e3s14JK6nAeHGVq4sY9y4faSnVyMCjzzSmZkzowgI8Kw/cuU+qmqr6hqCcw/nknc4j+yD2dadQaV7ySjJ4MDBAyd9jetWzxUAABZeSURBVAj/iLoP/25B3egW/OvvED+93FqfBoQbKiqqZfz4LJYts25d7d/fl7fe6sI553RwccmUOjmb3caBgwfYU7LHagx2NATvLt7NnuI9ZB/KPuVrtPFqQ3yn+LoAOFID6B7cna5BXfFvp7McNpYGhJv59NNiHnhgP7m5tfj4CFOmRDJhQjht2+q3ItUyGGPIPZzLjoId7Cjcwc8FP7OzaGejLgF5izcxHWOIC4wjwj+CsA5hhHcIt+4K6tSFLoFdiA2MpY2XfrQ1Bf1XdBN5eTU89NB+Pv64BIDf/Maft96Ko2dPvXVVuUZJZQm7i3azs2hn3a2gR0LhZI3BEf4RdXf/HLkV9EhNILpjtH74NyP9l27ljDF89FExDz20n8JCGx06ePHii9Hcf79O4qOcr7SylF8Kf6kLgF1Fu+p+F1UUnXC/Tr6d6BnSk56hPekV0su6EygkkW5B3ejQTi+FthQaEK1YdnYN99+/j88+KwWsUVffeiuOhISW1+FGtV5FFUXWbaFFu62hIYp31dUM8g7nnXC/9m3b0y3I6gh25M6gxOBEeob2pHP7ztoY3ApoQLRCxhiWLy/i0UczKS62ERDgxfz5Mdx9t96Boc6MzW5jb+ledhTsYHvBdn4u+Lnu52QdxPza+NUNFnckALoHd6d7cHci/CP077GV04BoZbKyqrn33n3885/WNdwrrujIm2/GERvrmrFaVOtSWllKWn4aOwp31LUH/FL4C7uKdlFlq2pwn/Zt29d98B8ZPfTI4+iO0XiJjt3lrjQgWgljDO++W8Rjj2VSWmojMNCbBQtiGDNG52pQx6u11/JL4S/8lPsTP+X9xJbcLWzJ3cLe0r0n3C
cqIIpeob3oFdKLnqE96R3am16hvYjpGKN/Yx5KA6IVyMqqZty4ffzrX1at4corrVpDdLTWGjydMYasg1lszdtaFwY/5f1EWn5ag7eL+nj70Ltz77og6BHSg56hPekR0kP7B6jjaEC0YMYY3nvPamsoLbXRqZM3CxfG6AxvHqqipoJt+dvYnLOZzbmb62oFxZXFDW4f3yme/mH9GRA+oO53Ykii3iaqGk3/UlqovLwa7r13H//4h3WH0tVXB/Lmm3FERurgeu7OGMP+sv11AXDkEtGOgh3YjO247YP9gukf1p/+Yf3pF9aPAeED6BvWl44+OumTOjsaEC3QP/5Rwrhx+8jPr6VjRy8WLIhl7FitNbij8ppytuZtrQuDIzWDksqS47b1Ei96h/YmKSKJAeEDGBg+kIERA4n0j9S/DeUUGhAtSGmpjUcf3c+771odjC6+OIBly7oQF6dtDe6gqKKIjdkb2ZC9gY05G9mUs4kdhTsanGoytH0oA8IHMCBsgHWJKLw/fTv3bfI5h5U6GacGhIhcDiwAvIE/GWPmHLPeB3gPGAwUAqONMRkiEg9sB3Y4Nv3BGHOfM8vqaqtWHWTs2L3s21eNr6/w4ovRPPRQZ+0N3QrZjZ2Mkgw252xmU84mNuVuYlPOJvaV7jtuW2/xpl9YP6tWEGYFwcDwgdqHQLUITgsIEfEGFgEjgUxgnYisMMak1dvsLqDYGNNdRG4CXgRGO9btNsYkOat8LUVVlZ3Jkw8wf34exkBKSnvefz9ep/9sJSpqKvi54Ge25G45qmZwsPrgcdv6tfFjYMRAkiOSSY5IZlDkIPqG9cW3jZ5r1TI5swYxFNhljEkHEJEPgVFA/YAYBTzvePwJ8Jp40NemrVsruOWWDLZsqcDbG559NoLJkyN15NUWqLiimLT8NH4u+LluBNLtBdtJL05v8BJReIdwkiKSGBg+kOTIZJIikkgMTvS4CWdU6+bMgIgG9tdbzgTOOdE2xphaESkFQhzrEkRkI1AGTDbG/NeJZW1WxhgWLy7gySczqaw0dOvmw/Ll8Zx7rg5S5mpVtVVHNRr/lPcT2/K3kXMop8HtvcWbXqG96BfWr65WkBSRRIR/RDOXXKmm11IbqbOBOGNMoYgMBv4hIn2NMUeNESwi44BxAHFxcS4o5ukrKKjlrrv2smKFdfvqnXeGsGBBDP7++s2yuRVXFLM512onOHJpKC0/jVp77XHbtm/bnl6hvegd2pueIT2tjmahVkcznzY6OKJyT84MiCwgtt5yjOO5hrbJFJE2QCBQaIwxQBWAMWa9iOwGegCp9Xc2xiwBlgCkpKQYZxxEU/ruu4P84Q8ZZGXVEBjozZIlcfz+9+430XlLY7Pb2Fm086hbSTfnbGZ/2f7jthWEXqG9GBg+0LqLKHwAfTv3pUunLjrmkPI4zgyIdUCiiCRgBcFNwB+O2WYFMBZYA9wIfG2MMSLSGSgyxthEpCuQCKQ7saxOZbMZZs3KYdq0bOx2GDasAx98EE+XLvrNsykdmams/rATW3K3sC1/G5W1lcdt79fGj/7h/UkKTyIpIonkyGT6h/XX+QiUcnBaQDjaFB4CvsS6zXWpMWabiEwHUo0xK4C3gfdFZBdQhBUiABcC00WkBrAD9xljTjz7SAuWk1PDLbdk8PXXBxGBZ54JZ9q0KNq00Ybos1FZW0lafppVI6g39ERhRWGD28cFxtX1KxgYMZCB4QPpHtxdG42VOgmxrua0fikpKSY1NfXUGzajVasOcvPNe8jJqSUsrA3Ll8czcqQOf3C68g/nH9VOsDl38wmHnQj0CazrVHbkElG/sH508u3kgpIr1fKJyHpjTEpD61pqI3WrZozhxRdzefbZA9jtMHy4P3/5S4KOo3QKNruNXUW76u4e2pSziQ3ZG8g6eGzTlTXsRK/QXr8OOeFoM9ChqZVqOhoQTezwYRt33bWPjz6yRth89tkInn8+Ui8pHcNu7Owo2EHqgVRSD6SyPns9G3
M2Ul5Tfty2Hdp2OKqDWVJEEn0699FhJ5RyMg2IJrR3bxXXXpvOpk0VBAR48ec/x3P11Xppw27s7C7azfrs9XVhsP7A+gZ7G8d2jGVgxMC6MYiSI5PpHtxd7yBSygU0IJpIauphrrxyN3l5tXTv7sNnn3WlTx/P+4ZbbasmLT/tqEHpNudu5lD1oeO2jekYw5CoIaREpTA4cjCDowYT2j7UBaVWSjVEA6IJfP55KaNH76G83M4llwTw8ccJBAW5/z9tta2abXnb6moE67PXszl3c4MzmUUFRDEochApkSkMjhpMSlSK9jZWqoVz/08xJ1uypID779+H3Q5jxgTz1ltxtGvnfpdDbHYb2wu2sy5rHesOrCP1QOoJwyAxOJFBkYOsNgPHOERhHcJcUGql1NnQgDgLf/xjHo8/ngnAlClWY7Q73EFjN3Z2Fe1i/QGrzWDdgXVsyN7A4ZrDx22bGJzI4KjBDI60agXJEckE+ga6oNRKqaamAXGG5s7N5amnrNsvX3stlgcf7OziEp2ZI2GwLssKgSN3E5VVlR23bZfALgyJHsKQKOtnUOQgDQOl3JgGxBmYMyeHSZMOAPDmm3GMG9c6GlaNMWSUZNQFwZHaQUPTW0YFRFkNx5GDGRJtNSTrZSKlPIsGxGlatqyQSZMOIAJvv92FO+4IOfVOLlBrr+WXwl+sGc0cHc42ZG+guLL4uG2jAqIYEjWEwZGDGRQ5iEGRg4gMiHRBqZVSLYkGxGn4+uuDjBu3F4BFi2JbRDgYYzhw8IA1QF3eT2zN28rWvK0nHKCuc/vODI4azKCIQaREpTA0eijRHaNdUHKlVEunAdFI27dXcMMN6dTWwpNPhnH//c3f5mA3dnYW7qy7PLQhewNbcrc0WCsAiO8Uz8DwgSRFJNXVDKIDot2iIV0p5XwaEI1QWmrjyit3U1Ji47rrAnnpJed/4z4SBkfaC07W+zjIN4j+4f3p17mf9Tusnw5Qp5Q6axoQjfDEE5ns2VPNoEF+LF+egJdX034DN8aQXpzOj1k/1jUcb8zZ2GDv4+iA6KN6HidFJBHp7x631yqlWhYNiFP45z9LWbq0EB8fYfnyeNq3P/tOcEUVRfyY9SNrM9eyNmstP2b92OA8BrEdY+s6nKVEWT2QtfexUqq5aECcRHFxLffcsw+AWbOi6N379MdWstltbM3byg+ZP7Amcw0/ZP7AjsIdx20X1iGModFDGRo1tC4M9LZSpZQraUCcxCOPZJKdXcP553fgscca92GdcyinrnbwQ9YP/Jj143GXinzb+DIochDnRJ9j/cScQ5fALnqZSCnVomhAnMD//neI5cuL8PMTli3rgrf38R/eNbYaNuVs4vv937Mmcw1rMtewr3TfcdsldErgvNjzOC/mPM6NOZeB4QNp662TBymlWjYNiBOYPTsHgPHjw0lM9AWgvKacNfvX8O3eb1m9bzVrs9YeN8GNfzt/hkQNqasZnBtzrrYbKKVaJQ2IBmzaVM6//lWGX8cqkq/fynNfL2bV3lWszVxLjb3mqG0TgxMZFjuMYbHDOC/mPPp07oO3l7eLSq6UUk1HA+IYm3I2cdtb78Pt31Ld5Seu/+zX4awFITkimRHxI7iwy4UMix2mDclKKbelAVHP5pzNJL+ZDI7PfLsjEC6Kv4gR8SO4IO4CgvyCXFtIpZRqJhoQ9azcs9J6sL8/l/g9yF9f+h3BfsGuLZRSSrmIBkQ9q3Z9bz3YeC2v/2UswX6+ri2QUkq5kPvNjXkW/rd3LQAjup1Hjx4aDkopz6YB4ZB9MJsiWyZUdeCG4cmuLo5SSrmcBoTDmv0/WA+y+jLyUh0FVSmlNCAcPt+0GoCA0oH06OHj4tIopZTraUA4/DfdqkEMjhiqYyIppRQaEIA14uqe6k0AXD/0AheXRimlWgYNCGBz9lZs3uVQHM21I7u6ujhKKdUiaEAAf/3+OwD8SwcSG9vOxaVRSqmWwakBISKXi8gOEdklIhMbWO8jIh
851q8Vkfh66yY5nt8hIr91Zjn/s/1/AAwIHuLMt1FKqVbFaQEhIt7AIuAKoA9ws4j0OWazu4BiY0x34BXgRce+fYCbgL7A5cDrjtdzih2H1gNw1cDznfUWSinV6jizBjEU2GWMSTfGVAMfAqOO2WYU8K7j8SfAJWLdQjQK+NAYU2WM2QPscrxek8spKeZw+91Q25Yxvx3mjLdQSqlWyZkBEQ3sr7ec6XiuwW2MMbVAKRDSyH2bxPsr/wtiaF/Wm+jwDs54C6WUapVadSO1iIwTkVQRSc3Pzz+j1zhQXIDX4VB6+Wv7g1JK1efM0VyzgNh6yzGO5xraJlNE2gCBQGEj98UYswRYApCSkmLOpJCv3H0n8+23U3qo6kx2V0opt+XMGsQ6IFFEEkSkHVaj84pjtlkBjHU8vhH42hhjHM/f5LjLKQFIBH50VkG9vLwI6ujnrJdXSqlWyWk1CGNMrYg8BHwJeANLjTHbRGQ6kGqMWQG8DbwvIruAIqwQwbHdX4E0oBZ40Bhjc1ZZlVJKHU+sL+ytX0pKiklNTXV1MZRSqlURkfXGmJSG1rXqRmqllFLOowGhlFKqQRoQSimlGqQBoZRSqkEaEEoppRrkNncxiUg+sPcsXiIUKGii4rQWnnjM4JnH7YnHDJ553Kd7zF2MMZ0bWuE2AXG2RCT1RLd6uStPPGbwzOP2xGMGzzzupjxmvcSklFKqQRoQSimlGqQB8aslri6AC3jiMYNnHrcnHjN45nE32TFrG4RSSqkGaQ1CKaVUgzw+IETkchHZISK7RGSiq8vjLCISKyLfiEiaiGwTkUcdzweLyH9EZKfjd5Cry9rURMRbRDaKyOeO5QQRWes45x85hqN3KyLSSUQ+EZGfRWS7iJzn7udaRB53/G1vFZG/iIivO55rEVkqInkisrXecw2eW7EsdBz/FhEZdDrv5dEBISLewCLgCqAPcLOI9HFtqZymFnjSGNMHOBd40HGsE4GVxphEYKVj2d08Cmyvt/wi8IoxpjtQDNzlklI51wLg38aYXsBArON323MtItHAI0CKMaYf1hQDN+Ge5/od4PJjnjvRub0Caz6dRGAcsPh03sijAwIYCuwyxqQbY6qBD4FRLi6TUxhjso0xGxyPD2J9YERjHe+7js3eBa51TQmdQ0RigCuBPzmWBbgY+MSxiTsecyBwIdZ8Kxhjqo0xJbj5ucaa38bPMTtleyAbNzzXxpjvsObPqe9E53YU8J6x/AB0EpHIxr6XpwdENLC/3nKm4zm3JiLxQDKwFgg3xmQ7VuUA4S4qlrP8EXgKsDuWQ4ASY0ytY9kdz3kCkA8sc1xa+5OIdMCNz7UxJguYB+zDCoZSYD3uf66PONG5PavPOE8PCI8jIv7A34DHjDFl9dc5pnt1m9vaROQqIM8Ys97VZWlmbYBBwGJjTDJwmGMuJ7nhuQ7C+racAEQBHTj+MoxHaMpz6+kBkQXE1luOcTznlkSkLVY4/NkY86nj6dwjVU7H7zxXlc8JzgeuEZEMrMuHF2Ndm+/kuAwB7nnOM4FMY8xax/InWIHhzuf6UmCPMSbfGFMDfIp1/t39XB9xonN7Vp9xnh4Q64BEx50O7bAatVa4uExO4bj2/jaw3Rjzcr1VK4Cxjsdjgc+au2zOYoyZZIyJMcbEY53br40xtwDfADc6NnOrYwYwxuQA+0Wkp+OpS7Dmd3fbc411aelcEWnv+Fs/csxufa7rOdG5XQGMcdzNdC5QWu9S1Cl5fEc5Efk/rOvU3sBSY8wsFxfJKUTkAuC/wE/8ej3+Gax2iL8CcVij4f7eGHNsA1irJyIjgPHGmKtEpCtWjSIY2AjcaoypcmX5mpqIJGE1zLcD0oE7sL4Quu25FpFpwGisO/Y2AndjXW93q3MtIn8BRmCN2poLTAX+QQPn1hGWr2FdbisH7jDGpDb6vTw9IJRSSjXM0y8xKaWUOgENCKWUUg3SgFBKKdUgDQillFIN0oBQSinVIA0I5fFEJE
RENjl+ckQky/H4kIi83kxlSHLccq1Ui9Hm1Jso5d6MMYVAEoCIPA8cMsbMa+ZiJAEpwL+a+X2VOiGtQSh1AiIyot4cEs+LyLsi8l8R2Ssi14vISyLyk4j82zGMCSIyWES+FZH1IvJlQyNnisjvHHMWbBaR7xy9+KcDox01l9Ei0sEx7v+PjgH3Rjn2vV1EPhORVY6x/6c257+J8iwaEEo1Xjes8ZyuAZYD3xhj+gMVwJWOkHgVuNEYMxhYCjTUM38K8FtjzEDgGsdQ81OAj4wxScaYj4BnsYYGGQpcBMx1jMgK1jD1NwADgN+JSIqTjld5OL3EpFTjfWGMqRGRn7CGZvm34/mfgHigJ9AP+I81wgHeWENPH+t/wDsi8lesQeUachnWQIPjHcu+WMMoAPzHcVkMEfkUuABo9PAJSjWWBoRSjVcFYIyxi0iN+XWcGjvW/yUBthljzjvZixhj7hORc7AmMlovIoMb2EyAG4wxO4560trv2PFxdLwc5RR6iUmpprMD6Cwi54E1vLqI9D12IxHpZoxZa4yZgjWxTyxwEAiot9mXwMOOwdYQkeR660aKNQexH9bMYf9zzuEoT6cBoVQTcbQl3Ai8KCKbgU3AsAY2neto3N4KfA9sxhqWus+RRmpgBtAW2CIi2xzLR/yINa/HFuBvpzM6p1KnQ0dzVaoVEZHbgRRjzEOuLotyf1qDUEop1SCtQSillGqQ1iCUUko1SANCKaVUgzQglFJKNUgDQimlVIM0IJRSSjVIA0IppVSD/j+NyOxhJxNCVQAAAABJRU5ErkJggg==\n",
"text/plain": [
"