{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Neural Ordinary Differential Equations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A significant portion of processes can be described by differential equations: let it be evolution of physical systems, medical conditions of a patient, fundamental properties of markets, etc. Such data is sequential and continuous in its nature, meaning that observations are merely realizations of some continuously changing state.\n", "\n", "There is also another type of sequential data that is discrete – NLP data, for example: its state changes discretely, from one symbol to another, from one word to another.\n", "\n", "Today both these types are normally processed using recurrent neural networks. They are, however, essentially different in their nature, and it seems that they should be treated differently.\n", "\n", "At the last NIPS conference a very interesting [paper](https://arxiv.org/abs/1806.07366) was presented that attempts to tackle this problem. Authors propose a very promising approach, which they call **Neural Ordinary Differential Equations**.\n", "\n", "Here I tried to reproduce and summarize the results of original paper, making it a little easier to familiarize yourself with the idea. As I believe, this new architecture may soon be, among convolutional and recurrent networks, in a toolbox of any data scientist." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Imagine a problem: there is a process following an unknown ODE and some (noisy) observations along its trajectory\n", "\n", "$$\n", "\\frac{dz}{dt} = f(z(t), t) \\tag{1}\n", "$$\n", "$$\n", "\\{(z_0, t_0),(z_1, t_1),...,(z_M, t_M)\\} - \\text{observations}\n", "$$\n", "\n", "Is it possible to find an approximation $\\widehat{f}(z, t, \\theta)$ of dynamics function $f(z, t)$?\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, consider a somewhat simpler task: there are only 2 observations, at the beginning and at the end of the trajectory, $(z_0, t_0), (z_1, t_1)$. One starts the evolution of the system from $z_0, t_0$ for time $t_1 - t_0$ with some parameterized dynamics function using any ODE initial value solver. After that, one ends up being at some new state $\\hat{z_1}, t_1$, compares it with the observation $z_1$, and tries to minimize the difference by varying the parameters $\\theta$.\n", "\n", "Or, more formally, consider optimizing the following loss function $L(\\hat{z_1})$:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "$$\n", "L(z(t_1)) = L \\Big( z(t_0) + \\int_{t_0}^{t_1} f(z(t), t, \\theta)dt \\Big) = L \\big( \\text{ODESolve}(z(t_0), f, t_0, t_1, \\theta) \\big) \\tag{2}\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

Figure 1: Continuous backpropagation of the gradient requires solving the augmented ODE backwards in time.
Arrows represent adjusting backpropagated gradients with gradients from observations.
\n", "Figure from the original paper

" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In case you don't want to dig into the maths, the above figure representes what is going on. Black trajectory represents solving the ODE during forward propagation. Red arrows represent solving the adjoint ODE during backpropagation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To optimize $L$ one needs to compute the gradients wrt. its parameters: $z(t_0), t_0, t_1, \\theta$. To do this let us first determine how loss depends on the state at every moment of time $(z(t))$:\n", "$$\n", "a(t) = \\frac{\\partial L}{\\partial z(t)} \\tag{3}\n", "$$\n", "$a(t)$ is called *adjoint*, its dynamics is given by another ODE, which can be thought of as an instantaneous analog of the chain rule" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "$$\n", "\\frac{d a(t)}{d t} = -a(t) \\frac{\\partial f(z(t), t, \\theta)}{\\partial z} \\tag{4}\n", "$$\n", "Actual derivation of this particular formula can be found in the appendix of the original paper." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All vectors here are considered row vectors, whereas the original paper uses both column and row representations." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One can then compute \n", "$$\n", "\\frac{\\partial L}{\\partial z(t_0)} = \\int_{t_1}^{t_0} a(t) \\frac{\\partial f(z(t), t, \\theta)}{\\partial z} dt \\tag{5}\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To compute the gradients wrt. to $t$ and $\\theta$ one can think of them as if they were part of the augmented state" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "$$\n", "\\frac{d}{dt} \\begin{bmatrix} z \\\\ \\theta \\\\ t \\end{bmatrix} (t) = f_{\\text{aug}}([z, \\theta, t]) := \\begin{bmatrix} f([z, \\theta, t ]) \\\\ 0 \\\\ 1 \\end{bmatrix} \\tag{6}\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Adjoint state to this augmented state is then\n", "$$\n", "a_{\\text{aug}} := \\begin{bmatrix} a \\\\ a_{\\theta} \\\\ a_t \\end{bmatrix}, a_{\\theta}(t) := \\frac{\\partial L}{\\partial \\theta(t)}, a_t(t) := \\frac{\\partial L}{\\partial t(t)} \\tag{7}\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Gradient of the augmented dynamics\n", "\n", "$$\n", "\\frac{\\partial f_{\\text{aug}}}{\\partial [z, \\theta, t]} = \\begin{bmatrix} \n", "\\frac{\\partial f}{\\partial z} & \\frac{\\partial f}{\\partial \\theta} & \\frac{\\partial f}{\\partial t} \\\\\n", "0 & 0 & 0 \\\\\n", "0 & 0 & 0\n", "\\end{bmatrix} \\tag{8}\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Adjoint state ODE from formula (4) is then \n", "$$\n", "\\frac{d a_{\\text{aug}}}{dt} = - \\begin{bmatrix} a\\frac{\\partial f}{\\partial z} & a\\frac{\\partial f}{\\partial \\theta} & a\\frac{\\partial f}{\\partial t}\\end{bmatrix} \\tag{9}\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By solving this adjoint augmented ODE initial value problem one gets\n", "$$\n", "\\frac{\\partial L}{\\partial z(t_0)} = \\int_{t_1}^{t_0} a(t) \\frac{\\partial f(z(t), t, \\theta)}{\\partial z} dt \\tag{10}\n", "$$\n", "\n", "$$\n", "\\frac{\\partial L}{\\partial \\theta} = \\int_{t_1}^{t_0} a(t) \\frac{\\partial f(z(t), t, \\theta)}{\\partial \\theta} dt \\tag{11}\n", "$$\n", "\n", "$$\n", "\\frac{\\partial L}{\\partial t_0} = \\int_{t_1}^{t_0} a(t) \\frac{\\partial f(z(t), t, \\theta)}{\\partial t} dt \\tag{12}\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, 
"source": [ "which, together with,\n", "$$\n", "\\frac{\\partial L}{\\partial t_1} = - a(t) \\frac{\\partial f(z(t), t, \\theta)}{\\partial t} \\tag{13}\n", "$$\n", "complements gradients wrt. all the ODESolve parameters." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The gradients (10), (11), (12), (13) can be calculated altogether during a single call of the ODESolve with augmented state dynamics (9)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "
Figure from the original paper
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The algorithm above describes backpropagation of gradients for the ODE initial value problem with subsequent observations. This algorithm lies in the heart of Neural ODEs." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In case there are many observations along the trajectory, one computes the adjoint augmented ODE dynamics for subsequent observations, adjusting the backpropagated gradients with direct gradients at observation times, as shown above on *figure 1*." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Implementation " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The code below is my own implementation of the **Neural ODE**. I did it solely for better understanding of what's going on. However it is very close to what is actually implemented in authors' [repository](https://github.com/rtqichen/torchdiffeq). This notebook collects all the code that's necessary for understanding in one place and is slightly more commented. For actual usage and experiments I suggest using authors' original implementation.\n", "\n", "Below is the code if you are interested." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import math\n", "import numpy as np\n", "from IPython.display import clear_output\n", "from tqdm import tqdm_notebook as tqdm\n", "\n", "import matplotlib as mpl\n", "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "import seaborn as sns\n", "sns.color_palette(\"bright\")\n", "import matplotlib as mpl\n", "import matplotlib.cm as cm\n", "\n", "import torch\n", "from torch import Tensor\n", "from torch import nn\n", "from torch.nn import functional as F \n", "from torch.autograd import Variable\n", "\n", "use_cuda = torch.cuda.is_available()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Implement any ordinary differential equation initial value solver. For the sake of simplicity it'll be Euler's ODE initial value solver, however any explicit or implicit method will do." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "def ode_solve(z0, t0, t1, f):\n", " \"\"\"\n", " Simplest Euler ODE initial value solver\n", " \"\"\"\n", " h_max = 0.05\n", " n_steps = math.ceil((abs(t1 - t0)/h_max).max().item())\n", "\n", " h = (t1 - t0)/n_steps\n", " t = t0\n", " z = z0\n", "\n", " for i_step in range(n_steps):\n", " z = z + h * f(z, t)\n", " t = t + h\n", " return z" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also implement a superclass of parameterized dynamics function in the form of neural network with a couple useful methods.\n", "\n", "First, one needs to be able to flatten all the parameters that the function depends on.\n", "\n", "Second, one needs to implement a method that computes the augmented dynamics. This augmented dynamics depends on the gradient of the function wrt. its inputs and parameters. In order to not have to specify them by hand for every new architecture, we will use **torch.autograd.grad** method." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class ODEF(nn.Module):\n", " def forward_with_grad(self, z, t, grad_outputs):\n", " \"\"\"Compute f and a df/dz, a df/dp, a df/dt\"\"\"\n", " batch_size = z.shape[0]\n", "\n", " out = self.forward(z, t)\n", "\n", " a = grad_outputs\n", " adfdz, adfdt, *adfdp = torch.autograd.grad(\n", " (out,), (z, t) + tuple(self.parameters()), grad_outputs=(a),\n", " allow_unused=True, retain_graph=True\n", " )\n", " # grad method automatically sums gradients for batch items, we have to expand them back \n", " if adfdp is not None:\n", " adfdp = torch.cat([p_grad.flatten() for p_grad in adfdp]).unsqueeze(0)\n", " adfdp = adfdp.expand(batch_size, -1) / batch_size\n", " if adfdt is not None:\n", " adfdt = adfdt.expand(batch_size, 1) / batch_size\n", " return out, adfdz, adfdt, adfdp\n", "\n", " def flatten_parameters(self):\n", " p_shapes = []\n", " flat_parameters = []\n", " for p in self.parameters():\n", " p_shapes.append(p.size())\n", " flat_parameters.append(p.flatten())\n", " return torch.cat(flat_parameters)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The code below incapsulates forward and backward passes of *Neural ODE*. We have to separate it from main ***torch.nn.Module*** because custom backward function can't be implemented inside Module, but can be implemented inside ***torch.autograd.Function***. So this is just a little workaround.\n", "\n", "This function underlies the whole Neural ODE method." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class ODEAdjoint(torch.autograd.Function):\n", " @staticmethod\n", " def forward(ctx, z0, t, flat_parameters, func):\n", " assert isinstance(func, ODEF)\n", " bs, *z_shape = z0.size()\n", " time_len = t.size(0)\n", "\n", " with torch.no_grad():\n", " z = torch.zeros(time_len, bs, *z_shape).to(z0)\n", " z[0] = z0\n", " for i_t in range(time_len - 1):\n", " z0 = ode_solve(z0, t[i_t], t[i_t+1], func)\n", " z[i_t+1] = z0\n", "\n", " ctx.func = func\n", " ctx.save_for_backward(t, z.clone(), flat_parameters)\n", " return z\n", "\n", " @staticmethod\n", " def backward(ctx, dLdz):\n", " \"\"\"\n", " dLdz shape: time_len, batch_size, *z_shape\n", " \"\"\"\n", " func = ctx.func\n", " t, z, flat_parameters = ctx.saved_tensors\n", " time_len, bs, *z_shape = z.size()\n", " n_dim = np.prod(z_shape)\n", " n_params = flat_parameters.size(0)\n", "\n", " # Dynamics of augmented system to be calculated backwards in time\n", " def augmented_dynamics(aug_z_i, t_i):\n", " \"\"\"\n", " tensors here are temporal slices\n", " t_i - is tensor with size: bs, 1\n", " aug_z_i - is tensor with size: bs, n_dim*2 + n_params + 1\n", " \"\"\"\n", " z_i, a = aug_z_i[:, :n_dim], aug_z_i[:, n_dim:2*n_dim] # ignore parameters and time\n", "\n", " # Unflatten z and a\n", " z_i = z_i.view(bs, *z_shape)\n", " a = a.view(bs, *z_shape)\n", " with torch.set_grad_enabled(True):\n", " t_i = t_i.detach().requires_grad_(True)\n", " z_i = z_i.detach().requires_grad_(True)\n", " func_eval, adfdz, adfdt, adfdp = func.forward_with_grad(z_i, t_i, grad_outputs=a) # bs, *z_shape\n", " adfdz = adfdz.to(z_i) if adfdz is not None else torch.zeros(bs, *z_shape).to(z_i)\n", " adfdp = adfdp.to(z_i) if adfdp is not None else torch.zeros(bs, n_params).to(z_i)\n", " adfdt = adfdt.to(z_i) if adfdt is not None else torch.zeros(bs, 1).to(z_i)\n", "\n", " # Flatten f and adfdz\n", " func_eval = func_eval.view(bs, n_dim)\n", " adfdz = adfdz.view(bs, n_dim) 
\n", " return torch.cat((func_eval, -adfdz, -adfdp, -adfdt), dim=1)\n", "\n", " dLdz = dLdz.view(time_len, bs, n_dim) # flatten dLdz for convenience\n", " with torch.no_grad():\n", " ## Create placeholders for output gradients\n", " # Prev computed backwards adjoints to be adjusted by direct gradients\n", " adj_z = torch.zeros(bs, n_dim).to(dLdz)\n", " adj_p = torch.zeros(bs, n_params).to(dLdz)\n", " # In contrast to z and p we need to return gradients for all times\n", " adj_t = torch.zeros(time_len, bs, 1).to(dLdz)\n", "\n", " for i_t in range(time_len-1, 0, -1):\n", " z_i = z[i_t]\n", " t_i = t[i_t]\n", " f_i = func(z_i, t_i).view(bs, n_dim)\n", "\n", " # Compute direct gradients\n", " dLdz_i = dLdz[i_t]\n", " dLdt_i = torch.bmm(torch.transpose(dLdz_i.unsqueeze(-1), 1, 2), f_i.unsqueeze(-1))[:, 0]\n", "\n", " # Adjusting adjoints with direct gradients\n", " adj_z += dLdz_i\n", " adj_t[i_t] = adj_t[i_t] - dLdt_i\n", "\n", " # Pack augmented variable\n", " aug_z = torch.cat((z_i.view(bs, n_dim), adj_z, torch.zeros(bs, n_params).to(z), adj_t[i_t]), dim=-1)\n", "\n", " # Solve augmented system backwards\n", " aug_ans = ode_solve(aug_z, t_i, t[i_t-1], augmented_dynamics)\n", "\n", " # Unpack solved backwards augmented system\n", " adj_z[:] = aug_ans[:, n_dim:2*n_dim]\n", " adj_p[:] += aug_ans[:, 2*n_dim:2*n_dim + n_params]\n", " adj_t[i_t-1] = aug_ans[:, 2*n_dim + n_params:]\n", "\n", " del aug_z, aug_ans\n", "\n", " ## Adjust 0 time adjoint with direct gradients\n", " # Compute direct gradients \n", " dLdz_0 = dLdz[0]\n", " dLdt_0 = torch.bmm(torch.transpose(dLdz_0.unsqueeze(-1), 1, 2), f_i.unsqueeze(-1))[:, 0]\n", "\n", " # Adjust adjoints\n", " adj_z += dLdz_0\n", " adj_t[0] = adj_t[0] - dLdt_0\n", " return adj_z.view(bs, *z_shape), adj_t, adj_p, None" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Wrap ode adjoint function in **nn.Module** for convenience." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class NeuralODE(nn.Module):\n", " def __init__(self, func):\n", " super(NeuralODE, self).__init__()\n", " assert isinstance(func, ODEF)\n", " self.func = func\n", "\n", " def forward(self, z0, t=Tensor([0., 1.]), return_whole_sequence=False):\n", " t = t.to(z0)\n", " z = ODEAdjoint.apply(z0, t, self.func.flatten_parameters(), self.func)\n", " if return_whole_sequence:\n", " return z\n", " else:\n", " return z[-1]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Application" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## _Learning true dynamics function (proof of concept)_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a proof-of-concept we will now test if Neural ODE can indeed restore true dynamics function using sampled data.\n", "\n", "To test this we will specify an ODE, evolve it and sample points on its trajectory, and then restore it." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, we'll test a simple linear ODE. Dynamics is given with a matrix." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "$$\n", "\\frac{dz}{dt} = \\begin{bmatrix}-0.1 & -1.0\\\\1.0 & -0.1\\end{bmatrix} z\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Trained function here is also a simple matrix." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The trained function here is also a simple matrix." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![leaning gif](assets/linear_learning.gif)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, slighty more sophisticated dynamics (no gif as its learning process is not so satisfying :)). \n", "Trained function here is MLP with one hidden layer.\n", "![complicated result](assets/comp_result.png)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class LinearODEF(ODEF):\n", " def __init__(self, W):\n", " super(LinearODEF, self).__init__()\n", " self.lin = nn.Linear(2, 2, bias=False)\n", " self.lin.weight = nn.Parameter(W)\n", "\n", " def forward(self, x, t):\n", " return self.lin(x)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Dynamics is simply given with a matrix." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class SpiralFunctionExample(LinearODEF):\n", " def __init__(self):\n", " super(SpiralFunctionExample, self).__init__(Tensor([[-0.1, -1.], [1., -0.1]]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Initial random linear dynamics function to be optimized" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class RandomLinearODEF(LinearODEF):\n", " def __init__(self):\n", " super(RandomLinearODEF, self).__init__(torch.randn(2, 2)/2.)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "More sophisticated dynamics for creating trajectories" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class TestODEF(ODEF):\n", " def __init__(self, A, B, x0):\n", " super(TestODEF, self).__init__()\n", " self.A = nn.Linear(2, 2, bias=False)\n", " self.A.weight = nn.Parameter(A)\n", " self.B = nn.Linear(2, 2, bias=False)\n", " self.B.weight = nn.Parameter(B)\n", " self.x0 = nn.Parameter(x0)\n", "\n", " def forward(self, x, t):\n", " xTx0 = torch.sum(x*self.x0, dim=1)\n", " dxdt = torch.sigmoid(xTx0) * self.A(x - self.x0) + torch.sigmoid(-xTx0) * self.B(x + self.x0)\n", " return dxdt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Dynamics function to be optimized is MLP" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class NNODEF(ODEF):\n", " def __init__(self, in_dim, hid_dim, time_invariant=False):\n", " super(NNODEF, self).__init__()\n", " self.time_invariant = time_invariant\n", "\n", " if time_invariant:\n", " self.lin1 = nn.Linear(in_dim, hid_dim)\n", " else:\n", " self.lin1 = nn.Linear(in_dim+1, hid_dim)\n", " self.lin2 = nn.Linear(hid_dim, hid_dim)\n", " self.lin3 = nn.Linear(hid_dim, in_dim)\n", " self.elu = nn.ELU(inplace=True)\n", "\n", " def forward(self, x, t):\n", " if not self.time_invariant:\n", " x = torch.cat((x, t), dim=-1)\n", "\n", " h = self.elu(self.lin1(x))\n", " h = self.elu(self.lin2(h))\n", " out = self.lin3(h)\n", " return out" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def to_np(x):\n", " return x.detach().cpu().numpy()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def plot_trajectories(obs=None, times=None, trajs=None, save=None, figsize=(16, 8)):\n", " plt.figure(figsize=figsize)\n", " if obs is not None:\n", " if times is None:\n", " times = [None] * len(obs)\n", " for o, t in zip(obs, times):\n", " o, t = to_np(o), to_np(t)\n", " for b_i in range(o.shape[1]):\n", " plt.scatter(o[:, b_i, 0], o[:, 
b_i, 1], c=t[:, b_i, 0], cmap=cm.plasma)\n", "\n", " if trajs is not None: \n", " for z in trajs:\n", " z = to_np(z)\n", " plt.plot(z[:, 0, 0], z[:, 0, 1], lw=1.5)\n", " if save is not None:\n", " plt.savefig(save)\n", " plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def conduct_experiment(ode_true, ode_trained, n_steps, name, plot_freq=10):\n", " # Create data\n", " z0 = Variable(torch.Tensor([[0.6, 0.3]]))\n", "\n", " t_max = 6.29*5\n", " n_points = 200\n", "\n", " index_np = np.arange(0, n_points, 1, dtype=np.int)\n", " index_np = np.hstack([index_np[:, None]])\n", " times_np = np.linspace(0, t_max, num=n_points)\n", " times_np = np.hstack([times_np[:, None]])\n", "\n", " times = torch.from_numpy(times_np[:, :, None]).to(z0)\n", " obs = ode_true(z0, times, return_whole_sequence=True).detach()\n", " obs = obs + torch.randn_like(obs) * 0.01\n", "\n", " # Get trajectory of random timespan \n", " min_delta_time = 1.0\n", " max_delta_time = 5.0\n", " max_points_num = 32\n", " def create_batch():\n", " t0 = np.random.uniform(0, t_max - max_delta_time)\n", " t1 = t0 + np.random.uniform(min_delta_time, max_delta_time)\n", "\n", " idx = sorted(np.random.permutation(index_np[(times_np > t0) & (times_np < t1)])[:max_points_num])\n", "\n", " obs_ = obs[idx]\n", " ts_ = times[idx]\n", " return obs_, ts_\n", "\n", " # Train Neural ODE\n", " optimizer = torch.optim.Adam(ode_trained.parameters(), lr=0.01)\n", " for i in range(n_steps):\n", " obs_, ts_ = create_batch()\n", "\n", " z_ = ode_trained(obs_[0], ts_, return_whole_sequence=True)\n", " loss = F.mse_loss(z_, obs_.detach())\n", "\n", " optimizer.zero_grad()\n", " loss.backward(retain_graph=True)\n", " optimizer.step()\n", "\n", " if i % plot_freq == 0:\n", " z_p = ode_trained(z0, times, return_whole_sequence=True)\n", "\n", " plot_trajectories(obs=[obs], times=[times], trajs=[z_p], save=f\"assets/imgs/{name}/{i}.png\")\n", " clear_output(wait=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ode_true = NeuralODE(SpiralFunctionExample())\n", "ode_trained = NeuralODE(RandomLinearODEF())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "conduct_experiment(ode_true, ode_trained, 500, \"linear\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "func = TestODEF(Tensor([[-0.1, -0.5], [0.5, -0.1]]), Tensor([[0.2, 1.], [-1, 0.2]]), Tensor([[-1., 0.]]))\n", "ode_true = NeuralODE(func)\n", "\n", "func = NNODEF(2, 16, time_invariant=True)\n", "ode_trained = NeuralODE(func)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "conduct_experiment(ode_true, ode_trained, 3000, \"comp\", plot_freq=30)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As one can see, Neural ODEs are pretty successful in approximating dynamics. Now let's check if they can be used in a slightly more complicated (MNIST, ha-ha) task." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Neural ODE inspired by ResNets " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In residual networks hidden state changes according to the formula\n", "$$\n", "h_{t+1} = h_{t} + f(h_{t}, \\theta_{t})\n", "$$\n", "\n", "where $t \\in \\{0...T\\}$ is residual block number and $f$ is a function learned by layers inside the block.\n", "\n", "If one takes a limit of an infinite number of residual blocks with smaller steps one gets continuous dynamics of hidden units to be an ordinary differential equation just as we had above.\n", "\n", "$$\n", "\\frac{dh(t)}{dt} = f(h(t), t, \\theta)\n", "$$\n", "\n", "Starting from the input layer $h(0)$, one can define the output layer $h(T)$ to be the solution to this ODE initial value problem at some time T.\n", "\n", "Now one can treat $\\theta$ as parameters shared among all infinitesimally small residual blocks." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Testing Neural ODE architecture on MNIST\n", "\n", "In this section we test the ability of Neural ODE's to be used as a component in more conventional architectures. \n", "In particular, we will use Neural ODE in place of residual blocks in MNIST classifier.\n", "\n", "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def norm(dim):\n", " return nn.BatchNorm2d(dim)\n", "\n", "def conv3x3(in_feats, out_feats, stride=1):\n", " return nn.Conv2d(in_feats, out_feats, kernel_size=3, stride=stride, padding=1, bias=False)\n", "\n", "def add_time(in_tensor, t):\n", " bs, c, w, h = in_tensor.shape\n", " return torch.cat((in_tensor, t.expand(bs, 1, w, h)), dim=1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class ConvODEF(ODEF):\n", " def __init__(self, dim):\n", " super(ConvODEF, self).__init__()\n", " self.conv1 = conv3x3(dim + 1, dim)\n", " self.norm1 = norm(dim)\n", " self.conv2 = conv3x3(dim + 1, dim)\n", " self.norm2 = norm(dim)\n", "\n", " def forward(self, x, t):\n", " xt = add_time(x, t)\n", " h = self.norm1(torch.relu(self.conv1(xt)))\n", " ht = add_time(h, t)\n", " dxdt = self.norm2(torch.relu(self.conv2(ht)))\n", " return dxdt" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class ContinuousNeuralMNISTClassifier(nn.Module):\n", " def __init__(self, ode):\n", " super(ContinuousNeuralMNISTClassifier, self).__init__()\n", " self.downsampling = nn.Sequential(\n", " nn.Conv2d(1, 64, 3, 1),\n", " norm(64),\n", " nn.ReLU(inplace=True),\n", " nn.Conv2d(64, 64, 4, 2, 1),\n", " norm(64),\n", " nn.ReLU(inplace=True),\n", " nn.Conv2d(64, 64, 4, 2, 1),\n", " )\n", " self.feature = ode\n", " self.norm = norm(64)\n", " self.avg_pool = nn.AdaptiveAvgPool2d((1, 1))\n", " self.fc = nn.Linear(64, 10)\n", "\n", " def forward(self, x):\n", " x = self.downsampling(x)\n", " x = self.feature(x)\n", " x = self.norm(x)\n", " x = self.avg_pool(x)\n", " shape = torch.prod(torch.tensor(x.shape[1:])).item()\n", " x = x.view(-1, shape)\n", " out = self.fc(x)\n", " return out" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "func = ConvODEF(64)\n", "ode = NeuralODE(func)\n", "model = ContinuousNeuralMNISTClassifier(ode)\n", "if use_cuda:\n", " model = model.cuda()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import torchvision\n", "\n", "img_std = 0.3081\n", "img_mean = 0.1307\n", "\n", 
"\n", "batch_size = 32\n", "train_loader = torch.utils.data.DataLoader(\n", " torchvision.datasets.MNIST(\"data/mnist\", train=True, download=True,\n", " transform=torchvision.transforms.Compose([\n", " torchvision.transforms.ToTensor(),\n", " torchvision.transforms.Normalize((img_mean,), (img_std,))\n", " ])\n", " ),\n", " batch_size=batch_size, shuffle=True\n", ")\n", "\n", "test_loader = torch.utils.data.DataLoader(\n", " torchvision.datasets.MNIST(\"data/mnist\", train=False, download=True,\n", " transform=torchvision.transforms.Compose([\n", " torchvision.transforms.ToTensor(),\n", " torchvision.transforms.Normalize((img_mean,), (img_std,))\n", " ])\n", " ),\n", " batch_size=128, shuffle=True\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "optimizer = torch.optim.Adam(model.parameters())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def train(epoch):\n", " num_items = 0\n", " train_losses = []\n", "\n", " model.train()\n", " criterion = nn.CrossEntropyLoss()\n", " print(f\"Training Epoch {epoch}...\")\n", " for batch_idx, (data, target) in tqdm(enumerate(train_loader), total=len(train_loader)):\n", " if use_cuda:\n", " data = data.cuda()\n", " target = target.cuda()\n", " optimizer.zero_grad()\n", " output = model(data)\n", " loss = criterion(output, target) \n", " loss.backward()\n", " optimizer.step()\n", "\n", " train_losses += [loss.item()]\n", " num_items += data.shape[0]\n", " print('Train loss: {:.5f}'.format(np.mean(train_losses)))\n", " return train_losses" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def test():\n", " accuracy = 0.0\n", " num_items = 0\n", "\n", " model.eval()\n", " criterion = nn.CrossEntropyLoss()\n", " print(f\"Testing...\")\n", " with torch.no_grad():\n", " for batch_idx, (data, target) in tqdm(enumerate(test_loader), total=len(test_loader)):\n", " if use_cuda:\n", " data = data.cuda()\n", " target = target.cuda()\n", " output = model(data)\n", " accuracy += torch.sum(torch.argmax(output, dim=1) == target).item()\n", " num_items += data.shape[0]\n", " accuracy = accuracy * 100 / num_items\n", " print(\"Test Accuracy: {:.3f}%\".format(accuracy))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "n_epochs = 5\n", "test()\n", "train_losses = []\n", "for epoch in range(1, n_epochs + 1):\n", " train_losses += train(epoch)\n", " test()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "plt.figure(figsize=(9, 5))\n", "history = pd.DataFrame({\"loss\": train_losses})\n", "history[\"cum_data\"] = history.index * batch_size\n", "history[\"smooth_loss\"] = history.loss.ewm(halflife=10).mean()\n", "history.plot(x=\"cum_data\", y=\"smooth_loss\", figsize=(12, 5), title=\"train error\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "Testing...\n", "100% 79/79 [00:01<00:00, 45.69it/s]\n", "Test Accuracy: 9.740%\n", "\n", "Training Epoch 1...\n", "100% 1875/1875 [01:15<00:00, 24.69it/s]\n", "Train loss: 0.20137\n", "Testing...\n", "100% 79/79 [00:01<00:00, 46.64it/s]\n", "Test Accuracy: 98.680%\n", "\n", "Training Epoch 2...\n", "100% 1875/1875 [01:17<00:00, 24.32it/s]\n", "Train loss: 0.05059\n", "Testing...\n", "100% 79/79 [00:01<00:00, 46.11it/s]\n", "Test Accuracy: 97.760%\n", "\n", "Training Epoch 3...\n", "100% 1875/1875 [01:16<00:00, 
24.63it/s]\n", "Train loss: 0.03808\n", "Testing...\n", "100% 79/79 [00:01<00:00, 45.65it/s]\n", "Test Accuracy: 99.000%\n", "\n", "Training Epoch 4...\n", "100% 1875/1875 [01:17<00:00, 24.28it/s]\n", "Train loss: 0.02894\n", "Testing...\n", "100% 79/79 [00:01<00:00, 45.42it/s]\n", "Test Accuracy: 99.130%\n", "\n", "Training Epoch 5...\n", "100% 1875/1875 [01:16<00:00, 24.67it/s]\n", "Train loss: 0.02424\n", "Testing...\n", "100% 79/79 [00:01<00:00, 45.89it/s]\n", "Test Accuracy: 99.170%\n", "```\n", "\n", "![train error](assets/train_error.png)\n", "\n", "After a very rough training procedure of only 5 epochs and 6 minutes of training the model already has test error of less than 1%. Which shows that Neural ODE architecture fits very good as a component in more conventional nets." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In their paper, authors also compare this classifier to simple 1-layer MLP, to ResNet with alike architecture, and to same ODE architecture, but in which gradients propagated directly through ODESolve (without adjoint gradient method) (RK-Net).\n", "![\"Methods comparison\"](assets/methods_compare.png)\n", "
Figure from original paper
\n", "\n", "According to them, 1-layer MLP with roughly the same amount of parameters as Neural ODE-Net has much higher test error, ResNet with roughly the same error has much more parameters, and RK-Net with direct backpropagation through ODESolver has slightly higher error and linearly growing memory usage." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In their paper, authors use implicit Runge-Kutta solver with adaptive step size instead of simple Euler's method. They also examine some ODE-Net characteristics.\n", "\n", "![\"Node attrs\"](assets/ode_solver_attrs.png)\n", "\n", "
ODE-Net characteristics (NFE Forward - number of function evaluations during forward pass)
\n", "
Figure from original paper
\n", "\n", "- (a) Changing tolerable Numerical Error varies the number of steps per forward pass evaluation.\n", "- (b) Time spent by the forward call is proportional to the number of function evaluations.\n", "- (c) Number of backward evaluations is roughly half the number of forward evaluations, this suggests that adjoint method is more computationally efficient than direct backpropagation through ODESolver.\n", "- (d) As ODE-Net becomes more and more trained, it demands more and more evaluations, presumably adapting to the increasing complexity of the model." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Generative latent function time-series model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Neural ODE seems to be more suitable for continuous sequential data even when this continuous trajectory is in some unknown latent space.\n", "\n", "In this section we will experiment with generating continuous sequential data using Neural ODE and exploring its latent space a bit.\n", "Authors also compare it to the same sequential data but generated with Recurrent Neural Networks.\n", "\n", "The approach here is slightly different from the corresponding example in authors repository, the one here has a more diverse set of trajectories." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Training data consists of random spirals, one half of which is clockwise and another is counter-clockwise. Then, random subtimespans of size 100 are sampled from these spirals, having passed through encoder rnn model in reversed order yielding a latent starting state, which then evolves creating a trajectory in the latent space. This latent trajectory is then mapped onto the data space trajectory and compared with the actual data observations. Thus, the model learns to generate data-alike trajectories." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![image.png](assets/spirals_examples.png)\n", "
Examples of spirals in the dataset
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### VAE as a generative model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A generative model through sampling procedure:\n", "$$\n", "z_{t_0} \\sim \\mathcal{N}(0, I)\n", "$$\n", "\n", "$$\n", "z_{t_1}, z_{t_2},...,z_{t_M} = \\text{ODESolve}(z_{t_0}, f, \\theta_f, t_0,...,t_M)\n", "$$\n", "\n", "$$\n", "\\text{each } x_{t_i} \\sim p(x \\mid z_{t_i};\\theta_x)\n", "$$\n", "\n", "Which can be trained using variational autoencoder approach:\n", "\n", "1. Run the RNN encoder through the time series backwards in time to infer the parameters $\\mu_{z_{t_0}}$, $\\sigma_{z_{t_0}}$ of variational posterior and sample from it\n", "$$\n", "z_{t_0} \\sim q \\left( z_{t_0} \\mid x_{t_0},...,x_{t_M}; t_0,...,t_M; \\theta_q \\right) = \\mathcal{N} \\left(z_{t_0} \\mid \\mu_{z_{t_0}} \\sigma_{z_{t_0}} \\right)\n", "$$\n", "2. Obtain the latent trajectory \n", "$$\n", "z_{t_1}, z_{t_2},...,z_{t_N} = \\text{ODESolve}(z_{t_0}, f, \\theta_f, t_0,...,t_N), \\text{ where } \\frac{d z}{d t} = f(z, t; \\theta_f)\n", "$$\n", "3. Map the latent trajectory onto the data space using another neural network: $\\hat{x_{t_i}}(z_{t_i}, t_i; \\theta_x)$\n", "4. Maximize Evidence Lower BOund estimate for sampled trajectory\n", "$$\n", "\\text{ELBO} \\approx N \\Big( \\sum_{i=0}^{M} \\log p(x_{t_i} \\mid z_{t_i}(z_{t_0}; \\theta_f); \\theta_x) + KL \\left( q( z_{t_0} \\mid x_{t_0},...,x_{t_M}; t_0,...,t_M; \\theta_q) \\parallel \\mathcal{N}(0, I) \\right) \\Big)\n", "$$\n", "And in case of Gaussian posterior $p(x \\mid z_{t_i};\\theta_x)$ and known noise level $\\sigma_x$\n", "$$\n", "\\text{ELBO} \\approx -N \\Big( \\sum_{i=1}^{M}\\frac{(x_i - \\hat{x_i} )^2}{\\sigma_x^2} - \\log \\sigma_{z_{t_0}}^2 + \\mu_{z_{t_0}}^2 + \\sigma_{z_{t_0}}^2 \\Big) + C\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Computation graph of the latent ODE model can be depicted like this\n", "![vae_model](assets/vae_model.png)\n", "
Figure from the original paper
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One can then test how this model extrapolates the trajectory from only its initial moment observations." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Defining the models" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class RNNEncoder(nn.Module):\n", " def __init__(self, input_dim, hidden_dim, latent_dim):\n", " super(RNNEncoder, self).__init__()\n", " self.input_dim = input_dim\n", " self.hidden_dim = hidden_dim\n", " self.latent_dim = latent_dim\n", "\n", " self.rnn = nn.GRU(input_dim+1, hidden_dim)\n", " self.hid2lat = nn.Linear(hidden_dim, 2*latent_dim)\n", "\n", " def forward(self, x, t):\n", " # Concatenate time to input\n", " t = t.clone()\n", " t[1:] = t[:-1] - t[1:]\n", " t[0] = 0.\n", " xt = torch.cat((x, t), dim=-1)\n", "\n", " _, h0 = self.rnn(xt.flip((0,))) # Reversed\n", " # Compute latent dimension\n", " z0 = self.hid2lat(h0[0])\n", " z0_mean = z0[:, :self.latent_dim]\n", " z0_log_var = z0[:, self.latent_dim:]\n", " return z0_mean, z0_log_var" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class NeuralODEDecoder(nn.Module):\n", " def __init__(self, output_dim, hidden_dim, latent_dim):\n", " super(NeuralODEDecoder, self).__init__()\n", " self.output_dim = output_dim\n", " self.hidden_dim = hidden_dim\n", " self.latent_dim = latent_dim\n", "\n", " func = NNODEF(latent_dim, hidden_dim, time_invariant=True)\n", " self.ode = NeuralODE(func)\n", " self.l2h = nn.Linear(latent_dim, hidden_dim)\n", " self.h2o = nn.Linear(hidden_dim, output_dim)\n", "\n", " def forward(self, z0, t):\n", " zs = self.ode(z0, t, return_whole_sequence=True)\n", "\n", " hs = self.l2h(zs)\n", " xs = self.h2o(hs)\n", " return xs" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class ODEVAE(nn.Module):\n", " def __init__(self, output_dim, hidden_dim, latent_dim):\n", " super(ODEVAE, self).__init__()\n", " self.output_dim = output_dim\n", " self.hidden_dim = hidden_dim\n", " self.latent_dim = latent_dim\n", "\n", " self.encoder = RNNEncoder(output_dim, hidden_dim, latent_dim)\n", " self.decoder = NeuralODEDecoder(output_dim, hidden_dim, latent_dim)\n", "\n", " def forward(self, x, t, MAP=False):\n", " z_mean, z_log_var = self.encoder(x, t)\n", " if MAP:\n", " z = z_mean\n", " else:\n", " z = z_mean + torch.randn_like(z_mean) * torch.exp(0.5 * z_log_var)\n", " x_p = self.decoder(z, t)\n", " return x_p, z, z_mean, z_log_var\n", "\n", " def generate_with_seed(self, seed_x, t):\n", " seed_t_len = seed_x.shape[0]\n", " z_mean, z_log_var = self.encoder(seed_x, t[:seed_t_len])\n", " x_p = self.decoder(z_mean, t)\n", " return x_p" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Generating dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "t_max = 6.29*5\n", "n_points = 200\n", "noise_std = 0.02\n", "\n", "num_spirals = 1000\n", "\n", "index_np = np.arange(0, n_points, 1, dtype=np.int)\n", "index_np = np.hstack([index_np[:, None]])\n", "times_np = np.linspace(0, t_max, num=n_points)\n", "times_np = np.hstack([times_np[:, None]] * num_spirals)\n", "times = torch.from_numpy(times_np[:, :, None]).to(torch.float32)\n", "\n", "# Generate random spirals parameters\n", "normal01 = torch.distributions.Normal(0, 1.0)\n", "\n", "x0 = Variable(normal01.sample((num_spirals, 2))) * 2.0 \n", "\n", "W11 = -0.1 * 
normal01.sample((num_spirals,)).abs() - 0.05\n", "W22 = -0.1 * normal01.sample((num_spirals,)).abs() - 0.05\n", "W21 = -1.0 * normal01.sample((num_spirals,)).abs()\n", "W12 = 1.0 * normal01.sample((num_spirals,)).abs()\n", "\n", "xs_list = []\n", "for i in range(num_spirals):\n", " if i % 2 == 1: # Make it counter-clockwise\n", " W21, W12 = W12, W21\n", "\n", " func = LinearODEF(Tensor([[W11[i], W12[i]], [W21[i], W22[i]]]))\n", " ode = NeuralODE(func)\n", "\n", " xs = ode(x0[i:i+1], times[:, i:i+1], return_whole_sequence=True)\n", " xs_list.append(xs)\n", "\n", "\n", "orig_trajs = torch.cat(xs_list, dim=1).detach()\n", "samp_trajs = orig_trajs + torch.randn_like(orig_trajs) * noise_std\n", "samp_ts = times" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "fig, axes = plt.subplots(nrows=3, ncols=3, figsize=(15, 9))\n", "axes = axes.flatten()\n", "for i, ax in enumerate(axes):\n", " ax.scatter(samp_trajs[:, i, 0], samp_trajs[:, i, 1], c=samp_ts[:, i, 0], cmap=cm.plasma)\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy.random as npr\n", "\n", "def gen_batch(batch_size, n_sample=100):\n", " n_batches = samp_trajs.shape[1] // batch_size\n", " time_len = samp_trajs.shape[0]\n", " n_sample = min(n_sample, time_len)\n", " for i in range(n_batches):\n", " if n_sample > 0:\n", " t0_idx = npr.multinomial(1, [1. / (time_len - n_sample)] * (time_len - n_sample))\n", " t0_idx = np.argmax(t0_idx)\n", " tM_idx = t0_idx + n_sample\n", " else:\n", " t0_idx = 0\n", " tM_idx = time_len\n", "\n", " frm, to = batch_size*i, batch_size*(i+1)\n", " yield samp_trajs[t0_idx:tM_idx, frm:to], samp_ts[t0_idx:tM_idx, frm:to]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Training" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vae = ODEVAE(2, 64, 6)\n", "vae = vae.cuda()\n", "if use_cuda:\n", " vae = vae.cuda()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "optim = torch.optim.Adam(vae.parameters(), betas=(0.9, 0.999), lr=0.001)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "preload = False\n", "n_epochs = 20000\n", "batch_size = 100\n", "\n", "plot_traj_idx = 1\n", "plot_traj = orig_trajs[:, plot_traj_idx:plot_traj_idx+1]\n", "plot_obs = samp_trajs[:, plot_traj_idx:plot_traj_idx+1]\n", "plot_ts = samp_ts[:, plot_traj_idx:plot_traj_idx+1]\n", "if use_cuda:\n", " plot_traj = plot_traj.cuda()\n", " plot_obs = plot_obs.cuda()\n", " plot_ts = plot_ts.cuda()\n", "\n", "if preload:\n", " vae.load_state_dict(torch.load(\"models/vae_spirals.sd\"))\n", "\n", "for epoch_idx in range(n_epochs):\n", " losses = []\n", " train_iter = gen_batch(batch_size)\n", " for x, t in train_iter:\n", " optim.zero_grad()\n", " if use_cuda:\n", " x, t = x.cuda(), t.cuda()\n", "\n", " max_len = np.random.choice([30, 50, 100])\n", " permutation = np.random.permutation(t.shape[0])\n", " np.random.shuffle(permutation)\n", " permutation = np.sort(permutation[:max_len])\n", "\n", " x, t = x[permutation], t[permutation]\n", "\n", " x_p, z, z_mean, z_log_var = vae(x, t)\n", " kl_loss = -0.5 * torch.sum(1 + z_log_var - z_mean**2 - torch.exp(z_log_var), -1)\n", " loss = 0.5 * ((x-x_p)**2).sum(-1).sum(0) / noise_std**2 + kl_loss\n", " loss = torch.mean(loss)\n", " loss /= max_len\n", " loss.backward()\n", " optim.step()\n", " 
losses.append(loss.item())\n", "\n", " print(f\"Epoch {epoch_idx}\")\n", "\n", " frm, to, to_seed = 0, 200, 50\n", " seed_trajs = samp_trajs[frm:to_seed]\n", " ts = samp_ts[frm:to]\n", " if use_cuda:\n", " seed_trajs = seed_trajs.cuda()\n", " ts = ts.cuda()\n", "\n", " samp_trajs_p = to_np(vae.generate_with_seed(seed_trajs, ts))\n", "\n", " fig, axes = plt.subplots(nrows=3, ncols=3, figsize=(15, 9))\n", " axes = axes.flatten()\n", " for i, ax in enumerate(axes):\n", " ax.scatter(to_np(seed_trajs[:, i, 0]), to_np(seed_trajs[:, i, 1]), c=to_np(ts[frm:to_seed, i, 0]), cmap=cm.plasma)\n", " ax.plot(to_np(orig_trajs[frm:to, i, 0]), to_np(orig_trajs[frm:to, i, 1]))\n", " ax.plot(samp_trajs_p[:, i, 0], samp_trajs_p[:, i, 1])\n", " plt.show()\n", "\n", " print(np.mean(losses), np.median(losses))\n", " clear_output(wait=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "spiral_0_idx = 3\n", "spiral_1_idx = 6\n", "\n", "homotopy_p = Tensor(np.linspace(0., 1., 10)[:, None])\n", "vae = vae\n", "if use_cuda:\n", " homotopy_p = homotopy_p.cuda()\n", " vae = vae.cuda()\n", "\n", "spiral_0 = orig_trajs[:, spiral_0_idx:spiral_0_idx+1, :]\n", "spiral_1 = orig_trajs[:, spiral_1_idx:spiral_1_idx+1, :]\n", "ts_0 = samp_ts[:, spiral_0_idx:spiral_0_idx+1, :]\n", "ts_1 = samp_ts[:, spiral_1_idx:spiral_1_idx+1, :]\n", "if use_cuda:\n", " spiral_0, ts_0 = spiral_0.cuda(), ts_0.cuda()\n", " spiral_1, ts_1 = spiral_1.cuda(), ts_1.cuda()\n", "\n", "z_cw, _ = vae.encoder(spiral_0, ts_0)\n", "z_cc, _ = vae.encoder(spiral_1, ts_1)\n", "\n", "homotopy_z = z_cw * (1 - homotopy_p) + z_cc * homotopy_p\n", "\n", "t = torch.from_numpy(np.linspace(0, 6*np.pi, 200))\n", "t = t[:, None].expand(200, 10)[:, :, None].cuda()\n", "t = t.cuda() if use_cuda else t\n", "hom_gen_trajs = vae.decoder(homotopy_z, t)\n", "\n", "fig, axes = plt.subplots(nrows=2, ncols=5, figsize=(15, 5))\n", "axes = axes.flatten()\n", "for i, ax in enumerate(axes):\n", " ax.plot(to_np(hom_gen_trajs[:, i, 0]), to_np(hom_gen_trajs[:, i, 1]))\n", "plt.show()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "torch.save(vae.state_dict(), \"models/vae_spirals.sd\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is what I got after a night of training\n", "![spiral reconstruction with seed](assets/spirals_reconstructed.png)\n", "
Dots are noisy observations of the original trajectories (blue),
yellow are reconstructed and interpolated trajectories using dots as inputs.
Color of the dots represents time.
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Reconstuctions of some examples are not very good. Maybe the model is not complex enough or haven't been trained for a long enough time. Anyway, results look very credible.\n", "\n", "Now lets have a look at what happens if we interpolate the latent variable of the clockwise trajectory to another - the counter-clockwise one.\n", "![homotopy](assets/spirals_homotopy.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Authors also compare reconstructed trajectories using initial moment of time observations of Neural ODE and simple RNN.\n", "![ode_rnn_comp](assets/ode_rnn_comp.png)\n", "
Figure from the original paper
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Continuous normalizing flows\n", "\n", "The original paper also contributes a lot in the topic of Normalizing Flows. Normalizing flows are used when one needs to sample from a complicated distribution originating from a change of variables in some simple distribution (e.q. Gaussian), while still being able to know the probability density of each sample. \n", "They show that using continuous change of variables is much more computationally efficient and interpretable than previous methods._\n", "\n", "Normalizing flows are very useful in such models as *Variational AutoEncoders*, *Bayesian Neural Networks* and other things in Bayesian setting.\n", "\n", "This topic, however, is beyond the scope of the present notebook, and those interested are adressed to the original paper.\n", "\n", "To tease you a bit:\n", "\n", "![CNF_NF_comp](assets/CNF_NF_comp.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
Visualizing the transformation from noise (simple distribution) to data (complicated distribution) for two datasets;
X-axis represents density and samples transformation with \"time\" (for CNF) and \"depth\" (for NF)
Figure from the original paper
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This concludes my little investigation of **Neural ODEs**. Hope you found it useful!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Useful links\n", "\n", " - [Original paper](https://arxiv.org/abs/1806.07366)\n", " - [Authors' PyTorch implementation](https://github.com/rtqichen/torchdiffeq)\n", " - [Variational Inference](https://www.cs.princeton.edu/courses/archive/fall11/cos597C/lectures/variational-inference-i.pdf)\n", " - [My article on VAE (Russian)](https://habr.com/en/post/331552/)\n", " - [VAE explained](https://www.jeremyjordan.me/variational-autoencoders/)\n", " - [More on Normalizing Flows](http://akosiorek.github.io/ml/2018/04/03/norm_flows.html)\n", " - [Variational Inference with Normalizing Flows Paper](https://arxiv.org/abs/1505.05770)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.4" }, "widgets": { "application/vnd.jupyter.widget-state+json": { "state": { "01447e49744f4e15a918eab38023ee12": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "048637d9cc1145f09927902862f40951": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "05e09cca05514276a8729c3d97e57437": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "065445947c66425282ee8aaf7954a120": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "0775b035e93647aa9f244cae1ff06eab": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "07af261c59f6431fb4793235e4b161b2": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "0b6e4731c92f432689b4cc7c385392d4": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_46b9cd6a486b4ada8e8e39209763eabc", "style": "IPY_MODEL_7ba08023e68346c99d295c8cf317c9c6", "value": " 0% 0/79 [00:00<?, ?it/s]" } }, "0b85dee07d4749b18aef1dabf03bb1f4": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "0caf938964ce48848a446461157dfeb2": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_eb836fe776da47d8b89c0b01ecb89d98", "IPY_MODEL_7722c8d516c14ea7941084c5844c6162" ], "layout": "IPY_MODEL_7007118bd6734014b09e7e49477c0c4a" } }, "0df95b684157426880512578f9f78d1f": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "0e7868acdb7c4c619625d37deb9eb3a3": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "0ea17b3b30934876a6770551876e249a": { "model_module": "@jupyter-widgets/controls", 
"model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "0f5ee0f14be24040a03c395b2231c00e": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "10784fae38df4b7d89198aace1bd64a7": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_f8a3eabaf27c46ca80f37df959abdbb4", "IPY_MODEL_279a240511db48b7bf70ab42356266b0" ], "layout": "IPY_MODEL_7831b7d3b6c54b538bea298f096fb536" } }, "123379c3ee7743cba4ed7ec34ae86803": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "13fbea0ae8394bc586befbd1c8b4ae7b": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_96a3541c181347b39e533bcedf25f0d4", "IPY_MODEL_47f6d7e9386e43809947d74f5fbead50" ], "layout": "IPY_MODEL_65030196eaca4611bf4bf974a38610d6" } }, "15905bc3a2bf4b29aebfe0fbecbefb11": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_475f88c0c44e4c4ca73a6494f280bcd9", "IPY_MODEL_5449f0ee3b8e487f906ee3677e58fb9b" ], "layout": "IPY_MODEL_7bede11c51444425aabed22076a37eb1" } }, "16f0c997619d429785f07a3b41574900": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_90af5c7af9cc46e4b2e1dde7a04ce01a", "style": "IPY_MODEL_94a492c2964f4a7baf815dad7841b6db", "value": "100% 79/79 [00:01<00:00, 45.89it/s]" } }, "16f7d8c07e5844c4bed41ab1eabfa8e1": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "173a8399af0a4f20b483e0985bf17480": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_459b7bea4d514b53b67e970855d95de7", "IPY_MODEL_16f0c997619d429785f07a3b41574900" ], "layout": "IPY_MODEL_123379c3ee7743cba4ed7ec34ae86803" } }, "1754cba9ef2d4f00b605b2852eeef3f7": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_30c723e20df74bce9aec7c29e280788d", "IPY_MODEL_54f60614bb0246d8b157793b021806e2" ], "layout": "IPY_MODEL_2ad0b5aab3c14fd9b193f5fe48ba8d9e" } }, "1806bd82afb64886a188425bf61de3ee": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "1853c04636984ec6a178826a85930c1b": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "18983393e5414f6f858e8c034d9fb9bc": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "19c7b3b12ec3416299db573bd6984df5": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "1a9b8f506fca4149a2b2cd992b670785": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "1b92e09380d34936a89235d115570832": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } 
}, "1d8be434b18e433b98f4321b257e3274": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "1e707ca751194ec2a7a14564648bd27a": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "1e92b05840124fe7bf8d9bd143da48cd": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_3d18d316f3e44e479038dfc5c8bc306b", "style": "IPY_MODEL_dcf78f860cc4483d9741b04aa3d6d1db", "value": "100% 79/79 [00:01<00:00, 46.64it/s]" } }, "1f89084ff986444bbd62198e126bcde4": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "1ffd204fab354f9f921d259346a47742": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "2220a84fa7b3415e8e5c775585eb0323": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "23b650880517487192d3a539ce2eee33": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_7b73e5216e7840faa3eb2662a0c01b2a", "max": 1875, "style": "IPY_MODEL_0775b035e93647aa9f244cae1ff06eab", "value": 1875 } }, "2524fdd6758a4d6db92eb448c6f025f8": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "279a240511db48b7bf70ab42356266b0": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_86c82f8a014e4048879c0d3298efdda2", "style": "IPY_MODEL_8eb957262d244a90a6f5d70e3c1195d9", "value": "100% 79/79 [00:01<00:00, 46.80it/s]" } }, "290d0742411a4702ae281fea70f3bcaa": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_fe681ff909f441eea634e7bb6169422b", "style": "IPY_MODEL_c27ea7e05da146a2ae2b4713ebcd38c0", "value": "100% 79/79 [00:01<00:00, 44.72it/s]" } }, "2a521e703d1a4df598d4d7ebc8917df1": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "2ad0b5aab3c14fd9b193f5fe48ba8d9e": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "2d5c4ccf97f641f4b36c516e49b42f1c": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_bc63d0e73204431c8d36c5cc5851b74c", "style": "IPY_MODEL_16f7d8c07e5844c4bed41ab1eabfa8e1", "value": "100% 1875/1875 [01:17<00:00, 24.32it/s]" } }, "3022b9630fe343639f36b7d80ef877eb": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_1ffd204fab354f9f921d259346a47742", "max": 79, "style": "IPY_MODEL_7ae4bac9876546f0b92d11366637d861", "value": 79 } }, "30c723e20df74bce9aec7c29e280788d": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_e6b679d3e9d544cfb613164b03c8991d", "max": 1875, "style": 
"IPY_MODEL_dc1ee5808f0946eaa79fd898647e0ba2", "value": 1875 } }, "31beb781955e4abea1fa33138a70ca69": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "32b30cb9acc244a9914e34f45879c21e": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_4287d4cb0bb1408797be58903995174a", "style": "IPY_MODEL_5a734a68c8c04c6b96a31d7d4aa90f5f", "value": "100% 1875/1875 [01:15<00:00, 24.69it/s]" } }, "33497dd7f39b42da8ce8c28ae0eb1ebf": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_4b4e2cc965cc4c4088ea23c0b05484b9", "max": 1875, "style": "IPY_MODEL_6449f6d43b3e458bb419bcc1198fd5b9", "value": 1875 } }, "34a8733ed250438fb584cfaa46c00ab3": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_656337147cc34e5494d9c722d65496ba", "IPY_MODEL_fffa785bfcd54cb49a1c2a420bac2b51" ], "layout": "IPY_MODEL_9bb7628377ca49989eeeddf2c4d5f4c8" } }, "34b6a9403d5b42e695cadb3a3446004a": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "3538802d34dc4533b03785643f57235d": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "36e2fc573d5d45c4bd9ed063f5e14954": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_87334fc2d276402e917d0517382e8b42", "max": 79, "style": "IPY_MODEL_4351044a73984ba19c3c1ba8fa67a921", "value": 79 } }, "3727ef4991cb4f7f9ef7d007df01f449": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "3bdc65622ceb44fa895eeccf2ecc0152": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "3c18e69d054d4a259f9b75cb5f2f57f5": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "3d18d316f3e44e479038dfc5c8bc306b": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "3dcea128ee1c47fabd9c1ca1513cbe0e": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_3727ef4991cb4f7f9ef7d007df01f449", "style": "IPY_MODEL_d3f17e88798a4f6a9e33e9005df7c7f9", "value": "100% 79/79 [00:01<00:00, 45.42it/s]" } }, "3e54bef0104e4979b134823ac90c4fa6": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "3f6fa3cbad674dcb93ae3b17bcfb60d6": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_481f434e06ba4366b43eece1c7200a33", "max": 1875, "style": "IPY_MODEL_ba07dd1810c74b6ba518f6221168913b", "value": 1875 } }, "40f5b79337564201be954dda50b6ea67": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, 
"415f6e33dce140f3ac3935aa6769bce9": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "4287d4cb0bb1408797be58903995174a": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "4351044a73984ba19c3c1ba8fa67a921": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "438a27adf49445deaa10c31609d44516": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_69f3ab8664b44198aad0fe0ecc18f08c", "IPY_MODEL_7ddab42457f440d59ede14a2b662c324" ], "layout": "IPY_MODEL_0f5ee0f14be24040a03c395b2231c00e" } }, "44766dc4803a4a5aa6a8c752d0ae2049": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_2220a84fa7b3415e8e5c775585eb0323", "max": 79, "style": "IPY_MODEL_2524fdd6758a4d6db92eb448c6f025f8", "value": 79 } }, "44f2ee887e874370bc4b4d0330c843af": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "459b7bea4d514b53b67e970855d95de7": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_f209fbc7469f40e7963d4c18bae6e111", "max": 79, "style": "IPY_MODEL_1d8be434b18e433b98f4321b257e3274", "value": 79 } }, "46b9cd6a486b4ada8e8e39209763eabc": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "475f88c0c44e4c4ca73a6494f280bcd9": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_e199254644b34a6395dcc41cee2bd738", "max": 1875, "style": "IPY_MODEL_0b85dee07d4749b18aef1dabf03bb1f4", "value": 1875 } }, "47f6d7e9386e43809947d74f5fbead50": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_fffd484e385e4249bb9759808ee02fac", "style": "IPY_MODEL_9f4929ea841e4ab3960b47003ad1d648", "value": "100% 79/79 [00:01<00:00, 43.20it/s]" } }, "481f434e06ba4366b43eece1c7200a33": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "4a6e93fc0e6c47ff830e00836304d3f4": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_b0eb50e2e02740d4a166cb4ced86c574", "IPY_MODEL_b4e3c1c1e8f64d7d8158e341efda2edb" ], "layout": "IPY_MODEL_6766a5a8c8f14f3daf0eb3013746dc9f" } }, "4a9b90eef6b94bcc9e7919fbba4419fb": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_728662f43b604263a57237acc830f2fa", "max": 79, "style": "IPY_MODEL_8a28e7cf467f422bafcc4a4f7f80d99d", "value": 79 } }, "4b4e2cc965cc4c4088ea23c0b05484b9": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "4bbe6a4019384880accf89bd6ea4ce43": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, 
"4c1560d1809d4a1ea741d43dcef95076": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "529def52c63f4a29b578b1ece833bd93": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "5300061d70b04209a9b59fdd6493424c": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "5449f0ee3b8e487f906ee3677e58fb9b": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_40f5b79337564201be954dda50b6ea67", "style": "IPY_MODEL_6ddadc736a5845f198d128a1c0c0e94d", "value": "100% 1875/1875 [01:16<00:00, 24.63it/s]" } }, "54f60614bb0246d8b157793b021806e2": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_803ab43df57c4b49a0c9400d794ba6c3", "style": "IPY_MODEL_1f89084ff986444bbd62198e126bcde4", "value": "100% 1875/1875 [01:16<00:00, 24.40it/s]" } }, "54fef2fc76454dbc87f85d678e570fca": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "57541d738454477198d5f0ccc4521703": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_048637d9cc1145f09927902862f40951", "style": "IPY_MODEL_6ee900a5b4a44c46afefd5dbc277089a", "value": "100% 1875/1875 [01:17<00:00, 24.28it/s]" } }, "5a6b2b387a7141bfbe7d6826f12f54eb": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "5a734a68c8c04c6b96a31d7d4aa90f5f": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "5a9947e7eab6439b880b24ee85c27c61": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "6027f1f6d186491e9b1c6a0b4488f602": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "6262c5576e884ab7886d6e186df54ae8": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "63395bfc4f0e433588c3562efc322ddb": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_743078593d73460abf57d9429fa92c8b", "style": "IPY_MODEL_9568817c86d54a68b638bb217be5696d", "value": "100% 79/79 [00:01<00:00, 45.65it/s]" } }, "63cd0b359ca748c4a0aae0e09a4fbb3c": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_da9d36f1abb3470f89dcdd21931dcfcd", "IPY_MODEL_57541d738454477198d5f0ccc4521703" ], "layout": "IPY_MODEL_07af261c59f6431fb4793235e4b161b2" } }, "6449f6d43b3e458bb419bcc1198fd5b9": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "65030196eaca4611bf4bf974a38610d6": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, 
"656337147cc34e5494d9c722d65496ba": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_1a9b8f506fca4149a2b2cd992b670785", "max": 79, "style": "IPY_MODEL_9b94b49d7ef44cba8d0b514159407f8d", "value": 79 } }, "675221f89e2140ef8d60204390e9acda": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "6766a5a8c8f14f3daf0eb3013746dc9f": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "69523589bb43433cb1de06bb116e8958": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "69a43112476f4730b7c8d60e2b301e10": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "69f3ab8664b44198aad0fe0ecc18f08c": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_1e707ca751194ec2a7a14564648bd27a", "max": 79, "style": "IPY_MODEL_c292aa2c40e347418d62e2d8befd013d", "value": 79 } }, "6cfd09d8ff154c2786daa740299f3cf2": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "6ddadc736a5845f198d128a1c0c0e94d": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "6df4ed67c40043e59585f8160fb32156": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_36e2fc573d5d45c4bd9ed063f5e14954", "IPY_MODEL_63395bfc4f0e433588c3562efc322ddb" ], "layout": "IPY_MODEL_97d837fcb3944443970fb37175e2a8e2" } }, "6e17b8245d8e4202ba9b9cc8b2a16282": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_69523589bb43433cb1de06bb116e8958", "max": 79, "style": "IPY_MODEL_9403280d739a4430938626d044d78ba6", "value": 79 } }, "6ea469953ee04cc0bab8260495313f27": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "6ee900a5b4a44c46afefd5dbc277089a": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "7007118bd6734014b09e7e49477c0c4a": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "728662f43b604263a57237acc830f2fa": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "73790d724d814ee2bb3442420f1efa87": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "743078593d73460abf57d9429fa92c8b": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "75a786ad7f7c459fa6f108319910e93c": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, 
"7678625a957942feb9d1f19894db4207": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_f53c52d6047a41fb9fca83d28d5cf184", "style": "IPY_MODEL_31beb781955e4abea1fa33138a70ca69", "value": "100% 1875/1875 [01:16<00:00, 24.50it/s]" } }, "7722c8d516c14ea7941084c5844c6162": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_95ee7405b95748a691751e8e3674cee7", "style": "IPY_MODEL_d10f04a4086c4d0a82af922259aa8dfd", "value": "100% 1875/1875 [01:15<00:00, 24.69it/s]" } }, "7831b7d3b6c54b538bea298f096fb536": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "7ae4bac9876546f0b92d11366637d861": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "7b73e5216e7840faa3eb2662a0c01b2a": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "7ba08023e68346c99d295c8cf317c9c6": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "7bede11c51444425aabed22076a37eb1": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "7c36e77d1ef14e1bbcb55f3390ac0f97": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "7c5e8e8b8fec405dbf312c03d8bc8919": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "danger", "layout": "IPY_MODEL_75a786ad7f7c459fa6f108319910e93c", "max": 79, "style": "IPY_MODEL_529def52c63f4a29b578b1ece833bd93" } }, "7d44bc9bc4754f09b48ba223a1311fb0": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_415f6e33dce140f3ac3935aa6769bce9", "max": 1875, "style": "IPY_MODEL_8f9d3d77e4e740388b5d7a15bd09e57f", "value": 1875 } }, "7ddab42457f440d59ede14a2b662c324": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_18983393e5414f6f858e8c034d9fb9bc", "style": "IPY_MODEL_73790d724d814ee2bb3442420f1efa87", "value": "100% 79/79 [00:01<00:00, 47.10it/s]" } }, "803ab43df57c4b49a0c9400d794ba6c3": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "81b4d8b1315742a183033513b0459455": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_54fef2fc76454dbc87f85d678e570fca", "style": "IPY_MODEL_afc4b1da754d42a7a5fb84526e3b139c", "value": "100% 1875/1875 [01:16<00:00, 24.67it/s]" } }, "8271d20551384a1f9c2c3fe6bfa37094": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "832391149cbb481e90d8fa94041f6b0e": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_a0d36ce229f14b88976975c250e2ca3b", "max": 79, "style": "IPY_MODEL_1806bd82afb64886a188425bf61de3ee", 
"value": 79 } }, "8480fea52e3d488aae8d090a01f5688e": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_910073d15395494f9ec3efa8bf691e9e", "style": "IPY_MODEL_fcc3a232659a4a2a83d065a787c26402", "value": "100% 79/79 [00:01<00:00, 45.69it/s]" } }, "8531fb26ddc84ed9ac53bbf1acdf5139": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_944251061ed24845865095e543468fb1", "style": "IPY_MODEL_8d807d05f54f4175a3ff616d65083830", "value": "100% 79/79 [00:01<00:00, 46.50it/s]" } }, "86c82f8a014e4048879c0d3298efdda2": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "86e7222efdff4a8392ad8ac37928440a": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "87334fc2d276402e917d0517382e8b42": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "8a28e7cf467f422bafcc4a4f7f80d99d": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "8cb88a4d7a6443aea442fc31f7bb3eee": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "8d807d05f54f4175a3ff616d65083830": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "8eb957262d244a90a6f5d70e3c1195d9": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "8f9d3d77e4e740388b5d7a15bd09e57f": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "90af5c7af9cc46e4b2e1dde7a04ce01a": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "910073d15395494f9ec3efa8bf691e9e": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "925b89ad27154b26ad4834342ea0e497": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_b0262c82ff9346258f175008712f31c4", "style": "IPY_MODEL_0df95b684157426880512578f9f78d1f", "value": "100% 79/79 [00:01<00:00, 46.11it/s]" } }, "93a9f35b76de44c0aaf776946347b76e": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_baf09de2e4e645fa88b93a5a82fa1664", "IPY_MODEL_2d5c4ccf97f641f4b36c516e49b42f1c" ], "layout": "IPY_MODEL_3bdc65622ceb44fa895eeccf2ecc0152" } }, "9403280d739a4430938626d044d78ba6": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "944251061ed24845865095e543468fb1": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "94a492c2964f4a7baf815dad7841b6db": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { 
"description_width": "" } }, "9568817c86d54a68b638bb217be5696d": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "95ee7405b95748a691751e8e3674cee7": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "96a3541c181347b39e533bcedf25f0d4": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_9d9e984f53ef402ab5d1642ce9aaa392", "max": 79, "style": "IPY_MODEL_0e7868acdb7c4c619625d37deb9eb3a3", "value": 79 } }, "97d837fcb3944443970fb37175e2a8e2": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "9b94b49d7ef44cba8d0b514159407f8d": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "9ba772bad6514a60b6a67b5801836a54": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_01447e49744f4e15a918eab38023ee12", "max": 79, "style": "IPY_MODEL_69a43112476f4730b7c8d60e2b301e10", "value": 79 } }, "9bb7628377ca49989eeeddf2c4d5f4c8": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "9d9e984f53ef402ab5d1642ce9aaa392": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "9f4929ea841e4ab3960b47003ad1d648": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "a0d36ce229f14b88976975c250e2ca3b": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "a167fa6ae8a64dd0a6a5e76074c13c70": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_05e09cca05514276a8729c3d97e57437", "style": "IPY_MODEL_3538802d34dc4533b03785643f57235d", "value": "100% 1875/1875 [01:17<00:00, 24.13it/s]" } }, "a18ce0988e894226b3ae3fec703d1114": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_44766dc4803a4a5aa6a8c752d0ae2049", "IPY_MODEL_8480fea52e3d488aae8d090a01f5688e" ], "layout": "IPY_MODEL_19c7b3b12ec3416299db573bd6984df5" } }, "a2b7b5f4fe3f437c810ee41d2e75ace7": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "a9d79f7b477d47e2a2d6f487f6736c97": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_7c36e77d1ef14e1bbcb55f3390ac0f97", "style": "IPY_MODEL_1b92e09380d34936a89235d115570832", "value": "100% 79/79 [00:01<00:00, 46.85it/s]" } }, "aeea7f2d543645a1a4650e63048c5e21": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_3022b9630fe343639f36b7d80ef877eb", "IPY_MODEL_a9d79f7b477d47e2a2d6f487f6736c97" ], "layout": "IPY_MODEL_b2cdb88885a54889a24905348828163c" } }, "afa037fad5894728ac3f93bf6cfe91e7": { "model_module": 
"@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "afc4b1da754d42a7a5fb84526e3b139c": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "b0262c82ff9346258f175008712f31c4": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "b0eb50e2e02740d4a166cb4ced86c574": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_2a521e703d1a4df598d4d7ebc8917df1", "max": 79, "style": "IPY_MODEL_cb2e7e4d6dcd4644bf6bcc6379a2eebf", "value": 79 } }, "b2cdb88885a54889a24905348828163c": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "b4e3c1c1e8f64d7d8158e341efda2edb": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_5300061d70b04209a9b59fdd6493424c", "style": "IPY_MODEL_0ea17b3b30934876a6770551876e249a", "value": "100% 79/79 [00:01<00:00, 46.76it/s]" } }, "b7b7fe61a5094687b3b471fd96a9c3f0": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_3e54bef0104e4979b134823ac90c4fa6", "style": "IPY_MODEL_6ea469953ee04cc0bab8260495313f27", "value": "100% 1875/1875 [01:17<00:00, 24.34it/s]" } }, "b94d8094b6e24f9cb5f82e15b6b7a8df": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_3c18e69d054d4a259f9b75cb5f2f57f5", "style": "IPY_MODEL_e5b247044ffc47b5aeca6bfe487a7f28", "value": "100% 79/79 [00:01<00:00, 46.40it/s]" } }, "ba07dd1810c74b6ba518f6221168913b": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "baf09de2e4e645fa88b93a5a82fa1664": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_dfea0c5e412b4a98827d3c14e7cd07f7", "max": 1875, "style": "IPY_MODEL_8cb88a4d7a6443aea442fc31f7bb3eee", "value": 1875 } }, "bc63d0e73204431c8d36c5cc5851b74c": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "beb4598548a6448e8f7a80bb8683bb05": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "bf6b4ccf1cbc413b8fb85fa5b7fc36cd": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_cd8c8dc54ce7434ea8d1691d720fe360", "IPY_MODEL_b7b7fe61a5094687b3b471fd96a9c3f0" ], "layout": "IPY_MODEL_ee020566515641e1b6ddb5a047e1be9a" } }, "c27ea7e05da146a2ae2b4713ebcd38c0": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "c292aa2c40e347418d62e2d8befd013d": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "c979ea3c350142b6b8105842c836e5f8": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", 
"model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_6e17b8245d8e4202ba9b9cc8b2a16282", "IPY_MODEL_290d0742411a4702ae281fea70f3bcaa" ], "layout": "IPY_MODEL_5a6b2b387a7141bfbe7d6826f12f54eb" } }, "cb26e37966c94d6098dd962de92a45a0": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_d833f8a0581c433697eb420645e80c6e", "max": 79, "style": "IPY_MODEL_e956690c18f34ad6bc3fdf3b6fe9cf37", "value": 79 } }, "cb2e7e4d6dcd4644bf6bcc6379a2eebf": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "cb99dce53a4a4288a52f0b4b831ad906": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_fb46e5e3660f468e8a4d934bd7672907", "max": 79, "style": "IPY_MODEL_6262c5576e884ab7886d6e186df54ae8", "value": 79 } }, "cc42d02ea372471d9a4c420d65f26b4a": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_23b650880517487192d3a539ce2eee33", "IPY_MODEL_a167fa6ae8a64dd0a6a5e76074c13c70" ], "layout": "IPY_MODEL_1853c04636984ec6a178826a85930c1b" } }, "cd8c8dc54ce7434ea8d1691d720fe360": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_eea954d8b59a435abd4d0a41f5059c4e", "max": 1875, "style": "IPY_MODEL_675221f89e2140ef8d60204390e9acda", "value": 1875 } }, "d10f04a4086c4d0a82af922259aa8dfd": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "d3f17e88798a4f6a9e33e9005df7c7f9": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "d802fe45395847869f7925a7dc7394e2": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "d833f8a0581c433697eb420645e80c6e": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "da9d36f1abb3470f89dcdd21931dcfcd": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_86e7222efdff4a8392ad8ac37928440a", "max": 1875, "style": "IPY_MODEL_5a9947e7eab6439b880b24ee85c27c61", "value": 1875 } }, "dc1ee5808f0946eaa79fd898647e0ba2": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "dcf78f860cc4483d9741b04aa3d6d1db": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "de307ce4268d41fb918d154ea0409dbd": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_cb26e37966c94d6098dd962de92a45a0", "IPY_MODEL_925b89ad27154b26ad4834342ea0e497" ], "layout": "IPY_MODEL_fe91d8419f6b456f9e66d219f5991369" } }, "dfea0c5e412b4a98827d3c14e7cd07f7": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", 
"model_name": "LayoutModel", "state": {} }, "e139b8cfb0304b9dbfe35e0418dcaa07": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_9ba772bad6514a60b6a67b5801836a54", "IPY_MODEL_1e92b05840124fe7bf8d9bd143da48cd" ], "layout": "IPY_MODEL_afa037fad5894728ac3f93bf6cfe91e7" } }, "e199254644b34a6395dcc41cee2bd738": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "e1cc23676b87406fbd3840b661dfb5e7": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_cb99dce53a4a4288a52f0b4b831ad906", "IPY_MODEL_8531fb26ddc84ed9ac53bbf1acdf5139" ], "layout": "IPY_MODEL_6027f1f6d186491e9b1c6a0b4488f602" } }, "e42169a0d360481ebd3c529cc421c2a5": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_832391149cbb481e90d8fa94041f6b0e", "IPY_MODEL_3dcea128ee1c47fabd9c1ca1513cbe0e" ], "layout": "IPY_MODEL_f65aa846efdf411bb970b52d69e6e3a5" } }, "e5b247044ffc47b5aeca6bfe487a7f28": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "e6b679d3e9d544cfb613164b03c8991d": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "e70e510e7f5844b0a1d578f7679343e0": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_4a9b90eef6b94bcc9e7919fbba4419fb", "IPY_MODEL_b94d8094b6e24f9cb5f82e15b6b7a8df" ], "layout": "IPY_MODEL_beb4598548a6448e8f7a80bb8683bb05" } }, "e83e862474bd4899ad386c004d8cb9a2": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_7c5e8e8b8fec405dbf312c03d8bc8919", "IPY_MODEL_0b6e4731c92f432689b4cc7c385392d4" ], "layout": "IPY_MODEL_a2b7b5f4fe3f437c810ee41d2e75ace7" } }, "e921d08f6e2941b4a40a1cdf3167b7c2": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "e956690c18f34ad6bc3fdf3b6fe9cf37": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "ea36d1a94b36430f947fcf0962fabc40": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_33497dd7f39b42da8ce8c28ae0eb1ebf", "IPY_MODEL_32b30cb9acc244a9914e34f45879c21e" ], "layout": "IPY_MODEL_6cfd09d8ff154c2786daa740299f3cf2" } }, "eb836fe776da47d8b89c0b01ecb89d98": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_d802fe45395847869f7925a7dc7394e2", "max": 1875, "style": "IPY_MODEL_34b6a9403d5b42e695cadb3a3446004a", "value": 1875 } }, "ee020566515641e1b6ddb5a047e1be9a": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "eea954d8b59a435abd4d0a41f5059c4e": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "f209fbc7469f40e7963d4c18bae6e111": { "model_module": "@jupyter-widgets/base", 
"model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "f53c52d6047a41fb9fca83d28d5cf184": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "f612da8636d642e6b0d42ea2dfab928a": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_3f6fa3cbad674dcb93ae3b17bcfb60d6", "IPY_MODEL_7678625a957942feb9d1f19894db4207" ], "layout": "IPY_MODEL_4bbe6a4019384880accf89bd6ea4ce43" } }, "f65aa846efdf411bb970b52d69e6e3a5": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "f8a3eabaf27c46ca80f37df959abdbb4": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "IntProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_e921d08f6e2941b4a40a1cdf3167b7c2", "max": 79, "style": "IPY_MODEL_065445947c66425282ee8aaf7954a120", "value": 79 } }, "fb46e5e3660f468e8a4d934bd7672907": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "fcc3a232659a4a2a83d065a787c26402": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "fe681ff909f441eea634e7bb6169422b": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "fe91d8419f6b456f9e66d219f5991369": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} }, "ff9bf7cbddb6483a821d35aa872e4417": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_7d44bc9bc4754f09b48ba223a1311fb0", "IPY_MODEL_81b4d8b1315742a183033513b0459455" ], "layout": "IPY_MODEL_8271d20551384a1f9c2c3fe6bfa37094" } }, "fffa785bfcd54cb49a1c2a420bac2b51": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.1.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_44f2ee887e874370bc4b4d0330c843af", "style": "IPY_MODEL_4c1560d1809d4a1ea741d43dcef95076", "value": "100% 79/79 [00:01<00:00, 44.05it/s]" } }, "fffd484e385e4249bb9759808ee02fac": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.0.0", "model_name": "LayoutModel", "state": {} } }, "version_major": 2, "version_minor": 0 } } }, "nbformat": 4, "nbformat_minor": 2 }