{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Chemical Kinetics and Numerical Integration\n", "\n", "Here we will use methods of numerical integration to solve for the abundances of the H$_3^+$ isotopologues in the ion trap experiment from last week's notebook. After using integrated rate equations and curve fitting, we came up with this result:\n", "\n", "![Deuteration Results](https://leeping.github.io/che155/assets/images/week05/deuteration_results.png)\n", "\n", "The deviations, most notable in the D$_2$H$^+$ results, are because the reverse reactions were not included in our model. It would be very difficult to derive new rate equations, so we will use numerical methods instead.\n", "\n", "## Forward Euler Method\n", "\n", "First, we will reimplement the exact same model as last time, but this time we will solve using the Forward Euler Method. First, load in the `deuteration.csv` file. It contains the same experimental data as last week, but the time field has been rounded and lined up so that all abundances for each molecule are given at the same time values. This will make comparisons with the numerical models easier down the road." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from matplotlib import pyplot as plt\n", "import pandas as pd\n", "\n", "df = pd.read_csv('deuteration.csv')\n", " \n", "df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a reminder, the model was defined by the equations:\n", "\n", "$$ \\frac{\\text{d}[\\text{H}_3^+]}{\\text{d}t} = -k_1[\\text{H}_3^+] $$\n", "\n", "$$ \\frac{\\text{d}[\\text{H}_2\\text{D}^+]}{\\text{d}t} = k_1[\\text{H}_3^+] - k_2[\\text{H}_2\\text{D}^+] $$\n", "\n", "$$ \\frac{\\text{d}[\\text{D}_2\\text{H}^+]}{\\text{d}t} = k_2[\\text{H}_2\\text{D}^+] - k_3[\\text{D}_2\\text{H}^+] $$\n", "\n", "$$ \\frac{\\text{d}[\\text{H}_3^+]}{\\text{d}t} = k_3[\\text{D}_2\\text{H}^+] $$\n", "\n", "We can express these in a simple form with the matrix equation:\n", "\n", "$$ \\begin{bmatrix} \\text{d}[\\text{H}_3^+]/\\text{d}t \\\\ \\text{d}[\\text{H}_2\\text{D}^+]/\\text{d}t \\\\ \\text{d}[\\text{D}_2\\text{H}^+]/\\text{d}t \\\\ \\text{d}[\\text{D}_3^+]/\\text{d}t \\end{bmatrix} = \\begin{bmatrix} -k_1 & 0 & 0 & 0 \\\\ k_1 & -k_2 & 0 & 0 \\\\ 0 & k_2 & -k_3 & 0 \\\\ 0 & 0 & k_3 & 0 \\end{bmatrix} \\begin{bmatrix}[\\text{H}_3^+] \\\\ [\\text{H}_2\\text{D}^+] \\\\ [\\text{D}_2\\text{H}^+] \\\\ [\\text{D}_3^+] \\end{bmatrix} $$\n", "\n", "Then, taking a time step $\\Delta t$, we can compute new concentrations:\n", "\n", "$$ \\begin{bmatrix}[\\text{H}_3^+] \\\\ [\\text{H}_2\\text{D}^+] \\\\ [\\text{D}_2\\text{H}^+] \\\\ [\\text{D}_3^+] \\end{bmatrix}_{\\,i+1} = \\begin{bmatrix}[\\text{H}_3^+] \\\\ [\\text{H}_2\\text{D}^+] \\\\ [\\text{D}_2\\text{H}^+] \\\\ [\\text{D}_3^+] \\end{bmatrix}_{\\,i} + \\begin{bmatrix} -k_1 & 0 & 0 & 0 \\\\ k_1 & -k_2 & 0 & 0 \\\\ 0 & k_2 & -k_3 & 0 \\\\ 0 & 0 & k_3 & 0 \\end{bmatrix} \\begin{bmatrix}[\\text{H}_3^+] \\\\ [\\text{H}_2\\text{D}^+] \\\\ [\\text{D}_2\\text{H}^+] \\\\ [\\text{D}_3^+] \\end{bmatrix}_{\\,i} \\Delta t$$\n", "\n", "As of Python 3.5, matrix multiplication (and other types of dot products) can be done with the `@` operator. When used with `numpy.ndarray` objects, the [`numpy.matmul`](https://numpy.org/doc/stable/reference/generated/numpy.matmul.html) function is called. In our case, we will create a 4x4 matrix called `J` and a 1D array with 4 elements called `n` to store the abundances. 
When we call `J@n`, it multiplies each row of `J` by the 4 elements in `n`, and adds them up. Here we use the results from the curve fitting, which should give us results similar to last time. We will set the step size `dt` to 0.1 ms, and take 1500 steps." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#initialize rate constants\n", "hd=6.3e10\n", "k1=1.42e-9*hd\n", "k2=1.33e-9*hd\n", "k3=1.05e-9*hd\n", "\n", "#H3+ at t=0 is 932, H2D+, D2H+, and D3+ start at 0.\n", "n0 = np.array([932,0,0,0])\n", "\n", "#initialize an empty 4x4 matrix, and plug in k values at the right places\n", "J = np.zeros((4,4))\n", "J[0,0] = -k1\n", "J[1,1] = -k2\n", "J[2,2] = -k3\n", "J[1,0] = k1\n", "J[2,1] = k2\n", "J[3,2] = k3\n", "\n", "#this array n will be updated with the new concentrations at each step. Initialize it at n0\n", "n = n0\n", "dt = 1e-4\n", "steps = 1500\n", "\n", "#this array will keep track of the values of n at each step\n", "nt = np.zeros((steps+1,len(n0)))\n", "nt[0] = n0\n", "\n", "#take each step, updating n at each one; store the results in the nt array\n", "for i in range(0,steps):\n", " n = n + J@n*dt\n", " nt[i+1] = n\n", " \n", "nt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can plot the results and compare with the experimental data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig,ax = plt.subplots()\n", "t = np.linspace(0,150e-3,len(nt))\n", "\n", "ax.scatter(df['time'],df['H3+'],color='#000000',label=r'H$_3^+$')\n", "ax.scatter(df['time'],df['H2D+'],color='#ffbf00',label=r'H$_2$D$^+$')\n", "ax.scatter(df['time'],df['D2H+'],color='#022851',label=r'D$_2$H$^+$')\n", "ax.scatter(df['time'],df['D3+'],color='#c10230',label=r'D$_3^+$')\n", "\n", "ax.set_xlabel(\"Time (s)\")\n", "ax.set_ylabel(\"Number\")\n", "\n", "lines = ax.plot(t,nt)\n", "lines[0].set_color('#000000')\n", "lines[1].set_color('#ffbf00')\n", "lines[2].set_color('#022851')\n", "lines[3].set_color('#c10230')\n", "ax.set_yscale('log')\n", "ax.legend()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the step size is a critical parameter! If we increase the step size too much, we can get some bad results." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "n = n0\n", "dt = 5e-3\n", "steps = round(.15/dt)+1\n", "\n", "nt = np.zeros((steps+1,len(n0)))\n", "nt[0] = n0\n", "\n", "for i in range(0,steps):\n", " n = n + J@n*dt\n", " nt[i+1] = n\n", " \n", "fig,ax = plt.subplots()\n", "t = np.linspace(0,len(nt)*dt,len(nt))\n", "\n", "ax.scatter(df['time'],df['H3+'],color='#000000',label=r'H$_3^+$')\n", "ax.scatter(df['time'],df['H2D+'],color='#ffbf00',label=r'H$_2$D$^+$')\n", "ax.scatter(df['time'],df['D2H+'],color='#022851',label=r'D$_2$H$^+$')\n", "ax.scatter(df['time'],df['D3+'],color='#c10230',label=r'D$_3^+$')\n", "\n", "ax.set_xlabel(\"Time (s)\")\n", "ax.set_ylabel(\"Number\")\n", "\n", "lines = ax.plot(t,nt)\n", "lines[0].set_color('#000000')\n", "lines[1].set_color('#ffbf00')\n", "lines[2].set_color('#022851')\n", "lines[3].set_color('#c10230')\n", "ax.set_yscale('log')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Least Squares Fitting and Numerical Integration\n", "\n", "It is possible (though not very common) to implement least squares fitting together with the numerical integration in order to estimate the kinetics parameters. We'll walk through the process here.
Last time we used `scipy.optimize.least_squares`, which required us to calculate the residuals vector between the model and the experimental data. When using integrated rate equations, this was straightforward because we could just plug in the time for each data point into the model and compute the model's prediction. With numerical integration, however, we do not have such a function!\n", "\n", "Instead, what we can do is save the model's outputs whenever the time matches the time at which an experimental data point is taken. If we choose time steps judiciously, we can make sure that we always sample the model at each time point needed. If we inspect the data frame, we can see that all of the time points are at a multiple of 0.1 ms." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Therefore, a time step `dt` of 0.1 ms (or an integer factor smaller) will ensure that the model samples each time point we need to compare with the experimental data. The code below checks to see if `i` (the current time in units of `dt`) is in the array `tvals`, which is the time array converted to units of `dt`, and if so it stores the current model abundances in a list for later use. Importantly, this is chosen such that all of the time comparisons are between integers so that we don't have to worry about issues with floating point comparisons.\n", "\n", "At the end of the code block, `nm` is a 2D numpy array where each row is a time point and each column is the abundance of one of the ions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "n = n0\n", "dt = 1e-4\n", "steps = 1500\n", "\n", "nmodel = []\n", "tvals = df['time'].to_numpy()/dt\n", "tvals = tvals.astype(int)\n", "\n", "for i in range(0,steps+1):\n", " n = n + J@n*dt\n", " if i in tvals:\n", " nmodel.append(n)\n", " \n", "nm = np.array(nmodel)\n", "nm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we'll plot the results. A quick side note here: we've been doing a lot of repetitive manual color changing. If you have a set of colors you want to consistently use, you can change matplotlib's default color cycling (see this [tutorial](https://matplotlib.org/tutorials/intermediate/color_cycle.html) for a quick example). Below I create a new `cycler` object that tells matplotlib to cycle between the 4 colors we have been using instead of its defaults. As the tutorial shows, you can either set the cycler on an `Axes` object like in the code below, which only affects that object, or you can apply the cycler to all subsequently created plots." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from cycler import cycler\n", "\n", "ucd_cycler = (cycler(color=['#000000','#ffbf00','#022851','#c10230','#266041','#8a532f']))\n", "\n", "fig,ax = plt.subplots()\n", "ax.set_prop_cycle(ucd_cycler)\n", "ax.plot(df['time'],nm,'o')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's turn that into a function that takes the kinetics parameters (`h30`, `k1`, `k2`, `k3`) as arguments. We also need to pass the time values at which the model should be sampled, the step size, and the number of steps."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def runmodel(params,tvals,dt,steps):\n", " h30 = params[0]\n", " k1 = params[1]\n", " k2 = params[2]\n", " k3 = params[3]\n", " n = np.asarray([h30,0,0,0])\n", " nmodel = []\n", " J = np.zeros((4,4))\n", " J[0,0] = -k1\n", " J[1,1] = -k2\n", " J[2,2] = -k3\n", " J[1,0] = k1\n", " J[2,1] = k2\n", " J[3,2] = k3\n", "\n", " for i in range(0,steps+1):\n", " n = n + J@n*dt\n", " if i in tvals:\n", " nmodel.append(n)\n", " \n", " return(np.array(nmodel))\n", " \n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Test to make sure the `runmodel` function works as intended:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tvals = df['time'].to_numpy()/dt\n", "tvals = tvals.astype(int)\n", "hd=6.3e10\n", "k1=1.43e-9*hd\n", "k2=1.33e-9*hd\n", "k3=1.05e-9*hd\n", "h30 = 932\n", "\n", "runmodel(np.array([h30,k1,k2,k3]),tvals,1e-4,1500)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To perform the `least_squares` optimization, we need to create a function that computes the residuals of the model. This function must have the signature `f(x,*args,**kwargs)` where `x` is an array containing the parameters that will be optimized (`h30`, `k1`, `k2`, and `k3`), `*args` contains any additional arguments that are needed, and `**kwargs` can contain any other information.\n", "\n", "Like last time, we'll use `**kwargs` to pass in the experimental data. `*args` will contain the `tvals`, `dt`, and `steps` parameters that need to be passed to `runmodel.` Ance we have the results of the model, we need to compute the residuals." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def total_fit(x,*args,**kwargs):\n", " \n", " df = kwargs['df']\n", "\n", " nm = runmodel(x,*args)\n", " \n", " #a naive algorithm using for loops; slow, but flexible!\n", "# out = []\n", "# for i,model in enumerate(nm):\n", "# for j,mol in enumerate(['H3+','H2D+','D2H+','D3+']):\n", "# n = df.at[i,mol]\n", "# if np.isfinite(n):\n", "# out.append(n-model[j])\n", "# return out\n", " \n", " #taking advantage of numpy's array routines: fast, but requires more work if anything changes\n", " rh3 = df['H3+'] - nm[:,0]\n", " rh3 = rh3[~np.isnan(rh3)] #remove NaNs... isnan returns an array of booleans, so we take the logical not and use it as a slice to extract only the finite values\n", " \n", " rh2d = df['H2D+'] - nm[:,1]\n", " rh2d = rh2d[~np.isnan(rh2d)]\n", " \n", " #there are no NaNs in the experimental data for D2H+ or D3+\n", " rd2h = df['D2H+'] - nm[:,2]\n", " rd3 = df['D3+'] - nm[:,3]\n", " \n", " #concatenate and return\n", " return np.concatenate((rh3,rh2d,rd2h,rd3))\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can use `least_squares` to compute optimal parameters, and we can see that we get almost exactly the same results as the integrated rate equation approach. Note, however, that there is no problem with us starting out with `k1` and `k2` being equal! There is no divide by 0 error with numerical integration like there was with the integrated rate equations." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import scipy.optimize as opt\n", "import numpy.linalg as la\n", "\n", "data = {\n", " 'df' : df\n", "}\n", "\n", "tvals = df['time'].to_numpy()/dt\n", "tvals = tvals.astype(int)\n", "hd=6.3e10\n", "\n", "result = opt.least_squares(total_fit,[900,1e-9*hd,1e-9*hd,1e-9*hd],\n", " args=[tvals,1e-4,1500],kwargs=data,verbose=1)\n", "pcov = la.inv(result.jac.T @ result.jac)\n", "\n", "for i,x in enumerate(['[H3+]0','k1','k2','k3']):\n", " den = hd\n", " if i==0:\n", " den = 1.\n", " print(f'{x} = {result.x[i]/den:.2e} +/- {np.sqrt(pcov[i][i])/den:.2e}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Integration with `scipy.integrate`\n", "\n", "Our manual implementation of the numerical integration uned the Forward Euler Method, whose total error is proportional to $(\\Delta t)^{1}$. It is usually desirable to use a higher-order method to achieve either higher accuracy or obtain the same accuracy with fewer steps. The function we are going to explore is [`scipy.integrate.solve_ivp`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html), which is made to solve initial value problems." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "import scipy.integrate as spi\n", "\n", "spi.solve_ivp?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see from the function description, we need to provide at least 3 arguments:\n", "- `fun` is a function that computes the vector of derivatives. Its function signature needs to be `f(t,y,*args)`. `t` is the current time, `y` is the current state array (in our case, the array containing the molecule abundances), and the remainder of the arguments can contain anything else needed to compute the derivatives (e.g., rate coefficients, etc)\n", "- `t_span` is a tuple that specifies the initial and final time for the integration\n", "- `y0` is a vector containing the initial conditions - the starting abundances for the molecules.\n", "\n", "In addition to those required parameters, there are three other optional arguments that are useful for us:\n", "- `method` selects which numerical integration method will be employed. The default, `'RK45'`, is the fourth-order Runge-Kutta method, but several others are available, including some implicit solvers that are important when problems are \"stiff.\" A system of equations is stiff when the solutions are very sensitive to the step size even when the solution appears \"smooth.\" Chemical kinetics problems are frequently stiff when there are some very slow reactions combined with others that are very fast, and you want to evaluate the system over a long time compared with the rate of the fast reactions. In the current example, all of the reactions have comparable rates, so we will stick with `'RK45'`, but often the `'Adams'` or `'Radau'` methods are more appropriate for kinetics problems.\n", "- `t_eval` is a list of times at which the model returns abundances. If this is None, the model only gives the results at the final time. If we pass an array of times, the results will contain the abundances at all of the time values specified in `t_eval` which fall within `t_span`\n", "- `dense_output` causes the solver to construct functions that interpolate between time steps. 
This allows you to (approximately) evaluate the model at any time, not just at the time steps that were used in the model.\n", "\n", "Note that nowhere do you need to specify the step size! All of the methods employ various algorithms to automatically determine the step size needed to bring the error down to a certain desired value. Some even include adaptive step sizes that can take smaller or larger steps depending on the magnitudes of the derivatives.\n", "\n", "Let's re-implement the same model, but this time perform the integration with `solve_ivp`. First we need to write a function that computes the derivative." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# function must take t and y as its first 2 arguments. Since our derivatives don't explicitly depend on t, that variable isn't used in the body of the function.\n", "# to calculate the rates, we need the rate coefficients and abundances. The abundances are in y, so we need to pass the k values as arguments.\n", "def calc_derivative(t,y,k1,k2,k3):\n", " \n", " J = np.zeros((len(y),len(y)))\n", " J[0,0] = -k1\n", " J[1,1] = -k2\n", " J[2,2] = -k3\n", " J[1,0] = k1\n", " J[2,1] = k2\n", " J[3,2] = k3\n", " \n", " return J@y" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With that, we can now use `solve_ivp` to compute the solution from 0 to 0.15 seconds. We'll use the default `RK45` integrator, and set the `dense_output` flag to allow us to generate a quasi-continuous model function. In addition, we'll pass our `df['time']` array to `t_eval` so that we have the exact model values at the experimental time points.\n", "\n", "Within the `result` object that is returned, we can access the dense solution with `result.sol`, which takes a time value as an argument. The solution values are in `result.y`, and the time points for each solution are in `result.t`. The plot that this cell creates shows both the dense output and the discrete solutions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "hd=6.3e10\n", "k1=1.0e-9*hd\n", "k2=1.0e-9*hd\n", "k3=1.0e-9*hd\n", "h30 = 932\n", "\n", "result = spi.solve_ivp(calc_derivative,(0,.15),y0=[h30,0,0,0],\n", " t_eval=df['time'],method='RK45',\n", " dense_output=True,args=(k1,k2,k3))\n", "fig,ax = plt.subplots()\n", "ax.set_prop_cycle(ucd_cycler)\n", "ax.scatter(df['time'],df['H3+'],color='#000000',label=r'H$_3^+$')\n", "ax.scatter(df['time'],df['H2D+'],color='#ffbf00',label=r'H$_2$D$^+$')\n", "ax.scatter(df['time'],df['D2H+'],color='#022851',label=r'D$_2$H$^+$')\n", "ax.scatter(df['time'],df['D3+'],color='#c10230',label=r'D$_3^+$')\n", "t = np.linspace(0,160e-3,1000)\n", "ax.plot(t,result.sol(t).T)\n", "#ax.plot(result.t,result.y.T,'o')\n", "ax.set_yscale('log')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Extending the System\n", "\n", "We wish to add the reverse reactions to the system and also to explicitly model the reactions using the full second-order rates instead of the pseudo-first-order ones we have been using up until this points. So far our system has been explicitly hardcoded. This is fine for a small system like this, but if we start to have many more molecules and reactions, manually coding the rates is tedious and also error-prone.\n", "\n", "We will aim to improve the reliability and flexibility of the code by defining the model in terms of chemical reactions, and we will automatically generate the rate equations from the reactions themselves. 
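For example, with the molecule numbering introduced below, the reaction H$_3^+$ + HD $\\rightarrow$ H$_2$D$^+$ + H$_2$ will be written as a list of reactant indices `[0,4]`, a list of product indices `[1,5]`, and a rate coefficient, and the corresponding terms in the rate equations will be generated automatically.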
First, let's create a list of molecules. We'll use a pandas dataframe for convenience, though we could implement this with lists or numpy arrays as well.\n", "\n", "We'll start by refactoring the existing pseudo-first-order system, and then show how we can easily convert to the full second-order reaction network." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "species = pd.DataFrame(['H3+','H2D+','D2H+','D3+'],columns=['name'])\n", "species" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now each molecule will be referred to by its index in this dataframe instead of by its name. We next need to define the chemical reactions that link these molecules together. We'll do this by creating a new class that contains the reactants, the products, and the rate coefficient. The constructor first extracts the unique reactants and products along with how many times each reactant/product appears, and stores those counts as numpy arrays. We also make a `__str__` function for convenience that will print the reaction and its rate. We pass the `species` data into the constructor so that the reaction can get the names of the molecules." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class Reaction:\n", " def __init__(self,species,reactants=[],products=[],k=0.0):\n", " self.reactants, self.rcounts = np.unique(np.asarray(reactants),return_counts=True)\n", " self.products, self.pcounts = np.unique(np.asarray(products),return_counts=True)\n", " rnames = []\n", " pnames = []\n", " for r,c in zip(self.reactants,self.rcounts):\n", " rnames.append(self.makename(species,c,r))\n", " for p,c in zip(self.products,self.pcounts):\n", " pnames.append(self.makename(species,c,p))\n", " self.k = k\n", " self.name = f'{\" + \".join(rnames)} --> {\" + \".join(pnames)}, k = {self.k:.2e}'\n", " \n", " def __str__(self):\n", " return self.name\n", " \n", " def makename(self,species,c,n):\n", " out = species.at[n,'name']\n", " if c > 1:\n", " out = f'{c}{out}'\n", " return out\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To create a reaction, we call the `Reaction` constructor and give it the species list, then the list of the reactants' ID numbers, then the products' ID numbers, and then the rate coefficient. Since we're currently only considering the forward reactions and keeping \\[HD\\] constant, we can just include the ions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "r1 = Reaction(species,[0],[1],1.43e-9*hd)\n", "r2 = Reaction(species,[1],[2],1.33e-9*hd)\n", "r3 = Reaction(species,[2],[3],1.05e-9*hd)\n", "reactions = pd.DataFrame([r1,r2,r3],columns=['reaction'])\n", "reactions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that we can make reactions that involve multiple copies of the same molecule. A silly example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(Reaction(species,[0,0,1],[3,3,2],1.))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For computing the derivatives, we can use the definitions of the rate of an elementary reaction.
For example, the elementary reaction A + B --> C + D has the following rates:\n", "\n", "$$ -\\frac{\\text{d}[A]}{\\text{d}t} = -\\frac{\\text{d}[B]}{\\text{d}t} = \\frac{\\text{d}[C]}{\\text{d}t} = \\frac{\\text{d}[D]}{\\text{d}t} = k[\\text{A}][\\text{B}] $$\n", "\n", "If the reaction has the form 2A --> C + D, this can also be written as A + A --> C + D, and the only difference is that the rate of change for \\[A\\] is twice as fast as the change for each product:\n", "\n", "$$ -\\frac{1}{2}\\frac{\\text{d}[A]}{\\text{d}t} = \\frac{\\text{d}[C]}{\\text{d}t} = \\frac{\\text{d}[D]}{\\text{d}t} = k[\\text{A}]^2 = k[\\text{A}][\\text{A}] $$\n", "\n", "What this means is that for each molecule, we just need to loop over the reactions, and each time the molecule appears as a reactant, we subtract the rate coefficient times the product of the reactant concentrations, and each time it appears as a product, we add k times the product of the reactant concentrations. This will work even if the molecule appears twice in the same reaction (either as a reactant or a product, or even both!), because we'll add the rate once for each time the molecule appears in the reaction.\n", "\n", "The code below is a new implementation of the derivative calculation that does this. It loops over the reactions, and for each reactant it subtracts the rate, and for each product it adds the rate." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def calc_derivative_2(t,y,rxns):\n", " out = np.zeros_like(y)\n", " for r in rxns['reaction']: \n", " out[r.reactants] -= r.k*np.prod(np.power(y[r.reactants],r.rcounts))*r.rcounts\n", " out[r.products] += r.k*np.prod(np.power(y[r.reactants],r.rcounts))*r.pcounts\n", " \n", " return out\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This code takes advantage of numpy's advanced indexing capabilities. The lists of unique reactant and product IDs are used as indices to choose which concentrations to include in the rate calculations as well as which concentration derivatives to change. Note that the rates depend on the concentrations of the reactants, not the concentrations of the products. Below is some sample code showing how this works. The reaction is 3A + B --> 2C + D. Rate = k\\[A\\]^3\\[B\\] = (1e-4)(10)^3(3) = 0.3. So the concentration of A should change by -(3)(0.3) = -0.9, B should change by -0.3, C should change by +(2)(0.3) = 0.6, and D by +0.3." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y = np.asarray([10.,3.,7.,2.])\n", "out = np.zeros_like(y)\n", "reactants = np.asarray([0,1,0,0])\n", "products = np.asarray([2,2,3])\n", "reactants, rcounts = np.unique(reactants,return_counts=True)\n", "products, pcounts = np.unique(products,return_counts=True)\n", "out[reactants] += -1e-4*np.prod(np.power(y[reactants],rcounts))*rcounts\n", "out[products] += 1e-4*np.prod(np.power(y[reactants],rcounts))*pcounts\n", "out" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And now, as a last sanity check, we should be able to plug in our reactions and initial conditions into the solver and get the same results."
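] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before handing the new function to the solver, here is an optional consistency check (a small sketch that assumes `r1`, `r2`, `r3`, `hd`, and `calc_derivative` from the cells above are still defined): for the pseudo-first-order system, `calc_derivative_2` should return the same derivatives as the hard-coded `calc_derivative` for any state vector." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#compare the hard-coded derivative function with the reaction-based one at an arbitrary state\n", "ytest = np.array([932.,10.,5.,1.])\n", "rxns_test = pd.DataFrame([r1,r2,r3],columns=['reaction'])\n", "print(calc_derivative(0,ytest,1.43e-9*hd,1.33e-9*hd,1.05e-9*hd))\n", "print(calc_derivative_2(0,ytest,rxns_test))"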
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "r1 = Reaction(species,[0],[1],1.43e-9*hd)\n", "r2 = Reaction(species,[1],[2],1.33e-9*hd)\n", "r3 = Reaction(species,[2],[3],1.05e-9*hd)\n", "reactions = pd.DataFrame([r1,r2,r3],columns=['reaction'])\n", "result = spi.solve_ivp(calc_derivative_2,(0,.16),y0=[932,0,0,0],t_eval=df['time'],method='RK45',dense_output=True,args=[reactions])\n", "fig,ax = plt.subplots()\n", "ax.set_prop_cycle(ucd_cycler)\n", "t = np.linspace(0,160e-3,1000)\n", "ax.plot(t,result.sol(t).T)\n", "ax.plot(result.t,result.y.T,'o')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Second Order Kinetics and Reverse Reactions\n", "\n", "With our `Reaction` class and `calc_derivative_2` functions, it is now easy to include H2 and HD in the model, and do the second-order chemistry. The only addition is that we need to be careful about units. In the rate equations, the concentrations are given in molecules per cubic centimeter, so we need to divide the ion counts by the trap volume, which we do not exactly know. It may be listed in one of the many papers the group has published. However, the volume is likely on the order of 1 cubic centimeter. We can use that for now, and show that the final results in the end are not very sensitive to this number unless it's smaller than ~10-5, which seems physically impossible." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "species = pd.DataFrame(['H3+','H2D+',\"D2H+\",\"D3+\",'HD','H2'],columns=['name'])\n", "\n", "reactions = [Reaction(species,[0,4],[1,5],k=1.43e-9),\n", " Reaction(species,[1,4],[2,5],k=1.33e-9),\n", " Reaction(species,[2,4],[3,5],k=1.05e-9)]\n", "\n", "reactions = pd.DataFrame(reactions,columns=['reaction'])\n", "print(reactions)\n", "\n", "volume = 1\n", "\n", "result = spi.solve_ivp(calc_derivative_2,(0,.15),y0=[932/volume,0,0,0,6.3e10,0],t_eval=df['time'],method='RK45',dense_output=True,args=[reactions])\n", "fig,ax = plt.subplots(figsize=(10,8))\n", "ax.set_prop_cycle(ucd_cycler)\n", "t = np.linspace(0,150e-3,1000)\n", "lines = ax.plot(t,(result.sol(t)).T)\n", "for l,n in zip(lines,species['name']):\n", " l.set_label(n)\n", "ax.scatter(df['time'],df['H3+']/volume,color='#000000')\n", "ax.scatter(df['time'],df['H2D+']/volume,color='#ffbf00')\n", "ax.scatter(df['time'],df['D2H+']/volume,color='#022851')\n", "ax.scatter(df['time'],df['D3+']/volume,color='#c10230')\n", "ax.set_xlim(0,150e-3)\n", "ax.set_ylabel('Abundance (cm$^{-3}$)')\n", "ax.set_yscale('log')\n", "ax.legend()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see from this graph why the pseudo-first-order approximation is so good: if there are only ~1000 ions in a cubic centimeter, there are over 6e10 HD molecules. Even after all 1000 H$_3^+$ ions are converted to D$_3^+$, only 3000 of the HD molecules disappeared, which is negligible. However, eventually if we make the trap volume small enough, we can start to see an effect on the model. For instance, here we make the trap volume 1.5e-3, which means there are roughly as many H$_3^+$ ions as HD molecules. The chemistry is qualitatively different, yet we did not have to rederive any rate equations. Numerical integration is versatile." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "species = pd.DataFrame(['H3+','H2D+',\"D2H+\",\"D3+\",'HD','H2'],columns=['name'])\n", "\n", "reactions = [Reaction(species,[0,4],[1,5],k=1.43e-9),\n", " Reaction(species,[1,4],[2,5],k=1.33e-9),\n", " Reaction(species,[2,4],[3,5],k=1.05e-9)]\n", "\n", "reactions = pd.DataFrame(reactions,columns=['reaction'])\n", "print(reactions)\n", "\n", "volume = 1.5e-8\n", "\n", "result = spi.solve_ivp(calc_derivative_2,(0,.15),y0=[932/volume,0,0,0,6.3e10,0],t_eval=df['time'],method='RK45',dense_output=True,args=[reactions])\n", "fig,ax = plt.subplots(figsize=(10,8))\n", "ax.set_prop_cycle(ucd_cycler)\n", "t = np.linspace(0,150e-3,1000)\n", "lines = ax.plot(t,(result.sol(t)).T)\n", "for l,n in zip(lines,species['name']):\n", " l.set_label(n)\n", "ax.scatter(df['time'],df['H3+']/volume,color='#000000')\n", "ax.scatter(df['time'],df['H2D+']/volume,color='#ffbf00')\n", "ax.scatter(df['time'],df['D2H+']/volume,color='#022851')\n", "ax.scatter(df['time'],df['D3+']/volume,color='#c10230')\n", "ax.set_xlim(0,150e-3)\n", "ax.set_ylabel('Abundance (cm$^{-3}$)')\n", "ax.set_yscale('log')\n", "ax.legend()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Returning to more reasonable volumes, we can turn on the reverse reactions and see what happens. The paper says that the reverse reactions occur with rate coefficients that are of order 2e-10" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "species = pd.DataFrame(['H3+','H2D+',\"D2H+\",\"D3+\",'HD','H2'],columns=['name'])\n", "\n", "reactions = [Reaction(species,[0,4],[1,5],k=1.43e-9),\n", " Reaction(species,[1,4],[2,5],k=1.33e-9),\n", " Reaction(species,[2,4],[3,5],k=1.05e-9),\n", " Reaction(species,[1,5],[0,4],k=2e-10),\n", " Reaction(species,[2,5],[1,4],k=2e-10),\n", " Reaction(species,[3,5],[2,4],k=2e-10)]\n", "\n", "reactions = pd.DataFrame(reactions,columns=['reaction'])\n", "print(reactions)\n", "\n", "volume = 1\n", "\n", "result = spi.solve_ivp(calc_derivative_2,(0,.15),y0=[932/volume,0,0,0,6.3e10,0],t_eval=df['time'],method='RK45',dense_output=True,args=[reactions])\n", "fig,ax = plt.subplots(figsize=(10,8))\n", "ax.set_prop_cycle(ucd_cycler)\n", "t = np.linspace(0,150e-3,1000)\n", "lines = ax.plot(t,(result.sol(t)[0:4]).T)\n", "for l,n in zip(lines,species['name'][0:4]):\n", " l.set_label(n)\n", "ax.scatter(df['time'],df['H3+']/volume,color='#000000')\n", "ax.scatter(df['time'],df['H2D+']/volume,color='#ffbf00')\n", "ax.scatter(df['time'],df['D2H+']/volume,color='#022851')\n", "ax.scatter(df['time'],df['D3+']/volume,color='#c10230')\n", "ax.set_xlim(0,150e-3)\n", "ax.set_ylabel('Abundance (cm$^{-3}$)')\n", "ax.set_yscale('log')\n", "ax.legend()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It appears to make no difference! This is because in our model, the abundance of H$_2$ remains tiny. However, experimentally, the HD gas has a purity of only 97%. 
If we plug that in for the initial abundances, we can start to see something:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "species = pd.DataFrame(['H3+','H2D+',\"D2H+\",\"D3+\",'HD','H2'],columns=['name'])\n", "\n", "reactions = [Reaction(species,[0,4],[1,5],k=1.43e-9),\n", " Reaction(species,[1,4],[2,5],k=1.33e-9),\n", " Reaction(species,[2,4],[3,5],k=1.05e-9),\n", " Reaction(species,[1,5],[0,4],k=2e-10),\n", " Reaction(species,[2,5],[1,4],k=2e-10),\n", " Reaction(species,[3,5],[2,4],k=2e-10)]\n", "\n", "reactions = pd.DataFrame(reactions,columns=['reaction'])\n", "print(reactions)\n", "\n", "volume = 1\n", "\n", "result = spi.solve_ivp(calc_derivative_2,(0,.15),y0=[930/volume,0,0,0,0.97*6.3e10,0.03*6.3e10],t_eval=df['time'],method='RK45',dense_output=True,args=[reactions])\n", "\n", "fig,ax = plt.subplots(figsize=(10,8))\n", "ax.set_prop_cycle(ucd_cycler)\n", "t = np.linspace(0,150e-3,1000)\n", "lines = ax.plot(t,(result.sol(t)[0:4]*volume).T)\n", "ax.scatter(df['time'],df['H3+'],color='#000000',label=r'H$_3^+$')\n", "ax.scatter(df['time'],df['H2D+'],color='#ffbf00',label=r'H$_2$D$^+$')\n", "ax.scatter(df['time'],df['D2H+'],color='#022851',label=r'D$_2$H$^+$')\n", "ax.scatter(df['time'],df['D3+'],color='#c10230',label=r'D$_3^+$')\n", "ax.set_ylim(0.1,2000)\n", "ax.set_xlim(0,150e-3)\n", "ax.legend(loc='lower left',bbox_to_anchor=(0.1,0.3))\n", "ax.set_xlabel(\"Time (s)\")\n", "ax.set_ylabel('Ion count')\n", "ax.set_yscale('log')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After some manual adjustment of the rate coefficients, we can obtain good agreement with the experimental data. It is theoretically possible to improve this with `least_squares`, but there are now 6 rate coefficients and an extra parameter for the percentage of H$_2$ that would need to be optimized as well, which makes the process slow. Also, some parameters have only a tiny effect on the data, so a lot of care has to be taken to ensure the optimization works well." 
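] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For reference, such a fit could be set up by wrapping `solve_ivp` in a residual function, much like `total_fit` wrapped the Forward Euler model earlier. The sketch below is not executed here, and the parameter ordering (initial H$_3^+$ count, three forward and three reverse rate coefficients, and the H$_2$ fraction) is just one possible choice:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#sketch only: residual function for fitting the full reaction network with least_squares\n", "def total_fit2(x,species,df):\n", "    #x = [H3+ at t=0, k1f, k2f, k3f, k1r, k2r, k3r, H2 fraction]\n", "    rxns = pd.DataFrame([Reaction(species,[0,4],[1,5],k=x[1]),\n", "                         Reaction(species,[1,4],[2,5],k=x[2]),\n", "                         Reaction(species,[2,4],[3,5],k=x[3]),\n", "                         Reaction(species,[1,5],[0,4],k=x[4]),\n", "                         Reaction(species,[2,5],[1,4],k=x[5]),\n", "                         Reaction(species,[3,5],[2,4],k=x[6])],columns=['reaction'])\n", "    y0 = [x[0],0,0,0,(1-x[7])*6.3e10,x[7]*6.3e10]\n", "    res = spi.solve_ivp(calc_derivative_2,(0,.15),y0=y0,t_eval=df['time'],args=[rxns])\n", "    nm = res.y.T   #rows are time points, columns are species\n", "    out = []\n", "    for j,mol in enumerate(['H3+','H2D+','D2H+','D3+']):\n", "        r = df[mol] - nm[:,j]\n", "        out.append(r[~np.isnan(r)])   #drop missing experimental points, as in total_fit\n", "    return np.concatenate(out)"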
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "species = pd.DataFrame(['H3+','H2D+',\"D2H+\",\"D3+\",'HD','H2'],columns=['name'])\n", "\n", "reactions = [Reaction(species,[0,4],[1,5],k=1.4e-9),\n", " Reaction(species,[1,4],[2,5],k=1.4e-9),\n", " Reaction(species,[2,4],[3,5],k=1.1e-9),\n", " Reaction(species,[1,5],[0,4],k=1e-10),\n", " Reaction(species,[2,5],[1,4],k=2e-10),\n", " Reaction(species,[3,5],[2,4],k=4e-10)]\n", "\n", "reactions = pd.DataFrame(reactions,columns=['reaction'])\n", "print(reactions)\n", "\n", "volume = 1\n", "\n", "result = spi.solve_ivp(calc_derivative_2,(0,.15),y0=[930/volume,0,0,0,0.97*6.3e10,0.03*6.3e10],t_eval=df['time'],method='RK45',dense_output=True,args=[reactions])\n", "\n", "fig,ax = plt.subplots(figsize=(10,8))\n", "ax.set_prop_cycle(ucd_cycler)\n", "t = np.linspace(0,150e-3,1000)\n", "lines = ax.plot(t,(result.sol(t)[0:4]*volume).T)\n", "ax.scatter(df['time'],df['H3+'],color='#000000',label=r'H$_3^+$')\n", "ax.scatter(df['time'],df['H2D+'],color='#ffbf00',label=r'H$_2$D$^+$')\n", "ax.scatter(df['time'],df['D2H+'],color='#022851',label=r'D$_2$H$^+$')\n", "ax.scatter(df['time'],df['D3+'],color='#c10230',label=r'D$_3^+$')\n", "ax.set_ylim(0.1,2000)\n", "ax.set_xlim(0,150e-3)\n", "ax.legend(loc='lower left',bbox_to_anchor=(0.1,0.3))\n", "ax.set_xlabel(\"Time (s)\")\n", "ax.set_ylabel('Ion count')\n", "ax.set_yscale('log')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using Implicit Solvers\n", "\n", "The implicit solvers that are good for stiff problems carry one additional complication: they require the Jacobian matrix in order to run efficiently. For a kinetics system with N molecules, the Jacobian matrix contains derivatives of the rates for each molecule with respect to every molecule:\n", "\n", "$$ J_{ij} = \\frac{\\partial}{\\partial [\\text{X}]_j} \\text{Rate}_i $$\n", "\n", "For a reaction aA + bB --> cC + dD, we know the rates are:\n", "\n", "$$ \\frac{\\text{d}[\\text{A}]}{\\text{d}t} = -ak[\\text{A}]^a[\\text{B}]^b, \\quad \\frac{\\text{d}[\\text{B}]}{\\text{d}t} = -bk[\\text{A}]^a[\\text{B}]^b, \\quad \\frac{\\text{d}[\\text{C}]}{\\text{d}t} = ck[\\text{A}]^a[\\text{B}]^b, \\quad \\frac{\\text{d}[\\text{D}]}{\\text{d}t} = -dk[\\text{A}]^a[\\text{B}]^b $$\n", "\n", "Taking the rate for A as an example, the derivatives with respect to each molecule are:\n", "\n", "$$ \\frac{\\partial}{\\partial [\\text{A}]} \\text{Rate}_\\text{A} = -aka[\\text{A}]^{a-1}[\\text{B}]^b, \\quad \\frac{\\partial}{\\partial [\\text{B}]} \\text{Rate}_\\text{A} = -akb[\\text{a}]^a[\\text{B}]^{b-1}, \\quad \\frac{\\partial}{\\partial [\\text{C}]} \\text{Rate}_\\text{A} = 0, \\quad \\frac{\\partial}{\\partial [\\text{D}]} \\text{Rate}_\\text{A} = 0 $$\n", "\n", "If we apply this to each rate, the Jacobian matrix for this reaction is:\n", "\n", "$$ J = \\begin{bmatrix} -aka[\\text{A}]^{a-1}[\\text{B}]^b & -akb[\\text{A}]^a[\\text{B}]^{b-1} & 0 & 0 \\\\ -bka[\\text{A}]^{a-1}[\\text{B}]^b & -bkb[\\text{A}]^a[\\text{B}]^{b-1} & 0 & 0 \\\\ cka[\\text{A}]^{a-1}[\\text{B}]^b & ckb[\\text{A}]^a[\\text{B}]^{b-1} & 0 & 0 \\\\ dka[\\text{A}]^{a-1}[\\text{B}]^b & dkb[\\text{A}]^a[\\text{B}]^{b-1} & 0 & 0\\end{bmatrix} $$\n", "\n", "Assuming our system contains two other molecules E and F, the total contribution to the Jacobian matrix for this one reaction would have 0s in all of the extra rows and columns because the rate of this reaction does not depend on the concentrations of E or F:\n", "\n", "$$ J = 
\\begin{bmatrix} -aka[\\text{A}]^{a-1}[\\text{B}]^b & -akb[\\text{A}]^a[\\text{B}]^{b-1} & 0 & 0 & 0 & 0 \\\\ -bka[\\text{A}]^{a-1}[\\text{B}]^b & -bkb[\\text{A}]^a[\\text{B}]^{b-1} & 0 & 0 & 0 & 0 \\\\ cka[\\text{A}]^{a-1}[\\text{B}]^b & ckb[\\text{A}]^a[\\text{B}]^{b-1} & 0 & 0 & 0 & 0 \\\\ dka[\\text{A}]^{a-1}[\\text{B}]^b & dkb[\\text{A}]^a[\\text{B}]^{b-1} & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 0 \\end{bmatrix} $$\n", "\n", "Then we can repeat the process for each reaction in the system, just adding to the appropriate elements of the Jacobian matrix. We can provide a function to calculate the Jacobian whose signature is `f(t,y,*args)` just like for `calc_derivative_2`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def calc_jacobian(t,y,rxns):\n", " J = np.zeros((y.size,y.size)) #create an empty NxN matrix, where N = number of molecules in the system\n", " for r in rxns['reaction']:\n", " #loop over reactants; each loop computes one column of the Jacobian matrix\n", " for i,(rc,ex) in enumerate(zip(r.reactants,r.rcounts)):\n", " out = np.zeros(y.size)\n", " \n", " #when we compute df/di, the power of reactant i is reduced by 1. So subtract 1 from the reactant counts at the ith position\n", " #However, we don't want to modify the reaction itself, so make a copy of rcounts\n", " ords = np.copy(r.rcounts)\n", " ords[i] -= 1\n", " \n", " #calculate the base rate = k * count * product (concentrations raised to correct powers)\n", " rate = r.k*ex*np.prod(np.power(y[r.reactants],ords))\n", " \n", " #rectants decrease by reactant count * base rate; products increase by product count * base rate\n", " out[r.reactants] -= r.rcounts*rate\n", " out[r.products] += r.pcounts*rate\n", " \n", " #add to the correct column of the Jacobian matrix for this reactant\n", " J[:,rc] += out\n", " \n", " return J\n", "\n", "#play around with the reaction definition to ensure jacobian is calculated correctly, using formulas above\n", "r = Reaction(species,[0,1,1],[2,2,3],2.)\n", "y = np.asarray([10.,20.,30.,40.])\n", "calc_jacobian(0,y,pd.DataFrame([r],columns=['reaction']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For large systems of reactions, we can also define the jacobian's sparsity structure. This is a matrix that has 1s in the positions where the Jacobian may be nonzero, and 0s where the Jacobian is always 0. The algorithms in the `solve_ivp` function can use that information to speed up the calculations because it can reduce the number of calculations it needs to perform. When the reaction network is large, there may be only a few reactions linked to some molecules, and the rows/columns corresponding to that element may contain many 0s. The sparsity structure depends only on the reaction network, not the state of the system, so we can precalculate it before running `solve_ivp`. For completeness, we'll do it here." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def compute_sparsity(species,rxns):\n", " out = np.zeros((species.size,species.size))\n", " for rxn in rxns['reaction']:\n", " for r in rxn.reactants:\n", " out[r,rxn.reactants] = 1\n", " for p in rxn.products:\n", " out[p,rxn.products] = 1\n", " out[r,p] = 1\n", " \n", " return out\n", "\n", "species = pd.DataFrame(['H3+','H2D+',\"D2H+\",\"D3+\",'HD','H2'],columns=['name'])\n", "\n", "reactions = [Reaction(species,[0,4],[1,5],k=1.4e-9),\n", " Reaction(species,[1,4],[2,5],k=1.4e-9),\n", " Reaction(species,[2,4],[3,5],k=1.1e-9),\n", " Reaction(species,[1,5],[0,4],k=1e-10),\n", " Reaction(species,[2,5],[1,4],k=2e-10),\n", " Reaction(species,[3,5],[2,4],k=4e-10)]\n", "\n", "reactions = pd.DataFrame(reactions,columns=['reaction'])\n", "print(reactions)\n", "compute_sparsity(species,reactions)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Of course, there are very few 0s in this matrix. H$_3^+$ is not directly linked to D$_2$H$^+$ or D$_3^+$, and H$_2$D$^+$ is not linked to D$_3^+$, but otherwise each molecule is connected by at least 1 reaction. Now that we have the Jacobian and the sparsity structure, we can use one of the implicit solvers. (Strictly speaking, it is possible to use an implicit solver without the Jacobian matrix, in which case the Jacobian can be estimated by finite differences. However, doing so is extremely slow and introduces additional error, so it should be avoided)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "species = pd.DataFrame(['H3+','H2D+',\"D2H+\",\"D3+\",'HD','H2'],columns=['name'])\n", "\n", "reactions = [Reaction(species,[0,4],[1,5],k=1.4e-9),\n", " Reaction(species,[1,4],[2,5],k=1.4e-9),\n", " Reaction(species,[2,4],[3,5],k=1.1e-9),\n", " Reaction(species,[1,5],[0,4],k=1e-10),\n", " Reaction(species,[2,5],[1,4],k=2e-10),\n", " Reaction(species,[3,5],[2,4],k=4e-10)]\n", "\n", "reactions = pd.DataFrame(reactions,columns=['reaction'])\n", "volume = 1.\n", "print(reactions)\n", "sparse = compute_sparsity(species,reactions)\n", "\n", "result = spi.solve_ivp(calc_derivative_2,(0,.15),y0=[930/volume,0,0,0,0.97*6.3e10,0.03*6.3e10],t_eval=df['time'],\n", " method='Radau',dense_output=True,args=[reactions],jac=calc_jacobian,jac_sparsity=sparse)\n", "\n", "fig,ax = plt.subplots(figsize=(10,8))\n", "ax.set_prop_cycle(ucd_cycler)\n", "t = np.linspace(0,150e-3,1000)\n", "lines = ax.plot(t,(result.sol(t)[0:4]*volume).T)\n", "ax.scatter(df['time'],df['H3+'],color='#000000',label=r'H$_3^+$')\n", "ax.scatter(df['time'],df['H2D+'],color='#ffbf00',label=r'H$_2$D$^+$')\n", "ax.scatter(df['time'],df['D2H+'],color='#022851',label=r'D$_2$H$^+$')\n", "ax.scatter(df['time'],df['D3+'],color='#c10230',label=r'D$_3^+$')\n", "ax.set_ylim(0.1,2000)\n", "ax.set_xlim(0,150e-3)\n", "ax.legend(loc='lower left',bbox_to_anchor=(0.1,0.3),framealpha=0.)\n", "ax.set_xlabel(\"Time (s)\")\n", "ax.set_ylabel('Ion count')\n", "ax.set_yscale('log')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.18" } }, "nbformat": 4, "nbformat_minor": 4 }