{ "cells": [ { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "# The Non-Random Two-Liquid (NRTL) model for *excess Gibbs energy* ($g^E$) and a case study of the liquid-liquid equilibria of water + ethanol + ethyl acetate." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "# Applying linear algebra matrix notation\n", "\n", "This implementation can be rewritten using linear algebra matrix notation. This can simplify analytical derivations, improve readability (at least for the trained eye) and maintainability, reduce human error in the coding (less typing and therefore fewer typos), and improve execution speed (if we make use of libraries, such as NumPy, that automatically convert matrix notation into efficient implementations of the underlying loops).\n", "For details on the derivation of the matrix notation for the NRTL model, as well as for UNIQUAC, UNIFAC and COSMO, see the lecture notes of Abreu, C. R. A. in our library [here][Abreu, yyyy, LN, Matrix algebra...].\n", "\n", "[Abreu, yyyy, LN, Matrix algebra...]: https://github.com/iurisegtovich/PyTherm-applied-thermodynamics/blob/master/Get_involved/4_Texts_Library/AbreuC.R.A.%2C%20Matrix%20Algebra%20and%20Matrix%20Differentiation%20Rules%20Applied%20to%20Excess%20Gibbs%20Energy%20Models.pdf\n", "\n", "| - - - - - - - - - - - - - - Renon & Prausnitz - - - - - - - - - - - - - - | - - - - - - - - - - - - - - Abreu - - - - - - - - - - - - - - |\n", "|:-:|:-:|\n", "|$ \\frac{g^E}{RT}=\\sum_{i=1}^n \\left[ x_i\\frac{\\sum_{j=1}^n \\tau_{j,i} G_{j,i} x_{j}}{\\sum_{k=1}^n G_{k,i} x_k} \\right] $|$ \\frac{g^E}{RT}=\\underline{x}^T\\underline{\\underline{E}} \\underline{x}$ |\n", "\n", "Where\n", "\n", "| - - - - - - - - - - - - - - Renon & Prausnitz - - - - - - - - - - - - - - | - - - - - - - - - - - - - - Abreu - - - - - - - - - - - - - - |\n", "|:-:|:-:|\n", "|$\\tau_{i,j}= \\frac{g_{i,j}-g_{j,j}}{RT}=\\frac{A_{i,j}}{T}$ | $\\underline{\\underline{\\tau}}=T^{-1}\\underline{\\underline{A}}$|\n", "|$G_{i,j}=\\mathrm{exp}(-\\alpha_{i,j} \\tau_{i,j})$|$\\underline{\\underline{G}}= \\mathrm{exp}(-\\underline{\\underline{\\alpha}} \\circ \\underline{\\underline{\\tau}})$|\n", "|---|$\\underline{\\underline{\\Lambda}}=(\\underline{\\underline{\\tau}} \\circ \\underline{\\underline{G}})$|\n", "|---|$\\underline{\\underline{E}}=\\underline{\\underline{\\Lambda}}\\mathscr{D}^{-1}(\\underline{\\underline{G}}^T\\underline{x})$|\n", "|---|$\\underline{\\underline{L}}=\\underline{\\underline{G}}\\mathscr{D}^{-1}(\\underline{\\underline{G}}^T\\underline{x})$|\n", "\n", "Therefore\n", "\n", "\n", "| - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Renon & Prausnitz - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - | - - - - - - - - - - - Abreu - - - - - - - - - - - |\n", "|:-:|:-:|\n", "|$\\ln(\\gamma_i)= \\frac{\\sum_{j=1}^n\\left[\\tau_{j,i} G_{j,i} x_{j}\\right]}{\\sum_{k=1}^n\\left[G_{k,i}x_{k}\\right]} + \\sum_{j=1}^n\\left[ \\left(\\frac{G_{i,j} x_{j}}{\\sum_{k=1}^n\\left[G_{k,j}x_{k}\\right]}\\right) \\left(\\tau_{i,j}-\\frac{\\sum_{m=1}^n\\left[x_{m} \\tau_{m,j} G_{m,j}\\right]}{\\sum_{k=1}^n\\left[G_{k,j}x_{k}\\right]} \\right) \\right] $|$ \\ln(\\underline{\\gamma})=\\left(\\underline{\\underline{E}}^S-\\underline{\\underline{L}}\\mathscr{D}\\underline{x}\\underline{\\underline{E}}^T \\right)\\underline{x}$|\n", "\n", "Where\n", "\n", "* $\\underline{\\underline{M}} \\circ \\underline{\\underline{N}}$ is the Hadamard product, the element-wise multiplication between matrices $\\underline{\\underline{M}}$ and $\\underline{\\underline{N}}$, i.e.,\n", "$$\\underline{\\underline{M}} \\circ \\underline{\\underline{N}} = \\left\\{ \\underline{\\underline{R}} \\mid R_{i,j}=M_{i,j} \\times N_{i,j} \\right\\}$$\n", "\n", "* $\\mathscr{D}\\underline{v}$ means the diagonalization of a column matrix $\\underline{v}$, i.e.,\n", "$$\\mathscr{D}\\underline{v}= \\left\\{ \\underline{\\underline{M}} \\mid M_{i,j}=v_{i} \\text { if } {j = i} \\text{, and } M_{i,j} = 0 \\text { if } {j \\neq i} \\right\\}$$\n", "\n", "* $\\underline{\\underline{M}}^S$ means the symmetrization of a matrix $\\underline{\\underline{M}}$, i.e.,\n", "$$\\underline{\\underline{M}}^S= \\left\\{ \\underline{\\underline{N}} \\mid N_{i,j}=M_{i,j}+M_{j,i}\\right\\}$$\n", "\n", "* $\\underline{\\underline{M}}^T$ means the transposition of a matrix $\\underline{\\underline{M}}$, i.e.,\n", "$$\\underline{\\underline{M}}^T= \\left\\{ \\underline{\\underline{N}} \\mid N_{i,j}=M_{j,i} \\right\\}$$" ] },
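{ "cell_type": "markdown", "metadata": {}, "source": [ "To make the matrix expression $\\frac{g^E}{RT}=\\underline{x}^T\\underline{\\underline{E}}\\underline{x}$ concrete, here is a minimal NumPy sketch (the parameter values below are purely illustrative, not the fitted parameters used later in this notebook; `@` is matrix multiplication and `*` is element-wise multiplication, as detailed in the next cell):\n", "\n", "```python\n", "import numpy as np\n", "\n", "T = 300.0                                    # K (illustrative)\n", "x = np.array([[0.2], [0.5], [0.3]])          # (n,1) column of mole fractions\n", "A = np.array([[  0., 100., 200.],\n", "              [150.,   0.,  50.],\n", "              [250.,  80.,   0.]])           # K, non-symmetric, zero diagonal\n", "alpha = 0.3 * (np.ones((3, 3)) - np.eye(3))  # symmetric, zero diagonal\n", "\n", "tau = A / T                                  # tau_ij = A_ij / T\n", "G = np.exp(-alpha * tau)                     # G_ij = exp(-alpha_ij * tau_ij)\n", "E = (tau * G) * (1.0 / (G.T @ x)).T          # E = Lambda D^-1(G^T x)\n", "gE_RT = (x.T @ E @ x).item()                 # gE/RT = x^T E x\n", "print(gE_RT)\n", "```" ] },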
{ "cell_type": "markdown", "metadata": {}, "source": [ "* **matrix multiplication**\n", "\n", "> Replaces summations over one of the matrix dimensions\n", "\n", " * elemental definition:\n", "\n", "$C_{i,j} = \\sum_k{\\left[A_{i,k} \\times B_{k,j}\\right]}$\n", "\n", "$d_{i} = \\sum_k{\\left[A_{i,k} \\times b_{k}\\right]}$\n", "\n", "$e_{j} = \\sum_k{\\left[a_{k} \\times B_{k,j}\\right]}$\n", "\n", " * matrix notation\n", "\n", "$\\underline{\\underline{C}}=\\underline{\\underline{A}}\\underline{\\underline{B}}$\n", "\n", "$\\underline{d}=\\underline{\\underline{A}}\\underline{b}$\n", "\n", "$\\underline{e}=(\\underline{a}^T\\underline{\\underline{B}})^T$\n", "\n", "\n", " * python implementation\n", "\n", "`C = A @ B`\n", "\n", "`d = A @ b`\n", "\n", "`e = (a.T @ B).T`\n", "\n", "* **element-wise multiplication**\n", "\n", "> Condenses operations that apply analogously to all elements of one or more matrices\n", "\n", " * elemental definition:\n", "\n", "$C_{i,j} = A_{i,j} \\times B_{i,j}$\n", "\n", " * matrix notation\n", "\n", "$\\underline{\\underline{C}}=\\underline{\\underline{A}} \\circ \\underline{\\underline{B}}$\n", "\n", " * python implementation\n", "\n", "`C = A * B`\n", "\n", "* **element-wise multiplication with broadcasting**\n", "\n", "> Diagonalization is used to represent element-wise multiplication between: two single line matrices, two single column matrices, one single line matrix and each line of a (nl,nc) matrix, or one single column matrix and each column of a (nl,nc) matrix. This is useful in the analytical differentiation presented in the lecture notes of [Abreu C. R.
A.][Abreu, yyyy, LN, Matrix algebra...].\n", "On the other hand, this notation may be dropped later, at implementation time, depending on the programming environment. See below the usage and the correspondence between notations:\n", "\n", "[Abreu, yyyy, LN, Matrix algebra...]: https://github.com/iurisegtovich/PyTherm-applied-thermodynamics/blob/master/Get_involved/4_Texts_Library/AbreuC.R.A.%2C%20Matrix%20Algebra%20and%20Matrix%20Differentiation%20Rules%20Applied%20to%20Excess%20Gibbs%20Energy%20Models.pdf\n", "\n", " * elemental definition:\n", "\n", "$C_{i,j} = A_{i,j} \\times b_{j}$\n", "\n", "$D_{i,j} = b_{i} \\times A_{i,j}$\n", "\n", "$e_{i} = a_{i} \\times b_{i}$\n", "\n", "$f_{j} = b_{j} \\times a_{j} $\n", "\n", " * matrix notation\n", "\n", "$\\underline{\\underline{C}}=\\underline{\\underline{A}} \\left( \\mathscr D \\underline{b} \\right )$\n", "\n", "$\\underline{\\underline{D}}= \\left( \\mathscr D \\underline{b} \\right ) \\underline{\\underline{A}}$\n", "\n", "$\\underline{e}= \\left( \\mathscr D \\underline{a} \\right ) \\underline{b}$\n", "\n", "$\\underline{f}=\\left( \\underline{b}^{T} \\left( \\mathscr D \\underline{a} \\right ) \\right)^T$\n", "\n", " * python implementation\n", "\n", "`C = A * b.T`\n", "\n", "`D = b * A`\n", "\n", "`e = a * b`\n", "\n", "`f = (b.T * a.T).T`\n", "\n", "> where we chose to use:\n", ">- upper case letters to represent full (nl,nc) matrices\n", ">- lower case letters to represent single column matrices\n", ">\n", "> while from Python/NumPy syntax:\n", ">- `M.T` is the NumPy native syntax for the transposition of a matrix `M`\n", ">- `A * B` is the NumPy native syntax for the element-wise multiplication between matrices `A` and `B`\n", "\n", "Note that, for matrix algebra in Python, single column and single line matrices should be represented by 2d arrays of shape (nl,1) and (1,nc), respectively. This is not quite the same thing as a 1d array. You can study the differences through the usage examples [here][Lists and array operations] and in the short sketch right after this cell.\n", "\n", "> where:\n", ">- nl stands for the number of lines in a single column (nl,1) matrix or in a full (nl,nc) matrix\n", ">- nc stands for the number of columns in a single line (1,nc) matrix or in a full (nl,nc) matrix\n", "\n", "[Lists and array operations]: http://localhost:8888/notebooks/LocalWorkspaces/Pytherm/GITSYNC/Get_involved/5_Syntax_cheat_sheets%2C_examples%2C_samples%2C_tests/List%20and%20Array%20operations.ipynb" ] },
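{ "cell_type": "markdown", "metadata": {}, "source": [ "Since this notation correspondence depends on array shapes, here is a minimal, self-contained NumPy sketch of the shape and broadcasting conventions (the variable names are illustrative only):\n", "\n", "```python\n", "import numpy as np\n", "\n", "a1d = np.array([1., 2., 3.])      # 1d array, shape (3,)\n", "c = np.array([[1.], [2.], [3.]])  # single column matrix, shape (3,1)\n", "l = c.T                           # single line matrix, shape (1,3)\n", "A = np.arange(9.).reshape(3, 3)   # full matrix, shape (3,3)\n", "\n", "print(a1d.T.shape)    # (3,)  - transposing a 1d array changes nothing\n", "print((A @ c).shape)  # (3,1) - matrix multiplication\n", "print((A * c).shape)  # (3,3) - c broadcast over each column: A[i,j]*c[i]\n", "print((A * l).shape)  # (3,3) - l broadcast over each line:   A[i,j]*l[j]\n", "print((c * c).shape)  # (3,1) - element-wise multiplication\n", "```" ] },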
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Fitted parameters\n", "Renon et al. (1969) fitted one $\\alpha_{i,j}$ parameter for each of the 3 binary interactions $\\{(1,2),(1,3),(2,3)\\}$ (with $\\alpha_{i,j}=\\alpha_{j,i}$) and 6 $A_{i,j}$ parameters, two for each binary interaction, filling a non-symmetric $A$ matrix." ] },
{ "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Ethyl acetate (1) + water (2) + ethanol (3)\n", "\n", "# 3 binary alphaij parameters (alphaij = alphaji)\n", "alpha12 = 0.4\n", "\n", "alpha23 = 0.3\n", "\n", "alpha13 = 0.3\n", "\n", "# 6 binary Aij parameters\n", "Dg12 = 1335 * 4.184 #J/mol\n", "Dg21 = 2510 * 4.184 #J/mol\n", "\n", "Dg23 = 976 * 4.184 #J/mol\n", "Dg32 = 88 * 4.184 #J/mol\n", "\n", "Dg13 = 301 * 4.184 #J/mol\n", "Dg31 = 322 * 4.184 #J/mol\n" ] },
{ "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## Feeding the fitted parameters to the model in matrix structure:\n", "We will assemble the parameters in a matrix structure so that we can access each parameter by its index, as in\n", "`A[0,0]` and `A[0,1]` rather than as `A11` and `A12`, and so that we can loop through all of them using an iterator, see below:" ] },
{ "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "8.3144598\n" ] } ], "source": [ "import numpy as np\n", "from scipy.constants import R #universal gas constant, J/(mol K)\n", "print(R)" ] },
{ "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [], "source": [ "#assemble the matrix with the regressed parameters Dg_i,j; according to the model all diagonal terms are zero\n", "Dg = np.array([[0, Dg12, Dg13],\n", "               [Dg21, 0, Dg23],\n", "               [Dg31, Dg32, 0]])\n", "\n", "A = Dg/R #A_i,j parameters, in K\n", "\n", "#assemble the symmetric matrix alpha\n", "alpha = np.array([[0, alpha12, alpha13],\n", "                  [alpha12, 0, alpha23],\n", "                  [alpha13, alpha23, 0]])" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We previously had the loop-based implementation:" ] },
{ "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def Gamma(T,x,alpha,A):\n", "    #tau_j,i = A_j,i / T\n", "    tau=np.zeros([3,3])\n", "    for j in range(3):\n", "        for i in range(3):\n", "            tau[j,i]=A[j,i]/T\n", "    \n", "    #G_j,i = exp(-alpha_j,i * tau_j,i)\n", "    G=np.zeros([3,3])\n", "    for j in range(3):\n", "        for i in range(3):\n", "            G[j,i]=np.exp((-alpha[j,i]*tau[j,i]))\n", "    \n", "    #activity coefficients, component by component\n", "    Gamma=np.zeros([3])\n", "    for i in range(3):\n", "\n", "        Sj1=0\n", "        Sj2=0\n", "        Sj3=0\n", "        for j in range(3):\n", "            Sj1 += tau[j,i]*G[j,i]*x[j]\n", "            Sj2 += G[j,i]*x[j]\n", "            \n", "            Sk1=0\n", "            Sk2=0\n", "            Sk3=0\n", "            for k in range(3):\n", "                Sk1 += G[k,j]*x[k]\n", "                Sk2 += x[k]*tau[k,j]*G[k,j]\n", "                Sk3 += G[k,j]*x[k]\n", "            \n", "            Sj3 += ((x[j]*G[i,j])/(Sk1))*(tau[i,j]-(Sk2)/(Sk3))\n", "        \n", "        Gamma[i]=np.exp(Sj1/Sj2 + Sj3)\n", "    \n", "    return Gamma" ] },
{ "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ 1.4967744 1.28850578 1.01628367]\n" ] } ], "source": [ "#test it to see if the results match\n", "#trial temperature and composition:\n", "T = 293.15 #K\n", "x=np.array([.1,.3,.6]) #normalized\n", "ans=Gamma(T,x,alpha,A)\n", "print(ans) #test using this trial input" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Use linear algebra in the model" ] },
{ "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def Gamma_linalg(T,c_x,q_alpha, q_A): # here we chose the starting letters c, l and q to identify single column matrices, single line matrices and square matrices to the reader\n", "    # T should be a scalar value for temperature\n", "    # c_x should be a single column matrix (2d array) representing composition\n", "    # q_alpha should be a square matrix (2d array) representing component-dependent parameters inferred from experimental data\n",
"    # q_A should be a square matrix (2d array) representing component-dependent parameters inferred from experimental data\n", "    \n", "    q_tau = q_A/T #element-wise division by a scalar\n", "    q_at = q_alpha*q_tau #M2d * N2d yields element-wise multiplication\n", "    q_minusat = -q_at #element-wise sign change\n", "    q_G = np.exp(q_minusat) #element-wise exponentiation\n", "    q_Lambda = (q_tau*q_G) #element-wise multiplication\n", "    q_GT = q_G.T #M.T yields the transpose matrix of M\n", "    c_den = q_GT @ c_x #M @ N yields the matrix multiplication between M and N\n", "    c_invden = 1/c_den #the scalar 1 is broadcast for element-wise division\n", "    l_invden = c_invden.T #transposition of a single column matrix yields a single line matrix\n", "    q_E = q_Lambda * l_invden #element-wise multiplication between a (nl,nc) matrix M and a (1,nc) matrix l broadcasts the element-wise multiplication of each line of M with the single line matrix l\n", "    q_L = q_G * l_invden #broadcasting of element-wise multiplication\n", "    l_x = c_x.T #transposition of a single column matrix yields a single line matrix\n", "    q_Lx = q_L * l_x #broadcasting of element-wise multiplication\n", "    q_ET = q_E.T #transposition of a square matrix\n", "    q_LxET = q_Lx @ q_ET #matrix multiplication\n", "    q_ES = q_E+q_ET #element-wise sum\n", "    q_ESminusLxET = q_ES-q_LxET #element-wise subtraction\n", "    q_ESminusLxETx = q_ESminusLxET @ c_x #matrix multiplication\n", "    gamma = np.exp(q_ESminusLxETx) #element-wise exponentiation\n", "    return gamma" ] },
{ "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[ 1.4967744 ]\n", " [ 1.28850578]\n", " [ 1.01628367]]\n" ] } ], "source": [ "#a test case for the function\n", "#x was the composition represented as a 1d array\n", "#x_as_line and x_as_column are a single line and a single column matrix, respectively, representing the composition\n", "#We build them using the array function to wrap the 1d array in another 1d array, hence a 2d array\n", "x_as_line = np.array([x])\n", "#We transpose x_as_line to create x_as_column, which is the shape expected by the Gamma_linalg function\n", "x_as_column = np.array([x]).T #we wrap x with an extra bracket so it is now a 2d array, a matrix; as we did not add any extra lines it is a single line matrix; we transpose it to generate a single column matrix (transposing a 1d array would change nothing, as there is no second dimension)\n", "#print the output to see if errors occur and if the values are coherent (between zero and infinity, tending to 1 for ideal solutions)\n", "print(Gamma_linalg(T,x_as_column,alpha,A)) #test using this trial input" ] },
{ "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def Gamma_linalg_tiny(T,c_x,q_alpha, q_A):\n", "    #note that we used many lines above for didactic purposes\n", "    #we can do it in a few lines:\n", "    #note that some expressions occur more than once below,\n", "    #so it may be useful to define them as intermediate terms\n", "    #and calculate each of them only once, to then use it several times\n", "    q_tau = q_A/T\n", "    q_G = np.exp(-(q_alpha*q_tau))\n", "    l_D = ((1/((q_G.T) @ c_x)).T)\n", "    q_E = (q_tau*q_G) * l_D\n", "    gamma = np.exp(((q_E+(q_E.T))-(((q_G * l_D) * (c_x.T)) @ (q_E.T))) @ c_x)\n", "    return gamma" ] },
{ "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[ 1.4967744 ]\n", " [ 1.28850578]\n", " [ 1.01628367]]\n" ] } ], "source": [ "#test it to see that the results are the same\n", "print(Gamma_linalg_tiny(T,x_as_column,alpha,A)) #test using this trial input" ] },
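{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check (a minimal sketch, assuming the cells above have already been run), the loop-based and the matrix-based implementations can be compared directly; `Gamma` returns a 1d array while the matrix versions return a (3,1) column, so the latter is flattened before the comparison:\n", "\n", "```python\n", "gamma_loops = Gamma(T, x, alpha, A)                                   # shape (3,)\n", "gamma_matrix = Gamma_linalg_tiny(T, x_as_column, alpha, A).flatten()  # (3,1) flattened to (3,)\n", "print(np.allclose(gamma_loops, gamma_matrix))  # should print True if the implementations agree\n", "```" ] },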
{ "cell_type": "markdown", "metadata": {}, "source": [ "# What difference does it make?\n", "\n", "IPython provides a profiling tool: with the %timeit magic you can measure the execution time of a line of code (including all called dependencies). In the following cells, we use it to evaluate our function in version 1, with explicit for loops, and in versions 2a and 2b, with linear algebra matrix operations." ] },
{ "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "10000 loops, best of 3: 102 µs per loop\n" ] } ], "source": [ "%timeit Gamma(T,x,alpha,A) #test using this trial input #My result was 90 microseconds per loop" ] },
{ "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false, "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The slowest run took 198.08 times longer than the fastest. This could mean that an intermediate result is being cached.\n", "10000 loops, best of 3: 28.1 µs per loop\n" ] } ], "source": [ "%timeit Gamma_linalg(T,x_as_column,alpha,A) #test using this trial input #My result was 25 microseconds per loop" ] },
{ "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The slowest run took 174.12 times longer than the fastest. This could mean that an intermediate result is being cached.\n", "10000 loops, best of 3: 27.2 µs per loop\n" ] } ], "source": [ "%timeit Gamma_linalg_tiny(T,x_as_column,alpha,A) #test using this trial input #My result was 25 microseconds per loop" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Here are more honest trials, with random input generated inside the timed cell:" ] },
{ "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The slowest run took 431.19 times longer than the fastest. This could mean that an intermediate result is being cached.\n", "10000 loops, best of 3: 21.7 µs per loop\n" ] } ], "source": [ "%%timeit\n", "#approximately time the random number generation, to subtract it later\n", "# ~21 microseconds per loop here\n", "N=3\n", "x=np.random.rand(N,1)\n", "x=x/sum(x)\n", "alpha=np.random.rand(N,N)\n", "A=np.random.rand(N,N)\n", "T=(np.random.rand(1,1)+.5)*273" ] },
{ "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The slowest run took 28.12 times longer than the fastest. This could mean that an intermediate result is being cached.\n", "1000 loops, best of 3: 426 µs per loop\n" ] } ], "source": [ "%%timeit\n", "# ~440 microseconds per loop here (420 after subtracting the random number generation)\n", "N=3\n", "x=np.random.rand(N,1)\n", "x=x/sum(x)\n", "alpha=np.random.rand(N,N)\n", "A=np.random.rand(N,N)\n", "T=(np.random.rand(1,1)+.5)*273\n", "\n", "_=Gamma(\n", "    T,\n", "    x,\n", "    alpha,\n", "    A)" ] },
{ "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The slowest run took 189.96 times longer than the fastest. This could mean that an intermediate result is being cached.\n",
"10000 loops, best of 3: 55.7 µs per loop\n" ] } ], "source": [ "%%timeit\n", "# ~56 microseconds per loop here (36 after subtracting the random number generation)\n", "N=3\n", "x=np.random.rand(N,1)\n", "x=x/sum(x)\n", "alpha=np.random.rand(N,N)\n", "A=np.random.rand(N,N)\n", "T=(np.random.rand(1,1)+.5)*273\n", "\n", "_=Gamma_linalg(\n", "    T,\n", "    x,\n", "    alpha,\n", "    A)" ] },
{ "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The slowest run took 195.10 times longer than the fastest. This could mean that an intermediate result is being cached.\n", "10000 loops, best of 3: 56.2 µs per loop\n" ] } ], "source": [ "%%timeit\n", "# ~52 microseconds per loop here (32 after subtracting the random number generation)\n", "N=3\n", "x=np.random.rand(N,1)\n", "x=x/sum(x)\n", "alpha=np.random.rand(N,N)\n", "A=np.random.rand(N,N)\n", "T=(np.random.rand(1,1)+.5)*273\n", "\n", "_=Gamma_linalg_tiny(\n", "    T,\n", "    x,\n", "    alpha,\n", "    A)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "# Make it faster!\n", "Also, with minor effort, we can use Numba to further accelerate the code.\n", "\n", "See below how it is able to accelerate the functions `Gamma` and `Gamma_linalg_tiny`, and compare. Note that Numba compiles a function on its first call, so the first run is much slower than the subsequent ones." ] },
{ "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false }, "outputs": [], "source": [ "#These two lines are all that it takes to accelerate this function\n", "from numba import jit\n", "@jit\n", "#now repeat the function with a different name so we can compare\n", "def Gamma_numba(T,x,alpha,A):\n", "\n", "    tau=np.zeros([3,3])\n", "    for j in range(3):\n", "        for i in range(3):\n", "            tau[j,i]=A[j,i]/T\n", "    \n", "    G=np.zeros([3,3])\n", "    for j in range(3):\n", "        for i in range(3):\n", "            G[j,i]=np.exp((-alpha[j,i]*tau[j,i]))\n", "    \n", "    Gamma=np.zeros([3])\n", "    for i in range(3):\n", "\n", "        Sj1=0\n", "        Sj2=0\n", "        Sj3=0\n", "        for j in range(3):\n", "            Sj1 += tau[j,i]*G[j,i]*x[j]\n", "            Sj2 += G[j,i]*x[j]\n", "            \n", "            Sk1=0\n", "            Sk2=0\n", "            Sk3=0\n", "            for k in range(3):\n", "                Sk1 += G[k,j]*x[k]\n", "                Sk2 += x[k]*tau[k,j]*G[k,j]\n", "                Sk3 += G[k,j]*x[k]\n", "            \n", "            Sj3 += ((x[j]*G[i,j])/(Sk1))*(tau[i,j]-(Sk2)/(Sk3))\n", "        \n", "        Gamma[i]=np.exp(Sj1/Sj2 + Sj3)\n", "    \n", "    return Gamma" ] },
{ "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from numba import jit\n", "@jit\n", "#this variant returns ln(gamma) directly, without the final exponentiation\n", "def lngammaNRTL(T,c_x,q_alpha, q_A):\n", "    q_tau = q_A/T\n", "    q_G = np.exp(-(q_alpha*q_tau))\n", "    l_D = ((1/((q_G.T)@\n", "                c_x)).T)\n", "    q_E = (q_tau*q_G)*l_D\n", "    return (((q_E+(q_E.T))-(((q_G*l_D)*(c_x.T))@\n", "                            (q_E.T)))@\n", "            c_x)" ] },
{ "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": true }, "outputs": [], "source": [ "#These two lines are all that it takes to accelerate this function\n", "from numba import jit\n", "@jit\n", "#now repeat the function with a different name so we can compare them\n", "def Gamma_linalg_tiny_numba(T,c_x,q_alpha, q_A):\n", "    q_tau = q_A/T\n", "    q_G = np.exp(-(q_alpha*q_tau))\n", "    l_D = ((1/((q_G.T) @ c_x)).T)\n", "    q_E = (q_tau*q_G) * l_D\n", "    gamma = np.exp(((q_E+(q_E.T))-(((q_G * l_D) * (c_x.T)) @ (q_E.T))) @ c_x)\n", "    return gamma" ] },
{ "cell_type": "code", "execution_count": 22, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The slowest run took 34.63 times longer than the fastest. This could mean that an intermediate result is being cached.\n",
"1000 loops, best of 3: 388 µs per loop\n" ] } ], "source": [ "%%timeit\n", "# ~370 microseconds per loop here (350 after subtracting the random number generation, versus 420 without Numba: that's not much acceleration)\n", "N=3\n", "x=np.random.rand(N,1)\n", "x=x/sum(x)\n", "alpha=np.random.rand(N,N)\n", "A=np.random.rand(N,N)\n", "T=(np.random.rand(1,1)+.5)*273\n", "\n", "_ = Gamma_numba(\n", "    T,\n", "    x,\n", "    alpha,\n", "    A)" ] },
{ "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The slowest run took 50.88 times longer than the fastest. This could mean that an intermediate result is being cached.\n", "10000 loops, best of 3: 36.5 µs per loop\n" ] } ], "source": [ "%%timeit\n", "# ~34 microseconds per loop here (14 after subtracting the random number generation, versus 32 without Numba: approximately half)\n", "\n", "N=3\n", "x=np.random.rand(N,1)\n", "x=x/sum(x)\n", "alpha=np.random.rand(N,N)\n", "A=np.random.rand(N,N)\n", "T=(np.random.rand(1,1)+.5)*273\n", "\n", "_ = Gamma_linalg_tiny_numba(\n", "    T,\n", "    x,\n", "    alpha,\n", "    A)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "# Upcoming features:\n", "\n", "* 3d plots of activity coefficient vs composition\n", ">- gamma1 vs (x1,x2 | x3=1-x1-x2)\n", ">- gamma2 vs (x1,x2 | x3=1-x1-x2)\n", ">- gamma3 vs (x1,x2 | x3=1-x1-x2)\n", "\n", "* Liquid-liquid equilibria flash algorithm\n", "\n", "* 2d x1-x2-x3 triangle plots of the phase equilibrium envelope at given T and P\n", "\n", "* Optimizing code with Cython and Numba, comparison with Fortran and C\n", "\n", "* Problem inversion: regression of alpha and tau parameters from experimental data." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python [Root]", "language": "python", "name": "Python [Root]" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" } }, "nbformat": 4, "nbformat_minor": 1 }