{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"Machine-learning applications in computational chemistry.\"\"\"\n", "\n", "__authors__ = \"B. G. Peyton\"\n", "__credits__ = [\"Doaa Altarawy\", \"Matthew Welborn\", \"Daniel G. A. Smith\"]\n", "__email__ = [\"bgpeyton@vt.edu\"]\n", "\n", "__copyright__ = \"(c) 2008-2020, The Psi4Education Developers\"\n", "__license__ = \"BSD-3-Clause\"\n", "__date__ = \"2020-07-13\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Machine-learning applications in computational chemistry\n", "## 0. Environment setup " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import psi4\n", "from sklearn.linear_model import LinearRegression\n", "from sklearn.kernel_ridge import KernelRidge\n", "from sklearn.model_selection import GridSearchCV\n", "import numpy as np\n", "import pandas as pd\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Introduction\n", "\n", "Machine-learning (ML) is broadly defined as any algorithm which improves by feeding more data into the algorithm. Familiar concepts such as basic linear regression can be understood as applications of ML, where more data in the regression provides a better fit to a straight line which can then be used to predict new points. This is an example of **supervised** learning, where some input values (x) and output values (y) are known from the beginning. These values compose a **training set**, a defining feature of any ML method. Unsupervised learning is also possible, and is generally used for finding patterns in correlated data; however, we will restrict our discussion to supervised learning techniques.\n", "\n", "ML has found applications across the physical sciences including (but not limited to) engineering, physics, biology, and chemistry. By utilizing training sets of molecules and their various properties, chemists can make use of ML algorithms by predicting molecular properties using only the molecular formula or structure without the need for time-consuming experiments. However, generating a robust database of reference values to train the ML model is a tedious task for experimental chemists. Computational chemistry provides a quick and consistent way to produce training data for a reference set of molecules. Once a model is trained with these reference data, properties for new molecules (or geometries of the same molecule) can be predicted at very low cost -- no experiment or electronic structure calculation required. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Representing a molecule\n", "The ML algorithms we will be using attempt to find a model $f({\\bf x})$ which can map a representation vector $\\bf x$ onto some target value $y$:\n", "$$f({\\bf x}) = {\\bf x}^T {\\bf w},$$\n", "\n", "$$y = f({\\bf x}) + \\epsilon$$\n", "where $\\epsilon$ is the **noise** or **error** of the model. These equations define a simple linear regression model, where we have allowed the input vector (and therefor the weights, or slope) to be of arbitrary dimension.\n", "\n", "For our purposes, $\\bf x$ is some vector **representation** of our molecule, and $y$ is some property we'd like to predict, such as the energy. We will start by calculating the energy of the water molecule during the symmetric stretching of the O-H bonds, which will be the property $y$ in our model. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "For our purposes, $\\bf x$ is some vector **representation** of our molecule, and $y$ is some property we'd like to predict, such as the energy. We will start by calculating the energy of the water molecule during the symmetric stretching of the O-H bonds; this energy will be the property $y$ in our model. The following code block calculates the energy at 31 different bond lengths and saves the geometry, nuclear charges, and energy at each point." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# run across an H2O symmetric bond stretching surface\n", "# 0.5 - 2.0 Angstroms, increments of 0.05, total of 31 geometries\n", "# use an H-O-H bond angle of 104.5 degrees\n", "# save geometry, nuclear charges (same across the surface), and scf energy as lists\n", "\n", "# set basis\n", "psi4.set_options({\n", "    'basis':'sto-3g'\n", "})\n", "# initialize geometry list\n", "geoms = []\n", "# initialize charge list\n", "qs = []\n", "# initialize energy list\n", "Es = []\n", "# generate bond lengths\n", "rs = []\n", "for i in range(0,31):\n", "    rs.append(0.5 + i*0.05)\n", "\n", "# loop over bond lengths\n", "for i in rs:\n", "    # generate a water molecule using a Z-matrix and set the O-H bond lengths\n", "    mol = psi4.geometry(\"\"\"\n", "    O\n", "    H 1 \"\"\" + str(i) + \"\"\"\n", "    H 1 \"\"\" + str(i) + \"\"\" 2 104.5\n", "    \"\"\")\n", "    # save the geometry\n", "    geoms.append(mol.geometry().to_array())\n", "    \n", "    # save the nuclear charges for all three atoms as a list\n", "    q = []\n", "    for a in range(0,3):\n", "        q.append(mol.fZ(a))\n", "    qs.append(q)\n", "    \n", "    # calculate and save the energy\n", "    Es.append(psi4.energy('scf'))\n", "    " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the next cell, the energy is plotted as a function of the O-H bond length, giving a slice of the potential energy surface." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# plot the original surface\n", "plt.plot(rs,Es)\n", "plt.title(\"H$_2$O Hartree-Fock symmetric O-H stretch\")\n", "plt.xlabel('r / $\\\\AA$')\n", "plt.ylabel('E / $E_h$')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "At a glance, the energy seems to be a simple function of the bond length $r$. One could formulate this as a simple regression problem and solve for a function that maps $r$ onto the energy $E$. However, if one wishes to explore more than the symmetric stretch, more information about the molecule is necessary: we want to describe every bond in the molecule.\n", "\n", "To do this, we can encode the geometry of the molecule into a matrix using the interatomic distances $r_{ij}$ - that is, the distance between every pair of atoms $i$ and $j$. To differentiate the O-H distances from the H-H \"bond\", we weight each pair by the nuclear charges $Z$ of the atoms involved. The resulting matrix is called a **Coulomb matrix** $\\bf C$, which takes the form:\n", "$$\n", "\\begin{align}\n", "C_{ij} = \n", "\\begin{cases}\n", "\\frac{Z_iZ_j}{r_{ij}}&\\text{for $i \\neq j$}\\\\\n", "0.5Z_i^{2.4}&\\text{for $i=j$}\\\\\n", "\\end{cases}\n", "\\end{align}\n", "$$" ] },
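{ "cell_type": "markdown", "metadata": {}, "source": [ "Before writing the general function, it can help to evaluate the formula by hand for the smallest possible case. The sketch below builds the $2\\times2$ Coulomb matrix for H$_2$ (where both $Z = 1$) at an internuclear distance of 1.4 Bohr; the distance is an arbitrary value chosen for illustration. Note that the general function below must additionally compute each $r_{ij}$ from the Cartesian coordinates." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# worked example: the 2x2 Coulomb matrix of H2 (Z = 1 for both atoms)\n", "# the 1.4 Bohr separation is an arbitrary illustrative value\n", "Z_h2 = [1.0, 1.0]\n", "r12 = 1.4\n", "cm_h2 = np.zeros((2,2))\n", "for i in range(2):\n", "    for j in range(2):\n", "        if i == j:\n", "            cm_h2[i,j] = 0.5 * Z_h2[i]**2.4 # diagonal: 0.5 * Z_i^2.4\n", "        else:\n", "            cm_h2[i,j] = Z_h2[i] * Z_h2[j] / r12 # off-diagonal: Z_i * Z_j / r_ij\n", "print(cm_h2)" ] },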
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# ==> Build Coulomb matrices according to the above equation <==\n", "\n", "def coulomb(geom,q):\n", "    '''\n", "    Generates the Coulomb matrix given a geometry and a list of atomic charges\n", "    \n", "    Parameters:\n", "    geom: list of lists of atomic coordinates, [[xi,yi,zi],[xj,yj,zj],...]\n", "    q: list of atomic charges, [qi,qj,...]\n", "    \n", "    Returns:\n", "    cm: np.ndarray, Coulomb matrix\n", "    '''\n", "    natom = len(q)\n", "    cm = np.zeros((natom,natom)) # ==> Fill this matrix! <==\n", "    \n", "    # ==> YOUR CODE HERE <==\n", "    \n", "    return cm\n", "\n", "# Generate the Coulomb matrix for all geometries and store them in a list\n", "couls = []\n", "for i in range(0,len(geoms)):\n", "    coul = coulomb(geoms[i],qs[i])\n", "    couls.append(coul)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(couls[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The above cell should evaluate to:\n", "```\n", "[[73.51669472 8.46683537 8.46683537]\n", " [ 8.46683537 0.5 0.66926039]\n", " [ 8.46683537 0.66926039 0.5 ]]\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**STUDENT QUESTIONS:**\n", "\n", "A. `mol.geometry()` always returns the geometry in Cartesian coordinates, in units of Bohr. How would the Coulomb matrix compare if you used different units for your bond distances?\n", "\n", "B. Why do we not scale the diagonal elements by the bond distance?\n", "\n", "**Answers:** \n", "\n", "A. \n", "\n", "B. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Training an ML model for molecular energy\n", "Now we may take our Coulomb matrix elements and use them as features for an ML model. We will flatten $\\bf C$ into a 1D array $\\bf c$ (using the `numpy` function `ndarray.flatten()`), then find the optimal weights $\\bf w$ which map the features onto the energy:\n", "\n", "$$y \\approx f({\\bf x}) = {\\bf c}^T {\\bf w}.$$\n", "\n", "Finding the weights is done by calling `fit(x,y)` on a `LinearRegression` object, which was imported from the `scikit-learn` ML package at the top of this notebook. The resulting model can then be used to `predict` values for any new representations." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**STUDENT QUESTION:** If our feature vector $\\bf c$ is a vector containing all of the elements of $\\bf C$, how many elements will there be in our weight vector $\\bf w$?\n", "\n", "**Answer:** " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will now select some points to train the ML model, and try to predict the energy at every other point. Since we know what the curve should look like, we can evaluate the accuracy of our model. Execute the following code blocks to see how the model performs." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# set up a training set (X,y) - 4 evenly-spaced points on the PES\n", "trainers = [0,7,15,23] \n", "X_train = []\n", "y_train = []\n", "for t in trainers:\n", "    X_train.append(couls[t].flatten())\n", "    y_train.append(Es[t])\n", "\n", "# the prediction (test) set will be everything else\n", "testers = []\n", "for i in range(0,31):\n", "    testers.append(i)\n", "# remove the training points from the test set!\n", "for t in sorted(trainers,reverse=True):\n", "    del testers[t]\n", "# set up the test set (X,y)\n", "X_test = []\n", "y_test = []\n", "for t in testers:\n", "    X_test.append(couls[t].flatten())\n", "    y_test.append(Es[t])\n", "\n", "# use LinearRegression from scikit-learn to train and predict\n", "reg = LinearRegression().fit(X_train,y_train)\n", "y_pred = reg.predict(X_test)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# plot the true and ML surfaces\n", "plt.plot(rs,Es,label='Truth')\n", "plt.plot(np.asarray(rs)[testers],y_pred,label='ML')\n", "plt.legend()\n", "plt.xlabel('r / $\\\\AA$')\n", "plt.ylabel('E / $E_h$')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As the plot shows, this simple linear model performs poorly." ] },
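{ "cell_type": "markdown", "metadata": {}, "source": [ "One quick way to quantify this is the mean absolute error of the predictions over the test set:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# mean absolute error of the linear model across the test points\n", "print(\"Mean prediction error: {} Hartrees\".format(np.mean(np.abs(y_pred - np.asarray(y_test)))))" ] },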
{ "cell_type": "markdown", "metadata": {}, "source": [ "However, `scikit-learn` provides many pre-packaged ML algorithms which are more appropriate for this job. Let's try again, this time using the popular **kernel ridge regression** (KRR) model instead of linear regression. This model solves a modified linear regression equation:\n", "\n", "$$\n", "y \\approx f({\\bf x}) = \\sum_i w_i k({\\bf x},{\\bf x'}_i)\n", "$$\n", "\n", "where $i$ runs over the training set (${\\bf x'}$). The regression coefficients $\\bf w$ are defined by\n", "\n", "$$\n", "{\\bf w} \\triangleq ({\\bf K}({\\bf x'},{\\bf x'}) + \\alpha{\\bf I})^{-1}{\\bf y}\n", "$$\n", "\n", "where $\\bf I$ is the identity matrix. This method differs from linear regression in two key ways. The first is the inclusion of a regularization parameter $\\alpha$ (called a **hyperparameter**), which is chosen during training and protects against over-fitting. The second is the use of a **kernel**, vector $k$ or matrix $\\bf{K}$, which measures the **similarity** of two inputs $\\bf x$ and ${\\bf x'}$ rather than using the inputs themselves. There are many possible kernel definitions, but we will choose the popular radial basis function (RBF) kernel:\n", "\n", "$$\n", "{\\bf K}({\\bf x},{\\bf x'}) = \\exp(-\\gamma||{\\bf x} - {\\bf x'}||^2)\n", "$$\n", "\n", "where $\\gamma$ is a second hyperparameter, determined at training time, which sets the \"width\" of the kernel. The hyperparameters are optimized by simply trying many different values and choosing the combination which gives the best results on held-out subsets of the training data, a process called **cross-validation**.\n", "(NOTE: the algorithm must never be shown the test data until the final evaluation!)" ] },
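{ "cell_type": "markdown", "metadata": {}, "source": [ "To make these equations concrete, here is a minimal `numpy` sketch of KRR on our four training points. The values of $\\alpha$ and $\\gamma$ below are hand-picked purely for illustration; the `scikit-learn` model that follows chooses them properly by cross-validation. Compare the printed predictions with `y_test`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# kernel ridge regression \"by hand\", for illustration only\n", "# NOTE: alpha and gamma are arbitrary here; GridSearchCV tunes them below\n", "alpha, gamma = 1e-9, 1e-2\n", "Xtr = np.asarray(X_train)\n", "Xte = np.asarray(X_test)\n", "# RBF kernel over all pairs of training points: K_ij = exp(-gamma*||x_i - x_j||^2)\n", "K = np.exp(-gamma * np.sum((Xtr[:,None,:] - Xtr[None,:,:])**2, axis=2))\n", "# regression weights: w = (K + alpha*I)^-1 y\n", "w = np.linalg.solve(K + alpha * np.eye(len(Xtr)), np.asarray(y_train))\n", "# predictions: f(x) = sum_i w_i k(x, x'_i)\n", "k_test = np.exp(-gamma * np.sum((Xte[:,None,:] - Xtr[None,:,:])**2, axis=2))\n", "print(k_test @ w)" ] },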
{ "cell_type": "markdown", "metadata": {}, "source": [ "`scikit-learn` can build this model given the training set and a few starting parameters with just a few lines of code, shown below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# build the kernel ridge regression model\n", "krr = KernelRidge(kernel='rbf')\n", "# define the hyperparameter grid: 12 log-spaced points from 1E-12 to 1E12 for each\n", "parameters = {'alpha':np.logspace(-12,12,num=12),\n", "              'gamma':np.logspace(-12,12,num=12)}\n", "# build the model - 4-fold grid-search CV with negative mean-squared-error scoring\n", "krr_regressor = GridSearchCV(krr,parameters,cv=4,scoring='neg_mean_squared_error')\n", "# train the model on the training set\n", "krr_regressor.fit(X_train,y_train)\n", "# predict the test set\n", "y_pred = krr_regressor.predict(X_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will plot our results with `matplotlib` again, this time using an inset plot to zoom in." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.plot(rs,Es,label='Truth')\n", "plt.plot(np.asarray(rs)[testers],y_pred,label='ML')\n", "\n", "ax = plt.gca()\n", "axin = ax.inset_axes([0.25,0.4,0.4,0.4])\n", "axin.plot(rs,Es)\n", "axin.plot(np.asarray(rs)[testers],y_pred)\n", "axin.set_xlim([0.9,1.1])\n", "axin.set_ylim([-74.965,-74.94])\n", "axin.set_alpha(0)\n", "ax.indicate_inset_zoom(axin,label='_nolegend_')\n", "\n", "plt.legend()\n", "plt.xlabel('r / $\\\\AA$')\n", "plt.ylabel('E / $E_h$')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, the inclusion of optimizable hyperparameters and the use of a kernel vastly improve the flexibility of our model. Now that we have a model that seems to work, we will try a more complex system." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. Predicting a hypersurface\n", "Of course, we do not need to limit ourselves to the symmetric stretch. We can also explore the surface generated by stretching the two O-H bonds independently, which is a function of both bond distances. Complete the following code to compute this hypersurface - you should try all combinations of the `rs` values for the two bonds, so reduce the number of points to 16 instead of 31 (still ranging from 0.5 to 2.0 Angstroms), for a total of $16^2 = 256$ geometries. Running the calculations may take a few minutes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# ==> Generate the slice of the hypersurface where both bonds stretch independently. <==\n", "\n", "psi4.set_options({\n", "    'basis':'sto-3g'\n", "})\n", "geoms = []\n", "qs = []\n", "rs = [] # 16 r values this time!\n", "Es = np.zeros((16,16)) # store the energies in a matrix! we can flatten it later\n", "\n", "# ==> YOUR CODE HERE <==" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since we now want to plot the energy against two variables ($r_1$ and $r_2$), we can use a contour plot. The color shows the deviation from the minimum energy at each point $(r_1,r_2)$." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig,ax=plt.subplots(1,1)\n", "cp = ax.contourf(rs,rs,(Es - np.min(Es)))\n", "cbar = fig.colorbar(cp,label=\"(E - min) / $E_h$\")\n", "ax.set_title('H$_2$O hypersurface')\n", "ax.set_xlabel('r$_1$ / $\\\\AA$')\n", "ax.set_ylabel('r$_2$ / $\\\\AA$')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If the calculations have run correctly, the above cell should produce the following graph:\n", "![contour](data/h2o_contour.png)\n", "We can see that there is a broad energy well, meaning that many different bond distances produce energies near the minimum. Reproducing this well is key for a useful ML model of the system. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the following cells, choose a number of training points and attempt to reproduce the above hypersurface. See if you can find the smallest number of training points that still gives a good fit!\n", "\n", "NOTE: you can choose the training points however you like. You may wish to try evenly-spaced points, points near the minima or maxima of the hypersurface, or even random points! (Hint: for the latter, try `np.random.randint` - see the sketch below.)" ] },
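{ "cell_type": "markdown", "metadata": {}, "source": [ "For example, here is a minimal sketch of drawing random training indices into the 256 flattened grid points (the choice of 25 points is arbitrary):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# draw 25 random indices from [0, 256)\n", "# note: np.random.randint may repeat an index;\n", "# np.random.choice(256, size=25, replace=False) would avoid duplicates\n", "example_trainers = np.random.randint(0, 256, size=25)\n", "print(example_trainers)" ] },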
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# ==> Generate the Coulomb matrix for all geometries <==\n", "couls = []\n", "\n", "# ==> YOUR CODE HERE <==" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# ==> Make your training set - X and y values. <==\n", "# Remember: you must flatten each individual Coulomb matrix, but also the whole energy array\n", "trainers = []\n", "X_train = []\n", "y_train = []\n", "\n", "# ==> YOUR CODE HERE <==" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# ==> Finally, use scikit-learn to build a model to predict the rest of the surface <==\n", "krr = KernelRidge(kernel='rbf')\n", "parameters = {'alpha':np.logspace(-12,12,num=12),\n", "              'gamma':np.logspace(-12,12,num=12)}\n", "krr_regressor = GridSearchCV(krr,parameters,cv=4,scoring='neg_mean_squared_error')\n", "\n", "# ==> COMPLETE THE FUNCTION CALL <==\n", "krr_regressor.fit() " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following cell will predict the entire surface, including the training set, to make plotting easier. We will also look at the average error across the surface - how close does your model come to the true energies?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y_pred = krr_regressor.predict([c.flatten() for c in couls])\n", "fig,ax=plt.subplots(1,1)\n", "cp = ax.contourf(rs,rs,(y_pred.reshape(16,16) - np.min(Es)))\n", "cbar = fig.colorbar(cp,label=\"($E_{ML}$ - min) / $E_h$\")\n", "ax.set_title('H$_2$O hypersurface')\n", "ax.set_xlabel('r$_1$ / $\\\\AA$')\n", "ax.set_ylabel('r$_2$ / $\\\\AA$')\n", "plt.show()\n", "print(\"Mean prediction error across the surface: {} Hartrees\".format(np.mean(np.abs(y_pred - Es.flatten()))))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**STUDENT QUESTION:** The shape of the well may be reproduced, but \"chemical accuracy\" in computational chemistry is generally taken to be 1 kcal/mol (1 $E_h$ = 627.509 kcal/mol). Does your model reach this (on average)?\n", "\n", "**Answer:** " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. Atomization energies and the ANI-1 dataset\n", "We can also apply the same logic to many different molecules. A variant of the Coulomb matrix can be generated for molecules from the ANI-1 dataset, a collection of over 57 thousand organic molecules with up to 8 heavy (non-hydrogen) atoms, along with their properties. Large datasets such as these can be used to train models which predict properties for new molecules without needing to perform an expensive calculation. \n", "\n", "Below, a subset of the ANI-1 dataset is loaded into Pandas dataframes containing the energy from an electronic structure calculation, a molecule object, the Coulomb matrix (`feature`), and the atomization energy (`AE`) for a 1000-molecule training set and a 100-molecule test set. A Pandas dataframe acts as a powerful dictionary for storing large datasets." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data_train = pd.read_pickle('data/ani1_train.pd')\n", "data_test = pd.read_pickle('data/ani1_test.pd')\n", "\n", "data_train" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, all of our data is collected into just two objects which can be indexed by name, similarly to a Python `dict` object." ] },
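{ "cell_type": "markdown", "metadata": {}, "source": [ "For instance, each column can be pulled out by its name (a quick look, using the column names described above):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# list the available columns, then peek at the first few atomization energies\n", "print(data_train.columns.tolist())\n", "data_train[\"AE\"].head()" ] },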
{ "cell_type": "markdown", "metadata": {}, "source": [ "Another advantage of this dataframe is that every entry can be visualized within a Jupyter notebook. Feel free to explore the different molecules present." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data_test[\"molecule\"][1]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the cells below, train a new kernel ridge regression model. Then, evaluate the error in the predicted atomization energies for both the test and training sets. The error on the training set gives us a \"lower bound\" on the error we can expect for the test set: if the model performs poorly even on the training set, we know it must be revised." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# ==> Pandas dataframes allow us to store features and data in one place. <==\n", "# ==> Select the appropriate dictionary keys to build the training and test sets. <==\n", "X_train = np.vstack(data_train[])\n", "y_train = data_train[].values\n", "X_test = np.vstack(data_test[])\n", "y_test = data_test[].values" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# ==> train the model and predict for both the test and training set <==\n", "# ==> this will take a moment <==\n", "krr = KernelRidge(kernel='rbf')\n", "parameters = {'alpha':np.logspace(-12,12,num=12),\n", "              'gamma':np.logspace(-12,12,num=12)}\n", "krr_regressor = GridSearchCV(krr,parameters,cv=4,scoring='neg_mean_squared_error')\n", "\n", "# ==> COMPLETE THE FUNCTION CALLS <==\n", "krr_regressor.fit()\n", "y_pred_test = krr_regressor.predict()\n", "y_pred_train = krr_regressor.predict()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we will plot the errors in a violin plot. The y-axis shows the error, while the width of the violin shows the density (frequency) of points with that error; in other words, the widest point marks the error where most of the data points fall. The minimum, maximum, and mean errors are marked by horizontal lines." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_err = np.abs(y_test - y_pred_test)\n", "train_err = np.abs(y_train - y_pred_train)\n", "\n", "fig,ax = plt.subplots(1,1)\n", "ax.violinplot([test_err,train_err],showmeans=True)\n", "ax.set_xticks([1,2])\n", "ax.set_xticklabels([\"Test Set\",\"Training Set\"])\n", "ax.set_ylabel('ML error / $E_h$')\n", "print(\"Mean prediction error across the test set: {} Hartrees\".format(np.mean(test_err)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**STUDENT QUESTION:**\n", "Are these errors significant or negligible? Should the model be revised? Discuss your reasoning. (The conversion sketch below may help.)\n", "\n", "**Answer:** " ] },
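{ "cell_type": "markdown", "metadata": {}, "source": [ "As a yardstick, one option is to convert the mean errors from Hartrees to kcal/mol (1 $E_h$ = 627.509 kcal/mol) and compare them against the 1 kcal/mol chemical accuracy target from earlier:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# convert the mean errors to kcal/mol; 1 Hartree = 627.509 kcal/mol\n", "hartree_to_kcal = 627.509\n", "print(\"Mean test error: {:.2f} kcal/mol\".format(np.mean(test_err) * hartree_to_kcal))\n", "print(\"Mean train error: {:.2f} kcal/mol\".format(np.mean(train_err) * hartree_to_kcal))" ] },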
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" }, "latex_envs": { "LaTeX_envs_menu_present": true, "autoclose": false, "autocomplete": true, "bibliofile": "biblio.bib", "cite_by": "apalike", "current_citInitial": 1, "eqLabelWithNumbers": true, "eqNumInitial": 1, "hotkeys": { "equation": "Ctrl-E", "itemize": "Ctrl-I" }, "labels_anchors": false, "latex_user_defs": false, "report_style_numbering": true, "user_envs_cfg": true } }, "nbformat": 4, "nbformat_minor": 2 }