{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# First time here?\n", "Please see our [demo](ReadMeFirst.ipynb) on how to use the notebooks.\n", "\n", "___\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Linear Non-Linear Regressions (and Why They're Bad)\n", "\n", "Consider the classical ways of determining Michaelis-Menten constants... In the old days, curve fitting was hard and a lot of effort was spent linearizing problems in order to avoid having to perform a nonlinear fit. One such method is the Lineweaver-Burk plot that uses the reciprocal of subtrate concentration vs the reciprocal of reaction rate. The problem here is that taking the reciprocal inverts the size of measurement errors, so small errors become large ones. Moreover, the constants are determined by the axis-intercepts, which may be some distance from the data and extrapolations over long distances is always dicey. Another method is the Hanes-Woolf plot which plots the substrate concentration divided by the reaction rate against the substrate concentration. The problem here is that both axes depend on the substrate concentration, so typical methods of error estimation are no longer valid.\n", "\n", "These methods are still taught today because they provide an intuition into problem and because it's necessary to understand how the kinetics constants were found in older papers. However, we now have access to fast and ubiquitous computing. A much better solution is to use nonlinear fitting. For example, set up the Michaelis-Menten equation in gnuplot and use its fit() function, and you'll get a very quick, very accurate constant estimate with error estimates that actually mean something! For a more detailed comparison of the different methods and their errors, see the following paper: _Current statistical methods for estimating the Km and Vmax of Michaelis-Menten kinetics_ (10.1016/S0307-4412(96)00089-1)\n", "\n", "Here, we will work out an example using Python..." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let's import everything we're going to need..." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from scipy.optimize import curve_fit\n", "import matplotlib.pyplot as plt\n", "from ipywidgets import interact, interactive, fixed, interact_manual\n", "import ipywidgets as widgets" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, let's load in some kinetics data. You can download the [example data](vdata.asc), or you can use your own data if you would like. You may need to adjust the code below to suit your data." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "A = np.loadtxt('vdata.asc')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we define the function we want to fit, which is just the Michaelis-Menten equation," ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "def michaelis_menten(s, km, vmax):\n", " return(s*vmax/(s+km))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we use the nonlinear fitting function from SciPy. We're going to treat these answers as the \"correct\" ones and preserve them for later use." 
] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Km=0.597, Vmax=0.690\n" ] } ], "source": [ "opt, cov = curve_fit(michaelis_menten, A[:,0], A[:,1], p0=(0.5,0.5))\n", "correct_km = opt[0]\n", "correct_vmax = opt[1]\n", "print('Km=%.3f, Vmax=%.3f' % (correct_km, correct_vmax))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The classic method of determining Km and Vmax is to use a double-reciprocal plot, also known as the Lineweaver-Burk Plot. The data is fit using a linear regression model, or $y = ax + b$. From this, we can derive $K_m = \\frac{a}{b}$ and $V_{max} = \\frac{1}{b}$. Now let's try it using Python..." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Km=0.441, Vmax=0.585\n", "Km error = -0.155920\n" ] } ], "source": [ "(a,b) = np.polyfit(1.0/A[:,0], 1.0/A[:,1], 1)\n", "km = a/b\n", "vmax = 1.0/b\n", "print('Km=%.3f, Vmax=%.3f' % (km, vmax))\n", "print('Km error = %f' % (km - correct_km))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That's a pretty sizable error!\n", "\n", "## Handling Measurement Errors\n", "\n", "Let's see how these two methods respond to different levels of measurement error. First, lets set up some functions that will let us compare the two methods..." ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "ntries = 0\n", "nonlin_km_errsum = 0.0\n", "lin_km_errsum = 0.0" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "def compareFits(s, v):\n", " global ntries\n", " global nonlin_km_errsum\n", " global lin_km_errsum\n", " \n", " opt, cov = curve_fit(michaelis_menten, s, v, p0=(0.5, 0.5))\n", " km = opt[0]\n", " vmax = opt[1]\n", " \n", " si = 1.0 / s\n", " vi = 1.0 / v\n", " (a,b) = np.polyfit(si, vi, 1)\n", " km2 = a / b\n", " vmax2 = 1.0 / b\n", " \n", " plt.subplot(1,2,1)\n", " plt.plot(s,v,'ro')\n", " plt.plot(s, michaelis_menten(s, km, vmax), 'r-')\n", " plt.xlabel('[S]',labelpad=-2)\n", " plt.ylabel('V')\n", " text = 'Km = %.3f, Vm = %.3f' % (km, vmax)\n", " plt.figtext(0.15,0.85, text)\n", " err = abs(km - correct_km)\n", " nonlin_km_errsum += err\n", " text = 'Km error = %.3f' % err\n", " plt.figtext(0.15, 0.15, text)\n", " text = 'Avg Km error = %.3f' % (nonlin_km_errsum / ntries)\n", " plt.figtext(0.15, 0.0, text)\n", " plt.title('Nonlinear fit')\n", " \n", " plt.subplot(1,2,2)\n", " plt.plot(si, vi, 'go')\n", " y2 = np.polyval( [a, b], si)\n", " plt.plot(si, y2, 'g-')\n", " plt.xlabel('1/[S]', labelpad=-2)\n", " plt.ylabel('1/V')\n", " text = 'Km = %.3f, Vm = %.3f' % (km2, vmax2)\n", " plt.figtext(0.6, 0.85, text)\n", " err = abs(km2 - correct_km)\n", " lin_km_errsum += err\n", " text = 'Km error = %.3f' % err\n", " plt.figtext(0.6, 0.15, text)\n", " text = 'Avg Km error = %.3f' % (lin_km_errsum / ntries)\n", " plt.figtext(0.6, 0.0, text)\n", " lin_km_errsum += abs(km2 - correct_km) \n", " plt.title('Lineweaver-Burk Plot')\n", " \n", " ntries += 1 \n", " plt.show()\n", " \n", "def compareWithError(s, v, serror, verror):\n", " serr = (np.random.random(len(s)) * 2.0 - 1.0) * (serror / 100.0)\n", " verr = (np.random.random(len(v)) * 2.0 - 1.0) * (verror / 100.0)\n", " ve = v + v * verr\n", " se = s + s * serr\n", " compareFits(se, ve)\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following code block will now compare the nonlinear fit with 
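, { "cell_type": "markdown", "metadata": {}, "source": [
 "As a quick static check (an added example for anyone reading a non-interactive copy of this notebook), a single call to `compareWithError` produces one side-by-side comparison; the 5% and 10% error levels here are arbitrary choices:"
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
 "# One non-interactive comparison: 5% error in [S], 10% error in V.\n",
 "# (The error levels are illustrative, not from the original walkthrough.)\n",
 "ntries = 0\n",
 "nonlin_km_errsum = 0.0\n",
 "lin_km_errsum = 0.0\n",
 "compareWithError(A[:,0], A[:,1], 5.0, 10.0)"
] }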
, { "cell_type": "markdown", "metadata": {}, "source": [
 "The following code block will now compare the nonlinear fit with the linearized (double-reciprocal) fit and show you the error in the calculated Km, along with the running average error. There are two sliders: serror and verror. These correspond to the error introduced into the measurements of substrate concentration and reaction velocity, respectively. The sliders let you add errors of up to 20% of the value of each data point (drawn from a uniform distribution). Each time you move a slider, a new set of random errors is drawn. Try playing with both sliders and see how the error in estimating Km (shown below the graphs) changes..."
] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "2ce2768bde0d4957aca47e319134eeb7", "version_major": 2, "version_minor": 0 }, "text/plain": [ "interactive(children=(FloatSlider(value=5.0, description='serror', max=20.0), FloatSlider(value=10.0, descript…" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [
 "# Reset the running-average counters, then hook the comparison up to the sliders.\n",
 "ntries = 0\n",
 "nonlin_km_errsum = 0.0\n",
 "lin_km_errsum = 0.0\n",
 "interact(compareWithError, s=fixed(A[:,0]), v=fixed(A[:,1]),\n",
 "         serror=widgets.FloatSlider(min=0.0, max=20.0, step=0.1, value=5),\n",
 "         verror=widgets.FloatSlider(min=0.0, max=20.0, step=0.1, value=10));"
] }, { "cell_type": "markdown", "metadata": {}, "source": [
 "## Conclusions\n",
 "\n",
 "In general, why shouldn't you trust a linear regression performed on transformed data (log-scaled, reciprocal, or otherwise linearized)? The fundamental assumption of linear least-squares regression is that the errors in your measurements are normally distributed with mean 0 and constant variance. A nonlinear transformation of the data also transforms the error distribution, so those assumptions no longer hold, and the parameter estimates become unreliable. Certainly, there are cases where such a transformation is appropriate and a linear regression still works, but you must be very careful when using that approach. In general, you want to reach for a nonlinear least-squares fit.\n"
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 2 }