{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Design of Experiments\n", "====\n", "\n", "## Unit 18, Lecture 1\n", "\n", "*Numerical Methods and Statistics*\n", "\n", "----\n", "\n", "\n", "#### Prof. Andrew White, April 30, 2019" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Goals\n", "\n", "1. Know the vocubulary (treatment condition, factor, level, response, ANOVA, coding, factorial design, interaction, confound, grand mean, nuisance factor, blocking)\n", "2. Know that design of experiments and its analysis is for seeing what factors affect a response, not necessarily getting good regression models\n", "3. Recognize that design of experiments analysis is based on linear regression and hypothesis tests\n", "4. Be able to read and interpret an ANOVA table\n", "5. Be able to read and create a table of experiments following factorial or other designs\n", "6. Understand how to treat unkown nuisance factors (randomize experiment order) and known nuisance factors (blocking)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Design of experiments**: (Wikipedia)\n", "> The design of experiments is the design of any task that aims to describe or explain the variation of information under conditions that are hypothesized to reflect the variation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Table of Experiments" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see an example of a design of experiments table:\n", "\n", "| TC | $X_1$ | $X_2$ | $Y$ |\n", "| --- |: ---- | :----:| ---:|\n", "| 1 | +1 | +1 |$y_1$| \n", "| 2 | +1 | -1 |$y_2$| \n", "| 3 | -1 | +1 |$y_3$| \n", "| 4 | -1 | -1 |$y_4$| " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* **TC** Treatment Condition\n", "* $X$ A factor\n", "* +1 The factor *level*\n", "* $Y$ The response\n", "* The use of +1,-1 is called the coding" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This table shows a 2 factor, 2 level design of experiments that has 4 treatment conditions." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Factor Levels\n", "\n", "What is the meaning of the +1, -1? We do design of experiments to see if factors affect something. For example, our response might be the concentration of a chemical species and factors could be temperature and pressure. Because there are many temperatures to test, we might just only consider two temperature: hot and cold. This can be coded as levels: -1, +1. This is often done because there are standard analysis equations that would with integer levels, especially with two levels.\n", "\n", "If we regress against these integer levels, the regression coefficients aren't really meaningful. Instead, we care about p-values. That is, we care about discovering if certain factors affect our response. This will allow to say \"temperature affects the concentration\" or \"pressure does not affect concentration\"." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Replicates\n", "\n", "Note that our experimental design doesn't include replicates. The design of experiments is meant to be as efficient as possible. Note that here we're trying to see what matters, and not trying to get an accurate regression model. If you want to do regression for accuracy, then you should include replicates and work with actual factor values instead of levels. 
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Connecting to Categorical Regression\n", "\n", "We saw in unit 12, lecture 3 how to treat discrete data like this. Let's try regressing it! The data is 2 dimensional, so we will use 2 dimensional least squares. Should we include an intercept? Yes! One way to include is it to compute the **grand mean** from all responses so that they are centered at 0. Then the intercept will be 0. You should know this is commonly done, but we won't do this for our analysis. We'll just use a regular intercept as we saw in our regression unit." ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import statsmodels.api as sm\n", "import scipy.stats as ss\n", "import numpy.linalg as linalg\n", "\n", "x1 = [1, 1, -1, -1]\n", "x2 = [1, -1, 1, -1]\n", "y = [1.2, 3.2, 4.1, 3.6]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll use multidimensional ordinary least squares with an intercept:" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[ 1., 1., 1.],\n", " [ 1., 1., -1.],\n", " [ 1., -1., 1.],\n", " [ 1., -1., -1.]])" ] }, "execution_count": 36, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x_mat = np.column_stack((np.ones(4), x1, x2))\n", "x_mat" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll compute our coefficients and their standard error" ] }, { "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[0.625 0. 0. ]\n", " [0. 0.625 0. ]\n", " [0. 0. 0.625]] 1.25\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:1: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.\n", "To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.\n", " \"\"\"Entry point for launching an IPython kernel.\n" ] } ], "source": [ "beta, *_ = linalg.lstsq(x_mat, y)\n", "y_hat = x_mat @ beta\n", "resids = (y - y_hat)\n", "SSR = np.sum(resids**2)\n", "se2_epsilon = SSR / (len(x) - len(beta))\n", "se2_beta = se2_epsilon * linalg.inv(x_mat.transpose() @ x_mat)\n", "print(np.sqrt(se2_beta), np.sqrt(se2_epsilon))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can compute p-values and confidence intervals:" ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "df = 1\n", "beta_0 is 3.02 +/- 7.94 with 95% confidence. p-value: 0.13 (T = -4.84)\n", "beta_1 is -0.83 +/- 7.94 with 95% confidence. p-value: 0.41 (T = -1.32)\n", "beta_2 is -0.38 +/- 7.94 with 95% confidence. 
, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import statsmodels.api as sm\n", "import scipy.stats as ss\n", "import numpy.linalg as linalg\n", "\n", "x1 = [1, 1, -1, -1]\n", "x2 = [1, -1, 1, -1]\n", "y = [1.2, 3.2, 4.1, 3.6]" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We'll use multidimensional ordinary least squares with an intercept:" ] },
{ "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[ 1., 1., 1.],\n", " [ 1., 1., -1.],\n", " [ 1., -1., 1.],\n", " [ 1., -1., -1.]])" ] }, "execution_count": 36, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x_mat = np.column_stack((np.ones(4), x1, x2))\n", "x_mat" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We'll compute our coefficients and their standard errors:" ] },
{ "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[0.625 0. 0. ]\n", " [0. 0.625 0. ]\n", " [0. 0. 0.625]] 1.25\n" ] } ], "source": [ "# rcond=None uses the new default and silences the lstsq FutureWarning\n", "beta, *_ = linalg.lstsq(x_mat, y, rcond=None)\n", "y_hat = x_mat @ beta\n", "resids = (y - y_hat)\n", "SSR = np.sum(resids**2)\n", "# residual degrees of freedom: observations minus fitted coefficients\n", "se2_epsilon = SSR / (len(y) - len(beta))\n", "se2_beta = se2_epsilon * linalg.inv(x_mat.transpose() @ x_mat)\n", "print(np.sqrt(se2_beta), np.sqrt(se2_epsilon))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Now we can compute p-values and confidence intervals:" ] },
{ "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "df = 1\n", "beta_0 is 3.02 +/- 7.94 with 95% confidence. p-value: 0.13 (T = -4.84)\n", "beta_1 is -0.83 +/- 7.94 with 95% confidence. p-value: 0.41 (T = -1.32)\n", "beta_2 is -0.38 +/- 7.94 with 95% confidence. p-value: 0.66 (T = -0.60)\n" ] } ], "source": [ "df = len(y) - len(beta)\n", "print('df = ', df)\n", "for i in range(len(beta)):\n", "    # get our T-value for the confidence interval\n", "    T = ss.t.ppf(0.975, df)\n", "    # get the width of the confidence interval using our previously computed standard error\n", "    cwidth = T * np.sqrt(se2_beta[i,i])\n", "    # T statistic for the null hypothesis that the coefficient is zero\n", "    hypothesis_T = -abs(beta[i]) / np.sqrt(se2_beta[i,i])\n", "    # two-sided p-value from the t distribution\n", "    p = 2 * ss.t.cdf(hypothesis_T, df)\n", "    print(f'beta_{i} is {beta[i]:.2f} +/- {cwidth:.2f} with 95% confidence. p-value: {p:.2f} (T = {hypothesis_T:.2f})')" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "So with only 1 degree of freedom, none of the terms are significant at the 95% level; even the intercept only has p = 0.13. Judged from the p-values, factor 1 appears more important than factor 2." ] }
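, { "cell_type": "markdown", "metadata": {}, "source": [ "The width of those confidence intervals is driven by the single residual degree of freedom. Here is a quick sketch of how extreme the 95% t-value is at df = 1:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import scipy.stats as ss\n", "\n", "# the two-sided 95% t-value explodes as df shrinks\n", "print(ss.t.ppf(0.975, 1))   # about 12.7, which is why our intervals are +/- 7.94\n", "print(ss.t.ppf(0.975, 10))  # about 2.2, for comparison" ] }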
, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using Statsmodels for Regression" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We're going to use a new library for regression in this unit because of its ability to do an ANOVA analysis. We'll learn about ANOVA below, but let's first repeat the above regression with this tool. Creating a statsmodels model requires two ingredients: data and a formula. The formula is a string that matches your regression model; in this case we use `y ~ x1 + x2`. The `~` plays the role of the equals sign. The data should be a dictionary whose keys match the variable names you used in your formula, so that `data['y']` gives the `y` vector. The statsmodels regression is created by calling `ols`, and then we must call `fit()` to do the regression and `summary()` to get a report on the results. " ] },
{ "cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/opt/conda/lib/python3.7/site-packages/statsmodels/stats/stattools.py:72: ValueWarning: omni_normtest is not valid with less than 8 observations; 4 samples were given.\n", " \"samples were given.\" % int(n), ValueWarning)\n" ] }, { "data": { "text/plain": [ "\n", "\"\"\"\n", " OLS Regression Results \n", "==============================================================================\n", "Dep. Variable: y R-squared: 0.678\n", "Model: OLS Adj. R-squared: 0.033\n", "Method: Least Squares F-statistic: 1.051\n", "Date: Thu, 02 May 2019 Prob (F-statistic): 0.568\n", "Time: 08:52:10 Log-Likelihood: -3.7957\n", "No. Observations: 4 AIC: 13.59\n", "Df Residuals: 1 BIC: 11.75\n", "Df Model: 2 \n", "Covariance Type: nonrobust \n", "==============================================================================\n", " coef std err t P>|t| [0.025 0.975]\n", "------------------------------------------------------------------------------\n", "Intercept 3.0250 0.625 4.840 0.130 -4.916 10.966\n", "x1 -0.8250 0.625 -1.320 0.413 -8.766 7.116\n", "x2 -0.3750 0.625 -0.600 0.656 -8.316 7.566\n", "==============================================================================\n", "Omnibus: nan Durbin-Watson: 2.000\n", "Prob(Omnibus): nan Jarque-Bera (JB): 0.667\n", "Skew: 0.000 Prob(JB): 0.717\n", "Kurtosis: 1.000 Cond. No. 1.00\n", "==============================================================================\n", "\n", "Warnings:\n", "[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n", "\"\"\"" ] }, "execution_count": 42, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from statsmodels.formula.api import ols\n", "x1 = [1, 1, -1, -1]\n", "x2 = [1, -1, 1, -1]\n", "y = [1.2, 3.2, 4.1, 3.6]\n", "data = {'x1': x1, 'x2': x2, 'y': y}\n", "\n", "model = ols('y ~ x1 + x2', data=data).fit()\n", "model.summary()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Interpreting Statsmodels" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "This regression summary has a huge amount of information. The top table includes information about the goodness of fit and the regression model, like the degrees of freedom and what the dependent variable is. The middle table contains information about the regression coefficients, including confidence intervals and p-values; these match our manual calculation above. The final table contains information about the residuals. The Jarque-Bera test is a normality test, like the Shapiro-Wilk test we learned previously. " ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "# ANOVA\n", "\n", "One of the most common analysis techniques for a design of experiments is the use of an **Analysis of Variance** (ANOVA). An ANOVA breaks up the response variance into factor variances; it explains where the variance in the response comes from. We aren't going to go deeply into the theory of ANOVA, but it's important that you know how it's used and how to interpret it. An ANOVA is based on a linear regression like the one above, but it's a different way of computing p-values. The p-values are the most relevant output of an ANOVA." ] }
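, { "cell_type": "markdown", "metadata": {}, "source": [ "To see what \"breaking up the variance\" means, here is a quick numerical check (a sketch using the numbers from this example):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "y = np.array([1.2, 3.2, 4.1, 3.6])\n", "total_ss = np.sum((y - np.mean(y))**2)\n", "# 4.8475 = 2.7225 (x1) + 0.5625 (x2) + 1.5625 (residual) in the table below\n", "print(total_ss)" ] }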
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
dfsum_sqmean_sqFPR(>F)
x11.02.72252.72251.74240.412741
x21.00.56250.56250.36000.655958
Residual1.01.56251.5625NaNNaN
\n", "
" ], "text/plain": [ " df sum_sq mean_sq F PR(>F)\n", "x1 1.0 2.7225 2.7225 1.7424 0.412741\n", "x2 1.0 0.5625 0.5625 0.3600 0.655958\n", "Residual 1.0 1.5625 1.5625 NaN NaN" ] }, "execution_count": 51, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sm.stats.anova_lm(model)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The ANOVA test gives information about each factor. The df is the degrees of freedom used to model each factor, the sum_sq is difference between the grand mean response and mean response of the treatment, the mean_sq is the sum_sq divided by the degrees of freedom, the F is an F-test statistic (like a T statistic from a t-test), and the final column contains p-value for the existence of each treatment. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### F-test\n", "\n", "The F-test is an alternative to the t-tests we do for regression coefficients being non-zero. The F-test is a little bit different than a t-test. One important idea of an F-test is that when we consider regression coefficents, we imagine our null model as being nested within the model we're considering. That means that the null hypothesis, the regression coefficient is zero, is a special case of the model we're considering where the regression coefficient is non-zero. An example of models that are not nested would be comparing using a $\\beta \\sin x$ vs $\\beta x^2$. There is no obvious way to nest one of these models in the other to create a null hypothesis. Notice that if you imagine the F-test exactly the same as the t-test (null is regression coefficient being 0), then you'll always have nested models." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Designing Experiments\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## One at a time (bad example)\n", "\n", "One common choice for designing experiments might be *one at a time*. In this approach you vary each treatment once. Let's see an example. Say you want to know how water, sun, and playing music affects plant growth. A one at a time design would look like this:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "| TC | Water | Sun | Music | Plant Growth |\n", "| --- |: ---- | :----:| :---: | ---:|\n", "| 1 | 0 | 0 | 0 |$y_1$| \n", "| 2 | 1 | 0 | 0 |$y_2$| \n", "| 3 | 0 | 1 | 0 |$y_3$| \n", "| 4 | 0 | 0 | 1 |$y_4$| " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that we have switched our level coding to $0$ and $1$. The choice is arbitrary, but it demonstrates better the idea of one at a time design. What is wrong with this design? \n", "\n", "Water and sun will never be active at the same time, meaning that all of our experiments will not actually have plant growth. As we discussed in unit 12, lecture 3, this means we're missing interactions. Look at the model equation we assume with one at a time:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "$$\n", "y = \\beta_w x_w + \\beta_s x_s + \\beta_m x_m \\ldots + \\epsilon\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is missing those interactions, like how the system changes when both water and sun are given to the plant. 
, { "cell_type": "markdown", "metadata": {}, "source": [ "## Factorial Analysis Example\n", "\n", "Let's consider the following example data. The plant growth is in grams. We have one replicate at each condition in this example, so each treatment condition is measured twice." ] }, { "cell_type": "code", "execution_count": 57, "metadata": {}, "outputs": [ { "data": { 
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
sum_sqdfFPR(>F)
xw20.9306251.01339.563.404548e-10
xs22.3256251.01428.842.633625e-10
xm0.0056251.00.365.651101e-01
xw:xs21.8556251.01398.762.866344e-10
xw:xm0.0506251.03.241.095530e-01
xs:xm0.0306251.01.961.990794e-01
xw:xm:xs0.0156251.01.003.465935e-01
Residual0.1250008.0NaNNaN
\n", "
" ], "text/plain": [ " sum_sq df F PR(>F)\n", "xw 20.930625 1.0 1339.56 3.404548e-10\n", "xs 22.325625 1.0 1428.84 2.633625e-10\n", "xm 0.005625 1.0 0.36 5.651101e-01\n", "xw:xs 21.855625 1.0 1398.76 2.866344e-10\n", "xw:xm 0.050625 1.0 3.24 1.095530e-01\n", "xs:xm 0.030625 1.0 1.96 1.990794e-01\n", "xw:xm:xs 0.015625 1.0 1.00 3.465935e-01\n", "Residual 0.125000 8.0 NaN NaN" ] }, "execution_count": 57, "metadata": {}, "output_type": "execute_result" } ], "source": [ "xw = [0, 1, 0, 0, 1, 0, 1, 1]\n", "xs = [0, 0, 1, 0, 1, 1, 0, 1]\n", "xm = [0, 0, 0, 1, 0, 1, 1, 1]\n", "y = [0.4, 0.3, 0.3, 0.2, 4.6, 0.3, 0.2, 5.2, 0.3, 0.2, 0.4, 0.3, 5.0, 0.3, 0.3, 5.0]\n", "\n", "# we do xw + xw because we have 2 replicates at each condition\n", "data = {'xw': xw + xw, 'xs': xs + xs, 'xm': xm + xm, 'y': y}\n", "\n", "model = ols('y~xw + xs + xm + xw * xs + xw * xm + xs * xm + xw * xm * xs', data=data).fit()\n", "sm.stats.anova_lm(model, typ=2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Fractional Factorial\n", "\n", "Due to how many treatment conditions are required for factorial, often people will neglect some of the interaction effects and leave them as **confounded**. For example, if you have 3 levels and 5 factors, there will be 11 treatment conditions measuring the treatment effects without interaction and 232 for measuring interactions. If we study only a fraction of these, we can greatly reduce the number of experiments. The choice of how to reduce the number of experiments is a complex topic, but essentially any design between one at a time and factorial is fractional factorial" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Nuisance Factors\n", "\n", "Often we have factors that are known but not interesting. For example, we might be conducting experiments on Monday and Wednesday. That is a factor, but not one we're interested in. A common example is gender in drug trails. We know gender may effect response to a drug, but we don't want to study it. There are a variety of ways to deal with these." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Unquantifiable Nuisance Factors\n", "\n", "To remove unknown nuisance factors, or nuisance factors that are hard to quantify, you can randomize your order of experiments. This gives some robustness to the possibility of unkown nuisance factors affecting your experiments. For example, if you get better at an experiment so that you conduct it more accurately and precise as time goes on, this is a hard to quantify nuisance factor. If you randomize your order though, this will mean that you will not have your accuracy indirectly affecting your conclusion about other treatment conditions. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Quantifiable Nuisance Factors\n", "\n", "For nuisance factors which you can measure and sort into levels, you can use **blocking** to remove their effect. Blocking means arranging your factors so that in each block you have the same nuisance factors. \n", "\n", "For example, if I want to do a drug trial I need to two blocks: the control and the block given the drug. If I want to remove gender, I will make each block have equal numbers of the two genders. Let's saw you have 12 participants and 8 are women. Block I would be 4 women and 2 men. Block II would be 4 women and 2 men. 
My experiment would look like:\n", "\n", "\n", "| TC | Block | $X$ | $Y$ |\n", "| --- |: ---- | :----:| ---:|\n", "| 1 | I | 0 |$y_1$| \n", "| 2 | II | 1 |$y_2$|" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This blocking means that the nuisance factor of gender does not vary when other factors vary. " ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.1" } }, "nbformat": 4, "nbformat_minor": 2 }