---
title: "Solving Ordinary Least Squares (OLS) Regression Using Matrix Algebra"
date: "2019-01-30"
output:
  html_document:
    highlight: textmate
    theme: lumen
    code_download: true
    toc: yes
    toc_float:
      collapsed: yes
      smooth_scroll: yes
---
*Tags:* Statistics R
In psychology, we typically learn to calculate OLS regression by solving for each coefficient separately. However, I recently learned how to solve for all of them at once using matrix algebra. Here is a brief tutorial on how to do this in R.
## R Packages

```{r}
packages <- c("tidyverse", "broom")
xfun::pkg_attach(packages, message = F)
```
## Dataset

```{r}
dataset <- carData::Salaries %>%
  select(salary, yrs.since.phd) %>%
  mutate(yrs.since.phd = scale(yrs.since.phd, center = T, scale = F))
```

```{r}
summary(dataset)
```

The `Salaries` dataset, from the `carData` package, contains the salaries of professors in the US for the 2008-2009 academic year. Let's say we are interested in determining whether professors who have held their Ph.D. for longer also tend to have higher salaries.
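Because `yrs.since.phd` was mean-centered above, its mean should now be (numerically) zero. A quick check:

```{r}
# The centered predictor should have a mean of (numerically) zero
mean(dataset$yrs.since.phd) %>%
  round(., 10)
```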
## Solve Using Matrix Algebra

### Design Matrix

The design matrix is just a matrix of all the predictors, which here includes the `intercept` (a column of 1s) and `yrs.since.phd`.

```{r}
x <- tibble(
  intercept = 1,
  yrs.since.phd = as.numeric(dataset$yrs.since.phd)
) %>%
  as.matrix()

head(x)
```
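For comparison, the same design matrix (with the intercept column labeled `(Intercept)`) can also be generated with base R's `model.matrix()`:

```{r}
# model.matrix() builds the design matrix from a formula and adds the
# intercept column automatically
head(model.matrix(salary ~ yrs.since.phd, data = dataset))
```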
### Dependent Variable

```{r}
y <- dataset$salary %>%
  as.matrix()

head(y)
```
### $X'X$

First, we need to solve for $X'X$, which is the transposed design matrix ($X'$) multiplied by the design matrix ($X$). Let's take a look at what $X'$ looks like.

```{r}
x_transposed <- t(x)
x_transposed[, 1:6]
```
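A quick dimension check shows why the product $X'X$ will be a $2 \times 2$ matrix: $X'$ is $2 \times 397$ and $X$ is $397 \times 2$.

```{r}
# (2 x 397) %*% (397 x 2) yields a 2 x 2 matrix
dim(x_transposed)
dim(x)
```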
After multiplication, the matrix contains the total number of participants ($n$ = 397; really, the sum of the intercept column), the sum of `yrs.since.phd` ($\Sigma(yrs.since.phd)$ = 0), and the sum of squared `yrs.since.phd` ($\Sigma(yrs.since.phd^2)$ = 65765.64). Because we first centered the `yrs.since.phd` variable, $\Sigma(yrs.since.phd)$ and $\Sigma(yrs.since.phd^2)$ are, respectively, the sum of deviations from the mean ($\Sigma(yrs.since.phd-M_{yrs.since.phd})$) and the sum of squared deviations ($\Sigma(yrs.since.phd-M_{yrs.since.phd})^2$).

```{r}
x_prime_x <- (x_transposed %*% x)

x_prime_x %>%
  round(., 2)
```
Let's verify this.

```{r}
colSums(x) %>%
  round(., 2)

colSums(x^2) %>%
  round(., 2)
```
### $(X'X)^{-1}$

$(X'X)^{-1}$ is the inverse matrix of $X'X$.

```{r}
x_prime_x_inverse <- solve(x_prime_x)
x_prime_x_inverse
```
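To confirm that `solve()` returned the inverse, multiplying $(X'X)^{-1}$ by $X'X$ should give the identity matrix (up to rounding error):

```{r}
# A matrix multiplied by its inverse returns the identity matrix
(x_prime_x_inverse %*% x_prime_x) %>%
  round(., 10)
```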
### $X'Y$

$X'Y$ contains the sum of $Y$ ($\Sigma Y$ = 45141464) and the sum of $XY$ ($\Sigma XY$ = 64801658).

```{r}
x_prime_y <- x_transposed %*% y
x_prime_y
```
Let's verify this.

```{r}
sum(y)
sum(x[, 2] * y)
```
### Coefficients

To obtain the coefficients, we multiply these last two matrices ($b = (X'X)^{-1}X'Y$).

```{r}
coef <- x_prime_x_inverse %*% x_prime_y
coef
```
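Because `yrs.since.phd` was centered, $X'X$ is diagonal, and the matrix solution reduces to the familiar textbook formulas: the intercept is $\bar{Y}$ and the slope is $\Sigma XY / \Sigma X^2$. A quick check:

```{r}
# With a single mean-centered predictor, the intercept is mean(Y) and
# the slope is sum(XY) / sum(X^2)
mean(y)
sum(x[, 2] * y) / sum(x[, 2]^2)
```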
### Standard Error

To calculate the standard errors, we multiply the inverse matrix $(X'X)^{-1}$ by the mean squared error (MSE) of the model and take the square root of the diagonal of the resulting matrix ($\sqrt{diag((X'X)^{-1} \cdot MSE)}$).
First, we need to calculate the $MSE$ of the model. The $MSE$ is calculated as usual: $MSE = \frac{\Sigma(Y-\hat{Y})^{2}}{n-p} = \frac{\Sigma(e^2)}{df}$, where $Y$ is the DV, $\hat{Y}$ is the predicted DV, $n$ is the total number of participants (or data points), and $p$ is the total number of variables in the design matrix (i.e., predictors, including the intercept).
To obtain the predicted values ($\hat{Y}$), we can also use matrix algebra by multiplying the design matrix by the coefficients ($\hat{Y} = Xb$).

```{r}
y_predicted <- x %*% coef
head(y_predicted)
```
Now that we have $\hat{Y}$, we can calculate the $MSE$.

```{r}
e <- y - y_predicted  # residuals
se <- sum(e^2)        # sum of squared errors (SSE)
n <- nrow(x)          # number of data points
p <- ncol(x)          # number of predictors, including the intercept
df <- n - p           # residual degrees of freedom
mse <- se / df

mse
```
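As a sanity check, this should match the squared residual standard error from an `lm()` fit (extracted with `sigma()`):

```{r}
# The squared residual standard error from lm() equals the MSE
sigma(lm(salary ~ yrs.since.phd, dataset))^2
```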
Then, we multiply $(X'X)^{-1}$ by the $MSE$.

```{r}
mse_coef <- x_prime_x_inverse * mse

mse_coef %>%
  round(., 2)
```
Then, we take the square root of the diagonal of that matrix to obtain the standard errors of the coefficients.

```{r}
rmse_coef <- sqrt(diag(mse_coef))

rmse_coef %>%
  round(., 2)
```
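The matrix we just built, $(X'X)^{-1} \cdot MSE$, is the variance-covariance matrix of the coefficients, so the same standard errors can be recovered from `vcov()` applied to an `lm()` fit:

```{r}
# Standard errors from lm()'s coefficient variance-covariance matrix
sqrt(diag(vcov(lm(salary ~ yrs.since.phd, dataset)))) %>%
  round(., 2)
```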
### *t*-Statistic

The *t*-statistic is just the coefficient divided by its standard error.

```{r}
t_statistic <- as.numeric(coef) / as.numeric(rmse_coef)
t_statistic
```
### *p*-Value

We want the probability of obtaining a *t*-statistic at least as extreme as the one observed, not the other way around, so we set `lower.tail` to `FALSE` (using the absolute value of *t*). We also multiply by 2 to obtain a two-tailed test.

```{r}
p_value <- 2 * pt(abs(t_statistic), df, lower.tail = FALSE)
p_value
```
### Summary

```{r}
tibble(
  term = colnames(x),
  estimate = as.numeric(coef),
  std.error = as.numeric(rmse_coef),
  statistic = as.numeric(t_statistic),
  p.value = as.numeric(p_value)
)
```
## Solve Using `lm` Function

```{r}
lm(salary ~ yrs.since.phd, dataset) %>%
  tidy()
```