--- title: "Lab: linear regression" format: html --- ## 0. Data Execute the following code chunk to load the heart disease dataset we worked with before in previous labs ```{r} #| message: false library(tidyverse) heart_disease <- read_csv("https://raw.githubusercontent.com/36-SURE/2024/main/data/heart_disease.csv") ``` This dataset consists of 788 heart disease patients (608 women, 180 men). Your goal is to predict the `Cost` column, which corresponds to the patient's total cost of claims by subscriber (i.e., `Cost` is the response variable). You have access to the following explanatory variables: * `Age`: Age of subscriber (years) * `Gender`: Gender of subscriber * `Interventions`: Total number of interventions or procedures carried out * `Drugs`: Categorized number of drugs prescribed: `0` if none, `1` if one, `2` if more than one * `ERVisit`: Number of emergency room visits * `Complications`: Whether or not the subscriber had complications: `1` if yes, `0` if no * `Comorbidities`: Number of other diseases that the subscriber had * `Duration`: Number of days of duration of treatment condition ## 1. EDA Spend time exploring the dataset, to visually assess which of the **explanatory** variables listed above is most associated with our response `Cost`. Create scatterplots between the response and each **continuous** explanatory variable (either `Interventions`, `ERVist`, `Comorbidities`, or `Duration`). **Does any of the relationships appear to be linear?** Describe the direction and strength of the association between the explanatory and response variables. In your opinion, **which of the possible continuous explanatory variables displays the strongest relationship with cost**? ## 2. Fit a simple linear model Now that you've performed some EDA, it's time to actually fit some linear models to the data. Start the variable you think displays the strongest relationship with the response variable. **Update the following code by replacing `INSERT_VARIABLE` with your selected variable, and run to fit the model**: ```{r} #| eval: false init_cost_lm <- lm(Cost ~ INSERT_VARIABLE, data = heart_disease) ``` Before looking at the model summary, **you need to check the diagnostics** to see if it meets the necessary assumptions. To do this you can try running `plot(init_nba_lm)` in the console (what happens?). Equivalently, another way to make the same plots but with `ggplot2` perks is with the [`ggfortify`](https://cran.r-project.org/web/packages/ggfortify/vignettes/plot_lm.html) package by running the following code: ```{r} #| eval: false library(ggfortify) # install.packages("ggfortify") init_cost_lm |> autoplot() + theme_light() ``` The first plot is **residuals vs. fitted**: this plot should NOT display any clear patterns in the data, no obvious outliers, and be symmetric around the horizontal line at zero. The smooth line provided is just for reference to see how the residual average changes. **Do you see any obvious patterns in your plot for this model?** The second plot is a Q-Q plot ([see page 93 for more details](http://www.stat.cmu.edu/~cshalizi/TALR/TALR.pdf)). Without getting too much into the math behind them, **the closer the observations are to the dashed reference line, the better your model fit is.** It is bad for the observations to diverge from the dashed line in a systematic way; that means we are violating the assumption of normality discussed in lecture. 
**How do your points look relative to the dashed reference line?**

The third plot looks at the square root of the absolute value of the standardized residuals. We want to check for homoskedasticity of the errors (equal, constant variance). **If we did have constant variance, what would we expect to see?** **What does your plot look like?**

The fourth plot is residuals vs. leverage, which helps us identify **influential** points. **Leverage** quantifies the influence the observed response for a particular observation has on its predicted value, i.e., if the leverage is small then the observed response has a small role in the value of its predicted response, while a large leverage indicates the observed response plays a large role in the predicted response. It's a value between 0 and 1, where the sum of all leverage values equals the number of coefficients (including the intercept). Specifically, the leverage for observation $i$ in a simple linear regression is computed as

$$
h_{ii} = \frac{1}{n} + \frac{(x_i - \bar{x})^2}{\sum_{j=1}^n (x_j - \bar{x})^2}
$$

where $\bar{x}$ is the average value for variable $x$ across all observations. [See page 191 for more details on leverage and the regression hat matrix](http://www.stat.cmu.edu/~cshalizi/TALR/TALR.pdf).

We're looking for points in the upper right or lower right corners, where dashed lines for [Cook's distance](https://en.wikipedia.org/wiki/Cook%27s_distance) values would indicate potential outlier points that are displaying too much influence on the model results. **Do you observe any such influential points in the upper or lower right corners?**

**What is your final assessment of the diagnostics? Do you believe all assumptions are met? Are there any potential outlier observations to remove?**

## 3. Transform the `Cost` variable

An obvious result from looking at the residual diagnostics above is that we are clearly violating the normality assumption. **Why do you think we're violating this assumption?** (HINT: Display a histogram of the `Cost` variable.)

One way of addressing this concern is to apply a transformation to the response variable, in this case `Cost`. A common transformation for any type of dollar amount is the `log()` transformation. Run the following code chunk to create a new `log_cost` variable that we will use for the remainder of the lab.

```{r}
#| eval: false
heart_disease <- heart_disease |>
  mutate(log_cost = log(Cost + 1))
```

**Why did we need to `+ 1` before taking the `log()`?** (HINT: Look at the minimum of `Cost`.)

Now make another histogram, this time for the new `log_cost` variable. What happened to the distribution?

## 4. Assess the model summary

Now fit the same model as before, but with `log_cost` as the response. **Update the following code by replacing `INSERT_VARIABLE` with your selected variable, and run to fit the model**:

```{r}
#| eval: false
log_cost_lm <- lm(log_cost ~ INSERT_VARIABLE, data = heart_disease)
```

Interpret the results of this model using the `tidy()` function from the `broom` package (or the `summary()` function). **Do you think there is sufficient evidence to reject the null hypothesis that the coefficient is 0? What is the interpretation of the $R^2$ value?**

Compare the square root of the raw (unadjusted) $R^2$ of your linear model to the correlation between that explanatory variable and the response using the `cor()` function.
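For example, if `Duration` happened to be your chosen explanatory variable (swap in whichever variable you actually picked), the comparison could look like the following sketch:

```{r}
#| eval: false
# square root of the model's R^2 (Duration is just a placeholder choice here)
sqrt(summary(log_cost_lm)$r.squared)
# correlation between the chosen explanatory variable and the response
cor(heart_disease$Duration, heart_disease$log_cost)
```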
**What do you notice?**

To assess the fit of a linear model, we can also plot the predicted values vs. the actual values, to see how closely our predictions align with reality and to decide whether our model is making any systematic errors. Execute the following code chunk to show the actual log(`Cost`) against our model's predictions:

```{r}
#| eval: false
heart_disease |>
  mutate(model_preds = predict(log_cost_lm)) |>
  ggplot(aes(x = model_preds, y = log_cost)) +
  geom_point(alpha = 0.75) +
  geom_abline(slope = 1, intercept = 0, linetype = "dashed", color = "red") +
  labs(x = "Predictions", y = "Observed log(Cost + 1)") +
  theme_light()
```

## 5. Include multiple covariates

Repeat steps 2 and 3 above, but include more than one variable in your model. You can easily do this in the `lm()` function by adding another variable to the formula with the `+` operator like so (but just replace the `INSERT_VARIABLE` parts):

```{r}
#| eval: false
multi_cost_lm <- lm(log_cost ~ INSERT_VARIABLE_1 + INSERT_VARIABLE_2, data = heart_disease)
```

**Experiment with different sets of the continuous variables.** Which set of continuous variables do you think models log(`Cost`) best? (Remember to use the **adjusted $R^2$** when comparing models that have different numbers of variables.)

Beware of collinearity! Load the `car` library (install it if necessary!) and use the `vif()` function to check for possible (multi)collinearity. The `vif()` function computes the **variance inflation factor (VIF)** for predictor $x_j$ with $j \in 1, \dots, p$ as

$$
\text{VIF}_j = \frac{1}{1 - R^2_j}
$$

where $R^2_j$ is the $R^2$ from a regression with $x_j$ as the response and the other $p - 1$ predictors as the explanatory variables. VIF values close to 1 indicate the variable is not correlated with the other predictors, while VIF values over 5 indicate a strong presence of collinearity. If present, remove a variable with VIF over 5 and redo the fit. Repeat this process until the `vif()` outputs are all less than 5. The following code chunk displays an example of using this function:

```{r}
#| eval: false
library(car) # install.packages("car")
vif(multi_cost_lm)
```

## 6. Linear model with one categorical variable

Run the following code to fit a model using only the `Gender` variable:

```{r}
#| eval: false
gender_cost_lm <- lm(log_cost ~ Gender, data = heart_disease)
```

Next, use the following code to first create a column called `model_preds` containing the predictions of the model above, then display the predictions of this model against the actual `log_cost`, faceted by the patient's gender:

```{r}
#| eval: false
heart_disease |>
  mutate(model_preds = predict(gender_cost_lm)) |>
  ggplot(aes(x = log_cost, y = model_preds)) +
  geom_point(alpha = 0.5) +
  facet_wrap(~ Gender, ncol = 2) +
  labs(x = "Actual log(Cost + 1)", y = "Predicted log(Cost + 1)") +
  theme_light()
```

As the figure above shows, by including a categorical variable **we are changing the intercept of our regression line**. To make this clearer, view the output of the summary:

```{r}
#| eval: false
summary(gender_cost_lm)
```

**Notice how only one coefficient is provided in addition to the intercept.** This is because, by default, `R` turns a categorical variable with $m$ levels (e.g., we have 2 genders in this dataset) into $m - 1$ indicator variables (binary variables taking the value 1 if the observation is in that level and 0 otherwise), each measured relative to a **baseline level**. In this example, `R` has created an indicator for one gender: `Male`.
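If you want to see this indicator coding directly, one optional check (not required for the lab) is to print the contrast matrix that `R` builds for the factor version of `Gender`:

```{r}
#| eval: false
# rows are the levels of Gender; the single column is the 0/1 indicator R creates
contrasts(factor(heart_disease$Gender))
```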
By default, `R` will use alphabetical order to determine the baseline category, which in this example is the gender `Female`. The values for the coefficient estimates indicate the expected change in the response variable relative to the baseline. In other words, **the intercept term gives us the baseline's average y**, e.g., the average log(`Cost`) for female patients. This matches what you displayed in the predictions against observed `log_cost` scatterplots by `Gender` above.

**Beware the default baseline `R` picks for categorical variables!** We typically want to choose the baseline level to be the group **with the most observations**. In this example, `Female` has the most observations, so the default was appropriate. But in general, we can change the reference level by modifying the `factor` levels of the categorical variables (similar to how we reorder things in `ggplot2`). For example, we can use the following code to modify the `Gender` variable so that `Male` is the baseline (we use `fct_relevel()` to update `Gender` so that `Male` is the first factor level; we do not need to modify the order of the remaining levels):

```{r}
#| eval: false
heart_disease <- heart_disease |>
  mutate(Gender = fct_relevel(Gender, "Male"))
```

**Refit the linear regression model using `Gender` above. How has the summary changed?**

After you refit the model above, change the reference level back to `Female` with the following code:

```{r}
#| eval: false
heart_disease <- heart_disease |>
  mutate(Gender = fct_relevel(Gender, "Female"))
```

## 7. Linear model with one categorical and one continuous variable

Pick a single continuous variable from your EDA above, use it to replace `INSERT_VARIABLE` below, then run the code to fit a model with `Gender` included:

```{r}
#| eval: false
x_gender_cost_lm <- lm(log_cost ~ Gender + INSERT_VARIABLE, data = heart_disease)
```

**Create scatterplots with your predictions on the y-axis, your INSERT_VARIABLE on the x-axis, and color by `Gender`**. What do you observe?

## 8. Collapsing categorical variables

Another categorical variable we have access to is `Drugs`, which is currently coded as numeric. We can first use the `fct_recode()` function to modify the `Drugs` variable so that the integers are relabeled:

```{r}
#| eval: false
heart_disease <- heart_disease |>
  mutate(Drugs = fct_recode(as.factor(Drugs),
                            "None" = "0",
                            "One" = "1",
                            "> One" = "2"))
```

Run the following code to fit a model using only the `Drugs` variable:

```{r}
#| eval: false
drugs_cost_lm <- lm(log_cost ~ Drugs, data = heart_disease)
```

Repeat the same steps from above that you performed for the `Gender` variable: view the predictions faceted by `Drugs` and examine the model summary (with `broom::tidy()` or `summary()`). **Do you think an appropriate reference level was used?** (HINT: Use the `table()` function on the `Drugs` variable to view the overall frequency of each level and determine if the most frequent level was used as the reference.)

Given the similar values, we may decide to collapse the levels `One` and `> One` into a single level `>= One`. We can easily collapse the levels together into a smaller number of categories using `fct_collapse()`:

```{r}
#| eval: false
heart_disease <- heart_disease |>
  mutate(drugs_group = fct_collapse(Drugs,
                                    "None" = "None",
                                    ">= One" = c("One", "> One")))
```

**Refit the model with this new `drugs_group` variable**, but assign it to a different name, e.g. `drugs_group_cost_lm`. What changed in the summary?
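As an optional sanity check, you can cross-tabulate the original and collapsed versions of the variable to confirm that the levels were grouped as intended:

```{r}
#| eval: false
# each original Drugs level should fall entirely within one drugs_group level
table(heart_disease$Drugs, heart_disease$drugs_group)
```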
## 9. Interactions

Remember that with `ggplot2` you can directly compute and plot the results from running linear regression using `geom_smooth()` or `stat_smooth()` and specifying that `method = "lm"`. Try running the following code (replace `INSERT_VARIABLE`) to generate the linear regression fits with `geom_smooth()` versus your own model's predictions (note the different `y` mapping for the point versus smooth layers):

```{r}
#| eval: false
heart_disease |>
  mutate(model_preds = predict(x_gender_cost_lm)) |>
  ggplot(aes(x = INSERT_VARIABLE, color = Gender)) +
  geom_point(aes(y = model_preds), alpha = 0.5) +
  geom_smooth(aes(y = log_cost), method = "lm") +
  facet_wrap(~ Gender, ncol = 3) +
  labs(x = "INSERT YOUR LABEL HERE", y = "Predicted log(Cost + 1)") +
  theme_light()
```

**The `geom_smooth()` regression lines do NOT match!** This is because `ggplot2` is fitting **separate regressions for each gender**, meaning the slope for the continuous variable on the x-axis is allowed to change for each gender. We can match the output of the `geom_smooth()` results with **interactions**.

We can use interaction terms to build more complex models. Interaction terms allow a different linear model to be fit for each category; that is, they allow for different slopes across different categories. If we believe the relationships between the continuous variables and the outcome differ across categories, we can use interaction terms to better model these relationships. To fit a model with an interaction term between two variables, include the interaction via the `*` operator like so:

```{r}
#| eval: false
gender_int_cost_lm <- lm(log_cost ~ Gender * INSERT_VARIABLE, data = heart_disease)
```

**Replace the predictions in the previous plot's `mutate` code with this interaction model's predictions.** How do they compare to the results from `geom_smooth()` now?

You can model interactions between any types of variables using the `*` operator; feel free to experiment with your different possible continuous variables.

## 10. Polynomials

Another way to increase the explanatory power of your model is to include transformations of continuous variables. For instance, you can directly create a column that is the square of a variable with `mutate()` and then fit the regression with the original variable and its squared term:

```{r}
#| eval: false
heart_disease <- heart_disease |>
  mutate(duration_squared = Duration ^ 2)
squared_duration_lm <- lm(log_cost ~ Duration + duration_squared, data = heart_disease)
summary(squared_duration_lm)
```

**What are some difficulties with interpreting this model fit?** View the predictions for this model or other covariates you squared.

## 11. Training and testing

As we've seen, using transformations such as higher-order polynomials may decrease the interpretability and increase the potential for overfitting associated with our models; however, they can also dramatically improve the explanatory power.

We need a way of making sure our more complicated models have not overfit to the noise present in our data. Another way of saying this is that a good model should generalize to a different sample than the one on which it was fit. This intuition motivates the idea of training/testing. We split our data into two parts, use one part -- the training set -- to fit our models, and the other part -- the testing set -- to evaluate our models. Any model which happens to fit to the noise present in our training data should perform poorly on our testing data.

The first thing we will need to do is split our sample.
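One small practical note: `slice_sample()` draws a random sample, so the train/test split (and therefore the test MSE values computed later) will change every time you re-run the code. If you want your results to be reproducible, you can set a seed before splitting, for example:

```{r}
#| eval: false
# make the random train/test split reproducible (36 is an arbitrary choice)
set.seed(36)
```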
Run the following code chunk to divide our data into two halves, which we will refer to as a training set and a test set. Briefly summarize what each line in the code chunk is doing.

```{r}
#| eval: false
heart_train <- heart_disease |>
  slice_sample(prop = 0.5, replace = FALSE)
heart_test <- heart_disease |>
  anti_join(heart_train)
```

We will now compare three candidate models for predicting `log_cost` using `Gender` and `Duration`. We will fit these models on the **training data** only, ignoring the testing data for the moment. Run the two code chunks below to create the first two candidate models:

```{r}
#| eval: false
# model with interaction term
candidate_model_1 <- lm(log_cost ~ Gender * poly(Duration, 2, raw = TRUE), data = heart_train)
```

```{r}
#| eval: false
# model with no interaction term
candidate_model_2 <- lm(log_cost ~ Gender + poly(Duration, 2, raw = TRUE), data = heart_train)
```

(Note: The `poly()` function is useful for getting higher-order polynomial transformations of variables. And here's an [explanation](https://stackoverflow.com/questions/29999900/poly-in-lm-difference-between-raw-vs-orthogonal) of the `raw` argument.)

Using `broom::glance()` or `summary()`, which of these models has more explanatory power according to the training data? Which of the models is less likely to overfit?

**Fit another model to predict `log_cost` using a different set of variables or polynomials**.

Now that we've built our candidate models, we will evaluate them on our test set, using the criterion of mean squared error (MSE). Run the following code chunk to compute, on the test set, the MSE of predictions given by the first model compared to the actual `log_cost`.

```{r}
#| eval: false
model_1_preds <- predict(candidate_model_1, newdata = heart_test)
model_1_mse <- mean((model_1_preds - heart_test$log_cost) ^ 2)
```

**Do this for each of your candidate models**. Compare the MSE on the test set: which model performed best (in terms of lowest test MSE)?
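For reference, here is a sketch of the same computation for the second candidate model; extend the pattern (and the comparison vector) to cover any additional models you fit:

```{r}
#| eval: false
# test-set MSE for the second candidate model
model_2_preds <- predict(candidate_model_2, newdata = heart_test)
model_2_mse <- mean((model_2_preds - heart_test$log_cost) ^ 2)

# collect the test MSEs in one place for a quick comparison
c(model_1 = model_1_mse, model_2 = model_2_mse)
```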