```{r, echo=FALSE, fig.cap="A histogram representing the distribution of mood score values."}
set.seed(62)

univariate_data <- generate_data()

response_hist <- univariate_data %>%
  filter(response > 0) %>%
  ggplot(aes(x = response)) +
  geom_histogram(bins = 20, fill = "skyblue", color = "black") +
  labs(x = "Mood score", y = "Frequency")

response_hist
```

```{r, echo=FALSE, eval=FALSE}
ggsave(
  filename = here("figures", "histogram.png"),
  plot = response_hist,
  width = 5,
  height = 4,
  units = "in"
)
```

```{r, eval=FALSE, echo=FALSE}
# get histogram count data for sonification
hist_data <- ggplot_build(response_hist)$data[[1]]

sound_univariate <- sonify::sonify(
  y = hist_data$count,
  duration = 10,
  pulse_len = 0.1
)

tuneR::writeWave(sound_univariate, here("audio", "sound_univariate.wav"))
```

Notice how the pitch rises and falls with the shape of the distribution, and how the evenly spaced pulses represent the bins of the histogram!

Now, let's sonify two variables at once! The visualization below is a scatterplot, a common technique for two continuous variables. The data is simulated to represent how much I crave a burger depending on how hungry I am.

```{r, echo=FALSE, fig.cap="A scatterplot representing a linear relationship between two variables. Hunger level on the x axis and burger craving strength on the y axis."}
set.seed(7485)

linear_data <- generate_data(
  n_obs = 50,
  beta = 1.2,
  error_sigma = 1,
  is_exponential = FALSE
)

linear_plot <- linear_data %>%
  ggplot(aes(x = predictor, y = response)) +
  geom_point(fill = "skyblue", color = "black", size = 3, shape = 21) +
  labs(x = "Hunger level", y = "Burger craving strength")

linear_plot
```

```{r, echo=FALSE, eval=FALSE}
ggsave(
  filename = here("figures", "linear_scatter.png"),
  plot = linear_plot,
  width = 5,
  height = 4,
  units = "in"
)
```

```{r, echo=FALSE, eval=FALSE}
sound_linear <- sonify::sonify(
  x = linear_data$predictor,
  y = linear_data$response,
  duration = 10,
  pulse_len = 0.1
)

tuneR::writeWave(sound_linear, here("audio", "sound_linear.wav"))
```

Notice how the pitch increases with time, but with oscillations that represent error around a perfect linear relationship. Also notice how the pulses are less even: the points aren't evenly distributed on the x-axis. Pretty cool!

The final visualization I'll present is an exponential relationship between two variables. The simulated data in this scenario represents my craving for a big ol' plate of nachos depending on how hungry I am.

```{r, echo=FALSE, fig.cap="A scatterplot representing an exponential relationship between two variables. Hunger level on the x axis and nachos craving strength on the y axis."}
expnt <- 2

set.seed(9999)

exponential_data <- generate_data(is_exponential = TRUE, exponent = expnt)

exponential_plot <- exponential_data %>%
  ggplot(aes(x = predictor, y = response)) +
  geom_point(fill = "skyblue", color = "black", size = 3, shape = 21) +
  labs(x = "Hunger level", y = "Nachos craving strength")

exponential_plot
```

```{r, echo=FALSE, eval=FALSE}
ggsave(
  filename = here("figures", "exponential_scatter.png"),
  plot = exponential_plot,
  width = 5,
  height = 4,
  units = "in"
)
```

```{r, eval=FALSE, echo=FALSE}
exponential_sound <- sonify::sonify(
  x = exponential_data$predictor,
  y = exponential_data$response,
  duration = 10,
  pulse_len = 0.1
)

tuneR::writeWave(exponential_sound, here("audio", "sound_exponential.wav"))
```

In this case, the pitch increases, well, exponentially! In addition, the points cluster more tightly: the pitch oscillates less than it does for the linearly related data. My nacho cravings are pretty consistent.

These are just a few ways sonification can be incorporated into your data communication and exploration arsenal. Besides making data more accessible to those who are blind or low vision, sonification opens up another avenue of data exploration to uncover patterns not evident in a visual medium. Dr. Merced was able to [hear a distinct frequency pattern](https://youtu.be/-hY9QSdaReY?t=287) not evident from a chart that led her to discover that star formation likely plays an important part in supernova explosions! Sighted astronomers now use sonification as a complement to visualization to investigate their data. If you would like to take a crack at sonification yourself, there are a [variety](https://osf.io/vgaxh/wiki/Resources/) of [resources](https://jarednielsen.medium.com/data-sonification-and-web-scraping-with-node-js-and-tone-js-eaf2cd35a000). The sonifications for this post were created using the R package [sonify](https://cran.r-project.org/web/packages/sonify/index.html), a straightforward interface that will get you there quickly, without much overhead. You can find the R code for this blog post [here](https://raw.githubusercontent.com/connor-french/sonification_ttt_post/main/sonification_ttt_post.Rmd).
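To show how little code a first sonification takes, here's a minimal sketch. The data and output file name are made up for illustration, and it assumes you have the sonify and tuneR packages installed; only the arguments used earlier in this post (`x`, `y`, `duration`, `pulse_len`) are relied on.

```r
# Minimal, hypothetical example: sonify a noisy upward trend.
# Assumes the sonify and tuneR packages are installed.
library(sonify)

x <- 1:50
y <- x + rnorm(50, sd = 5)  # a linear trend with some noise

# Map the values to pitch over a 10-second clip, as in the examples above
trend_sound <- sonify::sonify(x = x, y = y, duration = 10, pulse_len = 0.1)

# Save the result as a WAV file you can listen to
tuneR::writeWave(trend_sound, "trend_sonification.wav")
```

Listening to the result, you should hear the same rising pitch with small wobbles that the linear burger-craving example above produces.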

Those of us at the [Graduate Center Digital Initiatives](https://gcdi.commons.gc.cuny.edu/) strive to make interactions with digital media more accessible. We provide a variety of resources to take advantage of digital tools in your research. In addition, we provide community and support with the [Digital Fellows](https://digitalfellows.commons.gc.cuny.edu/), so I encourage you to take a look and connect with us!