--- title: "Attractive, Interactive, Ready for the Web: Visualizing Your Data Using R" subtitle: "DH2019, June 9 2019" author: Andres Karjus output: html_document: toc: yes --- Welcome to the DH2019 workshop on Visualizing Your Data Using R. # Troubleshooting This section contains some basic FAQ and tips. It's here at the top so that if you get stuck or confused, you can easily find it. - Help files. You can always check the parameters of a function by executing `help(functionname)` or `?functionname` or searching for the function by name in the Help tab on the right. Function arguments have names, but names can be omitted if using them in their intended order; they can be looked up in the help files. - See the line of text between this window and the console, besides a little yellow square icon? Click this to see the table of contents and jump between sections quickly. You can also use CTRL+F (CMD+F) to search. ## There's a red badge with a white X on the left sidebar, what's that? - That's signalling a syntax error on that line; executing that line would also produce an error. Try to fix it if one pops up. Note that the yellow triangles signal warnings - this line will run, but something might be wrong with it. Note that magrittr's placeholders (.) generate warnings, but they can be ignored. ## I ran a piece of code but now there's a "+" sign on the last line of the Console (instead of >), before a blinking cursor, and nothing (or weird stuff) is happening. - The "+" indicates the Console is expecting more input. Commonly it means you fogot to close brackets, or have some other typo in the code. Press ESC, fix the code, and start over. ## What were the shortcuts for running code? - CTRL+ENTER (PC) or CMD+ENTER (Mac) runs a line and puts the cursor on the next line. ALT+ENTER runs the line but does not advance the cursor. - To run a line, the cursor has to be on the line, but it does not have to be in the beginning of the end. 
- You can always copy-paste or write commands to the console and run them separately from a larger code block (or drag-select a command and press ALT+ENTER).
- You can always use the UP arrow key to go back to previous commands in the console.

## Plots appear in the script window instead of the Plots panel on the right, help!

Tools -> Global Options -> R Markdown -> untick "Show plots inline..."

## My plotting panel suddenly looks weird or axes are hidden

Run the `dev.off()` command to reset the plotting device (and its parameters).

## Error in somefunction(someparameters) : could not find function "somefunction"

This indicates that the package is not loaded. Use the relevant `library()` command to load the package that includes the missing function. There are `library("package")` calls at the beginning of each section that requires them. You really only need to load a package once per session, but they are there anyway to keep the script modular for easier revisiting. In general, it's better practice to have the library() calls at the head of the script file.

## Error in library("...") : there is no package called '...'

Either the package is not installed, or you misspelled its name. You should have installed the necessary packages before the start of the workshop. If you did not (indicated by `library()` giving you a "package not found" error), then here are the relevant installation commands.

```{r, echo=F, eval=F}
# Do NOT run these unless you are missing the packages! Also, if you do, run ONLY the ones you need, not all of them (which might take a while depending on internet speed).
install.packages("ggplot2") # an alternative plotting device for R install.packages("ggmosaic") # ggplot addon install.packages("cowplot") # ggplot addon install.packages("ggbeeswarm") # ggplot addon install.packages("ggstance") # ggplot addon install.packages("ggrepel") # ggplot addon install.packages("RColorBrewer")# more colors install.packages("magrittr") # pipes for R install.packages("reshape2") # data wrangler for ggplot install.packages("dplyr") # an useful data wrangling package install.packages("corrplot") # small package that does nice correlation plots install.packages("plotly") # for the interactive plots; plot_ly function install.packages("languageR") # to get the "english" reaction time data install.packages("igraph") # constructing and plotting networks install.packages("visNetwork") # plotting interactive networks install.packages("quanteda") # corpus management and text analysis install.packages("stringdist") # calculates string distances install.packages("rmarkdown") # for R Markdown documents install.packages("rworldmap") # maps install.packages("gapminder") # a dataset # Note that these in turn have dependencies, ~50 packages in total amongst them, which will also be installed. ``` --- # A little refresher ```{r, echo=T, eval=F} # This is a code block, distinguishable by the gray shaded background. # This is a line of code: print( "Hello! Put your text cursor on this line (click on the line). Anywhere on the line. Now press CTRL+ENTER (PC) or CMD+ENTER (Mac). Just do it." ) # The command above, when executed (what you just did), printed the text in the console below. Also, this here is a comment. Commented parts of the script (anything after a # ) are not executed. This R Markdown file has both code blocks (gray background) and regular text (white background). 
```

(Also, if you've been scrolling left and right in the script window to read the code, turn on text wrapping ASAP: on the menu bar above, go to Tools -> Global Options -> Code (tab on the left) -> tick "Soft-wrap R source files")

So, `print()` is a function. Most functions look something like this:

- `myfunction(inputs, parameters)`

All the inputs to the function go inside the ( ) brackets, separated by commas. In the above case, the text is the input to the `print()` function. All text, or "strings", must be within quotes. Most functions have some output. Note that commands may be nested; in this case, the innermost are evaluated first:

- `function2( function1(do, something), parameters_for_function2 )`
- function1 is evaluated first, and its output becomes the input for function2

Don't worry if that's all a bit confusing for now. Let's try another function, `sum()`:

```{r basicmath, eval=F}
sum(1,10) # cursor on the line, press CTRL+ENTER (or CMD+ENTER on Mac)
# You should see the output (the sum of 1 and 10) in the console.

# Important: you can always get help for a function and check its input parameters by executing
help(sum) # put the name of any function in the brackets
# ...or by searching for the function by name in the Help tab on the right.

# Exercise. You can also write commands directly in the console and execute them with ENTER. Try some more simple maths - math in R can also be written using regular math symbols (which are really also functions). Write 2*3+1 in the console below, and press ENTER.

# Let's plot something. The command for plotting is, surprisingly, plot().
# It (often) automatically adapts to the data type (you'll see how soon enough).
plot(42, main = "The greatest plot in the world") # execute the command; a plot should appear on the right.
# OK, that was not very exciting. But notice that a function can have multiple inputs, or arguments.
# In this case, the first argument is the data (a vector of length one), and the second is 'main', which specifies the main title of the plot.
# You can make the plot bigger by pressing the 'Zoom' button above the plot panel on the right.

# Let's create some data to play with. We'll use the sample() command, which draws random numbers from a predefined sample. Basically it's like rolling a die some n times and recording the results.
sample(x = 1:6, size = 50, replace = T) # execute this; its output is 50 numbers

# Most functions follow this pattern: there's input(s), something is done to the input, and then there's an output. If the output is not assigned to some object, it usually just gets printed in the console. It would be easier to work with the data if we saved it in an object. For this, we need to learn assignment, which in R works using the equals = symbol (or <-, but let's stick with = for simplicity).
dice = sample(x = 1:6, size = 50, replace = T)
# What this means: dice is the name of a (new) object, and the equals sign (=) signifies assignment, with the object on the left and the data on the right. In this case, the data is the output of the sample() function. Instead of being printed in the console, the output is assigned to the object.
dice # execute to inspect: calling an object usually prints its contents into the console below.

# Let's plot:
hist(dice, breaks=20, main="Frequency of dice values") # plots a histogram (distribution of values)
plot(dice) # plots the data as it is ordered in the object
xmean = mean(dice) # calculate the mean of the 50 dice throws
abline(h = xmean, lwd=3) # plot the mean as a horizontal line

# Exercise: compare this plot with your neighbor's. Do they look the same? Why/why not?
# Exercise: use the sample() function to simulate 25 throws of an 8-sided DnD die.
```

The next sections will go over basic data types and suitable plots.
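To recap the pattern above (call a function, assign its output to an object, then plot it) in one more self-contained sketch - the object name `coinflips` is made up for this example and not used elsewhere in the workshop:

```{r, eval=F}
# Simulate 100 flips of a fair coin; sample() works with text labels just as well as with numbers
coinflips = sample(c("heads", "tails"), size = 100, replace = TRUE)
table(coinflips) # counts of the two outcomes
barplot(table(coinflips), main = "100 coin flips") # plot those counts as bars
```

Run it a few times: since the flips are random, the bar heights will differ slightly on each run.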
# Numerical data

Numerical values include things we can measure on a continuous scale (height, weight, reaction time), things that can be ordered ("rate this on a scale of 1-5"), and things that have been counted (the number of participants in an experiment, the number of words in a text).

## A single continuous variable

We will be using the English visual lexical decision and naming reaction time dataset from the `languageR` package.

```{r languageR}
library(languageR) # load the necessary package
# To make things easier in the beginning, we'll subset the (rather large) dataset; just run the following line - we'll see how indexing and subsampling work later:
eng = english[ c(1:100, 2001:2100), c(1:5,7)]

# We can inspect the data using convenient R commands:
dim(eng)     # dimensions of the data.frame
summary(eng) # produces an automatic summary of the columns
head(eng)    # prints the first rows
# In RStudio, you can also have a look at the dataframe by clicking on the little "table" icon next to it in the Environment section (top right).
help(english) # built-in datasets often have help files attached

eng$Familiarity      # the $ is used for accessing a (named) column of a dataframe (or an element in a list)
eng[, "Familiarity"] # this is the other indexing notation: [row, column]
```

```{r firstplots}
# Plotting time!
# Let's explore, for example, the Familiarity score distribution.
plot(eng$Familiarity) # the x-axis is just the index, i.e. the order the values are in in the dataframe
hist(eng$Familiarity, breaks=10) # a histogram shows the distribution of values ('breaks' changes the resolution)
boxplot(eng$Familiarity, outline=F, ylab="Familiarity") # a boxplot is like a visual summary()
stripchart(eng$Familiarity, vertical=T, add=T, method="jitter", pch=16, col=rgb(0,0,0,0.4))
with(eng[seq(1,nrow(eng),11),], text(x=1.3, y=Familiarity, Word, cex=0.7 ))
```

```{r grouped_boxplot, echo=T, eval=T}
# Another way to plot boxplots is to group them by some relevant variable:
boxplot(eng$RTnaming ~ eng$AgeSubject, main="Reaction time by age") # note the ~ formula notation
grid(col=rgb(0,0,0,0.3)) # why not add a grid for reference

# A slightly nicer version:
boxplot(eng$RTnaming ~ eng$AgeSubject, main="Reaction time by age", ylab="Reaction time", border=c("darkorange", "navy"), boxwex=0.4, cex=0.4)
abline(h=seq(6,7,0.1), col=rgb(0,0,0,0.2)) # adds horizontal lines instead of a full grid
```

## A note on colors

The `rgb(red, green, blue, alpha)` function allows making custom colors; `alpha` controls transparency. Possible values range between 0 and 1 by default. Below is a piece of code that generates an example of how the color scheme works (don't worry if you don't understand the actual code, it is above the level of this workshop; just put the cursor in the code block and press CTRL+SHIFT+ENTER (CMD+SHIFT+ENTER on Mac)).

```{r colormagic, echo=F}
# An example of how RGB color mixing works.
{ xpar=par(no.readonly = T) par(bg="black", mar=c(0,0,1,0), mfrow=c(2,1)) plot(NA, ylim=c(-0.1,1.1), xlim=c(-0.1,1.1), type="n", xaxt="n", yaxt="n", main="") mtext("red vs blue, green=0", col="white") g=0 for(r in seq(0,1,0.2) ){ for(b in seq(0,1,0.2)){ points(r, b, col=rgb(red=r,green=g,blue=b, alpha=1), pch=16, cex=6, lwd=1) text(r, b, paste0("rgb(",paste( c(r,g,b), collapse=","), ")"), cex=0.7, col="white", family="mono", font=2) } } plot(NA, ylim=c(-0.1,1.1), xlim=c(-0.1,1.1), type="n", xaxt="n", yaxt="n", main="") mtext("red vs blue, green=0.5", col="white") g=0.5 for(r in seq(0,1,0.2) ){ for(b in seq(0,1,0.2)){ points(r, b, col=rgb(red=r,green=g,blue=b, alpha=1), pch=16, cex=6, lwd=1) text(r, b, paste0("rgb(",paste( c(r,g,b), collapse=","), ")"), cex=0.7, col="white", family="mono", font=2) } } par(xpar) } ``` Another good way to use colors is to use ready-made palettes. ## Inspecting two numeric variables. ```{r scatterplots} plot(eng$WrittenFrequency, eng$Familiarity) # scatterplot using base graphics # Let's play around with some more R functions: the linear regression model. plot(WrittenFrequency ~ Familiarity, data=eng, col="black", pch=20) grid(col=rgb(0,0,0,0.2), lty=1) # Exercise. Do the regression analysis: # use the same formula notation as above, and the same data parameter, as the input for lm() # use the lm(...) as an input to abline() # abline can handle the output of the lm (linear model) command, extracting the intercept and beta coefficient # could also adjust the look of abline a bit with: col=rgb(0,0,0,0.3), lwd=3 # *Of course the data is actually more complex (consisting of distinct groups), so a proper regression model should take that into account. ``` # ggplot2 We have now seen how to visualize some data using R's basic plotting tools, and picked up some basic R skills on the way. We'll now switch to an alternative plotting package, `ggplot2`. It uses a different approach to plotting, and a slightly different syntax. 
It also comes with default colors and aesthetics which many people find nicer than those of the base `plot()`. A particularly useful feature of `ggplot2` is its extensibility (or rather, the fact that people are eager to extend it), with an ever-growing list of addon packages on CRAN offering additional themes and more niche visualizations.

## Scatterplot of two numeric variables

```{r ggplot_scatterplots}
library(ggplot2) # load ggplot2
# We're using the same english dataset subset (eng) as in the first section.
ggplot(eng, aes(x=WrittenFrequency, y=Familiarity)) + geom_point()
# the data are defined in the ggplot command, aes() specifies variables and grouping variables
# the + adds layers, themes and other options
```

Exercises:

- Coloring by groups is doable in base graphics, but even easier with ggplot2. Add ` col=AgeSubject, shape=AgeSubject ` to the `aes()` above to see for yourself.
- Try adding ` scale_colour_brewer(palette = "Dark2") `
- Try adding a theme like theme_minimal() - start typing theme_ and see what RStudio's autocomplete offers
- Explore the relationship between `WrittenFrequency` and `RTnaming` (reaction time), using `AgeSubject` as the coloring variable; use `geom_smooth(method="lm")` to add regression lines (analogous to `abline(lm())` from earlier).
- Remove or move the legend using theme(), specifying the legend.position parameter with a value like "none", "top", etc.

Sometimes you might be dealing with data restricted to a few values, or ordinal scales. Let's see how plotting these might work. This part uses an artificial dataset of made-up agreement values on statements about language in the workplace.
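Since a ggplot is an ordinary R object, you can also save it and add layers incrementally, which is handy when trying out suggestions like the exercises above. A minimal sketch (`p` is just a throwaway name), assuming the `eng` dataframe from earlier is still in your environment:

```{r, eval=F}
library(ggplot2)
p = ggplot(eng, aes(x=WrittenFrequency, y=Familiarity)) + geom_point() # store the plot in an object
p                   # printing the object draws the base scatterplot
p + theme_minimal() # the same plot, with a theme layer added on top (p itself is unchanged)
```

This way you can keep experimenting with themes and scales without retyping the whole plot specification each time.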
```{r}
library(ggplot2)
library(ggmosaic)
set.seed(1); x = sample(1:5, 200, T, prob = c(0.3,0.1,0.1, 0.2, 0.3))
workplace = data.frame(
  monolingual = x, # Agreement with "Workplaces should be monolingual"
  preferfirst = pmax(1, pmin(5, x+sample(-2:2, length(x), T))), # Agreement with "I prefer speaking my first language"
  age = round((x+20)*runif(length(x),1,2.5))
)
dim(workplace)
head(workplace)

# We could look at each question separately:
ggplot(workplace, aes(x=monolingual)) + geom_bar()

# What if we wanted to compare how responses to these similar questions interact? With two numerical vectors, we could use a scatterplot:
ggplot(workplace, aes(x=monolingual, y=preferfirst)) + geom_point(alpha=0.8)
# ...but this is not very useful, is it..?
```

Exercise. Make this plot better.

- Since the data is confined to a few integer values, the scatterplot is hard to read as is. Add the following parameter to geom_point(): ` position = position_jitter(width=0.2,height=0.2) `
- Coloring by groups is doable in base graphics, but even easier with ggplot2. Add ` color=age ` to the `aes()` above.
- Remove or move the legend using theme(), specifying the legend.position parameter with a value like "none", "top", etc.
- You could replace the axis labels by adding ` + xlab('Agree with "Workplaces should be monolingual"') `, and similarly for ylab().
- try changing the overall look with ` scale_color_distiller(palette = "Spectral") ` and ` theme_dark() ` Another approach is to treat the values as categorical, and produce a mosaic plot: ```{r} library(dplyr) library(ggmosaic) # These are the values we will be plotting (the table is ordered differently, look at it sideways) xtabs(~ workplace$monolingual + workplace$preferfirst) # Plot: ggplot(data = workplace %>% mutate_all(as.factor) ) + geom_mosaic(aes(x = product(monolingual,preferfirst),fill=monolingual), na.rm=TRUE) + scale_fill_hue(h = c(1, 200)) + xlab("preferfirst") + ylab("monolingual") ``` ## Heatmaps Mosaic plots and heatmaps are sort of similar. Let's have a look. ```{r ggplot_heatmap} library(quanteda) # for tokenization library(ggplot2) library(stringdist) # to calculate string distance library(reshape2) # needed to wrangle data into a ggplot2-friendly format # Heatmaps and similar structures are useful for comparing many things with many other things (e.g. parameter values, co-occurrences, correlations) # Let's calculate the edit distance of some words words = tokens(tolower("Once upon a midnight dreary, while I pondered, weak and weary, Over many a quaint and curious volume of forgotten lore - While I nodded, nearly napping, suddenly there came a tapping."), remove_punct = T) s = stringdistmatrix(unique(words[[1]]), useNames = T ) %>% as.matrix() %>% melt() # plot the heatmap of string distance values: ggplot(data=s, aes(y=Var1, x=Var2, fill=value)) + geom_tile(colour = "lightgray") + ylab("") + xlab("") + theme_minimal() # Exercises: # Discuss with a neighbor how to interpret this map. 
# The default colour palette is not very contrastive; change it by adding + scale_fill_viridis_c()
# The x-axis labels are hard to read; add this: + theme(axis.text.x=element_text(angle=45, hjust=1))
```

```{r corrplot, eval=F}
# Correlation matrices may also be visualized as heatmaps.
# Let's find the correlations between the numeric variables in the eng dataset:
corrs = cor(eng[,c(1:3,6)]) # inspect the resulting object
# Larger correlation matrices are hard to grasp, but visualization helps.
library(corrplot) # a little package that uses base graphics
corrplot(corrs)
# ggplot alternative (there's also ggcorr, which has extra options):
ggplot(data = melt(corrs), aes(x=Var1, y=Var2, fill=value)) +
  geom_tile(color=NA) +
  scale_fill_gradient2(low = "blue", high = "red", mid = "white", midpoint = 0, limit = c(-1,1), name="Correlation") +
  coord_fixed() + theme_minimal() + labs(x="",y="")
```

## Time series

While a whole subject on its own, we will have a quick look at plotting time series - data reflecting changes in some variable over time.

```{r timeseries, eval=T, echo=T}
library(quanteda, quietly = T) # load a corpus management package; we'll also make use of a dataset in it
# let's inspect the data first:
length(data_corpus_inaugural$documents$texts)
rownames(data_corpus_inaugural$documents)
head(tokens(data_corpus_inaugural$documents$texts[[1]])[[1]],30)
# Exercise. Use summary() on data_corpus_inaugural$documents.
# Then have a look at speech number 58, and find out who's giving the speech (hint: the presidents are recorded in the same dataframe).

# The following line of code will tokenize the US Presidents' inaugural speeches corpus and count the words:
nw = data.frame(length=ntoken(tokens(data_corpus_inaugural$documents$texts)),
                year=data_corpus_inaugural$documents$Year,
                president = data_corpus_inaugural$documents$President )
ggplot(nw, aes(x=year, y=length)) + theme_minimal() + geom_point()
```

Exercises

- This might be easier to follow if the points were connected; add a geom_line()
- But it would be helpful to see the names of the presidents as well; you could add a custom secondary axis, or annotations: geom_text(aes(label=president), nudge_y = 100, angle=90, hjust=0 )
- Now the line gets in the way of the text though. Maybe make the line transparent (add a color parameter with an rgb value like rgb(0,0,0,0.1) ), or remove the line but add color=year to the aes()

```{r wordseries}
library(quanteda)
# prepare the data, a tokenized corpus:
tok = tokens_tolower(tokens(data_corpus_inaugural$documents$texts)); names(tok)=data_corpus_inaugural$documents$Year
# inspect the first 10 elements of the first element of the list using tok[[1]][1:10]

# The following lines of code will extract & count mentions of the target words in the US Presidents' inaugural speeches corpus.
# This will also serve as an introduction to writing custom functions.
# The syntax: functionname = function(inputs/parameters){ function body; end with return() }
# You can also specify default values for parameters in the function definition.
# To use a function, you have to run its definition first, saving it in the environment.
findmentions = function(textlist, targets){
  results = data.frame(term=NULL, freq=NULL, year=NULL)
  for(i in seq_along(targets)){ # loops over targets
    # this applies the grep function to the list of texts to find and count mentions:
    freq =
sapply(textlist, function(x) length(grep(targets[i], x))/length(x)*1000 )
    term = gsub("^(.....).*", "\\1", targets[i]) # use the first 5 characters as a shorthand
    # concatenate the results:
    results = rbind(results, data.frame(term=rep(term, length(textlist)), freq, year=as.numeric(names(textlist)) ))
  }
  return(results)
}
# Run the function definition above; then try out the command below.
# The inputs are:
# textlist, a list of tokenized texts (which we tokenized above)
# targets, a character vector; since the targets are used in grep, they may be regexes, or just single words
freqs = findmentions(textlist=tok, targets=c("^(he|him|m[ea]n|boys*|male|mr|sirs*|gentlem[ae]n)$", "^(she|her|wom[ea]n|girls*|female|mrs|miss|lad(y|ies))$") )
ggplot(freqs, aes(x=year, y=freq, color=term)) + geom_line() + geom_point() + labs(y="Frequency per 1000 tokens")

# Exercises:
# Add theme_minimal() or theme_gray() or theme_dark() for an automatic grid
# Google rcolorbrewer palettes and fiddle with the colors (e.g. scale_color_brewer(palette="Pastel1") )
# Define your own regex and use the findmentions() function again (or just put in a single word, if you don't know regex) and visualize some more comparisons.

# Exercise: if you have time, might as well explore the corpus a bit; use the kwic() function:
kwic(data_corpus_inaugural, "wom[ae]n", valuetype = "regex", window = 3)
# adjust the window parameter, or adjust your actual RStudio window/pane size, if the kwics are not lined up nicely in the console
```

# Maps

```{r}
# Let's create some artificial data again. The places are real though.
places = data.frame(
  name=c('Arivruaich','Adgestone','Allerthorpe','Annesley Lane End','Atherstone','Acklam','Ailsworth','Acrise','Ardlawhill','Angram'),
  lng = c(-6.66245,-1.15953,-0.80909,-1.28971,-1.54642,-0.80555,-0.35292,1.13618,-2.2111,-1.20958),
  lat = c(58.06239,50.67265,53.91595,53.07087,52.57722,54.0452,52.57579,51.13863,57.65223,53.93104)
)
places$value=places$lat/10*runif(10, 0.95,1.05)
```

```{r}
library(rworldmap) # this is new
library(magrittr)
library(ggplot2)

# So what's in the data?
ggplot(places, aes(y=value, x=name)) + geom_bar(stat="identity") + coord_flip()

# Mapping time. We'll fetch a generic map from the rworldmap package:
data("countryExData", envir = environment(), package = "rworldmap")
uk = joinCountryData2Map(countryExData, joinCode = "ISO3", nameJoinColumn = "ISO3V10", mapResolution = "low") %>%
  fortify() %>% subset(id=="United Kingdom")

# Let's just plot the map first. coord_fixed makes sure the map stays proportional.
ggplot() + geom_polygon(data=uk, aes(long, lat, group = group), inherit.aes = F) + coord_fixed()
# Exercise: specify fill="lightgray" for geom_polygon to get a lighter base map; or use color="black", fill="white" to plot only the outlines.
# Add + theme_bw() for a different theme.
# Remove the useless axis labels with: + theme(axis.title = element_blank())
```

```{r}
# We could now put the points on the map. The coordinates in the dataframe could be plotted as a regular scatterplot:
ggplot(places, aes(x=lng, y=lat, color=value)) + geom_point(size=3) + scale_color_viridis_c(option="C")
# That's not very useful on its own though...
```

Exercises.

- Use the map code you just completed, and the code here that plots the points by longitude and latitude, to make a new plot that combines the map and the points. Hint: the ordering of layers in ggplot matters!
- Make up your own story about the meaning of the (artificial) values. Discuss with a neighbor.
- Add the names of the locations: + geom_text(aes(label=name), position=position_dodge(0.2)) - A better way to add text labels to a plot would be the ggrepel package, which makes sure labels don't overlap. Load the package using library(ggrepel) and add the following geom to the plot: geom_text_repel(aes(label=name)) - If you think names should not be colored, fix the geom_text() color to some value By the way, the `plotly` package we'll use soon enough gets along with `ggplot2` very nicely, and you can convert plots created using the latter into interactive ones using the `ggplotly` function. There is also the `gganimate` package which can be used to create animated plots in the form of GIFs. plotly can do animations as well, but interactive, which we'll see later. # Intermission: pipes This would be a good point to introduce magrittr's pipe %>% command. It's super useful! The shortcut in RStudio is CTRL-SHIFT-M (or CMD-SHIFT-M). If you're familiar with Bash pipes: it's the same thing. If you're interested why the somewhat curious name: https://en.wikipedia.org/wiki/The_Treachery_of_Images ```{r} library(magrittr) # Exercise. Try it out and discuss the results with your neighbor. 1:3 sum(1:3) x=1:3 sum(x) 1:3 %>% sum() # same result, and not much difference in spelling it out either 1:3 %>% sum() %>% rep(times=4) # what does that do? # "." can be used as a placeholder if the input is not the first argument, so the above could also be spelled out as: 1:3 %>% sum(.) %>% rep(., times=4) # or 1:3 %>% sum(.) %>% rep(., 4) # and it's the same as rep(sum(1:3), times=4) # another example: c(1,1,1,2) %>% match(x=2, table=.) # # something longer (take it apart to see how it works): "hello" %>% gsub(pattern="h", replacement="H", x=.) %>% paste(., "world") ``` # Categorical data Categorical/nominal/discrete values cannot be put on a continuous scale or ordered, and include things like binary values (student vs non-student) and all sorts of labels (noun, verb, adjective). 
Words in a text could be viewed as categorical data. ```{r} # We can also visualize categorical (countable) data. This uses the eng dataframe again from above. ggplot(eng, aes(x=AgeSubject)) + geom_bar() # Well this was boring. Let's see what letters are used in the words that make up the stimuli in the reaction time data. This bit of code splits the words up and counts them: lets = eng$Word %>% as.character() %>% strsplit("") %>% unlist() %>% table() %>% data.frame() ggplot(lets, aes(x=reorder(., Freq), y=Freq)) + geom_bar(stat="identity") + xlab("letters") + theme_bw() ``` ## Words! ```{r wordclouds} library(magrittr) library(quanteda) library(ggplot2) library(reshape2) # Let's create an object with a bunch of text: sometext = "In a hole in the ground there lived a hobbit. Not a nasty, dirty, wet hole, filled with the ends of worms and an oozy smell, nor yet a dry, bare, sandy hole with nothing in it to sit down on or to eat: it was a hobbit-hole, and that means comfort. It had a perfectly round door like a porthole, painted green, with a shiny yellow brass knob in the exact middle. The door opened on to a tube-shaped hall like a tunnel: a very comfortable tunnel without smoke, with panelled walls, and floors tiled and carpeted, provided with polished chairs, and lots and lots of pegs for hats and coats—the hobbit was fond of visitors. The tunnel wound on and on, going fairly but not quite straight into the side of the hill — The Hill, as all the people for many miles round called it — and many little round doors opened out of it, first on one side and then on another. No going upstairs for the hobbit: bedrooms, bathrooms, cellars, pantries (lots of these), wardrobes (he had whole rooms devoted to clothes), kitchens, dining-rooms, all were on the same floor, and indeed on the same passage. 
The best rooms were all on the left-hand side (going in), for these were the only ones to have windows, deep-set round windows looking over his garden, and meadows beyond, sloping down to the river."

# Now let's do some very basic preprocessing to be able to work with the words in the text:
words = gsub("[[:punct:]]", "", sometext) %>% # remove punctuation
  tolower() %>%                               # make everything lowercase
  strsplit(., split=" ") %>% unlist()         # tokenize; the unlist is needed due to strsplit's default list output
# Inspect the object we just created. It should be a vector of 236 words.

# Quick magrittr exercise: rewrite the following lines as a single command with %>%
x = grep("hobbit", words)
n = length(x)
txt = paste("Hobbits are mentioned", n, "times.")
print(txt, quote=F)

# Some ways to inspect and visualize textual data:
sortedwords = table(words) %>% sort(decreasing = T) # counts the words and sorts them
# Exercise: have a look at the data using the head() and tail() commands
sortedwords %>%                    # take the object
  .[1:30] %>%                      # use the top 30 only (it's sorted already)
  melt(value.name = "count") %>%   # melt it into a ggplot-friendly dataframe
  ggplot(aes(x=words, y=count) ) + # feed the result as data to ggplot
  geom_bar(stat="identity") +      # barplot of the counts
  coord_flip() +                   # horizontal is probably easier to read
  theme_gray()

# Time to use the quanteda package we loaded earlier.
# We can use it for all the preprocessing as well as for the wordclouds:
parsed = dfm(sometext, remove = stopwords('english'), remove_punct = TRUE, stem = FALSE)
parsed[,1:10] # a quick look at the new data structure
textplot_wordcloud(parsed, min_count = 1, color=terrain.colors(100))
# Exercise: try setting stemming to TRUE and see how that changes the picture.
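# One possible solution to the stemming exercise, a sketch to compare against after trying it yourself
# (parsed_stem is a made-up object name, not used elsewhere in the workshop):
parsed_stem = dfm(sometext, remove = stopwords('english'), remove_punct = TRUE, stem = TRUE)
textplot_wordcloud(parsed_stem, min_count = 1, color=terrain.colors(100))
# Stemming collapses inflected forms (e.g. plurals) into shared stems, so some words in the cloud merge and grow.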
# once you are done with this part, execute this to clear the plotting area parameters:
dev.off()
```

# Distributions, boxplots, histograms and more

We'll keep using ggplot, but do something different for a change: we'll look at different ways of visualizing distributions, and at how visualization choices can lead to different and sometimes unintended interpretations.

```{r loess}
library(cowplot)
library(ggplot2)
# A note on geom_smooth(), the ggplot2 "smoothed conditional means" function - it attempts to fit a model to the data, by default either a loess or GAM curve. While this is a convenient function in itself, it should only be used if one understands how these regression methods work and what their interpretation is - particularly that of loess, which is often misused.
d=data.frame(time=1:40, value=c(rlnorm(39,2,0.2),20))
plot_grid(
  ggplot(d , aes(x=time, y=value)) + geom_point() + geom_smooth(method = "loess", span=0.2) + labs(subtitle = "loess, 0.2"),
  ggplot(d , aes(x=time, y=value)) + geom_point() + geom_smooth(method = "loess", span=1) + labs(subtitle = "loess, 1"),
  ggplot(d , aes(x=time, y=value)) + geom_point() + geom_smooth(method = "lm") + labs(subtitle = "lm"),
  nrow = 3
)
```

```{r ggplot_distributions}
library(ggplot2)
library(cowplot)    # provides plot_grid()
library(ggbeeswarm) # an additional geom
set.seed(1); x2=round(rnorm(400,35,10))+30; x1=round(rnorm(1000,35,10)) # never mind the random data creation for now, just run this line, and then focus on the plotting code below.
# Poll: how likely is it that these are samples from the same distribution/population, or are on average similar?
plot_grid(
  ggplot() + aes(x1) + geom_bar(width=1) + theme_gray(base_size=8)+labs(title="Are these samples likely \ndrawn from the same population?"),
  ggplot() + aes(x2) + geom_bar(width=1) + theme_gray(base_size=8)+labs(title="\n")
)
options(scipen=999)
ks.test(x1,x2)
# Step 2: lims(x, y)

# Visualizing distributions with different methods.
set.seed(5); x=c(runif(50,1,160), rnorm(100,60,10), rnorm(100,100,10)) # some more random data, just run it
# Question: is this variable ~normally distributed? (same data, just two different views)
plot_grid(
  ggplot() + aes(x) + geom_histogram(binwidth = 23),
  ggplot() + aes(x) + geom_density(adjust = 2) + geom_rug(color=rgb(0,0,0,0.2))
)
# Step 2: binwidth, adjust

# Here's another look at the same data:
plot_grid(align = "h",
  ggplot() + geom_boxplot(aes(x=0,y=x), width=0.7) + xlim(-1,1) +
    labs(x="",y="") + theme(axis.title.x=element_blank(), axis.text.x=element_blank(), axis.ticks.x=element_blank()),
  ggplot() + aes(0,(x)) + geom_bar(stat = "summary", fun.y = "mean") +
    stat_summary(geom = "errorbar", fun.data = mean_se, position = "dodge", width=0.2) +
    coord_cartesian(c(-1,1), c(1,150)) +
    labs(x="",y="") + theme(axis.title.x=element_blank(), axis.text.x=element_blank(), axis.ticks.x=element_blank()),
  ggplot() + aes(0,x) + geom_violin(adjust=1) + geom_point(shape=95, size=3, color=rgb(0,0,0,0.2)) +
    labs(x="",y="") + theme(axis.title.x=element_blank(), axis.text.x=element_blank(), axis.ticks.x=element_blank()),
  ggplot() + geom_beeswarm(aes(0,x)) +
    labs(x="same data, different plots", y="") + theme(axis.text.x=element_blank(), axis.ticks.x=element_blank())
)

# A recent popular innovation is the "raincloud" plot, a combination of the density or violin plot, the boxplot, and the actual points.
library(ggstance) # this provides a horizontal boxplot geom for creating a raincloud plot; an official raincloud package should be in the works.
ggplot() + aes(x = x) +
  geom_density(position = position_nudge(y = 0.0025), alpha = .1, fill="blue") +
  geom_point(aes(y=0), position = position_jitter(height = 0.002), size=0.7, alpha = 0.3, color="blue", shape=1) +
  geom_boxploth(aes(y=0), width=0.001, alpha=0) +
  theme_minimal() +
  theme(axis.text.y = element_blank(), axis.title = element_blank()) +
  expand_limits(y = c(0, 0.03))
```

```{r axes}
# About axes.
# Which of these three variables (y1, y2, y3) is experiencing the most drastic change over time?
set.seed(1); d=data.frame(y=sort(runif(10,3,4))*runif(10, 0.8,1.2), time=1:10)
plot_grid(ncol=3,
  ggplot(d) + aes(x=time, y=y) + geom_line(col="red", size=1.5) + ylab("series 1") + labs(title=""),
  ggplot(d) + aes(x=time, y=y) + geom_line(col="orange", size=1.5) + ylim(0,5) + ylab("series 2") + labs(title="Which series depicts the most drastic change over time?"),
  ggplot(d) + aes(x=time, y=y) + geom_line(col="darkblue", size=1.5) + ylim(0,20) + ylab("series 3") + labs(title="")
)
```

# Making things interactive

```{r plotly_conversions}
library(quanteda)
library(plotly)
# plotly can be used to create the same sorts of plots as you've done with the base plot() and the ggplot() functions, except interactive. Let's create an interactive time series plot.
# We'll reuse the findmentions() function from above and the tok object
tok = tokens_tolower(tokens(data_corpus_inaugural$documents$texts)); names(tok) = data_corpus_inaugural$documents$Year
findmentions = function(textlist, targets){
  results = data.frame(term=NULL, freq=NULL, year=NULL)
  for(i in seq_along(targets)){ # loops over targets
    # this applies the grep function to the list of texts to find and count mentions:
    freq = sapply(textlist, function(x) length(grep(targets[i], x))/length(x)*1000)
    term = gsub("^(.....).*", "\\1", targets[i])
    results = rbind(results, data.frame(term=rep(term, length(textlist)), freq, year=as.numeric(names(textlist))))
  }
  return(results)
}
freqs = findmentions(textlist=tok,
  targets=c("^(he|him|m[ea]n|boys*|male|mr|sirs*|gentlem[ae]n)$",
            "^(she|her|wom[ea]n|girls*|female|mrs|miss|lad(y|ies))$"))
plot_ly(freqs, x=~year, y=~freq, type="scatter", mode="lines", split=~term) %>%
  layout(yaxis=list(title="Frequency per 1000 tokens"))
# Note the different syntax: there are pipes instead of +, options like layout are organized in lists, and the split parameter defines groups (like color/group in ggplot2).
# Explore how the interactivity works in the new plot.

# But here's something interesting. Let's recreate the ggplot() version from earlier, but this time save it as an object
gp = ggplot(freqs, aes(x=year, y=freq, color=term)) + geom_line() + geom_point() + labs(y="Frequency per 1000 tokens")
gp # call it to have a look
# Now run this:
ggplotly(gp) # ggplot->plotly converter

# Let's try one of the reaction time plots:
gp = ggplot(eng, aes(x=WrittenFrequency, y=Familiarity, col=AgeSubject)) + geom_point() + theme_gray()
gp # have a look at what that was
ggplotly(gp) # magic
```

Exercise. Make an interactive map. Copy the map you made above, of the UK places, which combined the polygon and points geoms. Follow the same steps as the other conversions here: assign the ggplot to an object, then run ggplotly() on that object. Tip: you can add an additional "text" value to the plot's aes() - anything specified there will be added to the hover labels in plotly; try e.g. text=name.

# Plots in 3Deeeeee!

```{r scatter3d_exercise}
library(plotly)
library(RColorBrewer) # more color scales, brewer.pal() function
# Here's a plot similar to what we've seen before:
plot_ly(data=eng, x=~Familiarity, y=~RTlexdec, type="scatter", mode="markers", color=~AgeSubject)
# Discuss the interpretation of the plot with your neighbor.
```

It might be useful to see how these two variables interact with some third variable of interest though. Exercise: make a copy of the code from above and carry out the following changes, inspecting the plot after every step.

- Change the type value to "scatter3d", add the z parameter, and set its value to WrittenFrequency.
- Make the hover labels more useful by adding ` text=~Word, name="" ` - the first adds words to the labels, the second removes the useless trace label.
- Change the data input to `english` (the whole dataset, instead of the subset we've been using).
- Add this to adjust the markers to display better in this new, bigger plot: marker=list(opacity=0.3, size=3)
- Add a fourth variable via color: color=~NumberSimplexSynsets (which quantifies homonymy).
- Change the following parameter to a nicer color scale: colors=brewer.pal(11,"RdBu") - and make the background all dark and cool with %>% layout(paper_bgcolor="black", plot_bgcolor="black").
- Discuss the interpretation of the new plot with your neighbor.

Bonus: here's something completely useless, but maybe pretty:
```{r, eval=F}
# remember the RGB color plots from earlier, the ones with the black background?
col3 = data.frame(red=runif(1000), green=runif(1000), blue=runif(1000))
plot_ly(col3, x=~red, y=~green, z=~blue, type="scatter3d", mode="markers", marker=list(opacity=0.9),
        color=I(apply(col3, 1, function(x) rgb(x[1], x[2], x[3])))) %>%
  layout(paper_bgcolor='black') %>% config(displayModeBar = F)
```

# Animation

Plotly makes it easy to do animations.
```{r}
library(gapminder) # some data
library(plotly)
gapminder %>% plot_ly(
    x = ~gdpPercap,
    y = ~lifeExp,
    size = ~pop,
    color = ~continent,
    text = ~country,
    mode = 'markers',
    type = 'scatter',
    hoverinfo = "text",
    frame = ~year # adding this turns this into an animation
  ) %>%
  layout(xaxis = list(type = "log")) # make the x-axis log scaled
```

Exercise:

- add subset(.$continent=="Europe") %>% to the pipeline to view only European countries
- you could also change mode to "markers+text" (probably adding textfont=list(size=0.1) will also be a good idea in that case)

Another example. We'll create a modified subset of the `english` dataset to produce some artificial language change data. The scenario: 10 words, over 100 years, observing the interplay of their homonymy and frequency values.
```{r animation_movement}
library(plotly)
library(RColorBrewer)
{ # just run this to create the semi-artificial dataset
  eng2 = english[order(english$NumberSimplexSynsets*runif(nrow(english),0.9,1.1)),
                 c("WrittenFrequency", "NumberSimplexSynsets")] %>% .[seq(2001, 4000, 2),]
  eng2$NumberSimplexSynsets = eng2$NumberSimplexSynsets * rep(seq(0.8,1.2,length.out=10),100) * runif(100,0.9,1.1)
  eng2$year = rep(seq(1800,1899,1), each=10)
  eng2$word = as.factor(rep(1:10, 100))
}
# inspect the dataset first

# Plot the change over time:
plot_ly(eng2, x=~NumberSimplexSynsets, y=~WrittenFrequency,
        type = 'scatter', mode = 'markers',
        frame=~year, # the frame argument activates the animation functionality
        color=~word, colors=brewer.pal(10,"Set3"),
        size=~WrittenFrequency, marker=list(opacity=0.8)) %>%
  layout(showlegend = FALSE) %>%
  animation_opts(frame = 800, transition = 795) %>%
  config(displayModeBar = F)
# Exercise: change frame and transition speed parameters to something different.
```

# Graphs, networks, and some more corpus linguistics

## Social networks

The following example will look into plotting social networks of who knows who.
```{r igraph_networks, eval=T, echo=T}
library(igraph, warn.conflicts = F) # load the package; this needs to be done once after starting R/RStudio
# Create an object with some random Scottish people (this could be a sample from a sociolinguistic study or whatever)
scots = c("Angus","Archibald","Baldwin","Boyd","Cinead","Craig","Diarmid","Donald","Duncan","Eachann","Erskine","Ethan","Fergus","Fingal","Fraser","Hamilton","Iain","Irvine","James","Muir","Mungo","Owen","Raibert","Lyall","Margaret","Mairi","Morag","Murdina","Rhona","Sorcha","Thomasina","Una")
nscots = length(scots) # record the number of people in an object
# call the nscots object to see how many there are
mates = matrix(sample(0:1, nscots^2, T, prob=c(0.95,0.05)), ncol=nscots, nrow=nscots, dimnames=list(scots, scots)); diag(mates)=0 # this creates a randomized matrix signifying friendships; no need to think about this too hard for now
mates[1:10,1:10] # but have a look at it anyway; '1' means these two people know each other; this line prints the first 10 rows and 10 columns
scotgraph = graph_from_adjacency_matrix(mates, mode = "undirected", diag=FALSE) # creates a graph object; igraph needs to be loaded
# Exercises:
# Have a look at the scotgraph object (a list of links/"edges").
# The raw data in the graph object is not particularly useful. Plotting the graph will help though. Call plot() on the scotgraph object.
# This action produced a network... but the defaults are not very nice looking.
# Let's modify the plotting parameters, and add color coding.
mf = c(rep("m", nscots-9), rep("f", 9)) # create a vector of labels (there happen to be 9 women in the dataset)
mfcolors = ifelse(mf=="m", yes="navy", no="tomato")
par(mar=c(0,0,0,0)) # makes plot margins more suitable for igraph plotting
plot(scotgraph,
     vertex.size=4, vertex.color="lightgray", vertex.frame.color=NA,   # vertex color and size
     vertex.label.cex=0.9, vertex.label.dist=0.1, vertex.label.font=2, # vertex labels
     vertex.label.color=mfcolors,                                      # label color
     edge.color=rgb(0,0,0,0.3))

# Bonus: some graph statistics
ecount(scotgraph) # how many links in the network
sort(degree(scotgraph), decreasing = T)[1:3] # the most popular people (vertex degree, i.e. how many edges/links a vertex/node has)
distances(scotgraph, v = "Mungo", to = "Duncan") # how far apart these two are in the network (minimum number of edges)
mean_distance(scotgraph, unconnected = T) # average distance between the vertices (people)
```

Let's try something else. Using the same graph data, we'll recreate the plot using another package, visNetwork, which makes graphs interactive (note that there are also other network packages, such as networkD3 and ggraph for ggplot2).

```{r visnetwork}
library(visNetwork, warn.conflicts=F)
scotgraph_v = toVisNetworkData(scotgraph) # converts the previous igraph object into a visNetwork object
# adjust some parameters; note how the visNetwork object is really just a list with 2 dataframes.
head(scotgraph_v$nodes)
scotgraph_v$nodes$size = 10
scotgraph_v$edges$color = rgb(0,0,0,0.3)
# plot it:
visNetwork(nodes = scotgraph_v$nodes, edges = scotgraph_v$edges)
# Try clicking on the nodes, moving them, and zooming. Pretty neat, no? You can also modify the physics engine to adjust the gravitational pull between the nodes, or disable it.
```

## Citation networks

In the following examples, we'll use the inaugural speeches of US presidents again. We'll start by looking into which presidents mention or address other presidents in their speeches.
We'll extract the mentions programmatically rather than hand-coding them.

```{r presidential_mentions_network}
library(quanteda) # make sure this is loaded
library(igraph)
library(visNetwork)
speeches = gsub("Washington DC", "DC", data_corpus_inaugural$documents$texts) # replace the city name to avoid confusion with the president Washington (hopefully)
speechgivers = data_corpus_inaugural$documents$President # names of presidents giving the speech
presidents = unique(data_corpus_inaugural$documents$President) # presidents (some were elected more than once)
# The following piece of code looks for names of presidents in the speeches using grep(). Just run this little block:
{
  mentions = matrix(0, ncol=length(presidents), nrow=length(presidents), dimnames=list(presidents, presidents))
  for(president in presidents){
    foundmentions = grep(president, speeches)
    mentions[speechgivers[foundmentions], president] = 1
  }
}
# Note: this is not perfect - the code above concatenates mentions of multiple speeches by the same re-elected president, "Bush" as well as "Roosevelt" refer to multiple people, and other presidents might share names with other people as well.
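# (Aside, a hypothetical variant not in the original script: the grep() above also
# matches a president's name inside longer words; requiring word boundaries makes
# the matching a bit stricter, though it still cannot tell apart two presidents
# who share a surname:)
# foundmentions = grep(paste0("\\b", president, "\\b"), speeches)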
# You can check the context of keywords using quanteda's kwic() command:
kwic(data_corpus_inaugural, "Monroe")

# Have a look at the data
mentions[30:35, 30:35] # rows: the one mentioning; columns: the one being mentioned
counts = data.frame(names=colnames(mentions), count=apply(mentions, 2, sum))
library(ggplot2)      # for the barplot below
library(RColorBrewer) # for brewer.pal()
ggplot(counts) + geom_col(aes(y=count, x=names), fill=brewer.pal(3,"Set2")[1]) +
  coord_flip() + scale_x_discrete(limits = counts$names) + theme_dark()

pgraph = graph_from_adjacency_matrix(mentions, mode="directed") # this uses igraph again
# you can have a look at the basic igraph plot if you want
# this uses visNetwork:
v = toVisNetworkData(pgraph)
visNetwork(nodes = v$nodes, edges = v$edges) # check how it looks before we add all the fancy stuff

# Exercise: now use pipe %>% notation and the following functions to adjust the visNetwork plot (i.e., visNetwork(..) %>% visNodes(..) etc). See how the graph changes after each addition. Feel free to play around with the parameters!
# visNodes(size = 10, shadow=T, font=list(size = 30))
# visIgraphLayout("layout_in_circle", smooth=T) # steal a better layout from igraph
# visEdges(arrows = "to", shadow=T, smooth=list(type="discrete"), selectionWidth=5)
# visOptions(highlightNearest = list(enabled = T, hover = T, degree=1, labelOnly=F, algorithm="hierarchical"), nodesIdSelection = T) # interactive selection options

# Finally, click "Export" under the Viewer tab, and select "Save as webpage".
```

## What else is in there?

While we're at it, let's try to probe into the contents of the speeches and use some more interactive plotting tools to visualize them.

```{r plotly, eval=T, echo=T}
library(quanteda)
library(plotly)
# This block of code will extract the top terms (after removing stopwords) from the speeches, calculate the distance between the speeches based on word usage, and compress it all into 2 dimensions.
termmat = dfm(data_corpus_inaugural, tolower = T, stem=F, remove=stopwords("english"), remove_punct=T)
topterms = lapply(topfeatures(termmat, n=10, groups=rownames(termmat)), names)
distmat = 1 - textstat_simil(termmat, method="cosine") # calculate distances
mds = as.data.frame(cmdscale(distmat, k = 2)) # multidimensional scaling (reduces the distance matrix to 2 dimensions)
# have a look at the object using head()
mds$tags = paste(names(topterms), sapply(topterms, paste, collapse="\n"), sep="\n") # add top word labels to the data
mds$Year = data_corpus_inaugural$documents$Year # add the years to the new dataset for ease of use

# Exercise. The following makes use of the plotly package. Create one plot out of the following components.
a = list(x=mds[55:58,1], y=mds[55:58,2], text=rownames(mds)[55:58], ax = -20, ay = 30, showarrow = T, arrowhead = 0) # this is a list with named elements that will be used to add some custom annotations; just run this line.
plot_ly(data = mds, x=~V1, y=~V2, type="scatter", mode = 'markers', hoverinfo = 'text', text=~tags) # this is the main plotly function - note the somewhat different usage of ~ here to specify variable names
# Exercises:
# add the following parameter to the function call above to color speeches by time: color=~Year
# pipe this in the end as well if you'd rather hide the color legend: %>% hide_colorbar()
# to add the annotations, use %>% layout(annotations = a)

# A look into the usage of some words across centuries
termmat_prop = dfm(data_corpus_inaugural, tolower = T, stem=F, remove=stopwords("english"), remove_punct=T) %>%
  dfm_weight("prop") # use normalized frequencies
words = c("america", "states", "dream", "hope", "business", "peace", "war", "terror")
newmat = as.matrix(termmat_prop[,words]) %>% round(5)
plot_ly(x=words, y=rownames(termmat_prop), z=newmat, type="heatmap",
        colors = colorRamp(c("white", "orange", "darkred")), showscale = F)
# Exercise (easy). Choose some other words! Also try changing the color palette (the function used here, colorRamp, takes a vector of colors as input and creates a custom palette).
# Add a nice background using %>% layout(margin = list(l=130, b=50), paper_bgcolor=rgb(0.99, 0.98, 0.97))
# Discuss what you see on the plot with your neighbor.

# Exercise (a bit harder). We could get a better picture of what has been said by the presidents if we expanded our word search with regular expressions (^ stands for the beginning of a string, $ for the end, and . stands for any character, so ^white$ would match "white" but not "whites", and l.rd would match "lord" but also "lard", etc.). Define some new search terms; below are some ideas.
words2 = c("america$", "^nation", "^happ", "immigra", "arm[yi]", "^[0-9,.]*$")
# The bit of code below uses grep() to match column names, so unless word boundaries are defined using ^$, any column name that *contains* the search string is also matched ("nation" would match "international"). For each search term, it will find all matching columns and sum them (giving one value per speech).
newmat = sapply(words2, function(x) rowSums(termmat_prop[, grep(x, colnames(termmat_prop))])) %>% round(5)
# You can check which column names would be matched with:
grep("america", colnames(termmat_prop), value=T)
# Then copy the plotly command from above and substitute the z parameter value with newmat.
```

# Let's make some slides

It's fairly straightforward to produce slides (as well as websites, posters, and books) in R using R Markdown, and to export them into html, pdf, or Word docx. We'll need to create a new file for this part.

Exercise. Click on the icon with the green plus on a white background in the top left corner, choose "R Markdown...", then "Presentation", and then "Slidy". Slidy is a basic, simple-to-use slide deck template (by the way, if you are willing to fiddle a bit with CSS, I'd recommend using the `xaringan` package instead, or if you're really adventurous, `slidify` with impressjs). Change the title to anything you want, and add author: your name into the YAML header on top. Now copy this code block (the entire block, starting with the ``` ) and use it to replace the short code block in the new file where it says "Slide with Plot". Then click "Knit" (next to the little ball of yarn icon) on the top toolbar. RStudio will ask you to save the new file; just save it anywhere.
```{r, echo=F, message=F, warning=F}
library(plotly, quietly = T, warn.conflicts=F)
plot_ly(iris, # this uses a base dataset on some flower statistics
        x=~Petal.Width*2, y=~Petal.Length, z=~Sepal.Length,
        type="scatter3d", mode="markers", color=~Species, marker=list(opacity=0.7)) %>%
  layout(scene = list(xaxis=list(title="Petal width", showline=T),
                      yaxis=list(title="Petal length", showline=T),
                      zaxis=list(title="Sepal length", showline=T))) %>%
  config(displayModeBar = F)
```

An important note on data: when producing an html file from an R Markdown rmd file, functions and objects in the current global environment cannot be accessed. That means that if you're using a dataset from a package (like we've been doing), you need to load that package (i.e. include a `library(package)` call in a code block); if you're using your own data, you need to include code to import it. It often makes sense to deal with data processing in a separate script, save the results as an .RData file, and then just load that RData (using `load("file.RData")`) in the markdown file you intend to knit, instead of redoing the data cleaning and analysis every time you re-knit.

# Let's make a website

The slides we just made are basically just a single html page, cleverly separated into slide-looking segments. Making a basic website is as simple as that. This time, everybody will be doing their own thing: pick one of the exercises we did above, and create a mock "project report" based on it, pretending this is your own data.

1. Create a new R Markdown document (choose "document" and "html").
2. Pick a block of code. The Rmd document cannot use anything from your "local" workspace, so you'll need to load all data and packages that the document will use.
Some exercise blocks above are self-sufficient in the sense that you can run the block on its own; some depend on other blocks - for example, the blocks with the "eng" dataset require loading languageR, and including the line of code from one of the first blocks that subsets the full `english` dataset (`eng = english[c(1:100, 2001:2100), c(1:5,7)]`). Or just use the full dataset.
3. Here's a minimal example. Let's say your project is about the lengths of the speeches of US presidents over time:

```{r, eval=T, echo=F, out.width="100%", fig.height=4, message=F, warning=F}
library(quanteda) # for the dataset and the tokenizer functions
library(ggplot2)  # the plotting package we've been using
library(ggrepel)  # for auto-arranging labels in ggplots
library(cowplot)  # for arranging ggplots
library(plotly)   # for interactive plots
library(magrittr) # pipes
# tokenize and count words:
nw = data.frame(length=ntoken(tokens(data_corpus_inaugural$documents$texts)),
                year=data_corpus_inaugural$documents$Year,
                president = data_corpus_inaugural$documents$President)
# plot the results:
g = ggplot(nw, aes(x=year, y=length, label=president)) + theme_minimal() +
  geom_point() + geom_line(color="lightgray") +
  labs(title="Lengths of inaugural speeches by US presidents", y="length (words)")
ggplotly(g) # produce an interactive plot

# create a wordcloud of sorts, using repelling text labels
cloud = data_corpus_inaugural$documents$texts %>%
  dfm(remove = stopwords(), remove_punct = TRUE) %>%
  textstat_frequency() %>% .[1:100,] %>%
  ggplot(aes(x = rep(0,100), y=rep(0,100), label=feature, color=frequency, size=(frequency))) +
  geom_text_repel(segment.size = 0, alpha=0.8) + theme_void() +
  theme(legend.position="none") +
  scale_color_continuous(low="gray", high = "black") +
  scale_x_continuous(expand=c(0,0)) + scale_y_continuous(expand=c(0,0))

# an example of word usage (of the most common non-stopword)
gov = data_corpus_inaugural$documents$texts %>%
  dfm() %>% dfm_weight("prop") %>%
  dfm_select("government",
             valuetype="regex") %>%
  rowSums() %>% as.data.frame() %>%
  ggplot(aes(y=., x=data_corpus_inaugural$documents$Year)) +
  geom_bar(stat = "identity") +
  geom_text(aes(y=., x=data_corpus_inaugural$documents$Year),
            label=data_corpus_inaugural$documents$President, hjust=-0.1, size=2) +
  theme_classic() +
  labs(x="year", y='proportion of words containing "government"') +
  coord_flip(expand = 0)

plot_grid(cloud, gov) # arrange and show the ggplots
```

4. One way to get past possible errors stemming from missing packages is to just load *all* the packages we might have been using. This makes knitting a bit slower though. You could also use the block below as a template and remove all the package loading calls that you won't need in your little project.

```{r, echo=F, eval=T, warning=F, message=F}
# Set message=F, warning=F to avoid printing package loading messages.
library("ggplot2")     # an alternative plotting device for R
library("ggmosaic")    # ggplot addon
library("cowplot")     # ggplot addon
library("ggbeeswarm")  # ggplot addon
library("ggstance")    # ggplot addon
library("RColorBrewer")# more colors
library("magrittr")    # pipes for R
library("reshape2")    # data wrangler for ggplot
library("dplyr")       # a useful data wrangling package
library("corrplot")    # small package that does nice correlation plots
library("plotly")      # for the interactive plots; plot_ly function
library("languageR")   # to get the "english" reaction time data
library("igraph")      # constructing and plotting networks
library("visNetwork")  # plotting interactive networks
library("quanteda")    # corpus management and text analysis
library("stringdist")  # calculates string distances
library("rmarkdown")   # for R Markdown documents
library("rworldmap")   # maps
library("gapminder")   # a dataset
```

5. If you want code to show in the report, set echo=T in its code block; otherwise set it to F. The "eval" parameter can be used to turn a code block off entirely.
6.
A little markdown refresher: headings are created using hashtags #, and lists using - (or numbers, if you want numbers). Italics and bold are created by single and double asterisks, respectively. Links are created using ` [text to show](url) `, but markdown also recognizes plain urls as links. Images, either from the folder where the Rmd file is or from online, are done like this: ` ![](path/to/image) `
7. If you want a table of contents (based on the headings), make sure the YAML header has this bit:
output:
  html_document:
    toc: yes
8. Optional step: upload the html page to your personal website or github. See here for a quick 5-minute step-by-step guide on how to set up a free personal website using GitHub Pages: https://guides.github.com/features/pages/
Feel free to showcase your newly acquired skills on Twitter ;) - either with a link to an uploaded page or with a few screenshots.

---

Here are a couple of examples of things that I've used R Markdown for myself:

- my personal website: https://andreskarjus.github.io
- the cover page of the aRt of the figure workshops: https://andreskarjus.github.io/artofthefigure
- the slides in the beginning of this workshop: https://andreskarjus.github.io/artofthefigure/intro
- this recent conference poster: https://andreskarjus.github.io/lexcom_poster
- a past seminar talk: https://andreskarjus.github.io/evoforces_cletalk/slides.html

---

# Final words on attributions, citing and references

Before we finish, a word on R and its packages. It's all free open-source software, meaning countless people have invested a lot of their own time into making this possible. If you use R, do cite it in your work (use the handy `citation()` command in R to get an up-to-date reference, both in plain text and BibTeX). To cite a package, use `citation("package name")`. You are also absolutely welcome to use any piece of code from this workshop, but in that case I would likewise appreciate a reference: Karjus, Andres (2018). aRt of the Figure.
GitHub repository, https://github.com/andreskarjus/artofthefigure. Bibtex:

```
@misc{karjus_artofthefigure_2018,
  author = {Karjus, Andres},
  title = {aRt of the Figure},
  year = {2018},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/andreskarjus/artofthefigure}},
  DOI = {10.5281/zenodo.1213335}
}
```

# That's it for today

Do play around with these things later when you have time, and look into the bonus sections for extras. If you get stuck, Google is your friend; also, check out www.stackoverflow.com - this site is a goldmine of programming (including R) questions and solutions. Also, if you are looking for consulting on data analysis and visualization, or more workshops, take a look at my website https://andreskarjus.github.io/ If you want to stay updated, keep an eye on my Twitter @AndresKarjus (for science content) and @aRtofdataviz (for R, dataviz and workshop related stuff).

---
---
---

# Appendix. Getting your own data into R and getting plots out of R.

Once you get around to working with your own data, you'll need to import it into R to be able to make plots based on it. There are a number of ways of doing that; datasets and corpora also come in different formats, so unfortunately there's no single magic solution to import everything - you usually need to figure out the format of the data beforehand. Below are some examples.

## Table (csv, Excel, txt) into R, import from file

This is probably the most common use case. If your data is in an Excel file format (.xls, .xlsx), you are better off saving it as a plain text file (although there are packages to import directly from these formats, as well as from SPSS .sav files). The commands for that are read.table(), read.csv() and read.delim(). They basically all do the same thing, but differ in their default settings. For very large datasets or corpora, you might want to look into `data.table` instead.
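As a minimal sketch of that `data.table` route (assuming the package is installed; the file name here is just a placeholder):

```{r, eval=F, echo=T}
library(data.table)
mybigdata = fread("path/to/my/file.csv") # fast reader; guesses the separator and header automatically
mybigdata = as.data.frame(mybigdata)     # convert, if you prefer working with plain dataframes
```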
```{r, eval=F, echo=T}
# an example use case with the parameters explained
mydata = read.table(file="path/to/my/file.txt", # full file path as a string
                    header=T,    # if the first row contains column names
                    row.names=1, # if the 1st (or another) column contains the row names
                    sep="\t",    # what character separates columns in the text file*
                    quote="")    # if there are " or ' marks in any columns, set this to ""
# * "\t" is for tab (the default if you save a text file from Excel), "," for csv, " " if space-separated, etc.
# for more, and to check the defaults, see help(read.table)
# the path can be just a file name, if the file is in the working (R's "default") directory; use getwd() to check where that is, and setwd("full/path/to/folder") to set it (or you can use RStudio's Files tab, click on More)
# If your file has an encoding other than Latin or UTF-8, specify that using the encoding parameter.
mydata = read.table(file.choose()) # alternatively: this opens a window to browse for files; specify parameters as appropriate
```

## Importing from clipboard

There is a simple way to import data from the clipboard. While importing from files is generally a better idea (you can always re-run the code and it will find the data itself), sometimes this is handy, for example for quickly grabbing a little piece of a table from Excel. It differs between OSes:
```{r, eval=F, echo=T}
mydata = read.table(file = "clipboard")     # on Windows (add parameters as necessary)
mydata = read.table(file = pipe("pbpaste")) # on a Mac (add parameters as necessary)
```

## Importing text

For text, the `readLines()` command usually works well enough. Its output is a character vector: if the text file has 10 lines, then readLines() produces a vector of length 10, where each line is an element in that vector. You could use strsplit() or quanteda's functions to further split it into words.
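A minimal sketch of that approach (the file name is just a placeholder):

```{r, eval=F, echo=T}
mytext = readLines("path/to/my/textfile.txt")  # one vector element per line in the file
mywords = unlist(strsplit(mytext, split=" "))  # a crude way to split the lines into words
```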
If the text is organized neatly in columns (e.g., like the COHA corpus), however, you might still consider read.table(), but probably with the `stringsAsFactors=FALSE` parameter (this avoids making long text strings into factors; check out the help file if needed). A corpus may also be encoded as XML - there is the `xml2` package (an improvement on the older `XML` package) for that, but watch out for memory leaks if importing and parsing multiple files (this is a known issue).

## Exporting plots

RStudio has handy options to export plots - click on `Export` on top of the plot panel, and choose the output format. Plots can be exported using R code as well - this is in fact a better approach, since otherwise you would have to click through the Export menus again every time you change your plot and need to re-export. Look into the help files of the `jpeg()` and `pdf()` functions to see how this works. ggplot2 has a handy `ggsave()` function. Interactive plots can either be included in R Markdown based html files, or exported as separate html files (which you can then upload as such, integrate into a website, or plug in using an iframe).

## Anything else

There are also packages to import and manipulate images, text, GIS map data, and relational databases, import data from all sorts of other file formats (like XML, HTML, Google Sheets), scrape websites, do OCR on scanned documents, and much more. Just google around a bit and you'll surely find what you need.

---
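To close off, here is a minimal sketch of the code-based export approach described under "Exporting plots" above (the file names are placeholders; the plot itself uses a built-in dataset just for illustration):

```{r, eval=F, echo=T}
library(ggplot2)
g = ggplot(mtcars, aes(x=wt, y=mpg)) + geom_point() # any ggplot will do

# base graphics devices: open a file device, plot, then close the device
pdf("myplot.pdf", width=6, height=4)
plot(mtcars$wt, mtcars$mpg)
dev.off()

# the ggplot2 shortcut; the output format is inferred from the file extension
ggsave("myplot.png", plot=g, width=6, height=4)
```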