I am a lecturer in the Master’s in Computational Social Science program. I earned my PhD in political science from Penn State University in 2015. My research interests focus on judicial politics, state courts, and agenda-setting. Methodologically, I am interested in statistical learning and text analysis. I was first drawn to programming in grad school, starting out in Stata and eventually making the transition to R and Python. I learned these programming languages out of necessity - I needed to process, analyze, and code tens of thousands of judicial opinions and extract key information into a tabular format. I am not a computer scientist. I am a social scientist who uses programming and computational tools to answer my research questions.
Go to http://cfss.uchicago.edu for the course site. This contains the course objectives, required readings, schedules, slides, etc.
The goal of this course is to teach you basic computational skills and provide you with the means to learn what you need to know for your own research. I start from the perspective that you want to analyze data, and programming is a means to that end. You will not become an expert programmer - that is a given. But you will learn the basic skills and techniques necessary to conduct computational social science, and gain the confidence necessary to learn new techniques as you encounter them in your research.
We will cover many different topics in this course.
Teach a (wo)man to fish
This is a hands-on class. You will learn by writing programs and conducting analysis. Don’t fear the word “program”. A program can be as simple as:
print("Hello world")
## [1] "Hello world"
One line of code, and it performs a very specific task: printing the phrase “Hello world” to the screen.
More typically, your programs will perform statistical and graphical analysis on data of a variety of forms. For example, here I analyze a dataset of automobiles to assess the relationship between engine displacement and highway mileage:
# load packages
library(tidyverse)
library(broom)
# estimate and print the linear model
lm(hwy ~ displ, data = mpg) %>%
  tidy() %>%
  mutate(term = c("Intercept", "Engine displacement (in liters)")) %>%
  knitr::kable(digits = 2,
               col.names = c("Variable", "Estimate", "Standard Error",
                             "T-statistic", "P-Value"))
Variable | Estimate | Standard Error | T-statistic | P-Value |
---|---|---|---|---|
Intercept | 35.70 | 0.72 | 49.55 | 0 |
Engine displacement (in liters) | -3.53 | 0.19 | -18.15 | 0 |
# visualize the relationship
ggplot(data = mpg, aes(displ, hwy)) +
  geom_point(aes(color = class)) +
  geom_smooth(method = "lm", se = FALSE, color = "black", alpha = .25) +
  labs(x = "Engine displacement (in liters)",
       y = "Highway miles per gallon",
       color = "Car type") +
  theme_bw(base_size = 16)
But we will start small and build our way up to analyses like this.
Class sessions will include a combination of lecture and live-coding. You need to bring a laptop to class to follow along, but all class materials (including slides and notes) will be made available before and after class for your review. The emphasis of the class is on application and learning how to implement different computational techniques. However, we will sometimes read and discuss examples of interesting and relevant scholarly research that demonstrates the capabilities and range of computational social science.
Lab sessions will be held each Wednesday immediately following class. I strongly encourage you to attend these sessions. The TA or I will be available to assist you as you practice specific skills or encounter problems completing the homework assignments.
Each class will have assigned readings. You need to complete these before coming to class. I will assume you have done so and have at least a basic understanding of the material. My general structure for the class is to spend the first 15-30 minutes lecturing, then the remaining time practicing skills via live-coding and in-class exercises. If you do not come to class prepared, then there is no point in coming to class.
“15 min rule: when stuck, you HAVE to try on your own for 15 min; after 15 min, you HAVE to ask for help. - Brain AMA”
— Rachel Thomas (@math_rachel) August 14, 2016
We will follow the 15 minute rule in this class. If you encounter a problem in your assignments, spend 15 minutes troubleshooting the problem on your own. Make use of Google and StackOverflow to resolve the error. However, if after 15 minutes you still cannot solve the problem, ask for help. We will use GitHub to ask and answer class-related questions.
I am trying to balance two competing perspectives on collaboration.
The point is that collaboration in this class is good - to a point. You are always, unless otherwise noted, expected to write and submit your own work. You should not blindly copy from your peers. You should not copy large chunks of code from the internet. That said, using the internet to debug programs is fine. Asking a classmate to help you debug your program is fine (the key phrase is help you, not do it for you).
The bottom line - if you don’t understand what the program is doing and are not prepared to explain it in detail, you should not submit it.
Each week you will complete a series of programming assignments linked to lecture materials. These assignments will generally be due the following week prior to Monday’s class. Weekly lab sessions will be held to assist students in completing these assignments. Assignments will initially come with starter code, or an initial version of the program where you need to fill in the blanks to make it work. As the quarter moves on and your skills become more developed, I will provide less help upfront.
Each assignment will be evaluated by me or the TA, as well as by two peers. Peer review is a crucial element of this course, in that by eating each other’s dog food you will learn to read, debug, and reproduce code and analysis. And while the TA and I are competent R users, your classmates are not - so make sure your code is well-documented so that others with basic knowledge of programming and R can follow along and reuse your code. Be sure to read the instructions for peer review so you know how to provide useful feedback.
A program is a series of instructions that specifies how to perform a computation.¹
Major components to programs are:

- Input: get data from the keyboard, a file, or some other device
- Output: display data on the screen or send data to a file or other device
- Math: perform basic mathematical operations like addition and multiplication
- Conditional execution: check for certain conditions and run the appropriate code
- Repetition: perform some action repeatedly, usually with some variation
Virtually all programs are built using these fundamental components. Obviously the more components you implement, the more complex the program will become. The skill is in breaking up a problem into smaller parts until each part is simple enough to be computed using these basic instructions.
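To make these components concrete, here is a minimal sketch in R (the values are invented for illustration):

# input: in a real program these values might come from a file or the keyboard;
# here they are defined in the script
numbers <- c(2, 5, 8, 11)

# repetition: loop over each value
for (x in numbers) {
  # math: basic arithmetic
  doubled <- x * 2

  # conditional execution: only report large results
  if (doubled > 10) {
    # output: print the result to the screen
    print(doubled)
  }
}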
A graphical user interface (GUI) is a visual way of interacting with a computer using elements such as a mouse, icons, and menus.
GUI software runs using all the basic programming elements, but the end user is not aware of any of this. Instructions in GUI software are implicit to the user, whereas programming requires the user to make instructions explicit.
Let’s demonstrate why you should want to learn to program.² What are the advantages over GUI software, such as Stata?
Here is a hypothetical assignment for a UChicago undergrad:
Write a report analyzing the relationship between ice cream consumption and crime rates in Chicago.
Let’s see how two students (Jane and Sally) would complete this. Jane will use strictly GUI software, whereas Sally will use programming and the data science workflow we outlined above.
Jane finds data files online with total annual ice cream sales in the 50 largest U.S. cities from 2001-2010 and total numbers of crimes (by type) for the 50 largest U.S. cities from 2001-2010. She gets them as spreadsheets and downloads them to her computer, saving them in her main Downloads folder, which includes everything she’s downloaded over the past three years. It probably looks something like this:

[screenshot of a cluttered Downloads folder]
Jane prints her report and turns it in to the professor. Done!
Sally creates a new folder for this project with subfolders for each component of her analysis (data, graphics, output). She then:

1. Downloads the data files and saves them in the data subfolder.
2. Writes a program to clean and merge the data files, saving the cleaned data in the data folder.
3. Writes a program that analyzes the data, estimating the regression model and generating the graph, which she saves in the output subfolder.
4. Writes her report in R Markdown, which runs the analysis and embeds the results directly in the document.
5. Knits the document and turns in the final report.

The professor is impressed with Jane and Sally’s findings, but wants them to verify the results using new data files for ice cream and frozen yogurt sales and crime rates for 2011-2015 before he will determine their grade.
At this point, Jane is greatly hampered by her GUI workflow. She now has to repeat steps 1-5 all over again, but she forgot how she defined violent vs. non-violent crimes. She also no longer has the original frozen yogurt sales data and has to find the original file again somewhere on her computer or online. She has to remember precisely all the menus she entered and all the settings she changed in order to reproduce her findings.
Sally’s computational workflow is much better suited to the professor’s request because it is automated. All Sally needs to do is add the updated data files to the data subfolder, then rewrite her program in step 2 to combine the old and new data files. Next she can simply re-run the programs in steps 3 and 4 with no modifications. The analysis program accepts the new data files without issue and generates the updated regression model estimates and graph. The R Markdown document automatically includes these revised results without any need to modify the code in the underlying document.
By automating her workflow, Sally can quickly update her results. Jane has to do all the same work again. Data cleaning alone is a non-trivial challenge for Jane. And the more data files in a project, the more work that has to be done. Sally’s program makes cleaning the data files trivial - if she wants to clean the data again, she simply runs the program again.
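To give a flavor of what Sally’s step 2 program might look like, here is a hypothetical sketch (the file and column names are invented for illustration, not the actual assignment files):

# clean_data.R
# Hypothetical program to combine and merge the ice cream and crime data files
library(tidyverse)

# combine the original and updated data files from the data subfolder
sales <- bind_rows(
  read_csv("data/ice_cream_sales_2001_2010.csv"),
  read_csv("data/ice_cream_sales_2011_2015.csv")
)
crimes <- bind_rows(
  read_csv("data/crimes_2001_2010.csv"),
  read_csv("data/crimes_2011_2015.csv")
)

# merge on city and year, then save the cleaned file for the analysis program
sales %>%
  left_join(crimes, by = c("city", "year")) %>%
  write_csv("data/ice_cream_crimes_clean.csv")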
Previously researchers focused on replication - can the results be duplicated in a new study with different data? In science it is difficult to replicate articles and research, in part because authors don’t provide enough information to easily replicate experiments and studies. Institutional biases also exist against replication - no one wants to publish it, and authors don’t like to have their results challenged.
Reproducibility is “the idea that data analyses, and more generally, scientific claims, are published with their data and software code so that others may verify the findings and build upon them.”³ Scholars who implement reproducibility in their projects can quickly and easily reproduce the original results and trace back to determine how they were derived. This easily enables verification and replication, and allows the researcher to precisely replicate his or her analysis. This is extremely important when writing a paper, submitting it to a journal, then coming back months later for a revise and resubmit, because you won’t remember how all the code/analysis works together when completing your revisions.
Because Jane forgot how she initially filtered the data files, she cannot reproduce her original results, much less update them with the new data. There is no way to definitively prove how she got her initial results. And even if Jane does remember, she still has to do the work of cleaning the data all over again. Sally’s work is reproducible because she still has all the original data files. Any changes to the files, as well as the analysis, are made in the programs she wrote. To reproduce her results, all she needs to do is run the programs again. Anyone who wishes to verify her results can use her code to reproduce them.
Research projects involve lots of edits and revisions, and not just in the final paper. Researchers make lots of individual decisions when writing programs and conducting analysis. Why filter this set of rows and not this other set? Do I compute traditional or robust standard errors?
To keep track of all of these decisions and modifications, you could save multiple copies of the same file. But this is bad for two reasons: you quickly lose track of which copy is the most recent, and you cannot easily tell what changed between copies or why.
Many of you are probably familiar with cloud storage systems like Dropbox or Google Drive. Why not use those to track files in research projects? For one, multiple authors cannot simultaneously edit these files - how do you combine the changes? There is also no record of who made what changes, and you cannot keep annotations describing the changes and why you made them.
Version control software (VCS) allows us to track all these changes in a detailed and comprehensive manner without resorting to 50 different copies of a file floating around. VCS works by creating a repository on a computer or server which contains all files relevant to a project. Any time you want to modify a file or directory, you check it out. When you are finished, you check it back in. The VCS tracks all changes, when the changes were made, and who made the changes.
If you make a change and realize you don’t want to keep it, you can rollback to a previous version of the repository - or even an individual file - without hassle because the VCS already contains a log of every change. VCS can be implemented locally on a single computer, or in conjunction with remote servers to store backups of your repository.
If Jane wanted to rollback to an earlier implementation of her linear regression model, she’d have to remember exactly what her settings were. However all Sally needs to do is use VCS when she revises her programs. Then to rollback to an earlier model formulation she just needs to find the earlier version of her program which generates that model.
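As a minimal sketch, here is what that might look like with Git, the version control software behind GitHub (the file name and commit hash below are hypothetical):

# initialize a repository in the project folder
git init

# check in the current version of the analysis program
git add analysis.R
git commit -m "Estimate baseline linear model"

# ...later, after revising the model, check in the new version
git add analysis.R
git commit -m "Switch to robust standard errors"

# list prior versions of the file, then roll it back to an earlier commit
git log --oneline -- analysis.R
git checkout a1b2c3d -- analysis.R   # a1b2c3d stands in for a real commit hash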
Programs include comments which are ignored by the computer but are intended for humans reading the code to understand what it does. So if you decide to ignore frozen yogurt sales, you can include this comment in your code to explain why the program drops that column from the data.
Comments are the what - what is the program doing? Code is the how - how is the program going to do this?
Computer code should also be self-documenting. That is, the code should be comprehensible whenever possible. For example, if you are creating a scatterplot of the relationship between ice cream sales and crime, don’t store it in your code as graph. Instead, give it an intelligible name that intuitively means something, such as icecream_crime_scatterplot or even ic_crime_plot. These names are included directly in the code and should be updated whenever the code is updated.
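For example (the ice_cream_crimes data frame and its columns are hypothetical):

# bad: the name tells you nothing about what the object contains
graph <- ggplot(data = ice_cream_crimes, aes(x = sales, y = crimes)) +
  geom_point()

# better: the name identifies exactly what the plot shows
ic_crime_plot <- ggplot(data = ice_cream_crimes, aes(x = sales, y = crimes)) +
  geom_point()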
Comments are not just for other people reading your code, but also for yourself. The goal here is to future-proof your code. That is, future you should be able to open a program and understand what the code does. If you do not include comments and/or write the code in an interpretable way, you will forget how it works.
This is an example of badly documented code.
library(tidyverse)
library(rtweet)
tmls <- get_timeline(c("MeCookieMonster", "Grover", "elmo", "CountVonCount"), 3000)
ts_plot(group_by(tmls, screen_name), "weeks")
- What does the ts_plot() function do?
- What does 3000 refer to?

This program, although it works, is entirely indecipherable unless you are the original author (and even then you may not fully understand it).
This is a rewritten version of the previous program. Note that it does the exact same thing, but is much more comprehensible.
# get_to_sesame_street.R
# Program to retrieve recent tweets from Sesame Street characters

# load packages for data management and Twitter API
library(tidyverse)
library(rtweet)

# retrieve most recent 3000 tweets of Sesame Street characters
tmls <- get_timeline(
  user = c("MeCookieMonster", "Grover", "elmo", "CountVonCount"),
  n = 3000
)

# group by character and plot weekly tweet frequency
tmls %>%
  group_by(screen_name) %>%
  ts_plot(by = "weeks")
devtools::session_info()
## Session info -------------------------------------------------------------
## setting value
## version R version 3.4.3 (2017-11-30)
## system x86_64, darwin15.6.0
## ui X11
## language (EN)
## collate en_US.UTF-8
## tz America/Chicago
## date 2018-03-26
## Packages -----------------------------------------------------------------
##  package        * version    date       source
##  assertthat       0.2.0      2017-04-11 CRAN (R 3.4.0)
##  backports        1.1.2      2017-12-13 CRAN (R 3.4.3)
##  base           * 3.4.3      2017-12-07 local
##  base64enc        0.1-3      2015-07-28 CRAN (R 3.4.0)
##  bigrquery      * 0.4.1      2017-06-26 CRAN (R 3.4.1)
##  bindr            0.1.1      2018-03-13 CRAN (R 3.4.3)
##  bindrcpp       * 0.2        2017-06-17 CRAN (R 3.4.0)
##  bit              1.1-12     2014-04-09 CRAN (R 3.4.0)
##  bit64            0.9-7      2017-05-08 CRAN (R 3.4.0)
##  blob             1.1.0      2017-06-17 CRAN (R 3.4.0)
##  boot           * 1.3-20     2017-08-06 CRAN (R 3.4.3)
##  broom          * 0.4.3      2017-11-20 CRAN (R 3.4.1)
##  caret          * 6.0-78     2017-12-10 CRAN (R 3.4.3)
##  cellranger       1.1.0      2016-07-27 CRAN (R 3.4.0)
##  class            7.3-14     2015-08-30 CRAN (R 3.4.3)
##  cli              1.0.0      2017-11-05 CRAN (R 3.4.2)
##  codetools        0.2-15     2016-10-05 CRAN (R 3.4.3)
##  colorspace       1.3-2      2016-12-14 CRAN (R 3.4.0)
##  compiler         3.4.3      2017-12-07 local
##  crayon           1.3.4      2017-10-03 Github (gaborcsardi/crayon@b5221ab)
##  curl           * 3.1        2017-12-12 CRAN (R 3.4.3)
##  CVST             0.2-1      2013-12-10 CRAN (R 3.4.0)
##  datasets       * 3.4.3      2017-12-07 local
##  DBI              0.8        2018-03-02 CRAN (R 3.4.3)
##  dbplyr           1.2.1      2018-02-19 CRAN (R 3.4.3)
##  ddalpha          1.3.1.1    2018-02-02 CRAN (R 3.4.3)
##  DEoptimR         1.0-8      2016-11-19 CRAN (R 3.4.0)
##  devtools         1.13.5     2018-02-18 CRAN (R 3.4.3)
##  digest           0.6.15     2018-01-28 CRAN (R 3.4.3)
##  dimRed           0.1.0      2017-05-04 CRAN (R 3.4.0)
##  dplyr          * 0.7.4.9000 2017-10-03 Github (tidyverse/dplyr@1a0730a)
##  DRR              0.0.3      2018-01-06 CRAN (R 3.4.3)
##  e1071          * 1.6-8      2017-02-02 CRAN (R 3.4.0)
##  evaluate         0.10.1     2017-06-24 CRAN (R 3.4.1)
##  FNN            * 1.1        2013-07-31 CRAN (R 3.4.0)
##  forcats        * 0.3.0      2018-02-19 CRAN (R 3.4.3)
##  foreach        * 1.4.4      2017-12-12 CRAN (R 3.4.3)
##  foreign          0.8-69     2017-06-22 CRAN (R 3.4.3)
##  gam            * 1.15       2018-02-25 CRAN (R 3.4.3)
##  gapminder      * 0.3.0      2017-10-31 CRAN (R 3.4.2)
##  gbm            * 2.1.3      2017-03-21 CRAN (R 3.4.0)
##  geosphere        1.5-7      2017-11-05 CRAN (R 3.4.2)
##  gganimate      * 0.1.0.9000 2017-05-26 Github (dgrtwo/gganimate@bf82002)
##  ggmap          * 2.6.1      2016-01-23 CRAN (R 3.4.0)
##  ggplot2        * 2.2.1      2016-12-30 CRAN (R 3.4.0)
##  ggrepel        * 0.7.0      2017-09-29 CRAN (R 3.4.2)
##  ggstance       * 0.3        2016-11-16 CRAN (R 3.4.0)
##  glue             1.2.0      2017-10-29 CRAN (R 3.4.2)
##  gower            0.1.2      2017-02-23 CRAN (R 3.4.0)
##  graphics       * 3.4.3      2017-12-07 local
##  grDevices      * 3.4.3      2017-12-07 local
##  grid             3.4.3      2017-12-07 local
##  gridExtra      * 2.3        2017-09-09 CRAN (R 3.4.1)
##  gtable           0.2.0      2016-02-26 CRAN (R 3.4.0)
##  haven          * 1.1.1      2018-01-18 CRAN (R 3.4.3)
##  here           * 0.1        2017-05-28 CRAN (R 3.4.0)
##  hexbin         * 1.27.2     2018-01-15 CRAN (R 3.4.3)
##  hms              0.4.2      2018-03-10 CRAN (R 3.4.3)
##  htmltools        0.3.6      2017-04-28 CRAN (R 3.4.0)
##  htmlwidgets      1.0        2018-01-20 CRAN (R 3.4.3)
##  httpuv           1.3.6.2    2018-03-02 CRAN (R 3.4.3)
##  httr           * 1.3.1      2017-08-20 CRAN (R 3.4.1)
##  igraph           1.1.2      2017-07-21 CRAN (R 3.4.1)
##  ipred            0.9-6      2017-03-01 CRAN (R 3.4.0)
##  ISLR           * 1.2        2017-10-20 CRAN (R 3.4.2)
##  iterators        1.0.9      2017-12-12 CRAN (R 3.4.3)
##  janeaustenr      0.1.5      2017-06-10 CRAN (R 3.4.0)
##  jpeg             0.1-8      2014-01-23 CRAN (R 3.4.0)
##  jsonlite       * 1.5        2017-06-01 CRAN (R 3.4.0)
##  kernlab          0.9-25     2016-10-03 CRAN (R 3.4.0)
##  kknn           * 1.3.1      2016-03-26 CRAN (R 3.4.0)
##  knitr          * 1.20       2018-02-20 CRAN (R 3.4.3)
##  labeling         0.3        2014-08-23 CRAN (R 3.4.0)
##  lattice        * 0.20-35    2017-03-25 CRAN (R 3.4.3)
##  lava             1.6        2018-01-13 CRAN (R 3.4.3)
##  lazyeval         0.2.1      2017-10-29 CRAN (R 3.4.2)
##  lubridate      * 1.7.3      2018-02-27 CRAN (R 3.4.3)
##  lvplot         * 0.2.0      2016-05-01 CRAN (R 3.4.0)
##  magrittr         1.5        2014-11-22 CRAN (R 3.4.0)
##  mapproj          1.2-5      2017-06-08 CRAN (R 3.4.0)
##  maps           * 3.2.0      2017-06-08 CRAN (R 3.4.0)
##  MASS             7.3-49     2018-02-23 CRAN (R 3.4.3)
##  Matrix           1.2-12     2017-11-20 CRAN (R 3.4.3)
##  MatrixModels   * 0.4-1      2015-08-22 CRAN (R 3.4.0)
##  memoise          1.1.0      2017-04-21 CRAN (R 3.4.0)
##  methods        * 3.4.3      2017-12-07 local
##  microbenchmark * 1.4-4      2018-01-24 CRAN (R 3.4.3)
##  mime             0.5        2016-07-07 CRAN (R 3.4.0)
##  mnormt           1.5-5      2016-10-15 CRAN (R 3.4.0)
##  ModelMetrics     1.1.0      2016-08-26 CRAN (R 3.4.0)
##  modelr         * 0.1.1      2017-08-10 local
##  modeltools       0.2-21     2013-09-02 CRAN (R 3.4.0)
##  munsell          0.4.3      2016-02-13 CRAN (R 3.4.0)
##  nlme             3.1-131.1  2018-02-16 CRAN (R 3.4.3)
##  NLP              0.1-11     2017-08-15 CRAN (R 3.4.1)
##  nnet           * 7.3-12     2016-02-02 CRAN (R 3.4.3)
##  nycflights13   * 0.2.2      2017-01-27 CRAN (R 3.4.0)
##  openssl          1.0.1      2018-03-03 CRAN (R 3.4.3)
##  parallel       * 3.4.3      2017-12-07 local
##  pillar           1.2.1      2018-02-27 CRAN (R 3.4.3)
##  pkgconfig        2.0.1      2017-03-21 CRAN (R 3.4.0)
##  plyr             1.8.4      2016-06-08 CRAN (R 3.4.0)
##  png              0.1-7      2013-12-03 CRAN (R 3.4.0)
##  pROC           * 1.10.0     2017-06-10 CRAN (R 3.4.0)
##  prodlim          1.6.1      2017-03-06 CRAN (R 3.4.0)
##  profvis        * 0.3.5      2018-02-22 CRAN (R 3.4.3)
##  proto            1.0.0      2016-10-29 CRAN (R 3.4.0)
##  psych            1.7.8      2017-09-09 CRAN (R 3.4.1)
##  purrr          * 0.2.4      2017-10-18 CRAN (R 3.4.2)
##  quantreg       * 5.35       2018-02-02 CRAN (R 3.4.3)
##  R6               2.2.2      2017-06-17 CRAN (R 3.4.0)
##  randomForest   * 4.6-12     2015-10-07 CRAN (R 3.4.0)
##  rcfss          * 0.1.5      2017-07-31 local
##  Rcpp             0.12.15    2018-01-20 CRAN (R 3.4.3)
##  RcppRoll         0.2.2      2015-04-05 CRAN (R 3.4.0)
##  readr          * 1.1.1      2017-05-16 CRAN (R 3.4.0)
##  readxl         * 1.0.0      2017-04-18 CRAN (R 3.4.0)
##  rebird         * 0.4.0      2017-04-26 CRAN (R 3.4.0)
##  recipes          0.1.2      2018-01-11 CRAN (R 3.4.3)
##  reshape2         1.4.3      2017-12-11 CRAN (R 3.4.3)
##  RgoogleMaps      1.4.1      2016-09-18 CRAN (R 3.4.0)
##  rjson            0.2.15     2014-11-03 CRAN (R 3.4.0)
##  rlang            0.2.0      2018-02-20 cran (@0.2.0)
##  rmarkdown        1.9        2018-03-01 CRAN (R 3.4.3)
##  robustbase       0.92-8     2017-11-01 CRAN (R 3.4.2)
##  rpart            4.1-13     2018-02-23 CRAN (R 3.4.3)
##  rprojroot        1.3-2      2018-01-03 CRAN (R 3.4.3)
##  RSQLite        * 2.0        2017-06-19 CRAN (R 3.4.1)
##  rstudioapi       0.7        2017-09-07 CRAN (R 3.4.1)
##  rtweet         * 0.6.0      2017-11-16 CRAN (R 3.4.2)
##  rvest          * 0.3.2      2016-06-17 CRAN (R 3.4.0)
##  scales         * 0.5.0      2017-08-24 cran (@0.5.0)
##  sfsmisc          1.1-2      2018-03-05 CRAN (R 3.4.4)
##  shiny          * 1.0.5      2017-08-23 cran (@1.0.5)
##  slam             0.1-42     2017-12-21 CRAN (R 3.4.3)
##  SnowballC        0.5.1      2014-08-09 CRAN (R 3.4.0)
##  sp               1.2-7      2018-01-19 CRAN (R 3.4.3)
##  sparklyr       * 0.7.0      2018-01-23 CRAN (R 3.4.3)
##  SparseM        * 1.77       2017-04-23 CRAN (R 3.4.0)
##  splines        * 3.4.3      2017-12-07 local
##  stats          * 3.4.3      2017-12-07 local
##  stats4           3.4.3      2017-12-07 local
##  stringi          1.1.7      2018-03-12 CRAN (R 3.4.3)
##  stringr        * 1.3.0      2018-02-19 CRAN (R 3.4.3)
##  survival       * 2.41-3     2017-04-04 CRAN (R 3.4.3)
##  tibble         * 1.4.2      2018-01-22 CRAN (R 3.4.3)
##  tidyr          * 0.8.0      2018-01-29 CRAN (R 3.4.3)
##  tidyselect       0.2.4      2018-02-26 CRAN (R 3.4.3)
##  tidytext       * 0.1.7      2018-02-19 CRAN (R 3.4.3)
##  tidyverse      * 1.2.1      2017-11-14 CRAN (R 3.4.2)
##  timeDate         3043.102   2018-02-21 CRAN (R 3.4.3)
##  titanic        * 0.1.0      2015-08-31 CRAN (R 3.4.0)
##  tm               0.7-3      2017-12-06 CRAN (R 3.4.3)
##  tokenizers       0.1.4      2016-08-29 CRAN (R 3.4.0)
##  tools            3.4.3      2017-12-07 local
##  topicmodels    * 0.2-7      2017-11-03 CRAN (R 3.4.2)
##  tree           * 1.0-37     2016-01-21 CRAN (R 3.4.0)
##  tweenr         * 0.1.5      2016-10-10 CRAN (R 3.4.0)
##  utils          * 3.4.3      2017-12-07 local
##  withr            2.1.1      2017-12-19 CRAN (R 3.4.3)
##  XML            * 3.98-1.10  2018-02-19 CRAN (R 3.4.3)
##  xml2           * 1.2.0      2018-01-24 CRAN (R 3.4.3)
##  xtable           1.8-2      2016-02-05 CRAN (R 3.4.0)
##  yaml             2.1.18     2018-03-08 CRAN (R 3.4.4)
This work is licensed under the CC BY-NC 4.0 Creative Commons License.