{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "< [Processing of textual data](tutorial_1.ipynb) | [Contents](index.ipynb) | [Key term extraction](tutorial_3.ipynb) >" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Frequency analysis\n", "This tutorial introduces frequency analysis with basic R functions. We further introduce some text preprocessing functionality provided by the R package.\n", "\n", "1. Text preprocessing\n", "2. Time series\n", "3. Grouping of semantic categories\n", "4. Heatmaps" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "options(stringsAsFactors = FALSE)\n", "require(quanteda)\n", "require(magrittr)\n", "require(dplyr)\n", "require(data.table)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Text preprocessing\n", "\n", "Like in the previous tutorial we read the CSV data file containing the job postings. This time, we add one more column for the year. For the year we select a sub string of the four first characters from the `date` column of the data frame (e.g. extracting \"1990\" from \"1990-02-12\"). In later parts of the exercise we can use these columns for grouping data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "textdata <- read.csv(\"data/data job posts.csv\", header = TRUE, sep = \",\", encoding = \"UTF-8\",quote = \"\\\"\")\n", "\n", "# we add some more metadata columns to the data frame\n", "OL <- Sys.getlocale(\"LC_TIME\")\n", "#set the new locale\n", "Sys.setlocale(\"LC_TIME\",\"C\")\n", "textdata$date <- as.Date(textdata$date, format = \"%b %d, %Y\")\n", "textdata$year <- substr(textdata$date, 0, 4)\n", "textdata$decade <- paste0(substr(textdata$date, 0, 3), \"0\")\n", "#Delete not identifiable Dates\n", "textdata <- textdata[!is.na(textdata$date),]\n", "#Change the locale back to the old value\n", "Sys.setlocale(\"LC_TIME\", OL)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then, we create a corpus object again. For metadata we can add a DateTimeStamp to our table mapping of metadata and data.frame fields. Moreover, we apply different preprocessing steps to the corpus text. `removePunctuation` leaves only alphanumeric characters in the text. `removeNumbers` removes numeric characters. Then lowercase transformation is performed and an English set of stop-words is removed.\n", "\n", "We see that the text is now a sequence of text features corresponding to the selected methods (other preprocessing steps could include stemming or lemmatization).\n", "\n", "From the preprocessed corpus, we create a new DTM." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "textdata <- as.data.table(textdata)\n", "\n", "english_stopwords <- readLines(\"data/stopwords_en.txt\", encoding = \"UTF-8\")\n", "\n", "textdata %<>% filter(!duplicated(jobpost))\n", "textdata %<>% mutate(d_id = 1:nrow(textdata))\n", "\n", "#Build a dictionary of lemmas\n", "lemmaData <- read.csv2(\"data/baseform_en.tsv\", sep=\"\\t\", header=FALSE, encoding = \"UTF-8\", stringsAsFactors = F)\n", "\n", "data_corpus <- corpus(textdata$jobpost, docnames = textdata$d_id)\n", "\n", "# Create a DTM (may take a while)\n", "data_dfm_entries <- data_corpus %>% tokens() %>%\n", " tokens(remove_punct = TRUE, remove_numbers = TRUE, remove_symbols = TRUE) %>% tokens_tolower() %>% \n", " tokens_replace(., lemmaData$V1, lemmaData$V2) %>%\n", " tokens_ngrams(1) %>% tokens_remove(pattern = stopwords()) %>% dfm() \n", "\n", "\n", "data_dfm_entries_sub <- data_dfm_entries %>%\n", " dfm_select(pattern = \"[a-z]\", valuetype = \"regex\", selection = 'keep')\n", "\n", "colnames(data_dfm_entries_sub) <- colnames(data_dfm_entries_sub) %>% stringi::stri_replace_all_regex(\"[^_a-z]\", \"\") \n", "\n", "DTM <- dfm_compress(data_dfm_entries_sub, \"features\")\n", "# Show some information\n", "DTM\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Time series\n", "\n", "We now want to measure frequencies of certain terms over time. Frequencies in single years are plotted as line graphs to follow their trends over time. First, we determine which terms to analyze and reduce our DTM to this these terms." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "terms_to_observe <- c(\"experience\", \"manual\", \"creative\", \"hard\", \"team\")\n", "\n", "DTM_reduced <- as.matrix(DTM[, terms_to_observe])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The reduced DTM contains counts for each of our " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "length(terms_to_observe)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "terms and in each of the" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "nrow(DTM_reduced)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "documents (rows of the reduced DTM).\n", "\n", "Information of each year per document we added in the beginning to the `textdata` variable. We use `textdata$year` as a grouping parameter for the `aggregate` function. This function sub-selects rows from the input data (`DTM_reduced`) for all different year values given in the `by`-parameter. Each sub-selection is processed column-wise using the function provided in the third parameter (`sum`)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "counts_per_year <- aggregate(DTM_reduced, by = list(year = textdata$year), sum)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`counts_per_year` now contains sums of term frequencies per year. Time series for single terms can be plotted either by the simple `plot` function. Additional time series could be added by the `lines`-function (Tutorial 2). A more simple way is to use the matplot-function which can draw multiple lines in one command." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# give x and y values beautiful names\n", "years <- counts_per_year$year\n", "frequencies <- counts_per_year[, terms_to_observe]\n", "\n", "# plot multiple frequencies\n", "matplot(years, frequencies, type = \"l\")\n", "\n", "# add legend to the plot\n", "l <- length(terms_to_observe)\n", "legend('topleft', legend = terms_to_observe, col=1:l, text.col = 1:l, lty = 1:l)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Among other things, we can observe peaks in reference to `war` around the US civil war, around 1900 and WWII. The term nation also peaks around 1900 giving hints for further investigations on possible relatedness of both terms during that time. References to security, god of terror appear to be more 'modern' phenomena." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Grouping of sentiments\n", "\n", "Frequencies cannot only be aggregated over time for time series analysis, but to count categories of terms for comparison. For instance, we can model a very simple **Sentiment analysis** approach using lists of positive an negative words. Then, counts of these words can be aggregated w.r.t to any metadata. For instance, if we count sentiment words per company, we can get an impression of who utilized emotional language to what extent.\n", "\n", "We provide a selection of very positive / negative English words extracted from SentiWordNet 3.0 (see @BACCIANELLA10.769). Have a look in the text files to see, what they consist of." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "positive_terms_all <- readLines(\"data/senti_words_positive.txt\")\n", "negative_terms_all <- readLines(\"data/senti_words_negative.txt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To count occurrence of these terms in our posts, we first need to restrict the list to those words actually occurring in our posts. These terms then can be aggregated per job post by a simple `row_sums` command." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "positive_terms_in_suto <- intersect(colnames(DTM), positive_terms_all)\n", "counts_positive <- rowSums(DTM[, positive_terms_in_suto])\n", "\n", "negative_terms_in_suto <- intersect(colnames(DTM), negative_terms_all)\n", "counts_negative <- rowSums(DTM[, negative_terms_in_suto])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since lengths of job posts tend to vary greatly, we do want to measure relative frequencies of sentiment terms. This can be achieved by dividing counts of sentiment terms by the number of all terms in each document. The relative frequencies we store in a dataframe for subsequent aggregation and visualization." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "counts_all_terms <- rowSums(DTM)\n", "\n", "relative_sentiment_frequencies <- data.frame(\n", " positive = counts_positive / counts_all_terms,\n", " negative = counts_negative / counts_all_terms\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we aggregate not per year, but per company. Further we do not take the sum (not all companies have the same number of job posts) but the mean. A sample output shows the computed mean sentiment scores per company." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sentiments_per_Company <- aggregate(relative_sentiment_frequencies, by = list(Company = textdata$Company), mean)\n", "\n", "company_count <-table(textdata$Company)\n", "company_count <- company_count[company_count > 50]\n", "head(sentiments_per_Company[sentiments_per_Company$Company %in% names(company_count),])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Scores per company can be visualized as bar plot. The package `ggplot2` offers a great variety of plots. The package `reshape2` offers functions to convert data into the right format for ggplot2. For more information on ggplot2 see: http://ggplot2.org" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "require(reshape2)\n", "df <- melt(head(sentiments_per_Company[sentiments_per_Company$Company %in% names(company_count),],n = 20), id.vars = \"Company\")\n", "require(ggplot2)\n", "ggplot(data = df, aes(x = Company, y = value, fill = variable)) + \n", " geom_bar(stat=\"identity\", position=position_dodge()) + coord_flip()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The standard output is sorted by company names alphabetically. We can make use of the reorder command, to sort by positive / negative sentiment score:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# order by positive sentiments\n", "ggplot(data = df, aes(x = reorder(Company, df$value, head, 1), y = value, fill = variable)) + geom_bar(stat=\"identity\", position=position_dodge()) + coord_flip()\n", "\n", "# order by negative sentiments\n", "ggplot(data = df, aes(x = reorder(Company, df$value, tail, 1), y = value, fill = variable)) + geom_bar(stat=\"identity\", position=position_dodge()) + coord_flip()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Heatmaps\n", "\n", "The overlapping of several time series in a plot can become very confusing. Heatmaps provide an alternative for the visualization of multiple frequencies over time. In this visualization method, a time series is mapped as a row in a matrix grid. Each cell of the grid is filled with a color corresponding to the value from the time series. Thus, several time series can be displayed in parallel. \n", "\n", "In addition, the time series can be sorted by similarity in a heatmap. In this way, similar frequency sequences with parallel shapes (heat activated cells) can be detected more quickly. Dendrograms can be plotted aside to visualize quantities of similarity." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "terms_to_observe <- c(\"work\", \"responsibility\", \"health\", \"hard\", \"creative\", \n", " \"competetive\", \"friendly\", \"reliable\", \"technology\", \n", " \"manual\", \"skill\", \"payment\")\n", "DTM_reduced <- as.matrix(DTM[, terms_to_observe])\n", "counts_per_year <- aggregate(DTM_reduced, by = list(year = textdata$year), sum)\n", "rownames(counts_per_year) <- counts_per_year$year\n", "counts_per_year <- counts_per_year[!(colnames(counts_per_year) %in% \"year\")]\n", "heatmap(t(counts_per_year), Colv=NA, col = rev(heat.colors(256)), keep.dendro= FALSE, margins = c(5, 10))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "R", "language": "R", "name": "ir" }, "language_info": { "codemirror_mode": "r", "file_extension": ".r", "mimetype": "text/x-r-source", "name": "R", "pygments_lexer": "r", "version": "3.3.2" } }, "nbformat": 4, "nbformat_minor": 2 }