---
description: Quality control of single cell RNA-Seq data. Inspection of QC metrics including number of UMIs, number of genes expressed, mitochondrial and ribosomal expression, sex and cell cycle state.
subtitle: Seurat Toolkit
title: Quality Control
---
> **Note**
>
> Code chunks run R commands unless otherwise specified.
## Get data

In this tutorial, we will run all tutorials with a set of 8 PBMC 10x datasets from 4 covid-19 patients and 4 healthy controls. The samples have been subsampled to 1500 cells per sample. We can start by defining our paths.

``` {r}
# download pre-computed annotation
fetch_annotation <- TRUE

# url for source and intermediate data
path_data <- "https://export.uppmax.uu.se/naiss2023-23-3/workshops/workshop-scrnaseq"

path_covid <- "./data/covid"
if (!dir.exists(path_covid)) dir.create(path_covid, recursive = T)

path_results <- "data/covid/results"
if (!dir.exists(path_results)) dir.create(path_results, recursive = T)
```

``` {r}
file_list <- c(
    "normal_pbmc_13.h5", "normal_pbmc_14.h5", "normal_pbmc_19.h5", "normal_pbmc_5.h5",
    "ncov_pbmc_15.h5", "ncov_pbmc_16.h5", "ncov_pbmc_17.h5", "ncov_pbmc_1.h5"
)

for (i in file_list) {
    path_file <- file.path(path_covid, i)
    if (!file.exists(path_file)) {
        download.file(url = file.path(file.path(path_data, "covid"), i), destfile = path_file)
    }
}
```

With the data in place, we can now load the libraries we will use in this tutorial.

``` {r}
suppressPackageStartupMessages({
    library(Seurat)
    library(Matrix)
    library(ggplot2)
    library(patchwork)
    # remotes::install_github("chris-mcginnis-ucsf/DoubletFinder", upgrade = FALSE, dependencies = TRUE)
    library(DoubletFinder)
})
```

We first load each dataset individually by reading directly from the HDF5 file format (.h5).

``` {r}
cov.15 <- Seurat::Read10X_h5(
    filename = file.path(path_covid, "ncov_pbmc_15.h5"),
    use.names = T
)
cov.1 <- Seurat::Read10X_h5(
    filename = file.path(path_covid, "ncov_pbmc_1.h5"),
    use.names = T
)
cov.16 <- Seurat::Read10X_h5(
    filename = file.path(path_covid, "ncov_pbmc_16.h5"),
    use.names = T
)
cov.17 <- Seurat::Read10X_h5(
    filename = file.path(path_covid, "ncov_pbmc_17.h5"),
    use.names = T
)

ctrl.5 <- Seurat::Read10X_h5(
    filename = file.path(path_covid, "normal_pbmc_5.h5"),
    use.names = T
)
ctrl.13 <- Seurat::Read10X_h5(
    filename = file.path(path_covid, "normal_pbmc_13.h5"),
    use.names = T
)
ctrl.14 <- Seurat::Read10X_h5(
    filename = file.path(path_covid, "normal_pbmc_14.h5"),
    use.names = T
)
ctrl.19 <- Seurat::Read10X_h5(
    filename = file.path(path_covid, "normal_pbmc_19.h5"),
    use.names = T
)
```

## Collate

We can now merge the objects into a single object. Each analysis workflow (Seurat, Scater, Scanpy, etc.) has its own way of storing data. We will add dataset labels as **cell.ids**, just in case you have overlapping barcodes between the datasets. After that, we add a column **type** in the metadata to distinguish covid and ctrl samples.

But first, we need to create Seurat objects using each of the expression matrices we loaded. We define each sample in the `project` slot, so in each object, the sample id can be found in the metadata slot `orig.ident`.
``` {r}
sdata.cov1 <- CreateSeuratObject(cov.1, project = "covid_1")
sdata.cov15 <- CreateSeuratObject(cov.15, project = "covid_15")
sdata.cov17 <- CreateSeuratObject(cov.17, project = "covid_17")
sdata.cov16 <- CreateSeuratObject(cov.16, project = "covid_16")
sdata.ctrl5 <- CreateSeuratObject(ctrl.5, project = "ctrl_5")
sdata.ctrl13 <- CreateSeuratObject(ctrl.13, project = "ctrl_13")
sdata.ctrl14 <- CreateSeuratObject(ctrl.14, project = "ctrl_14")
sdata.ctrl19 <- CreateSeuratObject(ctrl.19, project = "ctrl_19")

# add metadata
sdata.cov1$type <- "Covid"
sdata.cov15$type <- "Covid"
sdata.cov16$type <- "Covid"
sdata.cov17$type <- "Covid"

sdata.ctrl5$type <- "Ctrl"
sdata.ctrl13$type <- "Ctrl"
sdata.ctrl14$type <- "Ctrl"
sdata.ctrl19$type <- "Ctrl"

# Merge datasets into one single seurat object
alldata <- merge(sdata.cov1, c(sdata.cov15, sdata.cov16, sdata.cov17, sdata.ctrl5, sdata.ctrl13, sdata.ctrl14, sdata.ctrl19),
    add.cell.ids = c("covid_1", "covid_15", "covid_16", "covid_17", "ctrl_5", "ctrl_13", "ctrl_14", "ctrl_19")
)
```

Once you have created the merged Seurat object, the individual count matrices and Seurat objects are not needed anymore. It is a good idea to remove them and run garbage collection to free up some memory.

``` {r}
# remove all objects that will not be used.
rm(cov.1, cov.15, cov.16, cov.17, ctrl.5, ctrl.13, ctrl.14, ctrl.19, sdata.cov1, sdata.cov15, sdata.cov16, sdata.cov17, sdata.ctrl5, sdata.ctrl13, sdata.ctrl14, sdata.ctrl19)

# run garbage collection to free up memory
gc()
```

Here is what the count matrix and the metadata look like for every cell.

``` {r}
as.data.frame(alldata@assays$RNA@counts[1:10, 1:2])
head(alldata@meta.data, 10)
```

## Calculate QC

Having the data in a suitable format, we can start calculating some quality metrics. We can, for example, calculate the percentage of mitochondrial and ribosomal genes per cell and add it to the metadata. The proportion of hemoglobin genes can give an indication of red blood cell contamination. It will be helpful to visualize these metrics across different metadata parameters (e.g. dataset ID and chemistry version). There are several ways of doing this. The QC metrics are finally added to the metadata table.

Citing from Simple Single Cell workflows (Lun, McCarthy & Marioni, 2017): "High proportions are indicative of poor-quality cells (Islam et al. 2014; Ilicic et al. 2016), possibly because of loss of cytoplasmic RNA from perforated cells. The reasoning is that mitochondria are larger than individual transcript molecules and less likely to escape through tears in the cell membrane."

``` {r}
# method 1: doing it using Seurat function
alldata <- PercentageFeatureSet(alldata, "^MT-", col.name = "percent_mito")
```

``` {r}
# Ribosomal
alldata <- PercentageFeatureSet(alldata, "^RP[SL]", col.name = "percent_ribo")

# Percentage hemoglobin genes - includes all genes starting with HB except HBP.
alldata <- PercentageFeatureSet(alldata, "^HB[^(P|E|S)]", col.name = "percent_hb")

# Percentage for some platelet markers
alldata <- PercentageFeatureSet(alldata, "PECAM1|PF4", col.name = "percent_plat")
```
> **Tip**
>
> Alternatively, percentage expression can be calculated manually. Here
> is an example. Do not run this script now.
>
> ``` {r}
> #| eval: false
> # Do not run now!
> total_counts_per_cell <- colSums(alldata@assays$RNA@counts)
> mito_genes <- rownames(alldata)[grep("^MT-", rownames(alldata))]
> alldata$percent_mito2 <- colSums(alldata@assays$RNA@counts[mito_genes, ]) / total_counts_per_cell
> ```
Now you can see that we have additional data in the metadata slot.

``` {r}
head(alldata[[]])
```

## Plot QC

Now we can plot some of the QC variables as violin plots.

``` {r}
#| fig-height: 8
#| fig-width: 15
feats <- c("nFeature_RNA", "nCount_RNA", "percent_mito", "percent_ribo", "percent_hb", "percent_plat")
VlnPlot(alldata, group.by = "orig.ident", split.by = "type", features = feats, pt.size = 0.1, ncol = 3)
```

As you can see, there is quite a difference in quality between these samples, with for instance the covid_15 and covid_16 samples having cells with fewer detected genes and higher mitochondrial content. As the ribosomal proteins are highly expressed, they make up a larger proportion of the transcriptional landscape when fewer of the lowly expressed genes are detected. We can also plot the different QC measures as scatter plots.

``` {r}
#| fig-height: 8
#| fig-width: 15
FeatureScatter(alldata, "nCount_RNA", "nFeature_RNA", group.by = "orig.ident", pt.size = .5)
```
> **Discuss**
>
> Plot additional QC stats that we have calculated as scatter plots. How
> are the different measures correlated? Can you explain why?
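As a starting point for that discussion, one possible way to look at a couple of the other metrics is sketched below; the specific metric pairs are only suggestions, not part of the original exercise.

``` {r}
#| fig-height: 8
#| fig-width: 15
# suggested extra scatter plots: mitochondrial content vs detected genes,
# and ribosomal vs mitochondrial content, coloured by sample
FeatureScatter(alldata, "percent_mito", "nFeature_RNA", group.by = "orig.ident", pt.size = .5)
FeatureScatter(alldata, "percent_ribo", "percent_mito", group.by = "orig.ident", pt.size = .5)
```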
## Filtering

### Detection-based filtering

A standard approach is to filter out cells with a low number of detected genes, and to keep only genes that are present in at least a given number of cells. Here, we will only keep cells with at least 200 detected genes, and genes need to be expressed in at least 3 cells. Please note that these values are highly dependent on the library preparation method used.

``` {r}
selected_c <- WhichCells(alldata, expression = nFeature_RNA > 200)
selected_f <- rownames(alldata)[Matrix::rowSums(alldata) > 3]

data.filt <- subset(alldata, features = selected_f, cells = selected_c)
dim(data.filt)
table(data.filt$orig.ident)
```

An extremely high number of detected genes could indicate a doublet. However, depending on the cell type composition of your sample, you may also have cells with a higher number of genes (and higher counts) simply because of their cell type. In this case, we will run doublet prediction further down, so we skip this step for now, but the code below is an example of how it could be run:

``` {r}
# skip and run DoubletFinder instead
# data.filt <- subset(data.filt, cells=WhichCells(data.filt, expression = nFeature_RNA < 4100))
```

Additionally, we can also see which genes contribute the most to such reads. We can, for instance, plot the percentage of counts per gene.

``` {r}
#| fig-height: 8
#| fig-width: 15
# Compute the proportion of counts of each gene per cell
# Use sparse matrix operations; if your dataset is large, doing matrix divisions the regular way will take a very long time.
C <- data.filt@assays$RNA@counts
C@x <- C@x / rep.int(colSums(C), diff(C@p)) * 100
most_expressed <- order(Matrix::rowSums(C), decreasing = T)[20:1]

boxplot(as.matrix(t(C[most_expressed, ])),
    cex = 0.1, las = 1,
    xlab = "Percent counts per cell",
    col = (scales::hue_pal())(20)[20:1],
    horizontal = TRUE
)
```

As you can see, MALAT1 constitutes up to 30% of the UMIs from a single cell, and the other top genes are mitochondrial and ribosomal genes. It is quite common that nuclear lincRNAs correlate with quality and mitochondrial reads, so high detection of MALAT1 may be a technical issue. Let us assemble some information about such genes, which are important for quality control and downstream filtering.

### Mito/Ribo filtering

We also have quite a lot of cells with a high proportion of mitochondrial and a low proportion of ribosomal reads. It would be wise to remove those cells, if we have enough cells left after filtering. Another option would be to remove all mitochondrial reads from the dataset and hope that the remaining genes still carry enough biological signal. A third option would be to simply regress out the `percent_mito` variable during scaling. In this case, some of the cells had as much as 99.7% mitochondrial reads, so it is quite unlikely that there is much cell type signature left in those.

Looking at the plots, make a reasonable decision on where to draw the cutoff. In this case, the bulk of the cells are below 20% mitochondrial reads, and that will be used as a cutoff. We will also remove cells with less than 5% ribosomal reads.

``` {r}
#| fig-height: 5
#| fig-width: 15
selected_mito <- WhichCells(data.filt, expression = percent_mito < 20)
selected_ribo <- WhichCells(data.filt, expression = percent_ribo > 5)

# and subset the object to only keep those cells
data.filt <- subset(data.filt, cells = selected_mito)
data.filt <- subset(data.filt, cells = selected_ribo)

dim(data.filt)
table(data.filt$orig.ident)
```

As you can see, a large proportion of sample covid_15 is filtered out.
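To quantify this, a quick sketch like the one below (assuming the unfiltered `alldata` object is still in memory) compares the number of cells per sample before and after filtering.

``` {r}
# compare cell numbers per sample before and after filtering
# (assumes `alldata` has not been removed yet)
cbind(
    before = table(alldata$orig.ident),
    after = table(data.filt$orig.ident)
)
```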
Also, there is still quite a lot of variation in `percent_mito`, so it will have to be dealt with in the data analysis step. We can also notice that `percent_ribo` is highly variable, but that is expected, since different cell types have different proportions of ribosomal content according to their function.

### Plot filtered QC

Let's plot the same QC stats once more.

``` {r}
#| fig-height: 8
#| fig-width: 15
feats <- c("nFeature_RNA", "nCount_RNA", "percent_mito", "percent_ribo", "percent_hb")
VlnPlot(data.filt, group.by = "orig.ident", features = feats, pt.size = 0.1, ncol = 3) + NoLegend()
```

### Filter genes

As the expression of mitochondrial and MALAT1 genes is judged to be mainly technical, it can be wise to remove them from the dataset before any further analysis. In this case we will also remove the HB genes.

``` {r}
dim(data.filt)

# Filter MALAT1
data.filt <- data.filt[!grepl("MALAT1", rownames(data.filt)), ]

# Filter Mitochondrial
data.filt <- data.filt[!grepl("^MT-", rownames(data.filt)), ]

# Filter Ribosomal genes (optional, if that is a problem in your data)
# data.filt <- data.filt[ ! grepl("^RP[SL]", rownames(data.filt)), ]

# Filter Hemoglobin genes (optional, if that is a problem in your data)
# data.filt <- data.filt[!grepl("^HB[^(P|E|S)]", rownames(data.filt)), ]

dim(data.filt)
```

## Sample sex

When working with human or animal samples, you should ideally constrain your experiments to a single sex to avoid including sex bias in the conclusions. However, this may not always be possible. By looking at reads from chromosome Y (males) and XIST (X-inactive specific transcript) expression (mainly females), it is quite easy to determine the sex of each sample. It is also a good way to detect mislabelling, in which case the sample metadata sex does not agree with the computational predictions.

To get chromosome information for all genes, you should ideally parse the information from the GTF file that you used in the mapping pipeline, as it has the exact same annotation version and gene naming. However, it may not always be available, as in this case where we have downloaded public data. The R package biomaRt can be used to fetch annotation information. The code to run biomaRt is provided below. As the biomart instances are quite often unresponsive, we will download and use a file that was created in advance.
> **Tip**
>
> Here is the code to download annotation data from Ensembl using
> biomaRt. We will not run this now and instead use a pre-computed file
> in the step below.
>
> ``` {r}
> # fetch_annotation is defined at the top of this document
> if (!fetch_annotation) {
>     suppressMessages(library(biomaRt))
>
>     # initialize connection to mart, may take some time if the sites are unresponsive.
>     mart <- useMart("ENSEMBL_MART_ENSEMBL", dataset = "hsapiens_gene_ensembl")
>
>     # fetch chromosome info plus some other annotations
>     genes_table <- try(biomaRt::getBM(attributes = c(
>         "ensembl_gene_id", "external_gene_name",
>         "description", "gene_biotype", "chromosome_name", "start_position"
>     ), mart = mart, useCache = F))
>
>     write.csv(genes_table, file = "data/covid/results/genes_table.csv")
> }
> ```
Download the precomputed data.

``` {r}
# fetch_annotation is defined at the top of this document
if (fetch_annotation) {
    genes_file <- file.path(path_results, "genes_table.csv")
    if (!file.exists(genes_file)) download.file(file.path(path_data, "covid/results/genes_table.csv"), destfile = genes_file)
}
```

``` {r}
genes.table <- read.csv(genes_file)
genes.table <- genes.table[genes.table$external_gene_name %in% rownames(data.filt), ]
```

Now that we have the chromosome information, we can calculate the proportion of reads that come from chromosome Y per cell.

``` {r}
chrY.gene <- genes.table$external_gene_name[genes.table$chromosome_name == "Y"]

data.filt$pct_chrY <- colSums(data.filt@assays$RNA@counts[chrY.gene, ]) / colSums(data.filt@assays$RNA@counts)
```

Then we plot XIST expression versus chrY proportion. As you can see, the samples fall clearly on either side, even if some cells do not have detection of either.

``` {r}
FeatureScatter(data.filt, feature1 = "XIST", feature2 = "pct_chrY")
```

Plot as violins.

``` {r}
VlnPlot(data.filt, features = c("XIST", "pct_chrY"))
```
> **Discuss**
>
> Here, we can see clearly that we have three males and five females.
> Can you see which samples they are? Do you think this will cause any
> problems for downstream analysis? Discuss with your group: what would
> be the best way to deal with this type of sex bias?
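One way to anchor that discussion is to summarise the two metrics per sample. The sketch below is only an illustration; the `sex_summary` helper is hypothetical, and at this point in the workflow XIST is fetched from the data slot, which still holds raw counts.

``` {r}
# hypothetical per-sample summary of the two sex-related metrics
sex_summary <- data.frame(
    sample = data.filt$orig.ident,
    pct_chrY = data.filt$pct_chrY,
    XIST = FetchData(data.filt, vars = "XIST")[, 1]
)
aggregate(cbind(pct_chrY, XIST) ~ sample, data = sex_summary, FUN = mean)
```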
## Cell cycle state

Here we perform cell cycle scoring. To score a gene list, the algorithm calculates the difference between the mean expression of the given list and the mean expression of a set of reference genes. To build the reference, the function randomly chooses genes matching the expression distribution of the given list. Cell cycle scoring adds three slots to the metadata: a score for S phase, a score for G2M phase and the predicted cell cycle phase.

``` {r}
# Before running CellCycleScoring the data need to be normalized and log-transformed.
data.filt <- NormalizeData(data.filt)

data.filt <- CellCycleScoring(
    object = data.filt,
    g2m.features = cc.genes$g2m.genes,
    s.features = cc.genes$s.genes
)
```

We can now create a violin plot for the cell cycle scores as well.

``` {r}
#| fig-height: 5
#| fig-width: 15
VlnPlot(data.filt, features = c("S.Score", "G2M.Score"), group.by = "orig.ident", ncol = 3, pt.size = .1)
```

In this case, it looks like we only have a few cycling cells in these datasets. Seurat does an automatic prediction of cell cycle phase with a default cutoff of the scores at zero. As you can see, this does not fit these data very well, so be cautious about using these predictions. Instead, we suggest that you look at the scores.

``` {r}
#| fig-height: 7
#| fig-width: 7
FeatureScatter(data.filt, "S.Score", "G2M.Score", group.by = "Phase")
```

## Predict doublets

Doublets/multiplets, i.e. two or more cells in the same well/droplet, are a common issue in scRNA-seq protocols, especially in droplet-based methods where cells are overloaded. In a typical 10x experiment, the proportion of doublets is linearly dependent on the number of loaded cells. As indicated in the Chromium user guide, doublet rates are roughly as follows:\
![](../figs/10x_doublet_rate.png)\
Most doublet detectors simulate doublets by merging cell counts and predict doublets as cells whose embeddings are similar to those of the simulated doublets. Most such packages need an assumption about the number/proportion of expected doublets in the dataset. The data you are using is subsampled, but the original datasets contained about 5,000 cells per sample; hence, we can assume that about 9,000 cells were loaded and that the doublet rate is about 4%.
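As a rough sanity check of that figure, a commonly cited rule of thumb is on the order of 0.8% doublets per 1,000 recovered cells; note that this is an approximation, not an exact 10x specification.

``` {r}
# rule-of-thumb estimate (approximate): ~0.8% doublets per 1,000 recovered cells
recovered_cells <- 5000
expected_doublet_rate <- 0.008 * recovered_cells / 1000
expected_doublet_rate # roughly 0.04, i.e. about 4%
```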
> **Caution**
>
> Ideally, doublet prediction should be run on each sample separately,
> especially if your samples have different proportions of cell types.
> In this case, the data is subsampled so we have very few cells per
> sample and all samples are sorted PBMCs, so it is okay to run them
> together.
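For a full-size dataset where you do want to follow that advice, a per-sample run could look roughly like the sketch below. It is not part of the original workflow: it simply reuses the same `DoubletFinder` call and parameter choices introduced for the merged object in the next step, and assumes the 4% doublet rate discussed above.

``` {r}
#| eval: false
# sketch only: per-sample doublet detection, assuming data.filt as created above
obj.list <- SplitObject(data.filt, split.by = "orig.ident")

obj.list <- lapply(obj.list, function(x) {
    x <- NormalizeData(x, verbose = FALSE)
    x <- FindVariableFeatures(x, verbose = FALSE)
    x <- ScaleData(x, verbose = FALSE)
    x <- RunPCA(x, npcs = 20, verbose = FALSE)
    # scale the expected number of doublets to the size of each sample
    nExp_i <- round(ncol(x) * 0.04)
    doubletFinder_v3(x, pN = 0.25, pK = 0.09, nExp = nExp_i, PCs = 1:10)
})

# the per-sample doublet calls can then be collected from each object's metadata
```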
Here, we will use `DoubletFinder` to predict doublet cells. But before doing doublet detection, we need to run variable gene selection, scaling and PCA, as well as UMAP for visualization. These steps will be explored in more detail in the coming exercises.

``` {r}
data.filt <- FindVariableFeatures(data.filt, verbose = F)
data.filt <- ScaleData(data.filt, vars.to.regress = c("nFeature_RNA", "percent_mito"), verbose = F)
data.filt <- RunPCA(data.filt, verbose = F, npcs = 20)
data.filt <- RunUMAP(data.filt, dims = 1:10, verbose = F)
```

Then we run DoubletFinder, using the first 10 PCs and a `pK` value of 0.09. To optimize the parameters, you can run the `paramSweep` function in the package.

``` {r}
suppressMessages(library(DoubletFinder))

# Can run parameter optimization with paramSweep
# sweep.res <- paramSweep_v3(data.filt)
# sweep.stats <- summarizeSweep(sweep.res, GT = FALSE)
# bcmvn <- find.pK(sweep.stats)
# barplot(bcmvn$BCmetric, names.arg = bcmvn$pK, las=2)

# define the expected number of doublet cells.
nExp <- round(ncol(data.filt) * 0.04) # expect 4% doublets
data.filt <- doubletFinder_v3(data.filt, pN = 0.25, pK = 0.09, nExp = nExp, PCs = 1:10)
```

``` {r}
#| fig-height: 4
#| fig-width: 10
# name of the DF prediction can change, so extract the correct column name.
DF.name <- colnames(data.filt@meta.data)[grepl("DF.classification", colnames(data.filt@meta.data))]

wrap_plots(
    DimPlot(data.filt, group.by = "orig.ident") + NoAxes(),
    DimPlot(data.filt, group.by = DF.name) + NoAxes(),
    ncol = 2
)
```

We should expect that two cells have more detected genes than a single cell. Let's check if our predicted doublets also have more detected genes in general.

``` {r}
VlnPlot(data.filt, features = "nFeature_RNA", group.by = DF.name, pt.size = .1)
```

Now, let's remove all predicted doublets from our data.

``` {r}
data.filt <- data.filt[, data.filt@meta.data[, DF.name] == "Singlet"]
dim(data.filt)
```

## Save data

Finally, let's save the QC-filtered data for further analysis. The data is saved to the output directory `data/covid/results` created at the top of this document and will be used in downstream labs.

``` {r}
saveRDS(data.filt, file.path(path_results, "seurat_covid_qc.rds"))
```

## Session info
``` {r}
sessionInfo()
```