# Chapter 11 Marker gene detection

## 11.1 Motivation

To interpret our clustering results from Chapter 10, we identify the genes that drive separation between clusters. These marker genes allow us to assign biological meaning to each cluster based on their functional annotation. In the most obvious case, the marker genes for each cluster are a priori associated with particular cell types, allowing us to treat the clustering as a proxy for cell type identity. The same principle can be applied to discover more subtle differences between clusters (e.g., changes in activation or differentiation state) based on the behavior of genes in the affected pathways.

Identification of marker genes is usually based around the retrospective detection of differential expression between clusters. Genes that are more strongly DE are more likely to have caused separate clustering of cells in the first place. Several different statistical tests are available to quantify the differences in expression profiles, and different approaches can be used to consolidate test results into a single ranking of genes for each cluster. These choices parametrize the theoretical differences between the various marker detection strategies presented in this chapter. We will demonstrate using the 10X PBMC dataset:

#--- setup ---#
library(OSCAUtils)
chapterPreamble(use_cache = TRUE)

library(BiocFileCache)
bfc <- BiocFileCache("raw_data", ask = FALSE)
raw.path <- bfcrpath(bfc, file.path("http://cf.10xgenomics.com/samples",
"cell-exp/2.1.0/pbmc4k/pbmc4k_raw_gene_bc_matrices.tar.gz"))
untar(raw.path, exdir=file.path(tempdir(), "pbmc4k"))

library(DropletUtils)
fname <- file.path(tempdir(), "pbmc4k/raw_gene_bc_matrices/GRCh38")
sce.pbmc <- read10xCounts(fname, col.names=TRUE)

#--- gene-annotation ---#
library(scater)
rownames(sce.pbmc) <- uniquifyFeatureNames(
rowData(sce.pbmc)$ID, rowData(sce.pbmc)$Symbol)

library(EnsDb.Hsapiens.v86)
location <- mapIds(EnsDb.Hsapiens.v86, keys=rowData(sce.pbmc)$ID,
    column="SEQNAME", keytype="GENEID")

#--- cell-detection ---#
set.seed(100)
e.out <- emptyDrops(counts(sce.pbmc))
sce.pbmc <- sce.pbmc[,which(e.out$FDR <= 0.001)]

#--- quality-control ---#
stats <- perCellQCMetrics(sce.pbmc, subsets=list(Mito=which(location=="MT")))
high.mito <- isOutlier(stats$subsets_Mito_percent, type="higher")
sce.pbmc <- sce.pbmc[,!high.mito]

#--- normalization ---#
library(scran)
set.seed(1000)
clusters <- quickCluster(sce.pbmc)
sce.pbmc <- computeSumFactors(sce.pbmc, cluster=clusters)
sce.pbmc <- logNormCounts(sce.pbmc)

#--- variance-modelling ---#
set.seed(1001)
dec.pbmc <- modelGeneVarByPoisson(sce.pbmc)
top.pbmc <- getTopHVGs(dec.pbmc, prop=0.1)

#--- dimensionality-reduction ---#
set.seed(10000)
sce.pbmc <- denoisePCA(sce.pbmc, subset.row=top.pbmc, technical=dec.pbmc)

set.seed(100000)
sce.pbmc <- runTSNE(sce.pbmc, use_dimred="PCA")

set.seed(1000000)
sce.pbmc <- runUMAP(sce.pbmc, use_dimred="PCA")

#--- clustering ---#
g <- buildSNNGraph(sce.pbmc, k=10, use.dimred='PCA')
clust <- igraph::cluster_walktrap(g)$membership
sce.pbmc$cluster <- factor(clust)
sce.pbmc

## class: SingleCellExperiment 
## dim: 33694 3922 
## metadata(1): Samples
## assays(2): counts logcounts
## rownames(33694): RP11-34P13.3 FAM138A ... AC213203.1 FAM231B
## rowData names(2): ID Symbol
## colnames(3922): AAACCTGAGAAGGCCT-1 AAACCTGAGACAGACC-1 ... TTTGTCACAGGTCCAC-1
##   TTTGTCATCCCAAGAT-1
## colData names(3): Sample Barcode cluster
## reducedDimNames(3): PCA TSNE UMAP
## spikeNames(0):
## altExpNames(0):

## 11.2 Using pairwise $$t$$-tests

### 11.2.1 Standard application

The Welch $$t$$-test is an obvious choice of statistical method to test for differences in expression between clusters. It is quickly computed and has good statistical properties for large numbers of cells (Soneson and Robinson 2018). We use the findMarkers() function to perform pairwise comparisons between clusters for each gene, which returns a list of DataFrames containing ranked candidate markers for each cluster.

library(scran)
markers.pbmc <- findMarkers(sce.pbmc, sce.pbmc$cluster)
markers.pbmc
## List of length 18
## names(18): 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

To demonstrate, we use cluster 9 as our cluster of interest for this section. The relevant DataFrame contains log2-fold changes of expression in cluster 9 over each other cluster, along with several statistics obtained by combining $$p$$-values (Simes 1986) across the pairwise comparisons involving cluster 9.
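
Simes' method itself is simple enough to sketch in a few lines of base R. The $$p$$-values below are hypothetical, purely for illustration:

```r
# Simes' combined p-value: sort the p-values, scale the i-th smallest
# by n/i, and take the minimum over all i.
simes <- function(p) {
    n <- length(p)
    min(sort(p) * n / seq_len(n))
}

# Hypothetical p-values for one gene across 17 pairwise comparisons.
p <- c(1e-4, 3e-3, 0.02, rep(0.5, 14))
simes(p)   # dominated by the smallest p-value, scaled by n
```

A gene only needs one or a few small $$p$$-values to achieve a small combined value, which is why the default combining strategy rewards DE in any single comparison.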

chosen <- "9"
interesting <- markers.pbmc[[chosen]]
colnames(interesting)
##  [1] "Top"      "p.value"  "FDR"      "logFC.1"  "logFC.2"  "logFC.3"  "logFC.4"  "logFC.5"
##  [9] "logFC.6"  "logFC.7"  "logFC.8"  "logFC.10" "logFC.11" "logFC.12" "logFC.13" "logFC.14"
## [17] "logFC.15" "logFC.16" "logFC.17" "logFC.18"

Of particular interest is the Top field, which contains the highest rank for each gene across all pairwise comparisons involving cluster 9. The set of genes with Top values of 1 contains the gene with the lowest $$p$$-value from each comparison. Similarly, the set of genes with Top values less than or equal to 10 contains the top 10 genes from each comparison. Each DataFrame produced by findMarkers() will order genes based on the Top value.
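
To make the definition concrete, the Top statistic can be emulated on a toy matrix of per-comparison $$p$$-values (an illustration of the idea, not the findMarkers() implementation):

```r
# Rows are genes, columns are pairwise comparisons involving the cluster.
pvals <- rbind(
    geneA=c(0.001, 0.200, 0.500),
    geneB=c(0.010, 0.050, 0.300),
    geneC=c(0.500, 0.010, 0.001)
)

# Rank genes within each comparison, then take each gene's best rank.
ranks <- apply(pvals, 2, rank)
top <- apply(ranks, 1, min)

# geneA and geneC each top at least one comparison;
# geneB is never better than second.
names(top)[top <= 1]
```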

interesting[1:10,1:3]
## DataFrame with 10 rows and 3 columns
##                Top               p.value                   FDR
##          <integer>             <numeric>             <numeric>
## S100A4           1  3.29705578341852e-57  3.05195048259623e-55
## TAGLN2           1   1.6552184884693e-24  3.58425011249901e-23
## PF4              1  2.54869962577836e-35  9.99719268812294e-34
## GZMA             1 1.41952124275351e-120 7.71441108924781e-118
## HLA-DQA1         1  1.79189457530077e-88  4.75402329292782e-86
## FCN1             1 1.13468498100156e-246 4.77900946873311e-243
## SERPINA1         1  1.12795421261234e-68  1.72751314726183e-66
## RPL23A           1  2.42151256457725e-37  1.04737412517157e-35
## RPL17            1                     0                     0
## RPS21            1  1.08454452575786e-56  9.90315535254339e-55

We use the Top value to identify a set of genes that is guaranteed to distinguish cluster 9 from any other cluster. Here, we examine the top 6 genes from each pairwise comparison (Figure 11.1). Some inspection of the most upregulated genes suggests that cluster 9 contains platelets or their precursors, based on the expression of platelet factor 4 (PF4) and pro-platelet basic protein (PPBP).

best.set <- interesting[interesting$Top <= 6,]
logFCs <- as.matrix(best.set[,-(1:3)])
colnames(logFCs) <- sub("logFC.", "", colnames(logFCs))

library(pheatmap)
pheatmap(logFCs, breaks=seq(-5, 5, length.out=101))

We intentionally use pairwise comparisons between clusters rather than comparing each cluster to the average of all other cells. The latter approach is sensitive to the population composition, potentially resulting in substantially different sets of markers when cell type abundances change in different contexts. In the worst case, the presence of a single dominant subpopulation will drive the selection of top markers for every other cluster, pushing out useful genes that can resolve the various minor subpopulations. Moreover, pairwise comparisons naturally provide more information for interpreting the utility of a marker, e.g., by providing log-fold changes to indicate which clusters are distinguished by each gene.

### 11.2.2 Using the log-fold change

Our previous findMarkers() call considers both up- and downregulated genes to be potential markers. However, downregulated genes are less appealing as markers, as it is more difficult to interpret and experimentally validate an absence of expression. To focus on upregulated markers, we can instead perform a one-sided $$t$$-test to identify genes that are upregulated in each cluster compared to the others. This is achieved by setting direction="up" in the findMarkers() call.

markers.pbmc.up <- findMarkers(sce.pbmc, sce.pbmc$cluster, direction="up")
interesting.up <- markers.pbmc.up[[chosen]]
interesting.up[1:10,1:3]
## DataFrame with 10 rows and 3 columns
##                 Top              p.value                  FDR
##           <integer>            <numeric>            <numeric>
## TAGLN2            1 8.27609244234649e-25 9.29515529174743e-21
## PF4               1 1.27434981288918e-35  4.2937942595488e-31
## SDPR              2 2.26416053074668e-21 1.90721562307447e-17
## GPX1              2 1.79268835524744e-20 1.00671402402845e-16
## TMSB4X            2 1.61388516311847e-31 2.71891233430568e-27
## PPBP              3 2.67042748206677e-20 1.28539119401082e-16
## NRGN              3 1.41985609771254e-20 9.56812627126528e-17
## CCL5              5 2.55331079650534e-18 9.55902821971673e-15
## GNG11             6 2.06622582992524e-18 8.70242663918765e-15
## HIST1H2AC         7 1.05437178945511e-17 3.55260030739007e-14

The $$t$$-test also allows us to specify a non-zero log-fold change as the null hypothesis. This allows us to consider the magnitude of the log-fold change in our $$p$$-value calculations, in a manner that is more rigorous than simply filtering directly on the log-fold changes (McCarthy and Smyth 2009). (Specifically, a simple threshold does not consider the variance and can enrich for genes that have both large log-fold changes and large variances.) We perform this by setting lfc= in our findMarkers() call - when combined with direction=, this tests for genes with log-fold changes that are significantly greater than 1:
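
The corresponding per-comparison test can be mimicked with base R's t.test() on hypothetical log-expression values, where mu= shifts the null hypothesis in the same way that lfc= does (a sketch of the principle only, not the findMarkers() internals):

```r
set.seed(42)
x <- rnorm(100, mean=3)   # hypothetical log-expression, cluster of interest
y <- rnorm(100, mean=1)   # hypothetical log-expression, another cluster

# direction="up" with lfc=1 corresponds to testing
# H0: mean(x) - mean(y) <= 1 against H1: mean(x) - mean(y) > 1.
res <- t.test(x, y, alternative="greater", mu=1)
res$p.value
```

Because the threshold enters the null hypothesis, the gene's variance is accounted for when assessing whether its log-fold change clears the threshold, unlike a hard filter on the point estimate.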

markers.pbmc.up2 <- findMarkers(sce.pbmc, sce.pbmc$cluster,
    direction="up", lfc=1)
interesting.up2 <- markers.pbmc.up2[[chosen]]
interesting.up2[1:10,1:3]

## DataFrame with 10 rows and 3 columns
##                 Top              p.value                  FDR
##           <integer>            <numeric>            <numeric>
## TAGLN2            1 4.96068036386322e-20 5.57150547266692e-16
## PF4               1 7.32680851798585e-31 2.46869486205015e-26
## SDPR              2 1.27378543737411e-17 1.07297316317208e-13
## TMSB4X            2 6.87688686215199e-23 1.15854912966674e-18
## PPBP              3 3.12847829237349e-17 2.10821895166465e-13
## NRGN              4 2.87887431982527e-16 1.61667985553654e-12
## GPX1              5 4.19071194112722e-16 2.01716925920486e-12
## GNG11             5 9.80841382733969e-15  4.1310586937298e-11
## CCL5              6 1.71914198062501e-14  6.4360855439088e-11
## HIST1H2AC         7 3.79676561296057e-14 1.27928220563093e-10

These two settings yield a more focused set of candidate marker genes that are upregulated in cluster 9 (Figure 11.2).

best.set <- interesting.up2[interesting.up2$Top <= 5,]
logFCs <- as.matrix(best.set[,-(1:3)])
colnames(logFCs) <- sub("logFC.", "", colnames(logFCs))

library(pheatmap)
pheatmap(logFCs, breaks=seq(-5, 5, length.out=101))

Of course, this increased stringency is not without cost. If only upregulated genes are requested from findMarkers(), any cluster defined by downregulation of a marker gene will not contain that gene among the top set of features in its DataFrame. This is occasionally relevant for subtypes or other states that are distinguished by high versus low expression of particular genes. Similarly, setting an excessively high log-fold change threshold may discard otherwise useful genes. For example, a gene upregulated in a small proportion of cells of a cluster will have a small log-fold change but can still be an effective marker if the focus is on specificity rather than sensitivity.

### 11.2.3 Finding cluster-specific markers

By default, findMarkers() will give a high ranking to genes that are differentially expressed in any pairwise comparison. This is because a gene only needs a very low $$p$$-value in a single pairwise comparison to achieve a low Top value. A more stringent approach would only consider genes that are differentially expressed in all pairwise comparisons involving the cluster of interest. To achieve this, we set pval.type="all" in findMarkers() to use an intersection-union test (Berger and Hsu 1996) where the combined $$p$$-value for each gene is the maximum of the $$p$$-values from all pairwise comparisons. A gene will only achieve a low combined $$p$$-value if it is strongly DE in all comparisons to other clusters.
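
The intersection-union combination is just the maximum of the per-comparison $$p$$-values, as this toy example shows:

```r
# Hypothetical p-values for one gene across four pairwise comparisons.
p <- c(1e-20, 1e-15, 0.40, 1e-8)

# pval.type="all": significance requires DE in *every* comparison,
# so the combined p-value is the worst (largest) individual one.
max(p)   # not significant, despite three tiny p-values
```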

# We can combine this with 'direction='.
markers.pbmc.up3 <- findMarkers(sce.pbmc, sce.pbmc$cluster,
    pval.type="all", direction="up")
interesting.up3 <- markers.pbmc.up3[[chosen]]
interesting.up3[1:10,1:2]

## DataFrame with 10 rows and 2 columns
##                        p.value                  FDR
##                      <numeric>            <numeric>
## SDPR      2.89393755520111e-21  9.7508331984946e-17
## PF4       5.79594346408051e-21 9.76442595393642e-17
## PPBP      3.51585963085089e-20 3.94877914672967e-16
## NRGN      9.29995460479362e-20 7.83381676134793e-16
## GNG11      2.8250857832966e-18 1.90376880764791e-14
## HIST1H2AC 1.34166933215319e-17 7.53436774626161e-14
## TUBB1     2.36416578508064e-17  1.1379743137501e-13
## TAGLN2     6.0999991063188e-17 2.56916712360382e-13
## CLU       7.25940078629163e-12 2.71775833437012e-08
## RGS18      1.2272980544931e-10 4.13525806480904e-07

This strategy will only report genes that are highly specific to the cluster of interest. When it works, it can be highly effective, as it generates a small focused set of candidate markers. However, any gene that is expressed at the same level in two or more clusters will simply not be detected. This is likely to discard many interesting genes, especially if the clusters are finely resolved with weak separation. To give a concrete example, consider a mixed population of CD4+-only, CD8+-only, double-positive and double-negative T cells. With pval.type="all", neither Cd4 nor Cd8 would be detected as subpopulation-specific markers because each gene is expressed in two subpopulations. In comparison, pval.type="any" will detect both of these genes as they will be DE between at least one pair of subpopulations.

If pval.type="all" is too stringent yet pval.type="any" is too generous, a compromise is to set pval.type="some". For each gene, we apply the Holm-Bonferroni correction across its $$p$$-values and take the middle-most value as the combined $$p$$-value. This effectively tests the global null hypothesis that at least 50% of the individual pairwise comparisons exhibit no DE. We then rank the genes by their combined $$p$$-values to obtain an ordered set of marker candidates. The aim is to improve the conciseness of the top markers for defining a cluster while mitigating the risk of discarding useful genes that are not DE relative to all other clusters. The downside is that taking this compromise position sacrifices the theoretical guarantees offered at the other two extremes.

markers.pbmc.up4 <- findMarkers(sce.pbmc, sce.pbmc$cluster,
pval.type="some", direction="up")
interesting.up4 <- markers.pbmc.up4[[chosen]]
interesting.up4[1:10,1:2]
## DataFrame with 10 rows and 2 columns
##                        p.value                  FDR
##                      <numeric>            <numeric>
## PF4       1.79356919960007e-30 6.04325206113248e-26
## TAGLN2    8.54344427645972e-21 1.43931405725517e-16
## SDPR      2.58318429399194e-20 2.90126038672549e-16
## NRGN      1.20650632582876e-19 1.01630060356185e-15
## PPBP      2.90773807243067e-19 1.95946653224957e-15
## TMSB4X    6.03866623529367e-18 3.39111366886642e-14
## CCL5      1.49261758813089e-17 7.18460814492599e-14
## GNG11     2.31949923904908e-17 9.76915092006495e-14
## GPX1      4.41862199429119e-17 1.65423388306275e-13
## HIST1H2AC 9.60765891218369e-17 3.23720459387118e-13
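
The pval.type="some" combination can be sketched as follows (an illustration of the idea; the actual scran implementation offers further options for tuning the compromise):

```r
# Holm-Bonferroni adjustment across one gene's pairwise p-values,
# then take the middle-most adjusted value as the combined p-value.
some.combine <- function(p) {
    adj <- p.adjust(p, method="holm")
    sort(adj)[ceiling(length(adj)/2)]
}

# Strong DE in three of five comparisons, none in the other two:
# the gene still achieves a small combined p-value.
p <- c(1e-20, 1e-15, 1e-10, 0.2, 0.8)
some.combine(p)
```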

## 11.3 Alternative testing regimes

### 11.3.1 Using the Wilcoxon rank sum test

The Wilcoxon rank sum test (also known as the Wilcoxon-Mann-Whitney test, or WMW test) is another widely used method for pairwise comparisons between groups of observations. Its strength lies in the fact that it directly assesses separation between the expression distributions of different clusters. The WMW test statistic is proportional to the area-under-the-curve (AUC), i.e., the concordance probability, which is the probability of a random cell from one cluster having higher expression than a random cell from another cluster. In a pairwise comparison, AUCs of 1 or 0 indicate that the two clusters have perfectly separated expression distributions. Thus, the WMW test directly addresses the most desirable property of a candidate marker gene, while the $$t$$-test only does so indirectly via the difference in the means and the intra-group variance.
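
The connection between the WMW statistic and the AUC is easy to verify directly (hypothetical expression values for illustration):

```r
set.seed(7)
x <- rnorm(50, mean=2)   # hypothetical log-expression, cluster of interest
y <- rnorm(50, mean=0)   # hypothetical log-expression, another cluster

# The W statistic counts (x, y) pairs where x is larger; dividing by
# the number of pairs gives the concordance probability, i.e., the AUC.
W <- wilcox.test(x, y)$statistic
auc <- unname(W) / (length(x) * length(y))

# Equivalently, the proportion of pairs with x greater than y.
auc2 <- mean(outer(x, y, ">"))
```

With no ties (as here, for continuous values), the two calculations agree exactly.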

We perform WMW tests by again using the findMarkers() function, this time with test="wilcox". This returns a list of DataFrames containing ranked candidate markers for each cluster. The direction=, lfc= and pval.type= arguments can be specified and have the same interpretation as described for $$t$$-tests. We demonstrate below by detecting upregulated genes in each cluster with direction="up".

markers.pbmc.wmw <- findMarkers(sce.pbmc, sce.pbmc$cluster,
    test="wilcox", direction="up")
names(markers.pbmc.wmw)

##  [1] "1"  "2"  "3"  "4"  "5"  "6"  "7"  "8"  "9"  "10" "11" "12" "13" "14" "15" "16" "17" "18"

To explore the results in more detail, we focus on the DataFrame for cluster 9. The interpretation of Top is the same as described for $$t$$-tests, and Simes' method is again used to combine $$p$$-values across pairwise comparisons. If we want more focused sets, we can also change pval.type= as previously described.

interesting.wmw <- markers.pbmc.wmw[[chosen]]
interesting.wmw[1:10,1:3]

## DataFrame with 10 rows and 3 columns
##                 Top               p.value                   FDR
##           <integer>             <numeric>             <numeric>
## PF4               1 3.13748878071285e-164  1.0571454697734e-159
## TMSB4X            1  5.07214913170393e-27  2.05904810654979e-24
## SDPR              2 2.12114320313369e-145 3.57348995431936e-141
## NRGN              2 1.18239551949308e-131 7.96792692676018e-128
## TAGLN2            3  1.55560414834632e-28  6.98860348991742e-26
## PPBP              3 3.57147966050357e-134 4.01124785603364e-130
## GNG11             3 2.46077445015089e-126 1.38188890538975e-122
## TUBB1             3 7.55572875137595e-133 6.36456811372159e-129
## HIST1H2AC         4   4.6909409429761e-94  1.43687785575123e-90
## ACTB              5  1.53722625960524e-23  5.28523485623867e-21

The DataFrame contains the AUCs from comparing cluster 9 to every other cluster (Figure 11.3). A value greater than 0.5 indicates that the gene is upregulated in the current cluster compared to the other cluster, while values less than 0.5 correspond to downregulation. We would typically expect AUCs of 0.7-0.8 for a strongly upregulated candidate marker.

best.set <- interesting.wmw[interesting.wmw$Top <= 5,]
AUCs <- as.matrix(best.set[,-(1:3)])
colnames(AUCs) <- sub("AUC.", "", colnames(AUCs))

library(pheatmap)
pheatmap(AUCs, breaks=seq(0, 1, length.out=21),
color=viridis::viridis(21))

One practical advantage of the WMW test over the Welch $$t$$-test is that it is symmetric with respect to differences in the size of the groups being compared. This means that, all else being equal, the top-ranked genes on each side of a DE comparison will have similar expression profiles regardless of the number of cells in each group. In contrast, the $$t$$-test will favor genes where the larger group has the higher relative variance, as this increases the estimated degrees of freedom and decreases the resulting $$p$$-value. This can lead to unappealing rankings when the aim is to identify genes upregulated in smaller groups. The WMW test is not completely immune to variance effects - for example, it will slightly favor detection of DEGs at low average abundance, where the greater number of ties at zero deflates the approximate variance of the rank sum statistic - but this effect is relatively benign as the selected genes are still fairly interesting. We observe both of these effects in a comparison between alpha and gamma cells in the human pancreas data set from Lawlor et al. (2017) (Figure 11.4).

#--- setup ---#
library(OSCAUtils)
chapterPreamble(use_cache = TRUE)

library(scRNAseq)
sce.lawlor <- LawlorPancreasData()

#--- gene-annotation ---#
library(AnnotationHub)
edb <- AnnotationHub()[["AH73881"]]
anno <- select(edb, keys=rownames(sce.lawlor), keytype="GENEID",
columns=c("SYMBOL", "SEQNAME"))
rowData(sce.lawlor) <- anno[match(rownames(sce.lawlor), anno[,1]),-1]

#--- quality-control ---#
library(scater)
stats <- perCellQCMetrics(sce.lawlor,
    subsets=list(Mito=which(rowData(sce.lawlor)$SEQNAME=="MT")))
qc <- quickPerCellQC(stats, percent_subsets="subsets_Mito_percent")
sce.lawlor <- sce.lawlor[,!qc$discard]

#--- normalization ---#
library(scran)
set.seed(1000)
clusters <- quickCluster(sce.lawlor)
sce.lawlor <- computeSumFactors(sce.lawlor, clusters=clusters)
sce.lawlor <- logNormCounts(sce.lawlor)
marker.lawlor.t <- findMarkers(sce.lawlor, groups=sce.lawlor$`cell type`,
    direction="up", restrict=c("Alpha", "Gamma/PP"))
marker.lawlor.w <- findMarkers(sce.lawlor, groups=sce.lawlor$`cell type`,
    direction="up", restrict=c("Alpha", "Gamma/PP"), test.type="wilcox")

# Upregulated in alpha:
marker.alpha.t <- marker.lawlor.t$Alpha
marker.alpha.w <- marker.lawlor.w$Alpha
marker.alpha.t <- marker.alpha.t[order(marker.alpha.t$p.value),]
marker.alpha.w <- marker.alpha.w[order(marker.alpha.w$p.value),]
chosen.alpha.t <- rownames(marker.alpha.t)[1:20]
chosen.alpha.w <- rownames(marker.alpha.w)[1:20]
u.alpha.t <- setdiff(chosen.alpha.t, chosen.alpha.w)
u.alpha.w <- setdiff(chosen.alpha.w, chosen.alpha.t)

# Upregulated in gamma:
marker.gamma.t <- marker.lawlor.t[["Gamma/PP"]]
marker.gamma.w <- marker.lawlor.w[["Gamma/PP"]]
marker.gamma.t <- marker.gamma.t[order(marker.gamma.t$p.value),]
marker.gamma.w <- marker.gamma.w[order(marker.gamma.w$p.value),]
chosen.gamma.t <- rownames(marker.gamma.t)[1:20]
chosen.gamma.w <- rownames(marker.gamma.w)[1:20]
u.gamma.t <- setdiff(chosen.gamma.t, chosen.gamma.w)
u.gamma.w <- setdiff(chosen.gamma.w, chosen.gamma.t)

# Examining all uniquely detected markers in each direction.
library(scater)
subset <- sce.lawlor[,sce.lawlor$`cell type` %in% c("Alpha", "Gamma/PP")]
gridExtra::grid.arrange(
    plotExpression(subset, x="cell type", features=u.alpha.t, ncol=2) +
        ggtitle("Upregulated in alpha, t-test-only"),
    plotExpression(subset, x="cell type", features=u.alpha.w, ncol=2) +
        ggtitle("Upregulated in alpha, WMW-test-only"),
    plotExpression(subset, x="cell type", features=u.gamma.t, ncol=2) +
        ggtitle("Upregulated in gamma, t-test-only"),
    plotExpression(subset, x="cell type", features=u.gamma.w, ncol=2) +
        ggtitle("Upregulated in gamma, WMW-test-only"),
    ncol=2
)

The main disadvantage of the WMW test is that the AUCs are much slower to compute compared to $$t$$-statistics. This may be inconvenient for interactive analyses involving multiple iterations of marker detection. We can mitigate this to some extent by parallelizing the calculations with the BPPARAM= argument in findMarkers().

### 11.3.2 Using a binomial test

The binomial test identifies genes that differ in the proportion of expressing cells between clusters. (For the purposes of this section, a cell is considered to express a gene simply if it has non-zero expression for that gene.) This represents a much more stringent definition of marker genes compared to the other methods, as differences in expression between clusters are effectively ignored if both distributions of expression values are not near zero. The premise is that genes are more likely to contribute to important biological decisions if they were active in one cluster and silent in another, compared to more subtle "tuning" effects from changing the expression of an active gene. From a practical perspective, a binary measure of presence/absence is easier to validate.

We perform pairwise binomial tests between clusters using the findMarkers() function with test="binom". This returns a list of DataFrames containing marker statistics for each cluster, such as the Top rank and its $$p$$-value.
Here, the effect size is reported as the log-fold change in this proportion between each pair of clusters. Large positive log-fold changes indicate that the gene is more frequently expressed in one cluster compared to the other. We focus on genes that are upregulated in each cluster compared to the others by setting direction="up".

markers.pbmc.binom <- findMarkers(sce.pbmc, sce.pbmc$cluster,
    test="binom", direction="up")
names(markers.pbmc.binom)
##  [1] "1"  "2"  "3"  "4"  "5"  "6"  "7"  "8"  "9"  "10" "11" "12" "13" "14" "15" "16" "17" "18"
interesting.binom <- markers.pbmc.binom[[chosen]]
colnames(interesting.binom)
##  [1] "Top"      "p.value"  "FDR"      "logFC.1"  "logFC.2"  "logFC.3"  "logFC.4"  "logFC.5"
##  [9] "logFC.6"  "logFC.7"  "logFC.8"  "logFC.10" "logFC.11" "logFC.12" "logFC.13" "logFC.14"
## [17] "logFC.15" "logFC.16" "logFC.17" "logFC.18"
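
The underlying calculation per pairwise comparison can be sketched with hypothetical counts of expressing cells (using prop.test() as a stand-in; the exact test machinery inside scran may differ):

```r
# Number of cells with non-zero counts for a gene in two clusters.
expressing <- c(90, 15)
total <- c(100, 200)

# Proportion of expressing cells and the log2-fold change thereof.
prop <- expressing / total
log2FC <- log2(prop[1] / prop[2])   # 90% vs 7.5% expressing

# Test whether the two proportions differ.
pval <- prop.test(expressing, total)$p.value
```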

Figure 11.5 confirms that the top genes exhibit strong differences in the proportion of expressing cells in cluster 9 compared to the others.

library(scater)
top.genes <- head(rownames(interesting.binom))
plotExpression(sce.pbmc, x="cluster", features=top.genes)

The disadvantage of the binomial test is that its increased stringency can lead to the loss of good candidate markers. For example, GCG is a known marker for pancreatic alpha cells but is expressed in almost every other cell of the Lawlor et al. (2017) pancreas data (Figure 11.6) and would not be highly ranked by the binomial test.

plotExpression(sce.lawlor, x="cell type", features="ENSG00000115263")

Another property of the binomial test is that it will not respond to scaling normalization. Systematic differences in library size between clusters will not be considered when computing $$p$$-values or effect sizes. This is not necessarily problematic for marker gene detection - users can treat this as retaining information about the total RNA content, analogous to spike-in normalization in Section 7.4.

### 11.3.3 Using custom DE methods

It is also possible to perform marker gene detection based on precomputed DE statistics, which allows us to take advantage of more sophisticated tests in dedicated DE analysis packages in the Bioconductor ecosystem. To demonstrate, consider the voom() approach from the limma package (Law et al. 2014). We first process our SingleCellExperiment to obtain a fit object as shown below.

library(limma)
design <- model.matrix(~0 + cluster, data=colData(sce.pbmc))
colnames(design)
##  [1] "cluster1"  "cluster2"  "cluster3"  "cluster4"  "cluster5"  "cluster6"  "cluster7"  "cluster8"
##  [9] "cluster9"  "cluster10" "cluster11" "cluster12" "cluster13" "cluster14" "cluster15" "cluster16"
## [17] "cluster17" "cluster18"
# Removing very low-abundance genes.
keep <- calculateAverage(sce.pbmc) > 0.1
summary(keep)
##    Mode   FALSE    TRUE
## logical   29482    4212
y <- convertTo(sce.pbmc, subset.row=keep)
v <- voom(y, design)
fit <- lmFit(v, design)

We then perform pairwise comparisons between clusters using the TREAT strategy (McCarthy and Smyth 2009) to test for log-fold changes that are significantly greater than 0.5. For each comparison, we store the corresponding data frame of statistics in all.results, along with the identities of the clusters involved in all.pairs.

nclust <- length(unique(sce.pbmc$cluster))
all.results <- all.pairs <- list()
counter <- 1L

# Iterating across the first 'nclust' coefficients in design,
# and comparing them to each other in a pairwise manner.
for (x in seq_len(nclust)) {
    for (y in seq_len(x-1L)) {
        con <- integer(ncol(design))
        con[x] <- 1
        con[y] <- -1
        fit2 <- contrasts.fit(fit, con)
        fit2 <- treat(fit2, robust=TRUE, lfc=0.5)

        res <- topTreat(fit2, n=Inf, sort.by="none")
        all.results[[counter]] <- res
        all.pairs[[counter]] <- colnames(design)[c(x, y)]
        counter <- counter+1L

        # Also filling the reverse comparison.
        res$logFC <- -res$logFC
        all.results[[counter]] <- res
        all.pairs[[counter]] <- colnames(design)[c(y, x)]
        counter <- counter+1L
    }
}

These custom results are consolidated into a single marker list for each cluster with the combineMarkers() function. This combines test statistics across all pairwise comparisons involving a single cluster, yielding a per-cluster DataFrame that can be interpreted in the same manner as discussed previously.

all.pairs <- do.call(rbind, all.pairs)
combined <- combineMarkers(all.results, all.pairs, pval.field="P.Value")

# Inspecting results for our cluster of interest again.
interesting.voom <- combined[[paste0("cluster", chosen)]]
colnames(interesting.voom)

##  [1] "Top"             "p.value"         "FDR"             "logFC.cluster1"  "logFC.cluster2" 
##  [6] "logFC.cluster3"  "logFC.cluster4"  "logFC.cluster5"  "logFC.cluster6"  "logFC.cluster7" 
## [11] "logFC.cluster8"  "logFC.cluster10" "logFC.cluster11" "logFC.cluster12" "logFC.cluster13"
## [16] "logFC.cluster14" "logFC.cluster15" "logFC.cluster16" "logFC.cluster17" "logFC.cluster18"

head(interesting.voom[,1:3])

## DataFrame with 6 rows and 3 columns
##               Top   p.value       FDR
##         <integer> <numeric> <numeric>
## RBP7            1         0         0
## LMNA            1         0         0
## FCRLA           1         0         0
## RGS18           1         0         0
## C2orf88         1         0         0
## SDPR            1         0         0

By default, we do not use custom DE methods to perform marker detection, for several reasons.
Many of these methods rely on empirical Bayes shrinkage to share information across genes in the presence of limited replication. However, this is unnecessary when there are large numbers of "replicate" cells in each group (Section 11.5.2). These methods also make stronger assumptions about the data (e.g., equal variances for linear models, the distribution of variances during empirical Bayes) that are more likely to be violated in noisy scRNA-seq contexts. From a practical perspective, they require more work to set up and take more time to run. Nonetheless, some custom methods (e.g., MAST) may provide a useful point of difference from the simpler tests, in which case they can be converted into a marker detection scheme as described above.

## 11.4 Handling blocking factors

### 11.4.1 Using the block= argument

Large studies may contain factors of variation that are known and not interesting (e.g., batch effects, sex differences). If these are not modelled, they can interfere with marker gene detection - most obviously by inflating the variance within each cluster, but also by distorting the log-fold changes if the cluster composition varies across levels of the blocking factor. To avoid these issues, we set the block= argument in the findMarkers() call, as demonstrated below for the 416B data set.

#--- setup ---#
library(OSCAUtils)
chapterPreamble(use_cache = TRUE)

#--- loading ---#
library(scRNAseq)
sce.416b <- LunSpikeInData(which="416b")
sce.416b$block <- factor(sce.416b$block)

#--- gene-annotation ---#
library(AnnotationHub)
ens.mm.v97 <- AnnotationHub()[["AH73905"]]
rowData(sce.416b)$ENSEMBL <- rownames(sce.416b)
rowData(sce.416b)$SYMBOL <- mapIds(ens.mm.v97, keys=rownames(sce.416b),
    keytype="GENEID", column="SYMBOL")
rowData(sce.416b)$SEQNAME <- mapIds(ens.mm.v97, keys=rownames(sce.416b),
    keytype="GENEID", column="SEQNAME")

library(scater)
rownames(sce.416b) <- uniquifyFeatureNames(rowData(sce.416b)$ENSEMBL, rowData(sce.416b)$SYMBOL)

#--- quality-control ---#
mito <- which(rowData(sce.416b)$SEQNAME=="MT")
stats <- perCellQCMetrics(sce.416b, subsets=list(Mt=mito))
qc <- quickPerCellQC(stats, percent_subsets=c("subsets_Mt_percent",
    "altexps_ERCC_percent"), batch=sce.416b$block)
sce.416b <- sce.416b[,!qc$discard]

#--- normalization ---#
library(scran)
sce.416b <- computeSumFactors(sce.416b)
sce.416b <- logNormCounts(sce.416b)

#--- variance-modelling ---#
dec.416b <- modelGeneVarWithSpikes(sce.416b, "ERCC", block=sce.416b$block)
chosen.hvgs <- getTopHVGs(dec.416b, prop=0.1)

#--- batch-correction ---#
library(limma)
assay(sce.416b, "corrected") <- removeBatchEffect(logcounts(sce.416b),
design=model.matrix(~sce.416b$phenotype), batch=sce.416b$block)

#--- dimensionality-reduction ---#
sce.416b <- runPCA(sce.416b, ncomponents=10, subset_row=chosen.hvgs,
exprs_values="corrected", BSPARAM=BiocSingular::ExactParam())

set.seed(1010)
sce.416b <- runTSNE(sce.416b, dimred="PCA", perplexity=10)

#--- clustering ---#
my.dist <- dist(reducedDim(sce.416b, "PCA"))
my.tree <- hclust(my.dist, method="ward.D2")

library(dynamicTreeCut)
my.clusters <- unname(cutreeDynamic(my.tree, distM=as.matrix(my.dist),
minClusterSize=10, verbose=0))
sce.416b$cluster <- factor(my.clusters)

m.out <- findMarkers(sce.416b, sce.416b$cluster,
    block=sce.416b$block, direction="up")

For each gene, each pairwise comparison between clusters is performed separately in each level of the blocking factor - in this case, the plate of origin. The function then combines $$p$$-values from different plates using Stouffer's Z method to obtain a single $$p$$-value per pairwise comparison. (These $$p$$-values are further combined across comparisons to obtain a single $$p$$-value per gene, using either Simes' method or an intersection-union test, depending on the value of pval.type=.) This approach favours genes that exhibit consistent DE in the same direction in each plate.

demo <- m.out[["1"]]
demo[demo$Top <= 5,1:3]
## DataFrame with 13 rows and 3 columns
##                          Top              p.value                  FDR
##                    <integer>            <numeric>            <numeric>
## Foxs1                      1 1.37386906733998e-12  4.3556322458716e-10
## Pirb                       1 2.08277331538505e-33 1.21331959487757e-29
## Myh11                      1 6.44327049750405e-47 3.00282178265678e-42
## Tmsb4x                     2 3.22944306189829e-44 7.52524822283536e-40
## Ctsd                       2 6.78109368968016e-38 7.90065225784631e-34
## ...                      ...                  ...                  ...
## Tob1                       4 6.63870491176249e-09 1.18087864010603e-06
## Pi16                       4 1.69246748880475e-32  7.8875754848257e-29
## Cd53                       5 1.08574082890043e-27 2.97646268176917e-24
## Alox5ap                    5 1.33790809207412e-28 4.15679124820148e-25
## CBFB-MYH11-mcherry         5 3.75556318692085e-35 3.50048533526517e-31
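The combining steps can be sketched directly in base R. Stouffer’s Z method converts each block’s $$p$$-value into a z-score, averages the scores and converts back; Simes’ method and the intersection-union test operate on the vector of per-comparison $$p$$-values. The numbers below are made up for illustration, and the sketch ignores the weighting and one-sided handling performed internally by findMarkers():

```r
# Stouffer's Z method: combine one-sided p-values for the same
# pairwise comparison across independent blocks (e.g., plates).
p.per.block <- c(0.01, 0.04, 0.20)
z <- qnorm(p.per.block, lower.tail=FALSE)   # p-values to z-scores
z.comb <- sum(z) / sqrt(length(z))          # unweighted combination
p.comb <- pnorm(z.comb, lower.tail=FALSE)   # back to a single p-value

# Simes' method: combine p-values across pairwise comparisons for one
# gene, testing for DE against *any* other cluster (pval.type="any").
p.per.comparison <- c(0.001, 0.1, 0.3)
p.simes <- min(sort(p.per.comparison) *
length(p.per.comparison) / seq_along(p.per.comparison))

# Intersection-union test: take the maximum p-value, requiring DE
# against *all* other clusters (pval.type="all").
p.iut <- max(p.per.comparison)
```

Consistent evidence across blocks drives the Stouffer-combined $$p$$-value below any individual block’s value here, while the intersection-union test is dominated by the weakest comparison.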

The block= argument works with all tests shown above and is robust to differences in the log-fold changes or variances between batches. However, it assumes that each pair of clusters is present in at least one batch. In scenarios where cells from two clusters never co-occur in the same batch, the comparison will be impossible and NAs will be reported in the output.

### 11.4.2 Using the design= argument

Another approach is to define a design matrix containing the batch of origin as the sole factor. findMarkers() will then fit a linear model to the log-expression values, similar to the use of limma for bulk RNA sequencing data (Ritchie et al. 2015). This handles situations where multiple batches contain unique clusters, as comparisons can be implicitly performed via shared cell types in each batch. There is also a slight increase in power when information is shared across clusters for variance estimation.

# Setting up the design matrix (we remove intercept for full rank
# in the final design matrix with the cluster-specific terms).
design <- model.matrix(~sce.416b$block)
design <- design[,-1,drop=FALSE]

m.alt <- findMarkers(sce.416b, sce.416b$cluster,
design=design, direction="up")
demo <- m.alt[["1"]]
demo[demo$Top <= 5,1:3]

## DataFrame with 12 rows and 3 columns
##                          Top              p.value                  FDR
##                    <integer>            <numeric>            <numeric>
## Gm6977                     1 7.15186642956623e-24 8.77119955482908e-21
## Myh11                      1 4.56881782028081e-64 2.12925185696366e-59
## Tmsb4x                     2 9.48996559641993e-46 2.21135178327776e-41
## Cd63                       2 1.80445633283328e-15 7.85933485377219e-13
## Cd200r3                    2 2.40861012274322e-45 3.74169553867749e-41
## ...                      ...                  ...                  ...
## Actb                       4 5.61750739782943e-36 2.90887016409382e-32
## Ctsd                       4 2.08646138267597e-42 2.43093615695575e-38
## Fth1                       4 1.83949046845248e-23 2.143190344794e-20
## Ccl9                       5 1.75378024921589e-30 3.71514430611171e-27
## CBFB-MYH11-mcherry         5 9.09026308764551e-39 8.47285241873258e-35

The use of a linear model makes some strong assumptions, necessitating some caution when interpreting the results. If the batch effect is not consistent across clusters, the variance will be inflated and the log-fold change estimates will be distorted. Variances are also assumed to be equal across groups, which is not true in general. In particular, the presence of clusters in which a gene is silent will shrink the residual variance towards zero, preventing the model from penalizing genes with high variance in other clusters. Thus, we generally recommend the use of block= where possible.

## 11.5 Invalidity of $$p$$-values

### 11.5.1 From data snooping

All of our DE strategies for detecting marker genes between clusters are statistically flawed to some extent. The DE analysis is performed on the same data used to obtain the clusters, which represents “data dredging” (also known as fishing or data snooping). The hypothesis of interest - are there differences between clusters? - is formulated from the data, so we are more likely to get a positive result when we re-use the data set to test that hypothesis.

The practical effect of data dredging is best illustrated with a simple simulation. We simulate i.i.d. normal values, perform $$k$$-means clustering and test for DE between clusters of cells with findMarkers(). The resulting distribution of $$p$$-values is heavily skewed towards low values (Figure 11.7). Thus, we can detect “significant” differences between clusters even in the absence of any real substructure in the data. This effect arises from the fact that clustering, by definition, yields groups of cells that are separated in expression space. Testing for DE genes between clusters will inevitably yield some significant results as that is how the clusters were defined.

library(scran)
set.seed(0)
y <- matrix(rnorm(100000), ncol=200)
clusters <- kmeans(t(y), centers=2)$cluster
out <- findMarkers(y, clusters)
hist(out[[1]]$p.value, col="grey80", xlab="p-value")
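For contrast, consider labels assigned at random, independently of the data. The same test then behaves as advertised: with no snooping and no real structure, the resulting $$p$$-values are approximately uniform. This extra check is a sketch along the same lines as the simulation above:

```r
library(scran)
set.seed(0)
y <- matrix(rnorm(100000), ncol=200)

# Labels chosen without looking at the data: no data dredging involved.
random.labels <- sample(2, ncol(y), replace=TRUE)
out.null <- findMarkers(y, random.labels)

# The p-value histogram is now roughly flat, as expected under the null.
hist(out.null[[1]]$p.value, col="grey80", xlab="p-value")
```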

For marker gene detection, this effect is largely harmless as the $$p$$-values are used only for ranking. However, it becomes an issue when the $$p$$-values are used to define “significant differences” between clusters with respect to an error rate threshold. Meaningful interpretation of error rates requires consideration of the long-run behaviour, i.e., the rate of incorrect rejections if the experiment were repeated many times. The concept of statistical significance for differences between clusters is not applicable if clusters and their interpretations are not stably reproducible across (hypothetical) replicate experiments.

### 11.5.2 Nature of replication

The naive application of DE analysis methods will treat counts from the same cluster of cells as replicate observations. This is not the most relevant level of replication when cells are derived from the same biological sample (i.e., cell culture, animal or patient). DE analyses that treat cells as replicates fail to properly model the sample-to-sample variability (A. T. L. Lun and Marioni 2017). The latter is arguably the more important level of replication as different samples will necessarily be generated if the experiment is to be replicated. Indeed, the use of cells as replicates only masks the fact that the sample size is actually one in an experiment involving a single biological sample. This reinforces the inappropriateness of using the marker gene $$p$$-values to perform statistical inference.
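The pseudo-bulk approach is one way to operate at the correct level of replication, in the spirit of A. T. L. Lun and Marioni (2017): counts are summed across cells within each biological sample, and the resulting per-sample profiles are analyzed with standard bulk methods. Below is a minimal sketch of the summation step with simulated counts and hypothetical sample labels; dedicated helpers (e.g., aggregateAcrossCells() in later versions of scater/scuttle) do this directly for SingleCellExperiment objects:

```r
# Simulated counts: 100 genes x 60 cells drawn from 3 biological samples.
set.seed(42)
counts <- matrix(rpois(100 * 60, lambda=5), nrow=100)
sample.id <- rep(c("A", "B", "C"), each=20)   # hypothetical sample labels

# Sum counts across cells within each sample: each column of the result
# is one biological replicate, not one cell.
pseudo <- t(rowsum(t(counts), group=sample.id))
dim(pseudo)
```

The three pseudo-bulk columns can then be fed to bulk DE machinery, with the sample - not the cell - as the unit of replication.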

We strongly recommend selecting some markers for use in validation studies with an independent replicate population of cells. A typical strategy is to identify a corresponding subset of cells that express the upregulated markers and do not express the downregulated markers. Ideally, a different technique for quantifying expression would also be used during validation, e.g., fluorescent in situ hybridisation or quantitative PCR. This confirms that the subpopulation genuinely exists and is not an artifact of the scRNA-seq protocol or the computational analysis.

## 11.6 Further comments

One consequence of the DE analysis strategy is that markers are defined relative to subpopulations in the same dataset. Biologically meaningful genes will not be detected if they are expressed uniformly throughout the population, e.g., T cell markers will not be detected if only T cells are present in the dataset. In practice, this is usually only a problem when the experimental data are provided without any biological context - certainly, we would hope to have some a priori idea about what cells have been captured. For most applications, it is actually desirable to avoid detecting such genes as we are interested in characterizing heterogeneity within the context of a known cell population. Continuing from the example above, the failure to detect T cell markers is of little consequence if we already know we are working with T cells. Nonetheless, if “absolute” identification of cell types is necessary, we discuss some strategies for doing so in Chapter 12.

Alternatively, marker detection can be performed by treating gene expression as a predictor variable for cluster assignment. For a pair of clusters, we can find genes that discriminate between them by performing inference with a logistic model where the outcome for each cell is whether it was assigned to the first cluster and the lone predictor is the expression of each gene. Treating the cluster assignment as the dependent variable is more philosophically pleasing in some sense, as the clusters are indeed defined from the expression data rather than being known in advance. (Note that this does not solve the data snooping problem.) In practice, this approach effectively does the same task as a Wilcoxon rank sum test in terms of quantifying separation between clusters. Logistic models have the advantage in that they can easily be extended to block on multiple nuisance variables, though this is not typically necessary in most use cases. Even more complex strategies use machine learning methods to determine which features contribute most to successful cluster classification, but this is probably unnecessary for routine analyses.
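To make the logistic-model idea concrete, here is a per-gene sketch on simulated values; the variable names are ours, and this is plain glm() with a binomial family rather than any particular package’s implementation:

```r
# Simulated log-expression for one gene: cluster 2 has a genuine shift.
set.seed(7)
expr <- c(rnorm(50, mean=0), rnorm(50, mean=2))
in.first <- rep(c(1, 0), each=50)   # outcome: cell assigned to cluster 1?

# Logistic regression of cluster assignment on expression; the Wald
# p-value for 'expr' quantifies how well it discriminates the clusters.
fit <- glm(in.first ~ expr, family=binomial)
summary(fit)$coefficients["expr", ]
```

Blocking on a nuisance variable amounts to adding it as another term in the model formula, e.g., in.first ~ expr + batch.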

## Session Info

R version 3.6.1 (2019-07-05)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 14.04.5 LTS

Matrix products: default
BLAS:   /home/ramezqui/Rbuild/danbuild/R-3.6.1/lib/libRblas.so
LAPACK: /home/ramezqui/Rbuild/danbuild/R-3.6.1/lib/libRlapack.so

locale:
[1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C               LC_TIME=en_US.UTF-8
[4] LC_COLLATE=C               LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8       LC_NAME=C                  LC_ADDRESS=C
[10] LC_TELEPHONE=C             LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] parallel  stats4    stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] limma_3.42.0                scater_1.14.3               ggplot2_3.2.1
[4] pheatmap_1.0.12             scran_1.14.3                SingleCellExperiment_1.8.0
[7] SummarizedExperiment_1.16.0 DelayedArray_0.12.0         BiocParallel_1.20.0
[10] matrixStats_0.55.0          Biobase_2.46.0              GenomicRanges_1.38.0
[13] GenomeInfoDb_1.22.0         IRanges_2.20.0              S4Vectors_0.24.0
[16] BiocGenerics_0.32.0         Cairo_1.5-10                BiocStyle_2.14.0
[19] OSCAUtils_0.0.1

loaded via a namespace (and not attached):
[1] viridis_0.5.1            edgeR_3.28.0             BiocSingular_1.2.0
[4] viridisLite_0.3.0        DelayedMatrixStats_1.8.0 assertthat_0.2.1
[7] statmod_1.4.32           BiocManager_1.30.9       highr_0.8
[10] dqrng_0.2.1              GenomeInfoDbData_1.2.2   vipor_0.4.5
[13] yaml_2.2.0               pillar_1.4.2             lattice_0.20-38
[16] glue_1.3.1               digest_0.6.22            RColorBrewer_1.1-2
[19] XVector_0.26.0           colorspace_1.4-1         cowplot_1.0.0
[22] htmltools_0.4.0          Matrix_1.2-17            pkgconfig_2.0.3
[25] bookdown_0.15            zlibbioc_1.32.0          purrr_0.3.3
[28] scales_1.0.0             tibble_2.1.3             withr_2.1.2
[31] lazyeval_0.2.2           magrittr_1.5             crayon_1.3.4
[34] evaluate_0.14            beeswarm_0.2.3           tools_3.6.1
[37] stringr_1.4.0            munsell_0.5.0            locfit_1.5-9.1
[40] irlba_2.3.3              compiler_3.6.1           rsvd_1.0.2
[43] rlang_0.4.1              grid_3.6.1               RCurl_1.95-4.12
[46] BiocNeighbors_1.4.0      igraph_1.2.4.1           labeling_0.3
[49] bitops_1.0-6             rmarkdown_1.17           gtable_0.3.0
[52] R6_2.4.1                 gridExtra_2.3            knitr_1.26
[55] dplyr_0.8.3              stringi_1.4.3            ggbeeswarm_0.6.0
[58] Rcpp_1.0.3               tidyselect_0.2.5         xfun_0.11               

### Bibliography

Berger, R. L., and J. C. Hsu. 1996. “Bioequivalence Trials, Intersection-Union Tests and Equivalence Confidence Sets.” Statist. Sci. 11 (4). The Institute of Mathematical Statistics:283–319. https://doi.org/10.1214/ss/1032280304.

Law, C. W., Y. Chen, W. Shi, and G. K. Smyth. 2014. “voom: Precision weights unlock linear model analysis tools for RNA-seq read counts.” Genome Biol. 15 (2):R29.

Lawlor, N., J. George, M. Bolisetty, R. Kursawe, L. Sun, V. Sivakamasundari, I. Kycia, P. Robson, and M. L. Stitzel. 2017. “Single-cell transcriptomes identify human islet cell signatures and reveal cell-type-specific expression changes in type 2 diabetes.” Genome Res. 27 (2):208–22.

Lun, A. T. L., and J. C. Marioni. 2017. “Overcoming confounding plate effects in differential expression analyses of single-cell RNA-seq data.” Biostatistics 18 (3):451–64.

McCarthy, D. J., and G. K. Smyth. 2009. “Testing significance relative to a fold-change threshold is a TREAT.” Bioinformatics 25 (6):765–71.

Ritchie, M. E., B. Phipson, D. Wu, Y. Hu, C. W. Law, W. Shi, and G. K. Smyth. 2015. “limma powers differential expression analyses for RNA-sequencing and microarray studies.” Nucleic Acids Res. 43 (7):e47.

Simes, R. J. 1986. “An Improved Bonferroni Procedure for Multiple Tests of Significance.” Biometrika 73 (3):751–54.

Soneson, C., and M. D. Robinson. 2018. “Bias, robustness and scalability in single-cell differential expression analysis.” Nat. Methods 15 (4):255–61.

1. Standard operating procedure is to (i) experience a brief but crushing bout of disappointment due to the poor quality of upregulated candidate markers, (ii) rage-quit, and (iii) remember to check the genes that are changing in the other direction.