Chapter 10 Clustering

10.1 Motivation

Clustering is an unsupervised learning procedure that is used in scRNA-seq data analysis to empirically define groups of cells with similar expression profiles. Its primary purpose is to summarize the data in a digestible format for human interpretation. This allows us to describe population heterogeneity in terms of discrete labels that are easily understood, rather than attempting to comprehend the high-dimensional manifold on which the cells truly reside. After annotation based on marker genes, the clusters can be treated as proxies for more abstract biological concepts such as cell types or states. Clustering is thus a critical step for extracting biological insights from scRNA-seq data. Here, we demonstrate the application of several commonly used methods with the 10X PBMC dataset.

#--- setup ---#
library(OSCAUtils)
chapterPreamble(use_cache = TRUE)

#--- loading ---#
library(BiocFileCache)
bfc <- BiocFileCache("raw_data", ask = FALSE)
raw.path <- bfcrpath(bfc, file.path("http://cf.10xgenomics.com/samples",
    "cell-exp/2.1.0/pbmc4k/pbmc4k_raw_gene_bc_matrices.tar.gz"))
untar(raw.path, exdir=file.path(tempdir(), "pbmc4k"))

library(DropletUtils)
fname <- file.path(tempdir(), "pbmc4k/raw_gene_bc_matrices/GRCh38")
sce.pbmc <- read10xCounts(fname, col.names=TRUE)

#--- gene-annotation ---#
library(scater)
rownames(sce.pbmc) <- uniquifyFeatureNames(
    rowData(sce.pbmc)$ID, rowData(sce.pbmc)$Symbol)

library(EnsDb.Hsapiens.v86)
location <- mapIds(EnsDb.Hsapiens.v86, keys=rowData(sce.pbmc)$ID, 
    column="SEQNAME", keytype="GENEID")

#--- cell-detection ---#
set.seed(100)
e.out <- emptyDrops(counts(sce.pbmc))
sce.pbmc <- sce.pbmc[,which(e.out$FDR <= 0.001)]

#--- quality-control ---#
stats <- perCellQCMetrics(sce.pbmc, subsets=list(Mito=which(location=="MT")))
high.mito <- isOutlier(stats$subsets_Mito_percent, type="higher")
sce.pbmc <- sce.pbmc[,!high.mito]

#--- normalization ---#
library(scran)
set.seed(1000)
clusters <- quickCluster(sce.pbmc)
sce.pbmc <- computeSumFactors(sce.pbmc, cluster=clusters)
sce.pbmc <- logNormCounts(sce.pbmc)

#--- variance-modelling ---#
set.seed(1001)
dec.pbmc <- modelGeneVarByPoisson(sce.pbmc)
top.pbmc <- getTopHVGs(dec.pbmc, prop=0.1)

#--- dimensionality-reduction ---#
set.seed(10000)
sce.pbmc <- denoisePCA(sce.pbmc, subset.row=top.pbmc, technical=dec.pbmc)

set.seed(100000)
sce.pbmc <- runTSNE(sce.pbmc, use_dimred="PCA")

set.seed(1000000)
sce.pbmc <- runUMAP(sce.pbmc, use_dimred="PCA")
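
# The summary shown below comes from printing the processed object:
sce.pbmc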
## class: SingleCellExperiment 
## dim: 33694 3922 
## metadata(1): Samples
## assays(2): counts logcounts
## rownames(33694): RP11-34P13.3 FAM138A ... AC213203.1 FAM231B
## rowData names(2): ID Symbol
## colnames(3922): AAACCTGAGAAGGCCT-1 AAACCTGAGACAGACC-1 ...
##   TTTGTCACAGGTCCAC-1 TTTGTCATCCCAAGAT-1
## colData names(2): Sample Barcode
## reducedDimNames(3): PCA TSNE UMAP
## spikeNames(0):
## altExpNames(0):

10.2 What is the “true clustering”?

At this point, it is worth stressing the distinction between clusters and cell types. The former is an empirical construct while the latter is a biological truth (albeit a vaguely defined one). For this reason, questions like “what is the true number of clusters?” are usually meaningless. We can define as many clusters as we like, with whatever algorithm we like - each clustering will represent its own partitioning of the high-dimensional expression space, and is as “real” as any other clustering.

A more relevant question is “how well do the clusters approximate the cell types?” Unfortunately, this is difficult to answer given the context-dependent interpretation of biological truth. Some analysts will be satisfied with resolution of the major cell types; other analysts may want resolution of subtypes; and others still may require resolution of different states (e.g., metabolic activity, stress) within those subtypes. Moreover, two clusterings can be highly inconsistent yet both valid, simply partitioning the cells based on different aspects of biology. Indeed, asking for an unqualified “best” clustering is akin to asking for the best magnification on a microscope without any context.

It is helpful to realize that clustering, like a microscope, is simply a tool to explore the data. We can zoom in and out by changing the resolution of the clustering parameters, and we can experiment with different clustering algorithms to obtain alternative perspectives of the data. This iterative approach is entirely permissible for data exploration, which constitutes the majority of all scRNA-seq data analyses.

10.3 Graph-based clustering

10.3.1 Background

Popularized by its use in Seurat, graph-based clustering is a flexible and scalable technique for clustering large scRNA-seq datasets. We first build a graph where each node is a cell that is connected to its nearest neighbours in the high-dimensional space. Edges are weighted based on the similarity between the cells involved, with higher weight given to cells that are more closely related. We then apply algorithms to identify “communities” of cells that are more connected to cells in the same community than they are to cells of different communities. Each community represents a cluster that we can use for downstream interpretation.

The major advantage of graph-based clustering lies in its scalability. It only requires a \(k\)-nearest neighbor search that can be done in log-linear time on average, in contrast to hierarchical clustering methods with runtimes that are quadratic with respect to the number of cells. Graph construction avoids making strong assumptions about the shape of the clusters or the distribution of cells within each cluster, compared to other methods like \(k\)-means (that favor spherical clusters) or Gaussian mixture models (that require normality). From a practical perspective, each cell is forcibly connected to a minimum number of neighboring cells, which reduces the risk of generating many uninformative clusters consisting of one or two outlier cells.

The main drawback of graph-based methods is that, after graph construction, no information is retained about relationships beyond the neighbouring cells [1]. This has some practical consequences in datasets that exhibit differences in cell density, as more steps through the graph are required to move the same distance through a region of higher cell density. From the perspective of community detection algorithms, this effect “inflates” the high-density regions such that any internal substructure or noise is more likely to cause formation of subclusters. The resolution of clustering thus becomes dependent on the density of cells, which can occasionally be misleading if it overstates the heterogeneity in the data.

10.3.2 Implementation

There are several considerations in the practical execution of a graph-based clustering method:

  • How many neighbors are considered when constructing the graph.
  • What scheme is used to weight the edges.
  • Which community detection algorithm is used to define the clusters.

For example, the following code uses the 10 nearest neighbors of each cell to construct a shared nearest neighbor graph. Two cells are connected by an edge if any of their nearest neighbors are shared, with the edge weight defined from the smallest sum of ranks for any shared neighbor (Xu and Su 2015). The Walktrap method from the igraph package is then used to identify communities. All calculations are performed using the top PCs to take advantage of data compression and denoising.
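
A minimal sketch of the calls that could produce this output, assuming scran's buildSNNGraph() and igraph's Walktrap implementation:

g <- buildSNNGraph(sce.pbmc, k=10, use.dimred="PCA")
clust <- igraph::cluster_walktrap(g)$membership
table(clust)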

## clust
##   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18 
## 585 518 364 458 170 791 295 107  45  46 152  84  40  60 142  16  28  21

We assign the cluster assignments back into our SingleCellExperiment object as a factor in the column metadata. This allows us to conveniently visualize the distribution of clusters in a \(t\)-SNE plot (Figure 10.1).
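
A minimal sketch of how this might be done, assuming the clust vector from above and scater's plotTSNE():

sce.pbmc$cluster <- factor(clust)
plotTSNE(sce.pbmc, colour_by="cluster")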


Figure 10.1: \(t\)-SNE plot of the 10X PBMC dataset, where each point represents a cell and is coloured according to the identity of the assigned cluster from graph-based clustering.

One of the most important parameters is k, the number of nearest neighbors used to construct the graph. This controls the resolution of the clustering, where higher k yields a more inter-connected graph and broader clusters. Users can exploit this by experimenting with different values of k to obtain a satisfactory resolution.
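
For example, sketches of a finer (k=5) and a broader (k=50) clustering, corresponding to the two tables below:

g.5 <- buildSNNGraph(sce.pbmc, k=5, use.dimred="PCA")
clust.5 <- igraph::cluster_walktrap(g.5)$membership
table(clust.5)

g.50 <- buildSNNGraph(sce.pbmc, k=50, use.dimred="PCA")
clust.50 <- igraph::cluster_walktrap(g.50)$membership
table(clust.50)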

## clust.5
##   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18 
##  81  45 457 296 168 350  79 432 428 897  64 171  68 135  79  26  18  30 
##  19  20  21  22  23 
##  21  16  36   9  16
## clust.50
##   1   2   3   4   5   6   7   8 
## 307 729 789 187 516 524 825  45

The graph itself can be visualized using a force-directed layout (Figure 10.2). This yields a dimensionality reduction result that is closely related to \(t\)-SNE and UMAP, though which of these is the most aesthetically pleasing is left to the eye of the beholder.
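
One possible sketch using igraph's Fruchterman-Reingold layout (the layout algorithm and seed are illustrative choices):

set.seed(2000)
reducedDim(sce.pbmc, "force") <- igraph::layout_with_fr(g)
plotReducedDim(sce.pbmc, "force", colour_by="cluster")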


Figure 10.2: Force-directed layout for the shared nearest-neighbour graph of the PBMC dataset. Each point represents a cell and is coloured according to its assigned cluster identity.

10.3.3 Other parameters

Further tweaking can be performed by changing the edge weighting scheme during graph construction. Setting type="number" will weight edges based on the number of nearest neighbors that are shared between two cells. Similarly, type="jaccard" will weight edges according to the Jaccard index of the two sets of neighbors. We can also disable weighting altogether by using buildKNNGraph(), which is occasionally useful for downstream graph operations that do not support weights.
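
Sketches of these alternatives, assuming the same scran graph-building functions as above:

g.num <- buildSNNGraph(sce.pbmc, use.dimred="PCA", type="number")
g.jaccard <- buildSNNGraph(sce.pbmc, use.dimred="PCA", type="jaccard")
g.none <- buildKNNGraph(sce.pbmc, use.dimred="PCA")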

All of these g variables are graph objects from the igraph package and can be used with any of the community detection algorithms provided by igraph. We have already mentioned the Walktrap approach, but many others are available to choose from:
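
The sketch below applies a few of these to the g object from before; Louvain, Infomap and fast-greedy are arbitrary choices for illustration:

clust.louvain <- igraph::cluster_louvain(g)$membership
clust.infomap <- igraph::cluster_infomap(g)$membership
clust.fast <- igraph::cluster_fast_greedy(g)$membership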

It is then straightforward to compare two clustering strategies to see how they differ. For example, the results below suggest that Louvain is similar to Walktrap; fast-greedy yields coarser clusters; and Infomap provides higher resolution.
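
A sketch of the cross-tabulations that generate the tables below:

table(Louvain=clust.louvain, Walktrap=clust)
table(Infomap=clust.infomap, Walktrap=clust)
table(Fast=clust.fast, Walktrap=clust)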

##        Walktrap
## Louvain   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16
##      1    1   0   0   0   0   1   0   0   0   0   0   0   0   0 134   0
##      2    0   0   0   0   0   0   0   0  45   0   0   0   0   0   0   0
##      3    0   0   0   0   0   0   0   0   0   0   0  84   0   0   0   0
##      4    0   0   0   0 170   0   0   0   0   0   0   0   0   0   0   0
##      5    0 517   0   0   0   0   0   0   0   0   0   0   0   0   0   0
##      6    0   0   0  11   0   0 248 107   0   0   0   0  39   0   0  16
##      7  368   0   1   0   0   2   0   0   0   0   0   0   0   0   0   0
##      8   14   1 360   0   0   0   0   0   0   0   3   0   0   0   0   0
##      9  181   0   3   0   0   0   0   0   0   0 149   0   0   0   3   0
##      10   0   0   0 447   0   0  47   0   0   0   0   0   1  60   0   0
##      11  21   0   0   0   0 788   0   0   0   0   0   0   0   0   5   0
##        Walktrap
## Louvain  17  18
##      1    0   0
##      2    0   0
##      3    0   0
##      4   26  21
##      5    0   0
##      6    0   0
##      7    0   0
##      8    2   0
##      9    0   0
##      10   0   0
##      11   0   0
##  [ reached getOption("max.print") -- omitted 1 row ]
##        Walktrap
## Infomap   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16
##      1    0   0   0   0   0 256   0   0   0   0   0   0   0   0   0   0
##      2    0   0   0   0   0 243   0   0   0   0   0   0   0   0   5   0
##      3    0   0   0 243   0   0   0   0   0   0   0   0   0   0   0   0
##      4    1   0 203   0   0   0   0   0   0   0   2   0   0   0   0   0
##      5    0   0   0 184   0   0   0   0   0   0   0   0   0   0   0   0
##      6    0   0   0  13   0   0 162   0   0   0   0   0   0   0   0   0
##      7   27   0   1   0   0   0   0   0   0   0 149   0   0   0   0   0
##      8    0 168   0   0   0   0   0   0   0   0   0   0   0   0   0   0
##      9    0   0   0   0 169   0   0   0   0   0   0   0   0   0   0   0
##      10   0 151   0   0   0   0   0   0   0   0   0   0   0   0   0   0
##      11   0   0 152   0   0   0   0   0   0   0   1   0   0   0   0   0
##        Walktrap
## Infomap  17  18
##      1    0   0
##      2    0   0
##      3    0   0
##      4    0   0
##      5    0   0
##      6    0   0
##      7    0   0
##      8    0   0
##      9    0   0
##      10   0   0
##      11   0   0
##  [ reached getOption("max.print") -- omitted 28 rows ]
##     Walktrap
## Fast   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17
##    1   0   0   0   0   0   0   0   0   0   0   0   0   0   0 108   0   0
##    2   2   0   0   0   0 771   0   0   0   0   0   0   0   0  20   0   0
##    3   0   0   0   0   0   0   0   0  45   0   0   0   0   0   0   0   0
##    4 583   0 364   0 170  20   0   0   0   0 152   0   0   0  14   0  28
##    5   0 518   0   0   0   0   0   0   0  45   0   0   0   0   0   0   0
##    6   0   0   0 458   0   0 295 107   0   1   0  84  40  60   0  16   0
##     Walktrap
## Fast  18
##    1   0
##    2   0
##    3   0
##    4  21
##    5   0
##    6   0

Some community detection algorithms operate by agglomeration and thus can be used to construct a hierarchical dendrogram based on the pattern of merges between clusters. The dendrogram itself is not particularly informative as it simply describes the order of merge steps performed by the algorithm; unlike the dendrograms produced by hierarchical clustering (Section 10.5), it does not capture the magnitude of differences between subpopulations. However, it does provide a convenient avenue for manually tuning the clustering resolution by generating nested clusterings using the cut_at() function, as shown below.
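
A sketch using cut_at() on the Walktrap result; the requested numbers of clusters are illustrative and match the tables below:

community.walktrap <- igraph::cluster_walktrap(g)
table(igraph::cut_at(community.walktrap, no=5))
table(igraph::cut_at(community.walktrap, no=20))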

## 
##    1    2    3    4    5 
## 3612  198   45   46   21
## 
##   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18 
## 533 364 458 442 170 791 295 107  45  46 152  84  40  60 142  76  52  16 
##  19  20 
##  28  21

Pipelines involving scran default to rank-based weights followed by Walktrap clustering. In contrast, Seurat uses Jaccard-based weights followed by Louvain clustering. Both of these strategies work well, and it is likely that the same could be said for many other combinations of weighting schemes and community detection algorithms.

10.3.4 Assessing cluster separation

When dealing with graphs, the modularity is a natural metric for evaluating the separation between communities/clusters. This is defined as the (scaled) difference between the observed total weight of edges between nodes in the same cluster and the expected total weight if edge weights were randomly distributed across all pairs of nodes. Larger modularity values indicate that most edges occur within clusters, suggesting that the clusters are sufficiently well separated to avoid edges forming between neighboring cells in different clusters.

The standard approach is to report a single modularity value for a clustering on a given graph. This is useful for comparing different clusterings on the same graph - and indeed, some community detection algorithms are designed with the aim of maximizing the modularity - but it is less helpful for interpreting a given clustering. Rather, we use the clusterModularity() function with as.ratio=TRUE, which returns the ratio of the observed to expected sum of weights between each pair of clusters. We use the ratio instead of the difference as the former is less dependent on the number of cells in each cluster.
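
A sketch of the call, applied to the graph and Walktrap clustering from before:

ratio <- clusterModularity(g, clust, as.ratio=TRUE)
ratio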

##        1        2       3     4         5     6       7         8
## 1  6.429 0.000151 0.25827 0.000 2.941e-04 0.220  0.0000  0.000000
## 2    NaN 7.782465 0.00815 0.000 0.000e+00 0.000  0.0000  0.001662
## 3    NaN      NaN 9.72531 0.000 2.746e-02 0.000  0.0000  0.000000
## 4    NaN      NaN     NaN 7.126 0.000e+00 0.000  0.7751  0.010573
## 5    NaN      NaN     NaN   NaN 2.609e+01 0.000  0.0000  0.000000
## 6    NaN      NaN     NaN   NaN       NaN 3.903  0.0000  0.000000
## 7    NaN      NaN     NaN   NaN       NaN   NaN 11.3751  1.050172
## 8    NaN      NaN     NaN   NaN       NaN   NaN     NaN 41.405410
## 9    NaN      NaN     NaN   NaN       NaN   NaN     NaN       NaN
## 10   NaN      NaN     NaN   NaN       NaN   NaN     NaN       NaN
## 11   NaN      NaN     NaN   NaN       NaN   NaN     NaN       NaN
##            9        10        11      12        13       14      15
## 1    0.00000   0.04319  0.385499  0.0000 4.832e-03  0.00000  0.3193
## 2    0.00000   0.12067  0.006695  0.0000 0.000e+00  0.00000  0.0000
## 3    0.00000   0.02429  0.922697  0.0000 0.000e+00  0.00000  0.0000
## 4    0.00000   0.00000  0.000000  0.0000 5.883e-01  1.32319  0.0000
## 5    0.00000   0.00000  0.009363  0.0000 0.000e+00  0.00000  0.0000
## 6    0.00000   0.02245  0.000000  0.0000 0.000e+00  0.00000  0.2046
## 7    0.06027   0.03715  0.000000  0.6942 1.166e+00  0.01437  0.0000
## 8    0.00000   0.07922  0.000000  0.2723 2.786e+00  0.00000  0.0000
## 9  154.09789   0.00000  0.000000  0.0000 0.000e+00  0.00000  0.0000
## 10       NaN 134.60709  0.000000  0.0000 3.755e-02  0.00000  0.0148
## 11       NaN       NaN 24.440203  0.0000 0.000e+00  0.00000  0.0000
##          16        17        18
## 1    0.0000   0.00000   0.06757
## 2    0.0000   0.03901   0.00000
## 3    0.0000   1.03255   0.00000
## 4    0.6286   0.00000   0.00000
## 5    0.0000   2.81928   1.59831
## 6    0.0000   0.00000   0.00000
## 7    1.5080   0.00000   0.00000
## 8    0.0000   0.00000   0.00000
## 9    0.0000   0.00000   0.00000
## 10   0.0000   0.00000   0.00000
## 11   0.0000   0.39224   0.00000
##  [ reached getOption("max.print") -- omitted 7 rows ]

In this matrix, each row/column corresponds to a cluster, and each entry contains the ratio of the observed to expected total weight of edges between cells in the respective clusters. A dataset containing well-separated clusters should contain most of the observed total weight on the diagonal entries, i.e., most edges occur between cells in the same cluster. Indeed, the concentration of the weight on the diagonal (Figure 10.3) indicates that most of the clusters are well-separated, while some modest off-diagonal entries represent closely related clusters with more inter-connecting edges.
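
One way to produce such a heatmap, sketched here with the pheatmap package (the colour scheme is an arbitrary choice):

library(pheatmap)
pheatmap(log2(ratio+1), cluster_rows=FALSE, cluster_cols=FALSE,
    color=colorRampPalette(c("white", "blue"))(100))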


Figure 10.3: Heatmap of the log2-ratio of the total weight between nodes in the same cluster or in different clusters, relative to the total weight expected under a null model of random links.

One useful approach is to use the ratio matrix to form another graph where the nodes are clusters rather than cells. Edges between nodes are weighted according to the ratio of observed to expected edge weights between cells in those clusters. We can then repeat our graph operations on this new cluster-level graph. For example, we could obtain clusters of clusters, or we could simply create a new cluster-based layout for visualization (Figure 10.4). This is analogous to the “graph abstraction” approach described by Wolf et al. (2017).
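
A sketch of how such a cluster-level graph might be built and plotted with igraph; the seed and edge-width scaling are illustrative choices:

adj <- log2(ratio+1)
adj[is.na(adj)] <- 0 # unreported lower triangle of the ratio matrix
cluster.gr <- igraph::graph_from_adjacency_matrix(adj,
    mode="upper", weighted=TRUE, diag=FALSE)
set.seed(11001010)
plot(cluster.gr, edge.width=igraph::E(cluster.gr)$weight*5)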


Figure 10.4: Force-directed layout showing the relationships between clusters based on the log-ratio of observed to expected total weights between nodes in different clusters. The thickness of the edge between a pair of clusters is proportional to the corresponding log-ratio.

Incidentally, some readers may have noticed that all igraph commands were prefixed with igraph::. We have done this deliberately to avoid bringing igraph::normalize into the global namespace. Rather unfortunately, this normalize function accepts any argument and returns NULL, which causes difficult-to-diagnose bugs when it overwrites normalize from BiocGenerics.

10.4 \(k\)-means clustering

10.4.1 Background

\(k\)-means clustering is a classic technique that aims to partition cells into \(k\) clusters. Each cell is assigned to the cluster with the closest centroid, which is done by minimizing the within-cluster sum of squares using a random starting configuration for the \(k\) centroids. The main advantage of this approach lies in its speed, given the simplicity and ease of implementation of the algorithm. However, it suffers from a number of serious shortcomings that reduce its appeal for obtaining interpretable clusters:

  • It implicitly favours spherical clusters of equal radius. This can lead to unintuitive partitionings on real datasets that contain groupings with irregular sizes and shapes.
  • The number of clusters \(k\) must be specified beforehand and represents a hard cap on the resolution of the clustering. For example, setting \(k\) to be below the number of cell types will always lead to co-clustering of two cell types, regardless of how well separated they are. In contrast, other methods like graph-based clustering will respect strong separation even if the relevant resolution parameter is set to a low value.
  • It is dependent on the randomly chosen initial coordinates. This requires multiple runs to verify that the clustering is stable.

That said, \(k\)-means clustering is still one of the best approaches for sample-based data compression. In this application, we set \(k\) to a large value such as the square root of the number of cells to obtain fine-grained clusters. These are not meant to be interpreted directly, but rather, the centroids are treated as “samples” for further analyses. The idea here is to obtain a single representative of each region of the expression space, reducing the number of samples and computational work in later steps like, e.g., trajectory reconstruction (Ji and Ji 2016). This approach will also eliminate differences in cell density across the expression space, ensuring that the most abundant cell type does not dominate downstream results.

10.4.2 Base implementation

Base R provides the kmeans() function that does as its name suggests. We call this on our top PCs to obtain a clustering for a specified number of clusters in the centers= argument, after setting the random seed to ensure that the results are reproducible. In general, the \(k\)-means clusters correspond to the visual clusters on the \(t\)-SNE plot in Figure 10.5, though there are some divergences that are not observed in, say, Figure 10.1. (This is at least partially due to the fact that \(t\)-SNE is itself graph-based and so will naturally agree more with a graph-based clustering strategy.)
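
A sketch of the call that produces the clustering below; the seed and centers= value are illustrative choices:

set.seed(100)
clust.kmeans <- kmeans(reducedDim(sce.pbmc, "PCA"), centers=10)
table(clust.kmeans$cluster)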

## 
##   1   2   3   4   5   6   7   8   9  10 
## 472 200  46 515  90 320 241 865 735 438

Figure 10.5: \(t\)-SNE plot of the 10X PBMC dataset, where each point represents a cell and is coloured according to the identity of the assigned cluster from \(k\)-means clustering.

If we were so inclined, we could obtain a “reasonable” choice of \(k\) by computing the gap statistic using methods from the cluster package. This is the log-ratio of the expected to observed within-cluster sum of squares, where the expected value is computed by randomly distributing cells within the minimum bounding box of the original data. A larger gap statistic represents a lower observed sum of squares - and thus better clustering - compared to a population with no structure. Ideally, we would choose the \(k\) that maximizes the gap statistic, but this is often unhelpful as the tendency of \(k\)-means to favour spherical clusters drives a large \(k\) to capture different cluster shapes. Instead, we choose the most parsimonious \(k\) beyond which the increases in the gap statistic are considered insignificant (Figure 10.6).
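
A sketch using clusGap() and maxSE() from the cluster package, which would report the chosen k shown below (the seed and K.max are illustrative):

library(cluster)
set.seed(110010101)
gaps <- clusGap(reducedDim(sce.pbmc, "PCA"), kmeans, K.max=20)
best.k <- maxSE(gaps$Tab[,"gap"], gaps$Tab[,"SE.sim"])
best.k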

## [1] 9

Figure 10.6: Gap statistic with respect to increasing number of \(k\)-means clusters in the 10X PBMC dataset. The red line represents the chosen \(k\).

A more practical use of \(k\)-means is to deliberately set \(k\) to a large value to achieve overclustering. This will forcibly partition cells inside broad clusters that do not have well-defined internal structure. For example, we might be interested in the change in expression from one “side” of a cluster to the other, but the lack of any clear separation within the cluster makes it difficult to separate with graph-based methods, even at the highest resolution. \(k\)-means has no such problems and will readily split these broad clusters for greater resolution, though obviously one must be prepared for the additional work involved in interpreting a greater number of clusters.
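
For example, a sketch with centers=20, producing the clustering below:

set.seed(100)
clust.kmeans2 <- kmeans(reducedDim(sce.pbmc, "PCA"), centers=20)
table(clust.kmeans2$cluster)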

## 
##   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18 
## 153 172  47 254 125 207 160 334 204 442 163  68 192 271 113 168 124 420 
##  19  20 
##  45 260

Figure 10.7: \(t\)-SNE plot of the 10X PBMC dataset, where each point represents a cell and is coloured according to the identity of the assigned cluster from \(k\)-means clustering with \(k=20\).

10.4.3 Assessing cluster separation

The within-cluster sum of squares (WCSS) for each cluster is the most relevant diagnostic for \(k\)-means, given that the algorithm aims to find a clustering that minimizes the WCSS. Specifically, we use the WCSS to compute the root-mean-squared deviation (RMSD) that represents the spread of cells within each cluster. A cluster is more likely to have a low RMSD if it has no internal structure and is separated from other clusters (such that there are not many cells on the boundaries between clusters, which would result in a higher sum of squares from the centroid).
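
A sketch of this computation from the kmeans() output for the k=20 clustering above, yielding the table below:

ncells <- tabulate(clust.kmeans2$cluster)
tab <- data.frame(wcss=clust.kmeans2$withinss, ncells=ncells)
tab$rms <- sqrt(tab$wcss/tab$ncells)
tab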

##     wcss ncells   rms
## 1   2872    153 4.333
## 2   4204    172 4.944
## 3   1443     47 5.542
## 4   4275    254 4.103
## 5   1711    125 3.700
## 6   3055    207 3.842
## 7   1633    160 3.194
## 8   2160    334 2.543
## 9   7027    204 5.869
## 10  2858    442 2.543
## 11  4259    163 5.112
## 12  2289     68 5.802
## 13  2112    192 3.317
## 14  6391    271 4.856
## 15  1603    113 3.767
## 16 12400    168 8.591
## 17  1823    124 3.834
## 18  7026    420 4.090
## 19  2591     45 7.587
## 20  3640    260 3.742

(As an aside, the RMSDs of the clusters are poorly correlated with their sizes in Figure 10.7. This highlights the risks of attempting to quantitatively interpret the sizes of visual clusters in \(t\)-SNE plots.)

To explore the relationships between \(k\)-means clusters, a natural approach is to compute distances between their centroids. This directly lends itself to visualization as a tree after hierarchical clustering (Figure 10.8).
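
A sketch using base R's dist() and hclust() on the k-means centroids, with Ward's method as in the figure:

cent.tree <- hclust(dist(clust.kmeans2$centers), "ward.D2")
plot(cent.tree)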


Figure 10.8: Hierarchy of \(k\)-means cluster centroids, using Ward’s minimum variance method.

10.5 Hierarchical clustering

10.5.1 Background

Hierarchical clustering is an ancient technique that aims to generate a dendrogram containing a hierarchy of samples. This is most commonly done by greedily agglomerating samples into clusters, then agglomerating those clusters into larger clusters, and so on until all samples belong to a single cluster. Variants of hierarchical clustering methods primarily differ in how they choose to perform the agglomerations. For example, complete linkage aims to merge clusters with the smallest maximum distance between their elements, while Ward’s method aims to minimize the increase in within-cluster variance.

In the context of scRNA-seq, the main advantage of hierarchical clustering lies in the production of the dendrogram. This is a rich summary that describes the relationships between cells and subpopulations at various resolutions and in a quantitative manner based on the branch lengths. Users can easily “cut” the tree at different heights to define clusters with different granularity, where clusters defined at high resolution are guaranteed to be nested within those defined at a lower resolution. (Guaranteed nesting can be helpful for interpretation, as discussed in Section 10.6.) The dendrogram is also a natural representation of the data in situations where cells have descended from a relatively recent common ancestor.

In practice, hierarchical clustering is too slow to be used for anything but the smallest scRNA-seq datasets. Most variants require a cell-cell distance matrix that is prohibitively expensive to compute for many cells. Greedy agglomeration is also likely to result in a quantitatively suboptimal partitioning (as defined by the agglomeration measure) at higher levels of the dendrogram when the number of cells and merge steps is high. Nonetheless, we will still demonstrate the application of hierarchical clustering here, as it can occasionally be useful for squeezing more information out of datasets with very few cells.

10.5.2 Implementation

As the PBMC dataset is too large, we will demonstrate on the 416B dataset instead.

#--- setup ---#
library(OSCAUtils)
chapterPreamble(use_cache = TRUE)

#--- loading ---#
library(scRNAseq)
sce.416b <- LunSpikeInData(which="416b") 
sce.416b$block <- factor(sce.416b$block)

#--- gene-annotation ---#
library(AnnotationHub)
ens.mm.v97 <- AnnotationHub()[["AH73905"]]
rowData(sce.416b)$ENSEMBL <- rownames(sce.416b)
rowData(sce.416b)$SYMBOL <- mapIds(ens.mm.v97, keys=rownames(sce.416b),
    keytype="GENEID", column="SYMBOL")
rowData(sce.416b)$SEQNAME <- mapIds(ens.mm.v97, keys=rownames(sce.416b),
    keytype="GENEID", column="SEQNAME")

library(scater)
rownames(sce.416b) <- uniquifyFeatureNames(rowData(sce.416b)$ENSEMBL, 
    rowData(sce.416b)$SYMBOL)

#--- quality-control ---#
mito <- which(rowData(sce.416b)$SEQNAME=="MT")
stats <- perCellQCMetrics(sce.416b, subsets=list(Mt=mito))
qc <- quickPerCellQC(stats, percent_subsets=c("subsets_Mt_percent",
    "altexps_ERCC_percent"), batch=sce.416b$block)
sce.416b <- sce.416b[,!qc$discard]

#--- normalization ---#
library(scran)
sce.416b <- computeSumFactors(sce.416b)
sce.416b <- logNormCounts(sce.416b)

#--- variance-modelling ---#
dec.416b <- modelGeneVarWithSpikes(sce.416b, "ERCC", block=sce.416b$block)
chosen.hvgs <- getTopHVGs(dec.416b, prop=0.1)

#--- batch-correction ---#
library(limma)
assay(sce.416b, "corrected") <- removeBatchEffect(logcounts(sce.416b), 
    design=model.matrix(~sce.416b$phenotype), batch=sce.416b$block)

#--- dimensionality-reduction ---#
sce.416b <- runPCA(sce.416b, ncomponents=10, subset_row=chosen.hvgs,
    exprs_values="corrected", BSPARAM=BiocSingular::ExactParam())

set.seed(1010)
sce.416b <- runTSNE(sce.416b, dimred="PCA", perplexity=10)
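
# The summary shown below comes from printing the processed object:
sce.416b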
## class: SingleCellExperiment 
## dim: 46604 185 
## metadata(0):
## assays(3): counts logcounts corrected
## rownames(46604): 4933401J01Rik Gm26206 ... CAAA01147332.1
##   CBFB-MYH11-mcherry
## rowData names(4): Length ENSEMBL SYMBOL SEQNAME
## colnames(185): SLX-9555.N701_S502.C89V9ANXX.s_1.r_1
##   SLX-9555.N701_S503.C89V9ANXX.s_1.r_1 ...
##   SLX-11312.N712_S507.H5H5YBBXX.s_8.r_1
##   SLX-11312.N712_S517.H5H5YBBXX.s_8.r_1
## colData names(9): Source Name cell line ... spike-in addition
##   block
## reducedDimNames(2): PCA TSNE
## spikeNames(0):
## altExpNames(2): ERCC SIRV

We compute a cell-cell distance matrix using the top PCs and we apply hierarchical clustering with Ward’s method. The resulting tree in Figure 10.9 shows a clear split in the population caused by oncogene induction. While both Ward’s method and complete linkage (hclust()’s default) yield compact clusters, we prefer the former as it is less affected by differences in variance between clusters.
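
A sketch of the distance matrix and tree construction with Ward's method; the coloured dendrogram in Figure 10.9 would require additional plotting code not shown here:

dist.416b <- dist(reducedDim(sce.416b, "PCA"))
tree.416b <- hclust(dist.416b, "ward.D2")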


Figure 10.9: Hierarchy of cells in the 416B data set after hierarchical clustering, where each leaf node is a cell that is coloured according to its oncogene induction status (red is induced, blue is control) and plate of origin (light or dark).

To obtain explicit clusters, we “cut” the tree by removing internal branches such that every subtree represents a distinct cluster. This is most simply done by removing internal branches above a certain height of the tree, as performed by the cutree() function. We generally prefer to use the dynamicTreeCut package, which uses the shape of the branches to obtain a more suitable partitioning for complex dendrograms (Figure 10.10).
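
A sketch using dynamicTreeCut, which would emit the messages and cluster sizes below (minClusterSize here is an illustrative choice):

library(dynamicTreeCut)
clust.416b <- cutreeDynamic(tree.416b, distM=as.matrix(dist.416b),
    minClusterSize=10)
table(clust.416b)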

##  ..cutHeight not given, setting it to 783  ===>  99% of the (truncated) height range in dendro.
##  ..done.
## clust.416b
##  1  2  3  4 
## 78 69 24 14

Figure 10.10: Hierarchy of cells in the 416B data set after hierarchical clustering, where each leaf node is a cell that is coloured according to its assigned cluster identity from a dynamic tree cut.

This generally corresponds well to the grouping of cells on a \(t\)-SNE plot (Figure 10.11). The exception is cluster 2, which is split across two visual clusters in the plot. We attribute this to a distortion introduced by \(t\)-SNE rather than inappropriate behaviour of the clustering algorithm, based on the examination of some later diagnostics.
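
A sketch of the corresponding visualization:

sce.416b$cluster <- factor(clust.416b)
plotTSNE(sce.416b, colour_by="cluster")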


Figure 10.11: \(t\)-SNE plot of the 416B dataset, where each point represents a cell and is coloured according to the identity of the assigned cluster from hierarchical clustering.

10.5.3 Assessing cluster separation

We check the separation of the clusters using the silhouette width (Figure 10.12). For each cell, we compute the average distance to cells in each other cluster. We then compute the minimum of these average distances across all clusters, as well as the average distance to cells in the same cluster. The silhouette width for each cell is defined as the difference between these two values divided by their maximum. Cells with large positive silhouette widths are closer to other cells in the same cluster than to cells in different clusters.
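
A sketch of this computation using the cluster package's silhouette(); the exact plotting used for Figure 10.12 may differ cosmetically:

library(cluster)
sil <- silhouette(clust.416b, dist=dist.416b)
plot(sil)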

Each cluster would ideally contain large positive silhouette widths, indicating that it is well-separated from other clusters. This is indeed the case in Figure 10.12 - and in fact, cluster 2 has the largest width of all, indicating that it is a more coherent cluster than portrayed in Figure 10.11. Smaller widths can arise from the presence of internal subclusters, which inflates the within-cluster distance; or overclustering, where cells at the boundary of a partition are closer to the neighboring cluster than their own cluster.


Figure 10.12: Silhouette widths for cells in each cluster in the 416B dataset. Each bar represents a cell, grouped by the cluster to which it is assigned.

For a more detailed examination, we identify the closest neighboring cluster for cells with negative widths. This provides a perspective on the relationships between clusters that is closer to the raw data than the dendrogram in Figure 10.10.
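
A sketch of this check, using the cluster and neighbor assignments stored in the silhouette object:

neg.widths <- sil[,3] < 0
table(Cluster=sil[neg.widths,1], Neighbor=sil[neg.widths,2])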

##        Neighbor
## Cluster 1 2 3
##       2 0 0 3
##       3 1 3 0

The average silhouette width across all cells can also be used to choose clustering parameters. The aim is to maximize the average silhouette width in order to obtain well-separated clusters. This can be helpful to automatically obtain a “reasonable” clustering, though in practice, the clustering that yields the strongest separation often does not provide the most biological insight.

10.6 Subclustering

Another simple approach to improving resolution is to repeat the feature selection and clustering within a single cluster. This aims to select HVGs and PCs that are more relevant to internal structure, improving resolution by avoiding noise from unnecessary features. Subsetting also encourages clustering methods to separate cells according to more modest heterogeneity in the absence of distinct subpopulations. We demonstrate with a cluster of putative memory T cells from the PBMC dataset, identified according to several markers (Figure 10.13).
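
A sketch of such a marker-based check with scater's plotExpression(); the particular genes listed here are illustrative T cell and memory markers:

plotExpression(sce.pbmc, features=c("CD3E", "CCR7", "CD69", "CD44"),
    x="cluster", colour_by="cluster")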


Figure 10.13: Distribution of log-normalized expression values for several T cell markers within each cluster in the 10X PBMC dataset. Each cluster is color-coded for convenience.

We apply graph-based clustering within this memory subset to obtain CD4+ and CD8+ subclusters (Figure 10.14). Admittedly, the expression of CD4 is so low that the change is rather modest, but the interpretation is clear enough.
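
A sketch of the subclustering workflow, assuming cluster 6 contains the putative memory T cells (as noted below) and repeating variance modelling, PCA and graph-based clustering on that subset:

memory <- 6
sce.memory <- sce.pbmc[,clust==memory]
dec.memory <- modelGeneVar(sce.memory)
set.seed(1101) # seed set as in the earlier denoisePCA() call
sce.memory <- denoisePCA(sce.memory, technical=dec.memory,
    subset.row=getTopHVGs(dec.memory, prop=0.1))
g.memory <- buildSNNGraph(sce.memory, use.dimred="PCA")
sce.memory$subcluster <- factor(igraph::cluster_walktrap(g.memory)$membership)
plotExpression(sce.memory, features=c("CD8A", "CD4"), x="subcluster")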


Figure 10.14: Distribution of CD4 and CD8A log-normalized expression values within each cluster in the memory T cell subset of the 10X PBMC dataset.

For subclustering analyses, it is helpful to define a customized function that calls our desired algorithms to obtain a clustering from a given SingleCellExperiment. This function can then be applied multiple times on different subsets without having to repeatedly copy and modify the code for each subset. (Of course, the downside is that this assumes that a similar analysis is appropriate for each subset. If different subsets require extensive reparametrization, copying the code may actually be more straightforward.)
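
A minimal sketch of such a helper; the function name and defaults are illustrative rather than taken from any package:

subclusterCells <- function(sce, prop=0.1, k=10) {
    dec <- modelGeneVar(sce)
    sce <- denoisePCA(sce, technical=dec, subset.row=getTopHVGs(dec, prop=prop))
    g <- buildSNNGraph(sce, k=k, use.dimred="PCA")
    factor(igraph::cluster_walktrap(g)$membership)
}

# For example: sce.memory$subcluster <- subclusterCells(sce.memory)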

Subclustering is a general and conceptually straightforward procedure for increasing resolution. It can also simplify the interpretation of the subclusters, which only need to be considered in the context of the parent cluster’s identity - for example, we did not have to re-identify the cells in cluster 6 as T cells. However, this is a double-edged sword as it is difficult for practitioners to consider the uncertainty of identification for parent clusters when working with deep nesting. If cell types or states span cluster boundaries, conditioning on the putative cell type identity of the parent cluster can encourage the construction of a “house of cards” of cell type assignments, e.g., where a subcluster of one parent cluster is actually contamination from a cell type in a separate parent cluster.

Session Info

R version 3.6.1 (2019-07-05)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 14.04.5 LTS

Matrix products: default
BLAS:   /home/ramezqui/Rbuild/danbuild/R-3.6.1/lib/libRblas.so
LAPACK: /home/ramezqui/Rbuild/danbuild/R-3.6.1/lib/libRlapack.so

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=C              
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
 [9] LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       

attached base packages:
[1] parallel  stats4    stats     graphics  grDevices utils     datasets 
[8] methods   base     

other attached packages:
 [1] dynamicTreeCut_1.63-1       dendextend_1.12.0          
 [3] cluster_2.1.0               pheatmap_1.0.12            
 [5] scater_1.14.0               ggplot2_3.2.1              
 [7] scran_1.14.0                SingleCellExperiment_1.8.0 
 [9] SummarizedExperiment_1.16.0 DelayedArray_0.12.0        
[11] BiocParallel_1.20.0         matrixStats_0.55.0         
[13] Biobase_2.46.0              GenomicRanges_1.38.0       
[15] GenomeInfoDb_1.22.0         IRanges_2.20.0             
[17] S4Vectors_0.24.0            BiocGenerics_0.32.0        
[19] Cairo_1.5-10                BiocStyle_2.14.0           
[21] OSCAUtils_0.0.1            

loaded via a namespace (and not attached):
 [1] viridis_0.5.1            edgeR_3.28.0            
 [3] BiocSingular_1.2.0       viridisLite_0.3.0       
 [5] DelayedMatrixStats_1.8.0 assertthat_0.2.1        
 [7] statmod_1.4.32           highr_0.8               
 [9] BiocManager_1.30.9       dqrng_0.2.1             
[11] GenomeInfoDbData_1.2.2   vipor_0.4.5             
[13] yaml_2.2.0               pillar_1.4.2            
[15] lattice_0.20-38          glue_1.3.1              
[17] limma_3.42.0             digest_0.6.22           
[19] RColorBrewer_1.1-2       XVector_0.26.0          
[21] colorspace_1.4-1         cowplot_1.0.0           
[23] htmltools_0.4.0          Matrix_1.2-17           
[25] pkgconfig_2.0.3          bookdown_0.14           
[27] zlibbioc_1.32.0          purrr_0.3.3             
[29] scales_1.0.0             tibble_2.1.3            
[31] withr_2.1.2              lazyeval_0.2.2          
[33] magrittr_1.5             crayon_1.3.4            
[35] evaluate_0.14            beeswarm_0.2.3          
[37] tools_3.6.1              stringr_1.4.0           
[39] munsell_0.5.0            locfit_1.5-9.1          
[41] irlba_2.3.3              compiler_3.6.1          
[43] rsvd_1.0.2               rlang_0.4.1             
[45] grid_3.6.1               RCurl_1.95-4.12         
[47] BiocNeighbors_1.4.0      igraph_1.2.4.1          
[49] bitops_1.0-6             labeling_0.3            
[51] rmarkdown_1.16           gtable_0.3.0            
[53] R6_2.4.0                 gridExtra_2.3           
[55] knitr_1.25               dplyr_0.8.3             
[57] stringi_1.4.3            ggbeeswarm_0.6.0        
[59] Rcpp_1.0.2               tidyselect_0.2.5        
[61] xfun_0.10               

Bibliography

Ji, Z., and H. Ji. 2016. “TSCAN: Pseudo-time reconstruction and evaluation in single-cell RNA-seq analysis.” Nucleic Acids Res. 44 (13):e117.

Wolf, F. Alexander, Fiona Hamey, Mireya Plass, Jordi Solana, Joakim S. Dahlin, Berthold Gottgens, Nikolaus Rajewsky, Lukas Simon, and Fabian J. Theis. 2017. “Graph Abstraction Reconciles Clustering with Trajectory Inference Through a Topology Preserving Map of Single Cells.” bioRxiv. Cold Spring Harbor Laboratory. https://doi.org/10.1101/208819.

Xu, C., and Z. Su. 2015. “Identification of cell types from single-cell transcriptomes using a novel clustering method.” Bioinformatics 31 (12):1974–80.


  1. Sten Linnarsson talked about this in SCG2018, but I don’t know where that work ended up. So this is what passes as a reference for the time being.