Before applying any clustering method to your data, it is important to evaluate
whether the data set contains meaningful clusters (i.e., non-random structures) or
not, and if so, how many clusters there are. This process is called assessing
clustering tendency, or evaluating the feasibility of the clustering analysis.
A major issue in cluster analysis is that clustering methods will return clusters
even if the data do not contain any. In other words, if you blindly apply a
clustering method to a data set, it will divide the data into clusters because
that is what it is supposed to do.
Contents:
Required R packages
Data preparation
Visual inspection of the data
Why assessing clustering tendency?
Methods for assessing clustering tendency
Statistical methods
Visual methods
Summary
References
Required R packages
install.packages(c("factoextra", "clustertend"))
Data preparation
head(iris, 3)
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## 1 5.1 3.5 1.4 0.2 setosa
## 2 4.9 3.0 1.4 0.2 setosa
## 3 4.7 3.2 1.3 0.2 setosa
We start by excluding the column Species at position 5.
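The objects df and random_df used in the code below are not defined in this excerpt. A minimal preparation sketch is given here; the exact construction of the random data set (each variable drawn uniformly between its observed minimum and maximum) and the standardization step are assumptions consistent with how the two objects are used later.

```r
# Exclude the Species column (position 5)
df <- iris[, -5]

# Random data set of the same dimensions: each variable drawn uniformly
# between its observed minimum and maximum (assumed construction)
set.seed(123)
random_df <- apply(df, 2, function(x) runif(length(x), min(x), max(x)))
random_df <- as.data.frame(random_df)

# Standardize both data sets
df <- scale(df)
random_df <- scale(random_df)
```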
We start by visualizing the data to assess whether they contain any meaningful
clusters.
As the data contain more than two variables, we need to reduce the dimensionality
in order to plot a scatter plot. This can be done using the principal component
analysis (PCA) algorithm (R function: prcomp()). After performing PCA, we use the
function fviz_pca_ind() [factoextra R package] to visualize the output.
The iris and the random data sets can be illustrated as follows:
library("factoextra")
# Plot the iris data set
fviz_pca_ind(prcomp(df), title = "PCA - Iris data",
habillage = iris$Species, palette = "jco",
geom = "point", ggtheme = theme_classic(),
legend = "bottom")
# Plot the random df
fviz_pca_ind(prcomp(random_df), title = "PCA - Random data",
geom = "point", ggtheme = theme_classic())
It can be seen that the iris data set contains 3 real clusters. However, the
randomly generated data set doesn't contain any meaningful clusters.
library(factoextra)
set.seed(123)
# K-means on iris dataset
km.res1 <- kmeans(df, 3)
fviz_cluster(list(data = df, cluster = km.res1$cluster),
ellipse.type = "norm", geom = "point", stand = FALSE,
palette = "jco", ggtheme = theme_classic())
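The observation below, that clustering algorithms also partition structureless data, can be checked by applying k-means and hierarchical clustering to the random data set as well. A sketch (random_df is rebuilt here so the block is self-contained; its construction is an assumption):

```r
library(factoextra)
set.seed(123)
# random_df: uniformly distributed data with the same dimensions as iris
random_df <- as.data.frame(apply(iris[, -5], 2,
                 function(x) runif(length(x), min(x), max(x))))

# K-means still imposes 3 clusters on the structureless data
km.res2 <- kmeans(random_df, 3)
fviz_cluster(list(data = random_df, cluster = km.res2$cluster),
             ellipse.type = "norm", geom = "point", stand = FALSE,
             palette = "jco", ggtheme = theme_classic())

# Hierarchical clustering likewise cuts its tree into 3 groups
hc <- hclust(dist(random_df))
fviz_dend(hc, k = 3, k_colors = "jco", show_labels = FALSE)
```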
It can be seen that the k-means algorithm and hierarchical clustering impose a
classification on the random, uniformly distributed data set even though it
contains no meaningful clusters. This is why clustering tendency assessment
methods should be used to evaluate the validity of a clustering analysis, that
is, whether a given data set contains meaningful clusters.
In this section, we'll describe two methods for evaluating the clustering
tendency: i) a statistical method (the Hopkins statistic) and ii) a visual method
(the Visual Assessment of cluster Tendency (VAT) algorithm).
Statistical methods
The Hopkins statistic (Lawson and Jurs 1990) is used to assess the clustering
tendency of a data set by measuring the probability that a given data set is
generated by a uniform data distribution. In other words, it tests the spatial
randomness of the data.
For example, let D be a real data set. The Hopkins statistic can be calculated as
follows:
Sample uniformly n points from D, and generate a simulated data set (randomD) of
n points drawn from a uniform distribution over the range of D.
Compute the distance, x_i, from each real point to its nearest neighbor: for each
point p_i ∈ D, find its nearest neighbor p_j; then compute the distance between
p_i and p_j and denote it x_i = dist(p_i, p_j).
Compute the distance, y_i, from each artificial point to the nearest real data
point: for each point q_i ∈ randomD, find its nearest neighbor q_j in D; then
compute the distance between q_i and q_j and denote it y_i = dist(q_i, q_j).
Calculate the Hopkins statistic (H) as the mean nearest neighbor distance in the
random data set divided by the sum of the mean nearest neighbor distances in the
real and the simulated data sets:

H = \frac{\sum_{i=1}^{n} y_i}{\sum_{i=1}^{n} x_i + \sum_{i=1}^{n} y_i}
A value of H higher than 0.75 indicates a clustering tendency at the 90%
confidence level.
Put another way, if the value of the Hopkins statistic is close to 1, then we can
reject the null hypothesis and conclude that the data set D is significantly
clusterable.
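The formula above can be made concrete with a minimal base-R sketch. This is for illustration only; in practice use factoextra::get_clust_tendency() (shown next) or the clustertend package. The function name hopkins_sketch and its arguments are hypothetical:

```r
# Illustrative base-R sketch of the Hopkins statistic (not a production
# implementation; hopkins_sketch is a hypothetical helper name)
hopkins_sketch <- function(data, n = 10) {
  data <- as.matrix(data)
  # Euclidean distance from point p to its nearest neighbor among the rows
  # of ref, excluding exact self-matches (distance 0)
  nn_dist <- function(p, ref) {
    d <- sqrt(rowSums(sweep(ref, 2, p)^2))
    min(d[d > 0])
  }
  # x_i: distances from n sampled real points to their nearest real neighbors
  idx <- sample(nrow(data), n)
  x <- sapply(idx, function(i) nn_dist(data[i, ], data))
  # y_i: distances from n uniformly generated artificial points to their
  # nearest real neighbors
  rand <- apply(data, 2, function(col) runif(n, min(col), max(col)))
  y <- sapply(seq_len(n), function(i) nn_dist(rand[i, ], data))
  # H = sum(y) / (sum(x) + sum(y))
  sum(y) / (sum(x) + sum(y))
}

set.seed(123)
hopkins_sketch(iris[, -5], n = 50)  # values close to 1 suggest clusterable data
```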
library(factoextra)
# Compute Hopkins statistic for iris dataset
res <- get_clust_tendency(df, n = nrow(df)-1, graph = FALSE)
res$hopkins_stat
## [1] 0.818
# Compute Hopkins statistic for a random dataset
res <- get_clust_tendency(random_df, n = nrow(random_df)-1,
graph = FALSE)
res$hopkins_stat
## [1] 0.466
It can be seen that the iris data set is highly clusterable (the H value = 0.82,
far above the threshold 0.5). However, the random_df data set is not clusterable
(H = 0.47).
Visual methods
The algorithm of the visual assessment of cluster tendency (VAT) approach (Bezdek
and Hathaway, 2002) is as follows:
Compute the dissimilarity matrix (DM) between the objects in the data set using
the Euclidean distance measure.
Reorder the DM so that similar objects are close to one another. This process
creates an ordered dissimilarity matrix (ODM).
The ODM is displayed as an ordered dissimilarity image (ODI), which is the visual
output of VAT.
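The dissimilarity images discussed below can be produced with the function fviz_dist() [factoextra R package]. A sketch (the random data set is rebuilt here so the block is self-contained; its construction is an assumption):

```r
library(factoextra)

# Ordered dissimilarity image (ODI) for the iris data, Species excluded
fviz_dist(dist(iris[, -5]), show_labels = FALSE) +
  labs(title = "Iris data")

# For comparison, the same plot on uniformly generated data
set.seed(123)
random_df <- as.data.frame(apply(iris[, -5], 2,
                 function(x) runif(length(x), min(x), max(x))))
fviz_dist(dist(random_df), show_labels = FALSE) +
  labs(title = "Random data")
```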
The dissimilarity matrix images confirm that there is a cluster structure in the
iris data set but not in the random one.
VAT detects clustering tendency in a visual form through the number of
square-shaped dark blocks along the diagonal of the VAT image.
Summary
In this article, we described how to assess clustering tendency using the Hopkins
statistic and a visual method. After showing that the data are clusterable, the
next step is to determine the optimal number of clusters in the data. This will
be described in the next chapter.
References
Han, Jiawei, Micheline Kamber, and Jian Pei. 2012. Data Mining: Concepts and
Techniques. 3rd ed. Boston: Morgan Kaufmann.
https://doi.org/10.1016/B978-0-12-381479-1.00016-2.
Lawson, Richard G., and Peter C. Jurs. 1990. "New Index for Clustering Tendency
and Its Application to Chemical Problems." Journal of Chemical Information and
Computer Sciences 30 (1): 36–41. http://pubs.acs.org/doi/abs/10.1021/ci00065a010.