Fig. 8.9. Example of two-class k-means clustering. Only the first two dimensions are represented. After starting at random initial points, the centroids (marked by x) are recalculated in every iteration along with the association of each data point with its nearest centroid. The iteration stops when there is no further change in centroid positions or associations
be readily observed. Hence, many Raman spectroscopic imaging applications have utilized k-means clustering for both preliminary and final visualization [149, 150].
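The iterative procedure illustrated in Fig. 8.9 can be sketched in a few lines of NumPy. The initialization (random data points as seeds) and the stopping rule (no change in associations) follow the caption; the parameter names, iteration cap, and random seed are illustrative choices, not part of the original text:

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Minimal k-means in the spirit of Fig. 8.9: alternate between
    associating points with the nearest centroid and recalculating
    centroids, stopping when the associations no longer change."""
    rng = np.random.default_rng(seed)
    # initialize centroids at k randomly chosen data points
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.full(len(X), -1)
    for _ in range(max_iter):
        # association step: each point goes to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break  # no change in associations: converged
        labels = new_labels
        # update step: each centroid becomes the mean of its points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```

For spectroscopic images, each row of X would be one spectrum (or its scores in a reduced space), and the resulting labels can be mapped back to pixel positions for visualization.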
8.3.10 Hierarchical Clustering Analysis (HCA)
Hierarchical clustering [151] does not partition the data into clusters in a single step. Instead, a series of partitions or fusions takes place, such that a single collection of M objects is successively organized into different numbers of groups. The finest grouping assigns each sample to its own cluster, and the coarsest places the entire data set in one cluster. The data grouping can be visualized at any intermediate step. Hierarchical clustering can broadly be divided into agglomerative methods and divisive methods. In agglomerative methods, every step fuses objects into a successively smaller number of groups: we start from M groups, each containing a single object, and end when the entire data set is in one group. Divisive methods involve the successive separation of objects into finer groupings at every stage: here we start from a single group of M objects and end when each cluster contains a single object. The partitioning or fusion of groups is based on a measure of similarity (distance) between the objects. In agglomerative hierarchical clustering, for example, at every step we connect or fuse the two clusters that have the smallest distance between them. The distance between the newly formed cluster and all the other clusters is computed
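The agglomerative procedure just described can be sketched as a deliberately naive loop: start from M singleton clusters and repeatedly fuse the closest pair, recording each fusion. The single/complete-linkage distances used here are common choices for the inter-cluster distance, shown for illustration rather than prescribed by the text:

```python
import numpy as np

def agglomerative(X, linkage="single"):
    """Naive agglomerative clustering: begin with M singleton clusters
    and, at every step, fuse the two clusters with the smallest
    inter-cluster distance, until one cluster remains."""
    clusters = [[i] for i in range(len(X))]
    merges = []  # records (cluster_a, cluster_b, distance) per fusion
    while len(clusters) > 1:
        best = None
        # find the pair of clusters with the smallest distance
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                pair_d = [np.linalg.norm(X[i] - X[j])
                          for i in clusters[a] for j in clusters[b]]
                # single linkage: closest members; complete: farthest
                d = min(pair_d) if linkage == "single" else max(pair_d)
                if best is None or d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        merges.append((clusters[a], clusters[b], d))
        # fuse the two closest clusters into one
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return merges
```

The sequence of fusion distances in `merges` is exactly the information a dendrogram visualizes; cutting the sequence at a chosen distance yields any intermediate grouping between M clusters and one.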