where cc(x, y) is the Pearson's correlation coefficient given by

$$ cc(x, y) \;=\; \frac{\sum_{i=1}^{n} (x_i - \mu_x)(y_i - \mu_y)}{S_x S_y} \qquad (16.5) $$

with µ_x and µ_y the mean values of x and y, and S_x and S_y their standard deviations.
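For concreteness, the following is a minimal sketch, not taken from the chapter, of how cc(x, y) in Equation 16.5 might be computed for two voxel time courses; the function name and the synthetic data are assumptions.

```python
import numpy as np

def pearson_cc(x, y):
    """Pearson's correlation coefficient between two 1-D time courses."""
    dx = x - x.mean()                      # deviations from mu_x
    dy = y - y.mean()                      # deviations from mu_y
    # sum of cross-deviations normalized by the (unscaled) standard deviations
    return np.sum(dx * dy) / np.sqrt(np.sum(dx ** 2) * np.sum(dy ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal(128)                      # e.g., a reference time course
y = 0.7 * x + 0.3 * rng.standard_normal(128)      # a correlated voxel signal
print(pearson_cc(x, y))                           # agrees with np.corrcoef(x, y)[0, 1]
```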
16.3.2 Clustering Techniques
The clustering procedure consists of finding K clusters and assigning each element of the data set to a cluster. Each cluster may be identified by a cluster centroid, that is, a time course representative of the cluster. The goal is to find homogeneous clusters, i.e., minimizing within-group variability, and at the same time separable clusters, i.e., maximizing between-group dissimilarities. If we define the within-class inertia as
$$ I_W \;=\; \frac{1}{N} \sum_{k=1}^{K} \sum_{j \in C_k} d^2(x_j, c_k) \qquad (16.6) $$
and the between-class inertia as
$$ I_B \;=\; \frac{1}{N} \sum_{k=1}^{K} |C_k| \, d^2(c_k, c) \qquad (16.7) $$

where c is the center of gravity of the cluster centers c_k, this goal can be seen as minimizing the within-class inertia while maximizing the between-class inertia. Several clustering techniques have been applied in the analysis of functional data sets, such as hierarchical clustering [11,13-15], k-means [11], fuzzy clustering [16-21], and self-organizing maps [22,23]. A comparison can be found in Reference 24.
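As an illustration only (not from the chapter; the function name, NumPy, and the Euclidean choice for the distance d are assumptions), the two inertias can be computed for a given partition as follows.

```python
import numpy as np

def inertias(X, labels):
    """Within-class (Eq. 16.6) and between-class (Eq. 16.7) inertia of a partition.

    X      : (N, T) array, one time course per row
    labels : (N,) array of cluster indices
    """
    N = X.shape[0]
    # center of gravity of the cluster centers (size-weighted, i.e., the overall mean)
    c = X.mean(axis=0)
    I_W = 0.0
    I_B = 0.0
    for k in np.unique(labels):
        members = X[labels == k]
        c_k = members.mean(axis=0)                    # cluster centroid
        I_W += np.sum((members - c_k) ** 2)           # sum of d^2(x_j, c_k) over C_k
        I_B += len(members) * np.sum((c_k - c) ** 2)  # |C_k| * d^2(c_k, c)
    return I_W / N, I_B / N
```

With Euclidean distances and c taken as the overall mean of the data, the sum I_W + I_B equals the total inertia of the data set, which is why minimizing the former and maximizing the latter are two views of the same objective.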
These clustering techniques can first be divided into hierarchical methods and partitioning methods: the former do not require the number of clusters to be specified in advance, whereas the latter need this preliminary information.
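To make the distinction concrete, a partitioning method such as k-means must be given the number of clusters K up front; the following is a minimal sketch assuming scikit-learn and synthetic time courses (all names and values here are illustrative assumptions).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 120))      # 200 synthetic voxel time courses, 120 time points

K = 4                                    # partitioning methods require K in advance
km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(X)
labels = km.labels_                      # cluster assignment for each time course
centroids = km.cluster_centers_          # one representative time course per cluster
```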
16.3.2.1 Hierarchical Methods
These can be classified into agglomerative methods, which start with N clusters of the N objects and end with one cluster of N objects, and divisive methods, which use the inverse process. Both these iterative procedures result in a treelike structure called the dendrogram, illustrated in the sketch below. In an agglomerative approach, all the N different elements (the N voxel time series) are first classified into N different groups or clusters. The distance or dissimilarity matrix between the N elements is computed,
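A minimal sketch of the agglomerative procedure and the resulting dendrogram, assuming SciPy and synthetic voxel time series; the dissimilarity is taken here as one minus the correlation, an assumption consistent with Equation 16.5.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 120))        # 50 synthetic voxel time series

D = pdist(X, metric='correlation')        # pairwise dissimilarity: 1 - cc(x, y)
Z = linkage(D, method='average')          # agglomerative merging, N - 1 steps
labels = fcluster(Z, t=4, criterion='maxclust')   # cut the dendrogram into 4 clusters
# dendrogram(Z) would draw the treelike structure (requires matplotlib)
```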
 