deleting the rows and columns of the two clusters C i and C j that are merged;
a similar procedure is followed at the next level, where the distance is
computed between the merged cluster (say C m , formed from C i and C j ) and
each of the remaining clusters. This merging procedure continues until a
single cluster is obtained. One may cut the dendrogram at any level to
produce a different clustering. The method requires no a priori information
about the clusters and is useful when there is a large amount of data.
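The distance-matrix procedure above can be sketched in a few lines of Python. This is a minimal illustration on made-up 1-D data with single-linkage distance assumed (the text does not fix a linkage rule); the function names and the cut level are illustrative, not from the book.

```python
# Agglomerative (bottom-up) hierarchical clustering sketch: repeatedly merge
# the two closest clusters C_i and C_j, delete them, and add the merged
# cluster C_m, until the desired number of clusters remains (i.e. the
# dendrogram is "cut" at that level). Single linkage is an assumption here.

def single_link(a, b):
    """Single-linkage distance: minimum pairwise distance between clusters."""
    return min(abs(x - y) for x in a for y in b)

def agglomerative(points, num_clusters):
    clusters = [[p] for p in points]        # start: each point is its own cluster
    while len(clusters) > num_clusters:     # stop at the chosen dendrogram level
        # find the closest pair of clusters C_i, C_j
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = single_link(clusters[i], clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        merged = clusters[i] + clusters[j]  # the merged cluster C_m
        # delete the rows/columns of C_i and C_j, keep C_m instead
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters

print(agglomerative([0.0, 1.0, 10.0, 11.0], 2))
```

Cutting at `num_clusters = 2` groups the two well-separated pairs together; choosing a different level yields a different clustering, as the text notes.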
There is also divisive hierarchical clustering, which does the reverse: it
follows a top-down approach, starting with all objects in a single cluster
and splitting clusters recursively as one moves down the hierarchy. Clusters
are subdivided into smaller clusters until each object forms a cluster of
its own, or until a termination criterion is met. A cluster is split
according to the maximum Euclidean distance between the closest neighbouring
objects in the cluster. Divisive methods are not generally available and
have rarely been applied.
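The top-down procedure can likewise be sketched on toy 1-D data. This sketch picks the widest cluster to split and seeds the two halves from its two farthest points; that splitting rule is an illustrative simplification, not the book's exact criterion, and the data are made up.

```python
# Divisive (top-down) hierarchical clustering sketch: start with all points
# in one cluster and recursively split the cluster with the largest diameter,
# seeding the two sub-clusters from its two mutually farthest points.
# (Illustrative splitting rule; the text's criterion is stated only loosely.)

def diameter(c):
    """Largest pairwise distance within a cluster (0 for singletons)."""
    return max(abs(x - y) for x in c for y in c) if len(c) > 1 else 0.0

def split(c):
    # the two farthest points become seeds of the new sub-clusters
    a, b = max(((x, y) for x in c for y in c), key=lambda p: abs(p[0] - p[1]))
    left, right = [], []
    for p in c:
        (left if abs(p - a) <= abs(p - b) else right).append(p)
    return left, right

def divisive(points, num_clusters):
    clusters = [list(points)]               # start: one cluster of everything
    while len(clusters) < num_clusters:
        widest = max(clusters, key=diameter)  # cluster to subdivide next
        clusters.remove(widest)
        clusters.extend(split(widest))
    return clusters

print(divisive([0.0, 1.0, 10.0, 11.0], 2))
```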
7.4 Kernel Clustering
To improve the FCM algorithm for accurate detection of boundaries, an alter-
native approach transforms the input data to a high-dimensional feature
space using a non-linear mapping function, so that non-linearity in the
input data can be treated linearly in the feature space according to Mercer's
theorem. Direct computation in the high-dimensional feature space is
expensive, so Mercer kernels are used to make it practical. This is the
kernel method: the inner product in the clustering algorithm is replaced by
a kernel function, and clustering is performed on the transformed data
without ever computing the mapping explicitly.
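Mercer's idea can be checked on a small concrete case. For the polynomial kernel K(x, y) = (x · y)² in two dimensions, the explicit feature map is φ(x) = (x₁², √2 x₁x₂, x₂²), and the inner product in the 3-D feature space equals the kernel evaluated directly in the input space; the vectors below are illustrative.

```python
# Kernel trick on a toy case: the inner product <phi(x), phi(y)> in feature
# space F equals K(x, y) = (x . y)^2 computed in the input space X, so the
# non-linear map phi never has to be formed during clustering.
import math

def phi(x):
    """Explicit non-linear map into the 3-D feature space F."""
    return (x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2)

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def poly_kernel(x, y):
    """Polynomial Mercer kernel: the same inner product without visiting F."""
    return dot(x, y) ** 2

x, y = (1.0, 2.0), (3.0, 0.5)
assert math.isclose(dot(phi(x), phi(y)), poly_kernel(x, y))
print(poly_kernel(x, y))  # -> 16.0
```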
Let us define a non-linear map ϕ : x → ϕ( x ) ∈ F , where x ∈ X , X denotes
the data space and F is the transformed feature space. With the
incorporation of the kernel, the objective function in the feature space is
rewritten as
$$J_m^{\phi}(U, V) = \sum_{i=1}^{n} \sum_{k=1}^{c} u_{ik}^{m} \,\bigl\| \phi(x_i) - \phi(v_k) \bigr\|^{2} \qquad (7.4)$$
where

$$\bigl\| \phi(x_i) - \phi(v_k) \bigr\|^{2} = \langle \phi(x_i), \phi(x_i) \rangle + \langle \phi(v_k), \phi(v_k) \rangle - 2\,\langle \phi(x_i), \phi(v_k) \rangle \qquad (7.5)$$
Expressing K ( x i , v k ) = 〈φ( x i ), φ( v k )〉 as the kernel function, the
distance in Equation 7.5 becomes K ( x i , x i ) + K ( v k , v k ) − 2 K ( x i , v k ),
so the objective function can be evaluated using kernel values alone.