Table 2.9 The clustering result of the car data set by IFCM

    Instance          Cluster ID
    y_4, y_9          1
    y_5, y_10         2
    y_2, y_3, y_7     3
    y_1, y_6, y_8     No significant membership of any cluster
The iterative process stops at K = 4, and the resulting membership degree matrix is

$$U^{(4)} = \begin{pmatrix}
0.381 & 0.085 & 0.105 & 0.942 & 0.090 & 0.294 & 0.059 & 0.132 & 0.905 & 0.077 \\
0.336 & 0.071 & 0.097 & 0.029 & 0.819 & 0.422 & 0.075 & 0.298 & 0.050 & 0.817 \\
0.283 & 0.871 & 0.798 & 0.029 & 0.090 & 0.284 & 0.867 & 0.569 & 0.045 & 0.106
\end{pmatrix}$$

where row $i$ holds the membership degrees of the ten instances $y_1, \ldots, y_{10}$ in Cluster $i$.
According to $U^{(4)}$, we get the cluster validation measures $V_{PC}$ and $V_{CE}$:

$$V_{PC} = \frac{1}{10}\sum_{i=1}^{3}\sum_{j=1}^{10} u_{ij}^{2} = 0.638, \qquad V_{CE} = -\frac{1}{10}\sum_{i=1}^{3}\sum_{j=1}^{10} u_{ij}\log u_{ij} = 0.947$$
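As a quick check, both measures can be recomputed directly from the matrix above. The following is a minimal NumPy sketch (not from the source); it assumes the base-2 logarithm in $V_{CE}$, and since the displayed entries of $U^{(4)}$ are rounded to three decimals, the results only approximately match the published values.

```python
import numpy as np

# Membership degree matrix U(4): row i holds the membership degrees of
# instances y1..y10 in Cluster i (transcribed from the matrix above).
U = np.array([
    [0.381, 0.085, 0.105, 0.942, 0.090, 0.294, 0.059, 0.132, 0.905, 0.077],
    [0.336, 0.071, 0.097, 0.029, 0.819, 0.422, 0.075, 0.298, 0.050, 0.817],
    [0.283, 0.871, 0.798, 0.029, 0.090, 0.284, 0.867, 0.569, 0.045, 0.106],
])

n = U.shape[1]  # number of instances

v_pc = (U ** 2).sum() / n            # partition coefficient: higher = crisper
v_ce = -(U * np.log2(U)).sum() / n   # classification entropy: lower = crisper

print(f"V_PC = {v_pc:.3f}")  # about 0.639 vs. the published 0.638
print(f"V_CE = {v_ce:.3f}")  # about 0.95 vs. the published 0.947
```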
If we further assume that $u_{ij} \geq 0.75$ implies $A_j \in C_i$ ($1 \leq j \leq 10$, $1 \leq i \leq 3$), where $C_i$ denotes Cluster $i$, then we have the clusters as follows (see Table 2.9) (Xu and Wu 2010).
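To make the assignment rule concrete, here is a short sketch (an illustration, not the source's code) that applies the 0.75 threshold to the matrix `U` defined in the previous snippet and reproduces the groupings of Table 2.9:

```python
# Apply the 0.75 threshold: an instance joins Cluster i only when its largest
# membership degree reaches the threshold; otherwise it stays unassigned.
THRESHOLD = 0.75
for j in range(U.shape[1]):          # instances y1..y10
    i = int(np.argmax(U[:, j]))      # cluster with the largest membership
    if U[i, j] >= THRESHOLD:
        print(f"y{j + 1} -> Cluster {i + 1}")
    else:
        print(f"y{j + 1} -> no significant membership of any cluster")
```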
Next, we pay special attention to the convergence of Algorithm 2.7 on the car data set. Figure 2.3 (Xu and Wu 2010) shows the movements of the objective function values $J_m(U, V)$ along the iterations.
As can be seen in Fig. 2.3, Algorithm 2.7 decreases the objective function value monotonically by alternating between its two phases: updating the membership degrees via Eq. (2.154) and updating the prototypical IFSs via Eq. (2.156).
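This two-phase structure is the same alternating optimization used by classical fuzzy c-means. The sketch below (an illustration, not the source's implementation) uses the standard FCM membership and prototype updates on ordinary feature vectors as stand-ins for the IFS-specific Eqs. (2.154) and (2.156); the function name and parameters are assumptions.

```python
import numpy as np

def fcm_like(X, c=3, m=2.0, tol=1e-5, max_iter=100, seed=0):
    """Alternating two-phase loop in the spirit of Algorithm 2.7.

    Classical FCM updates stand in for Eqs. (2.154)/(2.156);
    X is an (n_instances, n_features) array.
    """
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), size=c, replace=False)]  # initial prototypes
    U_old = None
    for _ in range(max_iter):
        # Phase 1 (cf. Eq. (2.154)): memberships from distances to prototypes.
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12  # (c, n)
        U = 1.0 / ((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))).sum(axis=1)
        # Phase 2 (cf. Eq. (2.156)): prototypes as membership-weighted means.
        W = U ** m
        V = (W @ X) / W.sum(axis=1, keepdims=True)
        # Stop once the memberships barely change; J_m then barely changes too.
        if U_old is not None and np.abs(U - U_old).max() < tol:
            break
        U_old = U
    return U, V
```

Each pass through the loop cannot increase $J_m(U, V)$, which is why the curve in Fig. 2.3 descends until the memberships stabilize.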
If we utilize Algorithm 2.2 to cluster this car data set, the results are shown in
Table 2.10 (Xu and Wu 2010).
By comparing this result from Algorithm 2.2 with that of Algorithm 2.7, we can see that Algorithm 2.2 produces only "crisp" clusters; that is, each instance of the car data set is assigned to exactly one cluster when Algorithm-IFSC is used. Algorithm 2.7, however, is different: by using the membership degree matrix $U$, it can produce "overlapped" clusters in which the instances have different membership degrees. This is noteworthy, since in many real-world applications it makes sense for one instance to share some common ground with more than one cluster.