Step 3: If C is an equivalent association matrix, then we construct a λ-cutting matrix C_λ = (λc_ij)_{m×m} of C by using Eq. (2.87); otherwise, we compose the association matrix C by using Eq. (2.86) to derive an equivalent association matrix C̄, and then construct a λ-cutting matrix C̄_λ = (λc̄_ij)_{m×m} of C̄ by using Eq. (2.87).
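As a quick illustration, here is a minimal sketch of the λ-cutting step in Python, assuming Eq. (2.87) is the usual thresholding rule (an entry of the cutting matrix is 1 when c_ij ≥ λ and 0 otherwise); the matrix C_bar and the confidence level lam below are hypothetical inputs:

```python
import numpy as np

def lambda_cut(C_bar: np.ndarray, lam: float) -> np.ndarray:
    """Threshold an equivalent association matrix into a 0-1 matrix.

    Assumes Eq. (2.87) is the usual rule: the (i, j) entry is 1 if
    c_ij >= lam, and 0 otherwise.
    """
    return (C_bar >= lam).astype(int)

# Hypothetical 3x3 equivalent association matrix.
C_bar = np.array([[1.0, 0.8, 0.4],
                  [0.8, 1.0, 0.4],
                  [0.4, 0.4, 1.0]])
print(lambda_cut(C_bar, lam=0.5))
# [[1 1 0]
#  [1 1 0]
#  [0 0 1]]
```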
Step 4: If all elements of the i-th line (column) in C_λ (or C̄_λ) are the same as the corresponding elements of the j-th line (column) in C_λ (or C̄_λ), then the IVIFSs A_i and A_j are of the same type. By this principle, we can classify all these m IVIFSs A_j (j = 1, 2, ..., m).
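Step 4 can be read directly off the 0-1 cutting matrix: two IVIFSs belong to the same class exactly when their lines (rows) coincide. A minimal sketch of this grouping, continuing the hypothetical example above:

```python
import numpy as np

def classify_by_rows(C_lam: np.ndarray) -> list[list[int]]:
    """Group indices whose rows in the lambda-cutting matrix are identical."""
    groups: dict[bytes, list[int]] = {}
    for i, row in enumerate(C_lam):
        # Identical rows hash to the same key, hence the same class.
        groups.setdefault(row.tobytes(), []).append(i)
    return list(groups.values())

C_lam = np.array([[1, 1, 0],
                  [1, 1, 0],
                  [0, 0, 1]])
print(classify_by_rows(C_lam))  # [[0, 1], [2]] -> A_1 and A_2 together, A_3 alone
```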
Example 2.2 (Xu et al. 2008) We conduct experiments on both real-world and simulated data sets in order to demonstrate the effectiveness of the proposed clustering algorithm for IVIFSs.
Below we first introduce the experimental tool and the experimental data sets:
(1) Experimental tool. In the experiments, we use Algorithm 2.2 as a tool implemented by ourselves in MATLAB. Note that if we let π(x) = 0 for any x ∈ X, then Algorithm 2.2 reduces to the traditional algorithm for clustering fuzzy sets (denoted by Algorithm-FSC). Therefore, we can use this single implementation to compare the performance of Algorithm 2.2 and Algorithm-FSC.
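The reduction is straightforward to check numerically: for an IFS the hesitancy degree is π(x) = 1 − μ(x) − v(x), so requiring π(x) = 0 forces v(x) = 1 − μ(x), leaving a single free membership degree per element, i.e., an ordinary fuzzy set. A small illustration with hypothetical values:

```python
# Each element as (mu, v); the hesitancy degree is pi = 1 - mu - v.
ifs = [(0.6, 0.3), (0.2, 0.7), (0.5, 0.1)]

for mu, v in ifs:
    pi = 1.0 - mu - v
    print(f"mu={mu:.1f}  v={v:.1f}  pi={pi:.1f}")

# Setting pi = 0 forces v = 1 - mu, so only mu remains free:
fuzzy = [(mu, 1.0 - mu) for mu, _ in ifs]  # ordinary fuzzy memberships
print(fuzzy)
```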
(2) Experimental data sets. We use two kinds of data in our experiments. The car data set contains the information of ten new cars to be classified in the Guangzhou car market in Guangdong, China. Let y_i (i = 1, 2, ..., 10) be the cars, each of which is described by six attributes: (1) G_1: Fuel economy; (2) G_2: Aerodynamic degree; (3) G_3: Price; (4) G_4: Comfort; (5) G_5: Design; and (6) G_6: Safety. The weight vector of these attributes is w = (0.15, 0.10, 0.30, 0.20, 0.15, 0.10)^T. The characteristics of the ten new cars under the six attributes are represented by IFSs, as shown in Table 2.2 (Xu et al. 2008).
We also use a simulated data set for the purpose of comparison, and assume that there are three classes in it, denoted by C_i (i = 1, 2, 3). The
number of IFSs in each class is exactly the same: 300. The differences of the IFSs
in different classes lie in the following aspects: (1) The IFSs in C 1 have relatively
high and positive scores; (2) the IFSs in C 2 have relatively high and negative scores;
and (3) the IFSs in C 3 have relatively high and uncertain scores. Along this line, we
generate the simulated data set as follows: (1) μ(x) ∼ U(0.7, 1) and v(x) + π(x) ∼ U(0, 1 − μ(x)), for any x ∈ C_1; (2) v(x) ∼ U(0.7, 1) and μ(x) + π(x) ∼ U(0, 1 − v(x)), for any x ∈ C_2; and (3) π(x) ∼ U(0.7, 1) and μ(x) + v(x) ∼ U(0, 1 − π(x)), for any x ∈ C_3, where U(a, b) means the uniform distribution on the interval [a, b]. By doing
so, we generate a simulated data set which consists of 900 IFSs from 3 classes.
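A sketch of this generation scheme is given below. Note that the rules above fix one degree on U(0.7, 1) and constrain only the sum of the other two; how that sum is split between the two remaining degrees is not specified, so a uniform random split is assumed here. The class sizes (300 each) follow the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_class(dominant: int, n: int = 300) -> np.ndarray:
    """Generate n IFSs (mu, v, pi) whose `dominant` degree is U(0.7, 1).

    dominant: 0 -> mu (class C1), 1 -> v (class C2), 2 -> pi (class C3).
    The sum of the other two degrees is U(0, 1 - dominant degree); the
    split of that sum between them is an assumption, not given in the text.
    """
    out = np.empty((n, 3))
    big = rng.uniform(0.7, 1.0, n)       # the dominant degree
    rest = rng.uniform(0.0, 1.0 - big)   # sum of the two remaining degrees
    share = rng.uniform(0.0, 1.0, n)     # assumed uniform split of that sum
    others = [i for i in range(3) if i != dominant]
    out[:, dominant] = big
    out[:, others[0]] = share * rest
    out[:, others[1]] = (1.0 - share) * rest
    return out

data = np.vstack([generate_class(d) for d in (0, 1, 2)])  # 900 IFSs, 3 classes
print(data.shape)              # (900, 3)
print(data.sum(axis=1).max())  # each (mu, v, pi) row sums to at most 1
```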
Now we utilize Algorithm 2.2 to cluster the ten new cars y_i (i = 1, 2, ..., 10),
which involves the following steps (Xu et al. 2008):