Table 1.8  Fuzzy decision matrix F^(1)

        y1     y2     y3     y4
G1     0.4    0.5    0.2    0.3
G2     0.5    0.6    0.7    0.6
G3     0.7    0.2    0.4    0.5
G4     0.3    0.5    0.5    0.8
Table 1.9  Fuzzy decision matrix F^(2)

        y1     y2     y3     y4
G1     0.3    0.4    0.1    0.2
G2     0.3    0.5    0.5    0.7
G3     0.5    0.2    0.4    0.4
G4     0.4    0.6    0.4    0.7
Table 1.10  Fuzzy decision matrix F^(3)

        y1     y2     y3     y4
G1     0.5    0.6    0.3    0.2
G2     0.5    0.7    0.6    0.5
G3     0.6    0.3    0.3    0.9
G4     0.3    0.5    0.6    0.6
S(r_1) = 0.4642 − 0.4105 = 0.0537,   S(r_2) = 0.4857 − 0.3967 = 0.0890
S(r_3) = 0.4322 − 0.4286 = 0.0036,   S(r_4) = 0.5710 − 0.2464 = 0.3246
Since S(r_4) > S(r_2) > S(r_1) > S(r_3), then by Xu and Yager (2006)'s ranking method, we have r_4 > r_2 > r_1 > r_3, and thus y_4 ≻ y_2 ≻ y_1 ≻ y_3. Therefore, y_4 is the best software package.
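The ranking step above can be sketched in code. This is a minimal illustration, not the book's implementation: the pairs (μ_i, ν_i) are the aggregated membership and non-membership degrees quoted in the text for the four alternatives, and S(r) = μ − ν is Xu and Yager (2006)'s score function.

```python
# Aggregated intuitionistic fuzzy values r_i = (mu_i, nu_i) for the
# four software packages y1-y4, as quoted in the text.
r = {
    "y1": (0.4642, 0.4105),
    "y2": (0.4857, 0.3967),
    "y3": (0.4322, 0.4286),
    "y4": (0.5710, 0.2464),
}

def score(mu, nu):
    # Score function of Xu and Yager (2006): S(r) = mu - nu.
    return mu - nu

scores = {y: score(mu, nu) for y, (mu, nu) in r.items()}

# Rank the alternatives by decreasing score.
ranking = sorted(scores, key=scores.get, reverse=True)

print(ranking)                                   # ['y4', 'y2', 'y1', 'y3']
print({y: round(s, 4) for y, s in scores.items()})
```

The largest score identifies the best alternative, reproducing y_4 ≻ y_2 ≻ y_1 ≻ y_3.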
In the illustrative example, if we use fuzzy sets, each of which is characterized only by membership information, to express the experts' evaluations, then Tables 1.5, 1.6 and 1.7 can be rewritten as Tables 1.8, 1.9 and 1.10 (Xu 2011).
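For concreteness, the membership-only decision matrices of Tables 1.8–1.10 can be represented directly as nested lists (a plain-data sketch; the dictionary key k is just an index for expert k's matrix F^(k)):

```python
# Fuzzy decision matrices F^(1), F^(2), F^(3) from Tables 1.8-1.10:
# rows are the attributes G1-G4, columns the alternatives y1-y4,
# and each entry is a membership degree (non-membership and
# hesitancy degrees are taken as zero).
F = {
    1: [[0.4, 0.5, 0.2, 0.3],
        [0.5, 0.6, 0.7, 0.6],
        [0.7, 0.2, 0.4, 0.5],
        [0.3, 0.5, 0.5, 0.8]],
    2: [[0.3, 0.4, 0.1, 0.2],
        [0.3, 0.5, 0.5, 0.7],
        [0.5, 0.2, 0.4, 0.4],
        [0.4, 0.6, 0.4, 0.7]],
    3: [[0.5, 0.6, 0.3, 0.2],
        [0.5, 0.7, 0.6, 0.5],
        [0.6, 0.3, 0.3, 0.9],
        [0.3, 0.5, 0.6, 0.6]],
}

# Example lookup: expert 3's evaluation of alternative y4 under attribute G3.
print(F[3][2][3])  # 0.9
```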
To get the optimal alternative, the following steps are involved (Xu 2011):
Step 1  Utilize Eqs. (1.51)–(1.54) to calculate the weights ξ_ij^(k) (i, j = 1, 2, 3, 4; k = 1, 2, 3) associated with the attribute values r_ij^(k) (i, j = 1, 2, 3, 4; k = 1, 2, 3), which are contained in the matrices ξ^(k) = (ξ_ij^(k))_{4×4} (k = 1, 2, 3), respectively (here we assume that all the non-membership degrees and the hesitancy degrees are zero):
           ⎛ 0.3911  0.3911  0.3911  0.4046 ⎞
ξ^(1) =    ⎜ 0.3911  0.3911  0.4000  0.3911 ⎟
           ⎜ 0.4000  0.3954  0.3954  0.3802 ⎟
           ⎝ 0.3954  0.3954  0.3911  0.4000 ⎠