Table 11.2 $GN_1(x)$, $BN_1(x)$, $N_1(x)$, $N_{1,C_1}(x)$ and $N_{1,C_2}(x)$ for the instances shown in Fig. 11.6

| Instance | $GN_1(x)$ | $BN_1(x)$ | $N_1(x)$ | $N_{1,C_1}(x)$ | $N_{1,C_2}(x)$ |
|----------|-----------|-----------|----------|----------------|----------------|
| 1        | 1         | 0         | 1        | 1              | 0              |
| 2        | 2         | 0         | 2        | 2              | 0              |
| 3        | 2         | 0         | 2        | 2              | 0              |
| 4        | 0         | 0         | 0        | 0              | 0              |
| 5        | 0         | 0         | 0        | 0              | 0              |
| 6        | 0         | 2         | 2        | 0              | 2              |
| 7        | 1         | 0         | 1        | 0              | 1              |
| 8        | 0         | 0         | 0        | 0              | 0              |
| 9        | 1         | 1         | 2        | 1              | 1              |
| 10       | 0         | 0         | 0        | 0              | 0              |
| Mean     | $\mu_{GN_1(x)} = 0.7$ | $\mu_{BN_1(x)} = 0.3$ | $\mu_{N_1(x)} = 1$ | | |
| Std.     | $\sigma_{GN_1(x)} = 0.823$ | $\sigma_{BN_1(x)} = 0.675$ | $\sigma_{N_1(x)} = 0.943$ | | |
Using $BN_1(x_9) = 1$ together with the mean and standard deviation of $BN_1(x)$ from Table 11.2, the vote weight of instance 9 is

$$w_9 = e^{-h_b(x_9)} = e^{-\frac{BN_1(x_9) - \mu_{BN_1(x)}}{\sigma_{BN_1(x)}}} = e^{-\frac{1 - 0.3}{0.675}} = 0.3545.$$
The weight of instance 6 follows analogously from $BN_1(x_6) = 2$: $w_6 = e^{-\frac{2 - 0.3}{0.675}} \approx 0.0806$. As $w_9 > w_6$, instance 11 will be classified as a rectangle, according to the label of instance 9.
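To make this weighting concrete, the following is a minimal Python sketch of the hw-$k$NN vote weights under the definitions above; the counts are taken from Table 11.2, while the function and variable names are illustrative choices of this sketch:

```python
import math

# Bad 1-occurrence counts BN_1(x) for instances 1..10 of Table 11.2
bad_hubness = [0, 0, 0, 0, 0, 2, 0, 0, 1, 0]

n = len(bad_hubness)
mu = sum(bad_hubness) / n                                    # 0.3
sigma = math.sqrt(sum((b - mu) ** 2 for b in bad_hubness)
                  / (n - 1))                                 # ~0.675 (sample std.)

def hw_knn_weight(bn_k, mu=mu, sigma=sigma):
    """Vote weight w_i = exp(-h_b(x_i)), where h_b(x_i) is the
    standardized bad hubness (BN_k(x_i) - mu) / sigma."""
    return math.exp(-(bn_k - mu) / sigma)

w9 = hw_knn_weight(bad_hubness[8])   # BN_1(x_9) = 1  ->  ~0.3545
w6 = hw_knn_weight(bad_hubness[5])   # BN_1(x_6) = 2  ->  ~0.0806
print(w9 > w6)                       # True: instance 9 outvotes instance 6
```

Note that the sketch uses the sample standard deviation (denominator $n-1$), which reproduces the value $\sigma_{BN_1(x)} = 0.675$ reported in Table 11.2.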
From the example we can see that in hw-$k$NN all neighbors vote by their own label. As this may be disadvantageous in some cases [49], in the algorithms considered below the neighbors do not always vote by their own labels, which is a major difference from hw-$k$NN.
11.5.2 h-FNN: Hubness-Based Fuzzy Nearest Neighbor
Consider the relative class hubness $u_C(x_i)$ of each nearest neighbor $x_i$:

$$u_C(x_i) = \frac{N_{k,C}(x_i)}{N_k(x_i)}. \qquad (11.10)$$
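As a concrete illustration (a worked instance added here, using the 1-occurrence counts of instance 9 from Table 11.2 with $k = 1$):

$$u_{C_1}(x_9) = \frac{N_{1,C_1}(x_9)}{N_1(x_9)} = \frac{1}{2}, \qquad u_{C_2}(x_9) = \frac{N_{1,C_2}(x_9)}{N_1(x_9)} = \frac{1}{2}.$$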
The above $u_C(x_i)$ can be interpreted as the fuzziness of the event that $x_i$ occurred as one of the neighbors, where $C$ denotes one of the classes, $C \in \mathcal{C}$. Integrating fuzziness as a measure of uncertainty is usual in $k$-nearest neighbor methods, and h-FNN [54] uses the relative class hubness when assigning class-conditional vote weights. The approach is based on the fuzzy $k$-nearest neighbor voting framework [27]. Therefore, the probability of each class $C$ for the instance $x_*$ to be classified is estimated as:
$$u_C(x_*) = \frac{\sum_{x_i \in N_k(x_*)} u_C(x_i)}{\sum_{x_i \in N_k(x_*)} \sum_{C \in \mathcal{C}} u_C(x_i)}. \qquad (11.11)$$
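A short Python sketch of this voting scheme follows. It is a simplified illustration, not the full h-FNN algorithm: instances that never occur as neighbors ($N_k(x_i) = 0$) simply contribute no votes here, which is an assumption of this sketch, and all names are hypothetical.

```python
from collections import defaultdict

def relative_class_hubness(n_kc, n_k):
    """u_C(x_i) = N_{k,C}(x_i) / N_k(x_i), Eq. (11.10).
    Contributes 0 for anti-hubs (N_k = 0) -- an assumption of this sketch."""
    return n_kc / n_k if n_k > 0 else 0.0

def h_fnn_vote(neighbor_counts, classes):
    """Estimate u_C(x_*) for every class C via Eq. (11.11).

    neighbor_counts: one dict per nearest neighbor x_i of x_*,
    mapping each class C to the count N_{k,C}(x_i)."""
    scores = defaultdict(float)
    for counts in neighbor_counts:
        n_k = sum(counts.values())          # N_k(x_i) = sum over classes
        for c in classes:
            scores[c] += relative_class_hubness(counts.get(c, 0), n_k)
    total = sum(scores.values())            # denominator of Eq. (11.11)
    return {c: scores[c] / total for c in classes} if total else {}

# Hypothetical query with k = 2 neighbors whose class-conditional
# occurrence counts resemble instances 9 and 2 of Table 11.2:
neighbors = [{"C1": 1, "C2": 1},
             {"C1": 2, "C2": 0}]
print(h_fnn_vote(neighbors, ["C1", "C2"]))  # {'C1': 0.75, 'C2': 0.25}
```

The predicted class is then the one with the largest estimated $u_C(x_*)$.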