As a numeric example, let us consider Fig. 4.1c. The equation of the border is y = 2x for x ∈ [0, 1] and y = x/8 + 15/8 for x ∈ [1, 9]. The cpdfs are taken constant along the boundary. It turns out that

EDBFM = \begin{pmatrix} 1.913 & -1.887 \\ -1.887 & 8.385 \end{pmatrix},

λ1 = 1.4, λ2 = 8.89, v1 = [0.965, 0.261], v2 = [−0.261, 0.965]. The ranking method leads to the following weights: w1 = 3.68, w2 = 8.95; hence the real feature y turns out to be more discriminant than x, as the figure suggests, since the first piece of boundary is shorter than the second one, which is almost parallel to the x-axis.
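These numbers can be checked with a short script. The sketch below assumes, for illustration only, that with constant cpdfs each straight boundary segment contributes its arc length times the outer product of its unit normal to the EDBFM, and that the ranking weight of feature i is w_i = Σ_j λ_j |v_j[i]|; these are assumptions consistent with the values quoted above, not the chapter's exact formulas.

```python
import numpy as np

# Two straight pieces of the boundary of Fig. 4.1c:
# y = 2x on [0, 1] and y = x/8 + 15/8 on [1, 9].
segments = [((0.0, 0.0), (1.0, 2.0)),
            ((1.0, 2.0), (9.0, 3.0))]

edbfm = np.zeros((2, 2))
for (x0, y0), (x1, y1) in segments:
    t = np.array([x1 - x0, y1 - y0])                 # tangent vector
    n = np.array([t[1], -t[0]]) / np.linalg.norm(t)  # unit normal
    edbfm += np.linalg.norm(t) * np.outer(n, n)      # length-weighted N N^T

lams, vecs = np.linalg.eigh(edbfm)  # eigenvalues in ascending order
w = np.abs(vecs) @ lams             # assumed ranking weights per feature
print(np.round(edbfm, 3))           # [[ 1.913 -1.887] [-1.887  8.385]]
print(lams, w)
```

Within rounding this reproduces the matrix, eigenvalues, and weights quoted above.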
4.4.2 The Algorithm
The presented method is based on the calculus of the EDBFM, which in turn requires knowledge of the decision boundary. In order to apply it to real cases, where the decision boundary, as well as the cpdfs, are typically unknown, non-parametric approaches will be considered. In non-parametric approaches we are given only a set of instances of the true phenomenon (training data), and no assumption on the form of the cpdfs is made. In this work we propose the use of Labeled Vector Quantizer (LVQ) architectures and the Bayes Vector Quantizer (BVQ) learning algorithm. The reason for the choice of BVQ is twofold: (1) it has been shown to drive an LVQ toward a (locally) optimal approximation of the Bayes boundary [10]; (2) the approximation is piecewise linear, thus simplifying the calculus of the normal vectors.
A Euclidean nearest neighbor Vector Quantizer (VQ) of dimension N and order Q is a function

\Omega : \mathbb{R}^N \to M, \quad M = \{m_1, m_2, \ldots, m_Q\}, \quad m_i \in \mathbb{R}^N, \; m_i \neq m_j,

which defines a partition of \mathbb{R}^N into Q regions V_1, V_2, \ldots, V_Q, such that

V_i = \{ x \in \mathbb{R}^N : \|x - m_i\|^2 < \|x - m_j\|^2, \; \forall j \neq i \}. \quad (4.6)
Elements of M are called code vectors. The region V_i defined by (4.6) is called the Voronoi region of the code vector m_i. Note that the Voronoi region is completely defined by M. In particular, the boundary of the Voronoi region V_i is defined by the intersection of a finite set of hyperplanes S_{i,j} with equation

(m_i - m_j) \cdot \left( x - \frac{m_i + m_j}{2} \right) = 0,

where m_j is a neighbor code vector to m_i. The definition of normal vectors to these hyperplanes is thus straightforward: N_{ij} = m_i − m_j (see Fig. 4.2).
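The Voronoi assignment of Eq. (4.6) and the hyperplane normals N_ij are straightforward to code. A minimal sketch (the codebook and test point below are made up for illustration):

```python
import numpy as np

# Illustrative codebook: Q = 3 code vectors in R^2.
M = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])

def quantize(x, M):
    """Index of the Voronoi region V_i containing x (nearest code vector)."""
    return int(np.argmin(((M - x) ** 2).sum(axis=1)))

def hyperplane(i, j, M):
    """Normal N_ij = m_i - m_j and midpoint of the separating hyperplane S_ij."""
    return M[i] - M[j], (M[i] + M[j]) / 2

x = np.array([1.2, 0.1])
print(quantize(x, M))  # -> 1: x falls in the Voronoi region of m_1 = (2, 0)
n, mid = hyperplane(0, 1, M)
# S_01 is the set of points p with (m_0 - m_1) . (p - mid) = 0;
# its midpoint is equidistant from m_0 and m_1.
print(n, mid)
```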
By associating with each code vector a class we can define a decision rule. A Labeled Vector Quantizer (LVQ) is a pair LVQ = ⟨Ω, L⟩, where Ω : \mathbb{R}^N → M is a vector quantizer, and L : M → Λ is a labeling function, assigning to each code vector a class label.
 