In contrast to classical RBF methods, in this method each center corresponds to a support vector. Furthermore, all four types of parameters are chosen so as to minimize the bound on the probability of test error.
8.5.3 Multi-layer Perceptron
The multi-layer perceptron defines its inner-product kernel function using the sigmoid function. The number N of hidden units (the number of support vectors) is found automatically. The sigmoid kernel satisfies the Mercer conditions as follows:
K(x_i, x_j) = tanh(γ x_i^T x_j + Θ)    (8.41)
Using this kernel, we avoid the local-minima problem that plagues neural networks.
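As a minimal sketch of the sigmoid kernel of Eq. (8.41), the code below builds the Gram matrix K(x_i, x_j) = tanh(γ x_i^T x_j + Θ) over a small random sample; the particular values of γ and Θ are illustrative assumptions, not taken from the text.

```python
import numpy as np

def sigmoid_kernel(X, gamma=0.5, theta=-1.0):
    """Gram matrix of the tanh (MLP) kernel over the rows of X.

    gamma and theta are illustrative choices; Eq. (8.41) leaves
    them as free parameters of the kernel.
    """
    return np.tanh(gamma * (X @ X.T) + theta)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))   # 5 samples, 3 features
K = sigmoid_kernel(X)
print(K.shape)                    # (5, 5)
print(np.allclose(K, K.T))        # True: symmetric, like any inner-product kernel
```

In a kernel machine, this Gram matrix would be passed directly to the quadratic-programming solver in place of explicit hidden-unit activations.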
8.5.4 Dynamic Kernel Function
Amari and Wu proposed a method of modifying a kernel function to improve the
performance of a support vector machine classifier (Amari, 1999). This is based
on the structure of the Riemannian geometry induced by the kernel function.
Let U denote the feature mapping, U = Φ(x); then

dU = Σ_i (∂Φ(x)/∂x_i) dx_i

|dU|² = Σ_{i,j} g_ij(x) dx_i dx_j
where g_ij(x) = (∂Φ(x)/∂x_i) · (∂Φ(x)/∂x_j). The n×n positive-definite matrix (g_ij(x)) is the Riemannian metric tensor induced in S, and

ds² = Σ_{i,j} g_ij(x) dx_i dx_j

is the Riemannian distance. The volume form in a Riemannian space is defined as

dv = √g(x) dx_1 ⋯ dx_n

where g(x) = det(g_ij(x)). The factor √g(x) represents how a local area is magnified in U under the mapping Φ(x). Therefore, we call it the magnification
factor. Since k(x, z) = Φ(x) · Φ(z), we can get
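To make the induced metric concrete, the sketch below computes g_ij(x) for the homogeneous polynomial kernel k(x, z) = (x·z)² in two dimensions, whose explicit feature map Φ(x) = (x₁², √2 x₁x₂, x₂²) is standard. It evaluates the metric two ways: from the Jacobian of Φ as g = JᵀJ (the definition above), and from mixed second derivatives of the kernel at z = x, which is the standard identity for kernel-induced metrics (an assumption beyond this excerpt). The example kernel, feature map, and test point are illustrative, not from the text.

```python
import numpy as np

def phi(x):
    """Explicit feature map of k(x, z) = (x . z)^2 in 2-D (textbook example)."""
    x1, x2 = x
    return np.array([x1**2, np.sqrt(2) * x1 * x2, x2**2])

def kernel(x, z):
    return float(np.dot(x, z)) ** 2

def metric_from_feature_map(x, eps=1e-6):
    """g_ij(x) = (dPhi/dx_i) . (dPhi/dx_j), i.e. g = J^T J with J the Jacobian."""
    n = len(x)
    J = np.zeros((len(phi(x)), n))
    for i in range(n):
        e = np.zeros(n); e[i] = eps
        J[:, i] = (phi(x + e) - phi(x - e)) / (2 * eps)   # central difference
    return J.T @ J

def metric_from_kernel(x, eps=1e-5):
    """g_ij(x) = d^2 k(x, z) / dx_i dz_j evaluated at z = x,
    approximated with a mixed central difference."""
    n = len(x)
    g = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            g[i, j] = (kernel(x + ei, x + ej) - kernel(x + ei, x - ej)
                       - kernel(x - ei, x + ej) + kernel(x - ei, x - ej)) / (4 * eps**2)
    return g

x = np.array([0.7, -0.3])
g1 = metric_from_feature_map(x)
g2 = metric_from_kernel(x)
print(np.allclose(g1, g2, atol=1e-3))        # True: both routes give the same metric
print(np.sqrt(np.linalg.det(g1)) > 0)        # True: magnification factor sqrt(det g)
```

The agreement of the two routes is the practical point of the identity: the metric, and hence the magnification factor √det g(x), can be computed directly from the kernel without ever constructing Φ explicitly.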