vector of neuron $j$. The learning rate $\alpha$ is a constant used to control the rate of convergence, and $d_{ij}(t)$ is a time-varying function used to control the influence region of a neuron over other neurons in its neighborhood.
$d_{ij}(t)$ is defined as
$$
d_{ij}(t) =
\begin{cases}
1 & \text{if } d_{ij} \le R(t) \\
0 & \text{otherwise,}
\end{cases}
\qquad (7.18)
$$
where $R(t)$ is a positive definite monotonically decreasing function and the minimum value of $R(t)$ is 1 for the network to retain its organizing capability.
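As a minimal illustration (not from the source), the following Python sketch implements a neighborhood indicator of the form of Eq. (7.18); the lattice-distance metric and the linearly decaying radius $R(t)$, clamped at its minimum value of 1, are assumptions of the sketch rather than details given in the text.

```python
import numpy as np

def lattice_distance(pos_i, pos_j):
    # Assumed metric: Euclidean distance between two lattice positions.
    return np.linalg.norm(np.asarray(pos_i) - np.asarray(pos_j))

def radius(t, r0=5.0, decay=0.01):
    # Hypothetical monotonically decreasing R(t), never allowed below 1
    # so the network retains its organizing capability.
    return max(1.0, r0 - decay * t)

def d_ij(t, pos_i, pos_j):
    # Eq. (7.18): 1 inside the current influence region, 0 outside.
    return 1.0 if lattice_distance(pos_i, pos_j) <= radius(t) else 0.0
```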
The weights will be organized as follows: neighboring neurons on the network tend to have similar input weight vectors, and the weights represent neighboring regions in the input pattern space. In particular, the asymptotic values of the weight vectors tend toward a weighted center of their influence regions.
The influence region for neuron $i$ is the region $\Omega_i$ in the input pattern space. During the training phase, whenever an input pattern falls in $\Omega_i$, the weight vector $w_i$ will be modified. That is, $\Omega_i = \bigcup_j V_j$, where $V_j$ is the Voronoi region for neuron $j$.
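Continuing the sketch above, a weight update consistent with this description might look as follows; the Kohonen-style rule $w_i \leftarrow w_i + \alpha\, d_{ij}(t)\,(x - w_i)$ is an assumption inferred from the stated roles of $\alpha$ and $d_{ij}(t)$, not a rule quoted from the text.

```python
def train_step(t, x, weights, positions, alpha=0.1):
    # Winning neuron j = C(x): the neuron whose weight vector is closest to x.
    j = int(np.argmin([np.linalg.norm(x - w) for w in weights]))
    for i, w in enumerate(weights):
        # Only neurons inside the influence region of the winner are updated.
        weights[i] = w + alpha * d_ij(t, positions[i], positions[j]) * (x - w)
    return weights
```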
We define the asymptotic values of the weight vector as
$$
\mathrm{Asy}_i(t, x) = \phi\bigl(t, \lVert C(x) - N_i \rVert\bigr), \qquad (7.19)
$$
where $C(x) = N_j$ is the lattice position of the neuron $j$ that has the input weight vector closest to the input $x$, and $N_i$ is the lattice position of neuron $i$. Then we obtain the following equation:
$$
\hat{w}_i = \lim_{t \to \infty} w_i(t) = \frac{\int p(x)\,\mathrm{Asy}_i(t, x)\, x \, dx}{\int p(x)\,\mathrm{Asy}_i(t, x)\, dx}, \qquad (7.20)
$$
where $p(x)$ is a probability distribution function [25].
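Since Eq. (7.20) is a ratio of integrals over the input distribution $p(x)$, it can be approximated numerically by sampling. The sketch below is only an illustration; `asy_i` is a hypothetical stand-in for the influence function $\mathrm{Asy}_i$ at a fixed, large $t$.

```python
import numpy as np

def asymptotic_weight(samples, asy_i):
    # samples: inputs x drawn from p(x); asy_i: callable returning Asy_i(t, x).
    # Monte-Carlo estimate of Eq. (7.20):
    #   w_hat_i ~= sum(Asy_i(x) * x) / sum(Asy_i(x))
    a = np.array([asy_i(x) for x in samples])
    x = np.asarray(samples, dtype=float)
    return (a[:, None] * x).sum(axis=0) / a.sum()
```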
We follow the backpropagation algorithm to adjust the weights between the hidden neurons and the output neurons so as to minimize $\varepsilon$ in Eq. (7.21):
$$
\varepsilon = \frac{1}{2} \sum_{k=1}^{O_T} \bigl(\hat{o}_k(x) - o_k(x)\bigr)^2, \qquad (7.21)
$$
$$
\Delta w_{ik}^{HO} = -\eta \, \frac{\partial \varepsilon}{\partial w_{ik}^{HO}}, \qquad (7.22)
$$
where $\eta$ is the learning rate.
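As a hedged illustration of Eqs. (7.21) and (7.22), the following sketch performs one gradient-descent step on the hidden-to-output weights; linear output units are assumed here (the output activation is not specified in this excerpt), so the gradient is simply the outer product of the output error and the hidden activations.

```python
import numpy as np

def update_hidden_to_output(W_HO, hidden, target, eta=0.05):
    # Eq. (7.21): eps = 0.5 * sum_k (o_hat_k - o_k)^2
    o_hat = W_HO @ hidden            # network outputs (assumed linear units)
    error = o_hat - target           # o_hat_k - o_k
    grad = np.outer(error, hidden)   # d(eps)/dW_HO
    # Eq. (7.22): delta_w = -eta * d(eps)/dw
    return W_HO - eta * grad
```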
The neurons in the lattice are added or eliminated by the generation/annihilation algorithm, according to the monitored variance of the weight vectors described in Section 7.2.2.
For clear comprehension, we define $WD_{\max 1}$ as the neuron with the largest $WD$ in the neighborhood $N$, and $WD_{\max 2}$ as the neuron with the second largest $WD$. A new neuron is added in the middle of the two neurons $WD_{\max 1}$ and $WD_{\max 2}$. The new neuron should be inserted into the lattice, taking into account the arrangement of the other neurons.
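As an illustration only, the following sketch inserts a new neuron midway between $WD_{\max 1}$ and $WD_{\max 2}$; here `wd` is a placeholder for the weight-vector variance measure of Section 7.2.2 (not shown in this excerpt), and `neighborhood` maps each neuron to the indices of its lattice neighbors.

```python
import numpy as np

def generate_neuron(weights, positions, wd, neighborhood):
    # WD_max1: neuron with the largest WD over the whole lattice.
    max1 = int(np.argmax(wd))
    # WD_max2: neuron with the second largest WD, restricted to the
    # neighborhood N of WD_max1.
    max2 = max(neighborhood[max1], key=lambda i: wd[i])
    # The new neuron is placed in the middle of the two neurons.
    weights.append(0.5 * (weights[max1] + weights[max2]))
    positions.append(0.5 * (positions[max1] + positions[max2]))
    return weights, positions
```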
A typical neuron-generation situation is shown in Fig. 7.13. Figure 7.13(a) shows that the $WD_{\max 2}$ neuron is parallel to the $WD_{\max 1}$ neuron. In this case,