network:
\[
\frac{\partial E}{\partial w_{ij}} = (y_i - \Phi_i)\,\Psi_j
\]
\[
\frac{\partial E}{\partial m_{jn}} = \sum_i (y_i - \Phi_i)\, w_{ij}\, \Psi_j \sum_k \frac{(x_k - m_{jk})\, B_{kn}}{\sigma_{jk}} \tag{6.39}
\]
\[
\frac{\partial E}{\partial \sigma_{jn}} = \sum_i (y_i - \Phi_i)\, w_{ij}\, \Psi_j\, \frac{(x_n - m_{jn})^2}{\sigma_{jn}}.
\]
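The three expressions transcribe almost line for line into NumPy. The sketch below is illustrative rather than the book's code: the Gaussian form assumed for Ψ_j, the function name rbf_gradients_639, and all array shapes are assumptions of this example.

```python
import numpy as np

def rbf_gradients_639(x, y, w, m, sigma, B):
    """Transcription of Eq. (6.39) for one training pair (x, y).

    Shapes (illustrative): x (N,), y (I,), w (I, J), m (J, N),
    sigma (J, N), B (N, N).
    """
    d = x[None, :] - m                        # (J, N): x_k - m_jk
    # Assumed Gaussian form of the basis functions; the exact
    # normalization of Psi_j is not shown in this excerpt.
    psi = np.exp(-0.5 * np.sum(d**2 / sigma, axis=1))   # (J,)
    err = y - w @ psi                         # (I,): y_i - Phi_i

    dE_dw = np.outer(err, psi)                # (y_i - Phi_i) Psi_j
    common = (w.T @ err) * psi                # (J,): sum_i (y_i - Phi_i) w_ij Psi_j
    dE_dm = common[:, None] * ((d / sigma) @ B)    # sum_k (x_k - m_jk) B_kn / sigma_jk
    dE_dsigma = common[:, None] * (d**2 / sigma)   # (x_n - m_jn)^2 / sigma_jn
    # With the sign convention of Eq. (6.39), a descent step moves the
    # parameters in the direction of these terms, e.g. w += eta * dE_dw.
    return dE_dw, dE_dm, dE_dsigma
```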
In the transformed space the hyperellipses have the same orientation as in the original feature space. Hence they do not represent the same distribution as before. To overcome this problem, layers 3 and 4 are adapted at the same time as B. If these layers converge fast enough, they can be adapted to represent the transformed training data, thus providing a model on which the adaptation of B can be based. The adaptation with two different target functions (E and ρ) may become unstable if B is adapted too fast, because layers 3 and 4 must follow the transformation of the input space. Thus μ must be chosen much smaller than η. A large gradient has been observed to cause instability when a feature of extremely high relevance is added to another. This effect can be avoided by dividing the learning rate by the relevance, that is, μ = μ_0/r.
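A hypothetical update step can make the two time scales explicit. Everything below is illustrative: the parameter grouping, the names eta, mu0, and r, and the availability of a gradient of ρ with respect to B are assumptions of this sketch, since the relevance criterion itself is not shown in this excerpt.

```python
def adaptation_step(w, m, sigma, B, grads, eta, mu0, r):
    """One joint update; grads = (dE_dw, dE_dm, dE_dsigma, drho_dB),
    where drho_dB is assumed to come from the relevance criterion rho."""
    dE_dw, dE_dm, dE_dsigma, drho_dB = grads
    w = w + eta * dE_dw             # fast time scale: layers 3 and 4
    m = m + eta * dE_dm
    sigma = sigma + eta * dE_dsigma
    mu = mu0 / r                    # relevance-scaled rate, mu << eta
    B = B + mu * drho_dB            # slow time scale: the transform B
    return w, m, sigma, B
```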
6.6 Hopfield Neural Networks
An important concept in neural network theory is that of dynamic recurrent systems. The Hopfield neural network implements an autoassociative (content-addressable) memory: it associates new input vectors with the corresponding reference vectors stored in the memory.
A pattern, in the parlance of an N-node Hopfield neural network, is an N-dimensional vector p = [p_1, p_2, ..., p_N] from the space P = {−1, 1}^N. A special subset of P represents the set of stored or reference patterns E = {e^k : 1 ≤ k ≤ K}, where e^k = [e^k_1, e^k_2, ..., e^k_N]. The Hopfield network associates a vector from P with a certain reference pattern in E. The neural network partitions P into classes whose members are in some way similar to the stored pattern that represents the class. The Hopfield network finds a broad application area in image restoration and segmentation.
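A minimal sketch of such a memory, assuming the standard Hebbian storage rule and asynchronous sign updates for recall (the excerpt does not specify the learning or update rule); all names and the example patterns are illustrative.

```python
import numpy as np

def store(patterns):
    """Hebbian weights W = (1/N) sum_k e^k (e^k)^T with zero diagonal."""
    K, N = patterns.shape
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)        # no self-connections
    return W

def recall(W, p, n_sweeps=10):
    """Asynchronously update units until the state stops changing."""
    s = p.copy()
    for _ in range(n_sweeps):
        changed = False
        for i in np.random.permutation(len(s)):
            new = 1 if W[i] @ s >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:             # fixed point: a stored (or spurious) pattern
            break
    return s

# Usage: store two reference patterns, then recall from a noisy probe.
E = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
              [1, 1, 1, 1, -1, -1, -1, -1]])
W = store(E)
probe = E[0].copy()
probe[0] = -probe[0]                # corrupt one component
print(recall(W, probe))             # should recover E[0]
```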