At this point let us redefine the features of a well-designed neural network within the framework of the Theory of Systems with Several Equilibria: (a) it has several (possibly many, but a finite number of) fixed-point equilibrium states; (b) the network has to be convergent, i.e., each solution of the dynamics has to converge to an equilibrium state. Since the number of equilibria is finite, this second property is equivalent to gradient-like behavior (every solution approaches an equilibrium).
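Stated compactly (the notation below, with dynamics $\dot{x} = f(x)$ and equilibrium set $\mathcal{E}$, is introduced here only for illustration and is not the chapter's own), the two requirements read

$$\mathcal{E} = \{\bar{x} : f(\bar{x}) = 0\}\ \text{is finite}, \qquad \lim_{t \to \infty} x(t) = \bar{x} \in \mathcal{E}\ \text{for every solution } x(\cdot).$$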
D. The RNNs have a dynamics induced a posteriori by the learning process that established the synaptic weights. It is not at all compulsory that this a posteriori dynamics should have the required properties; hence they have to be checked separately.
In the last decades, the number of RNN applications has increased, these networks being designed for classification, identification, and optimization problems, as well as for complex image, visual, and spatio-temporal processing in fields such as engineering, chemistry, biology, and medicine (see, for instance, Chung & Lee, 1991; Fortuna et al., 2001; Arakawa, Hasegawa, & Funatsu, 2003; Fink, 2004; Atencia, Joya, & Sandoval, 2004; Jing-Sheng Xue, Ji-Zhou Sun, & Xu Zhang, 2004; Maurer, Hersch, & Billard, 2005; Iwahori, Kawanaka, Fukui, & Funahashi, 2005; Guirguis & Ghoneimy, 2007). All these applications rely mainly on the existence of several equilibria, requiring the “good behavior” properties discussed above.
Neural networks with several equilibria
(Bidirectional Associative Memories, Hopfield,
cellular, Cohen-Grossberg) have a rather special
systemic structure that allows associating Lyapu-
nov functions in a natural way. However, since
the properties of the Lyapunov function and its
derivative give sufficient conditions on systems
parameters, one of the current stability problems in neural network studies is to improve Lyapunov functions in order to obtain sharper, less restrictive stability conditions. These conditions will be
verified a posteriori on the mathematical model
of RNN. Another problem, but of theoretical na-
ture, is to embed the stability analysis of neural
networks in a unified stability theory for a wide
class of nonlinear systems.
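For orientation, a standard textbook form of such sufficient conditions (given here only for illustration; it is not quoted from the chapter) is the following: if a continuously differentiable function $V$ satisfies, along the solutions of $\dot{x} = f(x)$,

$$V(x)\ \text{bounded below}, \qquad \dot{V}(x) = \nabla V(x)^{\top} f(x) \le 0,$$

with $\dot{V}(x) = 0$ only at points of the equilibrium set, then a LaSalle-type invariance argument shows that every bounded solution approaches the set of equilibria; when the equilibria are isolated, as in property (a) above, each such solution converges to one of them.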
RECURRENT NEURAL NETWORKS AND LYAPUNOV FUNCTIONS
The Energy Lyapunov Function for the Hopfield Network
J.J. Hopfield (1982) showed that a physical system
whose dynamics within the state space is domi-
nated by many locally stable equilibrium points
(named attractors) may be viewed as a general
content-addressable memory.
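As a purely illustrative sketch of content-addressable recall (it uses the classical discrete-time binary Hopfield variant with a Hebbian weight rule, not the analog model (5) analyzed below; all patterns and values are chosen here for the example), a stored memory can be recovered from a corrupted probe as follows:

```python
import numpy as np

# Toy discrete-time binary Hopfield network, used only to illustrate
# content-addressable memory; the chapter itself treats the analog model (5).
rng = np.random.default_rng(0)

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1,  1, 1,  1, -1, -1, -1, -1]])   # stored memories
n = patterns.shape[1]

# Hebbian weight matrix with zero diagonal (symmetric, so an energy function exists)
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)

def recall(probe, steps=20):
    """Asynchronous sign updates drive the state toward a stored attractor."""
    x = probe.copy()
    for _ in range(steps):
        for i in rng.permutation(n):
            x[i] = 1 if W[i] @ x >= 0 else -1
    return x

corrupted = patterns[0].copy()
corrupted[:2] *= -1            # flip two bits of the first memory
print(recall(corrupted))       # recovers the first stored pattern here
```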
Consider the model for the analog Hopfield neural network (5). If $\hat{x}_i$, $i = \overline{1, m}$, is some equilibrium of (5), then without loss of generality one can shift the equilibrium to the origin, obtaining the so-called system in deviations

$$\dot{z}_i(t) = -a_i z_i(t) + \sum_{j=1}^{m} c_{ij}\, g_j(z_j(t)), \qquad i = \overline{1, m} \tag{10}$$
Here $z_i = x_i - \hat{x}_i$ and the nonlinearities defined by

$$g_i(z_i) = f_i(z_i + \hat{x}_i) - f_i(\hat{x}_i) \tag{11}$$
are also sigmoidal functions satisfying a condi-
tion of the type (4).
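To make the shift of coordinates explicit, assume (as the structure of (10) suggests, since (5) itself is not reproduced in this excerpt) that (5) has the usual analog Hopfield form $\dot{x}_i(t) = -a_i x_i(t) + \sum_{j=1}^{m} c_{ij} f_j(x_j(t)) + I_i$. Substituting $x_i = z_i + \hat{x}_i$ and subtracting the equilibrium relation $0 = -a_i \hat{x}_i + \sum_{j=1}^{m} c_{ij} f_j(\hat{x}_j) + I_i$ gives

$$\dot{z}_i = -a_i z_i + \sum_{j=1}^{m} c_{ij}\bigl[f_j(z_j + \hat{x}_j) - f_j(\hat{x}_j)\bigr] = -a_i z_i + \sum_{j=1}^{m} c_{ij}\, g_j(z_j),$$

which is precisely (10) with $g_j$ defined in (11).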
The Hopfield network model is based on
the idea of defining the global behavior of the
network related to the energy E of a physical
system—which is in fact a natural Lyapunov
function. Supposing that the potential energy
of the origin equals zero, if one denotes by z the
coordinates vector of some point as a measure of
its displacement from the equilibrium in origin and
takes into account that the energy, as a physical
quantity, is smooth enough to allow a multivari-
 