where τ is a positive constant, u_i is the output value of unit i, D is the factor controlling the sigmoid decay resistance, and U_j is the external input to unit j. The resulting energy function in this case is defined by
E = -\frac{1}{2}\sum_i \sum_j w_{ij}\, u_i u_j - \sum_i U_i u_i
Network stability, as proven by Hopfield (1982), is guaranteed by the symmetric network structure, i.e. w_{ij} = w_{ji}.
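The energy above is straightforward to evaluate numerically. Below is a minimal sketch in NumPy, assuming a symmetric weight matrix W, a state vector u, and external inputs U; the names are illustrative, not taken from the text.

    # Minimal sketch of the Hopfield energy, under the assumptions above.
    import numpy as np

    def hopfield_energy(W: np.ndarray, u: np.ndarray, U: np.ndarray) -> float:
        """E = -1/2 * sum_ij w_ij u_i u_j - sum_i U_i u_i."""
        return float(-0.5 * u @ W @ u - U @ u)

    # Illustrative use: with symmetric W and zero self-connections, E does
    # not increase under the network dynamics (the stability result above).
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 4))
    W = (W + W.T) / 2            # enforce symmetry, w_ij = w_ji
    np.fill_diagonal(W, 0.0)     # no self-connections
    u = np.sign(rng.standard_normal(4))   # bipolar unit states
    U = rng.standard_normal(4)            # external inputs
    print(hopfield_energy(W, u, U))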
For the training of recurrent networks, Rumelhart et al. (1986) proposed a general framework similar to that used for training feedforward networks, called backpropagation through time. The algorithm is obtained by unfolding the temporal operation of the network into a layered feedforward network that grows with each time step. This, however, is not always satisfactory. Williams and Zipser (1988) presented a learning algorithm for continually running, fully connected recurrent neural networks (Figure 3.9) that adjusts the network weights in real time, i.e. during the operational phase of the network. The proposed algorithm is known as the real-time recurrent learning algorithm.
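To illustrate how such a real-time adjustment can look, the following is a minimal sketch of one RTRL update for a fully connected network with tanh units; the variable names, the learning rate, and the assumption that every unit carries a target are illustrative choices, not details from Williams and Zipser.

    # One online RTRL step; a sketch under the assumptions stated above.
    import numpy as np

    def rtrl_step(W, y, x, target, P, lr=0.01):
        """W: (n, n+m) weights over z = [y; x].
        P: sensitivity tensor, P[k, i, j] = d y_k / d W_ij."""
        n = y.size
        z = np.concatenate([y, x])           # recurrent state plus input
        y_new = np.tanh(W @ z)               # next unit outputs
        d = 1.0 - y_new**2                   # tanh derivative
        # Sensitivity recursion: only the recurrent part of W feeds back.
        P_new = np.einsum('kl,lij->kij', W[:, :n], P)
        for i in range(n):
            P_new[i, i, :] += z              # delta_{ki} * z_j term
        P_new *= d[:, None, None]
        # Gradient of the instantaneous squared error; weights are
        # adjusted immediately, i.e. during the operational phase.
        e = target - y_new
        W = W + lr * np.einsum('k,kij->ij', e, P_new)
        return W, y_new, P_new

The sensitivity tensor P is initialized to zeros and carried across time steps, which is what makes the method fully online, at a cost of O(n^4) operations per step for n units.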
There are two basic learning paradigms for recurrent networks:
• fixed-point learning, through which the network reaches the prescribed steady state in which a static input pattern should be stored
• trajectory learning, through which a network learns to follow a trajectory or a sequence of samples over time, which is valuable for temporal pattern recognition, multistep prediction, and systems control.
For trajectory learning, both backpropagation through time and real-time recurrent learning are appropriate. From the mathematical point of view, backpropagation through time turns the recurrent network - by unfolding its temporal operation - into a layered feedforward network whose structure grows by one layer at every time step.
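To make the unfolding concrete, here is a minimal BPTT sketch, assuming the recurrence y(t) = tanh(W y(t-1) + V x(t)) and a squared-error loss at every step; this particular recurrence and the names are illustrative assumptions.

    # BPTT over T steps; a sketch under the assumptions stated above.
    import numpy as np

    def bptt(W, V, xs, targets, lr=0.01):
        T = len(xs)
        ys = [np.zeros(W.shape[0])]          # y(0): initial state
        # Forward pass: one unfolded "layer" per time step.
        for t in range(T):
            ys.append(np.tanh(W @ ys[-1] + V @ xs[t]))
        # Backward pass through the unfolded, layered network.
        dW, dV = np.zeros_like(W), np.zeros_like(V)
        carry = np.zeros(W.shape[0])         # gradient carried across steps
        for t in reversed(range(T)):
            e = (ys[t + 1] - targets[t]) + carry
            ds = e * (1.0 - ys[t + 1]**2)    # back through tanh
            dW += np.outer(ds, ys[t])        # shared weights: accumulate
            dV += np.outer(ds, xs[t])
            carry = W.T @ ds                 # pass to the previous step
        return W - lr * dW, V - lr * dV

Because every unfolded layer shares the same W and V, their gradients are accumulated across all time steps before the single weight update, which is precisely the layered feedforward picture described above.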
Almeida (1987) and Pineda (1987) presented a method for training recurrent networks of arbitrary architecture by backpropagation. Under the assumption that the network outputs depend strictly on present, and not on past, input values, Almeida derived the generalized backpropagation rule for this type of network and addressed the problem of network stability using the energy function formulated by Hopfield (1982). Pineda (1987), however, directly addressed the generalization of the backpropagation training algorithm and its extension to recurrent neural networks. Building on this work, Hertz et al. (1991) worked out a backpropagation algorithm for networks whose activation function obeys the evolutionary law
\tau_i \frac{dv_i}{dt} = -v_i + g\Big(\sum_j w_{ij} v_j\Big) + x_i ,
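Numerically, the fixed point of this law can be found by simple relaxation. The sketch below integrates the dynamics with explicit Euler steps, assuming g = tanh; the step size, tolerance, and names are illustrative assumptions.

    # Relax tau * dv/dt = -v + g(W v) + x to a fixed point (Euler steps).
    import numpy as np

    def relax(W, x, tau=1.0, dt=0.05, tol=1e-6, max_steps=10000):
        v = np.zeros(W.shape[0])
        for _ in range(max_steps):
            dv = (-v + np.tanh(W @ v) + x) / tau
            v_next = v + dt * dv
            if np.max(np.abs(v_next - v)) < tol:
                return v_next                # activations have settled
            v = v_next
        return v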