Fig. 12.7 The architecture of the Elman RNN
layer to the input of the input layer) [21], Elman's network (which employs a feedback connection from the output of the hidden layer to the input of the input layer) [26], Pollack's sequential cascade network [36], the higher-order recurrent neural network of Giles [15], Lee and Song's network (in which each output node is connected to itself) [26], etc. In the proposed system, Elman's RNN has been employed; it is a popular RNN in the category of dynamically driven neural networks. Like several other RNNs, Elman's network incorporates a static MLP architecture as its basic building block and is trained by the popular learning algorithms employed for training MLPs. Figure 12.7 shows the generic architecture of the Elman RNN utilized in this work. It is a three-layer architecture in which layer 2 contains context units in addition to the hidden neurons. The context units comprise a bank of unit time delays: they store the outputs of the hidden neurons for one time step and then feed them back to the input of the input layer. Hence, the context units constitute the short-term memory of the RNN. Because the output of the hidden layer at any time step is a nonlinear function of both the output of the input layer at that time step and the output of the hidden layer at the previous time step, the network continues to recycle information over multiple time steps, which is useful for efficient discovery of temporal patterns [18]. Mathematically, the output of the hidden layer at the k-th time step is given as:
\[
z_j(k) = f_1\!\left(\sum_{i=1}^{N} z_i(k)\, w_{ij}^{12} \;+\; \sum_{j=1}^{P} z_j(k-1)\, c_{jj}^{22} \;+\; b_j\right)
\tag{12.9}
\]
where
z_j(k) = output of the j-th neuron of layer 2 at the k-th time step,
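
To make Eq. (12.9) concrete, the following minimal NumPy sketch implements the Elman hidden-layer update and runs it over a short input sequence. The layer sizes, the random weights, and the use of tanh for the activation f1 are illustrative assumptions rather than values taken from this chapter, and the context weights are modeled here as a full P-by-P matrix.

import numpy as np

# Dimensions are assumptions for illustration: N inputs, P hidden neurons.
N, P = 4, 3
rng = np.random.default_rng(0)
W12 = rng.normal(size=(N, P))   # input-to-hidden weights w^{12}_{ij}
C22 = rng.normal(size=(P, P))   # context (recurrent) weights c^{22}
b = np.zeros(P)                 # hidden biases b_j

def elman_step(x_k, z_prev):
    # One application of Eq. (12.9): the new hidden output combines the
    # current input with the context units, i.e. the hidden outputs that
    # were stored for one time step; tanh stands in for f1 here.
    return np.tanh(x_k @ W12 + z_prev @ C22 + b)

# Recycle the hidden state across time steps, as the context units do.
z = np.zeros(P)                      # context units start empty
for x_k in rng.normal(size=(5, N)):  # a random 5-step input sequence
    z = elman_step(x_k, z)
    print(z)                         # hidden output z_j(k) at each step

Because the previous hidden output re-enters the update at every step, each z depends on the entire input history, which is the short-term memory behavior the context units provide.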