Fig. 12.10 Schematic structure of a recurrent network
• Hyperbolic tangent function

$$
y = \tanh(\beta e) = \frac{\exp(\beta e) - \exp(-\beta e)}{\exp(\beta e) + \exp(-\beta e)}
\qquad (12.4)
$$
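As a minimal sketch of Eq. (12.4), the Python/NumPy snippet below evaluates the hyperbolic tangent activation for a vector of neuron excitations. The function name `tanh_activation` and the slope parameter `beta` are illustrative choices, not names taken from the chapter.

```python
import numpy as np

def tanh_activation(e, beta=1.0):
    """Hyperbolic tangent activation of Eq. (12.4): y = tanh(beta * e)."""
    # Equivalent to (exp(beta*e) - exp(-beta*e)) / (exp(beta*e) + exp(-beta*e))
    return np.tanh(beta * e)

# Example: activation values for a few neuron excitations e
e = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(tanh_activation(e, beta=1.5))
```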
Applying non-linear activation functions, one can expect faster and more consistent ANN training with better convergence, more compact network structures (fewer neurons), and better generalization of knowledge, which results in higher robustness to noisy inputs.
The other category of networks, in contrast to feed-forward nets, comprises recurrent structures, which are characterized by feedback connections (Fig. 12.10). Not only the signal(s) from the output layer but also those from any other layer can be delivered back to the ANN input. Moreover, the output values from several time instants (a chain of consecutive output samples) can be sent to the input or to other layers. Obviously, for ANNs operating on sampled signals the feedback connection is realized with at least a unit delay, which means that the network output becomes available at the input with a delay of at least one sampling period.
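As a minimal illustration of such unit-delay feedback, the sketch below implements a single recurrent neuron whose previous output y[n-1] is added to the current input before the tanh activation of Eq. (12.4) is applied. The weights `w_in` and `w_fb`, the bias, and the slope `beta` are hypothetical values chosen only for demonstration.

```python
import numpy as np

def recurrent_neuron(x, w_in=0.8, w_fb=0.5, bias=0.0, beta=1.0):
    """One neuron with a unit-delay feedback connection: its previous
    output y[n-1] is fed back to the input at the next time instant."""
    y = np.zeros(len(x))
    y_prev = 0.0                                  # output delayed by one sampling period
    for n, xn in enumerate(x):
        e = w_in * xn + w_fb * y_prev + bias      # excitation: current input + fed-back output
        y[n] = np.tanh(beta * e)                  # tanh activation, Eq. (12.4)
        y_prev = y[n]                             # becomes available at the input one sample later
    return y

# Example: response to a short impulse-like input sequence
print(recurrent_neuron(np.array([1.0, 0.0, 0.0, 0.0, 0.0])))
```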
Recurrent ANNs are usually more efficient and compact (fewer neurons needed) than their feed-forward counterparts. However, as follows from control theory, recurrent networks may be prone to instability, since their equivalent transfer functions may have poles outside of the unit circle. Therefore, after training, when all the weight and bias coefficients are known, the ANN should be checked for stability before it is implemented to realize the desired task. Unfortunately, the transfer function as such can be defined only for linear structures.
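Since the pole-based criterion applies only to the linear (or linearized) feedback part of the network, one simple way to perform such a check is to compute the eigenvalues of the feedback weight matrix, which act as the poles of the equivalent transfer function. The sketch below assumes a linear recurrent state equation y[n] = W_fb·y[n-1] + W_in·x[n]; the matrix values and the helper name `is_stable` are hypothetical, intended only to illustrate the check described above.

```python
import numpy as np

def is_stable(W_fb):
    """Stability check of the linearized feedback part y[n] = W_fb @ y[n-1] + ... :
    the poles of the equivalent transfer function are the eigenvalues of W_fb,
    so all of them must lie strictly inside the unit circle."""
    poles = np.linalg.eigvals(W_fb)
    return np.all(np.abs(poles) < 1.0), poles

# Example: a trained 2x2 feedback weight matrix (hypothetical values)
W_fb = np.array([[0.6, -0.2],
                 [0.1,  0.4]])
stable, poles = is_stable(W_fb)
print(stable, np.abs(poles))
```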