Fig. 4.4 Sigmoidal activation function
$$O_c = h_{\text{Output}}\!\left(\sum_{p=1}^{P} w_{c,p}\, i_{c,p} + b_c\right)$$
where $O_c$ is the output of the output-layer node unit $c$, $P$ is the number of nodes in the previous hidden layer, $i_{c,p}$ is an input to node $c$ from the previous hidden-layer node $p$, $w_{c,p}$ is the weight modifying the connection from node $p$ to node $c$, and $b_c$ is the bias. $h_{\text{Output}}(x)$ is a linear activation function.
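As a concrete illustration, the following is a minimal Python sketch (not from the original text) of evaluating one output node under these definitions; the function names and the example values are hypothetical, with a sigmoidal activation assumed for the hidden layer (cf. Fig. 4.4) and the identity as the linear output activation:

```python
import numpy as np

def sigmoid(x):
    # Sigmoidal activation for the hidden layer (cf. Fig. 4.4)
    return 1.0 / (1.0 + np.exp(-x))

def output_node(i_c, w_c, b_c):
    """Output O_c of output-layer node c.

    i_c : activations i_{c,p} of the P previous hidden-layer nodes
    w_c : weights w_{c,p} on the connections into node c
    b_c : bias of node c
    h_Output is linear, so the weighted sum is returned unchanged.
    """
    return np.dot(w_c, i_c) + b_c

# Hypothetical example with P = 3 hidden nodes
hidden = sigmoid(np.array([0.2, -1.0, 0.5]))
weights = np.array([0.4, -0.3, 0.8])
print(output_node(hidden, weights, b_c=0.1))
```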
4.3.2 Recurrent Artificial Neural Networks
Recurrent neural networks differ fundamentally from feed-forward networks; they were proposed in the 1980s [24, 76, 97] for the complex modeling of real-life time series. Recurrent models have a 'recurrent' topology with no limitations on back loops (information is translated in backward directions as well). The modeling structure of a recurrent neural network is similar to that of a standard multilayer perceptron. One difference, however, is that connections among neurons within the hidden layers are allowed with a time delay, and these connections are capable of carrying information from the past. Although they are a powerful tool, recurrent networks are very difficult to train in comparison with feed-forward multilayer perceptron models. Some standard examples of recurrent ANNs are Elman Networks, Jordan Networks, Hopfield Networks, Liquid State Machines, Echo State Networks, and Topology and Weight Evolving Neural Networks. In Fig. 4.5 we can see the input, hidden, and output layers, and the recurrent connections, of a recurrent artificial neural network.
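To make the time-delayed connections concrete, here is a minimal sketch (not from the original text) of one recurrent update step, assuming a tanh activation and hypothetical weight matrices W_x and W_h; the hidden state h depends on the current input and, through W_h, on the previous hidden state, which is how information from the past is carried forward:

```python
import numpy as np

def recurrent_step(x_t, h_prev, W_x, W_h, b):
    # New hidden state from current input x_t and previous state h_prev;
    # the W_h @ h_prev term is the time-delayed recurrent connection.
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

rng = np.random.default_rng(0)
n_in, n_hidden = 2, 4
W_x = rng.normal(scale=0.5, size=(n_hidden, n_in))
W_h = rng.normal(scale=0.5, size=(n_hidden, n_hidden))
b = np.zeros(n_hidden)

h = np.zeros(n_hidden)                  # initial hidden state
for x_t in rng.normal(size=(5, n_in)):  # a toy time series of 5 steps
    h = recurrent_step(x_t, h, W_x, W_h, b)
print(h)
```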
4.3.3 Elman Artificial Neural Networks
The Elman network is a special form of recurrent neural network in which the connections from the hidden layer are linked back to a special copy layer. The Elman network, with its synchronous extra context layer, is a finite state machine which learns which state is relevant and is to be remembered for better modeling results.
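A toy Python forward pass (not from the original text) can make the copy mechanism explicit; the class name, layer sizes, and the choice of a sigmoidal hidden activation with a linear output are all hypothetical, but the context layer is, as in the Elman architecture, a verbatim copy of the previous hidden-layer activations:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ElmanSketch:
    """Toy Elman forward pass: the context (copy) layer stores the
    hidden-layer activations from the previous time step."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))
        self.W_ctx = rng.normal(scale=0.5, size=(n_hidden, n_hidden))
        self.W_out = rng.normal(scale=0.5, size=(n_out, n_hidden))
        self.context = np.zeros(n_hidden)   # copy layer, initially empty

    def step(self, x):
        hidden = sigmoid(self.W_in @ x + self.W_ctx @ self.context)
        self.context = hidden.copy()        # synchronous copy-back
        return self.W_out @ hidden          # linear output layer

net = ElmanSketch(n_in=1, n_hidden=3, n_out=1)
for x in [0.1, 0.4, -0.2]:                  # toy input series
    print(net.step(np.array([x])))
```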