(LM), etc., are considered among the faster algorithms, all of which make use of standard numerical optimization techniques. The architecture of the model, including the number of hidden layers, is also a very important factor. Minsky and Papert [60] highlighted a key weakness of single-layer perceptrons: they can solve only linearly separable problems. Therefore, in practice, it is usually most effective to use two hidden layers [41]. In this chapter, we discuss the Broyden-Fletcher-Goldfarb-Shanno (BFGS) neural network training algorithm [25], CG training algorithms, and the LM algorithm. The BFGS algorithm is a quasi-Newton method that proceeds iteratively, using successively improved approximations to the inverse Hessian instead of the true inverse.
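To make the mechanics concrete, the following Python/NumPy sketch shows the standard BFGS inverse-Hessian update; the chapter itself contains no code, so the function name and surrounding training-loop details are illustrative choices, not the authors' implementation.

```python
import numpy as np

def bfgs_update(H_inv, s, y):
    """One BFGS update of the inverse-Hessian approximation H_inv.

    s = x_new - x_old      (the step just taken in weight space)
    y = grad_new - grad_old (the corresponding change in the gradient)

    The true inverse Hessian is never formed; H_inv is refined
    iteratively, as described in the text.
    """
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    return ((I - rho * np.outer(s, y)) @ H_inv @ (I - rho * np.outer(y, s))
            + rho * np.outer(s, s))

# Each training iteration then uses d = -H_inv @ grad as the search
# direction, followed by a line search to choose the step length.
```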
A three-layer feed-forward neural network (one input layer, one hidden layer,
and one output layer) is the most commonly used topology in hydrology. This
topology has proved its ability in modeling many real-world functional problems.
Selecting the number of hidden neurons is the tricky part of ANN modeling, as it relates to the complexity of the system being modeled. Several heuristics exist, such as taking the geometric mean of the input and output vector dimensions [57], using the same number as the inputs used for the modeling [59], or setting it to twice the input layer dimension plus one [33], etc. In this study, the Hecht-Nielsen [33] approach has been adopted because of our past experience with it. A small sketch comparing the three heuristics follows below.
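As a minimal Python illustration (the helper name is ours, not from the cited works), the three heuristics give the following hidden-layer sizes for a network with four inputs and one output:

```python
import math

def hidden_layer_size(n_inputs, n_outputs):
    """Hidden-neuron counts under the three heuristics cited above."""
    return {
        "geometric mean [57]": round(math.sqrt(n_inputs * n_outputs)),
        "same as inputs [59]": n_inputs,
        "Hecht-Nielsen, 2n+1 [33]": 2 * n_inputs + 1,
    }

print(hidden_layer_size(4, 1))
# {'geometric mean [57]': 2, 'same as inputs [59]': 4, 'Hecht-Nielsen, 2n+1 [33]': 9}
```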
4.3.1 Feed-Forward Neural Network Architecture
In this study we adopted a feed-forward architecture. A representation of a typical feed-forward neural network with 4-3-1 architecture is shown in Fig. 4.3. This network topology has four nodes in the first layer (layer A) and three nodes in the second layer (layer B); these two layers are called the hidden layers. The network has one node in the third layer (layer C), which is called the output layer. The network thus has four inputs and one output. Each input-to-node and node-to-node connection in the network is modified by a weight. In addition, each node has an extra input that is assumed to have a constant value of one; the weight that modifies this extra input is called the bias. The architecture is called feed-forward because all information propagates along the connections in the direction from the network inputs to the network outputs.
$$
O_c = h_{\mathrm{Hidden}}\!\left(\sum_{p=1}^{P} i_{c,p}\, w_{c,p} + b_c\right),
\qquad \text{where } h_{\mathrm{Hidden}}(x) = \frac{1}{1 + e^{-x}}
\tag{4.17}
$$
When the network runs, each hidden-layer node performs the calculation in (4.17) on its inputs and transfers the result ($O_c$) to the next layer of nodes. In (4.17), $O_c$ is the output of the current hidden-layer node $c$, $P$ is either the number of nodes in the previous hidden layer or the number of network inputs, $i_{c,p}$ is an input to node $c$ from either the previous hidden layer or the network inputs, $w_{c,p}$ is the weight on that connection, and $b_c$ is the bias of node $c$.
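To make the computation concrete, here is a minimal Python/NumPy sketch of a forward pass through the 4-3-1 network of Fig. 4.3, with each hidden node evaluating (4.17). The random weights are placeholders, and the linear output node is an assumption on our part, since the chapter's treatment of the output activation is not shown here.

```python
import numpy as np

def h_hidden(x):
    # Sigmoid transfer function h_Hidden(x) = 1 / (1 + e^{-x}) from (4.17)
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W_A, b_A, W_B, b_B, W_C, b_C):
    """Forward pass through the 4-3-1 network of Fig. 4.3.

    Each hidden node c computes O_c = h_Hidden(sum_p i_{c,p} w_{c,p} + b_c);
    the bias b_c is the weight on the constant-one extra input.
    """
    out_A = h_hidden(W_A @ x + b_A)      # layer A: 4 hidden nodes
    out_B = h_hidden(W_B @ out_A + b_B)  # layer B: 3 hidden nodes
    return W_C @ out_B + b_C             # layer C: 1 output node (linear, assumed)

rng = np.random.default_rng(0)
x = rng.random(4)                                   # four network inputs
W_A, b_A = rng.standard_normal((4, 4)), rng.standard_normal(4)
W_B, b_B = rng.standard_normal((3, 4)), rng.standard_normal(3)
W_C, b_C = rng.standard_normal((1, 3)), rng.standard_normal(1)
print(forward(x, W_A, b_A, W_B, b_B, W_C, b_C))     # one network output
```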