Fig. 2. Architecture of three-layer FFNN.
This calibration process is generally referred to as “training”. The global
error function most commonly used is the quadratic (mean-squared error)
function.
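As a brief illustrative sketch (not taken from the study), the quadratic (mean-squared) error over a set of training samples can be computed as follows; the observed and predicted values shown are hypothetical.

```python
import numpy as np

def quadratic_error(d, y):
    """Mean-squared (quadratic) error between observed outputs d
    and network-predicted outputs y over the training samples."""
    d = np.asarray(d, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.mean((d - y) ** 2)

# Hypothetical observed vs. predicted monthly releases
print(quadratic_error([1.2, 0.8, 1.5], [1.0, 0.9, 1.4]))
```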
The connection weights are then adjusted using a form of the generalized
delta-learning rule in an attempt to reduce the error function. The amount
by which each connection weight is adjusted depends on the learning rate
(η), the momentum value (µ), the epoch size (e), the derivative of the
transfer function and the node output. The weight update equation for the
connection weight between nodes i and j is given in Eq. (3).
\Delta w_{ji}(t) = \sum_{s=1}^{e} \{\eta\,(d_j - y_j)\,f'(\cdot)\,y_i\} + \mu\,\Delta w_{ji}(t-1),   (3)
where w_ji is the connection weight between nodes i and j, (d_j − y_j) is the
difference between actual and predicted values (error), f'(·) is the derivative
of the transfer function with respect to its input, y_i is the current output of
processing element i, and s is the training sample presented to the network.
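A minimal sketch of the update in Eq. (3) is given below, assuming the e samples of an epoch are supplied as arrays and that the derivative of the transfer function is evaluated per sample; the function name, variable names and shapes are assumptions for illustration, not the implementation used in the study.

```python
import numpy as np

def weight_update(w, d, y, y_i, f_prime, eta, mu, dw_prev):
    """One application of Eq. (3): generalized delta-rule update with momentum.

    w       : weight matrix, w[j, i] connects node i to node j
    d, y    : target and predicted outputs, shape (e, n_out)
    y_i     : outputs of the sending nodes i, shape (e, n_in)
    f_prime : derivative of the transfer function at each output node,
              shape (e, n_out)
    eta, mu : learning rate and momentum value
    dw_prev : previous weight change, Delta w_ji(t - 1)
    """
    delta = (d - y) * f_prime                  # error term per sample and output node
    dw = eta * delta.T @ y_i + mu * dw_prev    # sum over the s = 1..e samples, plus momentum
    return w + dw, dw

# Hypothetical example: 2 inputs, 1 output, epoch of 3 samples
rng = np.random.default_rng(0)
w = rng.normal(size=(1, 2))
y_i = rng.normal(size=(3, 2))
d = rng.normal(size=(3, 1))
y = y_i @ w.T                                  # linear transfer, so f'(.) = 1
f_prime = np.ones_like(d)
w, dw = weight_update(w, d, y, y_i, f_prime, eta=0.01, mu=0.9,
                      dw_prev=np.zeros_like(w))
```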
The outputs from the linear programming model (inflow, storage, release and
actual demand) are given as inputs to the ANN model of the basin. Monthly
values of inflow, initial storage, demand and time period are the inputs to
the three-layer neural network, and the output from this network is the
monthly release. The training set consisted of data from 1969 to 1994. The same