5.3.4 Learning Rate
There is a parameter called the learning rate in the back-propagation training algorithm, which is based on the steepest descent method. Its aim is to minimize the sum of squared errors of the outputs. Determining a proper learning rate is one of the most sensitive steps in applying the back-propagation algorithm (Menhaj and Safepoor 1998). The learning rate is denoted by the symbol α and determines the speed of convergence of the algorithm. The performance of the steepest descent algorithm improves if the learning rate is permitted to change during the training process. An adaptive learning rate attempts to make the learning step as large as possible while keeping the learning stable, and it requires some changes in the training procedure (Asadollahfardi 2012).
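As a rough illustration of this idea, the sketch below shows an adaptive learning rate inside a steepest-descent loop for a simple linear model standing in for the network; the increase/decrease factors and the error-growth threshold are assumed values for illustration, not taken from the text.

```python
import numpy as np

def train_adaptive(X, y, weights, alpha=0.01, epochs=100,
                   inc=1.05, dec=0.7, max_err_growth=1.04):
    """Steepest-descent training with an adaptive learning rate.

    If an update reduces the sum of squared errors, alpha is increased;
    if the error grows by more than max_err_growth, the step is rejected
    and alpha is decreased. All factors here are illustrative defaults.
    """
    def sse(w):
        return np.sum((X @ w - y) ** 2)

    prev_err = sse(weights)
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ weights - y)    # gradient of the sum of squared errors
        candidate = weights - alpha * grad    # steepest-descent step
        err = sse(candidate)
        if err > prev_err * max_err_growth:
            alpha *= dec                      # step too large: shrink alpha, reject step
        else:
            weights = candidate
            alpha *= inc                      # step accepted: try a slightly larger alpha
            prev_err = err
    return weights, alpha
```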
5.3.5 Model Efficiency
Three error criteria, VE, MAE, and RMSE, are used to evaluate the output of the obtained models. Equations (5.5), (5.6) and (5.7) give these expressions (Kennedy and Neville 1976):
\[
\text{Volume Error (VE)} = \frac{100}{T}\sum_{t=1}^{T}\frac{Obs_t - For_t}{Obs_t} \qquad (5.5)
\]

\[
\text{Mean Absolute Error (MAE)} = \frac{1}{T}\sum_{t=1}^{T}\left|Obs_t - For_t\right| \qquad (5.6)
\]

\[
\text{Root Mean Square Error (RMSE)} = \sqrt{\frac{1}{T}\sum_{t=1}^{T}\left(Obs_t - For_t\right)^{2}} \qquad (5.7)
\]
where T = length of the time series, t = discrete time, Obs_t = observed parameter at time t (1 ≤ t ≤ T), and For_t = predicted parameter at time t (1 ≤ t ≤ T). The correlation coefficient R is also applied to show the agreement between the real data and the predicted ones, as described in Eq. (5.8).
\[
R = \frac{\sum (x - \bar{x})(y - \bar{y})}{\sqrt{\sum (x - \bar{x})^{2} \sum (y - \bar{y})^{2}}} \qquad (5.8)
\]
where \(\bar{x}\), \(\bar{y}\) = means of the x and y series. R shows the relationship between the observed data and the predicted data; if the relation is very strong, R approaches one.
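The following sketch computes the four criteria of Eqs. (5.5)-(5.8) for an observed series and a forecast series; the function name and the example values are assumptions for illustration only.

```python
import numpy as np

def model_efficiency(obs, forecast):
    """Compute VE, MAE, RMSE (Eqs. 5.5-5.7) and correlation R (Eq. 5.8)."""
    obs = np.asarray(obs, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    T = len(obs)

    ve = 100.0 / T * np.sum((obs - forecast) / obs)        # volume error, in percent
    mae = np.sum(np.abs(obs - forecast)) / T               # mean absolute error
    rmse = np.sqrt(np.sum((obs - forecast) ** 2) / T)      # root mean square error

    x = obs - obs.mean()
    y = forecast - forecast.mean()
    r = np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2))   # correlation coefficient

    return ve, mae, rmse, r

# Example usage with arbitrary illustrative values
ve, mae, rmse, r = model_efficiency([2.1, 3.4, 4.0], [2.0, 3.6, 3.9])
```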