oxides content or production temperature. The significant terms in the
model were found using ANOVA for each dependent variable.
4.2.4 Artificial Neural Network Modeling
According to StatSoft Statistica's recommendations, the database was randomly divided into training data (60%), cross-validation data (20%) and testing data (20%). The cross-validation data set was used to test the performance of the network while training was in progress, as an indicator of the level of generalization and of the point at which the network began to overtrain. The testing data set was used to examine the generalization capability of the network.
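A random 60/20/20 split of this kind can be reproduced outside Statistica; the sketch below uses NumPy, and the array names, sizes and random seed are placeholders rather than the actual database.

```python
import numpy as np

# Placeholder arrays standing in for the measured database: 'inputs'
# would hold the raw-material and firing variables, and 'targets' the
# measured CS, WA, FS, WLF and VMC values for each case.
rng = np.random.default_rng(seed=42)        # fixed seed only for reproducibility
inputs = rng.random((100, 4))               # 100 hypothetical cases, 4 input variables
targets = rng.random((100, 5))              # 5 measured properties per case

n_cases = inputs.shape[0]
order = rng.permutation(n_cases)            # random shuffle of case indices

n_train = int(0.60 * n_cases)               # 60% training
n_cv = int(0.20 * n_cases)                  # 20% cross-validation

train_idx = order[:n_train]
cv_idx = order[n_train:n_train + n_cv]
test_idx = order[n_train + n_cv:]           # remaining ~20% testing

X_train, y_train = inputs[train_idx], targets[train_idx]
X_cv, y_cv = inputs[cv_idx], targets[cv_idx]
X_test, y_test = inputs[test_idx], targets[test_idx]
```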
To improve the behavior of the ANN, both input and output data were
normalized according to Eq. (4.2):
q_i,norm = (q_i - min(q_i)) / (max(q_i) - min(q_i))        (4.2)
where q_i is the i-th case, with measured compressive strength (CS), water absorption (WA), firing shrinkage (FS), weight loss during firing (WLF) and volume mass of cubes (VMC). The normalized variables take values in the range 0 to 1 and have no physical meaning.
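A minimal sketch of Eq. (4.2), applied column-wise so that each input and output variable is rescaled independently; the function names are illustrative, not part of the original study.

```python
import numpy as np

def minmax_normalize(q):
    """Column-wise min-max normalization as in Eq. (4.2).

    Each column (one variable, e.g. CS, WA, FS, WLF or VMC) is
    rescaled to the range 0-1; the per-column minima and maxima are
    returned so predictions can later be mapped back to physical units.
    """
    q = np.asarray(q, dtype=float)
    q_min, q_max = q.min(axis=0), q.max(axis=0)
    return (q - q_min) / (q_max - q_min), q_min, q_max

def minmax_denormalize(q_norm, q_min, q_max):
    """Inverse of Eq. (4.2): map normalized values back to physical units."""
    return q_norm * (q_max - q_min) + q_min
```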
In order to obtain good network behavior, it was necessary to carry out a trial-and-error procedure to choose the number of hidden layers and the number of neurons in the hidden layer(s). The use of only one hidden layer is advisable, because more layers exacerbate the problem of local minima [4, 13].
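Such a trial-and-error search can be sketched as a loop over candidate sizes for a single hidden layer (of the MLP type described in the next paragraph), keeping the size with the lowest error on the cross-validation set. The candidate sizes, data shapes and the scikit-learn estimator below are assumptions, not the Statistica setup used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
# Placeholder normalized training and cross-validation sets.
X_tr, y_tr = rng.random((60, 4)), rng.random((60, 5))
X_cv, y_cv = rng.random((20, 4)), rng.random((20, 5))

best_size, best_mse = None, np.inf
for n_hidden in (2, 4, 6, 8, 10, 12):            # candidate hidden-layer sizes
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation='tanh',
                       solver='lbfgs', max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    mse = mean_squared_error(y_cv, net.predict(X_cv))
    if mse < best_mse:                           # keep the lowest CV error
        best_size, best_mse = n_hidden, mse

print("selected number of hidden neurons:", best_size)
```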
The multi-layer perceptron (MLP) model used here consisted of three layers (input, hidden, and output); this is the most common, flexible, and general-purpose kind of ANN. Such a model has proven to be quite capable of approximating nonlinear functions [4, 13], which is the reason for choosing it in this study. The network consists of one layer of linear output neurons and one hidden layer of nonlinear neurons. The MLP neural network learns using an algorithm called "backpropagation." The Levenberg-Marquardt algorithm has proven to be the fastest and is particularly well suited to networks of moderate size. During this iterative process, the input data are repeatedly presented to the network [14].
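To make the architecture concrete, the sketch below shows the forward pass of such a network in NumPy: one hidden layer of nonlinear (tanh) neurons followed by a linear output layer. The layer sizes and weights are random placeholders; in the study the weights would be fitted by Levenberg-Marquardt backpropagation in Statistica.

```python
import numpy as np

rng = np.random.default_rng(0)
x_dim, n_hidden, y_dim = 4, 8, 5        # assumed layer sizes, for illustration only

# Random placeholder parameters; training (backpropagation with
# Levenberg-Marquardt in the study) would adjust these values.
W1, b1 = rng.standard_normal((x_dim, n_hidden)), np.zeros(n_hidden)
W2, b2 = rng.standard_normal((n_hidden, y_dim)), np.zeros(y_dim)

def mlp_forward(X):
    """Forward pass: nonlinear (tanh) hidden layer, then linear output layer."""
    hidden = np.tanh(X @ W1 + b1)       # one hidden layer of nonlinear neurons
    return hidden @ W2 + b2             # one layer of linear output neurons

y_pred = mlp_forward(rng.random((3, x_dim)))   # predictions for 3 hypothetical cases
```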
The first estimation of the number of neurons can be obtained from the following equation [15, 16]:

m = n(x + 1) + y(n + 1)        (4.3)
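Assuming the usual reading of this estimate, with x the number of inputs, n the number of hidden neurons and y the number of outputs, Eq. (4.3) counts the adjustable weights and biases of a one-hidden-layer network; a minimal sketch:

```python
def n_parameters(x, n, y):
    """Eq. (4.3): m = n(x + 1) + y(n + 1).

    Assumed reading: x inputs, n hidden neurons, y outputs; m is then
    the number of adjustable weights and biases of a one-hidden-layer MLP.
    """
    return n * (x + 1) + y * (n + 1)

# Hypothetical example: 4 inputs, 8 hidden neurons, 5 outputs.
print(n_parameters(4, 8, 5))   # 8*5 + 5*9 = 85
```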