Fig. 7.2. Topology of a multilayer perceptron neural network containing two hidden layers (diagram labels: input layer, hidden layers, output layer).
However, additional improvements in training these networks are required, because the training process is complex and empirical in nature.
The function of the hidden layers and their neurons in the network topology (architecture) is to increase the capacity of the network to extract statistical information from the inputs. Each layer constitutes the input for the following layer: the input layer provides information to the first hidden layer, whose output signals become inputs for the second hidden layer, and so on. The sum of the products of the weights and the inputs reaching a neuron is passed through a smooth non-linear activation function, allowing the network to learn the non-linear patterns contained in the data (Haykin, 2001).
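As a concrete illustration of this forward flow, the following minimal sketch passes an input through two hidden layers, each layer's output becoming the next layer's input. The layer sizes, random weights, and tanh activation are assumptions for the example, not taken from the text:

```python
import numpy as np

def activation(v):
    """Smooth non-linear activation applied to the weighted sum (tanh assumed)."""
    return np.tanh(v)

# Illustrative layer sizes (assumed): 3 inputs, two hidden layers
# of 4 neurons each, and 1 output neuron.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input layer -> hidden layer 1
W2 = rng.normal(size=(4, 4))   # hidden layer 1 -> hidden layer 2
W3 = rng.normal(size=(1, 4))   # hidden layer 2 -> output layer
b1, b2, b3 = np.zeros(4), np.zeros(4), np.zeros(1)

x  = np.array([0.5, -1.2, 0.3])    # input signal
h1 = activation(W1 @ x + b1)       # output of hidden layer 1
h2 = activation(W2 @ h1 + b2)      # becomes the input of the next layer
y  = activation(W3 @ h2 + b3)      # network output
print(y)
```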
Learning deserves special emphasis in the study of ANNs, because the synaptic weights are adjusted only during the training stage; the training algorithms therefore deserve special attention. The objective is to adjust the network weights so as to minimize the difference between the output value and the desired value. The sum-squared error is the measure commonly used for this purpose.
The method of back-propagation of errors (Rumelhart et al., 1986a) brought a significant improvement in the performance of ANNs. During the forward propagation step, the synaptic weights are held fixed and the network generates an output. If the output error does not satisfy the stopping criterion (supervision), it is propagated backwards from the output layer, and the synaptic weights and bias of each processing unit are adjusted; the cycle repeats until the error is acceptable. The bias allows a neuron to produce a non-zero output even when all of its inputs are zero.
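A minimal sketch of one such training cycle may help fix ideas. The network size, learning rate, and tanh activation below are illustrative assumptions, not taken from the chapter; the sketch runs the fixed-weight forward pass, back-propagates the output error, and adjusts each unit's weights and bias by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(1)
# Assumed toy shapes: 2 inputs, 3 hidden neurons, 1 output neuron.
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)
eta = 0.1                                      # learning rate (assumed)

x, d = np.array([0.8, -0.4]), np.array([0.5])  # input and desired value

# Propagation step: weights are held fixed while the signal flows forward.
v1 = W1 @ x + b1;  y1 = np.tanh(v1)
v2 = W2 @ y1 + b2; y2 = np.tanh(v2)

e = d - y2                                     # error (Eq. 7.1)

# Backward step: the error is propagated from the output and each
# unit's weights and bias are adjusted by gradient descent on SE.
delta2 = e * (1 - y2**2)                  # local gradient at the output
delta1 = (W2.T @ delta2) * (1 - y1**2)    # error back-propagated to hidden layer

W2 += eta * np.outer(delta2, y1); b2 += eta * delta2
W1 += eta * np.outer(delta1, x);  b1 += eta * delta1
```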
The error in an output neuron $j$ at iteration $n$ is defined as:

$e_j(n) = d_j(n) - y_j(n)$   (7.1)
where $d_j(n)$ is the desired value and $y_j(n)$ the value calculated by neuron $j$. The squared error (SE) of neuron $j$ is defined as:

$\mathrm{SE}_j = \frac{1}{2} e_j^2(n)$   (7.2)
Therefore, for a set $S$ containing all neurons of the output layer:

$\mathrm{SE}(n) = \frac{1}{2} \sum_{j \in S} e_j^2(n)$   (7.3)
Mean squared error (MSE) during training is
calculated as:
$\mathrm{MSE} = \frac{1}{N} \sum_{n=1}^{N} \mathrm{SE}(n)$   (7.4)
This is the measure of the ANN's learning performance: learning means finding the synaptic weights that reduce the mean squared error to a minimum. If $m$ represents the total number of inputs applied to neuron $j$, excluding the bias, then the induced local field produced at the input of the activation function associated with neuron $j$ is:
$v_j(n) = \sum_{i=0}^{m} w_{ji}(n)\, y_i(n)$   (7.5)
The function signal $y_j(n)$ at the output of neuron $j$ at iteration $n$ is:

$y_j(n) = \varphi_j(v_j(n))$   (7.6)
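Putting Eqs 7.1 to 7.6 together, a small numeric sketch (all values assumed for illustration, with $y_0 = +1$ carrying the bias and $\varphi = \tanh$) computes the induced local field, the function signal, the error, and the mean squared error over $N = 2$ iterations:

```python
import numpy as np

# Toy numbers (assumed) for one output neuron j over N = 2 iterations.
phi = np.tanh                       # activation function (Eq. 7.6)
w = np.array([0.2, 0.5, -0.3])      # w_j0 (bias weight), w_j1, w_j2
SE = []
for y_in, d in [(np.array([1.0, 0.6, 0.9]), 0.4),    # y_0 = 1 feeds the bias
                (np.array([1.0, -0.2, 0.7]), -0.1)]:
    v = w @ y_in                    # induced local field (Eq. 7.5)
    y = phi(v)                      # function signal (Eq. 7.6)
    e = d - y                       # error (Eq. 7.1)
    SE.append(0.5 * e**2)           # squared error (Eq. 7.2)
MSE = np.mean(SE)                   # mean squared error (Eq. 7.4)
print(MSE)
```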
 