where η is a correction gain and δ_j is the correction factor

\delta_j = y_j (1 - y_j)(d_j - y_j)    (7.5)
Clearly, each weight will be increased if the calculated output from its node is less than the desired value,
and vice versa. The correction factors used to update weights preceding the hidden layer nodes are updated
according to
\delta_j = x_j (1 - x_j) \sum_k \delta_k w_{jk}    (7.6)
where the summation index k runs over the nodes of the layer succeeding the one currently being updated. The offset parameter c of each node is treated as an additional weight factor and updated in the same manner.
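As a concrete illustration, here is a minimal sketch of a single back-propagation step for a network with one hidden layer of sigmoid nodes, applying Eqs. (7.5) and (7.6). The function name, the array layout, the logistic transfer function, and the default correction gain are assumptions made for this example only; they are not prescribed by the text.

```python
import numpy as np

def sigmoid(s):
    """Logistic transfer function used by every node in this sketch."""
    return 1.0 / (1.0 + np.exp(-s))

def backprop_step(x_in, d, W1, c1, W2, c2, eta=0.5):
    """One forward pass followed by one back-propagation update.

    x_in   : input vector
    d      : desired output vector
    W1, c1 : hidden-layer weights and offsets (updated in place)
    W2, c2 : output-layer weights and offsets (updated in place)
    eta    : correction gain (learning rate)
    """
    # Forward pass
    x_hid = sigmoid(W1 @ x_in + c1)        # hidden-node outputs x_j
    y = sigmoid(W2 @ x_hid + c2)           # network outputs y_j

    # Output-layer correction factors, Eq. (7.5)
    delta_out = y * (1.0 - y) * (d - y)

    # Hidden-layer correction factors, Eq. (7.6):
    # delta_j = x_j (1 - x_j) * sum_k delta_k * w_jk
    delta_hid = x_hid * (1.0 - x_hid) * (W2.T @ delta_out)

    # Weight updates; each offset is treated as an extra weight whose
    # input is fixed at 1, so it receives the same correction factor.
    W2 += eta * np.outer(delta_out, x_hid)
    c2 += eta * delta_out
    W1 += eta * np.outer(delta_hid, x_in)
    c1 += eta * delta_hid
    return y
```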
The weights and offsets of the neural network are recalculated during the back propagation as outlined
above. Then the network repeats the calculation of the output values based on the same input data,
compares them to the desired output values, and readjusts the network parameters through yet another
back propagation phase. This cycle is repeated until the calculated outputs have converged sufficiently
close to the desired outputs or an iteration limit has been reached. Once the neural network has been
tuned to the first set of input/output data, additional data sets can be used for further training in the
same way. To ensure concurrent network adaptation to all sets of data, the entire training process may
be repeated until all transformations are adequately modeled by the network. This requires, of course,
that all the data sets were obtained from the same process and therefore the underlying input/output
transformation is consistent.
As noted above, the training iteration process may be terminated either by a convergence limit or
simply by limiting the total number of iterations. In the former case we use an error measure
e defined as follows:
e = \max_{k=1,\ldots,K} \sum_{m=0}^{M-1} (d_{k,m} - y_{k,m})^2    (7.7)
where K is the number of input/output data sets used for training, M is the number of network output parameters in each data set, and (d_{k,m} − y_{k,m}) is the error in the network calculation of parameter m in data set k. The error measure, e, changes after each round of network weight adjustments. In the long run e decreases as the network is refined by training iterations. Using this indicator one can program the network to terminate the iterative tuning process as soon as e reaches some threshold value, e_0. Alternatively, a given network may not be able to reduce the error measure down to the specified e_0. In that case the iterations may be terminated by simply specifying a maximum number for them.
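The following sketch shows how the error measure of Eq. (7.7) and the two stopping rules might be combined in a training driver. The network object, its forward and backprop methods, and the default values of e0 and the iteration limit are hypothetical placeholders for whatever network implementation is actually used; they are not taken from the text.

```python
import numpy as np

def error_measure(outputs, desired):
    """Error measure e of Eq. (7.7): the worst case, over the K data
    sets, of the sum of squared output errors within each set."""
    return max(np.sum((d - y) ** 2) for y, d in zip(outputs, desired))

def train(network, inputs, desired, e0=1e-3, max_iter=10000):
    """Repeat back-propagation rounds over all K data sets until the
    error measure drops to the threshold e0 or the iteration limit is hit.
    `network.backprop` and `network.forward` are hypothetical methods."""
    for _ in range(max_iter):
        for x, d in zip(inputs, desired):          # one adjustment round
            network.backprop(x, d)
        outputs = [network.forward(x) for x in inputs]
        if error_measure(outputs, desired) <= e0:  # converged
            break
    return network
```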
The training mode, as described above, is a precondition for actually applying the neural network in
the application mode. In this mode entirely new input data is presented to the network which, in turn,
predicts new outputs based on the transfer characteristics learned during the training. If this new data
is obtained from the same local region of operation of the process as the training data, the input/output relations should be governed by the same underlying process and the neural network should perform adequately. The neural network is not updated in the application mode.
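In code, the application mode amounts to forward passes alone; the short sketch below continues the hypothetical interface used in the training driver above, and the example input vectors are made up for illustration.

```python
import numpy as np

# Application mode (sketch): present entirely new input vectors and read
# off the predicted outputs. No back-propagation is performed, so the
# weights and offsets learned during training stay fixed.
# `trained_network` is whatever the hypothetical `train` driver returned.
new_inputs = [np.array([0.2, 0.7]), np.array([0.4, 0.1])]   # made-up example inputs
predictions = [trained_network.forward(x) for x in new_inputs]
```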
When compared to other modeling methodologies, neural networks have certain drawbacks as well
as advantages. The most notable drawback is the lack of comprehension of the physics of the process.
Relating the qualitative effects of the network structure or parameters to the process parameters is usually
impossible. On the other hand, most physical models resort to substantial simplifications of the process
and therefore trade accuracy for comprehensibility. The advantages of neural models include relative
accuracy, as illustrated in the following sections, and generality. If the training data for a neural network