fied by a transfer function h(t) before being sent forward along the network. The mathematical expression for this algorithm is
R(X) \approx F(X) = h\left( \sum_{j=1}^{J} W_{kj}\, h\left( \sum_{i=1}^{N} W_{ji} X_i + W_{j0} \right) + W_{k0} \right)    (3)
in which, in our case, R(X) is the “true” value for the structural response obtained with the dynamic analysis for the input vector X, with components X_i; F(X) is the neural network approximation; W_kj and W_ji are the weight parameters; and h(t) is the transfer function applied at the hidden and output neurons. This function could take different forms, and in this Chapter we use a sigmoid:
h(t) = \frac{1.0}{1 + \exp(-t)}    (4)
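To make Equations (3) and (4) concrete, the following short Python sketch evaluates a network of this form for a single input vector. The layer sizes, the random example weights, and the function names sigmoid and network_output are illustrative placeholders, not values or code from the chapter.

import numpy as np

def sigmoid(t):
    # Transfer function of Equation (4)
    return 1.0 / (1.0 + np.exp(-t))

def network_output(x, W_ji, W_j0, W_kj, W_k0):
    # Feed-forward evaluation of Equation (3):
    # hidden-layer activations first, then the output neuron(s)
    hidden = sigmoid(W_ji @ x + W_j0)      # shape (J,)
    return sigmoid(W_kj @ hidden + W_k0)   # shape (K,)

# Example with arbitrary sizes: N = 3 inputs, J = 5 hidden neurons, K = 1 output
rng = np.random.default_rng(0)
N, J, K = 3, 5, 1
W_ji, W_j0 = rng.normal(size=(J, N)), rng.normal(size=J)
W_kj, W_k0 = rng.normal(size=(K, J)), rng.normal(size=K)
x = np.array([0.2, 0.5, 0.8])
print(network_output(x, W_ji, W_j0, W_kj, W_k0))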
The weights W must be obtained in such a way that the differences between R(X) and F(X) are minimized. This optimization is defined as the “training” of the network, and different minimization algorithms can be implemented, either gradient- or search-based. For example, back-propagation or other Newtonian algorithms implement steepest descent with either a controlled or a calculated step. Although back-propagation is the commonly used training method, this Chapter presents in Section 4.2 a gradient-free search algorithm as a straightforward approach to the training optimization.
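The error measure that training minimizes is not written out in this excerpt; a common choice, consistent with reducing the differences between R(X) and F(X), is the sum of squared errors over all NP training patterns. The Python sketch below assumes such a squared-error measure; the name training_error and the predict argument are illustrative (predict could be the network_output sketch shown after Equation (4)). Either back-propagation or a gradient-free search can then be applied to this function of the weights W.

import numpy as np

def training_error(predict, inputs, targets):
    # Sum of squared differences between the network approximation F(X_k)
    # and the target responses T(k), k = 1, ..., NP.
    return sum(float(np.sum((np.asarray(t_k) - predict(x_k)) ** 2))
               for x_k, t_k in zip(inputs, targets))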
Figure 1. Neural network architecture
3.2 Search-Based Optimization as a Training Algorithm

The following describes a gradient-free, search-based optimization algorithm for the neural network weights W. The optimization strategy is called OPT. Let N be the number of input variables and NP the number of input data combinations. The input values are then X0(i, k), with i = 1, N and k = 1, NP. Before proceeding, the data are scaled to values X(i, k), between the limits 0.01 and 0.99, in order to eliminate potential problems with different units and magnitudes. Similarly, the output results from the response analysis, T0(k), are also scaled to values T(k) between 0.01 and 0.99, tak-
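The scaling formula itself is not reproduced in this excerpt; a simple linear map of each variable onto the stated limits 0.01 and 0.99 would be one way to carry it out. The Python sketch below assumes such a linear scaling; the function name scale_to_range and the treatment of constant columns are assumptions.

import numpy as np

def scale_to_range(values, lo=0.01, hi=0.99):
    # Linearly map a 1-D array of raw values onto the interval [lo, hi].
    v = np.asarray(values, dtype=float)
    vmin, vmax = v.min(), v.max()
    if vmax == vmin:
        # Constant variable: place every entry at mid-range (assumption).
        return np.full_like(v, 0.5 * (lo + hi))
    return lo + (hi - lo) * (v - vmin) / (vmax - vmin)

# Each input variable X0(i, :) and the response vector T0 would be
# scaled independently, e.g.:
# X = np.array([scale_to_range(X0[i, :]) for i in range(N)])
# T = scale_to_range(T0)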