Fig. 2.8 The general structure of the feedforward neural network has an input layer $i$, a hidden layer $h$, and an output layer $j$, as well as bias nodes (dashed nodes). Weights are defined such that $w_{hi}$ represents the weight from a node in layer $i$ to a node in layer $h$, and $w_{jh}$ represents the weight from a node in layer $h$ to a node in layer $j$. The value $V(\mathbf{x})$ of a state vector $\mathbf{x}$ is computed by forward propagation.
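To make the forward pass in Fig. 2.8 concrete, the following is a minimal sketch of forward propagation under stated assumptions: a single hidden layer, a tanh transfer function, and NumPy arrays. The function and variable names are illustrative choices, not taken from the text.

```python
import numpy as np

def forward(x, W_hi, b_h, W_jh, b_j, f=np.tanh):
    """Forward propagation through a single-hidden-layer network.

    x    : input vector (outputs of layer i)
    W_hi : weights w_hi from input layer i to hidden layer h, shape (n_h, n_i)
    b_h  : hidden bias weights (the dashed bias nodes), shape (n_h,)
    W_jh : weights w_jh from hidden layer h to output layer j, shape (n_j, n_h)
    b_j  : output bias weights, shape (n_j,)
    f    : transfer function f(.); tanh is an assumed choice
    """
    v_h = W_hi @ x + b_h    # induced local fields v_h = sum_i w_hi * y_i (+ bias)
    y_h = f(v_h)            # hidden outputs y_h = f(v_h)
    v_j = W_jh @ y_h + b_j  # induced local fields of the output layer
    y_j = f(v_j)            # output y_j; for a value network this is V(x)
    return y_j
```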
The weight update equation of the back-propagation algorithm for a weight $w_{jh}$ of a neural network (a weight from a node in layer $h$ to a node in layer $j$) takes the general form:

$$\Delta w_{jh} = \alpha \times E \times \delta_j \times y_h \qquad (2.4)$$
where the learning parameter $\alpha$ modulates the magnitude of the weight adjustment, $E$ is the prediction error, $\delta_j$ is the local gradient based on the derivative of the transfer function evaluated at the node in layer $j$, and $y_h$ is the output of hidden node $h$ (which is also the input to output node $j$), computed as $y_h = f(v_h)$, where the induced local field is $v_h = \sum_i w_{hi} y_i$ and $f(\cdot)$ is a transfer function. The prediction error from this network is stated as $E = (\hat{y}_j - y_j)$, where $y_j$ is the value of output node $j$ and $\hat{y}_j$ is the corresponding target output value. The expression for $\Delta w_{jh}$ can be written more explicitly using the partial derivative of the squared network error $\frac{1}{2}E^2$ with respect to the network weights:
$$\Delta w_{jh} = -\alpha\,\frac{\partial\left(\frac{1}{2}E^2\right)}{\partial w_{jh}} = -\alpha\, E\,\frac{\partial E}{\partial y_j}\,\frac{\partial y_j}{\partial v_j}\,\frac{\partial v_j}{\partial w_{jh}} = \alpha\,(\hat{y}_j - y_j)\, f'(v_j)\, y_h$$
where $f'(v_j)$ is the derivative of the transfer function evaluated at the induced local field $v_j$. This weight adjustment expression can be extended for updating the weights $w_{hi}$ between the input and hidden layers.
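As an illustration of Eq. (2.4) and of its standard extension to the input-to-hidden weights $w_{hi}$, here is a minimal sketch assuming a tanh transfer function, the squared error $\frac{1}{2}E^2$, and NumPy arrays; the names and the learning-rate value are illustrative assumptions, not the book's code.

```python
import numpy as np

def backprop_step(x, y_target, W_hi, b_h, W_jh, b_j, alpha=0.1):
    """One back-propagation update for the squared error (1/2) * E**2,
    with E = y_hat - y_j and f = tanh, so f'(v) = 1 - tanh(v)**2 = 1 - y**2."""
    # Forward pass (as in the earlier sketch).
    y_h = np.tanh(W_hi @ x + b_h)
    y_j = np.tanh(W_jh @ y_h + b_j)

    # Output layer, Eq. (2.4): delta_w_jh = alpha * E * delta_j * y_h.
    E = y_target - y_j              # prediction error E = (y_hat - y_j)
    delta_j = E * (1.0 - y_j ** 2)  # error times f'(v_j)

    # Hidden layer: back-propagate delta_j through w_jh (the standard
    # extension the text refers to; computed before the weights change).
    delta_h = (W_jh.T @ delta_j) * (1.0 - y_h ** 2)

    # Apply the weight adjustments in place.
    W_jh += alpha * np.outer(delta_j, y_h)
    b_j += alpha * delta_j
    W_hi += alpha * np.outer(delta_h, x)
    b_h += alpha * delta_h
```

Note that `delta_j` here folds the error $E$ into the local gradient, so each output-weight increment equals $\alpha\,(\hat{y}_j - y_j)\, f'(v_j)\, y_h$, matching the derivation above.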
 