FIGURE 7.4  Three-input, two-output neural network, using two hidden layers of three nodes each, fully interconnected. (Inputs $x_0$, $x_1$, $x_2$; outputs $y_0$, $y_1$; desired output values $D_0$, $D_1$.)
the input data with a weight factor. The weight factor associated with the link from $x_i^{(1)}$ to the node producing $x_j^{(2)}$ is annotated as $w_{ij}^{(2)}$, and a similar convention holds for the links between other layers. Each node calculates its output by summing its weighted inputs and using the result as the argument of a nonlinear function associated with the node. For our application this function is the same for all nodes:
$$ f(s) = \frac{1}{1 + \exp[-(s - c)]} \qquad (7.2) $$
where $s$ is the sum of the node inputs and $c$ is an internal offset value. Clearly the node output will be confined to the range $0 < f(s) < 1$. Because the limiting values, 0 and 1, will only be approached as $s$ approaches $\pm\infty$, all input and output data are scaled so that they are confined to a subinterval of [0 … 1]. A practical region for the data is chosen to be [0.1 … 0.9]. In this case each input or output parameter $p$ is normalized as $p_n$ before being applied to the neural network according to:
$$ p_n = \frac{(0.9 - 0.1)(p - p_{\min})}{p_{\max} - p_{\min}} + 0.1 \qquad (7.3) $$

where $p_{\max}$ and $p_{\min}$ are the maximum and minimum values, respectively, of data parameter $p$.
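As a concrete illustration, the scaling of Eq. (7.3) and its inverse (needed to convert network outputs back to engineering units) can be sketched as below; the Python function names and the numerical example are illustrative, not part of the original text.

```python
def normalize(p, p_min, p_max):
    """Map a raw parameter value onto the working range [0.1, 0.9], Eq. (7.3)."""
    return (0.9 - 0.1) * (p - p_min) / (p_max - p_min) + 0.1

def denormalize(p_n, p_min, p_max):
    """Inverse mapping: recover the physical value from a network output."""
    return (p_n - 0.1) * (p_max - p_min) / (0.9 - 0.1) + p_min

# Example: a parameter ranging from 0 to 250 with current value 100
# maps to 0.1 + 0.8 * 100/250 before being fed to the network.
print(normalize(100.0, 0.0, 250.0))   # ≈ 0.42
```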
The network starts calculating its output values by passing the weighted inputs to the nodes in the first layer.
The resulting node outputs of that layer are passed on, through a new set of weights, to the second layer,
and so on until the nodes of the output layer compute the final outputs.
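A minimal sketch of this layer-by-layer computation, assuming the weights of each layer are stored as a matrix and using the node function of Eq. (7.2), might read as follows; the array layout and names are assumptions made for the example, not definitions from the text.

```python
import numpy as np

def node_function(s, c=0.0):
    """Nonlinear node function of Eq. (7.2); output lies strictly between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-(s - c)))

def forward(x, weights, offsets):
    """Pass a normalized input vector through all layers of the network.

    weights[k][i, j] is the weight on the link from input i to node j of layer k;
    offsets[k][j] is the internal offset c of node j in layer k.
    """
    for W, c in zip(weights, offsets):
        x = node_function(x @ W, c)   # weighted sum per node, then the nonlinearity
    return x

# Network of Figure 7.4: 3 inputs, two hidden layers of 3 nodes, 2 output nodes.
rng = np.random.default_rng(1)
weights = [rng.uniform(-0.1, 0.1, size=shape) for shape in [(3, 3), (3, 3), (3, 2)]]
offsets = [np.zeros(n) for n in (3, 3, 2)]
outputs = forward(np.array([0.5, 0.3, 0.7]), weights, offsets)   # two values in (0, 1)
```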
Before practical application, the network has to be trained to perform the mapping of the three input parameters to the two output parameters. This is done by repeatedly applying training data to its inputs, calculating the corresponding outputs by the network, comparing them to the desired outputs, and altering the internal parameters of the network for the next round. The training starts by assigning small random values to all weights ($w_{ij}$) and node offsets ($c_j$) in the network. The first three input data values are presented to the network, which in turn calculates the two output values. Because the initial weights and node offsets are random, these values will generally be quite different from the desired output values, $D_0$ and $D_1$. Therefore, the differences between the desired and calculated outputs have to be utilized to dictate improved network values, tuning each weight and offset parameter through back propagation. The weights preceding each output node are updated according to

$$ w_{ij}(t + 1) = w_{ij}(t) + \eta\, d_j\, x_i^{(2)} \qquad (7.4) $$
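For orientation, a sketch of this output-layer update is given below. The error term $d_j$ has not been defined at this point in the text; the sketch assumes the common delta-rule form $d_j = (D_j - y_j)\,y_j(1 - y_j)$ for a sigmoid node, and the symbol $\eta$ is taken to be a learning-rate factor. Both are assumptions for the example, not statements from the source.

```python
import numpy as np

def update_output_weights(W_out, x_hidden, y, D, eta=0.5):
    """One back-propagation step for the weights feeding the output nodes, Eq. (7.4).

    W_out    : (n_hidden, n_outputs) array of weights w_ij
    x_hidden : outputs x_i of the last hidden layer
    y        : calculated network outputs
    D        : desired output values (D_0, D_1)
    eta      : learning-rate factor (assumed role of eta in Eq. (7.4))
    """
    d = (D - y) * y * (1.0 - y)                  # assumed delta-rule error term d_j
    return W_out + eta * np.outer(x_hidden, d)   # w_ij(t+1) = w_ij(t) + eta * d_j * x_i
```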