target output. The knowledge is represented and stored by the strength (weights) of the connections between the neurons [29, 44].
Several training algorithms exist for NNs; the one used here is the back-propagation (BP) Levenberg-Marquardt training algorithm. The BP algorithm is an iterative gradient algorithm designed to compute the connection weights by minimizing the total mean-square error between the actual output of the multi-layer network and the desired output. At the start, the weights are chosen randomly; the learning rule then compares the desired output value with the output calculated from the current set of weights and thresholds.
The learning algorithm can be summarized as follows (a code sketch of the complete loop is given after the steps):
Step 1. Select the learning rate $\eta = 0.1$ and the momentum coefficient $\alpha = 0.1$.
Step 2. Take a group of random numbers within (-1, 1) as the initial values of the weights $w_{ji}^{m-1}$.
Step 3. Compute the outputs of all neurons layer by layer, starting with the input layer, using Eqs. (1)-(3).
Step 4. Compute the system mean square error by:

$$E = \frac{1}{2} \sum_{i=1}^{P} \left( D_i - y_i \right)^2 \qquad (4)$$

where $y_i$ is the actual output of the ith output node, $D_i$ is the corresponding desired output, and $P$ denotes the number of output nodes.
Step 5. If E is sufficiently small, or the number of learning iterations exceeds a preset limit, stop learning.
Step 6. Compute the learning errors for every neuron, layer by layer:

$$\delta^m = \left( D^m - y^m \right) v^m \qquad (5)$$
Step 7. Update the weights along the negative gradient of the error E:

$$w_{ji}(t+1) = w_{ji}(t) + \eta \, \delta_i y_j + \alpha \left[ w_{ji}(t) - w_{ji}(t-1) \right] \qquad (6)$$
Step 8. Repeat by going to Step 3.
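The steps above translate directly into a short training loop. The following is a minimal sketch in Python/NumPy, not the Fortran Neuromod code: the single-hidden-layer shape, the sigmoid activation, the stopping thresholds, and the omission of threshold (bias) terms are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def train_bp(X, D, n_hidden, eta=0.1, alpha=0.1, e_min=1e-4, max_iter=10_000):
    """Back-propagation with momentum, following Steps 1-8 above."""
    n_in, n_out = X.shape[1], D.shape[1]
    # Step 2: random initial weights in (-1, 1).
    W1 = rng.uniform(-1.0, 1.0, (n_in, n_hidden))
    W2 = rng.uniform(-1.0, 1.0, (n_hidden, n_out))
    dW1_prev = np.zeros_like(W1)  # previous updates, kept for the
    dW2_prev = np.zeros_like(W2)  # momentum term of Eq. (6)
    for _ in range(max_iter):
        # Step 3: forward pass, layer by layer.
        h = sigmoid(X @ W1)
        y = sigmoid(h @ W2)
        # Step 4: system mean-square error, Eq. (4).
        E = 0.5 * np.sum((D - y) ** 2)
        # Step 5: stop when the error is small enough.
        if E < e_min:
            break
        # Step 6: learning errors per layer, cf. Eq. (5);
        # for the sigmoid, the derivative is y * (1 - y).
        delta_out = (D - y) * y * (1.0 - y)
        delta_hid = (delta_out @ W2.T) * h * (1.0 - h)
        # Step 7: weight update with momentum, Eq. (6).
        dW2 = eta * h.T @ delta_out + alpha * dW2_prev
        dW1 = eta * X.T @ delta_hid + alpha * dW1_prev
        W2 += dW2
        W1 += dW1
        dW1_prev, dW2_prev = dW1, dW2
        # Step 8: repeat from Step 3.
    return W1, W2, E
```

For example, `train_bp(X, D, n_hidden=3)`, with `X` the input patterns and `D` the desired outputs, trains such a network; the learning rate and momentum default to the values of Step 1.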
In the present work, an in-house NN program called Neuromod, written in Fortran [17, 21], was developed to perform the training and the prediction. Neuromod includes a module which allows for the automatic selection of the best network architecture based on the following steps:
• Select an initial configuration (typically, one hidden layer with the number of hidden units set to half the sum of the number of input and output factors).
• Iteratively, conduct a number of calculations with each configuration, retaining
the best network (in terms of verification error) found.
• On each calculation, if under-learning occurs (the network does not achieve an acceptable performance level), try adding more neurons to the hidden layer(s); if over-learning occurs (the verification error deteriorates), try removing hidden neurons. A sketch of such a selection loop is given below.
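A schematic of this selection loop, assuming the `sigmoid` and `train_bp` routines from the sketch above and a held-out verification set, could look as follows; the search range and the number of restarts per configuration are illustrative, and for brevity the sketch simply scans candidate sizes rather than growing the layer only on under-learning:

```python
def select_architecture(X_tr, D_tr, X_ver, D_ver, max_hidden=20, n_restarts=5):
    """Keep the network with the lowest verification error over
    hidden-layer sizes grown from the initial heuristic size."""
    n_in, n_out = X_tr.shape[1], D_tr.shape[1]
    # Initial configuration: half the sum of inputs and outputs.
    start = max(1, (n_in + n_out) // 2)
    best_err, best_net = np.inf, None
    for n_hidden in range(start, max_hidden + 1):
        # Several training runs per configuration; retain the best
        # network found, judged by verification error.
        for _ in range(n_restarts):
            W1, W2, _ = train_bp(X_tr, D_tr, n_hidden)
            y = sigmoid(sigmoid(X_ver @ W1) @ W2)
            err = 0.5 * np.sum((D_ver - y) ** 2)
            if err < best_err:
                best_err, best_net = err, (W1, W2)
    return best_net, best_err
```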