METHODOLOGY

Briefly speaking, in GNMM, GAs are used as a variable selection tool. The variables identified by the GAs are then passed on as inputs to an MLP. Finally, GNMM extracts regression rules from the trained network. Let us assume there are two data sets X = {x_(1,1), …, x_(a,b)} and Y = {y_1, …, y_a}, where X contains the hydraulic/geometric measurements, Y the corresponding longitudinal dispersion coefficients, a is the number of measurements that have been made, and b is the number of available variables.
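To make the data layout concrete, the following short Python/NumPy sketch shows one way the two sets could be held in memory; the array sizes are invented for illustration, and the comments simply restate the three GNMM stages described in this section.

    import numpy as np

    # Illustrative layout for the two data sets defined above (sizes are invented).
    a, b = 50, 10                 # a measurements, b candidate variables
    X = np.zeros((a, b))          # X = {x_(1,1), ..., x_(a,b)}: hydraulic/geometric measurements
    Y = np.zeros(a)               # Y = {y_1, ..., y_a}: longitudinal dispersion coefficients

    # GNMM then proceeds in three stages (sketched further below):
    #   1. GA-based variable selection over the b columns of X;
    #   2. MLP training with the selected columns as inputs and Y as the target;
    #   3. regression-rule extraction from the trained network.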
Figure 2. Three-layer back-propagation neural network

During training, error signals resulting from the difference between the expected and actual outputs are back-propagated from the output layer to the previous layers, which update their weights accordingly.
It has been shown (Cybenko, 1989) that MLPs
can approximate virtually any function with any
desired accuracy, provided that there are enough
hidden neurons in the network and that a sufficient
amount of data is available.
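As a concrete illustration of such a network, here is a minimal Python/NumPy sketch of a three-layer MLP trained by back-propagating the output error to minimise the MSE. The function name, the tanh hidden activation and the learning-rate/epoch defaults are illustrative assumptions rather than details prescribed by GNMM.

    import numpy as np

    def train_mlp(X, y, n_hidden, n_epochs=500, lr=0.01, seed=0):
        """Three-layer MLP (input -> hidden -> output) trained by back-propagating
        the error between expected and actual outputs to minimise the MSE."""
        rng = np.random.default_rng(seed)
        n_samples, n_inputs = X.shape
        W1 = rng.normal(0.0, 0.1, (n_inputs, n_hidden))   # input -> hidden weights
        b1 = np.zeros(n_hidden)
        W2 = rng.normal(0.0, 0.1, (n_hidden, 1))           # hidden -> output weights
        b2 = np.zeros(1)
        y = y.reshape(-1, 1)

        for _ in range(n_epochs):
            # Forward pass
            h = np.tanh(X @ W1 + b1)                       # hidden activations
            out = h @ W2 + b2                              # linear output neuron
            err = out - y                                  # actual minus expected output
            # Backward pass: propagate error signals from the output layer backwards
            grad_out = 2.0 * err / n_samples               # d(MSE)/d(out)
            grad_W2 = h.T @ grad_out
            grad_b2 = grad_out.sum(axis=0)
            grad_h = grad_out @ W2.T * (1.0 - h ** 2)      # tanh derivative
            grad_W1 = X.T @ grad_h
            grad_b1 = grad_h.sum(axis=0)
            # Gradient-descent weight updates
            W1 -= lr * grad_W1; b1 -= lr * grad_b1
            W2 -= lr * grad_W2; b2 -= lr * grad_b2

        h = np.tanh(X @ W1 + b1)
        mse = float(np.mean((h @ W2 + b2 - y) ** 2))
        return (W1, b1, W2, b2), mse

    # Example: fit a small network to noisy data (sizes and noise level are illustrative).
    rng = np.random.default_rng(1)
    X_demo = rng.normal(size=(200, 3))
    y_demo = X_demo[:, 0] - 2.0 * X_demo[:, 2] + rng.normal(scale=0.1, size=200)
    _, mse = train_mlp(X_demo, y_demo, n_hidden=3)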
The implementation of GNMM can be described
in 3 main stages as follows:
Step 1: Variable Selection
Randomly generate an initial population of chromosomes of size N_p (N_p >> 0 and ideally approaching 2^b, the total number of possible chromosomes). A chromosome consists of b genes, each representing one input variable. The encoding of a gene is binary: a particular variable is either selected as an input variable (represented by '1') or not (represented by '0'). The fitness of a chromosome is assessed as the Mean Squared Error (MSE) obtained when a three-layer MLP is trained with the selected input variable subset X_i and the output target Y for a certain number of epochs N_e. The number of neurons in the hidden layer is set equal to the number of input variables.
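The following Python sketch illustrates this fitness evaluation. It uses scikit-learn's MLPRegressor as a stand-in for the three-layer MLP; the synthetic data, population size and epoch count are invented for the example, and the GA operators themselves (selection, crossover, mutation) are not shown.

    import numpy as np
    from sklearn.metrics import mean_squared_error
    from sklearn.neural_network import MLPRegressor

    def random_population(n_p, b, rng):
        """Binary chromosomes: gene j is 1 if variable j is used as an input."""
        return rng.integers(0, 2, size=(n_p, b))

    def fitness(chromosome, X, Y, n_epochs):
        """Fitness of a chromosome: MSE of a three-layer MLP trained on the selected
        variable subset, with hidden-layer size equal to the number of selected inputs."""
        selected = np.flatnonzero(chromosome)
        if selected.size == 0:
            return np.inf                          # an empty subset cannot be trained
        X_i = X[:, selected]
        mlp = MLPRegressor(hidden_layer_sizes=(int(selected.size),),
                           max_iter=n_epochs, random_state=0)
        mlp.fit(X_i, Y)
        return mean_squared_error(Y, mlp.predict(X_i))

    # Example: evaluate a random initial population (N_p and N_e are illustrative).
    rng = np.random.default_rng(0)
    a, b = 100, 8                                  # a measurements, b candidate variables
    X = rng.normal(size=(a, b))
    Y = X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=a)
    population = random_population(n_p=50, b=b, rng=rng)
    scores = [fitness(c, X, Y, n_epochs=200) for c in population]
    best = population[int(np.argmin(scores))]      # lower MSE = fitter chromosome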
As mentioned previously, provided that there
are enough hidden neurons in the network and
that a sufficient amount of data is available, MLPs
can approximate virtually any function with any
desired accuracy. However, in the current stage the