Biomedical Engineering Reference
In-Depth Information
MLP. As in the previous step, the training has to be repeated several times because of its 'random' starting point before satisfactory results are obtained. However, the learning rate α can be set to a relatively large value (α = 0.04) in order to accelerate the training process.
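Because backpropagation starts from random weights, one run may land in a poor local minimum; a common pattern is to restart training several times and keep the best run. A minimal numpy sketch of this restart loop with α = 0.04 (the toy data, network size and restart count are illustrative assumptions, not the chapter's setup):

```python
import numpy as np

def train_mlp(X, y, hidden=5, alpha=0.04, epochs=2000, seed=0):
    """Train a one-hidden-layer tanh MLP by batch backpropagation."""
    rng = np.random.default_rng(seed)          # the 'random' starting point
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)               # hidden layer
        err = (h @ W2 + b2) - y                # prediction error
        # backpropagate gradients
        gW2 = h.T @ err / len(X)
        gb2 = err.mean(axis=0)
        gh = (err @ W2.T) * (1 - h ** 2)       # tanh derivative
        gW1 = X.T @ gh / len(X)
        gb1 = gh.mean(axis=0)
        W2 -= alpha * gW2; b2 -= alpha * gb2   # gradient step, alpha = 0.04
        W1 -= alpha * gW1; b1 -= alpha * gb1
    h = np.tanh(X @ W1 + b1)                   # final forward pass
    mse = float(np.mean((h @ W2 + b2 - y) ** 2))
    return mse, (W1, b1, W2, b2)

# Toy data: y = x1 * x2 on [-1, 1]
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] * X[:, 1]).reshape(-1, 1)

# Repeat training from several random starting points; keep the best run.
best_mse, best_weights = min(
    (train_mlp(X, y, seed=s) for s in range(5)),
    key=lambda r: r[0],
)
```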
• Making the ANN structure less complex and easier to understand.
• Reducing computational complexity and memory requirements.
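These benefits come from GA-based input-variable selection. A hedged sketch of the idea: each chromosome is a bit mask over the candidate inputs, and fitness is the fit error of a model trained on the selected subset plus a small penalty per retained variable. For brevity a linear least-squares model stands in for the MLP that GNMM actually trains; the population size, rates and toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: only variables 0 and 2 matter; 1 and 3 are noise.
X = rng.normal(size=(150, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.normal(size=150)

def fitness(mask):
    """Lower is better: fit error on the selected inputs + size penalty."""
    if not mask.any():
        return np.inf                          # empty subset is invalid
    Xs = X[:, mask.astype(bool)]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ coef
    return np.mean(resid ** 2) + 0.01 * mask.sum()  # penalise extra inputs

pop = rng.integers(0, 2, size=(20, 4))         # random initial population
for _ in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[:10]]     # truncation selection
    children = [parents[0].copy()]             # elitism: keep the best mask
    while len(children) < len(pop):
        a = parents[rng.integers(10)]
        b = parents[rng.integers(10)]
        cut = rng.integers(1, 4)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        flip = rng.random(4) < 0.05                 # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmin([fitness(m) for m in pop])]
# best is expected to converge towards the mask [1, 0, 1, 0]
```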
Rule extraction is an attempt to overcome the 'black box' reputation that comes with neural networks. Such a process not only provides a facility that explains the internal behaviour of an ANN and helps in understanding the underlying physical phenomena, but also makes the training results easily applicable.
• Extracting regression rules from the trained MLP neural network, which makes the training results much more transferable.
• Since the original data have been mapped into a specific range ([−1, 1]) before the MLP is trained, rules extracted from the trained MLP have to reflect this feature (i.e. the rule outputs must be mapped back into their normal ranges).
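Assuming the mapping into [−1, 1] is simple min-max scaling (the chapter does not spell out the formula), the forward and inverse maps can be sketched as:

```python
def to_unit_range(x, lo, hi):
    """Map x from its physical range [lo, hi] into [-1, 1] for training."""
    return 2.0 * (x - lo) / (hi - lo) - 1.0

def from_unit_range(z, lo, hi):
    """Inverse map: take a rule's output in [-1, 1] back to [lo, hi]."""
    return (z + 1.0) / 2.0 * (hi - lo) + lo

# Example: a rule fires and yields 0.25 in normalised space; the physical
# range of this output (illustrative numbers) is [5.0, 45.0].
value = from_unit_range(0.25, 5.0, 45.0)
print(value)   # → 30.0
```

The inverse map is exactly what the second bullet above requires: every value produced by an extracted rule is post-processed with `from_unit_range` before it is reported in physical units.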
FUTURE RESEARCH DIRECTIONS
During the GNMM training process, in both the input variable selection stage and the MLP training stage, the back-propagation learning algorithm is used throughout. However, this algorithm is not immune from problems. For example, the calculations are extensive and, as a result, training is slow. One of the most effective means to
Compared with conventional methods that provide longitudinal dispersion prediction (e.g. equation (3) and equation (4)), GNMM, as a data-driven approach, needs no a priori knowledge. Although a priori knowledge is widely used in many ANN applications, it depends on expert knowledge and is hence very subjective and case dependent. This is particularly true for complex problems, where the underlying physical mechanism is not fully understood. Furthermore, GNMM is adaptive: when new data samples are presented to the system, it is capable of self-learning, adjusting its results and improving prediction accuracy. Another outstanding advantage of GNMM over conventional methods is that, owing to its ANN core, it can approximate virtually any function to any desired accuracy without making assumptions about stream geometry or flow dynamics.
GNMM is distinct from other solely ANN-based methods in that it also incorporates variable selection and rule extraction. The GA-based variable selection stage is capable of:
• Filtering out irrelevant and noisy variables, improving the accuracy of the model.

Table 4. Rules fired for the training and test data

Rule   g1   g2   g3   S1t   S1v
1      1    1    2    1     2
2      1    1    3    24    6
3      1    2    3    3     1
4      1    2    4    6     6
5      1    3    3    2     –
6      2    1    3    3     –
7      2    1    4    2     1
8      2    2    3    5     –
9      2    2    4    1     1
10     2    3    3    4     –
11     4    3    1    1     –
12     4    4    1    1     –
13     4    4    2    1     –