Figure 5. Predicted versus measured longitudinal dispersion coefficients (m²/s, log-log axes) with the y = x line. Training data: RMSE = 34.85, R² = 0.96; testing data: RMSE = 80.39, R² = 0.89.
Table 3. Comparison of test data results using four different methods

Model (Reference)                       R²      RMSE (m²/s)
GNMM                                    0.89    80.4
MLP (Tayfur & Singh, 2005)              0.70    193.0
Equation (4) (Deng et al., 2001)        0.55    610.0
Equation (3) (Seo & Cheong, 1998)       0.50    812.0
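The RMSE and R² values compared above can be reproduced from paired predicted and measured dispersion coefficients. A minimal sketch of the two metrics (function names are illustrative, not from the chapter):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between measured and predicted values."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

Note that RMSE carries the units of the target (here m²/s), so it grows with the magnitude of the dispersion coefficients, while R² is dimensionless.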
for the prediction of contaminant spread and concentrations immediately following an accidental spill.
Utilizing GAs to optimize the input variables simplifies the MLP structure in the GNMM and makes the training process more efficient. In this stage, the population size (N_p) has to be sufficiently large (N_p ≥ 2^b) to cover most possible input variable combinations, while the learning rate α has to be kept small (α = 0.01) to avoid network oscillation. Furthermore, since the weights and thresholds for each neuron are randomly generated, the GA has to be run several times until a clear distinction emerges between input variables.
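The input-selection stage described above can be sketched as a GA over binary masks of the candidate variables. This is a hedged illustration only: the fitness function here is a caller-supplied placeholder, whereas the GNMM scores each subset with a trained MLP, and the operator details (elitism, one-point crossover, bit-flip mutation) are common GA defaults assumed for the sketch:

```python
import random

def ga_select_inputs(fitness, b, pop_size=None, generations=30,
                     crossover_rate=0.8, mutation_rate=0.05, seed=0):
    """Minimal GA over binary masks of b candidate input variables.

    `fitness(mask)` scores a variable subset (higher is better).
    Population size defaults to 2**b, mirroring the N_p >= 2^b guideline.
    """
    rng = random.Random(seed)
    if pop_size is None:
        pop_size = 2 ** b
    pop = [[rng.randint(0, 1) for _ in range(b)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:2]                       # elitism: keep best two
        while len(next_pop) < pop_size:
            # select parents from the fitter half of the population
            p1, p2 = rng.sample(scored[:max(2, pop_size // 2)], 2)
            if rng.random() < crossover_rate:       # one-point crossover
                cut = rng.randrange(1, b) if b > 1 else 0
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # bit-flip mutation on each gene
            child = [g ^ (rng.random() < mutation_rate) for g in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```

Because the fitness landscape depends on randomly initialized network weights, repeated runs (as the text advises) are needed before trusting which variables the best masks consistently include.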
CONCLUSION

In this chapter, we have presented a new methodology, GNMM, for the prediction of the longitudinal dispersion coefficient. Through a benchmarking case study, the effectiveness of GNMM has been demonstrated by comparing the results generated by GNMM with those reported in the literature. To conclude, the GNMM procedure can be summarized as follows:

Using the input variables found by the GA with the associated targets to develop an