the chosen embedding dimension and the chosen delay time result in a total
of 90 fitness cases. This dataset will be used for training in all the experiments of this section. As the basis for the fitness function, we are going to use the mean squared error evaluated by equation (3.4a), with the fitness itself evaluated by equation (3.5) and, therefore, f_max = 1000.
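Equations (3.4a) and (3.5) are not reproduced in this excerpt, so the sketch below assumes the standard GEP fitness form based on the mean squared error, f = f_max / (1 + E); the text itself only states that f_max = 1000.

```python
def mean_squared_error(predictions, targets):
    # Average of the squared residuals over all fitness cases
    # (cf. equation (3.4a) in the text).
    n = len(targets)
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / n

def fitness(predictions, targets, f_max=1000):
    # Assumed form of equation (3.5): fitness approaches f_max as the
    # mean squared error approaches zero. A perfect solution over all
    # 90 fitness cases would therefore score exactly f_max = 1000.
    return f_max / (1 + mean_squared_error(predictions, targets))
```

With this form, fitness is bounded by f_max and degrades smoothly with increasing error, which is convenient for roulette-wheel and similar selection schemes.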
7.3.1 Original and Enhanced STROGANOFF
For the original STROGANOFF, the function set obviously consists of just one kind of function, namely, function F9 of Table 7.1, but this function will be weighted 16 times in order to facilitate the comparison with the enhanced implementation. For the enhanced STROGANOFF, all 16 functions of Table 7.1 will be used in the function set, thus giving F = {F1, ..., F16}. The set of terminals is T = {a, b, c, d, e, f, g, h, i, j}, whose elements correspond, respectively, to t-10, t-9, ..., t-1. In all the experiments, a total of 120 random numerical constants, ranging over the rational interval [-1, 1], will be used per chromosome (in the multigenic system, 40 RNCs will be used per gene, also giving 120 RNCs per chromosome). The performance and the parameters used in all three experiments are shown in Table 7.4.
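The experimental setup above can be sketched as follows; the terminal names, lag mapping, and RNC counts come from the text, while the data-structure choices are assumptions made here for illustration.

```python
import random

# Terminals a..j correspond, respectively, to the time lags t-10 .. t-1.
TERMINALS = "abcdefghij"
LAGS = {t: -(10 - i) for i, t in enumerate(TERMINALS)}

def random_constants(n_genes=3, rncs_per_gene=40):
    # In the three-genic system, 40 RNCs are attached to each gene,
    # giving 120 RNCs per chromosome; each constant is drawn from the
    # rational interval [-1, 1].
    return [[random.uniform(-1.0, 1.0) for _ in range(rncs_per_gene)]
            for _ in range(n_genes)]
```

For the unigenic runs the same total would be kept by calling `random_constants(n_genes=1, rncs_per_gene=120)`.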
As expected, the enhanced implementation is considerably better than the original STROGANOFF. Note, however, that a three-genic system was used in the GEP-ESM experiment and, therefore, the system we are simulating is not a straightforward port of the enhanced STROGANOFF as described by Nikolaev and Iba (2001) but rather a much more efficient algorithm, as it benefits from the multigenic nature of gene expression programming. Indeed, the multigenic system works considerably better than the unigenic one (average best-of-run R-square of 0.8472991731 for the unigenic implementation and 0.8566823219 for the multigenic one). It is worth emphasizing that the implementation of multiple parse trees is infeasible in genetic programming, and so is a system similar to the one used in the GEP-ESM experiment. Furthermore, the facility for the manipulation of random numerical constants in GP is not appropriate for handling the huge number of random numerical constants that are necessary for polynomial induction. In fact, genetic programming usually uses a neural network to discover a posteriori the coefficients of the Kolmogorov-Gabor polynomials. This obviously raises the question of what exactly GP is doing in this case, for, without the coefficients, the evolution of just the polynomial skeleton is not particularly useful (see Table 7.5 below).
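For reference, the R-square values quoted above can be computed as below. This sketch assumes the squared Pearson correlation between predicted and target values; whether the experiments used this or the coefficient of determination is not stated in this excerpt.

```python
def r_square(predictions, targets):
    # Squared Pearson correlation coefficient between the model's
    # predictions and the observed time-series values (an assumed
    # definition of the R-square statistic quoted in the text).
    n = len(targets)
    mp = sum(predictions) / n
    mt = sum(targets) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(predictions, targets))
    var_p = sum((p - mp) ** 2 for p in predictions)
    var_t = sum((t - mt) ** 2 for t in targets)
    return cov * cov / (var_p * var_t)
```

Under this definition, any exact linear rescaling of the targets scores 1.0, so R-square rewards the shape of the fit rather than its absolute scale.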