Table 6.5. Experimental results of Xover opt on 4 continuous test problems. Xover opt
improves the results on the sphere function and the monotone problem sharp ridge.

        sphere      rastrigin   roseNoise   sharp ridge   2.40
best    5.97E-201   0.99        0.68        -7.52E+256    -4964.04
worst   1.49E-190   5.96        168.90      -7.66E+240    -4483.30
mean    5.98E-192   3.14        18.01       -3.03E+255    -4758.69
dev     0           1.73        38.14       -             138.09
Table 6.6. Analysis of the optimization quality for several settings of the recombination
parameter k on the function sphere. The approximation accuracy improves with increasing k,
but not linearly.

k       2           3           5           10          15          20          30
best    1.31E-168   4.91E-177   1.45E-187   5.97E-201   7.88E-206   1.14E-207   5.87E-216
worst   1.11E-160   8.15E-171   2.05E-181   1.49E-190   3.25E-199   4.21E-201   2.38E-206
mean    5.05E-162   5.83E-172   1.06E-182   5.98E-192   1.51E-200   2.10E-202   1.45E-207
too weak. But a weak influence on the fitness is a strong argument that self-
adaptation of this parameter must fail. What is the reason for this weakness?
Obviously, for the considered multimodal fitness landscapes, generating points
within the convex hull of the population is not advantageous. The results on the
sphere function (see also Table 6.6) and on sharp ridge are much better. Xover opt
is helpful on ridge functions because the points can concentrate on one edge of
the hull due to the monotonicity of the function. On the sphere function
monotonicity also helps, because every step towards better fitness is a step
towards the optimum. This condition of locality does not hold for multimodal
functions.
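To make the convex-hull argument concrete, the following minimal sketch (in Python, and
not the exact Xover opt definition used in the experiments) generates an offspring as a
convex combination of k parents; the function name recombine_convex and the weight
normalization are assumptions made only for illustration.

```python
import numpy as np

def recombine_convex(parents, nu):
    """Sketch of a k-parent recombination inside the convex hull.

    parents : (k, n) array of parent object vectors
    nu      : (k,) array of combination factors (e.g. self-adapted);
              normalized to convex weights so the offspring stays in the hull.
    """
    w = np.abs(nu)
    w = w / w.sum()          # w_i >= 0 and sum(w_i) = 1
    return w @ parents       # offspring = sum_i w_i * parent_i

# toy usage with k = 3 parents of a 5-dimensional problem
rng = np.random.default_rng(0)
parents = rng.normal(size=(3, 5))
nu = rng.uniform(size=3)
offspring = recombine_convex(parents, nu)
```

On a monotone landscape the parents, and with them the hull, can drift towards the better
region, whereas on a multimodal landscape the hull may span several basins, so its interior
points are not necessarily good.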
In order to test whether the fitness gain is too small for successful self-
adaptation, we test several settings of the parameter k on the sphere function;
see Table 6.6 for the results. An improvement is achieved with every increase
of k. However, the fitness gain does not scale log-linearly with k, but grows
more slowly.
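One way to read this observation directly off Table 6.6: if the gain were log-linear in k,
the order of magnitude of the mean fitness would drop by a roughly constant amount per unit
increase of k. The following small check uses the mean values from the table; the reading of
"log-linear" as "constant gain in orders of magnitude per unit k" is an assumption.

```python
import numpy as np

# recombination settings k and mean fitness values from Table 6.6
k = np.array([2, 3, 5, 10, 15, 20, 30])
mean_fit = np.array([5.05e-162, 5.83e-172, 1.06e-182,
                     5.98e-192, 1.51e-200, 2.10e-202, 1.45e-207])

exponent = np.log10(mean_fit)                 # order of magnitude of the accuracy
gain_per_k = np.diff(exponent) / np.diff(k)   # gained orders of magnitude per unit k
print(np.round(gain_per_k, 1))
# approximately [-9.9 -5.4 -1.8 -1.7 -0.4 -0.5]: the gain per additional
# increase of k clearly shrinks as k grows
```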
But why is the self-adaptation of ν not successful on the sphere function? If
the points are randomly distributed around the optimum, an optimal linear com-
bination of two points exists, but its factors depend on the current situation
rather than on the problem structure. Hence, the same factors for the linear
combination need not be optimal in the next generation, because the situation
may change significantly. This is why the success of inheriting the factors ν
cannot be guaranteed and may even be doubtful.
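This argument can be illustrated numerically. For two parents x1, x2 of the sphere
function, the factor ν minimizing f(x1 + ν(x2 − x1)) has a closed form, and it varies from
one random pair of points to the next; the parameterization x1 + ν(x2 − x1) and the
distribution of the points are assumptions made only for this sketch.

```python
import numpy as np

def optimal_nu(x1, x2):
    """Factor nu that minimizes ||x1 + nu*(x2 - x1)||^2 (sphere function)."""
    d = x2 - x1
    return -(x1 @ d) / (d @ d)

rng = np.random.default_rng(1)
n = 10  # problem dimension, assumed for the sketch

# optimal factors for independent pairs of points scattered around the optimum
nus = [optimal_nu(rng.normal(size=n), rng.normal(size=n)) for _ in range(1000)]
print(np.mean(nus), np.std(nus))  # mean close to 0.5, but with a clear spread,
                                  # so a factor inherited from one pair is not
                                  # reliably optimal for the next one
```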
6.6 Summary
(...) Although these results provided valuable insight and have informed many
practical implementations, it is worth bearing in mind that they are only
 