self-adaptation GA on multimodal functions. Another reason for premature convergence was revealed by Liang et al. [85]. They point out that a solution with a high fitness but a far too small step size in one dimension can cause stagnation, because this mutation strength is passed on to all of its descendants. As the mutation strength changes only with a successful mutation, according to the principle of self-adaptation, Liang et al. [85] considered the probability that after k successful mutations the step size is smaller than an arbitrarily small positive number ε. This results in a loss of step size control in their (1+1)-EP. Meyer-Nieberg and
Beyer [91] point out that the reason for the premature convergence of the EP could be that the operators do not fulfill the postulated requirements for mutation operators, see section 4.1. Hansen [53] examined the conditions under which self-adaptation fails, in particular the inability of the step sizes to increase. He tries to answer the question whether a failure of the step size to increase is caused by a bias of the genetic operators or by the link between objective and strategy parameters. Furthermore, he identifies two desirable properties of an EA: first, the descendants' object parameter vectors should be point-symmetrically distributed after recombination and mutation; second, the distribution of the strategy parameters given the object vectors after recombination and mutation should be identical for all symmetry pairs around the point-symmetric center.
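The stagnation mechanism described by Liang et al. [85] can be made visible with a small toy experiment. The following sketch is illustrative only, not the algorithm or the analysis from [85]: a (1+1)-EA with log-normal self-adaptation of a single step size minimizes the sphere function, once with a reasonable initial step size and once with a far too small one. The parameter tau and all settings are hypothetical choices for the demonstration.

```python
import math
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def one_plus_one_ea(x, sigma, generations, tau=0.3):
    """(1+1)-EA with log-normal self-adaptation of a single step size.

    Returns the trajectory of (fitness, sigma). A parent carrying a far
    too small sigma passes it on to every accepted descendant, so for
    tiny sigma the search stagnates: sigma only random-walks slowly and
    progress toward the optimum is negligible.
    """
    fx = sphere(x)
    history = []
    for _ in range(generations):
        # mutate the strategy parameter first, then the object variables
        sigma_child = sigma * math.exp(tau * random.gauss(0.0, 1.0))
        child = [xi + sigma_child * random.gauss(0.0, 1.0) for xi in x]
        f_child = sphere(child)
        if f_child <= fx:  # plus selection: child replaces parent
            x, fx, sigma = child, f_child, sigma_child
        history.append((fx, sigma))
    return history

random.seed(1)
# healthy start vs. a parent whose step size is far too small
healthy = one_plus_one_ea([5.0] * 10, sigma=1.0, generations=500)
stalled = one_plus_one_ea([5.0] * 10, sigma=1e-12, generations=500)
print("healthy final fitness:", healthy[-1][0])
print("stalled final fitness:", stalled[-1][0])
```

In the stalled run the step size stays many orders of magnitude too small for the whole run, so the fitness barely moves from its initial value, which is exactly the loss of step size control discussed above.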
In chapter 7 we prove the premature step size reduction of a (1+1)-EA in the vicinity of the constraint boundary, which often leads to premature convergence. One way to overcome premature convergence is the introduction of a lower bound on the mutation strength; the DSES of chapter 7 makes use of such a bound. The disadvantage of this approach is that the mutation strength also has to decrease during convergence to the optimum, whereas the lower bound keeps it high; too high and disturbing mutation strengths then prevent the EA from converging to the optimum. In the DSES this shortcoming is handled by decreasing the lower bound adaptively with heuristic rules, i.e. based on the number of infeasible mutations at the constraint boundary.
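The lower-bound idea can be sketched as follows. This is a minimal illustration of the principle, not the DSES itself: a (1+1)-ES minimizes a sphere whose optimum lies on the constraint boundary x[0] >= 1, infeasible mutants are rejected, and the bound eps is reduced after a fixed number of infeasible mutations. The parameters eps, theta and alpha are hypothetical settings for the demonstration, not the original DSES rules.

```python
import math
import random

def fitness(x):
    return sum(xi * xi for xi in x)

def feasible(x):
    return x[0] >= 1.0  # the optimum of the sphere lies on this boundary

def es_with_lower_bound(x, sigma, generations,
                        eps=1e-1, theta=20, alpha=0.5, tau=0.3):
    """(1+1)-ES sketch with a lower bound eps on the mutation strength.

    Infeasible mutants are rejected (death penalty). After theta
    infeasible mutations the bound eps is multiplied by alpha, so the
    bound shrinks only while the search presses against the boundary.
    """
    fx, infeasible = fitness(x), 0
    for _ in range(generations):
        # self-adapted step size, clipped at the current lower bound
        s = max(sigma * math.exp(tau * random.gauss(0.0, 1.0)), eps)
        child = [xi + s * random.gauss(0.0, 1.0) for xi in x]
        if not feasible(child):
            infeasible += 1
            if infeasible >= theta:  # heuristic rule: relax the bound
                eps, infeasible = alpha * eps, 0
            continue
        fc = fitness(child)
        if fc <= fx:
            x, fx, sigma = child, fc, s
    return x, fx, eps

random.seed(2)
x, fx, eps = es_with_lower_bound([5.0] * 5, sigma=1.0, generations=3000)
print("final fitness:", fx, "final bound:", eps)
```

Because the bound only shrinks when many mutations land outside the feasible region, it stays large while the search is far from the boundary and decreases once the population converges toward the constrained optimum.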
3.9 Summary
1. EAs exhibit various parameters which have to be tuned before a run or controlled during it. Parameter control can be classified into deterministic, adaptive, self-adaptive and metaevolutionary approaches. In deterministic approaches the parameters change over the generations according to a fixed scheme; heuristic rules are the basis for adaptive parameter control.
2. Self-adaptation is an implicit parameter adaptation technique enabling the
evolutionary search to tune the strategy parameters automatically. It is com-
monly used in ES and EP, mainly for the control of mutation parameters
like the variance of the mutation distribution. GAs in their classical form seldom use self-adaptation. In the past, self-adaptive crossover only focused on the adaptation of the crossover probability. Meyer-Nieberg and Beyer [91] point out that crossover in standard binary GAs exhibits a form of self-adaptation.
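The mutation control summarized in item 2 can be illustrated with the standard ES operator: each individual carries its own step size as a strategy parameter, the step size is mutated first (log-normally), and the new step size is then used to perturb the object variables. This is a generic textbook sketch; the default learning rate tau = 1/sqrt(n) is a common choice, not a value prescribed by this chapter.

```python
import math
import random

def self_adaptive_mutation(x, sigma, tau=None):
    """Standard ES mutation with one self-adapted step size per individual.

    The strategy parameter sigma is mutated log-normally before it is
    applied, so selection acts on (object variables, step size) jointly
    and tunes sigma implicitly.
    """
    n = len(x)
    if tau is None:
        tau = 1.0 / math.sqrt(n)  # common default learning rate
    sigma_new = sigma * math.exp(tau * random.gauss(0.0, 1.0))
    x_new = [xi + sigma_new * random.gauss(0.0, 1.0) for xi in x]
    return x_new, sigma_new

random.seed(0)
child, child_sigma = self_adaptive_mutation([0.0] * 5, sigma=0.5)
print("mutated individual:", child)
print("inherited step size:", child_sigma)
```

Individuals whose step sizes produce good offspring survive selection together with those step sizes, which is the implicit tuning mechanism that item 2 describes.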