Estimation of distribution view on self-adaptation. Self-adaptation is
an implicit parameter adaptation technique enabling the evolutionary search
to tune the strategy parameters automatically by evolution. From the point
of view of the EDA approach, we define self-adaptation as the evolution of
optimal mixture distribution properties. This is particularly evident for the
mutation strength control of ES: an ES maintains a population of Gaussian
distributions that estimates the optimum's location, and the current population
represents the present, and therefore potentially best, solutions. Self-adaptation of step
sizes lets an ES converge to the optimum of many continuous optimization
problems with arbitrary accuracy. In this work we introduced self-adaptive
strategies that influence other distribution parameters.
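The step-size self-adaptation described above can be sketched as a (μ,λ)-ES with log-normal mutation of one step size per individual. The following is a minimal illustration, not the exact algorithm from this work; the sphere test function, the learning rate τ, and all parameter settings are illustrative choices:

```python
import numpy as np

def sphere(x):
    """Sphere test function: minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def self_adaptive_es(f, dim=10, mu=15, lam=100, generations=200, seed=0):
    """(mu, lam)-ES with log-normal self-adaptation of one step size
    per individual; the step size evolves alongside the solution."""
    rng = np.random.default_rng(seed)
    tau = 1.0 / np.sqrt(dim)                # learning rate for step sizes
    pop = [(rng.normal(0.0, 1.0, dim), 1.0) for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            x, sigma = pop[rng.integers(mu)]              # random parent
            sigma_new = sigma * np.exp(tau * rng.normal())  # mutate sigma first
            x_new = x + sigma_new * rng.normal(0.0, 1.0, dim)  # then the solution
            offspring.append((x_new, sigma_new))
        offspring.sort(key=lambda ind: f(ind[0]))         # comma selection
        pop = offspring[:mu]
    return pop[0]

best_x, best_sigma = self_adaptive_es(sphere)
print(sphere(best_x))  # shrinks toward 0 over the generations
```

Because each offspring first mutates its own step size and is then sampled with it, selection implicitly favors step sizes that produce good offspring, which is exactly the implicit parameter adaptation the text refers to.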
In EC, various genetic operators exist. ES offer a variety of mutation operators,
from isotropic Gaussian mutation to the derandomized CMA. Most self-adaptive
operators concern the step size control of ES and EP, i.e. the variance of the
mutation distribution.
Introduction of new biased mutation variants. In contrast to the principle
of unbiased mutations, the BMO was introduced as a self-adaptive
operator that biases mutations into beneficial directions. The operator makes
use of a self-adaptive bias coefficient vector ξ , which determines the direction
of the bias; the absolute bias is computed by multiplication with the step
sizes. A number of variants like the sBMO and cBMO have been introduced.
Another variant is the DMO, whose bias is not controlled self-adaptively, but
with the help of the descent direction between two successive populations'
centers. It is based on the assumption of locality, i.e. that the direction to
the global optimum can be estimated from this local information. This intuition
was confirmed by theoretical considerations: if the bias points into the descent
direction, biased mutation is superior to unbiased mutation on monotone
functions. Experimental analyses and statistical tests on typical test problems
showed the superiority of biased mutation on multimodal functions like
Rosenbrock and Rosenbrock with noise, and on ridge functions
like the parabolic ridge and the sharp ridge.
The mutation of the bias coefficient ξ with settings
around γ ≈ 0.1 revealed the best experimental results. The
same holds for population ratios around (15, 100).
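A single BMO-style mutation step, following the description given here, could look as below. This is a hedged reconstruction, not the exact operator from the original work: the log-normal step-size rule, the additive mutation of ξ with meta step γ, and the values of τ and γ are assumptions for illustration.

```python
import numpy as np

def bmo_mutate(x, sigma, xi, rng, tau=0.3, gamma=0.1):
    """One BMO-style mutation step: log-normal mutation of the step
    sizes, additive mutation of the bias coefficient vector xi with
    meta step gamma, and a Gaussian mutation whose mean is shifted
    by the absolute bias sigma * xi."""
    sigma_new = sigma * np.exp(tau * rng.normal(size=x.shape))  # step sizes
    xi_new = xi + gamma * rng.normal(size=x.shape)              # bias coefficients
    bias = sigma_new * xi_new                                   # absolute bias
    x_new = x + sigma_new * rng.normal(size=x.shape) + bias     # biased mutation
    return x_new, sigma_new, xi_new

rng = np.random.default_rng(1)
x = np.zeros(5)
x2, sigma2, xi2 = bmo_mutate(x, np.ones(5), np.zeros(5), rng)
print(x2)
```

Note how ξ only determines a direction: the effective shift of the mutation mean is σ · ξ, so the bias automatically scales with the self-adapted step sizes.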
Application of biased mutation to constrained search spaces. Our
experimental analysis revealed that biased mutation is appropriate for search-
ing at the boundary of constrained search spaces. The experiments on con-
strained functions like Schwefel's problem 2.40 or g04 from the g-library of
constrained search spaces showed that the BMO in combination with the death
penalty exhibits better convergence properties than standard Gaussian ES
mutation, which suffers from premature stagnation at the constraint
boundary. Statistical tests confirmed the significance of these results.
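The death penalty mentioned above is the simplest constraint-handling scheme: infeasible offspring are discarded and regenerated. A minimal sketch, using a toy unit-ball constraint as a stand-in (not Schwefel's problem 2.40 or g04 from the text):

```python
import numpy as np

def death_penalty_sample(mutate, feasible, parent, rng, max_tries=100):
    """Death penalty: offspring violating any constraint are
    discarded and resampled until a feasible candidate appears."""
    for _ in range(max_tries):
        child = mutate(parent, rng)
        if feasible(child):
            return child
    return parent  # fall back to the (feasible) parent

# Toy constraint: stay inside the unit ball.
feasible = lambda x: np.linalg.norm(x) <= 1.0
mutate = lambda x, rng: x + 0.1 * rng.normal(size=x.shape)

rng = np.random.default_rng(0)
child = death_penalty_sample(mutate, feasible, np.zeros(3), rng)
print(feasible(child))  # True
```

The scheme's weakness is visible in this sketch: near the boundary most unbiased samples are rejected, which is precisely the stagnation that a bias pointing along the boundary helps to avoid.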
GAs in their classical form seldom use self-adaptation. On the one hand this
is a result of the historical development; on the other hand classical GAs