annealing. The second step concerns the properties of the mixture density functions. An appropriate variation operator should be able to adapt the properties of the probability distribution. In the case of EDAs, explicit rules are introduced, e.g. for variance scaling. Self-adaptation, in contrast, evolves the density properties by evolutionary search. This leads to the following definition.
Definition 3.2 (Self-Adaptation of EA Parameters, EDA-view)
Self-adaptation is the evolutionary control of properties P ∈ Σ of a mixture density function M estimating the problem's optimum.
Basis of the optimization of the distribution features is an evolutionary search bound to the search on the objective optimization level. From this point of view, an ES is an estimation of a mixture density function M, i.e. μ Gaussian density functions estimate the optimum's location. From this mixture density function M, λ samples are generated. Hence, the variance scaling itself is an evolutionary process. Its success depends on the variation operators in the strategy variable search space Σ, i.e. log-normal or meta-EP mutation. Strategy variables, and in most cases also heuristic extensions of EAs (e.g. the DSES), estimate the distribution M of the optimum's location. The Gaussian step sizes of ES are eminently successful because their parameters directly control a meaningful distribution parameter, the standard deviation of the Gaussian distribution.
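The evolutionary variance scaling described above can be sketched as a (μ, λ)-ES with log-normal step-size mutation: each parent is one Gaussian component of the mixture, and λ offspring are sampled after the step size (the evolved "variance") has itself been mutated. The function names, the learning rate `tau`, and the sphere test function are illustrative assumptions, not taken from this work.

```python
import numpy as np

def log_normal_mutation(sigma, tau):
    """Log-normal mutation of a step size (strategy variable)."""
    return sigma * np.exp(tau * np.random.randn())

def es_step(population, fitness, lam, tau):
    """One (mu, lambda)-ES generation with self-adaptive step sizes.

    population: list of (x, sigma) pairs -- each parent is the center of
    one Gaussian in the mixture M; sigma is its evolved scale parameter.
    """
    mu = len(population)
    offspring = []
    for _ in range(lam):
        x, sigma = population[np.random.randint(mu)]     # pick a mixture component
        sigma_new = log_normal_mutation(sigma, tau)      # evolve the variance first
        x_new = x + sigma_new * np.random.randn(len(x))  # then sample from N(x, sigma_new^2 I)
        offspring.append((x_new, sigma_new))
    offspring.sort(key=lambda ind: fitness(ind[0]))      # comma selection, minimization
    return offspring[:mu]
```

Selecting on fitness couples the strategy variables to the objective-level search: step sizes that produce good samples survive, so the mixture's variances adapt without any explicit scaling rule.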
3.7.4 Views on the Proposed Operators and Problems of This Work
We pointed out that from the point of view of EDAs, an ES is an EDA with a Gaussian-based mixture distribution M with evolutionary variance scaling. The EDA-view on self-adaptation can be found in most of the concepts of this work:
BMO: The BMO is a biased mutation operator for ES, see section 4.2. Its biased mutation vector ξ shifts the center of the Gaussian mutation into beneficial directions within the degree of the step sizes σ. From the EDA point of view, the BMO provides a mixture distribution M consisting of μ Gaussian distributions with an additional degree of freedom, i.e. the shift ξ of their centers.
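A minimal sketch of such a biased mutation, assuming the bias vector ξ is evolved additively and clipped to [−1, 1] per component (the mutation rate `gamma` and the clipping bounds are assumptions for illustration, not the operator's definitive parameters):

```python
import numpy as np

def bmo_mutate(x, sigma, xi, gamma=0.1):
    """Biased mutation sketch: the evolved bias vector xi shifts the
    center of the Gaussian mutation within the scale of sigma."""
    n = len(x)
    xi_new = np.clip(xi + gamma * np.random.randn(n), -1.0, 1.0)  # evolve the bias
    x_new = x + sigma * (xi_new + np.random.randn(n))             # sample from the shifted Gaussian
    return x_new, xi_new
```

With ξ = 0 this reduces to standard Gaussian mutation; a nonzero ξ displaces the component's center by up to σ per dimension, which is exactly the extra degree of freedom of the mixture described above.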
DMO: Similar to the BMO, the DMO, see section 4.3, makes use of a mixture distribution M consisting of μ Gaussian distributions. But the shift of the Gaussian distribution results from an adaptive rule, i.e. the normalized vector

    ξ = (χ_{t+1} − χ_t) / |χ_{t+1} − χ_t|    (3.7)

of two successive populations' centers χ_t and χ_{t+1}. This global bias shifts the single parts of the mixture distribution M depending on the recent past.
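The adaptive rule of equation 3.7 can be computed directly from two successive population centers; the function names below are illustrative:

```python
import numpy as np

def population_center(X):
    """Center chi of a population given as the rows of X."""
    return np.mean(X, axis=0)

def global_bias(chi_t, chi_next):
    """Normalized shift of two successive population centers, cf. Eq. (3.7):
    xi = (chi_{t+1} - chi_t) / |chi_{t+1} - chi_t|."""
    d = chi_next - chi_t
    n = np.linalg.norm(d)
    return d / n if n > 0 else np.zeros_like(d)  # undefined direction if centers coincide
```

Because ξ is normalized, it carries only the direction of the population's recent movement; the step sizes σ still determine how far the mixture components are shifted.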
SA-Crossover: Self-adaptive crossover is an attempt to exploit structural information of parts (blocks) of the solution during the optimization process. From the EDA point of view, self-adaptive recombination for ES controls the distances of particular distributions M_i with 1 ≤ i ≤ μ of the mixture distribution M.
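One hypothetical way to realize this is intermediate recombination with a self-adaptively evolved weight w ∈ [0, 1] that determines where the offspring lies between the parents, and thus the distances between mixture components; the weight's log-normal mutation and the parameter `tau` are assumptions for illustration, not the operator defined in this work.

```python
import numpy as np

def sa_crossover(p1, w1, p2, w2, tau=0.2):
    """Illustrative self-adaptive intermediate recombination: an evolved
    weight w in [0, 1] controls the offspring's position between the parents."""
    w = 0.5 * (w1 + w2)                                                # recombine the strategy parameter
    w = float(np.clip(w * np.exp(tau * np.random.randn()), 0.0, 1.0))  # log-normal mutation, kept in [0, 1]
    child = w * p1 + (1.0 - w) * p2                                    # intermediate recombination
    return child, w
```

Since w is inherited and selected along with the objective variables, recombination distances that produce fit offspring persist, mirroring the self-adaptation principle of Definition 3.2.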