of the SA-(1, λ)-ES, e.g., it confirms that the choice for the learning parameter τ = c/√N (see Section 4.1.2) is reasonable. Beyer and Meyer-Nieberg [17]
present results for σ-self-adaptation on the sharp ridge for a (1, λ)-ES without
recombination. They use the evolution equation approximation and show that
the radial and the axial progress rates as well as the self-adaptation response
depend on the distance to the ridge axis. A further analysis reveals that the
ES reaches a stationary normalized mutation strength. According to their anal-
ysis the function parameter d (see equation 4.50) determines whether the ES
concentrates on decreasing the distance to the ridge axis or on increasing the speed of travel along the ridge axis. If d is smaller than a population-size-dependent limit, the stationary step size leads to an increase of the distance to the ridge.
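The ridge scenario can be sketched with a minimal (1, λ)-ES. The concrete sharp-ridge form f(x) = x₁ − d·‖(x₂, …, x_N)‖, the choice c = 1 in τ = c/√N, and all run parameters below are illustrative assumptions, not the exact setting analyzed in [17]:

```python
import numpy as np

def sharp_ridge(x, d=1.0):
    # assumed sharp-ridge form: f(x) = x_1 - d * (distance to the ridge axis),
    # to be maximised; x_1 is the axial direction
    return x[0] - d * np.linalg.norm(x[1:])

def sa_es_generation(x, sigma, lam, tau, d=1.0, rng=None):
    """One generation of a (1, lambda)-ES with log-normal sigma-self-adaptation."""
    rng = np.random.default_rng() if rng is None else rng
    best_f, best_x, best_s = -np.inf, x, sigma
    for _ in range(lam):
        s = sigma * np.exp(tau * rng.standard_normal())  # mutate the step size first
        y = x + s * rng.standard_normal(x.shape)         # then the object variables
        f = sharp_ridge(y, d)
        if f > best_f:                                   # comma selection: best offspring only
            best_f, best_x, best_s = f, y, s
    return best_x, best_s

N, lam = 10, 10
tau = 1.0 / np.sqrt(N)        # learning parameter tau = c / sqrt(N), here with c = 1
rng = np.random.default_rng(1)
x, sigma = np.zeros(N), 1.0
for _ in range(200):
    x, sigma = sa_es_generation(x, sigma, lam, tau, rng=rng)
```

Tracking sigma and the distance ‖(x₂, …, x_N)‖ over the generations of such a run is one way to observe the stationary behavior described above.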
Convergence Theory with Markov Chains and Supermartingales
Auger [3] investigated the (1, λ)-SA-ES on the one-dimensional sphere function. She proved sufficient conditions on the algorithm's parameters and showed that (1/t) ln(‖X_t‖) converges, where X_t denotes the parent at generation t. Her proof technique is based on Markov chains for continuous search spaces and the Foster-Lyapunov
drift conditions which enable stability property proofs of Markov chains. The
main result is the log-linear type of the convergence and an estimation of the
convergence rate. Even though the proof is very sophisticated, it does not show
how the number of steps scales depending on the search space dimension.
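The log-linear convergence can be observed numerically with a toy (1, λ)-SA-ES on the one-dimensional sphere; λ, τ, the seed, and the generation budget below are arbitrary illustrative choices, not the constants from Auger's proof:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, tau = 10, 0.3
x, sigma = 1.0, 1.0
log_dist = []
for _ in range(400):
    sigmas = sigma * np.exp(tau * rng.standard_normal(lam))  # offspring step sizes
    ys = x + sigmas * rng.standard_normal(lam)               # offspring positions
    i = np.argmin(np.abs(ys))                                # 1-D sphere: minimise |x|
    x, sigma = ys[i], sigmas[i]
    log_dist.append(np.log(abs(x)))

# log-linear convergence: ln|X_t| decreases roughly linearly in t, so the
# slope of ln|X_t| estimates the (negative) convergence rate
rate = (log_dist[-1] - log_dist[99]) / (len(log_dist) - 100)
```

Plotting `log_dist` against t shows the straight-line decay that the log-linear result predicts.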
The work of Semenov and Terkel [137] is based on a stochastic Lyapunov
function. A supermartingale (a stochastic process whose conditional expectation, given the past, does not exceed its current value) is used to investigate the convergence of a simple self-adaptive EA. A central result of their research is that the convergence velocity of the analyzed self-adaptive EA is asymptotically exponential. They use Monte-Carlo simulations to validate the martingale inequalities numerically, because they cannot easily be proven analytically.
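The Monte-Carlo idea can be illustrated on a deliberately simple case (not Semenov and Terkel's construction): under elitist selection the distance Y_t = |X_t| never increases, so E[Y_{t+1} | state] ≤ Y_t holds, and this supermartingale inequality can be checked numerically from a fixed state:

```python
import numpy as np

def step(x, sigma, rng, tau=0.3):
    """One elitist (1+1) step with self-adaptive sigma on f(x) = |x|."""
    s = sigma * np.exp(tau * rng.standard_normal())
    y = x + s * rng.standard_normal()
    return (y, s) if abs(y) <= abs(x) else (x, sigma)   # keep the better solution

# Y_t = |X_t| is non-increasing path-wise under elitist selection, hence a
# supermartingale: E[Y_{t+1} | X_t, sigma_t] <= Y_t.  Estimate the conditional
# expectation from the state (x0, sigma0) by Monte-Carlo sampling of one step.
rng = np.random.default_rng(0)
x0, sigma0 = 2.0, 1.0
samples = [abs(step(x0, sigma0, rng)[0]) for _ in range(10_000)]
estimate = float(np.mean(samples))   # Monte-Carlo estimate of E[Y_{t+1} | state]
```

In the non-trivial cases treated in [137] the inequality does not hold path-wise, and such a sampling estimate is the practical way to gain confidence in it.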
Premature Convergence
Rudolph [126] investigates the premature convergence of EAs using self-adaptive
mutation strength control. He assumes that the (1+1)-EA is located in the vicinity P of a local solution with a certain fitness value. He proves that the probability to move from a solution in this set P to a solution in a set of solutions S with strictly smaller fitness (which in particular contains the global optimum, while the rest of the search domain exhibits larger fitness values) is smaller than 1. This result even holds for an infinite time horizon. Hence, an EA can get stuck at a non-global optimum with positive probability. Rudolph also proposes ideas to overcome this problem.
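The effect can be reproduced qualitatively with a toy experiment; the bimodal test function and all parameters below are illustrative assumptions, not Rudolph's construction:

```python
import numpy as np

def f(x):
    # assumed bimodal function: local optimum at x = 0 with value 0.1,
    # global optimum at x = 10 with value 0 (minimisation)
    return min(x * x + 0.1, (x - 10.0) ** 2)

def run(seed, gens=500, tau=0.3):
    """(1+1)-EA with self-adaptive sigma, started in the local basin."""
    rng = np.random.default_rng(seed)
    x, sigma = 0.0, 0.1
    for _ in range(gens):
        s = sigma * np.exp(tau * rng.standard_normal())
        y = x + s * rng.standard_normal()
        if f(y) <= f(x):          # elitist acceptance
            x, sigma = y, s
    return x

# count the runs that never reach the global basin around x = 10
stuck = sum(abs(run(seed)) < 5.0 for seed in range(50))
```

Because the self-adapted step size shrinks while the EA fine-tunes the local optimum, the jump to the global basin becomes ever less likely, and a positive fraction of runs stays stuck, matching the qualitative statement above.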
Runtime Analysis
Rechenberg proved that the optimal step size of ES depends on the distance to
the optimum and created the 1/5th adaptation rule [114]. Recently, Jägersküpper
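The 1/5th rule can be sketched as follows; the damping factor a = 0.85 and the 20-generation adaptation window are common textbook choices, not values prescribed by [114]:

```python
import numpy as np

def one_fifth_rule(f, x0, sigma=1.0, gens=300, window=20, a=0.85, seed=0):
    """(1+1)-ES with step-size control by the 1/5th success rule (sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    successes = 0
    for t in range(1, gens + 1):
        y = x + sigma * rng.standard_normal(x.shape)
        if f(y) < f(x):                     # elitist acceptance, count successes
            x, successes = y, successes + 1
        if t % window == 0:                 # adapt sigma every `window` generations
            if successes / window > 0.2:    # success rate above 1/5: enlarge sigma
                sigma /= a
            elif successes / window < 0.2:  # below 1/5: reduce sigma
                sigma *= a
            successes = 0
    return x, sigma

sphere = lambda x: float(np.dot(x, x))
x_final, sigma_final = one_fifth_rule(sphere, np.ones(10))
```

Keeping the success rate near 1/5 keeps the step size roughly proportional to the distance to the optimum, which is the dependence Rechenberg identified.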