Proof. For an individual x on a monotone function f, improving mutations are only produced in the area S = \{y \mid y > x\}. Let Z(y) be the density function of the normal distribution, see equation 4.4, and let Z_b(y) = Z(y - b_t) be Z(y) shifted by b_t > 0. The success rates are

(p_s)_{ISO} = P\{Z(y) \geq x\} = \int_x^{\infty} Z(y)\,dy \qquad (4.47)

and

(p_s)_{BMO} = P\{Z_b(y) \geq x\} = P\{Z(y) \geq x - b_t\} = \int_{x - b_t}^{\infty} Z(y)\,dy \qquad (4.48)

Since

\int_x^{\infty} Z(y)\,dy < \int_{x - b_t}^{\infty} Z(y)\,dy \qquad (4.49)

we get (p_s)_{ISO} < (p_s)_{BMO}.
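The inequality can be checked numerically. The sketch below assumes the mutation density is a normal distribution whose mean is simply shifted by the bias, and computes the tail probability in closed form via the complementary error function; the function name and the concrete values of x and the bias are illustrative, not taken from the text.

```python
import math

def success_prob(x: float, bias: float = 0.0, sigma: float = 1.0) -> float:
    """P{N(bias, sigma^2) >= x}, i.e. the integral of the shifted
    normal density from x to infinity."""
    z = (x - bias) / (sigma * math.sqrt(2.0))
    return 0.5 * math.erfc(z)

x = 0.8
p_iso = success_prob(x)            # unbiased (isotropic) mutation
p_bmo = success_prob(x, bias=0.5)  # biased mutation with b_t = 0.5 > 0
assert p_iso < p_bmo               # matches inequality (4.49)
```

Because the bias shifts probability mass toward the improving area y > x, the biased success probability dominates the unbiased one for every b_t > 0.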
This result holds analogously for monotone decreasing functions f : IR → IR. Furthermore, the increased success probability leads to a higher probability of increasing the step sizes (σ_{t+1} = γ_1 σ_t). Hence, the BMO is able to move quickly along monotone parts of a function.
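The step-size argument can be sketched as a small simulation. This is a minimal illustration, not the book's algorithm: it assumes a simple multiplicative success-based update (the value γ = 1.3 is an arbitrary choice), and shows that a run of successful generations lets the step size grow geometrically.

```python
def adapt_step_size(sigma: float, success: bool, gamma: float = 1.3) -> float:
    """Multiplicative step-size control: grow sigma after a successful
    generation, shrink it otherwise (gamma > 1 assumed)."""
    return sigma * gamma if success else sigma / gamma

# On a monotone part of the function, the raised success probability
# makes most generations successful, so sigma grows geometrically:
sigma = 0.1
for _ in range(10):
    sigma = adapt_step_size(sigma, success=True)
# sigma == 0.1 * 1.3**10, roughly 1.38
```

A larger step size in turn lets the search traverse the monotone slope in fewer generations, which is the effect described above.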
4.5 Experimental Analysis
In this section we analyze the BMO experimentally. As the theoretical analysis of evolutionary approaches is frequently constrained to simple conditions, i.e., very simple test functions and simple algorithms, an experimental analysis can give insight into capabilities and drawbacks under more complex conditions. Of course, these results cannot be generalized to every problem or every problem class, but they provide useful hints about the relationship between problem class characteristics and algorithms. The following questions arise:
• Is the BMO able to improve an ES on uni- and multimodal problems?
• Is the BMO able to improve an ES on constrained problems?
• What are the differences between the BMO variants?
• What are the conditions and limitations of the self-adaptive parameter control of the BMO, e.g., the influence of population ratios and mutation parameters like γ?
In order to answer these questions, we conduct a sequence of experiments with self-adaptive ES variants on various unimodal, multimodal, and constrained problems.