Let X be a random variable on a space χ, let p_X be its probability density function (PDF), and let the score φ be a real function on χ. The CE method aims to find the minimum of φ over χ, and the corresponding state x* satisfying this minimum:

    γ* = φ(x*) = min_{x ∈ χ} φ(x)    (7.8)
The CE method provides a methodology for creating a sequence x_0, x_1, ..., x_N and levels γ_0, γ_1, ..., γ_N such that γ converges to γ* and x converges to x*.

The algorithm generates a set of controller parameters and calculates the cost function or performance index; the CE algorithm is then applied in two main stages. The first one consists of the generation of random samples using g(x, v) and the calculation of a performance index or cost function. The second one is to update g(x, v) using the data collected in the first stage via the CE method.

The main CE algorithm adapted to solve optimization problems is as follows. Consider the optimization problem:

    φ(x*) = γ* = max_{x ∈ χ} φ(x)    (7.9)
χ
The rationale behind CE for optimization is to convert the problem (7.9) into an associated stochastic problem and then solve it adaptively as the simulation of a rare event. If γ* is the optimum of φ, the main issue is to define a family g(·, v), indexed by the parameter v, and iterate the CE algorithm such that γ_t → γ* in order to draw samples around the optimum.

The algorithm can be summarized as follows:
(1) Initialize v_t = v_0.
(2) Generate a sample (x_i)_{1 ≤ i ≤ N} of size N from g(x, v), compute φ(x_i), and order the scores φ_1 ≥ φ_2 ≥ ... ≥ φ_N from biggest to smallest. Use γ_t = φ_{[ρN]} to select the elite subset of the population.
(3) Update v_t with:

    v_{t+1} = argmax_v (1/N) ∑_{i=1}^{N} I{φ(x_i) ≥ γ_t} · ln g(x_i, v)    (7.10)

where I{·} is the indicator function.
(4) Repeat from step 2 until convergence or an ending criterion is met.
(5) Assuming that convergence is reached at t = t*, an optimal value for φ can be obtained from g(·, v_{t*}).
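As a minimal sketch of the five steps above, assuming a one-dimensional search space and a Gaussian sampling family g(·, v) with parameters v = (μ, σ) (the function names, defaults, and toy test function below are illustrative, not from the text):

```python
import math
import random

def ce_maximize(phi, mu0, sigma0, n_samples=100, rho=0.1, n_iter=50):
    """Cross-entropy maximization of phi, sampling from a Gaussian g(., v)."""
    mu, sigma = mu0, sigma0
    n_elite = max(1, int(rho * n_samples))            # N_elite = rho * N
    for _ in range(n_iter):
        # Step 2: draw N samples from g(x, v) and order them by score
        xs = sorted((random.gauss(mu, sigma) for _ in range(n_samples)),
                    key=phi, reverse=True)
        elite = xs[:n_elite]                          # scores >= gamma_t = phi_[rho*N]
        # Step 3: update v = (mu, sigma) as the MLE of the elite samples
        mu = sum(elite) / n_elite
        sigma = math.sqrt(sum((x - mu) ** 2 for x in elite) / n_elite) + 1e-8
    # Step 5: the mean of the final sampling density approximates x*
    return mu

random.seed(0)  # for reproducibility of this toy run
best = ce_maximize(lambda x: -(x - 3.0) ** 2, mu0=0.0, sigma0=5.0)
# best converges close to the optimum x* = 3
```

Here the parameter update is the closed-form Gaussian MLE; for another sampling family, step 3 would require solving (7.10) for that family explicitly.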
The parameter update of step 3 is performed using the N_elite = ρ·N best-performing samples, also called elite samples. The updated parameters are found to be maximum likelihood estimates (MLEs) of the elite samples (Rubinstein 2005). The sampling distribution can be quite arbitrary, and does not need to be related to the function that is being optimised (Kroese et al. 2007). The normal (Gaussian) distribution function gives easy and simple updating formulae.
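For instance, when g(·, v) is the normal density N(μ, σ²), the update (7.10) reduces to the sample mean and variance of the elite set — a standard closed form, not spelled out in this excerpt:

```latex
\mu_{t+1} = \frac{1}{N_{\mathrm{elite}}} \sum_{i \in \mathcal{E}_t} x_i ,
\qquad
\sigma_{t+1}^{2} = \frac{1}{N_{\mathrm{elite}}} \sum_{i \in \mathcal{E}_t} \bigl( x_i - \mu_{t+1} \bigr)^{2} ,
```

where 𝓔_t = {i : φ(x_i) ≥ γ_t} denotes the indices of the elite samples at iteration t.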