Definition 2.4 (Evolutionary Convergence)
Let $(X_t : t \geq 0)$ be the sequence of populations generated by some EA and let $F_t = \min\{f(X_{t,1}), \ldots, f(X_{t,n})\}$ denote the best objective function value of the population of size $n < \infty$ at generation $t$. An EA is said to converge completely (with probability 1, in probability, in mean) to the global minimum $f^* = \min\{f(x);\ x \in \mathcal{X}\}$ of the objective function $f : \mathcal{X} \to \mathbb{R}$ if the nonnegative random sequence $(D_t : t \geq 0)$ with $D_t = F_t - f^*$ converges completely (with probability 1, in probability, in mean) to zero.
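As a reading aid, the three modes of stochastic convergence named in Definition 2.4 can be spelled out for the sequence $(D_t)$; the following block states the standard probabilistic formulations and is a sketch added here, not quoted from the original definition.

```latex
% Standard formulations of the convergence modes for the nonnegative
% random sequence (D_t); added for reference, not quoted from the book.
\begin{align*}
\text{completely:}          &\quad \sum_{t=0}^{\infty} P(D_t > \varepsilon) < \infty
                             \quad \text{for every } \varepsilon > 0,\\
\text{with probability 1:}  &\quad P\Big(\lim_{t \to \infty} D_t = 0\Big) = 1,\\
\text{in probability:}      &\quad \lim_{t \to \infty} P(D_t > \varepsilon) = 0
                             \quad \text{for every } \varepsilon > 0,\\
\text{in mean:}             &\quad \lim_{t \to \infty} \mathrm{E}[D_t] = 0.
\end{align*}
```

Complete convergence implies convergence with probability 1, which in turn implies convergence in probability.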
Rudolph comments that a precondition for convergence is the property of an EA to visit the global solution $f^*$. However, the convergence property alone does not automatically indicate an advantage in finding the globally optimal solution. Furthermore, we need the concept of the success rate or success probability, which denotes the ratio of improved solutions to all solutions produced by a genetic operator OP.
Definition 2.5 (Success Rate, Success Probability)
Let $x_t$ be an individual at generation $t$ and let $A_t = \{y \mid y = OP(x_t)\}$ be the set of samples produced by the genetic operator OP. Let $S_t = \{z \in A_t \mid f(z) < f(x_t)\} \subseteq A_t$ be the set of samples with better fitness than $x_t$. The success rate or success probability $(p_s)_t$ is defined as the ratio of the successful offspring individuals $S_t$ to all offspring individuals $A_t$:

$$(p_s)_t = \frac{|S_t|}{|A_t|}. \tag{2.1}$$
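As an illustration of Definition 2.5 (not taken from the book), the following minimal Python sketch estimates $(p_s)_t$ empirically for a single parent $x_t$; the Gaussian mutation operator and the sphere function used as OP and $f$ are assumptions chosen only for this example.

```python
import numpy as np

def sphere(x):
    """Sphere test function f(x) = sum(x_i^2); an assumed example objective."""
    return float(np.dot(x, x))

def gaussian_mutation(x, sigma=0.1, rng=None):
    """A simple Gaussian mutation operator OP (an assumed example operator)."""
    rng = rng if rng is not None else np.random.default_rng()
    return x + sigma * rng.standard_normal(x.shape)

def estimate_success_rate(x_t, f, op, n_samples=1000, rng=None):
    """Estimate (p_s)_t = |S_t| / |A_t| by sampling offspring A_t of x_t."""
    rng = rng if rng is not None else np.random.default_rng(0)
    offspring = [op(x_t, rng=rng) for _ in range(n_samples)]   # samples A_t
    successes = sum(f(y) < f(x_t) for y in offspring)          # |S_t|
    return successes / n_samples                               # (p_s)_t

x_t = np.array([1.0, 1.0])
print(estimate_success_rate(x_t, sphere, gaussian_mutation))
```

For step sizes that are small relative to the distance to the optimum, the estimated success rate on the sphere approaches 1/2, since roughly half of all mutation directions lead downhill.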
These definitions will be useful for theoretical considerations, in particular in chapters 4, 5, and 7.
2.1.3 Classic Optimization Methods
The classical non-evolutionary optimization methods for continuous problems can mainly be classified into direct, gradient, and Hessian search methods. The direct methods determine the search direction without using a derivative [135], similar to EAs. Lewis et al. [82] give an overview of direct search methods. Pattern search methods [29] examine the objective function with a pattern of points which lie on a rational lattice. Simplex search [99] is based on the idea that a gradient can be estimated with a set of N+1 points, i.e., a simplex. Direct search methods like Rosenbrock's [121] and Powell's [110] collect information about the curvature of the objective function during the course of the search. If the derivatives of a function are available, gradient and Hessian methods can be applied. Gradient methods take the first derivative of the function into account, while Hessian methods also compute the second derivative. Successful examples are the quasi-Newton methods [21]. They search for a stationary point of a function, where the gradient is zero, and estimate the Hessian matrix by analyzing successive gradient vectors. In practice, the first derivative is often not available. This limitation, together with other requirements of many classic search methods, is one reason for the success of EAs.
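To make the distinction between the method classes concrete, the following Python sketch (added here, not part of the original text) contrasts a derivative-free direct method, the Nelder-Mead simplex search, with the quasi-Newton method BFGS on the Rosenbrock function; SciPy's optimizers serve only as convenient stand-ins for the cited algorithms.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Standard starting point for the two-dimensional Rosenbrock function.
x0 = np.array([-1.2, 1.0])

# Direct method: Nelder-Mead simplex search uses only function evaluations.
direct = minimize(rosen, x0, method="Nelder-Mead")

# Quasi-Newton method: BFGS builds a Hessian approximation from successive
# gradient vectors; here the analytic gradient rosen_der is supplied.
quasi_newton = minimize(rosen, x0, method="BFGS", jac=rosen_der)

print("Nelder-Mead:", direct.x, "function evaluations:", direct.nfev)
print("BFGS:       ", quasi_newton.x, "function evaluations:", quasi_newton.nfev)
```

When gradients are available, the quasi-Newton run typically needs far fewer function evaluations; when they are not, only the derivative-free methods remain applicable, which mirrors the argument made above in favor of EAs.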
 