vector yields a lower objective function value than a predetermined population member, the newly generated vector replaces the vector with which it was compared. The comparison vector can, but need not, be part of the generation process mentioned above. In addition, the best parameter vector x_{best}^{(G)} is evaluated for every generation G in order to keep track of the progress made during the minimization process. Extracting distance and direction information from the population to generate random deviations results in an adaptive scheme with excellent convergence properties [3].
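The greedy one-to-one replacement described above can be sketched in a few lines. This is only an illustration: the objective function `sphere`, the population size, and the artificially constructed trial vectors are assumptions, not part of the original text.

```python
import numpy as np

def sphere(x):
    # Simple test objective: sum of squares, minimum 0 at the origin.
    return float(np.sum(x ** 2))

def select(population, trials, objective):
    """One-to-one greedy selection: a trial vector replaces its
    predetermined comparison vector only if it yields a lower
    objective function value."""
    new_pop = population.copy()
    for i in range(len(population)):
        if objective(trials[i]) < objective(population[i]):
            new_pop[i] = trials[i]
    # Track the best member to monitor progress across generations.
    best = min(new_pop, key=objective)
    return new_pop, best

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(6, 3))   # NP = 6 individuals, D = 3
trials = pop * 0.5                      # artificial trials, closer to the optimum
new_pop, best = select(pop, trials, sphere)
```

Because each trial here is strictly closer to the origin than its comparison vector, every member of the population is replaced, and the tracked best vector can only improve from one generation to the next.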
Descriptions of the two earliest and most promising variants of DE (later known as DE2 and DE3) are presented in order to clarify how the search technique works; a complete list of the variants to date is given thereafter. The most comprehensive reference describing DE for continuous optimization problems is [2].
Scheme DE2
Initialization
As with all evolutionary optimization algorithms, DE works with a population of solutions, not with a single solution for the optimization problem. Population P of generation G contains NP solution vectors, called individuals of the population, and each vector represents a potential solution for the optimization problem:
P^{(G)} = X_i^{(G)},   i = 1, ..., NP;  G = 1, ..., G_max    (1.5)
Additionally, each vector contains D parameters:
X_i^{(G)} = x_{j,i}^{(G)},   i = 1, ..., NP;  j = 1, ..., D    (1.6)
In order to establish a starting point for optimum seeking, the population must be
initialized. Often there is no more knowledge available about the location of a global
optimum than the boundaries of the problem variables. In this case, a natural way to
initialize the population P (0) (initial population) is to seed it with random values within
the given boundary constraints:
P^{(0)} = x_{j,i}^{(0)} = x_j^{(L)} + rand_j[0, 1] · (x_j^{(U)} − x_j^{(L)}),   i ∈ [1, NP];  j ∈ [1, D]    (1.7)
where rand_j[0, 1] represents a uniformly distributed random value that ranges from zero to one. The lower and upper boundary constraints are x^{(L)} and x^{(U)}, respectively:
x_j^{(L)} ≤ x_j ≤ x_j^{(U)},   j ∈ [1, D]    (1.8)
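The initialization rule of Eq. (1.7) can be sketched directly. The bounds, NP, and D below are illustrative values, not taken from the text:

```python
import numpy as np

def init_population(NP, D, x_low, x_up, rng):
    """Seed P(0) with uniform random values inside the box constraints,
    as in Eq. (1.7): x_{j,i} = x_j^(L) + rand_j[0,1] * (x_j^(U) - x_j^(L))."""
    x_low = np.asarray(x_low, dtype=float)   # lower bounds x^(L), length D
    x_up = np.asarray(x_up, dtype=float)     # upper bounds x^(U), length D
    return x_low + rng.uniform(0.0, 1.0, size=(NP, D)) * (x_up - x_low)

rng = np.random.default_rng(42)
P0 = init_population(NP=10, D=4,
                     x_low=[-2, -2, 0, 0],
                     x_up=[2, 2, 1, 5],
                     rng=rng)
```

Every row of `P0` is one individual; by construction each parameter satisfies the boundary constraint of Eq. (1.8).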
For this scheme and other schemes, three operators are crucial: mutation, crossover
and selection. These are now briefly discussed.
Mutation
The first variant of DE works as follows: for each vector x_i^{(G)}, i = 0, 1, 2, ..., NP − 1, a trial vector v is generated according to

v_{j,i}^{(G+1)} = x_{j,r1}^{(G)} + F · (x_{j,r2}^{(G)} − x_{j,r3}^{(G)})    (1.9)
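A minimal sketch of the mutation rule (1.9). Drawing r1, r2, r3 as mutually distinct indices follows the usual DE convention, and F = 0.8 is an illustrative value for the scaling factor:

```python
import numpy as np

def mutate(population, F, rng):
    """Build one trial vector per individual via Eq. (1.9):
    v_i = x_{r1} + F * (x_{r2} - x_{r3}), where r1, r2, r3 are
    mutually distinct indices drawn from the population."""
    NP = len(population)
    trials = np.empty_like(population)
    for i in range(NP):
        r1, r2, r3 = rng.choice(NP, size=3, replace=False)
        trials[i] = population[r1] + F * (population[r2] - population[r3])
    return trials

rng = np.random.default_rng(1)
pop = rng.uniform(-1, 1, size=(8, 3))   # NP = 8 individuals, D = 3
V = mutate(pop, F=0.8, rng=rng)
```

The trial vector is a randomly chosen base vector perturbed by a scaled difference of two other population members; this is the distance-and-direction information mentioned earlier, extracted from the population itself rather than from a fixed probability distribution.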