Algorithm 1. Differential Evolution Algorithm
1: Initialize the population randomly
2: repeat
3:   for each individual x in the population do
4:     Let x1, x2, x3 ∈ population, randomly chosen  {x1, x2, x3 and x all different from each other}
5:     Let R ∈ {1, ..., n}, randomly chosen  {n is the length of the individual}
6:     for i = 1 to n do
7:       Pick ri ∼ U(0, 1), a uniform draw from the open interval (0, 1)
8:       if (i = R) ∨ (ri < CR) then
9:         yi ← x1,i + F (x2,i − x3,i)
10:      else
11:        yi ← xi
12:      end if
13:    end for  {y = [y1, y2, ..., yn] is the newly generated candidate individual}
14:    if f(y) < f(x) then
15:      Replace individual x by y
16:    end if
17:  end for
18: until termination criterion is met
19: return z ∈ population such that ∀t ∈ population, f(z) ≤ f(t)
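To make the pseudocode concrete, the following is a minimal Python sketch of this basic scheme (often called DE/rand/1/bin), written for minimization; the objective function, bounds and parameter values are illustrative choices, not those used in the experiments described here.

    import numpy as np

    def differential_evolution(f, bounds, pop_size=50, F=0.8, CR=0.9,
                               generations=200, rng=None):
        # Minimal DE/rand/1/bin sketch following Algorithm 1 (minimization).
        rng = np.random.default_rng() if rng is None else rng
        bounds = np.asarray(bounds, dtype=float)        # shape (n, 2): per-gene lower/upper limits
        n = len(bounds)
        # Line 1: initialize the population uniformly at random inside the bounds.
        pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, n))
        fitness = np.array([f(x) for x in pop])
        for _ in range(generations):                    # line 18: termination criterion
            for j in range(pop_size):                   # line 3: for each individual x
                # Line 4: pick x1, x2, x3 distinct from each other and from x.
                i1, i2, i3 = rng.choice([k for k in range(pop_size) if k != j],
                                        size=3, replace=False)
                x, x1, x2, x3 = pop[j], pop[i1], pop[i2], pop[i3]
                R = rng.integers(n)                     # line 5: gene always taken from the mutant
                r = rng.random(n)                       # line 7: one uniform draw per gene
                cross = (r < CR) | (np.arange(n) == R)  # line 8: crossover condition
                y = np.where(cross, x1 + F * (x2 - x3), x)   # lines 9 and 11
                fy = f(y)
                if fy < fitness[j]:                     # lines 14-15: greedy replacement
                    pop[j], fitness[j] = y, fy
        best = np.argmin(fitness)                       # line 19: best individual found
        return pop[best], fitness[best]

    # Toy usage: minimize the sphere function in five dimensions.
    sphere = lambda x: float(np.sum(x ** 2))
    best_x, best_f = differential_evolution(sphere, bounds=[(-5.0, 5.0)] * 5)
    print(best_x, best_f)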
One of the reasons why Differential Evolution is an interesting method for many optimization or search problems is the small number of parameters needed to define its implementation. The parameters are F, the differential weight, and CR, the crossover probability. The weight factor F (usually in [0, 2]) is applied to the vector resulting from the difference between a pair of vectors (x2 and x3). CR is the probability of crossing over a given vector of the population (x) with a candidate vector (y) created from the weighted difference of two vectors (x1 + F (x2 − x3)). Finally, the index R guarantees that at least one of the parameters (genes) is changed when the candidate solution is generated.
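As a small worked example (the vectors and parameter values below are made up for illustration), a single candidate y is assembled from the base and difference vectors as follows:

    import numpy as np

    F, CR = 0.8, 0.9                          # differential weight and crossover probability
    x  = np.array([1.0, 2.0, 3.0])            # current individual
    x1 = np.array([0.5, 1.5, 2.5])            # base vector
    x2 = np.array([2.0, 0.0, 1.0])
    x3 = np.array([1.0, 1.0, 1.0])

    rng = np.random.default_rng(0)
    n = len(x)
    R = rng.integers(n)                       # gene that is always taken from the mutant
    r = rng.random(n)                         # one uniform draw per gene
    mutant = x1 + F * (x2 - x3)               # = [1.3, 0.7, 2.5]
    y = np.where((r < CR) | (np.arange(n) == R), mutant, x)
    print(mutant, y)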
The main problem of the genetic algorithm (GA) methodology is the need to tune a series of parameters: the probabilities of the different genetic operators such as crossover or mutation, the choice of selection operator (tournament, roulette, ...), the tournament size, and so on. Hence, in a standard GA it is difficult to control the balance between exploration and exploitation. By contrast, DE reduces the parameter tuning required and provides an automatic balance in the search. As Feoktistov [9] indicates, the fundamental idea of the algorithm is to adapt the step length (F (x2 − x3)) intrinsically along the evolutionary process. In the first generations the step length is large, because individuals are far away from each other. As the evolution goes on, the population converges and the step length becomes smaller and smaller.
In addition to the usual implementation of DE we used two further strategies: i) Regarding the weighted differences (line 9 of the pseudocode), we used four different vectors to calculate the candidate solutions because, as indicated by Price et al. [16], for large populations four vectors are more effective with respect to convergence than the weighted difference of only two vectors. ii) Moreover, the usual implementation of DE chooses the base vector x1 either randomly or as the individual with the best fitness found so far (xbest). To avoid the high selective pressure of the latter, the usual strategy is to alternate between the two possibilities across generations. Instead of this, we used a tournament to pick the vector x1, which allows us to set the selective pressure easily by means of the tournament size.
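A rough sketch of how these two modifications could be implemented is given below. The exact form is an assumption on our part: the combination of the four vectors is written here as two weighted difference pairs (x1 + F (x2 − x3) + F (x4 − x5)), and the tournament size k is a free parameter; neither detail is fixed by the text above.

    import numpy as np

    def tournament_base(pop, fitness, k, rng):
        # Base vector x1: the fittest of k randomly chosen individuals.
        # The tournament size k sets the selective pressure (k = 1 gives a purely
        # random base vector; larger k leans towards the current best individuals).
        contenders = rng.choice(len(pop), size=k, replace=False)
        return pop[contenders[np.argmin(fitness[contenders])]]

    def four_vector_mutant(pop, fitness, F=0.8, k=3, rng=None):
        # Mutant built from two difference pairs (four vectors) on top of a
        # tournament-selected base vector; pop is a (pop_size, n) array and
        # fitness the corresponding array of objective values. Distinctness of
        # the donors from the current individual is omitted here for brevity.
        rng = np.random.default_rng() if rng is None else rng
        x1 = tournament_base(pop, fitness, k, rng)
        i2, i3, i4, i5 = rng.choice(len(pop), size=4, replace=False)
        return x1 + F * (pop[i2] - pop[i3]) + F * (pop[i4] - pop[i5])

In the complete algorithm this mutant would take the place of x1 + F (x2 − x3) on line 9 of the pseudocode, while the crossover with x and the greedy replacement remain unchanged.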
 