Start
    t_o := 0;
    Initialize P_o(t);
    Evaluate P_o(t);
    Repeat
        Change P_o(t) → P_o'(t);
        Evaluate P_o'(t) by running the following subprogram:
            Start
                t_i := 0;
                Initialize P_i(t);
                Evaluate P_i(t);
                Repeat
                    Change P_i(t) → P_i'(t);
                    Evaluate P_i'(t);
                    Select P_i(t+1) from P_i'(t);
                    t_i := t_i + 1;
                Until termination condition
            End
        Select P_o(t+1) from P_o'(t);
        t_o := t_o + 1;
    Until termination condition
End

Fig. 3.2. Pseudocode of a nested EA
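The nested scheme of Fig. 3.2 can be sketched in a few lines of Python. This is an illustrative toy, not the book's implementation: the outer EA evolves a mutation step size sigma, and each outer candidate is evaluated by running a complete inner (1+1) EA on a sphere function using that sigma. All function names, population sizes, and generation counts are assumptions.

```python
import random

def inner_ea(sigma, generations=50):
    """Inner EA of the subprogram: a (1+1) hill climber on the sphere
    function, using the step size sigma supplied by the outer EA."""
    x = [random.uniform(-5, 5) for _ in range(5)]
    fitness = sum(v * v for v in x)                  # Evaluate P_i(t)
    for _ in range(generations):                     # Repeat ... Until
        y = [v + random.gauss(0, sigma) for v in x]  # Change P_i(t) -> P_i'(t)
        fy = sum(v * v for v in y)                   # Evaluate P_i'(t)
        if fy < fitness:                             # Select P_i(t+1)
            x, fitness = y, fy
    return fitness  # quality reached = quality of the parameter sigma

def outer_ea(generations=20, pop_size=4):
    """Outer EA: evolves the step size sigma itself."""
    pop = [random.uniform(0.01, 2.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Change P_o(t) -> P_o'(t): log-normal mutation of each sigma
        offspring = [max(1e-6, s * random.lognormvariate(0, 0.3)) for s in pop]
        # Evaluate P_o'(t) by running the inner subprogram, then select
        pop = sorted(pop + offspring, key=inner_ea)[:pop_size]
    return pop[0]

random.seed(1)
best_sigma = outer_ea()
print(best_sigma)
```

Note that every outer fitness evaluation costs a full inner EA run, which is why nested EAs are computationally expensive.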
population. In the case of dynamic penalty functions the penalties are increased
depending on the number of generations in order to avoid infeasible solutions in
later phases of the search. Joines and Houck [66] propose the following dynamic
penalty function:
f̃(x) = f(x) + (C · t)^α · G(x)    (3.1)

with fitness f(x) of individual x, the constraint violation measure G(x), and parameters C and α. Annealing penalties are increased according to an external cooling scheme.
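Equation (3.1) is straightforward to evaluate; the sketch below shows how the penalty term grows with the generation counter t, so that the same constraint violation G(x) is penalized mildly early in the search and heavily later. The function name and the default values of C and α are illustrative, not taken from [66].

```python
def dynamic_penalty_fitness(f_x, G_x, t, C=0.5, alpha=2.0):
    """Penalized fitness per Eq. (3.1): raw fitness f(x) plus a penalty
    on the constraint violation G(x) that grows with generation t."""
    return f_x + (C * t) ** alpha * G_x

# Same violation G(x) = 1.0, penalized at different generations:
early = dynamic_penalty_fitness(f_x=10.0, G_x=1.0, t=1)    # 10 + (0.5)^2
late  = dynamic_penalty_fitness(f_x=10.0, G_x=1.0, t=100)  # 10 + (50)^2
print(early, late)
```

The comparison makes the intended behavior visible: at t = 1 the infeasible solution is barely penalized (10.25), while at t = 100 it is effectively eliminated from selection (2510.0).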
Adaptive
Adaptive parameter control means the control of parameters according to a user-defined heuristic. Feedback from the search is used to determine the magnitude and direction of the parameter change. Furthermore, the heuristic defines whether its parameter values persist or propagate throughout the population. Adaptive penalty functions control the penalty of infeasible solutions according to a set of rules.
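A rule of this kind can be sketched as follows. The rule and its constants are hypothetical, chosen only to illustrate the feedback principle: the penalty factor is raised when the recent best individual was infeasible and lowered when it was feasible.

```python
def adapt_penalty(penalty, best_was_feasible, factor=2.0):
    """One adaptation step of the penalty factor, driven by
    feedback from the search (feasibility of the recent best)."""
    if best_was_feasible:
        return penalty / factor  # relax: the search stays feasible anyway
    return penalty * factor      # tighten: push the search toward feasibility

p = 1.0
for feasible in [False, False, True]:
    p = adapt_penalty(p, feasible)
print(p)  # 1.0 * 2 * 2 / 2
```

In contrast to the dynamic penalty of Eq. (3.1), the penalty here does not follow a fixed schedule but reacts to the state of the search.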
An example of adaptive control of endogenous strategy parameters is the 1/5th success rule for ES by Rechenberg [114]. Running a simple (1+1)-ES with isotropic Gaussian mutations, a constant mutation step size σ, and a simple objective function, the algorithm's progress will become very slow after a certain
 