vicinity of the infeasible search space. Hence, the adaptive heuristic takes control
of the mutation strength. A minimum step size bounds the self-adaptive step
sizes, and a special heuristic reduces this minimum step size in order to
approximate the optimum.
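The interplay of self-adaptive step sizes, a lower bound, and the bound-reduction heuristic can be sketched as follows. This is a minimal illustration, not the author's exact scheme: log-normal self-adaptation and the multiplicative reduction factor `theta` are assumed choices not fixed by the text.

```python
import math
import random

def mutate_step_size(sigma, sigma_min, tau=0.3):
    """Self-adapt the step size (log-normal rule, an assumed variant)
    and clamp it at the current minimum step size."""
    sigma *= math.exp(tau * random.gauss(0.0, 1.0))
    return max(sigma, sigma_min)  # the minimum step size bounds sigma from below

def reduce_minimum(sigma_min, theta=0.5):
    """Heuristic reduction of the minimum step size, so the search can
    still approximate the optimum; theta is an illustrative factor."""
    return theta * sigma_min
```

Clamping keeps the step size from collapsing near the infeasible region, while the periodic reduction of the bound restores the precision needed close to the optimum.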
3.3 Typical Adapted Parameters
This section summarizes parameters and features that are typically adapted.
These range from the problem representation and the fitness function to the
genetic operators and the population. Past research has mainly focused on the
adaptation of mutation parameters, evaluation functions, and population
parameters.
Representation
In classical GAs and ES the data structure is not subject to change during
the run of the algorithm. This is different for GP and classical EP, where GP
trees and finite state machines can vary in size and shape. Changing representa-
tions also occur in the delta encoding algorithm of Whitley and Mathias [158],
which changes the representation of the function parameters in order to balance
exploration and exploitation.
Fitness Evaluation Function
The fitness evaluation function is usually not changed during the run of an op-
timization algorithm, as it is part of the problem itself¹. One exception is the
use of penalty functions for constrained problems. These make use of penalty
factors that decrease the fitness of solutions according to their constraint
violation. Penalty factors cannot be controlled self-adaptively, because
self-adaptation depends on selection and fitness evaluation: individuals would
tune their penalty factors to increase their fitness and, consequently, their
selection probability.
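A minimal sketch of such a penalty function, for a maximization setting with inequality constraints g(x) ≤ 0; the linear penalty form and the factor `alpha` are illustrative assumptions, not prescribed by the text:

```python
def penalized_fitness(fitness, violations, alpha=10.0):
    """Decrease a solution's fitness according to its constraint violation.

    `violations` holds the values g_i(x) of inequality constraints g_i(x) <= 0;
    only positive entries (violated constraints) are penalized. `alpha` is an
    externally controlled penalty factor: as argued above, it must not be
    self-adapted, or individuals would drive it toward zero to boost their
    own selection probability.
    """
    total_violation = sum(max(0.0, g) for g in violations)
    return fitness - alpha * total_violation
```

Feasible solutions (all g_i ≤ 0) keep their raw fitness; infeasible ones are pushed down in proportion to how strongly they violate the constraints.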
Mutation Operators
Mutation operators are frequently subject to automatic parameter control. There
are many conflicting results concerning the mutation probability of GAs. De
Jong's [67] recommendation is p_m = 0.001, Schaffer et al. [127] recommend
0.005 ≤ p_m ≤ 0.01, and Grefenstette [51] recommends p_m = 0.01. Mühlenbein
[93] suggests setting the mutation probability to p_m = 1/l, depending on the
length l of the representation. This discussion seems superfluous, as the
appropriate mutation strength is problem-dependent. Hence, the idea arises to
control the mutation rate during the optimization run. Fogarty [41] suggests a
deterministic control scheme for the mutation rate r_m
¹ This is not the case for dynamic optimization problems, where the problem
changes over time.
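Mühlenbein's p_m = 1/l rule for bit strings can be sketched as follows; the bit-flip operator itself is standard, and on average one bit per individual is flipped regardless of the string length:

```python
import random

def bit_flip_mutation(bits):
    """Flip each bit independently with probability p_m = 1/l,
    where l is the length of the bit string (Muehlenbein's rule)."""
    l = len(bits)
    p_m = 1.0 / l
    return [b ^ 1 if random.random() < p_m else b for b in bits]
```

With this setting the expected number of flipped bits per mutation is exactly one, which is one reason the rule is a popular default; the surrounding discussion nonetheless argues that any fixed rate is problem-dependent.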