The selection pressure is closely connected to the population sizes. A high
selection pressure can increase the optimization speed, but at the same time
increases the probability of getting stuck in local optima. Again, we face the
tradeoff between efficiency and accuracy.
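Tournament selection is one common operator that makes this tradeoff explicit (the example is ours, not taken from the text): the tournament size acts as a tunable selection-pressure knob. A minimal sketch:

```python
import random

def tournament_select(population, fitness, tournament_size, minimize=True):
    """Pick one parent by tournament.

    A small tournament_size (e.g. 2) gives low selection pressure; a
    tournament_size close to the population size gives high pressure.
    """
    candidates = random.sample(population, tournament_size)
    return min(candidates, key=fitness) if minimize else max(candidates, key=fitness)
```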
The mutation strength influences the ability of the algorithm to converge: a
high mutation strength is advantageous at the beginning of the search, but
when approaching the optimum, the mutation strength should decrease so
that the EA can converge. If the mutation strength is too small at the
beginning, the search will be slow.
The useful range of the mutation strength is not easy to guess. A self-adaptive
control of the mutation strength frees the mutation from under- or oversized
steps, see chapter 3.
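As an illustration, the following sketch shows one widely used self-adaptation scheme, the log-normal rule of evolution strategies with a single step size; the parameter names and the default learning rate are illustrative choices, not a prescription from the text:

```python
import math
import random

def self_adaptive_mutation(x, sigma, tau=None):
    """Mutate solution x after first mutating its own step size sigma.

    Log-normal update: sigma' = sigma * exp(tau * N(0,1)). The offspring
    inherits sigma', so useful step sizes spread through the population
    and the mutation strength adapts to the current search phase.
    """
    n = len(x)
    if tau is None:
        tau = 1.0 / math.sqrt(n)  # common default learning rate (assumption)
    sigma_new = sigma * math.exp(tau * random.gauss(0.0, 1.0))
    x_new = [xi + sigma_new * random.gauss(0.0, 1.0) for xi in x]
    return x_new, sigma_new
```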
According to the genetic repair effect, crossover is useful to mix the common
features of the parents, see chapter 6. Hence, a crossover operator should be
designed to support this mixing.
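Intermediate recombination is one operator often discussed in this context (our example, under the assumption of a real-valued representation): components on which the parents agree are passed on unchanged, while differing components are averaged out.

```python
def intermediate_crossover(parents):
    """Component-wise average of several real-valued parents.

    Common components survive unchanged; differing components are
    averaged, which is the kind of mixing of common features attributed
    to the genetic repair effect.
    """
    rho = len(parents)
    n = len(parents[0])
    return [sum(p[i] for p in parents) / rho for i in range(n)]
```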
Concerning the initialization, the practitioner should ask how much
information is available at the beginning of the search. If the first individ-
uals can be initialized with close-to-optimum solutions, the search reduces to
a small optimization process of incremental improvements. In this case, only
a small mutation strength is recommended, so as not to destroy the close-to-
optimum solutions. A search from scratch, in contrast, requires a sophisticated
representation and a careful choice of the operators and their parameterization.
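A minimal sketch of such a seeded initialization, assuming a real-valued representation and an already known good solution (both the noise level and the function name are placeholders):

```python
import random

def seeded_population(known_solution, mu, sigma_init=0.01):
    """Initialize mu individuals in the neighborhood of a known solution.

    A small sigma_init keeps the seeds close to the optimum; a search
    from scratch would instead sample uniformly over the search space.
    """
    return [[xi + sigma_init * random.gauss(0.0, 1.0) for xi in known_solution]
            for _ in range(mu)]
```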
This is only a short and incomplete survey of practical guidelines for the usage
of EAs. Section 2.4 gives an overview of theoretical EA research fields.
Example of an Application: Evolution of a Computer Game Opponent
We tested the capabilities of EAs for building a rule base for a dynamic com-
puter game opponent, based on imitation [111] and on learning from scratch [112].
The basis of our experiments was a simple scenario: two artificial computer
opponents in a simple first-person shooter fight each other for approximately
one minute. The EA-driven opponent selects its rules from a rule list
(R_1, ..., R_s) of fixed size s. We tested several settings, ranging from s = 10 to
s = 400. Each rule R_i consists of a matrix of bits, representing the environment
divided into grid cells, and a corresponding action. By means of the genetic
operators, the population of rule bases was evolved and evaluated by letting the
artificial opponent play for about one minute. In each step, the current environment
is compared to the rule base, and the rule with the least Euclidean distance to the
current situation is executed. The fitness of the whole rule base is calculated
by simply summing up the hits h_EA the EA agent achieves during play and
subtracting the hits h_o its opponent achieves:
f_t = w_EA (h_EA)_t + w_o (h_o)_t    (2.15)
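The following sketch illustrates the two mechanisms just described, nearest-rule matching by Euclidean distance and the weighted hit count of Eq. (2.15). The data layout (rules as flat bit vectors with an attached action) and the default weights are simplifying assumptions, not the exact representation used in [111, 112]:

```python
import math

def select_rule(rule_base, situation):
    """Return the action of the rule whose bit matrix (flattened to a
    vector) has the smallest Euclidean distance to the current situation."""
    def dist(rule):
        return math.sqrt(sum((b - s) ** 2 for b, s in zip(rule["bits"], situation)))
    return min(rule_base, key=dist)["action"]

def fitness(h_ea, h_o, w_ea=1.0, w_o=-1.0):
    """Weighted hit count of Eq. (2.15); a negative w_o penalizes the hits
    the opponent achieves (weights here are illustrative)."""
    return w_ea * h_ea + w_o * h_o
```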
The weights w_EA and w_o allow controlling the aggressiveness of the artificial
player. In a first approach, rules were initialized keeping track of human or