3.3.2.2 Enhanced-ES
Evolution Strategies (ES) is a multi-objective optimizer that can reproduce
different evolutionary algorithms. These algorithms share the selection operator
and the operator schedule, while they differ in the ratio between parent and
child points and in the definition of the set of points from which the new
parents are selected.
The ES approach was first used at the Technical University of Berlin. During
the search for the optimal shapes of bodies in a flow, the classical attempts with
the coordinate strategy and the well-known gradient-based strategies were
unsuccessful, so the idea of proceeding strategically was conceived. Rechenberg
and Schwefel [14] proposed trying random changes in the parameters defining the
shape, following the example of natural mutations.
Usually, there is a large difference between mathematical optimization and
optimization in real-world applications. ES were invented to solve technical
optimization problems where no analytical objective function is usually available.
The general Evolution Strategy scheme is the following:
1. Creation of the initial population;
2. Evaluation of the individuals;
3. Selection of the best individual(s);
4. Recombination;
5. Mutation;
6. Evaluation of the individuals;
7. Return to step 3 until the required number of generations is reached.
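The scheme above can be sketched as a minimal elitist ES loop. This is an illustrative sketch, not the modeFRONTIER implementation: the population sizes, bounds, and fixed mutation width `sigma` are assumptions, and a single scalar objective is minimized for simplicity.

```python
import random

def evolution_strategy(evaluate, dim, mu=5, lam=20, generations=50, sigma=0.1):
    """Minimal (mu + lam)-style ES loop following steps 1-7 above."""
    # 1. Initial population creation (bounds [-1, 1] are an assumption)
    population = [[random.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(mu)]
    # 2. Evaluation of the initial individuals
    scored = [(evaluate(ind), ind) for ind in population]
    for _ in range(generations):
        # 3. Selection of the best mu individuals as parents
        scored.sort(key=lambda pair: pair[0])
        parents = [ind for _, ind in scored[:mu]]
        children = []
        for _ in range(lam):
            # 4. Recombination: each gene copied from one of two random parents
            a, b = random.sample(parents, 2)
            child = [random.choice(genes) for genes in zip(a, b)]
            # 5. Mutation: Gaussian perturbation of every gene
            child = [g + random.gauss(0.0, sigma) for g in child]
            children.append(child)
        # 6. Evaluation of the children; elitist "+" selection keeps the
        #    parents in the pool for the next selection step
        scored = [(evaluate(ind), ind) for ind in parents + children]
    # 7. Stop after the required number of generations
    scored.sort(key=lambda pair: pair[0])
    return scored[0]
```

For example, minimizing the sphere function `sum(x_i**2)` with this sketch drives the best fitness towards zero over the generations.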
Selection of the best results may be performed either on the set of children only
or on the combined set of parents and children. The first option, usually
represented with the notation (μ, λ)-ES, can "forget" some good results when all
the children are worse than their parents. The second option, represented by the
notation (μ + λ)-ES, applies a kind of elitist strategy for the selection.
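The difference between the two selection schemes can be illustrated with a short sketch, assuming fitness is minimized and individuals are plain values:

```python
def comma_selection(parents, children, mu, fitness):
    # (mu, lambda)-ES: new parents are drawn from the children only,
    # so a good parent is "forgotten" if every child is worse.
    return sorted(children, key=fitness)[:mu]

def plus_selection(parents, children, mu, fitness):
    # (mu + lambda)-ES: elitist, selects from parents and children combined,
    # so the best solution found so far is never lost.
    return sorted(parents + children, key=fitness)[:mu]
```

With a parent `1.0` and worse children `[3.0, 5.0]` under fitness `abs`, comma selection keeps `3.0` while plus selection retains the parent `1.0`.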
The best solutions may be identified in different ways: the implementation of
ES provided in modeFRONTIER is capable of approximating the Pareto set in
multi-objective optimization by using the non-dominated sorting and crowding-distance
technique as done in NSGA-II.
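The core of that technique is the Pareto dominance test; a minimal sketch of extracting the non-dominated front follows (the full NSGA-II procedure also ranks the remaining fronts and breaks ties by crowding distance, which is omitted here). Objectives are assumed to be minimized and stored as tuples:

```python
def dominates(a, b):
    # a dominates b if a is no worse in every objective
    # and strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(points):
    # The points dominated by no other point approximate the Pareto set
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

For instance, among the objective vectors (1, 5), (2, 2), (5, 1) and (4, 4), only (4, 4) is dominated (by (2, 2)), so the other three form the front.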
The main source of variation is a mutation operator based on a normal distribution.
The standard deviation of this distribution changes during the generations in an
adaptive manner. Each input variable has its own deviation with an initial and a
minimal value that can be arbitrarily tuned.
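A sketch of such a self-adaptive Gaussian mutation is shown below. The learning rate `tau` and the floor `sigma_min` are illustrative names for the tunable initial and minimal values mentioned above, not parameters taken from modeFRONTIER:

```python
import math
import random

def gaussian_mutation(individual, sigmas, tau=0.2, sigma_min=1e-3):
    """Self-adaptive Gaussian mutation: each input variable carries its own
    standard deviation, which is itself perturbed log-normally every call
    and clipped at a tunable minimum (tau and sigma_min are illustrative)."""
    new_sigmas = [max(sigma_min, s * math.exp(tau * random.gauss(0.0, 1.0)))
                  for s in sigmas]
    mutated = [x + random.gauss(0.0, s) for x, s in zip(individual, new_sigmas)]
    return mutated, new_sigmas
```

Because each variable keeps its own deviation, the search can shrink steps along converged dimensions while still exploring the others.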
A completely different operator has been introduced for categorical variables in
the context of MULTICUBE. If such a variable is selected for mutation, its value is
changed by following a uniform distribution (i.e. the choice is completely random),
since locality has no meaning for categories. An adaptive strategy is applied to
the probability of mutating each variable.
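A minimal sketch of this categorical operator, assuming the per-variable mutation probability `p_mut` is supplied (and, as described above, adapted over the generations by the surrounding algorithm):

```python
import random

def mutate_categorical(value, categories, p_mut):
    """With probability p_mut, draw a new value uniformly from the admissible
    categories; locality has no meaning, so the choice is completely random."""
    if random.random() < p_mut:
        return random.choice(categories)
    return value
```

Note that a uniform draw may return the current value again; some implementations instead draw from the remaining categories, which is a design choice the text does not specify.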
A discrete recombination operator is a second source of variability. It resembles the
classical crossover operator, where information coming from two different parents