evaluation are: 184 designs (corresponding to about 2% of the design space), 461
(5%), 922 (10%), 1,843 (20%), 2,765 (30%), 3,686 (40%) and 4,608 (50%). Only the
requests of evaluation of new designs were counted, since sometimes the algorithms
request the evaluation of an already evaluated design due to the inherent behavior
of their random engines. In any practical application, the time needed to retrieve the
stored information is incomparably smaller than the time that would be spent by a
new evaluation. Although in this experiment every value is already known from the
previous full factorial exploration, a real optimization process was simulated by
counting each design only once.
Some algorithms occasionally cannot generate new designs when working with
discrete problems if some parameters are not set properly. The chosen benchmark
problem has a small variable space, and in the exploitation phase (where recombi-
nation is usually less effective than in the exploration phase) the algorithms may get
stuck endlessly re-evaluating the same designs. This behavior was observed and was
overcome by increasing the exploration capabilities of the algorithms.
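One way to picture the workaround is a stagnation check: if the optimizer's recent requests are all duplicates of previously evaluated designs, its exploration pressure is boosted. The function below is a hypothetical illustration of that idea (the names, the mutation-rate mechanism, and the thresholds are assumptions, not taken from the source).

```python
# Hedged sketch of the stagnation workaround described above: when the last
# `window` requested designs were all already seen, raise a hypothetical
# mutation rate to push the algorithm back toward exploration.

def maybe_boost_exploration(recent_requests, seen, mutation_rate,
                            window=50, boost=1.5, cap=1.0):
    """Return a possibly increased mutation rate.

    recent_requests: list of requested designs (each a sequence of values)
    seen: set of tuples of already-evaluated designs
    """
    recent = recent_requests[-window:]
    # Boost only when a full window of requests contained no new design.
    if len(recent) == window and all(tuple(d) in seen for d in recent):
        mutation_rate = min(cap, mutation_rate * boost)
    return mutation_rate
```

In this sketch a run that keeps cycling through known designs would see its mutation rate grow geometrically (up to a cap), while a run still discovering new designs is left untouched.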
A last remark concerns the nature of the input variables. They are all discrete, but
none of them is categorical. This choice allows a wider range of algorithms to be
tested fairly; on the other hand, the test cannot fully highlight the improvements
gained with the enhancements described above.
With a small variance, all algorithms reach an ADRS value below 2% after evaluat-
ing 30% of the design space (see Fig. 3.6). This result can be considered very promis-
ing. Variations in the slope of the lines for some algorithms reflect their different
behaviors in successive phases of the optimization process. The clearest example is
MOSA, with its hot and cold phases: MOSA is tuned to reach the peak of its exploita-
tion phase at 50% of the evaluations, so its results are the worst up to 20-30%, while
at the end it is one of the most effective algorithms. APRS shows a similar behavior.
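To make the ADRS numbers concrete, the following is a sketch of one common formulation of the Average Distance from Reference Set: the average, over the reference (true) Pareto points, of the smallest normalized deviation achieved by the approximate front. The exact distance function used in the source experiment is an assumption here.

```python
# Hedged sketch of the ADRS metric (Average Distance from Reference Set)
# for minimization problems: 0 means the approximate front matches the
# reference front exactly; larger values mean a worse approximation.

def adrs(reference_front, approximate_front):
    """Average over reference points of the distance to the closest
    approximate point, using the worst-case relative deviation per point."""
    def delta(ref, approx):
        # Largest relative amount by which `approx` exceeds `ref`
        # on any objective (0 if `approx` dominates or equals `ref`).
        return max(max(0.0, (a - r) / abs(r)) for r, a in zip(ref, approx))

    total = 0.0
    for ref in reference_front:
        total += min(delta(ref, approx) for approx in approximate_front)
    return total / len(reference_front)

# Example with a two-objective (minimization) front:
true_front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
found_front = [(1.0, 4.0), (2.2, 2.2), (4.0, 1.0)]
```

With these illustrative fronts, only the middle reference point is missed (by 10% on each objective), so the ADRS is roughly 0.1/3 ≈ 3.3% — the same order of magnitude as the values plotted in Fig. 3.6.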
It is very difficult to analyze the uniformity and the extent of the partial front
found by the algorithms during the optimization process. The true Pareto front is
[Fig. 3.6 Algorithms performance comparison on the reduced benchmark problem
in terms of ADRS metric [15]. Legend: ES, MFGA, MOGA-II, MOSA, NSGA-II,
MOPSO, APRS; x-axis: percentage of evaluated points (2% to 50%); y-axis: ADRS
(0.02 to 0.14).]