far solution does not improve thereafter. F_avg represents the average objective function value over five runs.
Table 5.8 shows the computational results of the DE algorithm without the local search methods, together with the RKGA results adopted from Snyder & Daskin [26]. As seen in Table 5.8, the DE results are competitive with the RKGA of Snyder & Daskin [26], even though a two-sided paired t-test favors the RKGA. However, our objective is simply to show how a continuous optimization algorithm can be used to solve a combinatorial optimization problem. We would like to point out that, with better parameter tuning, the DE results could be further improved. In addition, our observations reveal that the performance of the DE algorithm is strongly affected by the mutation equation [14]. After applying the mutation operator, most dimension values fall outside the search limits (the cluster sizes). To force them back into the search range, they are randomly re-initialized between the search bounds so that the DE algorithm keeps searching for nodes from the predefined clusters. However, this random re-initialization causes the DE algorithm to conduct a random search, which ruins its learning ability. Based on our observations, using different levels of the crossover and mutation probabilities, as well as other mutation operators, did not have much positive effect on solution quality.
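For illustration, a minimal sketch of this mutation-plus-repair step is given below, assuming the standard DE/rand/1 mutation. The function name mutate_and_repair, the scaling factor F, and the per-dimension bound arrays lower and upper are illustrative assumptions, not the exact implementation used in this chapter.

```python
import numpy as np

def mutate_and_repair(pop, F, lower, upper, rng):
    """DE/rand/1 mutation followed by random re-initialization of any
    dimension that leaves its bounds (a sketch of the repair scheme
    described in the text; names and bounds are illustrative)."""
    n_pop, n_dim = pop.shape
    mutants = np.empty_like(pop)
    for i in range(n_pop):
        # Pick three mutually distinct individuals, all different from i.
        r1, r2, r3 = rng.choice([j for j in range(n_pop) if j != i],
                                size=3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])          # DE/rand/1 mutation
        # Dimensions that left the search range (here, the cluster-size
        # bounds) are randomly re-initialized between the bounds.
        out = (v < lower) | (v > upper)
        v[out] = rng.uniform(lower[out], upper[out])
        mutants[i] = v
    return mutants

# Hypothetical usage: 20 vectors over 5 dimensions bounded by cluster sizes.
rng = np.random.default_rng(0)
lower, upper = np.zeros(5), np.array([4.0, 7.0, 3.0, 9.0, 5.0])
pop = rng.uniform(lower, upper, size=(20, 5))
mutants = mutate_and_repair(pop, 0.9, lower, upper, rng)
```

Because a re-initialized dimension no longer depends on the parent vectors, the resulting trial vector carries little information from the population, which is precisely the loss of learning ability described above.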
Despite these disadvantages, the inclusion of the local search improvement heuristics of Snyder & Daskin [26] made the DE algorithm reasonably competitive with the RKGA. The computational results with the local search heuristics are presented in Table 5.9.
As seen in Table 5.9, the DE algorithm with the local search improvement heuristics was able to generate results competitive with the RKGA of Snyder & Daskin [26]. However, as seen in both Tables 5.8 and 5.9, this success was mainly due to the use of the local search improvement heuristics. A two-sided paired t-test on the relative percent deviations in Table 5.9 indicates that DE and RKGA were statistically equivalent (p-value = 0.014). However, DE was computationally more expensive than the RKGA.
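For completeness, the paired comparison above can be carried out along the following lines; the deviation arrays are hypothetical placeholders standing in for the per-instance relative percent deviations of Table 5.9, not the actual values.

```python
import numpy as np
from scipy import stats

# Relative percent deviation of each algorithm from the best known solution,
# per instance: 100 * (F_alg - F_best) / F_best.  Placeholder values only.
dev_de   = np.array([0.00, 0.05, 0.12, 0.30, 0.08])   # hypothetical DE deviations
dev_rkga = np.array([0.00, 0.02, 0.15, 0.25, 0.10])   # hypothetical RKGA deviations

# Two-sided paired t-test on the per-instance deviations.
t_stat, p_value = stats.ttest_rel(dev_de, dev_rkga)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```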
5.5 Conclusions
A continuous DE algorithm is presented to solve the GTSP on a set of benchmark instances ranging from 51 (11) to 442 (89) nodes (clusters). The main contribution of this chapter is the use of a continuous DE algorithm to solve a combinatorial optimization problem. For this purpose, a unique solution representation is presented and the SPV rule is used to determine the tour. The pure DE algorithm without the local search heuristics is competitive with the RKGA. However, the inclusion of the local search heuristics made the DE algorithm very competitive with the RKGA of Snyder & Daskin [26].
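As a brief illustration of how a continuous position vector is decoded into a visiting order, the sketch below applies the SPV (smallest position value) rule: the dimension holding the smallest value is visited first, the next smallest second, and so on. This is a simplified, assumed decoder; the chapter's exact coupling between the continuous values, the node chosen within each cluster, and the cluster-size bounds is not reproduced here.

```python
import numpy as np

def spv_order(position):
    """SPV rule: rank the dimensions by their continuous values; the
    resulting permutation of dimension indices gives the visiting order."""
    return np.argsort(position)

# Example with a 5-dimensional position vector (one dimension per cluster).
x = np.array([1.8, -0.6, 2.3, 0.1, -1.2])
print(spv_order(x))   # -> [4 1 3 0 2]
```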
As mentioned before, with better parameter tuning the DE results could have been further improved. However, our observations reveal that the performance of the DE algorithm is strongly affected by the mutation equation [14]. After applying the mutation operator, most parameter values fall outside the search limits (the cluster sizes). To force them back into the search range, they are randomly re-initialized between the search bounds so that the DE algorithm keeps searching for nodes from the predefined clusters. However, this random re-initialization causes the DE algorithm to conduct a random search, which ruins its learning ability.