Applications of evolutionary algorithms specifically to the GTSP were few in
the literature until Snyder & Daskin [30] proposed a random key genetic algorithm
(RKGA) to solve this problem. In their RKGA, a random key representation is used, and
solutions generated by the RKGA are improved by two local search heuristics,
namely 2-opt and "swap". In the search process, their "swap" procedure is considered
a speed-up method which removes a node j from a tour and inserts each candidate
node k from the corresponding cluster in between an edge ( u , v ) of the tour
(i.e., between the node u and the node v ). The insertion is based on a modified nearest-
neighbor criterion. These two local search heuristics are embedded separately in
the level-I improvement and level-II improvement procedures.
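The "swap" move described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: for simplicity it tries every insertion position exhaustively, whereas the paper uses a modified nearest-neighbor criterion, and the names (`swap_move`, `tour_cost`, `dist`) are assumptions.

```python
def tour_cost(tour, dist):
    """Cost of a cyclic tour; dist[a][b] is the edge cost from a to b."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def swap_move(tour, cluster_nodes, dist):
    """Try replacing the tour node that represents one cluster.

    tour          -- list of node ids, one per cluster, in visiting order
    cluster_nodes -- candidate nodes of the cluster whose representative
                     we want to replace (includes the current node)
    Returns an improved tour, or the original tour if no gain is found.
    """
    # locate the node j of this cluster in the tour and remove it
    j = next(n for n in tour if n in cluster_nodes)
    reduced = [n for n in tour if n != j]

    best_tour, best_cost = tour, tour_cost(tour, dist)
    # try every candidate node k in every insertion position (u, v)
    for k in cluster_nodes:
        for i in range(len(reduced) + 1):
            cand = reduced[:i] + [k] + reduced[i:]
            c = tour_cost(cand, dist)
            if c < best_cost:
                best_tour, best_cost = cand, c
    return best_tour
```

In the paper's variant, restricting candidate insertion points via the nearest-neighbor criterion is what makes this a speed-up rather than a full neighborhood scan.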
For each individual in the population, they store the original (pre-improvement) cost
and the final cost after improvements have been made. When a new individual is created,
they compare its pre-improvement cost to the pre-improvement cost of the individual
at position p × N in the previous (sorted) population, where p ∈ [0, 1] is a parameter
of the algorithm (they use p = 0.05 in their implementation). These two improvement
procedures in Snyder & Daskin [30] are implemented as follows:
1. If the new solution is worse than the pre-improvement cost of this individual, the
level-I improvement is applied. That is, one 2-opt exchange and one "swap" move
(assuming profitable ones can be found) are performed and the resulting
individual is stored.
2. Otherwise, the level-II improvement is applied: 2-opt exchanges are executed un-
til no profitable 2-opt can be found, then "swap" moves are carried out
until no profitable swap can be found. This procedure is repeated until no further
improvement is made in a given pass.
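The dispatch between the two improvement levels can be sketched as below. This is an illustrative outline, not the authors' implementation; the names (`improve`, `improve_once`, `improve_fully`, `sorted_costs`) are assumptions.

```python
def improve(new_cost, sorted_costs, improve_once, improve_fully, p=0.05):
    """Choose the improvement depth for a newly created individual.

    new_cost     -- pre-improvement cost of the new individual
    sorted_costs -- pre-improvement costs of the previous population,
                    sorted best first (length N)
    p            -- position parameter in [0, 1]; the paper uses 0.05
    """
    # benchmark: the individual at position p * N of the sorted population
    threshold = sorted_costs[int(p * len(sorted_costs))]
    if new_cost > threshold:
        return improve_once()    # level-I: one 2-opt plus one swap
    return improve_fully()       # level-II: iterate both to a local optimum
```

With p = 0.05, only new solutions already competitive with the top 5% of the previous population receive the expensive level-II treatment.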
The RKGA thus designs the local search to spend more time improving solutions
that seem promising relative to previous solutions than on the others. Both the level-I
and level-II improvements use a "first-improvement" strategy, which means applying
the first improving move that is found, rather than the best improving move in the
neighborhood.
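The first-improvement strategy can be illustrated with a 2-opt pass that accepts the first shortening move it encounters, instead of scanning the whole neighborhood for the best one. A minimal sketch with assumed names, not the paper's code:

```python
def first_improvement_2opt(tour, dist):
    """Return the tour after the first improving 2-opt move, if any.

    dist[a][b] gives the edge cost from a to b; the tour is cyclic.
    """
    n = len(tour)
    cost = lambda t: sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))
    base = cost(tour)
    for i in range(n - 1):
        for j in range(i + 2, n):
            # reverse the segment between positions i+1 and j inclusive
            cand = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
            if cost(cand) < base:
                return cand        # first improvement: accept immediately
    return tour                    # no improving 2-opt move exists
```

A best-improvement variant would finish both loops and return the cheapest candidate; first improvement trades solution quality per move for much cheaper iterations.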
Thereafter, Tasgetiren et al. [34, 35, 36] presented a discrete particle swarm optimiza-
tion (DPSO) algorithm, a genetic algorithm (GA) and a hybrid iterated greedy (HIG)
algorithm, respectively, whereas Silberholz & Golden proposed another GA in [29],
which is denoted as mrOXGA.
Section 2 gives a brief summary of the discrete differential evolution algorithm.
Section 3 provides the details of the local search improvement heuristics. The computa-
tional results on benchmark instances are discussed in Section 4. Finally, Section 5
presents the concluding remarks.
6.2 Discrete Differential Evolution Algorithm
Differential evolution (DE) is one of the latest evolutionary optimization methods,
proposed by Storn & Price [31]. Like other evolutionary algorithms, DE is a population-
based, stochastic global optimizer. The DE algorithm starts by establishing the initial
population. Each individual is an m-dimensional vector with parameter values deter-
mined randomly and uniformly within predefined search ranges. In a DE algorithm,
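The population initialization described above might be sketched as follows; parameter names (`NP`, `lo`, `hi`) are illustrative, not from Storn & Price.

```python
import random

def init_population(NP, lo, hi, seed=None):
    """Build NP individuals, each an m-dimensional real vector.

    lo, hi -- per-dimension lower and upper bounds of the search ranges
    Values are drawn uniformly at random within [lo[d], hi[d]].
    """
    rng = random.Random(seed)
    m = len(lo)
    return [[rng.uniform(lo[d], hi[d]) for d in range(m)]
            for _ in range(NP)]
```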