(RKGA) to solve this problem. In their RKGA, a random-key representation is used, and
solutions generated by the RKGA are improved by two local search heuristics, namely
2-opt and "swap". In the search process, their "swap" procedure serves as a speed-up
method: it removes a node j from a tour and tries inserting every candidate node k from
the corresponding cluster between an edge (u, v) of the tour (i.e., between the node u
and the node v). Such insertion is based on a modified nearest-neighbor criterion. These
two local search heuristics are embedded separately in the level-I improvement and
level-II improvement procedures.
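As a concrete illustration, the "swap" move described above might be sketched as follows. This is a hypothetical reconstruction, not Snyder & Daskin's code: it uses plain cheapest insertion in place of their modified nearest-neighbor criterion, and the tour/distance-matrix representation is illustrative.

```python
def swap_move(tour, cluster, dist):
    """Remove the tour node visiting `cluster` and re-insert the cluster
    member whose insertion between some edge (u, v) of the remaining tour
    is cheapest.  Returns the new tour and the change in tour cost
    (negative means an improvement).  Illustrative sketch only."""
    j = next(n for n in tour if n in cluster)   # node currently visiting the cluster
    partial = [n for n in tour if n != j]       # tour with j removed
    i = tour.index(j)
    prev, nxt = tour[i - 1], tour[(i + 1) % len(tour)]
    removed = dist[prev][j] + dist[j][nxt] - dist[prev][nxt]  # cost saved by removing j
    best = None
    for k in cluster:                           # every candidate cluster member
        for e in range(len(partial)):           # every edge (u, v) of the partial tour
            u, v = partial[e], partial[(e + 1) % len(partial)]
            added = dist[u][k] + dist[k][v] - dist[u][v]
            if best is None or added < best[0]:
                best = (added, k, e)
    added, k, e = best
    new_tour = partial[:e + 1] + [k] + partial[e + 1:]
    return new_tour, added - removed
```

For instance, on a four-node instance where the cluster {1, 2} is currently visited through node 1, the move may replace node 1 by node 2 at the cheapest edge and report the resulting cost change.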
For each individual in the population, they store the original (pre-improvement) cost
and the final cost after improvements have been made. When a new individual is created,
they compare its pre-improvement cost to the pre-improvement cost of the individual
at position p × N in the previous (sorted) population, where p ∈ [0, 1] is a parameter
of the algorithm (they use p = 0.05 in their implementation). These two improvement
procedures in Snyder & Daskin [26] are implemented as follows:
1. If the new solution is worse than the pre-improvement cost of this individual, the
level-I improvement is applied. That is, one 2-opt exchange and one "swap" move
(assuming a profitable one can be found) are performed, and the resulting
individual is stored.
2. Otherwise, the level-II improvement is applied. 2-opt exchanges are executed until
no profitable 2-opt can be found, then "swap" moves are carried out until no
profitable swap can be found. This procedure is repeated until a full pass makes no
further improvement.
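The two-level dispatch above can be sketched as follows. Here `two_opt_once` and `swap_once` stand for routines that apply a single profitable move (or return the solution unchanged), and all names are illustrative rather than taken from [26]:

```python
def improve(candidate, benchmark_cost, cost, two_opt_once, swap_once):
    """Apply level-I or level-II improvement depending on how the new
    candidate's cost compares with the pre-improvement cost of the
    individual at position p*N of the previous sorted population
    (`benchmark_cost`).  Illustrative sketch only."""
    if cost(candidate) > benchmark_cost:
        # Level I: one 2-opt exchange and one swap, if profitable ones exist.
        return swap_once(two_opt_once(candidate))
    # Level II: run 2-opts to exhaustion, then swaps to exhaustion, and
    # repeat the pass until neither phase improves the solution.
    while True:
        before = cost(candidate)
        while True:                       # 2-opt phase
            nxt = two_opt_once(candidate)
            if cost(nxt) >= cost(candidate):
                break
            candidate = nxt
        while True:                       # swap phase
            nxt = swap_once(candidate)
            if cost(nxt) >= cost(candidate):
                break
            candidate = nxt
        if cost(candidate) >= before:     # a full pass made no improvement
            return candidate
    return candidate
```

The stubs can be exercised with toy stand-ins (e.g., an integer "solution" whose cost is its own value) to confirm that an expensive candidate receives only one move of each kind, while a cheap one is improved to a local optimum.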
The local search in the RKGA is thus designed to spend more time improving solutions
that appear promising relative to those in the previous population. Both the level-I
and level-II improvements use a "first-improvement" strategy: the first improving
move found is applied, rather than the best improving move in the neighborhood.
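A first-improvement 2-opt pass, for instance, accepts the first shortening move it encounters instead of scanning the whole neighborhood for the best one. A minimal sketch, with an illustrative distance-matrix representation, is:

```python
def tour_cost(tour, dist):
    """Total length of a closed tour under distance matrix `dist`."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_first_improvement(tour, dist):
    """Return the tour after the FIRST profitable 2-opt move found, or the
    tour unchanged if none exists.  A best-improvement variant would
    instead scan all (i, j) pairs and keep the cheapest reversal."""
    n = len(tour)
    base = tour_cost(tour, dist)
    for i in range(n - 1):
        for j in range(i + 2, n):
            # Reverse the segment between positions i+1 and j.
            cand = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
            if tour_cost(cand, dist) < base:
                return cand          # accept immediately: first improvement
    return tour
```

On a small crossing tour, a single first-improvement step already removes the crossing, which is exactly the cheap-and-frequent behavior the level-I improvement relies on.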
Thereafter, Tasgetiren et al. [30, 31, 32] presented a discrete particle swarm opti-
mization (DPSO) algorithm, a genetic algorithm (GA), and a hybrid iterated greedy
(HIG) algorithm, respectively. They hybridized these methods with a local search,
namely a variable neighborhood descent algorithm, to further improve solution qual-
ity; at the same time, they applied speed-up methods for greedy node insertions.
Silberholz & Golden proposed another GA in [25], which is denoted as mrOXGA .
Section 2 gives a brief summary of the discrete differential evolution algorithm.
Section 3 provides the details of the solution representation. Insertion methods are summa-
rized in Section 4. Section 5 gives the details of the local search improvement heuristics.
The computational results on benchmark instances are discussed in Section 6. Finally,
Section 7 presents the concluding remarks.
5.2 Differential Evolution Algorithm
Differential evolution (DE) is a recent evolutionary optimization method proposed by
Storn & Price [27]. Like other evolutionary algorithms, DE is a population-based
stochastic global optimizer. The DE algorithm starts with establishing the initial
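For reference, the classic DE/rand/1/bin scheme of Storn & Price can be sketched as below; the parameter values (F, CR, population size, generation count) are illustrative only, not prescriptions from the text.

```python
import random

def de_minimize(f, dim, bounds, pop_size=20, F=0.5, CR=0.9, gens=200, seed=1):
    """Minimal DE/rand/1/bin sketch: mutate with a scaled difference of two
    random members added to a third, apply binomial crossover, and keep the
    trial only if it is no worse than its parent (greedy selection)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)        # guarantees one mutated gene
            trial = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            fc = f(trial)
            if fc <= cost[i]:                 # greedy one-to-one selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]
```

On a smooth test function such as the sphere, this loop drives the best cost toward zero within a few hundred generations.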