2.1 Record Keeping
A problem with the IHC algorithm's constructive multi-start search is that the same solution may be reached from multiple starting configurations, so many of the tours generated are duplicates of tours already explored in earlier iterations of the algorithm. One way to combat this problem is to use record-keeping mechanisms that store some or all of the previously computed tours: if a newly generated tour has already been considered, the algorithm restarts from a new starting configuration. We investigate two models of record keeping [4, 8]: 1) Unbounded Memory and 2) Software Emulation of a Cache [13]. Our experiments use set-associative caches with small set sizes.
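As an illustration of the cache-emulation model, the sketch below shows one way a set-associative tour cache with a small set size and LRU replacement might be implemented; the class and parameter names (TourCache, num_sets, set_size) are our own assumptions rather than details taken from [13].

```python
# Illustrative sketch only; names and parameters are hypothetical, not from the cited work.
from collections import OrderedDict

class TourCache:
    """Set-associative cache of previously explored tours.

    A canonical key for each tour is hashed to one of `num_sets` sets; each set
    holds at most `set_size` entries (a small set size) and evicts its least
    recently used entry when full.
    """

    def __init__(self, num_sets=1024, set_size=4):
        self.num_sets = num_sets
        self.set_size = set_size
        self.sets = [OrderedDict() for _ in range(num_sets)]

    @staticmethod
    def _key(tour):
        # Canonical form: rotate so the smallest city comes first and pick the
        # lexicographically smaller direction, so equivalent tours collide.
        i = tour.index(min(tour))
        fwd = tuple(tour[i:] + tour[:i])
        rev = tuple([fwd[0]] + list(fwd[:0:-1]))
        return min(fwd, rev)

    def seen_before(self, tour):
        """Return True if the tour was already recorded; otherwise record it."""
        key = self._key(tour)
        s = self.sets[hash(key) % self.num_sets]
        if key in s:
            s.move_to_end(key)      # refresh LRU position
            return True
        if len(s) >= self.set_size:
            s.popitem(last=False)   # evict the least recently used entry
        s[key] = True
        return False
```

An IHC driver would call seen_before after constructing each tour and restart from a new starting configuration whenever it returns True; the Unbounded Memory model corresponds to the same idea with no eviction.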
3 Experiments
All experiments were performed using a set of TSP graphs from TSPLIB [14] and random graphs with at most 100 vertices. The set of graphs consists of the following (a sketch of how such instances might be generated appears after the list):

- 10 randomly generated Euclidean graphs, 100 vertices each
- 10 random non-planar graphs, 100 vertices each
- 18 benchmark instances from TSPLIB [14], with vertex counts of 16 to 100
- 400 random Euclidean graphs, 100 vertices each
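The random instances themselves are not reproduced here, but the following minimal sketch shows one plausible way such inputs could be generated: Euclidean instances as uniformly random points in a square, and non-planar instances as symmetric matrices of uniformly random edge weights. The function names and parameter values are assumptions, not the authors' generator.

```python
# Hypothetical instance generators for illustration only.
import random

def random_euclidean_instance(n=100, size=1000.0, seed=None):
    """n points in a square; edge weight = Euclidean distance (planar, Euclidean case)."""
    rng = random.Random(seed)
    pts = [(rng.uniform(0, size), rng.uniform(0, size)) for _ in range(n)]
    return [[((pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2) ** 0.5
             for j in range(n)] for i in range(n)]

def random_nonplanar_instance(n=100, max_weight=1000, seed=None):
    """Symmetric matrix of uniformly random weights, with no geometric structure."""
    rng = random.Random(seed)
    dist = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dist[i][j] = dist[j][i] = rng.randint(1, max_weight)
    return dist
```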
3.1 TSP Solution Quality
The experimental results are summarized in Table 1. Each row of the table shows, for one experiment, the average quality of the solutions found by each algorithm, expressed as a percentage of the best tour found by Concorde; lower numbers indicate better quality, and 100.0 means the best tour was found. Note that the best tour may be found by more than one construction algorithm.
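For concreteness, the quality figure for a single run can be read as the tour length expressed as a percentage of the length of Concorde's best tour; the helper below is a hypothetical illustration of that calculation, not code from the study.

```python
# Hypothetical helper illustrating the quality metric reported in Table 1.
def tour_quality(tour_length, concorde_best_length):
    # 100.0 means the tour matches Concorde's best; larger values are worse.
    return 100.0 * tour_length / concorde_best_length

# Example: a tour of length 1021 against a best tour of length 1000 scores 102.1.
print(tour_quality(1021, 1000))  # prints 102.1
```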
In the first set of experiments the IHC algorithm is executed on the 10 randomly generated non-planar graphs. The average performance of the algorithms relative to the Concorde results, which are known to be optimal for this set of graphs, is shown in the first row of Table 1; lower numbers are better, with 100.0 being ideal. The second experiment runs the IHC algorithm with each of the construction algorithms over the ten randomly generated planar Euclidean graphs; its results are summarized in the second row of Table 1. The third experiment uses the TSPLIB benchmarks [14] as the input for the construction algorithms.
The results of the experiments show that the nearest neighbor algorithm performs the worst of the six construction algorithms, and that the greedy-based approaches perform well regardless of the structure of the input. Similarly, the Clarke-Wright
Table 1. Average performance of the construction algorithms.

                          Greedy    Jump      GRASP     Random    CW        NN
Random graphs             100.08    100.90    101.42    119.62    119.08    149.19
Random Euclidean graphs   100.83    100.83    100.50    100.29    102.80    110.43
TSPLIB benchmarks         100.66    100.37    100.73    100.18    101.75    109.72