Table 9.2: Results of test cases for benchmark suite 0 (20 runs of each test case).

| Test case  | Simulated time | Tiles | Runtime avg (s) | Runtime min (s) | Runtime max (s) | Runtime (normalized) |
|------------|----------------|-------|-----------------|-----------------|-----------------|----------------------|
| TC P 01    | 100            | 0     | 36.71           | 36.08           | 37.38           | 1.00                 |
| TC P 02    | 200            | 37    | 71.32           | 70.50           | 72.46           | 1.94                 |
| TC P 03    | 300            | 60    | 107.87          | 106.57          | 109.56          | 2.94                 |
| TC P 04    | 400            | 80    | 144.56          | 142.12          | 146.29          | 3.94                 |
| TC P 05    | 500            | 94    | 182.25          | 180.48          | 184.12          | 4.96                 |
| TC P 06    | 1 000          | 181   | 373.23          | 368.55          | 377.47          | 10.17                |
| TC P 07    | 2 000          | 322   | 808.42          | 794.10          | 817.77          | 22.02                |
| TC P 01 b  | 100            | 0     | 36.70           | 36.28           | 36.99           | 1.00                 |
| TC P 02 b  | 200            | 37    | 71.30           | 70.00           | 72.07           | 1.94                 |
| TC P 03 b  | 300            | 60    | 108.19          | 106.96          | 110.24          | 2.95                 |
| TC P 04 b  | 400            | 80    | 144.29          | 141.46          | 146.96          | 3.93                 |
| TC P 05 b  | 500            | 94    | 182.53          | 178.71          | 185.19          | 4.97                 |
| TC P 06 b  | 1 000          | 181   | 373.72          | 369.49          | 379.49          | 10.24                |
| TC P 07 b  | 2 000          | 322   | 813.31          | 800.02          | 821.40          | 22.16                |
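The normalized runtime column is the average runtime of each test case divided by the average runtime of the baseline case TC P 01. A minimal sketch of this normalization, using the averages from Table 9.2:

```python
# Average runtimes in seconds for TC P 01 .. TC P 07 (Table 9.2).
avg_runtimes = [36.71, 71.32, 107.87, 144.56, 182.25, 373.23, 808.42]

# Normalize against the baseline case TC P 01.
baseline = avg_runtimes[0]
normalized = [round(t / baseline, 2) for t in avg_runtimes]

print(normalized)  # [1.0, 1.94, 2.94, 3.94, 4.96, 10.17, 22.02]
```

The result reproduces the "Runtime (normalized)" column of the table.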
9.5.3 Benchmark suites 1 and 2: Dependency between number of agents and runtime
Benchmark suites 1 and 2 are intended to measure the dependency between the number of agents and runtime. Whereas suite 1 uses smaller numbers of agents (up to 200, see Table 9.3), suite 2 uses up to 10 000 agents (see Table 9.4).
As argued in chapter 8.1.2, runtime should scale quadratically with the number of agents (due to constraint evaluation). In order to show the influence of constraint evaluation, both suites are defined without a constraint and with a dummy constraint (indicated by the suffix 'b').
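The expected quadratic scaling follows from evaluating constraints over pairs of agents: with n agents, a pairwise constraint is checked for all n·(n−1)/2 unordered pairs in each step. A minimal sketch of this evaluation loop, assuming a hypothetical pairwise constraint interface (the function and parameter names are illustrative, not taken from the simulator):

```python
import itertools

def evaluate_pairwise_constraints(agents, constraint):
    """Check a constraint for every unordered pair of agents.

    With n agents this performs n*(n-1)/2 evaluations, which is
    why runtime is expected to grow quadratically with n.
    """
    violations = 0
    for a, b in itertools.combinations(agents, 2):
        if not constraint(a, b):
            violations += 1
    return violations

# A dummy constraint that always holds, mirroring the 'b' suite
# variants: it adds no violations but still forces the quadratic
# number of evaluations.
dummy = lambda a, b: True

print(evaluate_pairwise_constraints(range(200), dummy))  # 0
```

With 200 agents (the maximum of suite 1) this already amounts to 19 900 constraint evaluations per step, which is the effect the 'b' variants are designed to expose.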
Setup suite 1
All benchmarks in suite 1 operate on the same environment env04:
Size: 1 000 × 1 000 cells