simulation engines deliver identical simulation results when simulating
the same model.
9.5 Empirical measurements
This section provides empirical measurements of several key performance
metrics. The runtime behavior of simulations of an agent-based model
conforming to the GRAMS reference model is analyzed with several
benchmark suites. The primary performance metric is the runtime, i.e.,
the amount of computing time required to execute a single simulation.
A secondary metric is the total amount of memory used by each
simulation engine to execute a single simulation of a model.
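The two metrics can be illustrated with a minimal Java sketch. This is not the actual benchmark harness; the class name and the placeholder step loop are hypothetical stand-ins for an invocation of the simulation engine. Runtime is taken as elapsed wall-clock time, and memory as the heap currently in use by the JVM:

```java
// Hypothetical sketch of measuring the two benchmark metrics.
// runSimulation() is a placeholder busy loop; a real benchmark
// would execute one complete simulation run instead.
public class BenchmarkSketch {

    static long runSimulation(int steps) {
        long acc = 0;
        for (int t = 0; t < steps; t++) {
            acc += t; // stand-in for executing one simulation step
        }
        return acc;
    }

    public static void main(String[] args) {
        // Primary metric: runtime of a single simulation run.
        long start = System.nanoTime();
        runSimulation(1_000_000);
        long runtimeMs = (System.nanoTime() - start) / 1_000_000;

        // Secondary metric: heap memory currently used by the JVM.
        Runtime rt = Runtime.getRuntime();
        long usedBytes = rt.totalMemory() - rt.freeMemory();

        System.out.println("runtime (ms): " + runtimeMs);
        System.out.println("memory used (bytes): " + usedBytes);
    }
}
```

Note that heap-based memory figures are only approximate, since the garbage collector may reclaim memory at any point during the run.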
All benchmarks are based on the Tileworld model as described in
the next subsection. The following computing infrastructure was used:
Processor: Intel Core i5-650 (4-core processor)
Memory: 4 GB
Software: Windows 7 (64 bit), Java Runtime Environment 1.6.20
(32 bit edition)
In the following subsections, the Tileworld model and the various
benchmark suites are described, the results are presented, and an
interpretation of the results is given.
9.5.1 Simulation model used for benchmarks
The simulation model used for all benchmarks is adapted from the
Tileworld model [103]. This section describes the simulation model
according to the GRAMS reference model.
Macro-level: Simulation time
As simulation time, a discrete time domain is chosen, i.e.,
T = N = {0, 1, 2, ..., t_max}. The simulation time has no direct relation to real