Using a straightforward approach, exploration studies can be performed by testing all possible parameter configurations. In the case of continuous variables, a step size for discretization has to be chosen or a selection of parameter values to be investigated has to be made. Optimization studies can be performed by coupling the simulation with optimization methods (e.g., metaheuristics for stochastic combinatorial optimization [1]). For comparison studies, different approaches in the fields of ranking, selection, and multiple comparisons have been introduced (e.g., [16]).
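As a minimal sketch of the exhaustive exploration approach, the following snippet enumerates all configurations of a small, discretized parameter space; the parameter names and step sizes are hypothetical and only illustrate the combinatorial growth, they are not taken from the study.

```python
from itertools import product

# Hypothetical parameter space: continuous parameters are discretized
# with a chosen step size, discrete parameters are listed explicitly.
buffer_sizes = [5, 10, 15, 20]                      # discrete parameter
arrival_rates = [0.5 + 0.1 * i for i in range(6)]   # continuous, step size 0.1
dispatch_rules = ["FIFO", "SPT", "EDD"]             # categorical parameter

# Exhaustive exploration: every combination is a candidate configuration.
configurations = list(product(buffer_sizes, arrival_rates, dispatch_rules))
print(f"{len(configurations)} configurations to simulate")  # 4 * 6 * 3 = 72
```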
In our work, we focus on discrete-event simulation, where various random variables can influence simulation runs. In production scenarios, for instance, randomness can affect the delivery times of parts, the duration of processes, and the breakdowns of machines. Thus, multiple runs of the same simulation model with identical parameter configurations but different seed values for the random number generators usually lead to varying simulation runs and, consequently, to different results for the corresponding observed measurements (e.g., manufacturing output). Technically, this situation can be described as a stochastic process with a (usually unknown) probability distribution and expected value for the target function. With this situation in mind, a meaningful simulation study has to perform multiple runs of the same simulation setting (i.e., model and parameter configuration) with different random number seed values in order to draw conclusions about the configurations' qualities. These multiple runs of the same parameter setting are called replications.
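The effect of the seed on replication results can be illustrated with a small sketch. The function `simulate_output` below is a hypothetical stand-in for a single discrete-event simulation run (the actual simulation model is not part of this section); it merely produces a noisy manufacturing output that depends on the seed.

```python
import random

def simulate_output(config: dict, seed: int) -> float:
    """Hypothetical stand-in for one simulation run: the observed
    measurement (e.g., manufacturing output) varies with the seed."""
    rng = random.Random(seed)
    base = 100.0 * config["machines"]   # deterministic part of the model
    noise = rng.gauss(0.0, 5.0)         # randomness (breakdowns, delivery times, ...)
    return base + noise

config = {"machines": 3}

# Replications: identical parameter configuration, different seed values.
replications = [simulate_output(config, seed) for seed in range(10)]
print(replications)  # ten differing results for the same configuration
```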
The number of replications and their results are highly relevant for the computation of statistical evidence. Depending on these results, mean values and confidence intervals of measurement variables can be computed, or statistical tests can be applied in order to check whether the experimental data supports the hypothesis that one variant leads to better results than another. Obviously, the more replications are performed, the higher the confidence in the statistical results. However, complex simulation models can lead to costly execution times for single simulation runs, and a large parameter space might prohibit performing a large number of replications for each parameter configuration.
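As a sketch of how such evidence can be derived from replication results, the following snippet computes a mean with a t-based confidence interval and applies Welch's t-test to compare two variants. The replication values are fabricated for illustration, and the concrete test used in a given study may differ.

```python
import numpy as np
from scipy import stats

# Hypothetical replication results (observed measurements) for two variants.
variant_a = np.array([101.2, 98.7, 103.5, 99.9, 102.1, 100.4, 97.8, 101.9])
variant_b = np.array([104.8, 106.1, 103.9, 107.2, 105.0, 104.3, 106.8, 105.5])

def mean_confidence_interval(data, confidence=0.95):
    """Mean and t-based confidence interval of one variant's replications."""
    mean = data.mean()
    sem = stats.sem(data)
    half_width = sem * stats.t.ppf((1 + confidence) / 2, len(data) - 1)
    return mean, (mean - half_width, mean + half_width)

print(mean_confidence_interval(variant_a))
print(mean_confidence_interval(variant_b))

# Welch's t-test: does the data support that one variant is better than the other?
t_stat, p_value = stats.ttest_ind(variant_b, variant_a, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```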
The approach presented here aims at estimating whether certain statistical results can be expected at all and, if so, when this is likely to be the case, i.e., how many replications are expected to be needed in order to satisfy certain statistical properties. In this work, we focus on situations where two different variants are to be compared by a statistical test. A similar approach could be developed to estimate when a confidence interval of a measurement is expected to be accurate enough for the expert performing the simulation study.
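A conventional baseline for estimating the required number of replications is a power analysis based on an effect size observed in pilot replications. The sketch below illustrates that idea with statsmodels; it is not the estimation procedure proposed in this work, and the pilot values are fabricated.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

# Hypothetical pilot replications of two variants.
pilot_a = np.array([101.2, 98.7, 103.5, 99.9, 102.1])
pilot_b = np.array([104.8, 106.1, 103.9, 107.2, 105.0])

# Cohen's d estimated from the pilot data (pooled standard deviation).
pooled_sd = np.sqrt((pilot_a.var(ddof=1) + pilot_b.var(ddof=1)) / 2)
effect_size = (pilot_b.mean() - pilot_a.mean()) / pooled_sd

# Replications per variant needed to detect this effect with a two-sample t-test.
required_n = TTestIndPower().solve_power(effect_size=effect_size,
                                         alpha=0.05, power=0.8)
print(f"estimated replications per variant: {int(np.ceil(required_n))}")
```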
4 Significance Estimation
In this section, we present our approaches to significance estimation. For initial studies, we have abstracted from simulation runs and use probability distributions and randomly drawn samples from these distributions for a first investigation of what the data can look like. We assume that the observed measurement variables of different simulation runs also follow certain distributions. Using well-known probability distributions allows for structured investigations of our approaches, since we can easily generate samples from distributions with known properties. Evaluations with data generated by simulation models can