5. Analysis
6. Dissemination
Such a Lifecycle Framework provides a viable checklist of quality dimensions to
consider, based on the preceding methodological principles for social simulation.
Note that verification and validation constitute only two contexts for assessing
quality; as shown below, some of the other contexts involve numerous additional
aspects of quality evaluation.
1. Formulation. Quality can be assessed starting from the formulation of a
research problem that a given social simulation is supposed to solve. A first
set of quality assessments regards research questions: Is the research question
or class of research questions clearly formulated? Is the focal or referent empirical
system well defined? Beyond clarity, is the research question original
and significant? Originality should be supported by complete and well-reasoned
surveys of the extant literature in order to assess scientific progress. Every
computational simulation model is designed to address a research question, so
clarity, originality, and significance are critical. Motivation is a related aspect
of problem formulation. Is the model properly motivated in terms of relevant
extant literature? Or, is the simulation model the very first of its kind? If
so, are there prior statistical or mathematical models in the same domain?
Regrettably, incomplete, poorly argued, or totally missing literature reviews
are rather common in social simulation and computational social science.
2. Implementation. Rendering an abstracted model in code involves numerous
aspects with quality-related implications, starting with aspects of instantiation
selection. Does the code instantiate relevant social theory? Is the underlying
social theory instantiated in an appropriate program or programming language?
Code quality brings up other aspects that may be collectively
referred to as “The Grimson-Guttag Standards” (Guttag, 2013): Is the code
well-written? Is the style safe/defensive? Is it properly commented? Can it
be understood with clarity one year after it was written? In addition, what
type of implementation strategy is used? That is, is the model written in native
code or using a toolkit? If a toolkit is used (Nikolai and Madey, 2009), which
one, why, and how good is the application? Is the choice of code (native or
toolkit) well-justified, given the research questions? In terms of “nuts and
bolts,” quality questions include the following: What is the quality of the
random number generator (RNG), e.g., the Mersenne Twister MT19937
(Luke, 2011) or another PRNG? Which types of data structures are used, given
the semantics? Are driven threshold dynamics used? If so, how are the firing
functions specified (a minimal sketch follows this item)? In terms of algorithmic
efficiency, what is the implementation difficulty of the problem(s) being addressed
by the model? How efficiently does the code implement the main design ideas? In
terms of computational efficiency, how efficiently does the code use computational
resources? This differs from algorithmic efficiency. From the
perspective of architectural design, is the code structured in a proper and
elegant manner commensurate with the research question? In terms of object
ontology, does the model instantiate the object-based ontology of the focal
empirical system?
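
To make the stochasticity and threshold questions above concrete, the following is a
minimal sketch, assuming a Python/NumPy implementation; the ring-shaped neighbourhood,
the seed, the parameter ranges, and the function name fires are invented for
illustration rather than taken from any particular model. The sketch seeds an explicit
MT19937 bit generator, so that the PRNG choice is documented and runs can be replicated,
and it specifies a simple firing function for driven threshold dynamics.

import numpy as np

# Seed an explicit MT19937 bit generator so the PRNG choice and seed are
# documented and simulation runs can be replicated exactly.
rng = np.random.Generator(np.random.MT19937(seed=42))

def fires(thresholds, adopted, neighbours):
    """Threshold ("firing") rule: an agent fires when the fraction of
    adopters among its neighbours meets or exceeds its own threshold."""
    exposure = np.array([adopted[nbrs].mean() if nbrs.size else 0.0
                         for nbrs in neighbours])
    return exposure >= thresholds

# Tiny synthetic population on a ring, purely for illustration.
n = 50
thresholds = rng.uniform(0.0, 0.5, size=n)              # heterogeneous thresholds
neighbours = [np.array([(i - 1) % n, (i + 1) % n]) for i in range(n)]
adopted = np.zeros(n, dtype=bool)
adopted[rng.choice(n, size=3, replace=False)] = True    # random initial adopters

for step in range(20):
    adopted |= fires(thresholds, adopted, neighbours)   # firing agents stay adopted

An equivalent sketch could be written against a toolkit's scheduler and built-in RNG
instead of native code; the quality questions raised above apply either way.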