given statistical probability, though both dynamic and static analyses are unable to provide absolute assurance in all cases. Even techniques like explicit-state model checking can only provide assurance in very small systems, where interaction with the external environment is well understood and controlled.
A key reason these properties are hard to measure accurately stems from sources of (apparent) nondeterminism in today's software systems. Deep causal chains, multiple levels of caching, and unpredictable interactions between threads and their environment lead to an incomprehensibly large number of behavior patterns. The openness of operating systems in their external interactions, such as networks, devices, and other processors, and the use of throughput-efficient scheduling strategies, such as dynamic priority queues and task preemption, are the principal causes of such behavioral uncertainty. Although real-time and safety-critical operating systems try to ensure higher levels of determinism by applying constraints on execution, such as off-line scheduling, resource reservation, and cache disabling, these solutions are often not applicable to general-purpose systems.
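To make the scheduling-induced uncertainty concrete, consider the following minimal Python sketch (the language and scenario are illustrative choices, not drawn from the original text). Two preemptible threads perform an unsynchronized read-modify-write on shared state, so the final counter value varies from run to run even though the program never changes:

```python
import threading

counter = 0  # shared state with no synchronization

def worker(iterations):
    global counter
    for _ in range(iterations):
        tmp = counter   # read
        tmp += 1        # modify
        counter = tmp   # write; a preempting thread may interleave here

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000, but task preemption makes the result run-dependent.
print(counter)
```

Each lost update corresponds to a preemption landing between the read and the write, which is exactly the kind of behavior that off-line scheduling or resource reservation rules out but general-purpose schedulers do not.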
New tools and techniques are needed, therefore, that can assure behaviors of dynamic systems with greater probability. Examples include system execution modeling (SEM) tools (Hill, Schmidt, & Slaby, 2007) that enable software architects, developers, and systems engineers to explore design alternatives from multiple computational and valuation perspectives at multiple lifecycle phases, using multiple quality criteria with multiple stakeholders and suppliers. In addition to validating design rules and checking for design conformance, SEM tools facilitate “what if” analysis of alternative designs to quantify the costs of particular design choices in terms of end-to-end system performance. For example, SEM tools can help empirically determine the maximum number of components a host can handle before performance degrades, the average and worst response times for various workloads, and the ability of alternative system configurations and deployments to meet end-to-end QoS requirements for a particular workload. Although the results of SEM tool analysis are probabilistic, rather than absolute, they still provide valuable information to users.
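The following Python sketch suggests what such an empirical “what if” experiment might look like. It is not based on any actual SEM tool API; `handle_request` and the load model are hypothetical stand-ins used only to show how average and worst response times can be compared across candidate configurations:

```python
import statistics
import time

def handle_request(load):
    """Hypothetical stand-in for one component servicing a request."""
    t0 = time.perf_counter()
    sum(i * i for i in range(load))          # simulated processing work
    return time.perf_counter() - t0

def what_if(component_counts, requests=200, base_load=20_000):
    """Compare response times as more components share one host."""
    for n in component_counts:
        # Assumption: per-request work grows with co-located components.
        times = [handle_request(base_load * n) for _ in range(requests)]
        print(f"{n} components: avg={statistics.mean(times) * 1e3:6.2f} ms, "
              f"worst={max(times) * 1e3:6.2f} ms")

what_if([1, 2, 4, 8])
```

Repeating such an experiment yields exactly the probabilistic, rather than absolute, answers described above: a distribution of response times per configuration rather than a single guaranteed bound.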
Implicit Support for Measurement of Infrastructure Software and Processors
Infrastructure software (such as operating systems, virtual machines, and middleware) and processors increasingly provide measurement logic that collects behavioral information during multithreaded system execution. Although these capabilities are useful, they are often provided as add-ons, rather than being integrated seamlessly into the infrastructure software and processors. As a result, the measurement hooks are often not available when needed, or undue effort is required to configure and optimize them.
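A familiar small-scale analogue of such add-on hooks is decorator-based instrumentation, sketched below in Python (the `measured` decorator and metrics store are hypothetical, chosen only to illustrate the pattern). Because each function must be wrapped explicitly, any call path the developer forgets to annotate goes unmeasured, mirroring the availability problem just described:

```python
import functools
import time
from collections import defaultdict

_metrics = defaultdict(list)  # hypothetical in-process metrics store

def measured(fn):
    """Add-on hook: must be applied by hand to every function of interest."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            _metrics[fn.__name__].append(time.perf_counter() - t0)
    return wrapper

@measured
def dispatch(message):
    return message.upper()   # placeholder for real dispatching work

dispatch("ping")
print(dict(_metrics))        # only explicitly wrapped functions appear here
```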
New tools and techniques are needed, therefore, to provide implicit support for measuring infrastructure software and processors. In particular, the ability to measure and monitor the behavior of the system should be a first-class concern.
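As a rough illustration of what “implicit” measurement can mean, the sketch below uses CPython's built-in profiling hook, `sys.setprofile`, so that every Python-level call and return in the thread is timed by the runtime itself, with no per-function annotation (an interpreter facility used here for illustration, not a proposal from the original text):

```python
import sys
import time

_starts = {}   # frame id -> entry timestamp
_totals = {}   # (filename, function) -> cumulative seconds

def _profiler(frame, event, arg):
    # Invoked by the interpreter on each call/return: measurement is
    # woven into execution rather than bolted onto individual functions.
    if event == "call":
        _starts[id(frame)] = time.perf_counter()
    elif event == "return":
        t0 = _starts.pop(id(frame), None)
        if t0 is not None:
            key = (frame.f_code.co_filename, frame.f_code.co_name)
            _totals[key] = _totals.get(key, 0.0) + (time.perf_counter() - t0)

sys.setprofile(_profiler)   # current thread; threading.setprofile covers new threads

def work():
    return sum(range(1000))

work()
sys.setprofile(None)        # detach the hook before inspecting results
print(_totals)
```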
Total-System Measurement that Relates and Combines Microscopic Measurements to Give a Unified View of System Behavior
The nondeterministic nature of today's large-scale systems is exacerbated by the lack of integration between various microscopic measurement techniques, both in hardware and in software, and by the need for a broader perspective when reasoning about and analyzing end-to-end system behavior.
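To suggest what “relating and combining” microscopic measurements can look like, the following Python sketch merges independently collected hardware, OS, and middleware event streams into one end-to-end timeline (the record formats and event names are invented for illustration, and synchronized clocks across sources are assumed):

```python
import heapq

# Hypothetical per-source records: (timestamp_seconds, source, event)
cpu_counters = [(0.001, "cpu", "cache-miss burst"), (0.004, "cpu", "pipeline stall")]
os_sched_log = [(0.002, "os", "thread A preempted"), (0.005, "os", "thread B resumed")]
mw_trace     = [(0.000, "middleware", "request received"), (0.006, "middleware", "reply sent")]

# Merge the sorted streams into a single unified view of system behavior.
for ts, source, event in heapq.merge(cpu_counters, os_sched_log, mw_trace):
    print(f"{ts * 1e3:7.3f} ms  [{source:<10}] {event}")
```

Even this toy merge surfaces relationships no single stream shows, such as the preemption and stalls that separate a request from its reply.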
This problem is particularly acute in distributed real-time and embedded (DRE) systems that must combine hardware and software components to meet the following challenging requirements: