Separate compilation allows retention of program
modules in a library. The modules are linked into the
software system, as appropriate, by the linking loader.
The distinction between independent compilation and
separate compilation is that type checking across
compilation-unit interfaces is performed by a separate
compilation facility, but not by an independent compi-
lation facility. User-defined data types, in conjunction
with strong type checking, allow the programmer to
model and to segregate entities from the problem domain
using a different data type for each type of problem
entity. Data encapsulation defines composite data objects in terms of the operations that can be performed on them; the details of data representation and data manipulation are hidden by the encapsulation mechanism. Data encapsulation differs from abstract data types in that encapsulation provides only one instance of an entity.
Data abstraction provides a powerful mechanism for
writing well-structured, easily modified programs. The
internal details of data representation and data manipu-
lation can be changed at will and, provided that the
interfaces of the manipulation procedures remain the
same, other components of the program will be
unaffected by the change, except perhaps for changes in
performance characteristics and capacity limits. Using
a data-abstraction facility, data entities can be defined
in terms of predefined types, user-defined types, and
other data abstractions, thus permitting systematic
development of hierarchical abstractions.
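As an illustrative sketch (not taken from the reference itself), the data-abstraction idea can be shown with a stack abstraction in Python: clients depend only on the operation interface, so the hidden representation can later be changed without affecting them.

```python
class Stack:
    """Abstract data type: clients see only the operations
    (push, pop, peek, is_empty), not the representation."""

    def __init__(self):
        self._items = []  # representation detail, hidden by convention

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def peek(self):
        if not self._items:
            raise IndexError("peek at empty stack")
        return self._items[-1]

    def is_empty(self):
        return not self._items


# The internal list could be swapped for a linked structure; as long as
# the interfaces of the manipulation procedures remain the same, callers
# are unaffected except perhaps in performance or capacity.
s = Stack()
s.push(1)
s.push(2)
print(s.pop())   # 2
print(s.peek())  # 1
```

Because `Stack` is itself a user-defined abstraction, it can in turn serve as a building block for other abstractions, giving the hierarchical development the text describes.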
Software risk analysis
Software risk analysis techniques identify software hazards and safety-critical single- and multiple-failure sequences; determine software safety requirements, including timing requirements; and analyze and measure software for safety. While functional requirements often focus on what the system shall do, risk requirements must also include what the system shall not do, including means of eliminating and controlling system hazards and of limiting damage in case of a mishap. An important part of the risk requirements is the specification of the ways in which the software and the system can fail safely and the extent to which failure is tolerable.
Several techniques have been proposed and used for conducting risk analysis, including:
Software hazard analysis
Software fault tree analysis
Real-time logic
Software hazard analysis, like hardware hazard analysis, is the process whereby hazards are identified and categorized with respect to criticality and probability. Potential hazards that must be considered include normal operating modes, maintenance modes, system failure or unusual incidents in the environment, and errors in human performance. Once hazards are identified, they are assigned a severity and probability. Severity is a qualitative measure of the worst credible mishap that could result from the hazard; probability refers to the frequency with which the hazard occurs. Once the probability and severity are determined, a control mode (that is, a means of reducing the probability and/or severity of the associated potential hazard) is established. Finally, a control method or methods are selected to achieve the associated control mode.
Real-time logic is a process whereby the system designer first specifies a model of the system in terms of events and actions. The event-action model describes the data dependency and temporal ordering of the computational actions that must be taken in response to events in a real-time application. The model can be translated into Real-Time Logic formulas, which are then transformed into predicates of Presburger arithmetic with uninterpreted integer functions. Decision procedures are then used to determine whether a given risk assertion is a theorem derivable from the system specification. If so, the system is safe with respect to the timing behavior denoted by that assertion, as long as the implementation satisfies the requirements specification. If the risk assertion is unsatisfiable with respect to the specification, then the system is inherently unsafe, because successful implementation of the requirements will cause the risk assertion to be violated. Finally, if the negation of the risk assertion is satisfiable under certain conditions, then additional constraints must be imposed on the system to ensure its safety.
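The severity/probability categorization described earlier, in which each hazard is assigned a qualitative severity and probability whose combination determines whether a control mode is required, can be sketched as follows. The category names, numeric codes, and the example risk matrix here are illustrative assumptions (loosely resembling common system-safety practice), not values given in the text.

```python
# Hypothetical hazard categorization sketch; categories and the risk
# matrix below are assumptions for illustration, not from the text.
SEVERITY = {"catastrophic": 1, "critical": 2, "marginal": 3, "negligible": 4}
PROBABILITY = {"frequent": "A", "probable": "B", "occasional": "C",
               "remote": "D", "improbable": "E"}

# Example risk matrix: (severity, probability) pairs assumed to demand
# a control mode before the design can be accepted.
UNACCEPTABLE = {(1, "A"), (1, "B"), (1, "C"), (2, "A"), (2, "B"), (3, "A")}


def risk_index(severity, probability):
    """Combine qualitative severity and probability into a single
    index string such as '1C' (catastrophic / occasional)."""
    return f"{SEVERITY[severity]}{PROBABILITY[probability]}"


def needs_control_mode(severity, probability):
    """True if this hazard's risk index falls in the unacceptable
    region of the matrix, so a control mode must be established."""
    return (SEVERITY[severity], PROBABILITY[probability]) in UNACCEPTABLE


print(risk_index("catastrophic", "occasional"))       # 1C
print(needs_control_mode("catastrophic", "occasional"))  # True
print(needs_control_mode("marginal", "remote"))          # False
```

The point of the sketch is the workflow, not the particular table: once the index is computed, hazards in the unacceptable region drive the selection of a control mode and, in turn, the control methods that achieve it.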
Software metrics
Software must be measured in order to obtain a true indication of quality and reliability. Quality attributes must be related to specific product requirements and must be quantifiable. These aims are
accomplished through the use of metrics. Software-
quality metrics are defined as quantitative measures of an
attribute that describes the quality of a software product
or process. Using metrics for improving software quality,
performance, and productivity begins with a docu-
mented software development process that will be
improved incrementally. Goals are established with re-
spect to the desired extent of quality and productivity
improvements over a specified time period. These goals
are derived from, and are consistent with, the strategic
goals for the business enterprise.
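As a small sketch of a quantitative quality measure of the kind described above, defect density is a commonly used metric; the function name, inputs, and the goal threshold here are assumptions for illustration, not values from the text.

```python
# Illustrative sketch: defect density as a quantitative quality metric,
# compared against a goal set for an improvement period. The threshold
# and sample figures are assumed, not taken from the text.
def defect_density(defects_found, ksloc):
    """Defects per thousand source lines of code (KSLOC)."""
    if ksloc <= 0:
        raise ValueError("ksloc must be positive")
    return defects_found / ksloc


goal = 0.5  # assumed target: at most 0.5 defects per KSLOC this period
measured = defect_density(defects_found=12, ksloc=30.0)  # 0.4
print(f"{measured:.2f} defects/KSLOC; goal met: {measured <= goal}")
```

Tracking such a metric release over release is what lets the documented development process be improved incrementally against the stated goals.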
Metrics that are useful to the specific objectives of the
program, that have been derived from the program
requirements, and that support the evaluation of the