and thus cannot reason on the whole execution space. A relevant consequence
is the weak relationship between testing effort and software quality, which im-
plies that planning what to test, how to test it and when to stop testing is
still mostly a matter of human judgement and simple heuristics. In practice, as
testing effort grows, eliciting erroneous behaviours by means of new test cases becomes increasingly hard. Finally, testing steers software behaviour by acting
on inputs. This implies that errors that are not triggered by some input, like
synchronization errors in concurrent software, do not typically surface during
testing.
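To make this last point concrete, the following minimal sketch (a hypothetical example, not taken from the literature discussed here) shows a lost-update race in Python: whether the fault manifests depends on the scheduler's interleaving of threads rather than on any program input, so no selection of test inputs can reliably expose it.

```python
import threading

counter = 0

def increment(times: int) -> None:
    global counter
    for _ in range(times):
        # Unsynchronized read-modify-write: a thread switch between the
        # read and the write loses an update. No input value triggers or
        # avoids this; only the interleaving chosen by the scheduler does.
        counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000; a smaller value appears only under unlucky interleavings,
# so repeated runs with identical inputs may or may not expose the fault.
print(counter)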
On the other hand, static analysis techniques rely on mathematical models of program behaviour and infer properties from these models, thus complementing testing. While testing cannot reason on the whole execution space, static analysis can potentially demonstrate the absence of specific categories of errors in software systems. This is because the models used in static analysis overapproximate the possible behaviours of a program, and can thus establish invariant properties that hold on all executions. This also means that the faults detected by a static analysis may be spurious, and must be confirmed on the actual program.
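As a concrete illustration of spuriousness (a minimal sketch of a generic interval analysis, not of any particular tool), consider an abstraction that tracks each variable's range independently: it forgets the correlation between variables and may therefore report a division by zero that no concrete execution can produce.

```python
def program(x: int) -> float:
    # Concretely y - x == 1 for every x, so the division is always safe.
    y = x + 1
    return 1.0 / (y - x)

# Interval abstraction, assuming the precondition x in [0, 10]:
#   x in [0, 10]  and  y in [1, 11]
# Interval subtraction ignores that y was derived from x:
x_lo, x_hi = 0, 10
y_lo, y_hi = x_lo + 1, x_hi + 1
diff_lo, diff_hi = y_lo - x_hi, y_hi - x_lo   # [-9, 11], overapproximated
if diff_lo <= 0 <= diff_hi:
    # The abstract result contains 0, so the analysis must warn, even
    # though the warning is spurious for this program.
    print("warning: possible division by zero at 1.0 / (y - x)")
```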
Moreover, current static analysis techniques do not adequately meet automation, precision and scalability requirements, because no single abstraction suits all combinations of verification problems and target systems. Current techniques suffer from one or more of the following problems: they do not scale to industrial-size software systems, miss relevant bugs, flood the user with spurious error warnings, or require on-line manual assistance. Consequently, static analysis finds scarce industrial application, mostly limited to special-purpose domains.
Recent research has focused on combining static and dynamic analysis techniques to benefit from the advantages of both while mitigating their respective problems. Combined approaches test and analyze the same program, sharing the information produced by one technique to improve the results of the other. Testing provides exact information about feasible behaviours, and can thus serve as a cost-effective way to build precise models. Analysis provides hints about the regions of the program state space that may contain faults, and can thus steer testing towards those regions to distinguish actual from spurious faults. Together, testing and analysis may yield sound verification procedures that are more precise, scalable and automated than either technique alone.
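The following sketch illustrates one such interplay under assumed interfaces (combined_check and gen_tests_for are hypothetical names, and an uncaught exception stands in for any observable failure): warnings produced by a static analysis steer test generation, and concrete executions separate confirmed faults from candidates that may be spurious.

```python
from typing import Callable, Iterable

def combined_check(program: Callable[[int], object],
                   static_warnings: Iterable[int],
                   gen_tests_for: Callable[[int], Iterable[int]]):
    """Confirm statically reported faults by directed concrete execution."""
    confirmed, unconfirmed = [], []
    for location in static_warnings:
        # Analysis steers testing: only inputs aimed at this warning are run.
        triggered = False
        for test_input in gen_tests_for(location):
            try:
                program(test_input)
            except Exception:
                triggered = True   # exact, feasible evidence of a real fault
                break
        (confirmed if triggered else unconfirmed).append(location)
    # Unconfirmed warnings are possibly spurious; testing alone cannot
    # prove their absence, mirroring the asymmetry discussed above.
    return confirmed, unconfirmed

# Hypothetical usage: the analysis reported one warning, at "location" 1.
real, maybe_spurious = combined_check(lambda x: 1.0 / x, [1],
                                      lambda loc: [-1, 0, 1])
```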
Triggered by encouraging preliminary results, research on combining static
and dynamic techniques has proliferated in the literature of the last ten years.
Most of this literature presents specific combinations of static and dynamic techniques without providing a general framework. The absence of a deep understanding of the general advantages of combining different kinds of techniques hinders the exploitation of new interaction patterns among them. Many questions remain unanswered: What are the structural features of the basic analysis and testing techniques available in the literature? How do these features impact the precision, convergence and performance of the techniques? When may two techniques interplay? Which