the Select-All approach. For example, Impact Analysis provides an average reduction of
77% with a minimum of 64% and a maximum of 92%. Graph Walk and Call Graph Firewall
provide average reductions of 82% and 87%, respectively. Therefore, the three proposed
techniques are certainly useful for saving regression testing time.
In addition to their test reduction capabilities, both Impact Analysis and Graph
Walk are safe; that is, they do not miss modification-revealing tests. However, Graph
Walk produces further reduction in ST when the modification affects modules that involve
selection and branching. For simple code modules, Graph Walk and Impact
Analysis have similar reduction capabilities. However, because the Graph Walk technique works at
the statement level, it offers greater reduction in ST as the code becomes larger and involves
a deep branching hierarchy of modules. Versions M8-M10 are examples in which the
modifications made affect branching parts of the code.
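The statement-level behavior of Graph Walk can be illustrated with a minimal sketch (test names, trace data, and the helper function are hypothetical, not the authors' implementation): a test is selected whenever its statement-level execution trace covers a modified statement, which is why the technique remains safe while discarding tests that never reach the change.

```python
def graph_walk_select(test_traces, modified_stmts):
    """Select every test whose statement-level trace covers a
    modified statement; tests that never execute a changed
    statement are safely discarded."""
    modified = set(modified_stmts)
    return [t for t, trace in test_traces.items()
            if modified & set(trace)]

# Hypothetical traces: statement ids executed by each test.
traces = {
    "t1": [1, 2, 3],     # runs only unchanged statements
    "t2": [1, 4, 5],     # reaches modified statement 5
    "t3": [1, 4, 6, 7],  # reaches modified statement 7
}
selected = graph_walk_select(traces, modified_stmts=[5, 7])
print(selected)  # ['t2', 't3'] -- ST here is 2 of 3 tests
```

The finer the traces (statement level rather than module level), the more tests can be shown never to reach the change, which matches the greater reduction reported for deeply branching code.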
The Call Graph Firewall technique is not safe. It produces the best reduction in ST,
but at the expense of the MMRT value. Call Graph Firewall uses data flow information to
further reduce the tests selected by Impact Analysis. This may be advantageous for fast
regression testing when Impact Analysis yields high ST values, but it is not recommended
when ST values are relatively small, as it might miss 50% of the modification-revealing
tests.
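A minimal sketch can make the trade-off concrete (module names, test data, and the data-flow predicate are hypothetical, and the data-flow pruning is shown only schematically): tests are first limited to those entering a firewall of changed modules and their direct callers, then pruned with data-flow information, and it is this pruning step that can drop a modification-revealing test.

```python
def firewall_modules(call_graph, changed):
    """Firewall = the changed modules plus their direct callers
    in the call graph."""
    changed = set(changed)
    callers = {m for m, callees in call_graph.items()
               if changed & set(callees)}
    return changed | callers

def select(test_modules, firewall, uses_changed_data):
    """Keep tests that enter the firewall, then prune with a
    data-flow predicate: only tests judged to use data affected
    by the change survive.  The pruning is what makes the
    technique unsafe -- a modification-revealing test that the
    predicate rejects is silently dropped."""
    return [t for t, mods in test_modules.items()
            if set(mods) & firewall and uses_changed_data(t)]

cg = {"main": ["parse", "eval"], "parse": ["lex"], "eval": []}
fw = firewall_modules(cg, changed=["lex"])       # {"lex", "parse"}
tests = {"t1": ["main", "eval"],                 # outside firewall
         "t2": ["main", "parse", "lex"],         # inside firewall
         "t3": ["main", "parse"]}                # inside firewall
picked = select(tests, fw, uses_changed_data=lambda t: t == "t2")
print(picked)  # ['t2'] -- t3 enters the firewall but is pruned
```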
Comparing Impact Analysis, Graph Walk, and Call Graph Firewall with Select-Random
shows that the three proposed techniques are clearly better. They offer lower ST values
and are considerably more reliable, as Select-Random misses 58-100% of the modification-
revealing tests. These high MMRT values occur even though Select-Random is allowed to
select 28% of the initial tests, an ST value comparable to that of the three proposed techniques
for some versions (e.g., M2, M5, and M8) and even favorable compared to them for other
versions (e.g., M3 and M7). In particular, Select-Random might miss all modification-revealing
tests when MRT values are small.
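The two metrics used throughout the comparison can be made concrete with a small sketch, assuming ST is the percentage of the initial suite that is selected and MMRT the percentage of modification-revealing tests the selection misses (the excerpt does not give the paper's exact formulas, and the test data below are hypothetical):

```python
def st(selected, initial):
    """ST: percentage of the initial test suite that is re-run."""
    return 100 * len(selected) / len(initial)

def mmrt(selected, revealing):
    """MMRT: percentage of modification-revealing tests missed."""
    missed = set(revealing) - set(selected)
    return 100 * len(missed) / len(revealing)

initial = [f"t{i}" for i in range(1, 26)]              # 25 tests
revealing = ["t3", "t9", "t17", "t21"]                 # 4 revealing
random_pick = ["t1", "t3", "t5", "t8", "t12", "t19", "t24"]
print(st(random_pick, initial))     # 28.0 -- 7 of 25 selected
print(mmrt(random_pick, revealing)) # 75.0 -- 3 of 4 missed
```

With few modification-revealing tests, a random 28% sample can easily miss all of them, which is the failure mode the comparison highlights.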
RELATED WORK
Numerous regression testing algorithms and approaches have been proposed for
procedural and object-oriented programs. Rothermel, Harrold, and Dedhia (2000) provide a
regression testing method for C++ software based on control flow analysis of C++ source
code. The method handles some object-oriented and C++ features such as polymorphism,
dynamic binding, and passing objects as parameters. Rothermel, Untch, Chu, and Harrold
(2001) use test case prioritization in regression testing. They provide a survey of test case
prioritization techniques and perform empirical studies with some of these techniques to
evaluate how effective they are in improving fault detection. When safe regression testing
techniques prove infeasible, prioritization is a cost-effective substitute. Bible,
Rothermel, and Rosenblum (2001) provide a comparative empirical study of two safe regres-
sion test selection techniques implemented in two regression testing tools: TestTube
(Chen, Rosenblum, & Vo, 1994) and DejaVu (Rothermel & Harrold, 1997). The precision
and relative cost-effectiveness of these techniques are evaluated and compared to the cost of
retesting using other techniques. Harrold, Jones, Li, and Liang (2001) present a safe regres-
sion test selection technique for Java applications. The technique handles Java language
features such as polymorphism, dynamic binding, and exception handling. The authors also
describe a tool that implements their technique. The tool provides empirical results for