The results of this analysis are used to identify the modified database components and the
effects of those modifications on other database components.
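To make the idea concrete, the following is a minimal sketch of such an impact analysis, assuming the dependency information is available as a simple map from each component to the components it depends on; the function name and the example components are illustrative rather than part of the tool.

    from collections import deque

    def affected_components(dependencies, modified):
        # Illustrative impact analysis: starting from the modified
        # components, follow reverse dependency edges to find every
        # component that may be affected by the change.
        # `dependencies` maps each component to the components it uses.
        dependents = {}
        for comp, deps in dependencies.items():
            for dep in deps:
                dependents.setdefault(dep, set()).add(comp)

        affected = set(modified)
        queue = deque(modified)
        while queue:
            comp = queue.popleft()
            for dependent in dependents.get(comp, ()):
                if dependent not in affected:
                    affected.add(dependent)
                    queue.append(dependent)
        return affected

    # Hypothetical example: modifying the salary table affects the
    # view built on it and the report built on the view.
    deps = {"payroll_view": {"salary_table"},
            "tax_report": {"payroll_view"}}
    print(affected_components(deps, {"salary_table"}))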
Experimental Design and Procedure
To empirically investigate our regression testing methodology, we use a prototype of a
payroll database application together with an initial suite of test cases that exercises its
various modules and constructs. We apply random modifications to the application, creating
ten (modified) versions, M1 - M10. We then analyze each version using our maintenance tool
and report the affected modules and the test cases that should be rerun for regression
testing. The test suite used to test this application contains fifty test cases determined
using a specification-based test adequacy criterion.
To evaluate and compare the regression test selection techniques, we use two metrics:
(i) the percentage of tests selected by a technique from the initial test suite for
rerunning (ST), and (ii) the percentage of modification-revealing tests missed by a
technique (MMRT). The underlying assumption is that a good regression testing technique
selects a small number of tests (a low ST) to reduce the time of regression testing, yet
does not miss the tests that reveal the modifications made to the database application
(a low MMRT).
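As a concrete, hypothetical illustration of the two metrics (the numbers below are invented and do not come from the experiment):

    def st(selected, suite_size):
        # Percentage of the initial suite selected for rerunning.
        return 100.0 * selected / suite_size

    def mmrt(missed, mrt):
        # Percentage of modification-revealing tests that were missed.
        return 100.0 * missed / mrt

    # A technique that selects 14 of 50 tests and misses 1 of 10
    # modification-revealing tests scores:
    print(st(14, 50))    # 28.0 -> ST = 28%
    print(mmrt(1, 10))   # 10.0 -> MMRT = 10%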
The experiment consists of two parts. In the first part, we analyze the database applica-
tion and prepare the test trace information. This part involves the following steps:
(a) Use the tool to construct syntax trees, control flow graphs, component dependency
information, and data flow information.
(b) Use the tool to create a new version of the application and generate a test trace.
(c) Run all tests on the new version and collect trace information (a sketch of the
collected traces follows this list).
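The sketch below shows what the collected trace information might look like, assuming each test can report the set of database components it exercises; the Test class and its interface are illustrative, not the tool's actual instrumentation.

    class Test:
        def __init__(self, name, touched):
            self.name = name
            self._touched = touched

        def run(self):
            # In the real tool the trace is recorded while the test
            # executes against the instrumented application; here the
            # touched components are supplied directly.
            return self._touched

    def collect_traces(tests):
        # Map each test case to the set of components it exercises.
        return {t.name: t.run() for t in tests}

    tests = [Test("t1", {"salary_table", "payroll_view"}),
             Test("t2", {"tax_report"})]
    traces = collect_traces(tests)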
In the second part, a new copy of the application is created for each proposed modifi-
cation. For each modified version of the application:
(a) Perform database application analysis.
(b) Perform Impact Analysis.
(c) Run Graph Walk regression testing.
(d) Run Call Graph Firewall regression testing (a simplified sketch of trace-based
selection follows this list).
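The following simplified sketch captures the selection idea shared by the trace-based techniques: rerun exactly those tests whose traces touch a component affected by the modification. It is not the full Graph Walk or Call Graph Firewall algorithm, and the trace and impact data are hypothetical.

    def select_tests(traces, affected):
        # Select tests whose trace overlaps the affected component set.
        return {name for name, touched in traces.items()
                if touched & affected}

    traces = {"t1": {"salary_table", "payroll_view"},
              "t2": {"tax_report"}}
    affected = {"salary_table", "payroll_view"}
    print(select_tests(traces, affected))   # {'t1'}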
Results
In Table 3, we present the results of applying our regression testing methodology to
the ten program versions, M1 - M10. For each version, we show: (a) the number of test
cases in the initial test suite (selected by the Select-All approach), i.e., 50, and the
number of tests that reveal the modifications (MRT) made to the application; (b) the ST
and MMRT values obtained with a Select-Random technique; (c) the ST and MMRT values
obtained with our proposed phase 1 Impact Analysis technique; and (d) the ST and MMRT
values obtained with the phase 2 techniques for further test reduction, Graph Walk and
Call Graph Firewall. All ST values are normalized with respect to 50, whereas MMRT
values are normalized with respect to MRT. The ST value of Select-Random is set to
28% after observing that all ST values (except for M1) of our proposed techniques are
less than 28%.