apparent when customers start calling your HelpDesk with defects that eluded your
testing. Customer-discovered software defects will be included in our analysis later
in this chapter.
One planning key to successful test results analysis is the clear definition of
success for each test case. It is common for a test case to have a number of expected
results. If the actual results obtained from a test execution all match the expected
results, then the test case is normally considered “attempted and successful.” If only
some of the actual results obtained from a test execution match the expected results,
then the test case is normally considered “attempted but unsuccessful.” Test cases
that have not been executed are initially marked “unattempted.”
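To make these status definitions concrete, here is a minimal Python sketch. The TestStatus names and the result-comparison rule are illustrative assumptions, not part of any particular test management tool.

from enum import Enum

class TestStatus(Enum):
    # The three status values tracked by testing management.
    UNATTEMPTED = "unattempted"
    SUCCESSFUL = "attempted and successful"
    UNSUCCESSFUL = "attempted but unsuccessful"

def evaluate_test_case(expected_results, actual_results):
    # A test case with no recorded execution remains unattempted.
    if actual_results is None:
        return TestStatus.UNATTEMPTED
    # Every actual result must match its expected result to succeed.
    if actual_results == expected_results:
        return TestStatus.SUCCESSFUL
    return TestStatus.UNSUCCESSFUL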
The “unattempted” versus “attempted …” status of each test case is tracked
by testing management because this is the most obvious testing progress indicator.
Ninety percent “unattempted” test cases indicates that the testing effort has just
begun. Ten percent “unattempted” test cases indicates that the testing effort may
be close to finished. The number of attempted test cases over time gives the test
manager an indication of how fast the testing is progressing relative to the size of
the test team. If you log 15 test case attempts by your test team in the first 2 weeks
of testing, this indicates an initial attempt rate of 1.5 test case attempts/day. If the
test plan calls for a total of 100 test cases to be attempted, then you can calculate an
initial estimate of 14 weeks for your test team to “attempt” all 100 test cases in the
plan. Here are the calculations.
15 test cases attempted / 10 test work days = 1.5 test case attempts/day
100 test cases to attempt / 1.5 test case attempts/day = 67 days (approximately 14 workweeks)

Calculation 12.1 Estimating test execution schedule—first draft
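The same arithmetic can be written as a few lines of Python. This is a sketch only, with the figures taken directly from Calculation 12.1.

import math

attempted = 15   # test cases attempted in the first 2 weeks
work_days = 10   # 2 workweeks = 10 test work days
planned = 100    # total test cases in the test plan

attempt_rate = attempted / work_days              # 1.5 attempts/day
days_needed = math.ceil(planned / attempt_rate)   # 67 work days
workweeks = math.ceil(days_needed / 5)            # 14 workweeks

print(f"{attempt_rate} attempts/day, {days_needed} days, {workweeks} workweeks")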
Some of the “attempts” will result in defect discoveries requiring time for correction
and retesting. So the 14-week schedule really represents the expected completion of
just the first round of testing execution. Depending on the number of “unsuccessful” test
cases encountered during the 14-week period, a second, third, and possibly fourth round
of correction and retesting may be necessary to achieve mostly “successful” results.
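To see how retest rounds stretch the first-draft schedule, consider this sketch; the 30 percent failure rate is an assumed figure chosen purely for illustration, not a number from this chapter.

import math

attempt_rate = 1.5    # test case attempts/day from Calculation 12.1
failure_rate = 0.30   # assumed fraction of attempts that are unsuccessful
remaining, total_attempts = 100, 0
while remaining >= 1:
    total_attempts += remaining                        # attempts this round
    remaining = math.floor(remaining * failure_rate)   # cases needing retest
print(math.ceil(total_attempts / attempt_rate), "work days including retests")

Under these assumptions the schedule grows from 67 work days to 94, which illustrates why the 14-week figure covers only the first round.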
A test case may be “attempted but unsuccessful” because the actual results do
not match the expected results or because the software halted with an error message
before the test case was completed. The challenge to the test manager is to
prioritize the unsuccessful test case results for correction. If a test case encounters
an error that stops the test case before it can be completed, this is usually considered
a severe defect sufficient to warrant immediate corrective action by the developers.
Once that corrective action has been taken and the test case rerun, the test case may
go to completion without further showstoppers and become marked as “attempted
and successful.” Alternatively, the test case may execute a few more steps and be
halted by another defect.
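Combining this showstopper rule with the risk-based ranking discussed next, a small Python sketch of the triage might look as follows; the field names and the ordering rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class UnsuccessfulResult:
    test_case_id: str
    halted_execution: bool   # True if a defect stopped the test case mid-run
    business_risk: int       # 1 (low) to 10 (high) risk of not correcting

def prioritize_for_correction(results):
    # Showstoppers that halted execution come first; the rest are
    # ordered by descending business risk of leaving them uncorrected.
    return sorted(results, key=lambda r: (not r.halted_execution, -r.business_risk))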
If the application under test allows the test case to go to completion but provides
actual results different from the expected results, the test manager needs to prioritize
these unsuccessful test case results based on the business risk of not correcting them. For
example, if a functional test case shows that a set of screen input values produces