concise premeditation documents. Large, extended-duration testing projects can
produce stacks of premeditation documents. One criticism publicly leveled at most
commercial testing methods is that their required premeditation documentation is
often overkill, wasting valuable tester resources and schedule time to produce
documentation that does not add commensurate value to the testing effort.
The message to the new software tester is clear. Too little premeditation places the
testing project at risk of failure because of inadequate planning. Too much premeditation
places the testing project at risk of failure because the extra time consumed in planning
cannot be recovered during test execution.
3.3.3 Repeatability
This item arises from a software process dimension called “maturity.” The Software
Engineering Institute at Carnegie Mellon University has established an industry-wide
yardstick for measuring the relative success that a company can expect when attempting
software development. [19] This yardstick is called the Capability Maturity Model
Integration (CMMI). Based on the CMMI, successful development and testing of software
for a wide range of applications requires the testing process to be institutionalized. In
other words, once a test has been executed successfully, any member of the test team
should be able to repeat all the tests and get the same results again. Repeatability of
tests is a mature approach to test results confirmation. A testing technique called
“regression testing,” described in a later chapter, relies heavily on the repeatability of
tests to succeed.
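As a minimal sketch of what repeatability means in practice (the function under test and its values are hypothetical, not from the text), a repeatable test fixes its inputs and expected outputs so that any member of the test team can re-run it and get the same verdict:

```python
# Hypothetical example: a repeatable test pins down its inputs and
# expected outputs -- no randomness, no dependence on date,
# environment, or execution order -- so every re-run agrees.

def apply_discount(price, percent):
    """Function under test (illustrative only)."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Fixed input/expected-output pairs make the test deterministic.
    cases = [
        (100.00, 10, 90.00),
        (59.99, 0, 59.99),
        (20.00, 50, 10.00),
    ]
    for price, percent, expected in cases:
        assert apply_discount(price, percent) == expected

test_apply_discount()
print("all repeatable checks passed")
```

Because nothing in the test depends on who runs it or when, its results can be confirmed on demand, which is exactly the property regression testing relies on.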
3.3.4 Accountability
Accountability is the third set of written documentation in SPRAE. This item
discharges the tester's responsibility for proving he or she followed the test plan
(premeditation) and executed all scheduled tests to validate the specifications.
Contrary to many development managers' expectations, testing accountability does not
include the correction of major defects discovered by testing. Defect correction
lies squarely in development accountability. Supporting test completion documentation
normally comes from two sources. The first source is the executed tests
themselves, in the form of execution logs. The more automated the testing process,
the more voluminous the log files and reports tend to be. The second source is
the tester's analysis and interpretation of the test results relative to the test plan
objectives.
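As an illustrative sketch (the test-case identifiers and runner are hypothetical, not from the text), an automated run can produce an execution log that ties each scheduled test back to the plan, giving the tester documented evidence of what was executed and with what outcome:

```python
# Hypothetical sketch: record each planned test's outcome so the
# execution log can be compared against the test plan afterward.
import datetime

def run_planned_tests(plan):
    """plan: mapping of test-case id -> zero-argument test function."""
    log = []
    for case_id, test_fn in plan.items():
        try:
            test_fn()
            outcome = "PASS"
        except AssertionError:
            outcome = "FAIL"
        # Timestamp each entry so the log documents when the test ran.
        log.append((case_id, outcome, datetime.datetime.now().isoformat()))
    return log

def passing():
    assert 1 + 1 == 2

def failing():
    assert 1 + 1 == 3

plan = {"TC-001": passing, "TC-002": failing}
for case_id, outcome, stamp in run_planned_tests(plan):
    print(case_id, outcome)
```

The log itself is only half of the accountability evidence; the tester's written analysis of these results against the plan's objectives supplies the other half.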
One significant implication of the accountability item is that the tester can
determine when testing is complete. Although a clear understanding of test
completion criteria appears to be a common-sense milestone, you will be amazed by
how many test teams simply plan to exhaust their available testing time and declare
“testing is completed.”
There exists a philosophy of software testing called “exploratory testing” that is
emerging in the literature. [20] This philosophy advocates concurrent test design and