development lifecycle decisions a test team leader must make is whether to include
automated testing tools in the test environment plans. Only after the appropriateness
of test tool usage has been determined does the question of tool choice arise. The remainder of this chapter deals only with identifying development project circumstances
that determine the appropriateness or inappropriateness of automated test tool use.
11.6.1 Repeatability—a Return on Investment Issue
If a particular test script will be executed three or more times, a test tool can begin to
provide an economy of scale. The effort, resources, and cost to build the automated test script the first time are substantially higher than those of a single manual test that performs the same script steps. The second time the automated script is executed, none of the build costs are incurred. The second manual execution of the same test script, however, costs just as much as the first.
After three or more executions of the same script, the accumulated cost of performing these executions with a test tool is less than the accumulated cost of performing them manually. The more times the automated test script is repeated beyond that point, the less expensive the total execution effort becomes relative to the same manual testing activity. Because manual testing offers no economies of scale, its total cost continues to grow linearly by the same incremental cost for each subsequent test execution.
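This break-even reasoning can be expressed as a simple cumulative-cost comparison. The following Python sketch is illustrative only: the build and per-execution costs are hypothetical placeholder figures, chosen so that automation pays off at the third execution as described above, and are not drawn from the text.

# Illustrative break-even comparison between manual and automated execution
# of the same test script. All cost figures are hypothetical placeholders.

def cumulative_costs(executions, manual_cost_per_run, automation_build_cost,
                     automated_cost_per_run):
    """Return (manual_total, automated_total) effort after the given number of runs."""
    manual_total = manual_cost_per_run * executions
    automated_total = automation_build_cost + automated_cost_per_run * executions
    return manual_total, automated_total

# Hypothetical effort in person-hours: building the automated script costs
# several times a single manual run, but each automated run is cheap.
for runs in range(1, 6):
    manual, automated = cumulative_costs(
        runs,
        manual_cost_per_run=4.0,      # hours per manual execution
        automation_build_cost=10.0,   # one-time scripting effort
        automated_cost_per_run=0.5,   # hours per automated execution
    )
    print(f"{runs} run(s): manual={manual:.1f}h  automated={automated:.1f}h")

# With these placeholder numbers, the automated total drops below the manual
# total at the third execution, mirroring the three-or-more rule of thumb,
# and the gap widens with every additional repetition.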
11.6.2 Repeatability Precision—a Technology Issue
One of the most frustrating and time-consuming situations for a software developer
is for a tester to report a suspected defect that the developer cannot recreate. This
situation can arise from a number of different causes, the most common of which is
incomplete manual test case execution script documentation. In other words, the tes-
ter cannot tell the developer the exact sequence of actions and events that led to the
defect discovery. The developer must use trial and error to fill in the tester's information gaps and try the most likely actions and events to recreate the suspected defect.
Another possible cause of defect recreation difficulty for the developer lies with
the tester's ability to follow directions. The manual test execution script may be very
crisp and clear about the steps to be completed and the expected results. Testers,
being human, may inadvertently skip a step or complete only part of the step action
because the instruction continued on the next page. The likelihood of this human element creeping into test execution increases with the number of times the tester must repeat the test, because repetition breeds a false sense of familiarity with the steps.
Automated testing tools directly address both of these repeatability precision
issues. Every action to be taken by a testing tool is included in the tool's test script.
The state of test data prior to script execution is defined in the test case. The testing tool executes its script with well-defined test data without human intervention. The combination of explicit script, explicit data state, and human non-intervention guarantees repeatable test results with very high precision. This precision translates into
more expedient developer diagnosis of discovered defects.
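As a concrete illustration of these three ingredients, the sketch below shows a pytest-style automated test in which the initial data state is established by a fixture and every action and expected result is spelled out in the script. The Account class and the test scenario are hypothetical and purely illustrative; they are not drawn from the text.

import pytest

# Stand-in for the application under test; in a real project these would be
# the application's own interfaces. The Account class is illustrative only.
class Account:
    def __init__(self, owner, starting_balance):
        self.owner = owner
        self.balance = starting_balance

    def deposit(self, amount):
        self.balance += amount

@pytest.fixture
def seeded_account():
    # Explicit data state prior to script execution: a fresh account with a
    # known starting balance, recreated identically for every run.
    return Account(owner="Test User", starting_balance=100.00)

def test_deposit_updates_balance(seeded_account):
    # Every action is spelled out in order, so a developer can replay the
    # exact sequence that exposed a suspected defect.
    seeded_account.deposit(25.00)
    seeded_account.deposit(10.00)

    # The expected result is stated explicitly; no human judgment or memory
    # of the steps is involved in deciding pass or fail.
    assert seeded_account.balance == 135.00

Because the fixture rebuilds the same data state before every run and the tool replays precisely the same steps each time, a defect reported by such a script is straightforward for the developer to recreate.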