first thing Monday morning. Dependency flow allows the tester to indicate that "if script A completes successfully, then start script B; otherwise skip script B and start script C." Dependency in this context means that the execution of a subsequent test script is dependent on the success or failure of an earlier test script in the list.
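The "if A passes, run B, else skip B and run C" rule above can be sketched as follows. This is a minimal illustration, not a real test management tool's API; the script functions are hypothetical placeholders for recorded test scripts that report pass (True) or fail (False).

```python
# Hypothetical recorded test scripts; each returns True (pass) or False (fail).
def script_a():
    return True

def script_b():
    return True

def script_c():
    return True

def run_with_dependency():
    """Dependency flow: if script A passes, run script B;
    otherwise skip script B and run script C instead."""
    results = {}
    results["A"] = script_a()
    if results["A"]:
        results["B"] = script_b()
    else:
        results["B"] = "skipped"
        results["C"] = script_c()
    return results
```

In a real tool the tester declares this dependency in the execution list rather than coding it by hand, but the control flow the tool applies is the same.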
The last two capabilities of a test management tool address a concept that was not introduced in the prior test tool discussions. The concept is that behavior validation requires the comparison of two kinds of information: expected results and actual results. Expected results are defined to be the documented correct behavior or response to a specific set of conditions and inputs. Actual results are defined to be the behavior or experienced response exhibited to a specific set of conditions and inputs. Actual results may or may not match expected results. If actual results do match the expected results, the test is normally considered successful (pass). If actual results do not match the expected results, the test is normally considered unsuccessful (fail).
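The pass/fail rule reduces to a simple comparison. A minimal sketch, with illustrative values rather than output from any particular tool:

```python
def verify(expected, actual):
    """Behavior validation: compare actual results against expected results.

    Returns "pass" when they match, "fail" otherwise.
    """
    return "pass" if actual == expected else "fail"
```

For example, `verify("Welcome", "Welcome")` yields `"pass"`, while `verify("Welcome", "Error!")` yields `"fail"`.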
In order for a test management tool to be capable of capturing expected values from initial test script recordings, there needs to be some kind of complex tool-to-tool communication in which values captured during recording will be the expected value set. Sometimes the expected values are the keyboard entries that an end user types during script recording. Sometimes the expected values appear on the test computer screen after the end user has completed a particular action. Sometimes the expected values are hidden from the end user in some kind of data file or database. Sometimes the expected values have been predetermined and, in a manner similar to data-driven test preparation, the expected values are made available to the test management tool independent of the script recording activity.
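The last case, where predetermined expected values are supplied independently of script recording, can be sketched with a small data-driven loader. The CSV text, file layout, and column names here are illustrative assumptions, not a format any specific tool mandates:

```python
import csv
import io

# Illustrative expected-value data, prepared independently of script
# recording in the spirit of data-driven test preparation.
EXPECTED_CSV = """test_step,expected_value
login_prompt,Enter user ID
balance_display,1250.00
"""

def load_expected_values(csv_text):
    """Build a test-step -> expected-value map from CSV source text."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["test_step"]: row["expected_value"] for row in reader}
```

A test management tool can then look up the expected value for each step at playback time without any values having been captured during recording.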
In order for a test management tool to be capable of capturing actual values from
subsequent test script playback, there needs to be some kind of complex tool-to-tool
communication. This communication enables predetermined variables, screen areas,
and data files or databases to be interrogated during test script playback for actual
values to compare with the expected values. Once the actual values are collected
from a test execution, the actual values are automatically compared with the ex-
pected values, and the success or failure of the comparison is indicated for the test
execution just completed. Many test management tools that provide this expected
values/actual values comparison also allow for the collection and comparison of
actual values from multiple playback sessions with the same expected values.
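Comparing one expected-value set against actual values collected from several playback sessions can be sketched as a simple per-session report. The session names and values are illustrative assumptions:

```python
def compare_sessions(expected, sessions):
    """Compare a single expected value against actual values captured
    from multiple playback sessions; report pass/fail per session."""
    report = {}
    for session_id, actual in sessions.items():
        report[session_id] = "pass" if actual == expected else "fail"
    return report
```

Running `compare_sessions("1250.00", {"run1": "1250.00", "run2": "1249.99"})` would mark `run1` as a pass and `run2` as a fail, which is exactly the kind of cross-session summary these tools produce.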
The complexity and intimacy with which the test management tool must interact with function test execution tools and performance test execution tools has caused tool vendors to redesign their separate tool products into tool suites. These tool suites provide better intertool communication and operability while presenting the tester with a consistent look-and-feel across the individual tools. One of the welcome by-products of such a tool suite design is a lower training threshold before the tester becomes proficient in the tool suite.
11.6 THE BENEFITS THAT TESTING TOOLS CAN PROVIDE
Based on the previous sections in this chapter, you may incorrectly conclude that it is always a good idea for a tester to use automated testing tools. This would cause the tester to worry first about which test tool to use. Quite the contrary, one of the early-