is for the tester to replace the end-user typed values, such as the user ID "JONES," with a reference to a data file containing, for example, 350 user IDs. When the tester replays the script, each of the 350 user IDs (perhaps including "JONES") in the data file is attempted for log-in.
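To make the substitution concrete, here is a minimal sketch in Python of what data-driven playback effectively does. The file name users.dat and the replay_login_script() helper are hypothetical stand-ins for the data file and the recorded log-in script; an actual test tool generates and drives this logic itself.

```python
def replay_login_script(user_id: str) -> bool:
    # Stand-in for the recorded log-in script, with the typed value
    # "JONES" replaced by the user_id parameter.
    return bool(user_id)  # a real harness would drive the application here


def run_data_driven_login(data_file: str = "users.dat") -> None:
    # Each line of the (hypothetical) data file holds one of the 350 user IDs.
    with open(data_file) as f:
        for line in f:
            user_id = line.strip()
            if user_id:
                result = replay_login_script(user_id)
                print(f"log-in attempt for {user_id}: {'pass' if result else 'fail'}")
```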
The time-saving nature of data-driven execution becomes more profound as more recorded script values are replaced by data file references, enabling large numbers of permutations to be tested just as easily as the original set of values.
For example, a test tool records an end user placing an order for one office supply item. Three end-user data fields are subsequently identified for a large number of permutation tests. The first permutation data field is the customer's shipping address zip code (say 2,000 different zip codes), which signifies the customer's delivery geography. The application must identify which office supply items are available for purchase in which geography. The second permutation data field is customer type, which can be one of four values: 1 = small retail customer, 2 = large retail customer, 3 = small wholesale customer, and 4 = large wholesale customer. Different standard discounts are offered on different office supplies depending on customer type. The third and final permutation data field is the office supply item code (say 5,000 different item codes). So the test script recorded for a purchase in just one zip code, one customer type, and one office supply item can have these end-user supplied values replaced with data file references covering all possible values, producing a simple test script capable of verifying 2,000 × 4 × 5,000 = 40,000,000 permutations.
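The same idea scales to several substituted fields at once. The sketch below, again only illustrative, treats the three permutation data fields as a Cartesian product; the file names zip_codes.dat and item_codes.dat and the replay_order_script() helper are assumptions for illustration, not part of any particular vendor's tool.

```python
from itertools import product


def load_values(path: str) -> list[str]:
    # Read one value per line from a data file.
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]


def replay_order_script(zip_code: str, customer_type: str, item_code: str) -> bool:
    # Stand-in for the recorded one-item purchase script, with the three
    # typed values replaced by parameters.
    return True


def run_permutation_tests() -> int:
    zips = load_values("zip_codes.dat")      # ~2,000 zip codes
    customer_types = ["1", "2", "3", "4"]    # small/large retail, small/large wholesale
    items = load_values("item_codes.dat")    # ~5,000 item codes
    failures = 0
    # 2,000 x 4 x 5,000 = 40,000,000 permutations from one recorded script
    for zip_code, ctype, item in product(zips, customer_types, items):
        if not replay_order_script(zip_code, ctype, item):
            failures += 1
    return failures
```

The point of the sketch is that the recorded script itself never changes as the data files grow; only the number of replayed permutations does.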
One last set of features resulting from the record/playback paradigm needs
to be acknowledged here and described in more detail later in this chapter.
About midway through the test tool maturity cycle, maybe the early 1990s, test
tools began providing some means of test tool management: capturing results of
an end-user recording session, scheduling tool playback sessions, and capturing
results of playback sessions for reporting. These results could range from a
message on the screen to the appearance of new screens to the hidden update
of particular database records. These test tool management features will be
discussed in the Test Management Paradigm section of this chapter. Examples
of all of these features can be found in the current major test tool vendors'
products. [45-47]
11.4 TEST TOOL TOUCHPOINT PARADIGMS
There are only two touchpoint paradigms that underlie the large number of available test tool products. A test tool touchpoint is the location of a test tool probe, either hardware or software, in the computer under test, placed to measure some specific operational aspect of that computer. The situation is similar to Chapter 2, where we found a small number of software development paradigms underlying a large number of software development methods. If we understand these two touchpoint paradigms, we can quickly understand and anticipate how a particular test tool will be used.