The test tool will be used to play back a script several times. If the AUT operates
perfectly the first time the script is played back, a recommended tester practice is
to play back the script at least one more time to prove that these perfect results are
repeatable. If the AUT does not operate perfectly, then the script is played back again
after developers make the necessary code corrections, to verify that the corrections
solved the problem. The more defects a test tool script reveals in the AUT, the
more times the test script will be replayed to verify code corrections.
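The replay discipline described above can be sketched in a few lines of Python. The `play_back` function and the `recorded_script` stand-in are hypothetical illustrations, not part of any vendor's tool: the point is simply that a passing script is replayed at least once more to confirm repeatability, and a failing one is replayed after each fix.

```python
def play_back(script, runs=2):
    """Replay a recorded script `runs` times; report the first failure."""
    for attempt in range(1, runs + 1):
        ok = script()  # stand-in for the tool replaying the recorded actions
        if not ok:
            return f"failed on attempt {attempt}"
    return f"passed {runs} consecutive playbacks"

recorded_script = lambda: True  # stand-in for a script whose AUT run is perfect
print(play_back(recorded_script))
```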
11.3.1 Test Script Command Language
There is no standard or universal script command language for all test tools.
Because all script languages have a common purpose within the same record/play-
back paradigm, the language skill a tester acquires using one vendor's tool script
transfers substantially intact to a different vendor's tool script. Tool vendors have
typically chosen one of two approaches to designing their test tool's script language.
The first approach is to invent a scripting language from scratch that addresses the
unique kinds of activities needed to operate a computer without end-user intervention. The
vendors who started their scripting languages from scratch quickly recognized the
need to add flow-of-control constructs to their scripting languages like those found in
such standard programming languages as COBOL, FORTRAN, PL/1, and BASIC.
The second approach is to adopt one of the standard programming languages for
flow of control and extend that language with commands that are uniquely needed
for robot control of a computer. The four standard programming languages most of-
ten extended for test tool scripting are PASCAL, C, Visual Basic, and most recently
Java.
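The second approach can be illustrated with a short sketch. The robot-control commands below (`type_text`, `click_button`) are hypothetical stand-ins for a vendor's extensions; the loop around them is ordinary flow of control borrowed from the host programming language, which is exactly the division of labor this approach relies on.

```python
log = []  # records each replayed action, in place of driving a real GUI

def type_text(field, text):
    """Hypothetical vendor extension: type text into a named field."""
    log.append(f"type {text!r} into {field}")

def click_button(name):
    """Hypothetical vendor extension: click a named button."""
    log.append(f"click {name}")

# Standard-language flow of control wrapped around the tool's commands.
for uid in ["alice", "bob"]:
    type_text("login field", uid)
    click_button("Submit")

print(len(log))  # four robot-control actions were replayed
```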
Because the tool scripting languages follow standard programming language
conventions, timing point information from the operating system clock is readily
available. The implication for performance testing is that specific application actions
can be timed very accurately. For example, if an application lets the end user search
a catalog for products, the test script language allows timing points to be placed just
before and just after the search button is clicked. The difference between these two
timing points tells the tester and the developer precisely how long the search took.
These timing points are used both to find bottlenecks in the application (places where
the application transaction takes the most time to complete) and to establish the per-
formance baseline described in Chapter 9. The performance baseline is measured by
adding timing points at the very beginning and very end of a script that has already
verified the correct behavior of a particular business transaction. Re-executing the
script with these two added timing points provides the precise total processing time
necessary to complete the verified business transaction in an empty system.
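The catalog-search example above can be sketched in Python using the operating system clock via `time.perf_counter`. The `perform_search` function is a hypothetical stand-in for the tool replaying the "click search" action; the two timing points bracket it exactly as the text describes.

```python
import time

def perform_search(catalog, term):
    """Stand-in for the test tool replaying the search-button click."""
    return [item for item in catalog if term in item]

catalog = ["red widget", "blue widget", "green gadget"]

start = time.perf_counter()                 # timing point: just before the click
results = perform_search(catalog, "widget")
elapsed = time.perf_counter() - start       # timing point: just after the results

print(f"search returned {len(results)} items in {elapsed:.6f} s")
```

The same two-timing-point pattern, placed at the very beginning and very end of an already-verified script, yields the baseline transaction time described in Chapter 9.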
A time-saving feature called data-driven execution has emerged in most scripting
languages. This feature allows the tester to record a simple end-user activity
such as logging into the AUT with one user ID. Then, by using data-driven execution
features, the tester can make the same simple recorded script execute with hundreds
or thousands of additional user IDs. The key to the data-driven execution approach