11.6.3 Hidden Action and Results Verification—a Technology Issue
Although many application responses are revealed directly to the end user, others are hidden from the end user's view. Examples of revealed responses are new screen messages indicating "successful submission," new screens indicating successful submission with processing details, or new screens with requested search results. An example of a hidden response is a control item in a screen object, such as a "Submit" button, that is hidden from the end user after the first click to inhibit duplicate submissions.
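As a minimal sketch of this pattern, the following form (a hypothetical example using Python's standard Tkinter toolkit, not an example from the testing tools discussed here) has a Submit button that removes itself after the first click:

import tkinter as tk

root = tk.Tk()
status = tk.Label(root, text="")
status.pack()

def on_submit():
    # Hidden response: the control disappears so it cannot be clicked twice.
    submit.pack_forget()
    # Revealed response: the confirmation message the end user actually sees.
    status.config(text="successful submission")

submit = tk.Button(root, text="Submit", command=on_submit)
submit.pack()
root.mainloop()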
Another example of a hidden response is the set of database changes that result from the end user submitting some kind of request (a search, a purchase order, and so on). The revealed response is "successful submission." The hidden response is a set of database updates that may not be directly displayed to, or viewable by, the end user. Developers go to great lengths to keep these actions and results hidden from the end user, either because revealing them would not help the end user do a better job or because revealing them would allow the end user to put the application into unexpected processing states with potentially disastrous results for the business.
Testing tool designers are aware of this duality: the end-user view versus the developer view of application responses. Knowing that both views are necessary for complete testing results, testing tool designers have produced tools that can observe and act on both views of the AUT.
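A hedged sketch of what verifying both views might look like follows; the purchase-order function, the orders table, and the in-memory SQLite database are illustrative assumptions, not details taken from any particular testing tool:

import sqlite3

def submit_purchase_order(conn, item, quantity):
    # Hidden response: the database update the end user never sees.
    conn.execute("INSERT INTO orders (item, quantity) VALUES (?, ?)",
                 (item, quantity))
    conn.commit()
    # Revealed response: the confirmation the screen would display.
    return "successful submission"

def test_both_views():
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT, quantity INTEGER)")

    # End-user view: check the revealed message.
    assert submit_purchase_order(conn, "widget", 3) == "successful submission"

    # Developer view: check the hidden database change directly.
    assert conn.execute(
        "SELECT item, quantity FROM orders").fetchall() == [("widget", 3)]

test_both_views()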
11.6.4 Timing Point Precision and Distribution—a Technology Issue
One of the test tool capabilities arguably most beneficial for software development in the 21st century is precision timing points. Before this capability existed, testers were relegated to using a stopwatch, a wall clock, or the sweep second hand on a wristwatch. If the application activity took 15 or 20 minutes to complete, as was common in the 1970s, then these chronometers provided sufficient timing point precision.
As application response speeds became much faster, the Rule of 8 (see Chapter 9) reflected new sub-minute response time measurement needs. Testers needed, and were provided with, tool timing point precision to the millisecond. This leap in precision afforded the tester response time measurements for both the total application process and its subprocesses that were unattainable with stopwatch accuracy. The testing benefit of subprocess timing point precision is the ability to identify bottlenecks in the response path, as illustrated in the Round Trip Performance Testing section of Chapter 9.
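A minimal sketch of millisecond timing points around an application process and its subprocesses follows; the three step functions and their delays are hypothetical placeholders, not measurements from any real application:

import time

def timed(label, func, timings):
    # Record a timing point before and after the call; store the elapsed
    # time in milliseconds under the given label.
    start = time.perf_counter()
    result = func()
    timings[label] = (time.perf_counter() - start) * 1000.0
    return result

timings = {}
timed("validate input", lambda: time.sleep(0.02), timings)
timed("query database", lambda: time.sleep(0.35), timings)  # the bottleneck
timed("render results", lambda: time.sleep(0.05), timings)
timings["total application process"] = sum(timings.values())

for label, ms in timings.items():
    print(f"{label}: {ms:.1f} ms")

Comparing the subprocess times against the total immediately exposes the slowest step in the response path, which is the kind of bottleneck analysis Chapter 9 describes.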
The trend toward distributed computing in the 1980s led to an additional chal-
lenge for performance testers. The application was no longer spending time just
on the end-user computer and a large host computer. The application time was
distributed across a number of specialized midsize computers called servers. Even