• Exception management faults:
- Have all possible error conditions been taken into consideration?
- Are useful errors generated per fault for both end users and developers? (A brief sketch follows this list.)
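As a brief illustration of the second question, the following sketch separates the message logged for developers (full context and stack trace) from the message shown to end users. The names load_order, OrderNotFoundError and the data store are invented for illustration and are not taken from the text.

    import logging

    logger = logging.getLogger("orders")

    class OrderNotFoundError(Exception):
        """Raised when an order id does not exist in the data store."""

    def load_order(order_id, store):
        """Look up an order, reporting the fault differently to each audience."""
        try:
            return store[order_id]
        except KeyError as exc:
            # Developer-facing: full detail and stack trace go to the log.
            logger.exception("Order lookup failed for id=%r", order_id)
            # End-user-facing: a plain, actionable message with no internals.
            raise OrderNotFoundError(
                f"Order {order_id} could not be found. "
                "Please check the number and try again."
            ) from exc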
After completing the code walkthrough and answering the previous questions, there are numerous other methods for testing and verifying code, as described in earlier chapters. Some of these methods are cleanroom testing, black-box testing, mathematical verification, static analysis, object-oriented testing, etc.
13.5.3 Performance Testing
Performance testing is where developers get to demonstrate the power and performance characteristics of their system. It is important to note, however, that performance is relative: whichever performance test is chosen, it measures performance only relative to the specifications and particular characteristics of that system. Marketing a system relies heavily on these benchmarks.
There are two main forms of performance benchmark. The first is the processor-bound benchmark, of which there are several variations; MIPS, Sieve and Dhrystone are three of them. Processor-bound benchmarks measure the number of instructions executed in a given time. These benchmarks are essential when determining how much work can be accomplished by processor-heavy applications.
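To make the idea concrete, the following sketch times repeated runs of a Sieve of Eratosthenes and reports completed runs per second. The limit and repeat count are arbitrary choices for illustration, not figures from any published benchmark; note that the timed loop contains no input/output.

    import time

    def sieve(limit):
        """Classic Sieve of Eratosthenes; returns the count of primes up to limit."""
        flags = bytearray([1]) * (limit + 1)
        flags[0:2] = b"\x00\x00"
        for i in range(2, int(limit ** 0.5) + 1):
            if flags[i]:
                flags[i * i :: i] = bytearray(len(flags[i * i :: i]))
        return sum(flags)

    def benchmark(repeats=50, limit=100_000):
        """Processor-bound benchmark: pure computation, no I/O in the timed loop."""
        start = time.perf_counter()
        for _ in range(repeats):
            sieve(limit)
        elapsed = time.perf_counter() - start
        return repeats / elapsed  # sieve runs completed per second

    if __name__ == "__main__":
        print(f"{benchmark():.1f} sieve runs per second")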
The second is the input/output-bound benchmark, which measures all other aspects of performance. Common items include bridges, gateways, database servers, physical storage, volatile storage, operating systems and networks. These tests can be harder to perform than processor-bound benchmarks because they encompass a huge range of software and hardware combinations and intricacies.
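The following sketch illustrates one simple input/output-bound measurement, timing sequential writes and reads of a temporary file. The file and block sizes are arbitrary assumptions, and a realistic benchmark would also need to control for operating-system caching.

    import os
    import tempfile
    import time

    def io_benchmark(total_bytes=64 * 1024 * 1024, block_size=64 * 1024):
        """Time sequential file writes and reads; returns (write_MB_s, read_MB_s)."""
        block = os.urandom(block_size)
        blocks = total_bytes // block_size
        with tempfile.NamedTemporaryFile(delete=False) as f:
            path = f.name
            start = time.perf_counter()
            for _ in range(blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())          # force the data out to physical storage
            write_s = time.perf_counter() - start
        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(block_size):     # sequential read back
                pass
        read_s = time.perf_counter() - start
        os.remove(path)
        mb = total_bytes / (1024 * 1024)
        return mb / write_s, mb / read_s

    if __name__ == "__main__":
        w, r = io_benchmark()
        print(f"write: {w:.1f} MB/s  read: {r:.1f} MB/s")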
13.5.4 Reporting Test Results
Nothing is better than a graph when it comes to documenting a system's performance. Graphs give stakeholders insight into the benchmark without making the results overly complex or technical. Graphs, however, should be well documented and should not be convoluted. Figure 13.12 is a processor-bound benchmark adapted from Burch (Burch 1992). Graphs and the accompanying written documentation should be simple and should accommodate the audience. If you are writing for end users rather than technical publishers, the language should reflect this difference.
Technical publishers and technical review committees will expect technical language.
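As one way to produce a simple, clearly labelled chart, the following sketch plots a set of invented processor-bound results with matplotlib. The system names and values are purely illustrative and are not the data shown in Figure 13.12.

    import matplotlib.pyplot as plt

    # Hypothetical processor-bound results (runs per second); not the data in Fig. 13.12.
    systems = ["System A", "System B", "System C"]
    runs_per_second = [42.0, 57.5, 35.2]

    fig, ax = plt.subplots()
    ax.bar(systems, runs_per_second)
    ax.set_ylabel("Sieve runs per second")
    ax.set_title("Processor-bound benchmark results (illustrative)")
    ax.set_ylim(0, max(runs_per_second) * 1.2)
    fig.tight_layout()
    fig.savefig("benchmark.png", dpi=150)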