software will experience a drastic increase in failure rate each time an upgrade is
made. The failure rate levels off gradually, partly because of the defects found and
fixed after the upgrades. 8
The upgrades in Figure 14.1(b) imply that increases in software reliability result from feature or functionality upgrades. With each functionality upgrade, the complexity of the software is likely to increase. Functionality enhancements and bug fixes may themselves cause additional software failures when they develop failure modes of their own. A drop in the software failure rate is possible, however, if the goal of the upgrade is to enhance reliability, for example, through a redesign or reimplementation of some modules using better engineering approaches, such as the clean-room method.
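The failure-rate behavior described above can be sketched numerically. The model below is purely illustrative and not a formula from the text: it assumes a constant base rate plus an exponentially decaying jump at each upgrade, standing in for the defects introduced by an upgrade and gradually found and fixed afterward.

```python
import math

def failure_rate(t, upgrade_times, base=1.0, jump=0.5, decay=0.1):
    """Illustrative failure-rate model: each upgrade at time u adds a
    jump that decays exponentially as post-upgrade defects are found
    and fixed, so the rate levels off between upgrades."""
    rate = base
    for u in upgrade_times:
        if t >= u:
            rate += jump * math.exp(-decay * (t - u))
    return rate

# The rate spikes right after each upgrade, then levels off gradually.
curve = [failure_rate(t, upgrade_times=[10, 30]) for t in range(50)]
```

The `jump` and `decay` parameters are arbitrary; the point is the shape of the curve: a step up at each upgrade followed by a gradual return toward the base rate, as in Figure 14.1(b).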
More time gives the DFSS team more opportunity to test variations of input
and data, but the length of time is not the defining characteristic of complete testing.
Consider a software module that controls some machinery. You would want to know whether the hardware would survive long enough, but you also would want to know
whether the software has been tested for every usage scenario that seems reasonable
and for as many scenarios as possible that are unreasonable but conceivable. The real
issue is whether testing demonstrates that the software is fit for its duty and whether
testing can make it fail under realizable conditions.
What criteria could better serve software reliability assessment? The answer is
that it depends on (Whittaker & Voas, 2000):
Software Complexity 9 : If you are considering a simple text editor, for example,
without fancy features like table editing, figure drawing, and macros, then 4,000
hours might be a lot of testing. For a modern, feature-rich word processor, 4,000
hours is nowhere near enough.
Testing Coverage : If during those 4,000 hours the software sat idle or the same
features were tested repeatedly, then more testing is required. If testers ran a
nonstop series of intensive, minimally overlapping tests, then release might be
justified.
Operating Environment : Reliability models assume (but do not enforce) testing
based on an operational profile. Certified reliability is good only for usage that
fits that profile. Changing the environment or usage beyond that profile can cause
failure. The operational profile simply is not adequate to guarantee reliability.
We propose studying a broader definition of usage to cover all aspects of an
application's operating environment, including configuring the hardware and
other software systems with which the application interacts.
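To make the operational-profile idea concrete, here is a minimal sketch of profile-driven test selection; the operation names and usage probabilities are hypothetical, not taken from the text.

```python
import random

# Hypothetical operational profile: expected field-usage probabilities.
PROFILE = {"open": 0.40, "edit": 0.35, "save": 0.20, "macro": 0.05}

def next_operation(rng=random):
    """Draw the next operation to test, with frequency matching the
    profile, so reliability estimated from such testing applies to
    this usage pattern -- and only to this usage pattern."""
    ops, weights = zip(*PROFILE.items())
    return rng.choices(ops, weights=weights, k=1)[0]
```

A reliability figure certified this way says little about usage the profile underweights (heavy macro use, in this sketch), which is exactly the limitation raised above.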
The contemporary definition of software reliability based on time-in-test assumes
that the testers fully understand the application and its complexity. The definition
also assumes that teams applied a wide variety of tests in a wide variety of operating
conditions and omitted nothing important from the test plan. As Table 14.2 shows,
8 See Jiantao Pan at http://www.ece.cmu.edu/~koopman/des_s99/sw_reliability/.
9 See Chapter 5.
 