specified time. Representative estimation models include exponential distribution models, the Weibull distribution model, Thompson and Chelson's model, and so on. Exponential models and the Weibull distribution model are usually called classical fault count/fault rate estimation models, whereas Thompson and Chelson's model belongs to the Bayesian fault rate estimation models. Trending reliability can be further classified into four categories:
Error Seeding : Estimates the number of errors in a program by using mul-
tistage sampling. Errors are divided into indigenous and induced (seeded)
errors. The unknown number of indigenous errors is estimated from the
number of induced errors, and the ratio of errors obtained from debugging
data.
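The estimate described above can be sketched in a few lines. This is the classic single-stage (capture-recapture style) form of the error-seeding estimate; the function name and the numbers are illustrative, not from the text.

```python
def estimate_indigenous_errors(seeded_total, seeded_found, indigenous_found):
    """Single-stage error-seeding estimate.

    Assumes seeded errors are as likely to be found during debugging as
    indigenous ones, so the detection ratio carries over:
        indigenous_total ~= indigenous_found * seeded_total / seeded_found
    """
    if seeded_found == 0:
        raise ValueError("no seeded errors found; the estimate is undefined")
    return indigenous_found * seeded_total / seeded_found

# Example: 20 errors are seeded; debugging finds 16 of them plus 24
# indigenous errors, giving an estimate of 30 indigenous errors in total.
print(estimate_indigenous_errors(20, 16, 24))  # → 30.0
```

The key assumption, stated in the docstring, is that seeded and indigenous errors are equally detectable; when seeded faults are systematically easier to find, the estimate is biased low.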
This technique simulates a wide variety of anomalies, including program-
mer faults, human operator errors, and failures of other subsystems (software
and hardware) with which the software being tested interacts. For example, seeded programmer faults can be used to evaluate test-stoppage criteria based on test effectiveness. One of the earliest applications of software
fault seeding was mutation testing (DeMillo et al., 1978). Mutation testing
builds a test suite that can detect all seeded, syntactic program faults. Be-
cause there are multiple definitions of what it means to detect all simulated
syntactic programmer faults, there are multiple types of mutation testing.
Once mutation testing builds the test suite, the suite is used during testing.
Seeded programmer errors are nothing more than semantic changes to the
code itself. For example, changing x = x - 1 to x = x + 1 is a seeded fault.
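A seeded fault of this kind can be made concrete in a few lines. The function names below are made up for illustration; the mutant applies the semantic change just described.

```python
def countdown(x):
    # Original code: step toward zero.
    return x - 1

def countdown_mutant(x):
    # Mutant: the seeded fault replaces x - 1 with x + 1.
    return x + 1

def kills_mutant(test_input):
    """A test input "kills" (distinguishes) the mutant when the
    original and mutated programs produce different outputs."""
    return countdown(test_input) != countdown_mutant(test_input)

print(kills_mutant(5))  # → True: any input distinguishes this mutant
```

This particular mutant is trivially killed by every input; in practice many mutants are killed only by a small fraction of test inputs, which is what makes them useful for judging test quality.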
By making such modifications, the DFSS team can develop a set of test
cases that distinguish these mutant programs from the original. The hypoth-
esis is that test cases that are good at detecting hypothetical (seeded) errors
are more likely to be good at detecting real errors. Using error seeding to
measure test effectiveness, the team needs to:
1. Build test suites based on the effectiveness of test cases to reveal the
seeded errors.
2. Use the test cases to test for real faults.
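Step 1 above can be sketched as filtering candidate tests by whether they reveal any seeded error, with the resulting suite's mutation score (fraction of seeded errors revealed) summarizing its effectiveness. The test names, mutant names, and detection matrix below are all made up:

```python
# Detection matrix: detected[t][m] is True when test t reveals seeded error m.
detected = {
    "t1": {"m1": True,  "m2": False, "m3": False},
    "t2": {"m1": False, "m2": True,  "m3": True},
    "t3": {"m1": False, "m2": False, "m3": False},  # reveals nothing
}

# Step 1: keep only the tests that reveal at least one seeded error.
suite = [t for t, hits in detected.items() if any(hits.values())]

# Mutation score: fraction of seeded errors revealed by the chosen suite.
mutants = {"m1", "m2", "m3"}
killed = {m for t in suite for m in mutants if detected[t][m]}
score = len(killed) / len(mutants)

print(suite)  # → ['t1', 't2']
print(score)  # → 1.0
```

Step 2 then runs the selected suite against the real program, on the hypothesis stated above that tests good at revealing seeded errors are also good at revealing real ones.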
Just as all test cases are not equally effective for fault detection, not all
seeded faults are of equal value. This brings us to the notion of fault size.
The size of a real fault (or seeded fault) is simply the number of test cases that detect the fault. When we inject a large fault, most test cases can catch it. Therefore, it is more beneficial to inject small faults and create a test suite that reveals them. Small errors are harder
to detect, and 10 test cases that detect tiny faults are more valuable than a
20-member test suite that catches only huge errors. A test that detects small
errors almost certainly will detect huge errors. The reverse is not necessarily
true.
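Under this definition, fault size can be read directly off a detection matrix: the more test cases that detect a fault, the larger (and easier) it is. A small sketch with made-up data illustrates the asymmetry described above:

```python
# detects[f] is the set of test-case indices (out of 20) that detect fault f.
detects = {
    "huge_fault": set(range(18)),  # detected by 18 of 20 test cases
    "tiny_fault": {3, 11},         # detected by only 2 of 20 test cases
}

# Fault size = number of test cases that detect the fault.
sizes = {f: len(tests) for f, tests in detects.items()}
print(sizes)  # → {'huge_fault': 18, 'tiny_fault': 2}

# A suite chosen to catch the tiny fault also happens to catch the huge
# one here, but a suite that only catches huge faults may miss tiny ones.
small_suite = {3, 11}
print(bool(small_suite & detects["huge_fault"]))  # → True
```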
Failure Rate : Studies the program failure rate per fault at the failure intervals. As the number of remaining faults changes, the failure rate of the program changes accordingly.
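A classic model of this kind is the Jelinski-Moranda model, in which the failure rate during the i-th failure interval is proportional to the number of faults still in the program: lambda_i = phi * (N - i + 1), where N is the initial number of faults and phi is the per-fault hazard rate. A minimal sketch, with illustrative parameter values:

```python
def jm_failure_rate(N, phi, i):
    """Jelinski-Moranda failure rate during the i-th failure interval,
    i.e. after i - 1 faults have been detected and removed."""
    remaining = N - (i - 1)
    return phi * remaining

# With N = 10 initial faults and phi = 0.05 per fault, the failure rate
# drops linearly as faults are removed:
print([round(jm_failure_rate(10, 0.05, i), 2) for i in (1, 5, 10)])
# → [0.5, 0.3, 0.05]
```

The example shows the behavior the text describes: each repaired fault lowers the program's failure rate, here by a fixed amount phi per fault.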