Given a circuit and a simple fault model such as the stuck-at fault model, it is
possible to construct a fault list which consists of all possible faults in the circuit un-
der the fault model. For example, a fault list under the stuck-at fault model consists
of stuck-at-1 and stuck-at-0 faults on all signal lines of a circuit. Fault simulation
decides, for each fault in the fault list, whether any of the test patterns in the test
set detects the fault, i.e., whether the circuit with the pattern applied to its inputs
produces different responses when the fault is present and absent, respectively. The
fraction of detected faults among all faults is called fault coverage. Higher fault
coverage indicates a higher quality of the test set; fault coverage of 100% implies
that all faults have been detected. ATPG is invoked for each fault which is not yet
detected; it generates a test pattern which detects that fault and, possibly, other
faults, thus enhancing coverage.
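To make the flow concrete, here is a minimal sketch of serial stuck-at fault simulation in Python; the two-gate circuit, the signal-line names, and the test set are invented for illustration and are not taken from any particular tool:

# Example circuit (invented): n = a AND b, out = n OR c.
LINES = ["a", "b", "c", "n", "out"]        # lines carrying stuck-at faults

def simulate(inputs, fault=None):
    """Evaluate the circuit; `fault` is a (line, stuck_value) pair or None."""
    def settle(line, value):
        # A stuck-at fault overrides the fault-free value of one line.
        if fault is not None and fault[0] == line:
            return fault[1]
        return value
    a = settle("a", inputs["a"])
    b = settle("b", inputs["b"])
    c = settle("c", inputs["c"])
    n = settle("n", a & b)
    return settle("out", n | c)

# Fault list: stuck-at-0 and stuck-at-1 on every signal line (10 faults).
fault_list = [(line, sv) for line in LINES for sv in (0, 1)]

test_set = [{"a": 1, "b": 1, "c": 0}, {"a": 0, "b": 1, "c": 1}]

detected = set()
for pattern in test_set:
    good = simulate(pattern)               # fault-free response
    for fault in fault_list:
        if simulate(pattern, fault) != good:
            detected.add(fault)            # responses differ: fault detected

print(f"fault coverage: {len(detected) / len(fault_list):.0%}")   # 50%

In this toy run, every stuck-at-0 fault is detected (50% coverage), while the five stuck-at-1 faults remain undetected; these are exactly the faults that would be handed to ATPG.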
It turns out that some faults in a circuit might be redundant, i.e., undetectable.
An example is a stuck-at-1 fault on a line which assumes the logical value 1 under
all input vectors. ATPG cannot produce a pattern detecting a redundant fault (it
can, however, mathematically prove that the fault is redundant). If a circuit
has redundant faults, fault coverage of 100% cannot be achieved. Hence, the metric
called fault efficacy (sometimes also called fault efficiency) is used instead: for a
fault list of N faults, where D faults are detected and R faults are known to be
redundant, fault efficacy is defined as D/(N - R). Fault efficacy of 100% is the
maximal quality a test set could have under the considered fault model.
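Both notions can be illustrated with a short, hedged sketch (circuit and figures invented): the first half exhibits a redundant fault in the constant-1 circuit out = a OR NOT a, the second half evaluates the D/(N - R) formula.

# Redundant fault: in out = a OR NOT a, the output line is 1 under every
# input vector, so stuck-at-1 at out never changes the response.
def const_circuit(a, stuck_at_1_out=False):
    out = a | (1 - a)                      # evaluates to 1 for a in {0, 1}
    return 1 if stuck_at_1_out else out

# No input vector distinguishes faulty from fault-free behavior.
assert all(const_circuit(a, True) == const_circuit(a) for a in (0, 1))

# Fault efficacy D/(N - R), with invented figures:
# 94 of 100 faults detected, 4 proven redundant.
def fault_efficacy(d, n, r):
    return d / (n - r)

print(f"fault efficacy: {fault_efficacy(94, 100, 4):.1%}")    # 97.9%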
The complexity of fault simulation is polynomial in the size of the circuit and the
test set. The trivial fault simulation algorithm would, for every test pattern, simulate
the circuit in the absence of faults and then take each fault from the fault list, simulate
the circuit in the presence of that fault, and compare the results. ATPG is NP-complete
and proving fault redundancy is co-NP-complete, meaning that there is probably
no algorithm which is guaranteed to generate a test pattern for any given fault in
polynomial time. Nevertheless, state-of-the-art ATPG methods are often successful
in generating patterns for most faults even in very large practical circuits.
Interestingly, calculating fault efficacy exactly also cannot be done in polynomial
time (unless P = NP), since undetected faults must be checked for redundancy.
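To make both decision problems concrete, here is a brute-force "ATPG" over the invented circuit from the first sketch (it reuses simulate, fault_list, and detected from there): it enumerates all 2^3 input vectors, which is the exponential search that practical ATPG heuristics try to prune, and exhausting the loop without success constitutes a proof of redundancy.

from itertools import product

def brute_force_atpg(fault):
    """Try every input vector of the 3-input example circuit; the search is
    exponential in the number of inputs, reflecting the hardness of ATPG."""
    for bits in product((0, 1), repeat=3):
        pattern = dict(zip("abc", bits))
        if simulate(pattern, fault) != simulate(pattern):
            return pattern                 # a test pattern detecting the fault
    return None                            # no vector detects it: redundant

for fault in fault_list:
    if fault not in detected:
        print(fault, "->", brute_force_atpg(fault))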
It is tempting to believe that achieving fault efficacy of 100% will yield perfect
test quality, i.e., that all manufactured ICs having a defect will be identified during
the test process. Unfortunately, this is not the case. As mentioned above, a circuit
with a fault is just a model of a manufactured IC having a defect. The modeling
often implies abstraction, i.e., details of the low-level behavior of the defective
circuit are not considered, in order to reduce the complexity of fault simulation and
ATPG. Different fault models are supposed to model different classes of actual
defects with different degrees of accuracy. Furthermore, there are defects which
lead to circuit behavior so complex that no fault model can represent it; these are
called unmodeled defects (Khare 1996). See Fig. 8.3a in Chapter 8 for an example
of such a defect.
The ultimate quality measure of a test strategy is the quality level, measured in
defective parts per million (DPPM): the number of defective chips which passed
the test, were delivered, and resulted in a customer return. There is a non-trivial
relationship between the fault efficacy achieved by a test set and the resulting
quality level.
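As a hedged arithmetic illustration of the metric (all figures invented):

# DPPM arithmetic: 23 defective chips escape the test among 1,500,000
# shipped parts (invented figures).
shipped, escapes = 1_500_000, 23
print(f"quality level: {escapes / shipped * 1e6:.1f} DPPM")   # ~15.3 DPPM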