and poorly understood relationship between fault coverage (or fault efficacy) and the DPPM level. In general, higher fault coverage implies a lower DPPM level. However, it is difficult to predict the DPPM level from a known fault coverage. For products with stringent DPPM targets of a few tens or less, aiming for 100% fault efficacy with respect to a simple fault model such as the stuck-at fault model is often associated with an unacceptable risk of missing the target, with the resulting financial and reputational impact.
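As a rough illustration of why coverage alone is a poor predictor, the classical Williams-Brown model estimates the defect level from process yield Y and fault coverage T as DL = 1 - Y^(1-T). The sketch below uses made-up numbers and the model's simplifying assumptions, which rarely hold exactly in practice; it merely shows that even very high stuck-at coverage can leave an escape rate far above a target of a few tens of DPPM.

def williams_brown_dppm(process_yield: float, fault_coverage: float) -> float:
    """Estimate the defect level in DPPM using the Williams-Brown model
    DL = 1 - Y**(1 - T), with yield Y and fault coverage T given as fractions."""
    defect_level = 1.0 - process_yield ** (1.0 - fault_coverage)
    return defect_level * 1e6  # convert the escape fraction to parts per million

# Illustrative numbers: 90% yield and 99% stuck-at coverage still leave
# roughly 1050 DPPM, far above a target of a few tens.
print(williams_brown_dppm(0.90, 0.99))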
As a consequence, several strategies to improve the quality of the test process have been devised:

- Defect-based test (DBT): in addition to conventional fault models, more accurate models are used to better capture the actual physical defect mechanisms and the low-level behavior of the defective circuits. Instances of such models can be found in Chapters 1 through 3 of this book.

- N-detect: simple fault models are used, but every fault is required to be detected several times, by different test patterns, thus increasing the probability of incidentally detecting an unmodeled defect.

- Non-nominal test: the test is applied under stress conditions outside the IC's specification, e.g., a supply voltage or temperature that is too high or too low. Non-functional behavior of the chip is observed, e.g., the current consumption during test (I_DDQ test). Chips that passed the test may still be rejected based on statistical reasoning; e.g., dies on the wafer surrounded by a large number of dies that failed the test are rejected as well.
At this time, it is impossible to say which of the strategies is best; often a mix of strategies appears to be optimal. DBT is a systematic, targeted approach which requires a good understanding of the failure mechanisms. N-detect is attractive because it requires only limited adjustments to existing tools, including fault simulators and automatic test pattern generators (ATPG). Non-nominal test methods often require costly test equipment and longer test times. Furthermore, chips that are good under nominal conditions could fail under stress conditions and thus be rejected (yield loss).
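The counting that N-detect adds on top of an ordinary single-detect flow is indeed small. The following minimal sketch assumes a hypothetical detects(pattern, fault) predicate of the kind any conventional stuck-at fault simulator could provide; the function name and the greedy selection strategy are illustrative, not those of a particular tool.

def select_n_detect_patterns(patterns, faults, detects, n=3):
    """Greedy sketch: select patterns until every fault has been detected
    at least n times. 'detects(p, f)' is an assumed detection predicate,
    e.g. backed by a conventional stuck-at fault simulator."""
    remaining = {f: n for f in faults}   # detections still needed per fault
    selected = []
    for p in patterns:
        hits = [f for f, need in remaining.items() if need > 0 and detects(p, f)]
        if hits:
            selected.append(p)
            for f in hits:
                remaining[f] -= 1
        if all(need <= 0 for need in remaining.values()):
            break
    return selected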
This chapter discusses modeling of the resistive faults introduced in Chapter 2 in a way that enables efficient fault simulation and ATPG algorithms. Although this approach clearly belongs to the class of DBT strategies, it can also be leveraged to evaluate non-nominal test methods, in particular low-voltage and low-temperature testing. The difficulty in handling resistive defects arises from the defect resistance being a continuous parameter that affects the behavior of the faulty circuit. A bridge between two circuit lines can assume an infinite number of resistance values; the bridge may be detected for some of these values and remain undetected for others. Hence, the notion of a fault list in the sense introduced above is no longer well-defined. The conventional understanding of fault coverage must therefore be replaced by a statistical definition, and, consequently, fault simulation and ATPG must be based on different principles than the standard algorithms.
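To make the statistical view concrete, suppose a resistive bridging fault simulator reports, for each fault, the resistance intervals within which the applied test set detects it, and that a probability density of the bridge resistance is known. One fault's coverage is then the probability mass of its detected intervals, and an overall figure can be obtained by averaging over all faults. The sketch below is an illustration only, not the metric defined in this chapter: the exponential density, the interval data, and the function names are assumptions made up for the example.

import math

def detection_probability(intervals, rho):
    """Probability that a bridge resistance drawn from density 'rho' falls
    into one of the detected intervals [(r_lo, r_hi), ...] (midpoint rule)."""
    total = 0.0
    for r_lo, r_hi in intervals:
        steps = 1000
        dr = (r_hi - r_lo) / steps
        total += sum(rho(r_lo + (i + 0.5) * dr) for i in range(steps)) * dr
    return total

def rho(r):
    # Assumed, purely illustrative density: exponential with a 1 kOhm mean.
    return math.exp(-r / 1000.0) / 1000.0

# Detected resistance intervals (in Ohm) for two hypothetical bridging faults:
per_fault_intervals = [
    [(0.0, 600.0)],                   # detected only for low resistances
    [(0.0, 300.0), (800.0, 2000.0)],  # detected in two disjoint intervals
]

coverages = [detection_probability(iv, rho) for iv in per_fault_intervals]
print(sum(coverages) / len(coverages))  # average over faults: a statistical coverage figure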
The next section introduces basic concepts used in this chapter and defines various fault coverage metrics. In Section 4.2, a fault simulation algorithm is presented. Section 4.3 describes a high-performance resistive bridging fault simulator which leverages some of the speed-up techniques known from stuck-at fault simulation.