Many tools provide proprietary means to view and debug assertions; here, however, we are tool-agnostic and assume only that tools generally provide waveforms for viewing, with some marking that identifies the start time and the end time of an evaluation attempt. To gain more insight into a failure, we rely on means within the SV language itself to provide further information on the progress of the failing attempt. Debugging is usually done in simulation, even though the assertion may have been developed for, or may have failed in, formal verification.
In the following sections, we address two scenarios: debugging an assertion
during its development, and debugging a failing assertion in a regression test
for a design.
19.1 Debugging an Assertion Under Development
The starting point of any debugging effort while developing a custom assertion is a
good requirement specification that states the trigger conditions and the sequence of
signal combinations that must hold following the trigger. Based on this information,
a simple test bench should be developed. If the assertion is complex, a random
test bench is preferable, since a completely exhaustive test may be impractical.
This guideline is similar to developing a test bench for verifying a design. While
inspecting the results from simulating the assertion with the test bench, we must
be careful to identify and verify any unwanted vacuous successes and incomplete
evaluations.
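For concreteness, the sketches in the rest of this section assume a simple,
hypothetical requirement: whenever req is asserted, ack must follow within 1
to 8 clock cycles (the signal names and the bound are assumptions made purely
for illustration). A minimal encoding, together with a cover property that
confirms non-vacuous evaluations actually occur, might look as follows:

  // Hypothetical requirement: after req, ack must arrive within 1..8 cycles.
  // Any attempt in which req is low succeeds vacuously and proves nothing.
  property p_req_ack;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:8] ack;
  endproperty

  a_req_ack: assert property (p_req_ack);

  // Matching cover property: if it never hits, all successes were vacuous.
  c_req_ack: cover property (@(posedge clk) disable iff (!rst_n)
                             req ##[1:8] ack);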
The next step is to change the test bench to generate erroneous situations that
induce assertion failures, while avoiding acceptable situations as much as
possible. This step is much harder, because the number of possible failure
scenarios may be quite large, and the resulting test bench may unavoidably
contain acceptable situations in which the assertion succeeds. It is these
successes that must be scrupulously analyzed for validity.
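As a sketch of such an error-injecting test bench (all names hypothetical,
continuing the req/ack example), the following drives mostly legal traffic but
occasionally delays the acknowledge beyond the allowed window:

  module tb;
    logic clk = 0, rst_n = 0, req = 0, ack = 0;
    always #5 clk = ~clk;

    // The assertion under development (e.g., a_req_ack above) would be
    // instantiated or bound here.

    initial begin
      repeat (2) @(posedge clk);
      rst_n = 1;
      repeat (200) begin
        int delay;
        @(posedge clk);
        req <= 1'b1;
        // Legal delay is 1..8 cycles; roughly 10% of the time, inject an
        // illegal delay of 9..12 cycles to provoke a failure.
        delay = ($urandom_range(0, 9) == 0) ? $urandom_range(9, 12)
                                            : $urandom_range(1, 8);
        repeat (delay) @(posedge clk);
        ack <= 1'b1;
        req <= 1'b0;
        @(posedge clk);
        ack <= 1'b0;
      end
      $finish;
    end
  endmodule

Note that because req remains asserted while waiting, later overlapping
attempts may still succeed legitimately even when the first attempt fails;
these are precisely the successes that need scrutiny.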
Suppose that an invalid success or an invalid failure is detected. One then
needs to isolate the particular invalid attempt, so as not to clutter the
debugging information with data from other attempts, and to observe the
progress of the evaluation of that attempt. Since most simulators report the
start time of a failing or succeeding attempt, that time can be used to control
the starting and stopping of the assertion with the control system tasks
$assertoff and $asserton (see Sect. 7.3). Assuming that the test bench is
repeatable, the assertion should be disabled from time 0 using $assertoff until
just before the start time of the attempt of interest, at which point it should
be enabled using $asserton for just one clock tick, and thereupon disabled
again using $assertoff. This launches exactly the single attempt of interest.
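A control sequence for this, added to the test bench, might look as follows;
the names and the time T_FAIL are assumptions, with T_FAIL read off the
simulator's failure report:

  localparam time T_FAIL = 1250;   // start time of the attempt of interest
                                   // (taken from the failure report)
  initial begin : isolate_one_attempt
    $assertoff(0, a_req_ack);      // suppress all attempts from time 0
    #(T_FAIL - 1);                 // wait until just before that attempt
    $asserton(0, a_req_ack);       // re-enable for a single clock tick
    @(posedge clk);                // the attempt of interest starts here
    #1 $assertoff(0, a_req_ack);   // then suppress subsequent attempts
  end

This works because $assertoff blocks new attempts but does not abort one
already in progress ($assertkill would); the isolated attempt therefore runs
to completion.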
Once the invalid attempt is isolated, we can instrument the assertion by adding
local variables for collecting data, using match items in sequences to assign
and display signal values, and using action blocks to display any additional
information about the evaluation.
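Continuing the hypothetical req/ack example, an instrumented variant might use
a local variable cnt as a wait-cycle counter, match items to update and display
it while the attempt is in flight, and an action block to report the failure;
the per-cycle display is done via match items because a property's local
variables are not directly visible in the action block:

  property p_req_ack_dbg;
    int cnt;                       // local variable: cycles waited so far
    @(posedge clk) disable iff (!rst_n)
      (req, cnt = 0, $display("%0t: attempt started", $time))
      |=> ((!ack, cnt++, $display("%0t: waiting, cnt = %0d", $time, cnt))
           [*0:7]
           ##1 ack);
  endproperty

  a_req_ack_dbg: assert property (p_req_ack_dbg)
    else $error("%0t: no ack within 8 cycles of req", $time);

Run on the isolated attempt, the display output shows exactly how far the
evaluation progressed before the failure.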