switching activity cannot be pinpointed to any particular structure; it consists of everything
that goes on in the processor to execute incorrect instructions. 12 It is therefore orthogonal to
all the other types of activity discussed in this chapter: it is characterized only by the fact that
it occurs while executing down the wrong path.
A solution for this type of activity was alluded to in Section 4.10. Sodani and Sohi observed
that a good deal of instruction reuse is due to speculative execution down the wrong path.
This is because the wrong path and the correct path of execution often converge,
sometimes quickly, resulting in the same instructions being executed twice: first following
the misspeculation and then again after the branch is resolved. An instruction reuse buffer can
capture some of this repetition and reduce the negative impact of incorrect execution, but such
a technique has not been studied from a power-consumption perspective. Absent a way to salvage
some of the incorrect execution, a different high-level approach is needed to curb its power
impact.
Pipeline gating: This approach, proposed by Manne, Klauser, and Grunwald, is called
pipeline gating [ 161 ]. The idea is to gate and stall the whole pipeline when the processor
treads down very uncertain (execution) paths. Since pipeline gating refrains from executing
when confidence in branch prediction is low, it rarely hurts performance. It does so in only two
cases: when the stalled execution would eventually have turned out to be correct,
or when incorrect execution had a beneficial side effect on overall performance (e.g., because of
prefetching). On the other hand, it can effectively avoid a considerable amount of incorrect
execution and save the corresponding power. Saving power without affecting performance is
the ideal goal for an effective architectural technique.
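Manne et al. gate fetch when the number of unresolved low-confidence branches in flight exceeds a small threshold. The following is a minimal sketch of that bookkeeping; the class name, method names, and the threshold value are illustrative assumptions, not the exact hardware interface described in [ 161 ]:

```python
# Sketch of pipeline gating: fetch is stalled ("gated") while the number of
# in-flight, unresolved low-confidence branches exceeds a threshold.
# Names and the default threshold are illustrative assumptions.

class PipelineGate:
    def __init__(self, threshold=2):
        self.threshold = threshold       # max tolerated low-confidence branches
        self.low_conf_in_flight = 0      # unresolved low-confidence branches

    def on_branch_predicted(self, low_confidence):
        """Called when a branch prediction is made at fetch/decode."""
        if low_confidence:
            self.low_conf_in_flight += 1

    def on_branch_resolved(self, was_low_confidence):
        """Called when a branch resolves in the execute stage."""
        if was_low_confidence:
            self.low_conf_in_flight -= 1

    def fetch_gated(self):
        """True while fetch should stall to avoid likely wrong-path work."""
        return self.low_conf_in_flight > self.threshold
```

With a threshold of 2, for example, fetch stalls only when three or more low-confidence branches are simultaneously unresolved, which keeps the performance cost of gating small while still blocking the most uncertain execution paths.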
The success of pipeline gating depends on how confidence in branch prediction is assessed.
Two metrics matter for a confidence estimator. First, how many of the mispredicted branches
can be flagged as low-confidence: this is the coverage of the estimator. Second, of the
predictions flagged as low-confidence, how many turn out to be wrong: since the goal here
is to detect wrong predictions, this is the "accuracy" of the estimator. 13 Coverage
and accuracy are usually antagonistic in a design. Increasing one diminishes the other. It turns
out that it is easier to increase the coverage than the accuracy of an estimator. The estimators
proposed by Manne et al. range in coverage from 72% to 88% (for the gshare and the McFarling
combined gshare+bimodal branch predictors) but reach at most 37% accuracy in the best case.
This shows that even low-confidence predictions are usually correct, roughly two out of three times.
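To make the two metrics concrete, the following sketch computes them from per-branch outcomes; the function name is illustrative, and the example numbers in the test are simply chosen to land near the figures quoted above:

```python
def coverage_and_accuracy(events):
    """Compute confidence-estimator metrics from per-branch outcomes.

    events: list of (low_confidence, mispredicted) boolean pairs,
            one per dynamic branch.

    coverage = mispredicted branches flagged low-confidence
               / all mispredicted branches
    accuracy = actual mispredictions among low-confidence flags
               / all low-confidence flags
    """
    mispredicted = [e for e in events if e[1]]
    low_conf = [e for e in events if e[0]]
    coverage = sum(1 for lc, mp in mispredicted if lc) / len(mispredicted)
    accuracy = sum(1 for lc, mp in low_conf if mp) / len(low_conf)
    return coverage, accuracy
```

For instance, out of 100 branches with 10 mispredictions, an estimator that flags 24 branches as low-confidence, 8 of them actual mispredictions, has 80% coverage but only 8/24 = 33% accuracy, matching the observation that a low-confidence flag is still usually wrong about the prediction being wrong.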
12 This includes the fetching, decoding, renaming, issuing, and executing of instructions but, of course, not their
final committing.
13 For convenience, the terms "coverage" and "accuracy" are used here in place of the more rigorous terms
Specificity and Predictive Value of a Negative Test [ 161 ].