The downside of resizing the cache in the number of its sets is that already resident cache
lines become inaccessible. The oversized tags prevent erroneous matches and make flushing
the remaining part of the cache unnecessary. The miss rate, however, is still affected by every
resizing, since everything in the turned-off part is lost.
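To make the oversized-tag argument concrete, the following sketch shows a lookup in a cache whose number of index bits can shrink at run time while the stored tag always covers every address bit above the minimum index. This is only an illustration of the idea, not the DRI hardware; the sizes and identifiers (LINE_BITS, MIN_INDEX_BITS, and so on) are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch (not the actual DRI hardware): a direct-mapped cache
 * whose set count can change at run time. The stored tag is "oversized":
 * it covers every address bit above the MINIMUM index, so bits that stop
 * being index bits after a downsizing are still checked on every lookup. */

#define LINE_BITS       6               /* 64-byte lines (assumed)           */
#define MIN_INDEX_BITS  6               /* smallest size: 64 sets (assumed)  */
#define MAX_INDEX_BITS 10               /* largest size: 1024 sets (assumed) */

typedef struct {
    bool     valid;
    uint64_t tag;                       /* oversized tag */
} line_t;

static line_t   sets[1u << MAX_INDEX_BITS];
static unsigned index_bits = MAX_INDEX_BITS;  /* current size = 2^index_bits sets */

static bool lookup(uint64_t addr)
{
    uint64_t set = (addr >> LINE_BITS) & ((1u << index_bits) - 1);
    uint64_t tag = addr >> (LINE_BITS + MIN_INDEX_BITS);

    /* The stored tag and the current index together always cover all address
     * bits above the block offset, so a line installed under a different
     * index width can never match the wrong address; at worst it is simply
     * not found (it appears "misplaced" or sits in the turned-off part),
     * which costs a miss but never a wrong hit. */
    return sets[set].valid && sets[set].tag == tag;
}
```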
The DRI approach to resizing is in stark contrast to the resizing approaches for dynamic
power covered in Chapter 4, Section 4.8. Techniques such as Selective Cache Ways [8], the
Accounting Cache [68, 9], or Miss Tag Resizing [243] resize the data cache to reduce dynamic
power by disabling associative ways, i.e., changing the cache associativity. In fact, the Miss Tag
Resizing technique [243] also uses the gated-Vdd mechanism to completely turn off cache lines
and save leakage power along with the dynamic power. One of the benefits of these resizing
approaches is that no change in the indexing of the cache is needed, meaning that data already
resident remain accessible. Although these techniques could very well be adapted for static
power, Powell et al. took the approach of resizing the cache in the number of its sets. Their
reasoning is that resizing in associativity is not appropriate for instruction caches because it
would preclude direct-mapped caches and would affect both capacity and conflict misses.
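For contrast, the sketch below illustrates way-based resizing in the spirit of Selective Cache Ways, again with assumed sizes and names. Because only an enable mask over the ways changes, the index computation stays the same and data in the still-enabled ways remain accessible.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of way-based resizing: ways are disabled through an enable mask,
 * the index computation never changes, and lines in the still-enabled ways
 * remain accessible. Sizes and identifiers are illustrative assumptions. */

#define LINE_BITS   6
#define INDEX_BITS  8                       /* 256 sets (assumed) */
#define NUM_WAYS    4

typedef struct { bool valid; uint64_t tag; } line_t;

static line_t  ways[NUM_WAYS][1u << INDEX_BITS];
static uint8_t way_enable = 0xF;            /* bit i set -> way i powered/usable */

static bool lookup(uint64_t addr)
{
    /* Index is computed exactly as in the full-size cache. */
    uint64_t set = (addr >> LINE_BITS) & ((1u << INDEX_BITS) - 1);
    uint64_t tag = addr >> (LINE_BITS + INDEX_BITS);

    for (unsigned w = 0; w < NUM_WAYS; w++) {
        if (!(way_enable & (1u << w)))      /* disabled way: skipped entirely */
            continue;
        if (ways[w][set].valid && ways[w][set].tag == tag)
            return true;
    }
    return false;
}
```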
Losing the ability to access resident lines might not be as disastrous for instruction
caches as for data caches. This is because changes in the working set of code tend to be more
abrupt than the corresponding changes for data—execution simply moves to another part of the
code, scrapping the previous working set. In addition, the read-only nature of code eliminates
consistency and coherency problems stemming from turning off or “misplacing” cache lines in
the cache with the new indexing. 5
Resizing policy: The policy proposed to resize the DRI I-cache is based on monitoring
the miss rate. Misses are counted within a fixed time interval (on the order of a few thousand
cycles). At the end of the interval a resizing decision is made. The decision compares the
measured number of misses to a user-defined preset “miss bound.” If the cache does not perform
up to expectations (measured misses > miss bound), the number of sets is increased; otherwise
the cache is further downsized. A user-defined “size bound” prevents downsizing of the index
beyond some point. This is a safety mechanism to prevent overzealous downsizing.
The size bound prevents pathological oscillations between two sizes. This can happen
when the miss rate exceeds the bound for the smaller size but is well under the bound with the
next larger size. Finally, a parameter called the divisibility of the cache controls how many index
bits are enabled or disabled at a time, i.e., it is the divisor (2, 4, 8, ...) or multiplier used when
resizing the cache.
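The decision logic just described can be summarized in the following sketch. The interval length, the default bound values, and all identifiers are assumptions made for illustration; only the structure (interval miss counting, the miss-bound and size-bound checks, and the divisibility step) follows the text.

```c
#include <stdbool.h>

#define INTERVAL_CYCLES 4096u   /* "a few thousand cycles" (assumed value) */

static unsigned miss_bound      = 64;   /* user-defined miss bound (assumed value)      */
static unsigned size_bound_bits = 6;    /* size bound: minimum index bits (assumed)     */
static unsigned max_index_bits  = 10;   /* full-size cache (assumed)                    */
static unsigned div_bits        = 1;    /* divisibility of 2: +/- one index bit at a time */

static unsigned index_bits      = 10;   /* current cache size = 2^index_bits sets       */
static unsigned misses, cycles;

/* Called every cycle; 'miss' indicates whether the access this cycle missed. */
void dri_tick(bool miss)
{
    misses += miss;
    if (++cycles < INTERVAL_CYCLES)
        return;                         /* interval not over yet */

    if (misses > miss_bound) {
        /* Cache is not performing up to expectations: enable more index bits. */
        if (index_bits + div_bits <= max_index_bits)
            index_bits += div_bits;
    } else {
        /* Performing well: disable index bits, but never go below the size bound. */
        if (index_bits >= size_bound_bits + div_bits)
            index_bits -= div_bits;
    }
    misses = cycles = 0;                /* start a new interval */
}
```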
Although this policy dynamically resizes the cache under the miss bound constraint,
critical parameters such as the size bound, the miss bound itself, and the divisibility factor
are set statically by the user.
5 A notable exception to the read-only nature of code is Intel's IA-32 ISA, which allows self-modifying code.