partition (LP). We use LRU as the replacement policy only for LP. A subset of the ways (at most 50%) can be part of LP; the rest of the ways belong to RP and use a random replacement policy, which requires no extra hardware. During block eviction, the LRU block of LP is replaced by a randomly chosen block from RP, and the newly incoming block is allocated to RP in place of that random block. Ultimately, the newly incoming block is placed in a randomly selected position of RP, and the victim block in that position is moved into LP, replacing the LRU block of LP.
Since the LRU policy is implemented only for a subset of the ways (at most 50%), the hardware cost is much lower than that of true LRU. Also, instead of choosing the LRU block of the whole set as the victim, our policy selects the LRU block from among a number of randomly chosen blocks as the final victim. This partially addresses the two major issues of implementing an LRU-based scheme in the LLC discussed earlier. Note that our proposed replacement policy targets the LLC (here L2); the replacement policy for L1 can be any existing policy.
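The eviction flow above can be sketched in software. The following is a minimal, illustrative model of one cache set, not the paper's hardware design: class and method names are assumptions, and we assume an RP hit updates no replacement state (the text does not specify hit behavior for RP).

```python
import random

class HybridSet:
    """One cache set split into an LRU-managed partition (LP)
    and a random-replacement partition (RP). Illustrative sketch;
    names and hit handling are assumptions, not from the paper."""

    def __init__(self, lp_ways, rp_ways):
        self.lp = []              # list ordered LRU (index 0) -> MRU (end)
        self.rp = []              # unordered; random replacement
        self.lp_ways = lp_ways
        self.rp_ways = rp_ways

    def access(self, tag):
        """Return True on a hit, False on a miss (after filling)."""
        if tag in self.lp:        # hit in LP: move block to MRU position
            self.lp.remove(tag)
            self.lp.append(tag)
            return True
        if tag in self.rp:        # hit in RP: no replacement-state update
            return True
        self._fill(tag)
        return False

    def _fill(self, tag):
        if len(self.rp) < self.rp_ways:   # free way in RP: just allocate
            self.rp.append(tag)
            return
        # Incoming block takes a randomly chosen RP position ...
        i = random.randrange(len(self.rp))
        displaced = self.rp[i]
        self.rp[i] = tag
        # ... and the displaced RP block moves into LP, evicting
        # LP's LRU block if LP is full (this is the final victim).
        if len(self.lp) >= self.lp_ways:
            self.lp.pop(0)
        self.lp.append(displaced)
```

Note how the block that finally leaves the cache is LP's LRU block, i.e., the least recently used among blocks that were earlier randomly displaced from RP, which mirrors the "LRU among randomly chosen blocks" victim selection described above.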
The rest of the paper is organized as follows. The next section presents related work on CMP cache architectures and replacement policies. Section 3 describes our proposed cache replacement policy. Section 4 covers performance evaluation using full-system simulation. Section 5 concludes the paper.
2 Related Work
Replacement policies have been well studied in the past [8, 9], but the emergence of larger LLCs in CMPs has motivated researchers toward further innovation in this field. It has generally been believed that some version of an LRU-based policy performs better than other replacement policies [1], but multicore and hierarchical cache organizations affect both the performance and the cost of LRU policies. The pros and cons of both local and global replacement policies are discussed in [10]. The author proposed several global replacement policies and compared them with local replacement policies, finding that global replacement policies do not always perform better than local ones.
An MLP (memory-level parallelism) based replacement policy has been proposed in [7] to take both the fetching cost (from main memory) and recency into account during the replacement of a block. However, computing and managing the MLP cost for each block adds significant memory and hardware overhead.
As discussed in Section 1, dead lines and never-reused lines degrade the performance of LRU-based policies. A counter-based technique [4] has been proposed to deal with these issues. Though the technique improves performance, maintaining a counter for each cache line adds extra overhead.
Cache replacement policy is a major area of research, and many more innovative ideas have already been proposed; some recent papers in this area include [6, 11-13].
All the LRU-based policies described above perform better than LRU, but most of them are oriented toward performance rather than hardware cost. LRU is normally considered expensive in terms of hardware requirements. Also, all the above