random selection of a cache block for replacement is done based on the output of the
random number generator at the time of replacement. This technique is simple and
does not require much additional overhead. However, its main shortcoming is that it
does not take locality into consideration. Random techniques have nevertheless been found effective enough that they were used by Intel in its iAPX microprocessor series.
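As an illustration, random replacement can be sketched in a few lines. This is a hypothetical fully associative model (the function and variable names are ours, not from the text):

```python
# Minimal sketch of random replacement for a fully associative cache
# of n lines; illustrative only, not the text's implementation.
import random

def access_random(cache, block, n):
    """Return True on a hit; on a miss insert the block, evicting a
    randomly chosen resident block if the cache is full."""
    if block in cache:
        return True
    if len(cache) == n:
        # victim picked purely at random: no locality information used
        cache.pop(random.randrange(len(cache)))
    cache.append(block)
    return False
```

Because the victim comes straight from a random number generator, no per-block state has to be maintained, which is the source of the technique's low overhead.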
The FIFO technique takes the time spent by a block in the cache as a measure for
replacement. The block that has been in the cache the longest is selected for replace-
ment regardless of the recent pattern of access to the block. This technique requires
keeping track of the lifetime of a cache block. Therefore, it is not as simple as the
random selection technique. Intuitively, the FIFO technique is reasonable to use for straight-line programs, where locality of reference is not of concern.
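The FIFO policy described above can be sketched as follows; this is a minimal illustration assuming a fully associative cache of n lines (names are ours, not from the text):

```python
# Minimal FIFO replacement sketch for a fully associative cache of
# n lines; a queue orders blocks by time spent in the cache.
from collections import deque

def simulate_fifo(requests, n):
    cache = deque()                 # oldest (longest-resident) block at the left
    misses = 0
    for block in requests:
        if block not in cache:      # miss
            misses += 1
            if len(cache) == n:
                cache.popleft()     # evict the block that has been in longest
            cache.append(block)
        # on a hit nothing changes: residence time, not recency, decides
    return misses
```

Note that a hit does not reorder the queue; unlike LRU, FIFO ignores the recent pattern of access to a block.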
According to the LRU replacement technique, the cache block that has been least recently used is selected for replacement. Among the three replacement
techniques, the LRU technique is the most effective. This is because the history
of block usage (as the criterion for replacement) is taken into consideration. The
LRU algorithm requires the use of a cache controller circuit that keeps track of refer-
ences to all blocks while residing in the cache. This can be achieved through a
number of possible implementations. Among these implementations is the use of
counters. In this case each cache block is assigned a counter. Upon a cache hit,
the counter of the corresponding block is set to 0, all other counters having a smaller
value than the original value in the counter of the hit block are incremented by 1, and
all counters having a larger value are kept unchanged. Upon a cache miss, the block whose counter shows the maximum value is chosen for replacement, its counter is set to 0, and all other counters are incremented by 1.
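The counter scheme just described can be sketched as follows. This is a minimal illustration assuming a fully associative cache whose counters start at the distinct values 0..N-1; the class and method names are ours:

```python
# Counter-based LRU sketch for a fully associative cache of n lines;
# the line with the maximum counter is the least recently used.
class LRUCache:
    def __init__(self, n):
        self.blocks = [None] * n        # tag held by each line
        self.counters = list(range(n))  # distinct starting ages 0..n-1

    def access(self, tag):
        if tag in self.blocks:          # cache hit
            hit = self.blocks.index(tag)
            # increment counters smaller than the hit block's counter;
            # larger counters are left unchanged
            for k in range(len(self.counters)):
                if self.counters[k] < self.counters[hit]:
                    self.counters[k] += 1
            self.counters[hit] = 0
            return True
        # cache miss: victim is the line with the maximum counter
        victim = self.counters.index(max(self.counters))
        self.blocks[victim] = tag
        for k in range(len(self.counters)):
            self.counters[k] += 1      # all other counters incremented...
        self.counters[victim] = 0      # ...and the victim's reset to 0
        return False
```

A short run shows the intended behavior: after accessing A, B, A in a two-line cache, a miss on C evicts B (the least recently used block), so a later access to A still hits while B misses.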
Having introduced the above three cache mapping techniques, we offer the following example, which illustrates the main observations made about the three techniques.
Example 5
Consider the case of a 4 × 8 two-dimensional array of numbers, A. Assume that each number in the array occupies one word and that the array elements are stored in column-major order in the main memory from location 1000 to location 1031. The cache consists of eight blocks, each consisting of just two words. Assume also that, whenever needed, the LRU replacement policy is used. We would like to examine the changes in the cache if each of the above three mapping techniques is used as the following sequence of requests for the array elements is made by the processor:

a0,0, a0,1, a0,2, a0,3, a0,4, a0,5, a0,6, a0,7
a1,0, a1,1, a1,2, a1,3, a1,4, a1,5, a1,6, a1,7
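The request sequence can also be checked mechanically. The sketch below is a hypothetical simulation of the direct-mapped case, assuming the usual index rule (cache line = memory block number mod number of lines) and the column-major address formula address(i, j) = 1000 + 4j + i implied by the layout above; all names are ours:

```python
# Simulation of Example 5 under direct mapping (illustrative):
# 4x8 array stored column-major at 1000..1031, cache of 8 lines
# of 2 words each, line index = memory block number mod 8.
ROWS, COLS, BASE = 4, 8, 1000
BLOCK_WORDS, NUM_LINES = 2, 8

def address(i, j):
    # column-major: column j occupies ROWS consecutive words
    return BASE + j * ROWS + i

requests = [(0, j) for j in range(COLS)] + [(1, j) for j in range(COLS)]

lines = [None] * NUM_LINES          # memory block currently held per line
misses = replacements = 0
for (i, j) in requests:
    mem_block = address(i, j) // BLOCK_WORDS
    line = mem_block % NUM_LINES
    if lines[line] != mem_block:    # miss
        misses += 1
        if lines[line] is not None:
            replacements += 1       # an occupied line is overwritten
        lines[line] = mem_block
print(misses, replacements)
```

Every element of a column maps to the same cache line as the corresponding element four columns away, so the two rows keep evicting each other: the run reproduces the 16 misses and 12 replacements reported in the solution below.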
Solution
The distribution of the array elements in the main memory is shown in
Figure 6.11. Shown also is the status of the cache before the above requests were made.
Direct Mapping
Table 6.3 shows that there were 16 cache misses (not a single
cache hit) and that the number of replacements made is 12 (these are shown