Figure 6.3 Address mapping operation (the Memory Management Unit's translation function maps a system address to an address in primary memory on a hit; on a miss, the block containing the requested element is fetched from the secondary level)
available to the processor. If, on the other hand, the element is not currently in the
cache, then the block containing it is brought from the main memory and placed
in the cache, and the requested element is made available to the processor.
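The hit/miss handling described above can be sketched as follows. This is a minimal illustrative model, not an implementation from the text; the names (`read`, `BLOCK_SIZE`, the dict-based cache) and the toy memory contents are assumptions for the sake of the example.

```python
BLOCK_SIZE = 16                      # assumed words per block

main_memory = list(range(1024))      # toy main memory, one word per address
cache = {}                           # block number -> list of words in that block

def read(address: int) -> int:
    block_no = address // BLOCK_SIZE
    if block_no not in cache:
        # Miss: bring the whole block from main memory into the cache.
        start = block_no * BLOCK_SIZE
        cache[block_no] = main_memory[start:start + BLOCK_SIZE]
    # Hit (or the block was just brought in): serve the requested element.
    return cache[block_no][address % BLOCK_SIZE]
```

Note that a miss always transfers an entire block, not just the requested word, so a subsequent access to a neighboring address within the same block is a hit.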
6.2.5. Cache Memory Organization
There are three main organization techniques used for cache memory. The
three techniques are discussed below. They differ in two main aspects:
1. The criterion used to place, in the cache, an incoming block from the main
memory.
2. The criterion used to replace a cache block with an incoming block (when the cache is full).
Direct Mapping
This is the simplest among the three techniques. Its simplicity
stems from the fact that it places an incoming main memory block into a specific
fixed cache block location. The placement is done based on a fixed relation between
the incoming block number, i, the cache block number, j, and the number of cache
blocks, N:
j = i mod N
Example 1
Consider, for example, the case of a main memory consisting of 4K
blocks, a cache memory consisting of 128 blocks, and a block size of 16 words.
Figure 6.4 shows the division of the main memory and the cache according to the
direct-mapped cache technique.
As the figure shows, there are a total of 32 main memory blocks that map to
a given cache block. For example, main memory blocks 0, 128, 256, 384, ...,
3968 map to cache block 0. We therefore call the direct-mapping technique a
...
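The mapping in Example 1 can be checked with a short sketch of the relation j = i mod N. The names here (`cache_block`, `N_CACHE_BLOCKS`) are illustrative; the parameters (4K main memory blocks, 128 cache blocks) are taken from the example.

```python
N_CACHE_BLOCKS = 128       # number of cache blocks, N (from Example 1)
N_MAIN_BLOCKS = 4 * 1024   # main memory size in blocks (4K)

def cache_block(i: int) -> int:
    """Direct mapping: incoming main-memory block i goes to cache block i mod N."""
    return i % N_CACHE_BLOCKS

# All main-memory blocks that compete for cache block 0:
sharing_block_0 = [i for i in range(N_MAIN_BLOCKS) if cache_block(i) == 0]
print(len(sharing_block_0))                       # 32
print(sharing_block_0[:4], sharing_block_0[-1])   # [0, 128, 256, 384] 3968
```

As in the figure, exactly 4096/128 = 32 main memory blocks (0, 128, 256, ..., 3968) map to each cache block, which is why an incoming block may evict an unrelated block that happens to share the same fixed location.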