On a write miss, the word that has been modified is written to main memory.
The line containing the word referenced is not loaded into the cache. On a write
hit, the cache is updated and the word is also written through to main memory.
The essence of this protocol is that all write operations result in the word
being written going through to memory to keep memory up to date at all times.
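
To make these local cases concrete, they can be modeled in a few lines of Python. The sketch below is only an illustration invented for this purpose (the name WriteThroughCache and its methods are not from the text); main memory and the cache are simply dictionaries mapping addresses to words.

class WriteThroughCache:
    """Toy model of one write-through cache, seen only from its own CPU."""

    def __init__(self, memory):
        self.memory = memory          # shared main memory: addr -> word
        self.lines = {}               # cached copies:      addr -> word

    def read(self, addr):
        if addr in self.lines:        # read hit: satisfied locally
            return self.lines[addr]
        word = self.memory[addr]      # read miss: fetch the word from memory
        self.lines[addr] = word       #            and load it into the cache
        return word

    def write(self, addr, word):
        if addr in self.lines:        # write hit: update the cached copy ...
            self.lines[addr] = word
        # write miss: the line is NOT loaded into the cache
        self.memory[addr] = word      # ... and every write goes through to memory

memory = {0x10: 3}
c = WriteThroughCache(memory)
c.write(0x10, 4)      # write miss: memory is updated, but the line is not loaded
print(c.read(0x10))   # read miss: prints 4 and loads the line into the cache
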
Now let us look at all these actions again, but this time from the snooper's
point of view, shown in the right-hand column of Fig. 8-27. Let us call the cache
performing the actions cache 1 and the snooping cache cache 2. When cache 1
misses on a read, it makes a bus request to fetch a line from memory. Cache 2 sees
this but does nothing. When cache 1 has a read hit, the request is satisfied locally,
and no bus request occurs, so cache 2 is not aware of cache 1's read hits.
Writes are more interesting. If CPU 1 does a write, cache 1 will make a write
request on the bus, both on misses and on hits. On all writes, cache 2 checks to see
whether it has the word being written. If not, from its point of view this is a remote
request/write miss and it does nothing. (To clarify a subtle point, note that in
Fig. 8-27 a remote miss means that the word is not present in the snooper's cache;
it does not matter whether it was in the originator's cache or not. Thus a single
request may be a hit locally and a miss at the snooper, or vice versa.)
Now suppose that cache 1 writes a word that is present in cache 2's cache (re-
mote request/write hit). If cache 2 does nothing, it will have stale data, so it marks
the cache entry containing the newly modified word as being invalid. In effect, it
removes the item from the cache. Because all caches snoop on all bus requests,
whenever a word is written, the net effect is to update it in the originator's cache,
update it in memory, and purge it from all the other caches. In this way,
inconsistent versions are prevented.
Of course, cache 2's CPU is free to read the same word on the very next cycle.
In that case, cache 2 will read the word from memory, which is up to date. At that
point, cache 1, cache 2, and the memory will all have identical copies of it. If
either CPU does a write now, the other one's cache will be purged, and memory will
be updated.
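
The whole exchange, including the snooper's side, can be sketched by putting two such toy caches on a make-believe bus, modeled here simply by having each cache call the other's snoop handler on every write. The class and method names below (SnoopingCache, snoop_write) are assumptions made for illustration, not real hardware interfaces.

class SnoopingCache:
    """Toy write-through cache that snoops every write on a shared bus."""

    def __init__(self, memory):
        self.memory = memory          # shared main memory: addr -> word
        self.others = []              # the other caches on the bus
        self.lines = {}               # cached copies:      addr -> word

    def read(self, addr):
        if addr in self.lines:        # read hit: no bus traffic, invisible to snoopers
            return self.lines[addr]
        word = self.memory[addr]      # read miss: fetch from (always up-to-date) memory
        self.lines[addr] = word
        return word

    def write(self, addr, word):
        if addr in self.lines:        # write hit: update the local copy
            self.lines[addr] = word
        self.memory[addr] = word      # every write goes through to memory ...
        for other in self.others:     # ... and is seen on the bus by every snooper
            other.snoop_write(addr)

    def snoop_write(self, addr):
        # Remote write hit: purge the now-stale copy.  Remote write miss: do nothing.
        self.lines.pop(addr, None)

memory = {0x100: 7}
cache1, cache2 = SnoopingCache(memory), SnoopingCache(memory)
cache1.others, cache2.others = [cache2], [cache1]

cache2.read(0x100)           # cache 2 now holds the word (value 7)
cache1.write(0x100, 42)      # CPU 1 writes; cache 2 purges its stale copy
print(cache2.read(0x100))    # read miss in cache 2, served from memory: prints 42

The three-line trace at the bottom reproduces the scenario just described: CPU 1 writes a word that cache 2 holds, cache 2 purges its copy, and CPU 2's next read misses and is served by the up-to-date memory.
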
Many variations on this basic protocol are possible. For example, on a write
hit, the snooping cache normally invalidates its entry containing the word being
written. Alternatively, it could accept the new value and update its cache instead of
marking it as invalid. Conceptually, updating the cache is the same as invalidating
it followed by reading the word from memory. In all cache protocols, a choice
must be made between an update strategy and an invalidate strategy. These
protocols perform differently under different loads. Update messages carry payloads
and are thus larger than invalidates but may prevent future cache misses.
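
The difference between the two strategies is visible only in how the snooper reacts to a remote write hit, so it reduces to a single branch. The helper below is a hypothetical sketch of that one decision; the function name, the lines dictionary, and the strategy argument are all invented for illustration.

def snoop_remote_write(lines, addr, payload, strategy="invalidate"):
    """lines: this snooper's addr -> word map; payload: the word seen on the bus."""
    if addr not in lines:
        return                        # remote write miss: nothing to do
    if strategy == "invalidate":
        del lines[addr]               # small bus message; the next local access misses
    else:                             # "update"
        lines[addr] = payload         # larger message (it carries the word),
                                      # but the next local read still hits
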
Another variant is loading the snooping cache on write misses. The
correctness of the algorithm is not affected by loading it, only the performance. The
question is: "What is the probability that a word just written will be written again
soon?" If it is high, there is something to be said for loading the cache on write
misses, known as a write-allocate policy. If it is low, it is better not to update the
cache on write misses.
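
The write-allocate choice is likewise a single branch in the write-miss path. The sketch below is again only an illustration under the write-through scheme described here (handle_write and the allocate flag are invented names): with allocate=True a word written on a miss is loaded, so a quick second access to it hits; with allocate=False the cache is left untouched.

def handle_write(lines, memory, addr, word, allocate=False):
    """lines: this cache's addr -> word map; memory: shared main memory."""
    if addr in lines:
        lines[addr] = word            # write hit: update the cached copy
    elif allocate:
        lines[addr] = word            # write-allocate: load the line on a write miss
    # no-allocate: a write miss leaves the cache alone (the default above)
    memory[addr] = word               # write-through: memory is updated either way
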
 