the amount of remote data needed greatly exceeds the cache capacity, cache misses
will occur constantly and performance will be poor.
Thus we have a situation in which UMA machines have excellent performance but
are limited in size and are quite expensive. NC-NUMA machines scale to somewhat
larger sizes but require manual or semi-automated placement of pages, often
with mixed results. The problem is that it is hard to predict which pages will be
needed where, and in any case, a page is often too large a unit to move around.
CC-NUMA machines, such as the Sun Fire E25K, may experience poor performance
if many CPUs need a lot of remote data. All in all, each of these designs
has serious limitations.
An alternative kind of multiprocessor tries to get around all these problems by
using each CPU's main memory as a cache. In this design, called COMA (Cache
Only Memory Access), pages do not have fixed home machines, as they do in
NUMA and CC-NUMA machines. In fact, pages are not significant at all.
Instead, the physical address space is split into cache lines, which migrate
around the system on demand. Blocks do not have home machines. Like nomads
in some Third World countries, home is where you are right now. A memory that
just attracts lines as needed is called an attraction memory. Using the main RAM
as a big cache greatly increases the hit rate, hence the performance.
Unfortunately, as usual, there is no such thing as a free lunch. COMA systems
introduce two new problems:
1. How are cache lines located?
2. When a line is purged, what happens if it is the last copy?
The first problem relates to the fact that after the MMU has translated a virtual
address to a physical address, if the line is not in the true hardware cache, there is no
easy way to tell if it is in main memory at all. The paging hardware does not help
here at all because each page is made up of many individual cache lines that
wander around independently. Furthermore, even if it is known that a line is not in
main memory, where is it then? It is not possible to just ask the home machine,
because there is no home machine.
Some solutions to the location problem have been proposed. To see if a cache
line is in main memory, new hardware could be added to keep track of the tag for
each cached line. The MMU could then compare the tag for the line needed to the
tags for all the cache lines in memory to look for a hit. This solution needs
additional hardware.
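As a rough software model of what such tag-matching hardware would have to do, the sketch below treats a node's attraction memory as an array of line slots, each carrying a tag. The names (am_slot, lookup_line) and the sizes are illustrative assumptions, not part of any design described here; real hardware would compare all tags in parallel rather than looping.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define LINE_SIZE  64           /* bytes per cache line (illustrative) */
#define NUM_SLOTS  (1 << 16)    /* line slots in this node's attraction memory */

/* One slot of the attraction memory.  The tag records which global
 * physical line currently occupies the slot, if any. */
struct am_slot {
    bool     valid;
    uint64_t tag;               /* global line number of the occupant */
    uint8_t  data[LINE_SIZE];
};

static struct am_slot attraction_mem[NUM_SLOTS];

/* Functional model of the proposed tag-matching hardware: given a
 * physical address, decide whether its line is anywhere in local memory.
 * The loop is only a software stand-in for a parallel tag comparison. */
static struct am_slot *lookup_line(uint64_t paddr)
{
    uint64_t line_tag = paddr / LINE_SIZE;

    for (size_t i = 0; i < NUM_SLOTS; i++) {
        if (attraction_mem[i].valid && attraction_mem[i].tag == line_tag)
            return &attraction_mem[i];   /* hit: line is in local RAM */
    }
    return NULL;    /* miss: the line must be located and fetched remotely */
}
```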
A somewhat different solution is to map entire pages in but not require that all
the cache lines be present. In this solution, the hardware would need a bit map per
page, giving one bit per cache line indicating the line's presence or absence. In
this design, called simple COMA, if a cache line is present, it must be in the right
position in its page, but if it is not present, any attempt to use it causes a trap to
allow the software to go find it and bring it in.
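A minimal sketch of this per-page bookkeeping, assuming a 4-KB page divided into 64-byte lines, is shown below; the structure and function names are hypothetical, and the handler that fetches a missing line is only a stub.

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE       4096
#define LINE_SIZE       64
#define LINES_PER_PAGE  (PAGE_SIZE / LINE_SIZE)   /* 64 lines, so one 64-bit mask */

/* Per-page bookkeeping for simple COMA: the page is mapped, but any
 * individual cache line may be absent from local memory. */
struct page_desc {
    uint64_t present_bits;          /* bit i set => line i is present locally */
    uint8_t  frame[PAGE_SIZE];      /* the local frame backing this page */
};

/* Stand-in for the software handler the trap would invoke: locate the
 * line elsewhere in the system and copy it into this frame. */
static void fetch_remote_line(struct page_desc *pd, unsigned line)
{
    /* ... ask other nodes for the line, then copy its LINE_SIZE bytes
     * into pd->frame at the line's fixed offset ... */
    (void)pd;
    (void)line;
}

/* Model of an access: if the presence bit for the touched line is clear,
 * the hardware traps and software brings the line in before the access
 * completes. */
static uint8_t *access_byte(struct page_desc *pd, unsigned offset)
{
    unsigned line = offset / LINE_SIZE;

    if (!(pd->present_bits & (1ULL << line))) {
        fetch_remote_line(pd, line);        /* "trap" to software */
        pd->present_bits |= 1ULL << line;   /* mark the line present */
    }
    return &pd->frame[offset];              /* line sits at its fixed offset */
}
```

Because each present line must sit at its natural offset within the page, the only extra state needed per page is the presence mask itself.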
 