Hardware Reference
In-Depth Information
If the desired memory location is outside the current page, one or more wait states are added while the system selects the new page.
To improve further on memory access speeds, systems have evolved to enable faster access to DRAM. One important change was the implementation of burst mode access in the 486 and later processors. Burst mode cycling takes advantage of the consecutive nature of most memory accesses. After setting up the row and column addresses for a given access, using burst mode you can then access the next three adjacent addresses with no additional latency or wait states. A burst access usually is limited to four total accesses. To describe this, we often refer to the timing in the number of cycles for each access. A typical burst mode access of standard DRAM is expressed as x-y-y-y; x is the time for the first access (latency plus cycle time), and y represents the number of cycles required for each consecutive access.
Standard 60ns-rated DRAM normally runs 5-3-3-3 burst mode timing. This means the
first access takes a total of five cycles (on a 66MHz system bus, this is about 75ns total,
or 5 × 15ns cycles), and the consecutive cycles take three cycles each (3 × 15ns = 45ns).
As you can see, the actual system timing is somewhat less than the memory is technically rated for. Without the bursting technique, memory access would be 5-5-5-5 because
the full latency is necessary for each memory transfer. The 45ns cycle time during burst
transfers equals about a 22.2MHz effective clock rate; on a system with a 64-bit (8-byte)
wide memory bus, this would result in a maximum throughput of 177MBps (22.2MHz ×
8 bytes = 177MBps).
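If you want to check this arithmetic yourself, the following short sketch (written in C purely for illustration; it is not part of any tool described here) reproduces the figures above. The 15ns cycle time, 8-byte bus width, and 5-3-3-3 burst timing are taken from the text; the throughput averaged over a complete burst, which also counts the first-access latency, is an added comparison rather than a figure quoted earlier.

#include <stdio.h>

int main(void)
{
    const double cycle_ns  = 15.0;          /* one 66MHz bus cycle is about 15ns  */
    const double bus_bytes = 8.0;           /* 64-bit (8-byte) wide memory bus    */
    const int    burst[4]  = {5, 3, 3, 3};  /* x-y-y-y cycles per burst access    */

    /* Consecutive burst cycles: 3 cycles x 15ns = 45ns per transfer. */
    double burst_cycle_ns = burst[1] * cycle_ns;
    double eff_clock_mhz  = 1000.0 / burst_cycle_ns;    /* about 22.2MHz */
    double peak_mbps      = eff_clock_mhz * bus_bytes;  /* about 177MBps */

    /* Averaged over a whole 5-3-3-3 burst, including the first-access latency. */
    int total_cycles = burst[0] + burst[1] + burst[2] + burst[3];
    double avg_mbps = (4.0 * bus_bytes) / (total_cycles * cycle_ns / 1000.0);

    printf("Effective burst clock:     %.1f MHz\n", eff_clock_mhz);
    printf("Peak burst throughput:     %.0f MBps\n", peak_mbps);
    printf("Average over a full burst: %.0f MBps\n", avg_mbps);
    return 0;
}

Running this prints an effective clock of 22.2MHz and a peak of about 178MBps, matching the rounded figures above, while the average over a full burst comes out noticeably lower because the five-cycle first access is included.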
DRAM memory that supports paging and this bursting technique is called Fast Page
Mode (FPM) memory. This term refers to the ability to access data on the same memory
page faster than data on other memory pages.
Most 386, 486, and Pentium systems from 1987 through 1995 used FPM memory, which
came in either 30-pin or 72-pin SIMM form.
Another technique for speeding up FPM memory is called interleaving. In this design, two separate banks of memory are used together, alternating access from one to the other as even and odd bytes. While one is being accessed, the other is being precharged, which is when the row and column addresses are being selected. Then, by the time the first bank in the pair is finished returning data, the second bank in the pair is finished with the latency part of the cycle and is now ready to return data. While the second bank is returning data, the first bank is being precharged, selecting the row and column address of the next access. This overlapping of accesses in two banks reduces the effect of the latency or precharge cycles and allows for faster overall data retrieval. The only problem is that to use interleaving, you must install identical pairs of banks together, doubling the number of modules required.
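To make the overlap concrete, here is a simplified cycle-count model, again a hypothetical C sketch rather than a description of any specific chipset. It assumes each access splits into a setup portion (precharge plus row/column select) and a data-transfer portion, and that with two interleaved banks the idle bank's setup runs in parallel with the active bank's transfer; the 2-cycle/3-cycle split is an assumed illustration, not a figure from the text.

#include <stdio.h>

int main(void)
{
    const int setup_cycles = 2;   /* precharge plus row/column select (assumed) */
    const int data_cycles  = 3;   /* data-transfer portion of an access         */
    const int accesses     = 8;   /* consecutive sequential accesses            */

    /* Single bank: every access pays setup plus data, back to back. */
    int single = accesses * (setup_cycles + data_cycles);

    /* Two interleaved banks: the idle bank's setup overlaps the active
       bank's data transfer, so only the first access pays the full setup. */
    int interleaved = setup_cycles + accesses * data_cycles;

    printf("Single bank:       %d cycles\n", single);
    printf("Interleaved banks: %d cycles\n", interleaved);
    return 0;
}

In this toy model the interleaved pair finishes eight accesses in 26 cycles instead of 40, which is the effect described above: the precharge and address-setup time of one bank is hidden behind the data transfer of the other.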