at the cost of greater latency. This subsection presents techniques that take advantage of the
nature of DRAMs.
As mentioned earlier, a DRAM access is divided into row access and column access.
DRAMs must buffer a row of bits inside the DRAM for the column access, and this row is
usually the square root of the DRAM size (for example, 2 Kb for a 4 Mb DRAM). As DRAMs
grew, additional structure and several opportunities for increasing bandwidth were added.
First, DRAMs added timing signals that allow repeated accesses to the row buffer without
another row access time. Such a buffer comes naturally, as each array will buffer 1024 to 4096
bits for each access. Initially, separate column addresses had to be sent for each transfer with
a delay after each new set of column addresses.
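To see why reusing the open row helps, the following sketch models access latency under a simple open-row policy. It is a minimal illustration only; the timing constants (a 15 ns row activation and a 10 ns column access) are assumed round numbers, not figures from this text.

# Illustrative open-row timing model; latencies are assumed values,
# not measurements of any particular DRAM part.
T_RAS = 15   # ns to activate (open) a row into the row buffer (assumed)
T_CAS = 10   # ns for a column access to an already open row (assumed)

def access_latency(row, open_row):
    """Return (latency_ns, new_open_row) for one access under an open-row policy."""
    if row == open_row:
        return T_CAS, open_row        # row-buffer hit: column access only
    return T_RAS + T_CAS, row         # miss: activate the new row, then column access

# Eight sequential accesses to the same row pay the row access time once.
open_row = None
total = 0
for addr_row in [5] * 8:
    latency, open_row = access_latency(addr_row, open_row)
    total += latency
print(total)   # 95 ns, versus 8 * 25 = 200 ns if every access reopened the row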
Originally, DRAMs had an asynchronous interface to the memory controller, so every transfer
involved overhead to synchronize with the controller. The second major change was to
add a clock signal to the DRAM interface, so that the repeated transfers would not bear that
overhead. Synchronous DRAM (SDRAM) is the name of this optimization. SDRAMs typically
also have a programmable register to hold the number of bytes requested, and hence can send
many bytes over several cycles per request. Typically, 8 or more 16-bit transfers can occur
without sending any new addresses by placing the DRAM in burst mode; this mode, which
supports critical word first transfers, is the only way that the peak bandwidths shown in
Figure 2.14 can be achieved.
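As a rough illustration of how burst mode approaches peak bandwidth, the sketch below compares effective transfer rates with and without bursting. The clock rate, bus width, burst length, and per-request command overhead are assumed values chosen to make the arithmetic simple; they are not taken from Figure 2.14.

# Assumed parameters for illustration only (not from Figure 2.14).
clock_hz       = 133e6        # SDRAM clock (assumed)
bus_bits       = 16           # width of each transfer (assumed)
burst_length   = 8            # transfers per burst, set in the mode register
command_cycles = 3            # cycles to send address/command per request (assumed)

bytes_per_transfer = bus_bits // 8

# Peak bandwidth: one transfer every clock, no command overhead.
peak_mb_s = clock_hz * bytes_per_transfer / 1e6

# Burst mode: pay the command overhead once per burst of burst_length transfers.
burst_cycles = command_cycles + burst_length
burst_mb_s = burst_length * bytes_per_transfer * clock_hz / burst_cycles / 1e6

# One address per transfer: pay the command overhead on every transfer.
single_cycles = command_cycles + 1
single_mb_s = bytes_per_transfer * clock_hz / single_cycles / 1e6

print(f"peak:   {peak_mb_s:.0f} MB/s")    # 266 MB/s
print(f"burst:  {burst_mb_s:.0f} MB/s")   # roughly 193 MB/s
print(f"single: {single_mb_s:.0f} MB/s")  # roughly 66 MB/s

Under these assumed numbers, sending a new address for every 16-bit transfer reaches only about a quarter of the peak rate, while an 8-transfer burst amortizes the command overhead and comes much closer to the peak.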