1.3.1 DMA
Direct memory access (DMA) is a technique used by many peripheral devices to transfer data between the device and the main memory. The purpose of DMA is to relieve the central processing unit (CPU) of the task of controlling the input/output (I/O) transfer. Since both the CPU and the I/O device share the same bus, the CPU has to be blocked while the DMA device is performing a data transfer. Several different transfer methods exist.
One of the most common methods is called cycle stealing, according to which the DMA device steals a CPU memory cycle in order to execute a data transfer. During the DMA operation, the I/O transfer and the CPU program execution run in parallel. However, if the CPU and the DMA device require a memory cycle at the same time, the bus is assigned to the DMA device and the CPU waits until the DMA cycle is completed. Using the cycle stealing method, there is no way of predicting how many times the CPU will have to wait for DMA during the execution of a task; hence the response time of a task cannot be precisely determined.
A possible solution to this problem is to adopt a different technique, which requires the DMA device to use the memory time-slice method [SR88]. According to this method, each memory cycle is split into two adjacent time slots: one reserved for the CPU and the other for the DMA device. This solution is more expensive than cycle stealing but more predictable. In fact, since the CPU and the DMA device do not conflict, the response times of the tasks do not increase due to DMA operations and hence can be predicted with higher accuracy.
1.3.2 CACHE
The cache is a fast memory that is inserted as a buffer between the CPU and the random access memory (RAM) to speed up processes' execution. It is physically located after the memory management unit (MMU) and is not visible at the software programming level. Once the physical address of a memory location is determined, the hardware checks whether the requested information is stored in the cache: if it is, data are read from the cache; otherwise the information is taken from the RAM, and the content of the accessed location is copied into the cache along with a set of adjacent locations. In this way, if the next memory access is made to one of these locations, the requested data can be read from the cache, without having to access the memory.
This buffering technique is motivated by the fact that statistically the most frequent accesses to the main memory are limited to a small address space, a phenomenon called locality of reference.