When considering the numerous data movements required for
most processing and how to increase computer performance, three
approaches may come to mind. The first is obvious: Developers of
computer equipment work to reduce latency in the various components.
For example, main memory is accessed frequently during
most processing. Thus, speeding up how quickly memory stores and
retrieves information (reducing latency) would likely have a significant
effect in making overall processing go faster. Computer equipment
is sometimes called hardware, and hardware developers constantly
seek ways to reduce the time it takes to move data and
process information.
As a second approach, one might try to increase the amount of
memory (the number of registers) within a CPU, so there is less need
to move data repeatedly between main memory and the CPU.
Unfortunately, providing full processing capabilities to memory
locations in the main part of a CPU is quite expensive and technologically
complex. For example, even as sophisticated a processor as
the Pentium CPU chip has under 40 full-capability storage locations.
(Technically, a complex CPU like the Pentium has different types of
registers for various specialized tasks, so a count of registers depends
on which tasks you want to include. For the Pentium, most register
counts would be 8, 16, 24, or 32.) Some other types of processors
(called Reduced Instruction Set Computers, or RISC processors) can
achieve as many as 64 or so such storage locations, but even then
the numbers are rather modest.
A third approach (actually the second realistic approach) to
increasing computer performance builds on the observation that
although processing may require much data over time, relatively few
items are usually needed at any given moment. The idea is to keep
current data very close to the CPU, where it can be recalled quickly
when needed. Such high-speed memory near the CPU is called cache.
Cache normally is rather small, but quite effective. (If you have
worked with a Web browser, you might have encountered the notion
of a cache for keeping track of recent Web pages. Although that disk
cache follows a philosophy similar to that of high-speed memory cache,
the discussion here involves different technology than a Web cache.)
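The effect of keeping a few current items in a small, fast store near the processor can be illustrated with a toy simulation. This is only a sketch, not a model of real hardware: the class names, the capacity of four entries, and the evict-the-oldest-entry policy are all illustrative assumptions.

```python
class SlowMemory:
    """Stands in for main memory: every read counts as a slow trip."""
    def __init__(self, data):
        self.data = data
        self.accesses = 0

    def read(self, address):
        self.accesses += 1          # each read models one slow memory access
        return self.data[address]


class Cache:
    """A tiny cache holding copies of a few recently used items."""
    def __init__(self, memory, capacity=4):
        self.memory = memory
        self.capacity = capacity
        self.lines = {}             # address -> copied value
        self.hits = 0
        self.misses = 0

    def read(self, address):
        if address in self.lines:   # hit: the copy is already close at hand
            self.hits += 1
            return self.lines[address]
        self.misses += 1            # miss: fetch from slow main memory...
        value = self.memory.read(address)
        if len(self.lines) >= self.capacity:
            # evict the oldest entry to make room (a simplistic policy)
            self.lines.pop(next(iter(self.lines)))
        self.lines[address] = value  # ...and keep a copy in the cache
        return value


memory = SlowMemory([10, 20, 30, 40])
cache = Cache(memory)
for address in [0, 1, 0, 0, 1]:     # processing revisits the same few items
    cache.read(address)
print(cache.hits, cache.misses, memory.accesses)  # prints: 3 2 2
```

Even in this toy run, five reads cost only two trips to slow memory, because the last three reads find their data already copied into the cache.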
The idea of main-memory cache is that the first time a piece of
data is needed for processing, that information must come from main
memory, but a copy is placed in cache. When the same information