contention and improve performance. One design, shown in Fig. 2-8(b), gives
each processor some local memory of its own, not accessible to the others. This
memory can be used for program code and those data items that need not be shar-
ed. Access to this private memory does not use the main bus, greatly reducing bus
traffic. Other schemes (e.g., caching—see below) are also possible.
Multiprocessors have the advantage over other kinds of parallel computers that
the programming model of a single shared memory is easy to work with. For ex-
ample, imagine a program looking for cancer cells in a photograph of some tissue
taken through a microscope. The digitized photograph could be kept in the com-
mon memory, with each processor assigned some region of the photograph to hunt
in. Since each processor has access to the entire memory, studying a cell that starts
in its assigned region but straddles the boundary into the next region is no problem.
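As a rough illustration (not taken from any particular system), the sketch below uses POSIX threads to show the idea: the whole photograph sits in one shared array, each thread is handed a band of rows, and nothing special is needed for a thread to look at pixels belonging to a neighboring band. The image size, thread count, and the is_suspicious() test are assumptions made up for the example.

/* Minimal shared-memory sketch: several threads scan assigned strips of
 * one image kept in common memory.  Any thread may read outside its own
 * strip, because the whole image is visible to all of them. */
#include <pthread.h>
#include <stdio.h>

#define WIDTH    1024
#define HEIGHT   1024
#define NTHREADS    4

static unsigned char image[HEIGHT][WIDTH];   /* shared by all threads */

static int is_suspicious(int y, int x)
{
    return image[y][x] > 200;                /* placeholder test */
}

static void *scan_region(void *arg)
{
    long id = (long) arg;
    int rows = HEIGHT / NTHREADS;
    int first = id * rows, last = first + rows;
    long hits = 0;

    for (int y = first; y < last; y++)
        for (int x = 0; x < WIDTH; x++)
            if (is_suspicious(y, x))
                hits++;          /* neighboring rows owned by another
                                    thread are just as accessible */
    printf("thread %ld: %ld suspicious pixels\n", id, hits);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];

    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, scan_region, (void *) i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return 0;
}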
Multicomputers
Although multiprocessors with a modest number of processors (&lt;= 256) are
relatively easy to build, large ones are surprisingly difficult to construct. The
difficulty is in connecting so many processors to the memory. To get around these
problems, many designers have simply abandoned the idea of having a shared
memory and just build systems consisting of large numbers of interconnected com-
puters, each having its own private memory, but no common memory. These sys-
tems are called multicomputers. The CPUs in a multicomputer are said to be
loosely coupled, to contrast them with the tightly coupled multiprocessor CPUs.
The CPUs in a multicomputer communicate by sending each other messages,
something like email, but much faster. For large systems, having every computer
connected to every other computer is impractical, so topologies such as 2D and 3D
grids, trees, and rings are used. As a result, messages from one computer to anoth-
er often must pass through one or more intermediate computers or switches to get
from the source to the destination. Nevertheless, message-passing times on the
order of a few microseconds can be achieved without much difficulty. Multicom-
puters with over 250,000 CPUs, such as IBM's Blue Gene/P, have been built.
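To make the contrast with shared memory concrete, here is a minimal message-passing sketch written against the MPI interface (an assumption for illustration; the book does not show code here). Node 0 sends a short message to node 1 over the interconnect; the two nodes share no memory at all.

/* Minimal message-passing sketch using MPI.
 * Run with at least two ranks, e.g.:  mpirun -np 2 ./a.out */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank;
    char buf[64];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        strcpy(buf, "hello from node 0");
        MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("node 1 received: %s\n", buf);
    }

    MPI_Finalize();
    return 0;
}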
Since multiprocessors are easier to program and multicomputers are easier to
build, there is much research on designing hybrid systems that combine the good
properties of each. Such computers try to present the illusion of shared memory
without going to the expense of actually constructing it. We will go into multi-
processors and multicomputers in detail in Chap. 8.
2.2 PRIMARY MEMORY
The memory is that part of the computer where programs and data are stored.
Some computer scientists (especially British ones) use the term store or storage
rather than memory, although more and more, the term "storage" is used to refer