these views were premature; in fact, during the period of 1986-2003, uniprocessor performance growth, driven by the microprocessor, was at its highest rate since the first transistorized computers in the late 1950s and early 1960s.
Nonetheless, the importance of multiprocessors was growing throughout the 1990s as designers sought a way to build servers and supercomputers that achieved higher performance than a single microprocessor, while exploiting the tremendous cost-performance advantages of commodity microprocessors. As we discussed in Chapters 1 and 3, the slowdown in uniprocessor performance arising from diminishing returns in exploiting instruction-level parallelism (ILP), combined with growing concern over power, is leading to a new era in computer architecture: an era where multiprocessors play a major role from the low end to the high end. The second quotation captures this clear inflection point.
This increased importance of multiprocessing reflects several major factors:
■ The dramatically lower efficiencies in silicon and energy use that were encountered between 2000 and 2005 as designers attempted to find and exploit more ILP, which turned out to be inefficient, since power and silicon costs grew faster than performance. Other than ILP, the only scalable and general-purpose way we know how to increase performance faster than the basic technology allows (from a switching perspective) is through multiprocessing.
■ A growing interest in high-end servers as cloud computing and software-as-a-service be-
come more important.
■ A growth in data-intensive applications driven by the availability of massive amounts of
data on the Internet.
■ The insight that increasing performance on the desktop is less important (outside of graph-
ics, at least), either because current performance is acceptable or because highly compute-
and data-intensive applications are being done in the cloud.
■ An improved understanding of how to use multiprocessors effectively, especially in server environments where significant parallelism arises naturally, whether from large datasets, from the inherent parallelism of scientific codes, or from parallelism among large numbers of independent requests (request-level parallelism).
■ The advantages of leveraging a design investment by replication rather than unique design;
all multiprocessor designs provide such leverage.
In this chapter, we focus on exploiting thread-level parallelism (TLP). TLP implies the existence of multiple program counters and hence is exploited primarily through MIMDs. Although MIMDs have been around for decades, the movement of thread-level parallelism to the forefront across the range of computing, from embedded applications to high-end servers, is relatively recent. Likewise, the extensive use of thread-level parallelism for general-purpose applications, versus scientific applications, is relatively new.
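To make the MIMD idea concrete, the following minimal C sketch uses POSIX threads; the function names (producer, logger) and the shared variable are our own illustrative choices, not from the text. The point is simply that each thread has its own program counter, here even running a different instruction stream, while both share one address space.

/* MIMD sketch: two threads, two program counters, two distinct
 * instruction streams, one shared address space.
 * Compile with: cc mimd_sketch.c -o mimd_sketch -lpthread
 */
#include <pthread.h>
#include <stdio.h>

static void *producer(void *arg) {        /* instruction stream 1 */
    int *slot = (int *)arg;
    *slot = 42;                           /* write through shared memory */
    return NULL;
}

static void *logger(void *arg) {          /* instruction stream 2 */
    (void)arg;
    printf("logger thread running its own code\n");
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int shared_slot = 0;

    pthread_create(&t1, NULL, producer, &shared_slot);
    pthread_create(&t2, NULL, logger, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("value produced: %d\n", shared_slot);  /* prints 42 */
    return 0;
}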
Our focus in this chapter is on multiprocessors, which we define as computers consisting of tightly coupled processors whose coordination and usage are typically controlled by a single operating system and that share memory through a shared address space. Such systems exploit thread-level parallelism through two different software models. The first is the execution of a tightly coupled set of threads collaborating on a single task, which is typically called parallel processing. The second is the execution of multiple, relatively independent processes that may originate from one or more users, which is a form of request-level parallelism, although at a much smaller scale than what we explore in the next chapter. Request-level parallelism may be exploited by a single application running on multiple processors, such as a database responding to queries, or multiple applications running independently, often called multiprogramming.
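The parallel processing model can be sketched in a few lines of C. In the following fork-join example, POSIX threads collaborate on the single task of summing a shared array; the thread count, slice structure, and function names are assumptions of ours, not part of the text.

/* Parallel processing sketch: tightly coupled threads cooperate on
 * one task (summing an array) through a shared address space.
 * Compile with: cc parallel_sum.c -o parallel_sum -lpthread
 */
#include <pthread.h>
#include <stdio.h>

#define N        1000000      /* array size; divisible by NTHREADS */
#define NTHREADS 4

static long data[N];          /* shared by all threads */

struct slice { int begin, end; long partial; };

/* Each thread sums its own slice of the shared array. */
static void *sum_slice(void *arg) {
    struct slice *s = (struct slice *)arg;
    s->partial = 0;
    for (int i = s->begin; i < s->end; i++)
        s->partial += data[i];
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    struct slice work[NTHREADS];
    long total = 0;

    for (int i = 0; i < N; i++) data[i] = 1;

    /* Fork: one thread per slice of the single task. */
    for (int t = 0; t < NTHREADS; t++) {
        work[t].begin = t * (N / NTHREADS);
        work[t].end   = (t + 1) * (N / NTHREADS);
        pthread_create(&tid[t], NULL, sum_slice, &work[t]);
    }
    /* Join: combine the per-thread partial sums. */
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += work[t].partial;
    }
    printf("total = %ld\n", total);   /* expect 1000000 */
    return 0;
}

Request-level parallelism, by contrast, involves no such cooperation: each thread or process serves its own independent request, as in a database handling separate queries.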