Third, continuing improvement of semiconductor manufacturing as predicted by Moore's
law has led to the dominance of microprocessor-based computers across the entire range of
computer design. Minicomputers, which were traditionally made from off-the-shelf logic or
from gate arrays, were replaced by servers made using microprocessors. Even mainframe
computers and high-performance supercomputers are all collections of microprocessors.
The hardware innovations above led to a renaissance in computer design, which emphasized both architectural innovation and efficient use of technology improvements. This rate of growth has compounded so that by 2003, high-performance microprocessors were 7.5 times faster than what would have been obtained by relying solely on technology, including improved circuit design; that is, 52% per year versus 35% per year.
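As a quick sanity check, and assuming the roughly 17-year span of this renaissance discussed below (see Figure 1.1), the two annual growth rates compound to about the quoted factor:

\[
\left(\frac{1.52}{1.35}\right)^{17} \approx 1.126^{17} \approx 7.5
\]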
This hardware renaissance led to the fourth impact, which is on software development.
The 25,000-fold performance improvement since 1978 (see Figure 1.1) allows programmers today to trade performance for productivity. In place of performance-oriented languages like
C and C++, much more programming today is done in managed programming languages like
Java and C#. Moreover, scripting languages like Python and Ruby, which are even more productive, are gaining in popularity along with programming frameworks like Ruby on Rails.
To maintain productivity and try to close the performance gap, interpreters with just-in-time
compilers and trace-based compiling are replacing the traditional compiler and linker of the
past. Software deployment is changing as well, with Software as a Service (SaaS) used over the Internet replacing shrink-wrapped software that must be installed and run on a local computer.
The nature of applications is also changing. Speech, sound, images, and video are becoming increasingly important, along with predictable response time that is so critical to the user experience. An inspiring example is Google Goggles. This application lets you hold up your cell
phone to point its camera at an object, and the image is sent wirelessly over the Internet to
a warehouse-scale computer that recognizes the object and tells you interesting information
about it. It might translate text on the object to another language; read the bar code on a book
cover to tell you if a book is available online and its price; or, if you pan the phone camera, tell
you what businesses are nearby along with their websites, phone numbers, and directions.
Alas, Figure 1.1 also shows that this 17-year hardware renaissance is over. Since 2003, single-
processor performance improvement has dropped to less than 22% per year due to the twin
hurdles of maximum power dissipation of air-cooled chips and the lack of more instruction-
level parallelism to exploit efficiently. Indeed, in 2004 Intel canceled its high-performance uniprocessor projects and joined others in declaring that the road to higher performance would
be via multiple processors per chip rather than via faster uniprocessors.
This milestone signals a historic switch from relying solely on instruction-level parallelism
(ILP), the primary focus of the first three editions of this text, to data-level parallelism (DLP)
and thread-level parallelism (TLP), which were featured in the fourth edition and expanded
in this edition. This edition also adds warehouse-scale computers and request-level parallelism
(RLP). Whereas the compiler and hardware conspire to exploit ILP implicitly without the programmer's attention, DLP, TLP, and RLP are explicitly parallel, requiring the restructuring
of the application so that it can exploit explicit parallelism. In some instances, this is easy; in
many, it is a major new burden for programmers.
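For a flavor of what "explicitly parallel" means in practice, consider the minimal sketch below. It is a hypothetical example in C using POSIX threads (the text does not prescribe any particular API, and the names and sizes here are invented for illustration): a sequential array sum is restructured so that each thread handles an explicit slice of the data. Dividing the work and combining the partial results is the programmer's job, in contrast to ILP, which the compiler and hardware extract from the unmodified sequential loop.

    /* Minimal sketch: restructuring a sequential sum for thread-level
     * parallelism (TLP) with POSIX threads.  Illustrative only; the
     * array size and thread count are arbitrary.
     */
    #include <pthread.h>
    #include <stdio.h>

    #define N        (1 << 20)   /* number of elements                */
    #define NTHREADS 4           /* explicit degree of parallelism    */

    static double data[N];

    struct slice { int begin, end; double partial; };

    static void *sum_slice(void *arg)
    {
        struct slice *s = arg;
        s->partial = 0.0;
        for (int i = s->begin; i < s->end; i++)
            s->partial += data[i];      /* each thread sums its own slice */
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            data[i] = 1.0;              /* dummy data */

        pthread_t tid[NTHREADS];
        struct slice s[NTHREADS];
        int chunk = N / NTHREADS;

        /* The programmer, not the hardware, decides how the work is split. */
        for (int t = 0; t < NTHREADS; t++) {
            s[t].begin = t * chunk;
            s[t].end   = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
            pthread_create(&tid[t], NULL, sum_slice, &s[t]);
        }

        double total = 0.0;
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += s[t].partial;      /* combine the partial sums */
        }
        printf("sum = %f\n", total);
        return 0;
    }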
This text is about the architectural ideas and accompanying compiler improvements that
made the incredible growth rate possible in the last century, the reasons for the dramatic
change, and the challenges and initial promising approaches to architectural ideas, compilers,
and interpreters for the 21st century. At the core is a quantitative approach to computer design
and analysis that uses empirical observations of programs, experimentation, and simulation as its tools.