In servers and data centers, energy and thermal issues are important on an aggregate scale. Recent internet data centers are estimated to draw 50 MW or more of aggregate power [175]. At this scale, reducing processor power has high leverage; a rough rule of thumb is that 1 W saved in the processor translates into roughly an additional 1 W saved in power-supply losses and another 1 W saved in reduced cooling requirements. A recent report by the Boyd Co. indicates that, within the United States, even the least expensive data center sites incur annual operating costs of roughly $10M for a data center of 75 employees, and that electricity plays an increasingly major role in data center siting costs [71]. For example, HSBC's decision to build a large data center near Buffalo, NY is said to have been strongly influenced by a New York State incentive package that included 11 MW of inexpensive hydroelectric power. Likewise, Google, Microsoft, and Yahoo are all said to be building large data centers along the Columbia River in Washington and Oregon for proximity to inexpensive electricity [203].
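As a back-of-the-envelope illustration of the rule of thumb above (the per-processor savings and processor count here are hypothetical, chosen only to indicate scale):
\[
P_{\text{saved,total}} \approx P_{\text{saved,proc}} \times (1 + 1 + 1) = 3\,P_{\text{saved,proc}},
\]
so saving 10 W in each of 100,000 processors would trim aggregate draw by roughly 3 MW, about 6% of a 50 MW facility.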
Among researchers, circuits and VLSI specialists focused on the power problem much earlier than architects. This is no surprise, since people working in circuits and VLSI encountered chip power budgets well before architects did. They also have more direct tools for analyzing power issues (albeit late in the design timeline) and direct circuit techniques for addressing some of them. While architects confronted power problems later than these “lower” hardware fields, their advantage is leverage: addressing power issues early and holistically in the design process has the potential for better and more adaptable power-performance tradeoffs.
By the late 1990s, power was universally recognized by architects and chip developers
as a first-class constraint in computer systems design. Today, power cannot be ignored in any
new microarchitectural proposal. At the very least, a microarchitectural idea that promises to
increase performance must justify not only its cost in chip area but also its cost in power. Thus,
much of the research described in this book was proposed within the last ten years.
1.4 THIS BOOK
The target readers of this book are engineers or researchers who are fairly fluent in computer architecture concepts, but who want to build their understanding of how power-aware design influences architectures. We envision computer architecture graduate students or advanced undergraduates, as well as industry engineers. We write without assuming detailed knowledge of transistors or circuits beyond the basics of CMOS gate structures.
In addition to offering background on how and why power trends arise, we also see the book as a compendium of basic strategies in power-aware design. While no book of this length could enumerate all possible power-saving techniques, we try to include the most fundamental ones known to the field as we write this in the summer of 2007.