Thus, increasing the number of transistors increases power even if they are idle, and leakage current increases in processors with smaller transistor sizes. As a result, very low power systems are even turning off the power supply (power gating) to inactive modules to control loss due to leakage. In 2011, the goal for leakage is 25% of the total power consumption, with leakage in high-performance designs sometimes far exceeding that goal. Leakage can be as high as 50% for such chips, in part because of the large SRAM caches that need power to maintain the storage values. (The S in SRAM is for static.) The only hope to stop leakage is to turn off power to subsets of the chips.
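As a rough illustration of why power gating matters, the following minimal sketch (in Python, with entirely hypothetical wattages chosen to mirror the 25% goal and 50% worst case above) estimates the leakage fraction of a chip before and after gating off a share of its idle modules:

    # Hypothetical power budget; all numbers are invented for illustration.
    dynamic_power_w = 50.0   # switching power of the active logic
    leakage_power_w = 50.0   # static (leakage) power: 50% of total here

    total_w = dynamic_power_w + leakage_power_w
    print(f"leakage fraction, no gating: {leakage_power_w / total_w:.0%}")

    # Power-gate inactive modules: assume 60% of the leaking area
    # (idle cores, unused SRAM banks) can be cut off from the supply.
    gated_fraction = 0.6
    leakage_after_w = leakage_power_w * (1 - gated_fraction)
    total_after_w = dynamic_power_w + leakage_after_w
    print(f"leakage fraction, with gating: {leakage_after_w / total_after_w:.0%}")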
Finally, because the processor is just a portion of the whole energy cost of a system, it can make sense to use a faster, less energy-efficient processor to allow the rest of the system to go into a sleep mode. This strategy is known as race-to-halt.
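A minimal sketch of the race-to-halt trade-off, with invented power and timing figures: the fast core draws more power while running, but finishing early lets the whole platform drop into sleep mode, so its total energy over the window is lower.

    # All figures below are hypothetical, for illustration only.
    period_s = 10.0        # window in which the task must complete
    system_awake_w = 2.0   # rest-of-system power while awake
    system_sleep_w = 0.2   # whole-platform power in sleep mode

    def system_energy(cpu_power_w, task_time_s):
        # Run the task, then sleep for the rest of the window.
        active_j = (cpu_power_w + system_awake_w) * task_time_s
        sleep_j = system_sleep_w * (period_s - task_time_s)
        return active_j + sleep_j

    print("slow, efficient core:", system_energy(1.0, 8.0), "J")  # 24.4 J
    print("fast, hungry core:   ", system_energy(4.0, 2.0), "J")  # 13.6 J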
The importance of power and energy has increased the scrutiny on the efficiency of an innovation, so the primary evaluation now is tasks per joule or performance per watt as opposed to performance per mm² of silicon. This new metric affects approaches to parallelism, as we shall see in Chapters 4 and 5.
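Tasks per joule and performance per watt are in fact the same quantity, since performance is tasks per second and a watt is a joule per second. A short sketch with made-up workload numbers makes the equivalence concrete:

    # Hypothetical workload measurements.
    tasks = 1_000_000
    run_time_s = 100.0
    avg_power_w = 25.0

    energy_j = avg_power_w * run_time_s
    tasks_per_joule = tasks / energy_j
    perf_per_watt = (tasks / run_time_s) / avg_power_w  # (tasks/s) per watt

    print(tasks_per_joule)  # 400.0
    print(perf_per_watt)    # 400.0: identical, as the units predict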
1.6 Trends in Cost
Although costs tend to be less important in some computer designs—specifically supercomputers—cost-sensitive designs are of growing significance. Indeed, in the past 30 years, the use of technology improvements to lower cost, as well as increase performance, has been a major theme in the computer industry.
Textbooks often ignore the cost half of cost-performance because costs change, thereby dating topics, and because the issues are subtle and differ across industry segments. Yet, an understanding of cost and its factors is essential for computer architects to make intelligent decisions about whether or not a new feature should be included in designs where cost is an issue. (Imagine architects designing skyscrapers without any information on costs of steel beams and concrete!)
This section discusses the major factors that influence the cost of a computer and how these
factors are changing over time.
The Impact of Time, Volume, and Commoditization
The cost of a manufactured computer component decreases over time even without major improvements in the basic implementation technology. The underlying principle that drives costs down is the learning curve—manufacturing costs decrease over time. The learning curve itself is best measured by change in yield—the percentage of manufactured devices that survives the testing procedure. Whether it is a chip, a board, or a system, designs that have twice the yield will have half the cost.
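The claim that twice the yield means half the cost follows directly from the arithmetic of wafer processing: the wafer cost is fixed, so the cost of a good die is the wafer cost divided by the number of dies that pass testing. A minimal back-of-envelope sketch, with an invented wafer cost and die count:

    # Hypothetical wafer figures, for illustration only.
    wafer_cost = 5000.0    # dollars per processed wafer
    dies_per_wafer = 400   # candidate dies on the wafer

    def cost_per_good_die(die_yield):
        # Fixed wafer cost spread over the dies that pass testing.
        return wafer_cost / (dies_per_wafer * die_yield)

    print(cost_per_good_die(0.25))  # 50.0
    print(cost_per_good_die(0.50))  # 25.0: twice the yield, half the cost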
Understanding how the learning curve improves yield is critical to projecting costs over a
product's life. One example is that the price per megabyte of DRAM has dropped over the
long term. Since DRAMs tend to be priced in close relationship to cost—with the exception of
periods when there is a shortage or an oversupply—price and cost of DRAM track closely.
Microprocessor prices also drop over time, but, because they are less standardized than DRAMs, the relationship between price and cost is more complex. In a period of significant competition, price tends to track cost closely, although microprocessor vendors probably rarely sell at a loss.
 