Based on the current interval, PAST estimates the number of cycles that the processor will be busy in the next interval. If the processor, because of its speed setting, misses the deadline to complete its work in the current interval, the unfinished work spills over to the next interval. If, on the other hand, the processor completes its work before the end of the quantum, the remaining idle time is taken into account when setting the speed for the next interval. The speed-setting policy raises the speed if the current interval was more busy than idle and lowers it if idle time exceeds some percentage of the quantum. These comparisons (busy versus idle, as fractions of the quantum) rely on empirically derived parameters chosen so that transitions between high and low frequencies are smooth.
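The policy above can be sketched in a few lines. This is a minimal illustration, not the original implementation: the threshold values (0.7 busy, 0.5 idle), the step size, and the speed bounds are placeholders standing in for the empirically derived parameters mentioned above.

```python
# Sketch of a PAST-style interval speed policy. Thresholds and step sizes
# are illustrative placeholders, not the empirically derived parameters
# of the original work.

def past_speed(speed, busy_cycles, idle_cycles, spilled_cycles,
               busy_hi=0.7, idle_lo=0.5, step=0.2,
               min_speed=0.2, max_speed=1.0):
    """Return the speed (fraction of max frequency) for the next interval.

    busy_cycles / idle_cycles: measured in the interval just ended.
    spilled_cycles: work left unfinished because of the speed setting;
    it counts as busy time the processor must absorb next interval.
    """
    interval = busy_cycles + idle_cycles
    run_fraction = min(1.0, (busy_cycles + spilled_cycles) / interval)
    if run_fraction > busy_hi:
        # Interval was mostly busy (or work spilled over): raise speed.
        speed = min(max_speed, speed + step)
    elif run_fraction < idle_lo:
        # Idle time exceeded the threshold: lower speed, gradually,
        # in proportion to how idle the interval was.
        speed = max(min_speed, speed - step * (idle_lo - run_fraction) / idle_lo)
    return speed
```

For example, a mostly busy interval (`past_speed(0.5, 800, 200, 0)`) raises the speed by one step, while a mostly idle one lowers it; spilled-over cycles count toward the busy fraction, so a backlog also pushes the speed up.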
Weiser et al. examine several voltage minima and several interval sizes in relation to the three algorithms. PAST tends to fall behind when a light-load interval is followed by a heavy-load interval: unfinished work spills over to subsequent intervals, causing the speed to vary more from interval to interval until PAST manages to catch up. Because of this, it is less power-efficient than either OPT or FUTURE.
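This lag is easy to reproduce with a toy trace. The workload and thresholds below are hypothetical, chosen only to show the mechanism: after a light interval the policy has slowed down, so the first heavy interval overflows and work spills over until the speed ramps back up.

```python
# Toy trace of a PAST-like policy on a light-then-heavy workload
# (hypothetical demands and thresholds, not the original setup).

INTERVAL = 1000  # quantum length, in full-speed cycles

def step(speed, demand, spill):
    """Run one interval at `speed`; return (next_speed, remaining_spill)."""
    capacity = speed * INTERVAL        # cycles achievable this interval
    pending = demand + spill           # new work plus carried-over work
    done = min(pending, capacity)
    spill = pending - done             # unfinished work spills over
    busy_frac = done / capacity        # fraction of the quantum spent busy
    if spill > 0 or busy_frac > 0.7:   # behind, or mostly busy: speed up
        speed = min(1.0, speed + 0.2)
    elif busy_frac < 0.5:              # mostly idle: slow down
        speed = max(0.2, speed - 0.2)
    return speed, spill

speed, spill = 1.0, 0.0
for demand in [100, 900, 900, 900]:    # one light interval, then heavy load
    speed, spill = step(speed, demand, spill)
    print(f"speed={speed:.1f} spill={spill:.0f}")
```

The light interval drops the speed; the next heavy interval then exceeds capacity and spills work over, and only afterwards does the policy recover full speed, which is exactly the catch-up behavior that costs PAST efficiency relative to OPT and FUTURE.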
In general, there is a trade-off, dependent on the interval size, between the number of missed deadlines and the energy savings. The smaller the interval, the fewer the missed deadlines, because speed can be adjusted at a finer time resolution; but the energy savings are smaller because of frequent switching between high and low speeds. In contrast, with large intervals better energy savings can be achieved, but at the expense of more missed deadlines, more spilled-over work, and, as a result, degraded response time for the workload. Regarding actual results, Weiser et al. conclude that, for their setup, the optimal interval size ranges between 20 and 30 ms, yielding power savings between 5% and 75%.
3.2.2 Discovering and Exploiting Deadlines
Whereas the DVFS techniques of Weiser et al. are based on the idle time as seen by the
operating system (OS) (e.g., the idle loop), Flautner, Reinhardt, and Mudge look into the more
general problem of how to reduce frequency and voltage without missing deadlines [78]. Their
technique targets general-purpose systems that run interactive workloads.
What do “deadlines” mean in this context? In the area of real-time systems, the notion
of a deadline is well defined. Hard real-time systems have fixed, known deadlines that have
to be respected at all times. Since most real-time systems are embedded systems with a well-understood workload, they can be designed (scheduled) to operate at an optimal frequency
and voltage, consuming minimum energy while meeting all deadlines. An example would be a
mobile handset running voice codecs. If the real-time workload is not mixed with non-real-time
applications, then DVFS controlled by an on-line policy is probably not necessary—scheduling
can be determined off-line.