these run-time situations are clustered into a few dominant application scenarios
and a backup scenario. At run time, the actual scenario is detected with a simple
detector, and the application is executed with the configuration decided for that
scenario at design time.
• The task scheduling techniques of [34, 35] schedule one task at a time, giving each task the entire platform; this leads to inefficient platform usage. The techniques in [31, 37] allow tasks to execute in parallel while sharing the platform resources, but they assume that all tasks start at the same time. This assumption can leave platform processors idle until the next RRM call. These techniques are extended in [24], which allows overlapped sharing of the platform resources: the latter technique takes into account the task start times and the periodic behavior present in applications such as frame processing in video decoding or packet processing in wireless applications.
• RRM for multi-core embedded platforms [16, 17] also exploits the number of available cores, in addition to voltage and frequency. Different parallelized versions of a single application are used to trade off the available platform resources against performance and power consumption.
• Adding parallelism to the set of platform parameters significantly increases the design space of operating modes, so innovative and efficient RRM techniques are needed to extend the traditional approaches to power consumption optimization. Recent studies address the problem by modeling it as a Multi-dimensional Multiple-choice Knapsack Problem (MMKP) and solving it through dedicated heuristics [31, 37]; a minimal sketch of this formulation is given after this list. Another approach [22] proposes a run-time management technique for task-level parallelism that optimizes performance under a power consumption budget.
• Advanced technologies such as sub-45 nm CMOS and 3D integration are known for their increased number of reliability failure mechanisms. At the same time, classical reliability-aware approaches are no longer viable, since they rely on ad-hoc or worst-case solutions that incur a significant cost penalty. In [26], the state of the art in reliability management techniques is summarized and a new proactive energy management approach is proposed that handles both temperature and lifetime at run time.
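To make the MMKP view of the problem concrete, the following C++ sketch shows a minimal greedy heuristic: each application contributes one group of candidate operating points, each point has a value (e.g., delivered QoS) and a multi-dimensional resource demand (here cores and power), and exactly one point per group must be selected within the platform budget. All names, data structures, and numbers are illustrative assumptions; the dedicated heuristics of [31, 37] are considerably more sophisticated.

```cpp
#include <array>
#include <cstdio>
#include <vector>

// One candidate operating point of an application: its value (e.g., QoS)
// and its demand along each resource dimension (here: cores, power in mW).
struct Point {
    double value;
    std::array<double, 2> demand;
};

// MMKP instance: one group of candidate points per application; exactly one
// point per group must be chosen within the multi-dimensional budget.
using Group = std::vector<Point>;

// Illustrative greedy heuristic: start from each group's cheapest point
// (assumed to be index 0, with points ordered by increasing demand), then
// repeatedly apply the single upgrade with the best value gain per unit of
// extra resource demand that still fits into the remaining budget.
std::vector<int> greedy_mmkp(const std::vector<Group>& groups,
                             std::array<double, 2> budget) {
    std::vector<int> pick(groups.size(), 0);
    for (size_t g = 0; g < groups.size(); ++g)
        for (size_t d = 0; d < budget.size(); ++d)
            budget[d] -= groups[g][0].demand[d];   // initial picks assumed to fit

    for (;;) {
        int best_g = -1, best_p = -1;
        double best_ratio = 0.0;
        for (size_t g = 0; g < groups.size(); ++g) {
            const Point& cur = groups[g][pick[g]];
            for (size_t p = pick[g] + 1; p < groups[g].size(); ++p) {
                const Point& cand = groups[g][p];
                double gain = cand.value - cur.value;
                double extra = 0.0;
                bool fits = true;
                for (size_t d = 0; d < budget.size(); ++d) {
                    double delta = cand.demand[d] - cur.demand[d];
                    extra += delta;
                    if (delta > budget[d]) fits = false;
                }
                if (!fits || gain <= 0.0) continue;
                double ratio = gain / (extra > 0.0 ? extra : 1e-9);
                if (ratio > best_ratio) {
                    best_ratio = ratio;
                    best_g = static_cast<int>(g);
                    best_p = static_cast<int>(p);
                }
            }
        }
        if (best_g < 0) break;                     // no affordable upgrade left
        for (size_t d = 0; d < budget.size(); ++d)
            budget[d] -= groups[best_g][best_p].demand[d] -
                         groups[best_g][pick[best_g]].demand[d];
        pick[best_g] = best_p;
    }
    return pick;
}

int main() {
    // Two applications, two candidate points each; demand = {cores, power_mW}.
    std::vector<Group> apps = {
        {{1.0, {1, 200}}, {2.5, {2, 450}}},
        {{1.2, {1, 250}}, {3.0, {3, 700}}},
    };
    std::vector<int> sel = greedy_mmkp(apps, {4.0, 900.0});  // budget: 4 cores, 900 mW
    std::printf("app0 -> point %d, app1 -> point %d\n", sel[0], sel[1]);
}
```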
To allow the integration and collaboration of all these complementary techniques, an RRM framework providing the most relevant generic services has been developed in [40].
This chapter describes a tool flow (shown in Fig. 5.1) that combines a design-time exploration with a lightweight run-time manager for embedded multi-core platforms [21]. The run-time manager leverages a set of pre-determined run-time configurations (or operating points) identified at design time (see Fig. 5.1) by analyzing, through an architecture simulator, the impact of the architecture's run-time parameters on the QoS. Each operating point captures the metrics that designers wish to optimize (e.g., power consumption, throughput) together with the resource usage associated with that configuration of the run-time parameters of the hardware/software infrastructure. The overall goal of the run-time manager is to make a reasonable assignment of the run-time parameters to optimize these metrics while meeting the application's QoS requirements with the available platform resources.
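As a concrete illustration of such operating points and of a lightweight run-time selection, the C++ sketch below stores, for each point, the metrics to optimize and the resources it claims, and picks the lowest-power point that still meets the required throughput with the currently free cores. The structure, values, and selection policy are assumptions made for illustration and are not the actual data structures of the tool flow in [21].

```cpp
#include <cstdio>
#include <vector>

// One operating point exported by the design-time exploration:
// the metrics the designer wants to optimize (power, throughput)
// and the platform resources claimed by this configuration.
struct OperatingPoint {
    double power_mw;        // estimated power consumption
    double throughput_fps;  // estimated throughput
    int    cores_used;      // platform resources claimed
};

// Lightweight run-time policy (illustrative): among the points that meet the
// required throughput and fit the currently free cores, pick the one with
// the lowest power consumption.
const OperatingPoint* select_point(const std::vector<OperatingPoint>& points,
                                   double required_fps, int free_cores) {
    const OperatingPoint* best = nullptr;
    for (const OperatingPoint& p : points) {
        if (p.throughput_fps < required_fps || p.cores_used > free_cores)
            continue;
        if (!best || p.power_mw < best->power_mw)
            best = &p;
    }
    return best;  // nullptr if no feasible point exists
}

int main() {
    // Hypothetical points produced by the design-time exploration.
    std::vector<OperatingPoint> pts = {
        {350.0, 22.0, 1},
        {520.0, 31.0, 2},
        {900.0, 58.0, 4},
    };
    if (const OperatingPoint* p = select_point(pts, 30.0, 2))
        std::printf("selected: %.0f mW, %.0f fps, %d cores\n",
                    p->power_mw, p->throughput_fps, p->cores_used);
}
```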