Digital Signal Processing Reference
• Self-timed Scheduling: Typical for applications modeled with dataflow MoCs. A self-timed schedule is close to a static one: once a static schedule is computed, the code blocks are ordered on the corresponding PEs, and synchronization primitives are inserted to ensure that the data for each computation are present. This kind of scheduling is used for SDF applications; for a more detailed discussion the reader is referred to [76].
• Quasi-static Scheduling: Used when control paths introduce a predictable time variation. In this approach, unbalanced control paths are balanced and a self-timed schedule is computed. Quasi-static scheduling for dynamically parameterized SDF graphs is explored in [13] (see also Chap. 14).
• Dynamic Scheduling: Used when the timing behavior of the application is difficult to predict and/or when the number of applications is not known in advance (as in general-purpose computing). The scheduling overhead is usually higher, but so is the average utilization of the processors in the platform. There are many dynamic scheduling policies: fair queue scheduling is common in general-purpose operating systems (OSs), whereas different flavors of priority-based scheduling, e.g., rate monotonic (RM) and earliest deadline first (EDF), are typically used in embedded systems with real-time constraints.
• Hybrid Scheduling: Refers to scheduling approaches in which several static or self-timed schedules are computed for a given application at compile time and switched dynamically at run time depending on the scenario [29]. This approach is applied to streaming multimedia applications and allows the application to adapt at run time, making it possible to save energy [58].
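To make the dynamic policies above concrete, the following is a minimal sketch (not taken from the chapter) of non-preemptive earliest-deadline-first dispatching on a single PE; the task-tuple format and task names are illustrative assumptions.

```python
import heapq

def edf_schedule(tasks):
    """Simulate non-preemptive EDF dispatching on one PE.

    tasks: list of (release_time, deadline, duration, name) tuples
           (an assumed format for this sketch).
    Returns the order in which the tasks are run to completion.
    """
    pending = sorted(tasks)      # ordered by release time
    ready = []                   # min-heap keyed by deadline
    time, order, i = 0, [], 0
    while i < len(pending) or ready:
        # admit every task released by the current time
        while i < len(pending) and pending[i][0] <= time:
            _, d, c, name = pending[i]
            heapq.heappush(ready, (d, c, name))
            i += 1
        if not ready:            # idle until the next release
            time = pending[i][0]
            continue
        # run the ready task with the earliest deadline to completion
        d, c, name = heapq.heappop(ready)
        time += c
        order.append(name)
    return order
```

For example, with tasks A (release 0, deadline 10), B (release 0, deadline 4), and C (release 1, deadline 7), the dispatcher runs B first despite A being released at the same time, because B's deadline is earlier. A preemptive variant, as used in real-time OSs, would additionally re-evaluate the choice whenever a new task is released.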
Virtually every MPSoC platform provides support for implementing mapping and scheduling. The support can be provided in software or in hardware and may restrict the policies that can be implemented. This has to be taken into account by the compiler, which needs to generate/synthesize appropriate code (see Sect. 2.5). Software support can be provided by a full-fledged OS or by lightweight microkernels. Several commercial OS vendors provide solutions for MPSoC platforms, but fully efficient support for highly heterogeneous MPSoCs remains a major challenge. To reduce the overhead introduced by software stacks, hardware support for mapping and scheduling has been proposed both in the HPC [32, 47] and in the embedded [19, 69] communities.
2.4.2 Computing a Schedule
Independently of the scheduling approach and how it is supported, the MPSoC compiler has to compute a schedule (or several of them). This problem is known to be NP-complete even for simple directed acyclic graphs (DAGs). Uni-processor compilers therefore employ heuristics, most of them derived from the classical list scheduling algorithm [36]. Computing a schedule for multiprocessor platforms is by no means simpler. The requirements and characteristics of the schedule depend on the underlying MoC with which the application was modeled. In this chapter we distinguish between applications modeled with centralized and with distributed control flow.
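The list scheduling idea mentioned above can be sketched for the multiprocessor case as follows. This is a hedged illustration, not the algorithm of [36]: the greedy longest-task-first priority, the dictionary-based task representation, and the assumption of identical PEs are all simplifications chosen for brevity.

```python
def list_schedule(num_pes, durations, deps):
    """Greedy list scheduling of a task DAG onto `num_pes` identical PEs.

    durations: {task: execution time}
    deps:      {task: set of predecessor tasks} (must form a DAG)
    Returns ({task: (pe, start_time)}, makespan).
    """
    finish = {}                    # task -> finish time
    pe_free = [0] * num_pes        # next free instant of each PE
    placement = {}
    remaining = set(durations)
    while remaining:
        # ready list: tasks whose predecessors have all finished
        ready = [t for t in remaining
                 if deps.get(t, set()).issubset(finish)]
        # one common priority function: longest task first
        ready.sort(key=lambda t: -durations[t])
        for t in ready:
            pe = min(range(num_pes), key=lambda p: pe_free[p])
            start = max([pe_free[pe]] +
                        [finish[p] for p in deps.get(t, set())])
            finish[t] = start + durations[t]
            pe_free[pe] = finish[t]
            placement[t] = (pe, start)
            remaining.discard(t)
    return placement, max(finish.values())
```

For instance, four tasks a, b, c, d with durations 2, 3, 1, 2, where c depends on a and d depends on a and b, fit on two PEs with a makespan of 5. Real MPSoC compilers refine this skeleton with communication costs, heterogeneous PE speeds, and more elaborate priority functions (e.g., critical-path length).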