Self-timed Scheduling: Typical for applications modeled with dataflow MoCs. A self-timed schedule is close to a static one. Once a static schedule is computed, the code blocks are ordered on the corresponding PEs, and synchronization primitives are inserted that ensure the presence of data for the computation. This kind of scheduling is commonly used for SDF applications.
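As a minimal sketch of the idea, the following Python snippet models two PEs as threads, each executing its code blocks in a statically computed order; a blocking bounded FIFO plays the role of the inserted synchronization primitive that guarantees data is present before a block fires. The actor names and token values are illustrative only.

```python
import queue
import threading

# Hypothetical two-PE self-timed schedule for a producer -> consumer SDF edge.
# The order of code blocks on each PE is fixed at compile time; the blocking
# FIFO supplies the run-time synchronization that ensures data presence.
fifo = queue.Queue(maxsize=4)  # bounded buffer between PE0 and PE1
results = []

def pe0():
    # PE0 executes its code blocks in the statically computed order
    for i in range(5):
        token = i * i          # actor A: produce a token
        fifo.put(token)        # blocks if the buffer is full

def pe1():
    # PE1 likewise follows its static order
    for _ in range(5):
        token = fifo.get()     # blocks until a token is available
        results.append(token + 1)  # actor B: consume and transform

t0, t1 = threading.Thread(target=pe0), threading.Thread(target=pe1)
t0.start(); t1.start()
t0.join(); t1.join()
print(results)  # [1, 2, 5, 10, 17]
```

Note that neither PE ever inspects a global clock: progress is driven purely by data availability, which is what distinguishes self-timed from fully static (time-triggered) execution.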
Quasi-static Scheduling: Used when control paths introduce a predictable time variation. In this approach, unbalanced control paths are balanced, and a self-timed schedule is computed for the result.
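The balancing step can be sketched as follows: the two branches of a conditional occupy different numbers of time slots, so the shorter branch is padded with no-ops until the latency of the block no longer depends on the branch taken. The operation names and slot counts are made up for illustration.

```python
# Hypothetical balancing of an unbalanced control path. After padding, the
# conditional has a data-independent latency, so a quasi-static schedule can
# treat it like straight-line code.
NOP = "nop"

def balance(branch_a, branch_b):
    """Pad the shorter branch so both occupy the same number of slots."""
    length = max(len(branch_a), len(branch_b))
    pad = lambda b: b + [NOP] * (length - len(b))
    return pad(branch_a), pad(branch_b)

then_branch = ["mul", "add", "store"]   # 3 slots
else_branch = ["add"]                   # 1 slot
then_b, else_b = balance(then_branch, else_branch)
print(else_b)  # ['add', 'nop', 'nop'] -- both branches now take 3 slots
```

The price of this predictability is wasted cycles on the shorter path, which is why the variation must be bounded and predictable for the approach to pay off.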
Dynamic Scheduling: Used when the timing behavior of the application is difficult to predict and/or when the number of applications is not known in advance (as in general-purpose computing). The scheduling overhead is usually higher, but so is the average utilization of the processors in the platform. There are many dynamic scheduling policies. Fair queue scheduling is common in general-purpose operating systems (OSs), whereas different flavors of priority-based scheduling are typically used in embedded systems with real-time constraints, e.g., rate monotonic (RM) and earliest deadline first (EDF).
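The core of EDF fits in a few lines: among the ready tasks, the one with the earliest absolute deadline runs next (RM differs only in that priorities are fixed by period rather than recomputed from deadlines). The task names and deadlines below are illustrative, not taken from any particular system.

```python
import heapq

# Minimal earliest-deadline-first (EDF) dispatch sketch. Each entry is
# (absolute deadline, task name); a heap yields the earliest deadline first.
ready = [(12, "video_decode"), (5, "audio_mix"), (9, "ui_update")]
heapq.heapify(ready)

# Dispatch order: the scheduler repeatedly pops the most urgent ready task.
order = [heapq.heappop(ready)[1] for _ in range(3)]
print(order)  # ['audio_mix', 'ui_update', 'video_decode']
```

The heap operations performed on every dispatch are exactly the kind of run-time overhead that static and self-timed approaches avoid.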
Hybrid Scheduling: Term used for scheduling approaches in which several static or self-timed schedules are computed for a given application at compile time, and one of them is selected at run time. This approach is applied to streaming multimedia applications and allows the system to adapt to changing execution conditions with little run-time overhead.
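A sketch of the run-time side, assuming hypothetical scenario names and schedules: the expensive scheduling computation happens at compile time, so the dispatcher reduces to a table lookup.

```python
# Hypothetical hybrid scheduling: one precomputed schedule per execution
# scenario. Each schedule is a list of (PE, code block) pairs; the names are
# illustrative only.
precomputed = {
    "low_bitrate":  [("PE0", "parse"), ("PE1", "decode"), ("PE0", "render")],
    "high_bitrate": [("PE0", "parse"), ("PE1", "decode"), ("PE2", "deblock"),
                     ("PE0", "render")],
}

def dispatch(scenario):
    # Run-time cost is a dictionary lookup, not a scheduling solver.
    return precomputed[scenario]

schedule = dispatch("high_bitrate")
print(len(schedule))  # 4
```

The design trade-off is memory for precomputed schedules versus the solver time and unpredictability of fully dynamic scheduling.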
Virtually every MPSoC platform provides support for implementing mapping and scheduling. The support can be provided in software or in hardware and might restrict the policies that can be implemented. This has to be taken into account by the compiler, which needs to generate/synthesize appropriate code. In software, support typically comes in the form of embedded operating systems or run-time kernels. Several commercial OS vendors provide solutions for MPSoC platforms, but fully efficient support for highly heterogeneous MPSoCs remains a major challenge. In order to reduce the overhead introduced by software stacks, scheduling support can also be implemented directly in hardware.
2.4.2 Computing a Schedule
Independent of the scheduling approach and of how it is supported, the MPSoC compiler has to compute a schedule (or several of them). This problem is known to be NP-complete even for simple directed acyclic graphs (DAGs). Uni-processor compilers therefore employ heuristics, most of them derived from list scheduling. Scheduling for MPSoC platforms is by no means simpler. The requirements and characteristics of the schedule depend on the underlying MoC with which the application was modeled.
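A list-scheduling heuristic for a DAG can be sketched as follows: repeatedly pick a ready task (all predecessors finished) according to some priority order and place it on the processor that becomes free first. The task graph, durations, and the alphabetical priority order are made up for illustration; real heuristics use more elaborate priorities (e.g., critical-path length).

```python
# Sketch of list scheduling on a small task DAG with two PEs.
durations = {"a": 2, "b": 3, "c": 2, "d": 1}
preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}

pe_free = [0, 0]   # time at which each PE becomes free
finish = {}        # task -> finish time
schedule = []      # (task, pe, start time)

remaining = set(durations)
while remaining:
    # ready tasks, picked in a fixed priority order (here: alphabetical)
    ready = sorted(t for t in remaining if all(p in finish for p in preds[t]))
    task = ready[0]
    pe = min(range(2), key=lambda i: pe_free[i])  # earliest-free PE
    # a task starts when its PE is free and all its predecessors are done
    start = max([pe_free[pe]] + [finish[p] for p in preds[task]])
    finish[task] = start + durations[task]
    pe_free[pe] = finish[task]
    schedule.append((task, pe, start))
    remaining.remove(task)

print(finish["d"])  # makespan of the schedule: 6
```

Because the problem is NP-complete, such heuristics trade optimality for polynomial run time; the quality of the result hinges almost entirely on the priority function used to order the ready list.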
In this chapter we distinguish between applications modeled with centralized and with distributed control flow.