to compute scheduling and mapping automatically to meet given constraints [16].
This applies not only to single applications but also to multiple-application
scenarios. Besides directly running the platform simulator, the partitioning/mapping
can also be exercised and refined with MVP (MAPS Virtual Platform [21]), a fast,
high-level SystemC-based simulation environment that has been put into practice
to evaluate different software settings in a multi-application scenario. After further
refinement, a code generation phase translates the tasks into C code for compilation
onto the respective PEs with their native compilers and OS primitives. The SW can
then be executed, depending on availability, either on the real HW or on a cycle-
approximate virtual platform incorporating instruction-set simulators. The MAPS
project has also developed a dedicated task-dispatching ASIP (OSIP, an operating
system ASIP [19]) in order to enable higher PE utilization through more fine-grained
tasks and lower context-switching overhead. Early evaluation case studies showed
that, in a typical MPSoC environment, the OSIP approach has great potential for
lowering task-switching overhead compared to an additional RISC processor
performing the scheduling. MAPS has been used and extended to support
programming the OSIP-based platform and to provide debugging support. The case
study in [17] demonstrated that MAPS is flexible and extensible enough to support
complicated heterogeneous MPSoCs with specialized APIs and configurations. It
also showed the productivity increase provided by MAPS: the MAPS compiler
makes it possible to test different configurations faster than coding each of them
by hand using the OSIP APIs, and the debugging facilities greatly simplify the
application development cycle as well.
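To make the code generation step more concrete, the sketch below shows roughly what a compiler-emitted, fine-grained task might look like on one PE. The task-management primitives (os_task_create, os_task_yield) and their host-side stubs are hypothetical placeholders standing in for the native OS primitives or OSIP APIs mentioned above; they are not the actual MAPS or OSIP interfaces.

/* Hypothetical sketch of compiler-generated task code for one PE.
 * os_task_create()/os_task_yield() stand in for the native OS primitives
 * or OSIP APIs; the stubs below only make the sketch self-contained. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

typedef void (*task_entry_t)(void *);

/* Stand-in for registering a task with the dispatcher (e.g. an OSIP-like
 * hardware scheduler); here it simply runs the task on the host. */
static void os_task_create(task_entry_t entry, void *arg, int priority)
{
    (void)priority;
    entry(arg);
}

/* Stand-in for a cheap, fine-grained context switch. */
static void os_task_yield(void) { /* no-op on the host */ }

/* One fine-grained task extracted from the application: a toy kernel that
 * scales a block of samples, yielding periodically to keep tasks short. */
static void scale_block_task(void *arg)
{
    int16_t *samples = (int16_t *)arg;
    for (size_t i = 0; i < 64; ++i) {
        samples[i] = (int16_t)((samples[i] * 3) >> 2);
        if ((i & 15u) == 15u)
            os_task_yield();
    }
}

int main(void)
{
    int16_t buf[64];
    for (size_t i = 0; i < 64; ++i)
        buf[i] = (int16_t)i;

    /* The code generator would emit one create call per mapped task. */
    os_task_create(scale_block_task, buf, /*priority=*/1);

    printf("buf[63] = %d\n", buf[63]);
    return 0;
}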
Summary
In this chapter we presented an overview of the challenges for building MPSoC
compilers and described some of the techniques, both established and emerging,
that are being used to leverage the computing power of current and upcoming
MPSoC platforms. The chapter concluded with selected academic and industrial
examples that show how the concepts are applied to real systems.
We have seen how new programming models are being proposed that change
the requirements of the MPSoC compiler. We discussed that, independent of the
programming model, an MPSoC compiler has to find a suitable granularity to
expose parallelism beyond instruction-level parallelism (ILP), which demands advanced analysis
of the data and control flows. Computing a schedule and a mapping of an application
is one of the most complex tasks of the MPSoC compiler and can only be achieved
successfully with accurate performance estimation or simulation. Most of these
analyses are target-specific; hence, the MPSoC itself needs to be abstracted and
fed to the compiler. With this information, the compiler can tune the different
optimizations to the target MPSoC and finally generate executable code.
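As a small illustration of this interplay, the sketch below shows a naive greedy mapping heuristic that assigns tasks to PEs using a table of per-PE cycle estimates. The cost numbers and the heuristic itself are purely illustrative assumptions; a real MPSoC compiler derives its cost model from an architecture description plus profiling or simulation, and typically uses far more sophisticated mapping algorithms.

/* Minimal sketch: greedy mapping of tasks onto PEs driven by a performance
 * estimate.  All numbers are made up for illustration. */
#include <stdio.h>

#define NUM_TASKS 4
#define NUM_PES   3

/* est_cycles[t][p]: estimated execution cycles of task t on PE p. */
static const unsigned est_cycles[NUM_TASKS][NUM_PES] = {
    { 1200,  400,  900 },
    {  300,  800,  350 },
    {  500,  450, 1500 },
    {  700,  700,  250 },
};

int main(void)
{
    unsigned pe_load[NUM_PES] = { 0 };

    /* Greedy list mapping: place each task on the PE that minimizes its
     * finish time, using the estimates as the cost model. */
    for (int t = 0; t < NUM_TASKS; ++t) {
        int best_pe = 0;
        unsigned best_finish = pe_load[0] + est_cycles[t][0];
        for (int p = 1; p < NUM_PES; ++p) {
            unsigned finish = pe_load[p] + est_cycles[t][p];
            if (finish < best_finish) {
                best_finish = finish;
                best_pe = p;
            }
        }
        pe_load[best_pe] = best_finish;
        printf("task %d -> PE %d (finish ~%u cycles)\n", t, best_pe, best_finish);
    }
    return 0;
}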
The whole flow shares similarities with that of a traditional uni-processor
compiler, but is much more complex in the case of an MPSoC. We have presented