problem is modeled as an interference graph in which two variables interfere if they are accessed in the same basic block and there is no dependence between the accesses. Each interference is weighted according to the number of potentially parallel memory accesses.
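To illustrate the general idea (the following sketch is ours, not the algorithm of the cited work: the weight matrix, the number of variables and the greedy heuristic are illustrative assumptions), such a weighted interference graph can drive a simple two-bank partitioning that tries to place heavily interfering variables in different banks:

/* Illustrative sketch only: a tiny weighted interference graph for
 * dual-bank assignment.  w[u][v] counts potentially parallel accesses
 * to variables u and v; the greedy pass places each variable in the
 * bank that maximizes the weight of edges cut by the partition, so
 * that interfering variables tend to end up in different banks. */
#include <stdio.h>

#define NVARS 4

/* Symmetric weight matrix: number of potentially parallel accesses. */
static int w[NVARS][NVARS] = {
    {0, 3, 1, 0},
    {3, 0, 0, 2},
    {1, 0, 0, 4},
    {0, 2, 4, 0},
};

int main(void)
{
    int bank[NVARS];

    for (int v = 0; v < NVARS; v++) {
        int gain[2] = {0, 0};   /* cut weight gained by choosing bank X or Y */
        for (int u = 0; u < v; u++)
            gain[1 - bank[u]] += w[u][v];   /* edge is cut if banks differ */
        bank[v] = (gain[0] >= gain[1]) ? 0 : 1;
    }

    for (int v = 0; v < NVARS; v++)
        printf("var %d -> bank %c\n", v, bank[v] ? 'Y' : 'X');
    return 0;
}

A greedy pass of this kind is only a heuristic: maximizing the weight of the edges cut by a two-bank partition is essentially a maximum-cut problem and therefore NP-hard in general, which motivates the exact formulations discussed next.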
More recently, a more accurate integer linear programming model for DSP memory assignment has been presented [17]. This model is considerably more complicated than the one previously presented in [27], but it provides larger improvements.
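To convey the flavour of such formulations (this is a generic sketch in our own notation, not the actual model of [17] or [27]), one can introduce a binary variable $x_v$ selecting the bank of variable $v$ and a binary variable $y_{uv}$ indicating that an interfering pair $(u,v)$ with weight $w_{uv}$ is split across the two banks:

\[
\max \sum_{(u,v)\in E} w_{uv}\, y_{uv}
\quad\text{s.t.}\quad
y_{uv} \le x_u + x_v,\quad
y_{uv} \le 2 - x_u - x_v,\quad
x_v,\, y_{uv} \in \{0,1\}.
\]

The two constraints ensure that $y_{uv}$ can only be 1 when $x_u \ne x_v$, so the objective counts exactly those potentially parallel accesses that are placed in different banks.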
Finally, a technique that operates at a higher level than the other methods is described in [43]. It performs memory assignment on the high-level IR, thus allowing the coloring method to be reused with each of the back-ends within the compiler. The problem is modeled as an independence graph, and the edge weights between variables take into account both the execution frequency and how close the two accesses are in the code.
Source-level transformations targeting DSP-C as an output language [35] and the use of machine-learning techniques [34] for dual memory bank assignment have been proposed in recent years.
3.5 Optimizations for Code Size
Most digital signal processors comprise fast but small on-chip memories that store both data and program code. While it is possible to provide additional, external memory, this may not be desirable for reasons of cost and PCB integration. In such a situation the size of the available on-chip memories places hard constraints on the code size, which as an optimization goal becomes at least as important as performance. Incidentally, smaller code may also be faster code, especially if an instruction cache is present: a lower instruction count and a reduced number of instruction cache misses contribute to higher performance. However, there is no strict correlation between code size and performance. For example, some performance optimizations such as loop unrolling (see Sect. 3.3.1) increase code size, whereas code-size-aware optimizations such as redundant code elimination may lead to lower performance. Ultimately, it is the responsibility of the application design team to trade off the (real-time) performance requirements of their application against the memory constraints set by their chosen processor. In the following we present a number of compiler-driven optimizations for code size that are applicable to most DSPs and do not require any additional hardware modules, e.g. for code decompression. A comprehensive survey of code-size reduction methods can be found in [5].
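As a small illustration of the first of these trade-offs (the loop and the unrolling factor of four below are our own example, not taken from the text), unrolling removes loop overhead at the price of a larger loop body:

/* Illustrative only: unrolling a simple accumulation loop by a factor
 * of four removes most of the branch and loop-counter overhead, but
 * roughly quadruples the size of the loop body. */
#include <stdio.h>
#include <stddef.h>

/* Rolled version: smallest code, one branch per element. */
static int sum_rolled(const int *a, size_t n)
{
    int s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Unrolled by four: one branch per four elements, larger code.
 * For brevity, n is assumed to be a multiple of four. */
static int sum_unrolled(const int *a, size_t n)
{
    int s = 0;
    for (size_t i = 0; i < n; i += 4)
        s += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
    return s;
}

int main(void)
{
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    printf("%d %d\n", sum_rolled(a, 8), sum_unrolled(a, 8));
    return 0;
}

On a DSP with little program memory, the roughly fourfold growth of the loop body may outweigh the saved branch overhead, which is precisely the trade-off the design team has to resolve.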
3.5.1 Generic Optimizations for Code Compaction
In this section we discuss a number of code optimizations that are routinely applied by most compilers as part of their effort to eliminate code redundancies.
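A classical example of such a transformation is common subexpression elimination; the fragment below is a minimal, purely illustrative sketch (the function names and code are our own):

/* Illustrative only: (a + b) is computed twice in the source. */
int f_before(int a, int b, int c)
{
    return (a + b) * c + (a + b);
}

/* After common subexpression elimination the compiler keeps a single
 * evaluation of (a + b), saving an instruction in both size and time. */
int f_after(int a, int b, int c)
{
    int t = a + b;
    return t * c + t;
}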
 
 