Care needs to be taken when applying unreachable code elimination because it is
not always possible to determine the control-flow relations in a program accurately.
For example, if a program makes use of both the C and assembly languages, certain
C routines that appear to be unreachable may in fact be called from the assembly code.
The compiler needs to be conservative to ensure correctness and cannot make
assumptions about, e.g., functions with external linkage.
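As a small, hypothetical C sketch (the routine names are invented for illustration), consider a translation unit in which one function has no visible C-language callers but is referenced from hand-written assembly, while another is local to the file:

/* No calls to this routine appear in the C sources, but it is referenced
   from a hand-written assembly interrupt vector table. Because it has
   external linkage, the compiler must assume such callers may exist and
   cannot classify it as unreachable. */
void timer_isr(void)
{
    /* ... acknowledge the interrupt, move samples into a buffer ... */
}

/* This helper has internal linkage and no callers within the translation
   unit, so the compiler can prove it unreachable and remove it safely. */
static void unused_helper(void)
{
    /* ... */
}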
Dead Code Elimination targets computations whose results are never used. Note
that a code fragment may be reachable, but still dead, if the results computed by this
fragment are not used (or output) at any later point in the program. The “results” of a
computation include any side effects, such as exceptions generated or signals
raised by this piece of code, that affect other parts of the program under
optimization. Only if it can be guaranteed that a fragment of code produces
no direct or indirect results in this strict sense that are consumed elsewhere
(e.g. it cannot raise a division-by-zero exception or similar) can it be eliminated.
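The distinction can be illustrated with a small C sketch (function and variable names are purely illustrative); one unused computation is free of side effects and therefore dead, while the other has a potential side effect that a conservative compiler must respect:

int example(int a, int b)
{
    int t = a * a;  /* result never used and the multiplication has no side
                       effects: dead code that may safely be eliminated    */
    int u = a / b;  /* result also unused, but on targets where a division
                       by zero raises an observable exception the compiler
                       must first prove b != 0 (or that the exception does
                       not affect the rest of the program)                 */
    return a + b;
}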
Strength Reduction [ 7 ] replaces a costly sequence of instructions with an
equivalent, but cheaper (= faster or shorter) instruction sequence. For example, an
optimizing compiler may replace the operation 2 × x with either x + x or x << 1.
Strength reduction is most widely known for its use inside loops, where repeated
multiplications are replaced with cheaper additions. In addition to the obvious
performance improvement from replacing a costly operation with a faster one,
strength reduction often decreases the total number of instructions in a loop and,
thus, contributes to code size reduction. If applied in combination with other
optimizations such as algebraic reassociation, strength reduction further reduces
the number of operations. On the other hand, strength reduction may increase the
number of additions as the number of multiplications is decreased.
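A minimal C sketch of the loop case (function names are illustrative): the per-iteration multiplication by the loop index is replaced by an addition on an auxiliary induction variable, in the spirit of the transformation described above.

#include <stddef.h>

/* Before: one multiplication per loop iteration. */
void fill_naive(int *a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        a[i] = (int)(i * 8);
}

/* After strength reduction: the multiplication i * 8 is replaced by an
   addition on the auxiliary induction variable t, which advances by 8
   per iteration. */
void fill_reduced(int *a, size_t n)
{
    int t = 0;
    for (size_t i = 0; i < n; i++, t += 8)
        a[i] = t;
}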
Certainly, optimizations for code size interfere with each other as much as
optimizations for performance do. Hence, it is a difficult problem to select the most
appropriate set of code transformations (and their order of application) to achieve
the maximum code size reduction. An iterative approach to compiler tuning,
in which many different versions of the same code are tested and statistical analysis is
employed to identify the compiler options that have the largest effect on code size,
is presented in [18]. The use of genetic algorithms to find optimization sequences
that generate small object code is the subject of [8]. Code compaction in a binary
rewriting tool is described in [9]. For architectures that provide an additional narrow
instruction set, the generation of mixed instruction-width code is discussed in,
e.g., [23].
3.5.2 Coalescing of Repeated Instruction Sequences
In addition to the generic optimizations discussed in the previous section, which
target the obvious sources of code size reduction such as dead or unreachable code,
there exist further optimizations that specifically aim at generating more compact
code, possibly at the cost of decreased performance. One of these techniques [6]
tries to identify repeated code fragments that are subsequently replaced by a single