Hardware Reference
In-Depth Information
FIGURE A.19 Compilers typically consist of two to four passes, with more highly optimizing compilers having more passes. This structure maximizes the probability that a program compiled at various levels of optimization will produce the same output when given the same input. The optimizing passes are designed to be optional and may be skipped when faster compilation is the goal and lower-quality code is acceptable. A pass is simply one phase in which the compiler reads and transforms the entire program. (The term phase is often used interchangeably with pass.) Because the optimizing passes are separated, multiple languages can use the same optimizing and code generation passes. Only a new front end is required for a new language.
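As a rough sketch of this structure, not the organization of any particular compiler, the pipeline can be modeled as a list of passes over an intermediate representation, with the optimizing passes skipped at the lowest optimization level. The pass names, the IR type, and the opt_level convention below are all invented for illustration:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct {            /* hypothetical intermediate representation */
    const char *source;
} IR;

/* A pass reads and transforms the entire program. */
typedef struct {
    const char *name;
    void (*run)(IR *program);
    bool optional;          /* true for the optimizing passes */
} Pass;

/* Stub passes; a real compiler would transform the IR here. */
static void front_end(IR *p)      { (void)p; }  /* parsing, semantic analysis */
static void high_level_opt(IR *p) { (void)p; }  /* e.g., procedure inlining */
static void global_opt(IR *p)     { (void)p; }  /* e.g., common subexpression elimination */
static void code_gen(IR *p)       { (void)p; }  /* instruction selection, etc. */

static void compile(IR *program, int opt_level) {
    Pass passes[] = {
        { "front end",       front_end,      false },
        { "high-level opt",  high_level_opt, true  },
        { "global opt",      global_opt,     true  },
        { "code generation", code_gen,       false },
    };
    for (size_t i = 0; i < sizeof passes / sizeof passes[0]; i++) {
        if (passes[i].optional && opt_level == 0)
            continue;       /* faster compilation, lower-quality code */
        printf("running pass: %s\n", passes[i].name);
        passes[i].run(program);
    }
}

int main(void) {
    IR program = { "x = a*b + c;" };
    compile(&program, 0);   /* unoptimized: mandatory passes only */
    compile(&program, 2);   /* optimized: all passes run */
    return 0;
}

Because only the front end knows the source language, supporting a new language in this scheme means supplying a new front end while reusing the later passes unchanged.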
A compiler writer's first goal is correctness—all valid programs must be compiled correctly.
The second goal is usually speed of the compiled code. Typically, a whole set of other goals
follows these two, including fast compilation, debugging support, and interoperability among
languages. Normally, the passes in the compiler transform higher-level, more abstract representations into progressively lower-level representations, until eventually reaching the level of the instruction set. This structure helps manage the complexity of the transformations and makes writing a bug-free compiler easier.
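One such lowering step can be sketched as follows: a hypothetical expression tree (the higher-level, abstract representation) is translated into three-address instructions, a form much closer to the instruction set. The Expr node layout and the printed instruction format are assumptions made for this example:

#include <stdio.h>

/* Hypothetical high-level form: a binary expression tree. */
typedef struct Expr {
    char op;                /* '+', '*', or 0 for a leaf */
    const char *name;       /* variable name, leaves only */
    struct Expr *left, *right;
} Expr;

static int temp_count = 0;

/* Lower a tree to three-address code: print one instruction per
   node and return the number of the temporary holding its value. */
static int lower(const Expr *e) {
    if (e->op == 0) {                    /* leaf: load the variable */
        printf("t%d = %s\n", temp_count, e->name);
        return temp_count++;
    }
    int l = lower(e->left);
    int r = lower(e->right);
    printf("t%d = t%d %c t%d\n", temp_count, l, e->op, r);
    return temp_count++;
}

int main(void) {
    Expr a = { 0, "a", NULL, NULL };
    Expr b = { 0, "b", NULL, NULL };
    Expr c = { 0, "c", NULL, NULL };
    Expr mul = { '*', NULL, &b, &c };    /* b * c */
    Expr add = { '+', NULL, &a, &mul };  /* a + (b * c) */
    lower(&add);
    return 0;
}

Running it on a + b * c prints five instructions, t0 = a through t4 = t0 + t3, each simple enough to map almost directly onto machine instructions.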
The complexity of writing a correct compiler is a major limitation on the amount of optimization that can be done. Although the multiple-pass structure helps reduce compiler complexity, it also means that the compiler must order and perform some transformations before others. In the diagram of the optimizing compiler in Figure A.19, we can see that certain high-level optimizations are performed long before it is known what the resulting code will look like. Once such a transformation is made, the compiler can't afford to go back and revisit all steps, possibly undoing transformations. Such iteration would be prohibitive, both in compilation time and in complexity. Thus, compilers make assumptions about the ability of later steps to deal with certain problems. For example, compilers usually have to choose which procedure calls to expand inline before they know the exact size of the procedure being called. Compiler writers call this problem the phase-ordering problem.
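A minimal sketch of such an early commitment, assuming a made-up ProcInfo summary and an arbitrary size threshold: the inliner must decide from an estimate, because the passes that determine the procedure's true size have not yet run, and the decision is never revisited.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical summary of a procedure early in compilation. */
typedef struct {
    int estimated_insts;    /* size estimate; later passes may change it */
    int call_sites;
} ProcInfo;

/* The inliner commits here, long before code generation reveals
   the procedure's true size; the decision is not revisited. */
static bool should_inline(const ProcInfo *p) {
    const int SIZE_LIMIT = 25;           /* arbitrary threshold */
    return p->estimated_insts <= SIZE_LIMIT || p->call_sites == 1;
}

int main(void) {
    ProcInfo small = { 12, 8 }, large = { 200, 8 };
    printf("inline small? %d  inline large? %d\n",
           should_inline(&small), should_inline(&large));
    return 0;
}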
How does this ordering of transformations interact with the instruction set architecture? A good example occurs with the optimization called global common subexpression elimination. This optimization finds two instances of an expression that compute the same value, saves the value of the first computation in a temporary, and reuses the temporary in place of the second computation.
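The standard illustration of the transformation, written here as hand-transformed C rather than actual compiler output (the variable and function names are arbitrary):

/* Before: the expression a*b is computed twice. */
void before(int a, int b, int c, int *x, int *y) {
    *x = a*b + c;
    *y = a*b - c;
}

/* After global common subexpression elimination: the first
   computation is saved in a temporary and reused, so a*b is
   computed only once. */
void after(int a, int b, int c, int *x, int *y) {
    int t = a*b;            /* the common subexpression */
    *x = t + c;
    *y = t - c;
}

The second computation disappears, but at the cost of a new temporary that must live somewhere, and this is where the instruction set architecture, in particular the number of registers available to hold such temporaries, enters the picture.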
 