type to other types. It is created from scratch during the initial compilation and updated
incrementally with new information on each subsequent compilation. Using this graph, the
compiler can decide whether any structural changes 6 occur as a consequence of changing, adding,
or deleting a file. From these structural changes it computes the set of source files that might
compile differently as a consequence. The compiler then deletes obsolete class files, along with
the Java problem markers that previous compilation errors had added to the source files, and
compiles only the computed subset of source files. The dependency graph is saved between
sessions as part of workspace saves, so it does not have to be regenerated and the compiler
avoids a full compilation every time the project is opened. As the final steps of compilation,
the last built state is updated with the new reference information for each compiled type, and
new problem markers are generated for every compiled type that has compilation problems
[Rivieres and Beaton, 2006]. Figure 8.1 shows these steps.
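To make the idea more concrete, the following sketch shows one possible shape of such a
dependency graph in plain Java. It is only an illustration under simplified assumptions; the
names DependencyGraph, addReference, and filesToRecompile are invented here and do not
correspond to ECJ's internal API.

    import java.util.*;

    // Simplified sketch (not ECJ's actual data structure): for every type,
    // remember which source files reference it, so that a structural change
    // to that type can be mapped back to the files that must be recompiled.
    public class DependencyGraph {

        // type name -> source files that reference that type
        private final Map<String, Set<String>> referencedBy = new HashMap<>();

        // Record that the given source file references the given type.
        public void addReference(String typeName, String referencingFile) {
            referencedBy.computeIfAbsent(typeName, k -> new HashSet<>())
                        .add(referencingFile);
        }

        // Given the types whose structure changed in the last compilation,
        // compute the source files that might now compile differently.
        public Set<String> filesToRecompile(Set<String> structurallyChangedTypes) {
            Set<String> result = new HashSet<>();
            for (String type : structurallyChangedTypes) {
                result.addAll(referencedBy.getOrDefault(type, Collections.emptySet()));
            }
            return result;
        }
    }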
Incremental compilation is very effective, especially in big projects with hundreds of
source files, since most of the source files remain unchanged between two consecutive com-
pilations. Frequent compilations of hundreds or thousands of source files can thus be performed
without noticeable delay.
Often, a change to a Java file causes no structural changes at all, and there is only a
single file to compile. Even when structural changes do occur and all referencing types need
to be recompiled, those secondary types will almost never have structural changes themselves,
so the compilation completes in, at most, a couple of iterations. Of course, one cannot claim
that there will never be significant structural changes that cause many files to be recompiled.
ECJ considers this trade-off worth the risk, assuming that the compiler runs very fast in the
most common cases; rare occasions of longer delays are acceptable to users.
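The iterative behaviour described above can be sketched as a simple worklist loop. Again,
this is only an illustration: the IncrementalBuildLoop class and the Compiler interface are
hypothetical stand-ins, and DependencyGraph refers to the sketch shown earlier, not to ECJ's
real implementation.

    import java.util.*;

    // Minimal sketch of the iterative build loop: compile the changed files,
    // and if any of them had structural changes, pull in their referencing
    // files for the next round, until no further structural changes occur.
    public class IncrementalBuildLoop {

        interface Compiler {
            // Compiles one file and returns the types whose structure changed.
            Set<String> compile(String sourceFile);
        }

        static void build(Set<String> changedFiles, Compiler compiler, DependencyGraph graph) {
            Set<String> toCompile = new HashSet<>(changedFiles);
            Set<String> alreadyCompiled = new HashSet<>();

            while (!toCompile.isEmpty()) {
                Set<String> changedTypes = new HashSet<>();
                for (String file : toCompile) {
                    changedTypes.addAll(compiler.compile(file));
                    alreadyCompiled.add(file);
                }
                // Types with structural changes force their referencing files
                // into the next iteration; in practice this converges quickly.
                toCompile = new HashSet<>(graph.filesToRecompile(changedTypes));
                toCompile.removeAll(alreadyCompiled);
            }
        }
    }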
Incremental compilers are often accused of not optimizing enough, given the locality and
the relatively small amount of code that has changed since the last build. However, this is
not a real problem, as all the heavy-weight optimizations are performed at run-time; Java
compilers are not expected to optimize heavily when compiling from source code to bytecode.
ECJ performs light optimizations, such as discarding unused local variables from the generated
bytecode and inlining, so that the bytecode loads faster into the VM (the verification process
becomes much simpler even though the code size to load increases), but the real run-time
performance difference comes from the VM that is used (for example, IBM's JVM, Oracle's
JVM, etc.). And if one decides to use, say, Oracle's JVM, choosing the latest version is good
practice, because HotSpot is improved with each new version.
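As a small, concrete illustration of this trade-off, the stand-alone ECJ batch compiler can
be asked to keep unused local variables for debugging instead of discarding them; assuming
the batch compiler has been downloaded as ecj.jar (the jar name is an assumption, not part
of the text above), an invocation along these lines should work:

    java -jar ecj.jar -1.8 -g -preserveAllLocals Foo.java

Here -g requests full debug attributes and -preserveAllLocals tells the compiler not to
optimize away unused locals, which makes stepping through the code in a debugger more
faithful to the source.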
Perhaps one of the most interesting features of ECJ is its ability to run and debug code
that contains errors. Consider the following naïve test:
public class Foo {
    public void foo() {
        System.println("I feel like I forgot something...");
    }

    public static void main(String[] args) {
        System.out.println("It really works!");
    }
}
ECJ will flag the erroneous use of println() within foo(), marking the line in the
editor and underlining the println; however, it will still generate the bytecode. And when
6 Structural changes are changes that can affect the compilation of a referencing type, for example,
added or removed methods, fields, or types, or changed method signatures.
 