resolve that balance differently, which is one reason why one profiling tool may report very
different data than another tool.
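To make the idea of a sampling profile concrete, here is a minimal sketch of in-process sampling written for this discussion; it periodically snapshots every thread's stack via Thread.getAllStackTraces() and attributes each sample to the top frame. The class and method names here are my own invention, and real profilers use lower-level mechanisms (such as JVMTI) and make different attribution choices, which is exactly where tools diverge.

```java
import java.util.HashMap;
import java.util.Map;

// A toy sampling profiler: take N stack snapshots, count which method
// is on top of each thread's stack at each sample. Methods that appear
// in many samples are (statistically) where the time is going.
public class MiniSampler {
    public static Map<String, Integer> sample(int samples, long intervalMillis)
            throws InterruptedException {
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i < samples; i++) {
            for (Map.Entry<Thread, StackTraceElement[]> e
                    : Thread.getAllStackTraces().entrySet()) {
                StackTraceElement[] stack = e.getValue();
                if (stack.length == 0) continue;
                // Attribute this sample to the topmost frame of the thread.
                String top = stack[0].getClassName() + "." + stack[0].getMethodName();
                counts.merge(top, 1, Integer::sum);
            }
            Thread.sleep(intervalMillis);
        }
        return counts;
    }

    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> counts = sample(10, 5);
        counts.entrySet().stream()
              .sorted((a, b) -> b.getValue() - a.getValue())
              .limit(5)
              .forEach(e -> System.out.println(e.getValue() + "  " + e.getKey()));
    }
}
```

A profile like the one in Figure 3-2 is essentially this histogram, normalized to percentages of total samples.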
Figure 3-2 shows a basic sampling profile taken to measure the startup of a domain of the
GlassFish application server. The profile shows that the bulk of the time (19%) was spent in
the defineClass1() method, followed by the getPackageSourcesInternal() method,
and so on. It isn't a surprise that the startup of a program would be dominated by the
performance of defining classes; in order to make this code faster, the performance of
classloading must be improved.
Figure 3-2. A sample-based profile
Note carefully the last statement: it is the performance of classloading that must be
improved, and not the performance of the defineClass1() method. The common assumption
when looking at a profile is that improvements must come from optimizing the top method in
the profile. However, that is often too limiting an approach. In this case, the
defineClass1() method is part of the JDK, and a native method at that; its performance
isn't going to be improved without rewriting the JVM. Still, assume that it could be rewritten
so that it took 60% less time. That would translate to an overall improvement of roughly 11%
in execution time (60% of the method's 19% share), which is certainly nothing to sneeze at.
More commonly, though, the top method in a profile may take only 2% or 3% of total time;
cutting its time in half (which is usually enormously difficult) will only speed up application
performance by about 1%. Focusing only on the top method in a profile isn't usually going to
lead to huge gains in performance.
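The arithmetic behind these estimates is simple, and worth making explicit: if a method accounts for a given fraction of total time and you shrink that method's own time by some factor, the overall execution time shrinks only by the product of the two. The class and method names in this sketch are mine, written for illustration.

```java
// Speedup arithmetic for profile-driven optimization:
// overall improvement = (method's share of total time) * (local reduction).
public class SpeedupMath {
    static double overallImprovement(double fraction, double reduction) {
        return fraction * reduction;
    }

    public static void main(String[] args) {
        // defineClass1() at 19% of total time, made 60% faster: ~0.114,
        // i.e., roughly an 11% overall improvement.
        System.out.println(overallImprovement(0.19, 0.60));
        // A method taking 3% of total time, cut in half: ~0.015,
        // i.e., only about a 1.5% overall improvement.
        System.out.println(overallImprovement(0.03, 0.50));
    }
}
```

This is why halving a 3% method barely moves the needle, while even a modest improvement to a 19% method is worthwhile.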
Instead, the top method(s) in a profile should point you to the area in which to search for
optimizations. GlassFish performance engineers aren't going to attempt to make class
definition faster, but they can figure out how to speed up classloading in general—by loading
fewer classes, loading classes in parallel, and so on.
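One of those strategies—loading classes in parallel—can be sketched as follows. This is not GlassFish's actual startup code; it is a hypothetical illustration (class and method names are mine) of loading a known list of classes concurrently with an executor, relying on the fact that the JDK's classloaders are parallel-capable.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch: load a fixed list of classes in parallel rather than
// serially. A server that knows which classes its startup needs could use
// this pattern to overlap the classloading work.
public class ParallelClassLoading {
    static List<Class<?>> loadAll(List<String> names, int threads)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<Class<?>>> futures = new ArrayList<>();
            for (String name : names) {
                // Class.forName triggers loading and initialization; with a
                // parallel-capable classloader these calls can proceed
                // concurrently on the pool's threads.
                futures.add(pool.submit(() -> Class.forName(name)));
            }
            List<Class<?>> loaded = new ArrayList<>();
            for (Future<Class<?>> f : futures) {
                loaded.add(f.get());  // propagate any ClassNotFoundException
            }
            return loaded;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<Class<?>> classes = loadAll(
            List.of("java.util.HashMap", "java.util.ArrayList",
                    "java.util.concurrent.ConcurrentHashMap"), 3);
        System.out.println(classes.size() + " classes loaded");
    }
}
```

Whether this wins in practice depends on lock contention inside the classloader and on I/O; the point is that the optimization targets classloading as a whole, not defineClass1() itself.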