sume a lot of one CPU on the machine, but as long as there are more CPUs available, the application itself won't be impacted, since the compilation happens in the background.)
Once the impact of the native code has been examined, it can be filtered out to focus on the actual startup (Figure 3-7).
Figure 3-7. A filtered native profiler
Once again, the sampling profiler here points to the defineClass1() method as the hottest method, though the actual time spent in that method and its children (0.67 seconds out of 5.041 seconds) is about 13%, significantly less than what the last sample-based profiler reported. This profile also points to some additional things to examine: reading and unzipping JAR files. Since these are related to classloading, we were on the right track for those anyway, but in this case it is interesting to see that the actual I/O for reading the JAR files (via the inflateBytes() method) accounts for a few percentage points of the total. Other tools didn't show us that, partly because the native code involved in the Java ZIP libraries got treated as a blocking call and was filtered out.
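The JAR reading and inflation that the native profile attributes to classloading can be reproduced directly with the java.util.zip classes, which are the same code path a classloader takes when it pulls a .class file out of a JAR. A minimal, self-contained sketch (the class name JarInflateDemo, the entry name, and the payload are made up for illustration):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class JarInflateDemo {
    // Writes a tiny JAR-like archive with one entry so the example is
    // self-contained; ZipOutputStream deflates entries by default.
    static Path makeSampleJar(byte[] payload) throws IOException {
        Path jar = Files.createTempFile("demo", ".jar");
        try (ZipOutputStream out = new ZipOutputStream(Files.newOutputStream(jar))) {
            out.putNextEntry(new ZipEntry("com/example/Demo.class"));
            out.write(payload);
            out.closeEntry();
        }
        return jar;
    }

    // Reads one entry back; getInputStream() decompresses it through the
    // zlib inflater -- the native code that a Java-only profile can hide
    // inside the classloading call tree.
    static byte[] inflateEntry(Path jar, String name) throws IOException {
        try (ZipFile zf = new ZipFile(jar.toFile());
             InputStream in = zf.getInputStream(zf.getEntry(name))) {
            return in.readAllBytes();
        }
    }

    public static void main(String[] args) throws IOException {
        Path jar = makeSampleJar("not real bytecode, just payload".getBytes());
        byte[] data = inflateEntry(jar, "com/example/Demo.class");
        System.out.println("inflated " + data.length + " bytes");
        Files.delete(jar);
    }
}
```

Profiling a loop around inflateEntry() with a native-capable profiler would show the inflater's native frames, just as they appeared under defineClass1() here.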
No matter which profiling tool (or better yet, tools) you use, it is quite important to become familiar with their idiosyncrasies. Profilers are the most important tool to guide the search for performance bottlenecks, but you must learn to use them to guide you to areas of the code to optimize, rather than focusing solely on the top hot spot.