FULL SYSTEM TESTING WITH MULTIPLE JVMS
One particularly important case of testing a full application occurs when multiple applications are
run at the same time on the same hardware. Many aspects of the JVM are tuned by default to
assume that all machine resources are available to them, and if those JVMs are tested in isolation,
they will behave well. If they are tested when other applications are present (including, but not
limited to, other JVMs), their performance will be quite different.
Examples of this are given in later chapters, but here is one quick preview: when executing a GC
cycle, one JVM will (in its default configuration) drive the CPU usage on a machine to 100% of
all processors. If CPU is measured as an average during the program's execution, the usage may
average out to 40%—but that really means that the CPU is 30% busy at some times and 100%
busy at other times. When the JVM is run in isolation, that may be fine, but if the JVM is running
concurrently with other applications, it will not be able to get 100% of the machine's CPU during
GC. Its performance will be measurably different from its performance when run by itself.
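The arithmetic behind that preview can be checked with a short sketch. The 30% baseline, the 100% GC spike, and the fraction of time spent in GC are hypothetical numbers chosen only to reproduce the 40% average mentioned in the text:

```java
// Sketch: a run that is 30% busy most of the time but 100% busy during GC
// can still report a modest average, hiding the bursts entirely.
public class CpuAverage {
    public static void main(String[] args) {
        double baselineUtil = 0.30;   // CPU usage outside GC cycles (assumed)
        double gcUtil = 1.00;         // CPU usage during GC: all processors
        double gcFraction = 1.0 / 7;  // fraction of wall time in GC (assumed)

        // Time-weighted average over the whole run
        double average = gcUtil * gcFraction + baselineUtil * (1 - gcFraction);
        System.out.printf("Average CPU: %.0f%%%n", average * 100);
    }
}
```

The point of the sketch is that the average alone cannot distinguish this bursty profile from a machine that is steadily 40% busy, which is exactly why a co-located application sees contention during GC even when average CPU looks low.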
This is another reason why microbenchmarks and module-level benchmarks cannot necessarily
give you the full picture of an application's performance.
It's not the case that the time spent optimizing the calculations in this example is entirely
wasted: once effort is put into the bottlenecks elsewhere in the system, the performance
benefit will finally be apparent. Rather, it is a matter of priorities: without testing the entire
application, it is impossible to tell where spending time on performance work will pay off.
I work with the performance of both Java SE and EE, and each of those groups has a set of
tests they characterize as microbenchmarks. To a Java SE engineer, that term connotes an
example even smaller than that in the first section: the measurement of something quite small.
Java EE engineers tend to use that term to apply to something else: benchmarks that measure
one aspect of performance, but that still execute a lot of code.
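A measurement in that style can be sketched as follows. This is a minimal, self-contained stand-in: it uses the JDK's built-in `com.sun.net.httpserver.HttpServer` in place of a real application server serving a JSP, and the endpoint path, response body, and iteration count are all illustrative:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ResponseTimeBench {
    public static void main(String[] args) throws Exception {
        // Stand-in for the application server: a trivial endpoint returning
        // a small page (a real EE benchmark would target a deployed JSP).
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/hello", exchange -> {
            byte[] body = "<html><body>hello</body></html>".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:"
                        + server.getAddress().getPort() + "/hello"))
                .build();

        // Time many round trips and report the average response time.
        int iterations = 100;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            client.send(request, HttpResponse.BodyHandlers.ofString());
        }
        long elapsed = System.nanoTime() - start;
        System.out.printf("Average response time: %.2f ms%n",
                elapsed / 1_000_000.0 / iterations);
        server.stop(0);
    }
}
```

Even this toy version exercises socket handling, request parsing, and response writing on every iteration, which is what makes it a "microbenchmark" only in the EE sense: it isolates one aspect of performance while still running a substantial amount of code.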
An example of a Java EE “microbenchmark” might be something that measures how quickly
the response from a simple JSP can be returned from an application server. The code
involved in such a request is substantial compared to a traditional microbenchmark: there is a
lot of socket-management code, code to read the request, code to find (and possibly compile)
the JSP, code to write the answer, and so on. From a traditional standpoint, this is not mi-