Table 12-1. Time to serialize/deserialize 10,000 objects with compression

                                       Serialization time   Deserialization time
Unbuffered compression/decompression   60.3 seconds         79.3 seconds
Buffered compression/decompression     26.8 seconds         12.7 seconds
The failure to properly buffer the I/O resulted in as much as a 6x performance penalty.
1. Issues around buffered I/O are common due to the default implementation of the simple input and output stream classes.
2. I/O must be properly buffered for files and sockets, and also for internal operations like compression and string encoding.
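The buffering point can be sketched in code. The following is a minimal example (the class and method names are illustrative, not from the original text): both methods serialize an object through a GZIP stream, but the second inserts a BufferedOutputStream between the object stream and the compressor, so the compressor sees large chunks instead of the many small writes object serialization produces. When the destination is a file or socket rather than a byte array, the underlying stream should be buffered as well.

```java
import java.io.*;
import java.util.zip.*;

public class BufferedSerialization {
    // Unbuffered: every small write from object serialization
    // goes straight into the GZIP compressor.
    static byte[] unbuffered(Serializable obj) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos =
                 new ObjectOutputStream(new GZIPOutputStream(baos))) {
            oos.writeObject(obj);
        }
        return baos.toByteArray();
    }

    // Buffered: small writes are collected in a BufferedOutputStream
    // and handed to the compressor in large chunks.
    static byte[] buffered(Serializable obj) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(
                 new BufferedOutputStream(new GZIPOutputStream(baos)))) {
            oos.writeObject(obj);
        }
        return baos.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // Both produce a valid compressed stream; the buffered
        // version is what avoids the penalty shown in Table 12-1.
        Integer[] data = new Integer[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        System.out.println(unbuffered(data).length > 0
                           && buffered(data).length > 0);
    }
}
```

The same applies on the reading side: wrap the GZIPInputStream in a BufferedInputStream before handing it to ObjectInputStream.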
The performance of classloading is the bane of anyone attempting to optimize either program
startup or deployment of new code in a dynamic system (e.g., deploying a new application
into a Java EE application server, or loading an applet in a browser).
There are many reasons for that. The primary one is that the class data (i.e., the Java bytecodes) is typically not quickly accessible: it must be loaded from disk or from the network, located in one of possibly many JAR files on the classpath, and found by one of possibly many classloaders. There are some ways to help this along: for example, Java WebStart caches classes it reads from the network into a hidden directory, so that the next time it starts the same application, it can read the classes from the local disk more quickly than over the network. Packaging an application into fewer JAR files will also speed up its classloading performance.
In a complex environment, one obvious way to speed things up is to parallelize classloading. Take a typical application server: on startup, it may need to initialize multiple applications, each of which uses its own classloader. Given the multiple CPUs available to most application servers, parallelization should be an obvious win here.
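One way to sketch that idea is to submit the class-loading work for each application to a thread pool. This is an illustrative sketch, not the server's actual mechanism: the class names and pool size here are arbitrary, and a real application server would hand each task its own application classloader rather than the context classloader used below. Note that since Java 7 a classloader can call ClassLoader.registerAsParallelCapable() so that concurrent loads lock per class name instead of on the whole loader.

```java
import java.util.*;
import java.util.concurrent.*;

public class ParallelClassLoading {
    // Load a list of classes concurrently. In an application server,
    // each task would use the classloader of the application being
    // initialized; here we use the context classloader so the sketch
    // is self-contained and runnable.
    static List<Class<?>> loadAll(List<String> names, int threads)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<Class<?>>> futures = new ArrayList<>();
            for (String name : names) {
                futures.add(pool.submit(() ->
                    Class.forName(name, true,
                        Thread.currentThread().getContextClassLoader())));
            }
            List<Class<?>> loaded = new ArrayList<>();
            for (Future<Class<?>> f : futures) {
                loaded.add(f.get());   // propagate any loading failure
            }
            return loaded;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<Class<?>> loaded = loadAll(
            List.of("java.util.HashMap", "java.util.TreeMap",
                    "java.util.ArrayList", "java.util.LinkedList"), 4);
        System.out.println(loaded.size());
    }
}
```

Whether this wins in practice depends on whether the classloaders involved are parallel capable; loaders that serialize on a single lock will simply queue the threads.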