The allocateDirect() method call is quite expensive; direct byte buffers should be reused
as much as possible. The ideal situation is when threads are independent and each can keep a
direct byte buffer as a thread-local variable. That can sometimes use too much native
memory if there are many threads that need buffers of variable sizes, since eventually each
thread will end up with a buffer at the maximum possible size. For that kind of situation—or
when thread-local buffers don't fit the application design—an object pool of direct byte buffers may be more useful.
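As a sketch of the thread-local approach (the class name and 8 KB buffer size are illustrative; a real application would size the buffer for its largest expected request), each thread pays the `allocateDirect()` cost once and reuses the buffer thereafter:

```java
import java.nio.ByteBuffer;

public class BufferCache {
    // Hypothetical fixed size; pick the maximum request size in practice.
    private static final int BUFFER_SIZE = 8192;

    // One direct buffer per thread, allocated lazily on first use.
    private static final ThreadLocal<ByteBuffer> LOCAL_BUFFER =
            ThreadLocal.withInitial(() -> ByteBuffer.allocateDirect(BUFFER_SIZE));

    public static ByteBuffer acquire() {
        ByteBuffer buf = LOCAL_BUFFER.get();
        buf.clear();   // reset position and limit so the buffer can be reused
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer a = acquire();
        ByteBuffer b = acquire();
        // The same thread gets the same underlying direct buffer back.
        System.out.println(a == b);        // true
        System.out.println(a.isDirect());  // true
    }
}
```

Note that nothing here frees the native memory when a thread exits; the buffer is reclaimed only when the `ThreadLocal` entry and the buffer object are garbage collected.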
Byte buffers can also be managed by slicing them. The application can allocate one very
large direct byte buffer, and individual requests can allocate a portion out of that buffer using
the slice() method of the ByteBuffer class. This solution can become unwieldy when the
slices are not always the same size: the original byte buffer can then become fragmented in
the same way the heap becomes fragmented when allocating and freeing objects of different
sizes. Unlike the heap, however, the individual slices of a byte buffer cannot be compacted,
so this solution really works well only when all the slices are a uniform size.
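The slicing model can be sketched as follows (the 64 KB master buffer and 8 KB slice size are arbitrary, uniform sizes chosen to avoid the fragmentation problem just described); each slice shares the master buffer's native memory but has its own independent position and limit:

```java
import java.nio.ByteBuffer;

public class SliceDemo {
    public static void main(String[] args) {
        // One large direct buffer allocated up front.
        ByteBuffer master = ByteBuffer.allocateDirect(64 * 1024);

        // Hand out two uniform 8 KB slices by setting the region
        // [position, limit) before each slice() call.
        master.position(0).limit(8192);
        ByteBuffer first = master.slice();

        master.position(8192).limit(16384);
        ByteBuffer second = master.slice();

        System.out.println(first.capacity());   // 8192
        System.out.println(second.isDirect());  // true
    }
}
```

A pooling layer built on this would also need to track which 8 KB regions are in use, since `slice()` itself does no bookkeeping.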
From a tuning perspective, the one thing to realize with any of these programming models is
that the amount of direct byte buffer space that an application can allocate can be limited by
the JVM. The total amount of memory that can be allocated for direct byte buffers is specified by setting the -XX:MaxDirectMemorySize=N flag. Starting in Java 7, the default value for this flag is 0, which means there is no limit (subject to the address space size and any operating system limits on the process). That flag can be set to limit the direct byte buffer use of an application (and to provide compatibility with previous releases of Java, where the limit was 64 MB).
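Because the flag governs a JVM-wide budget, it is often useful to observe how much of that budget is in use. One way is the standard BufferPoolMXBean API, whose "direct" pool reports the native memory held by direct byte buffers (the 1 MB allocation below is arbitrary, just to make the number nonzero):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;

public class DirectMemoryUsage {
    public static void main(String[] args) {
        // Allocate some direct memory so the pool shows usage.
        ByteBuffer buf = ByteBuffer.allocateDirect(1024 * 1024);

        // The platform exposes one pool per buffer type ("direct", "mapped").
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if (pool.getName().equals("direct")) {
                System.out.println("direct bytes used: " + pool.getMemoryUsed());
            }
        }
    }
}
```

Running this with, say, -XX:MaxDirectMemorySize=64m caps the reported pool; an allocation that would exceed the cap fails with an OutOfMemoryError rather than growing native memory further.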
1. The total footprint of the JVM has a significant effect on its performance, particularly if physical memory on the machine is constrained. Footprint is another aspect of performance that should be commonly monitored during testing.
2. From a tuning perspective, the footprint of the JVM can be limited by constraining the amount of native memory it uses for direct byte buffers, thread stacks, and the code cache (as well as the heap).