There's no official answer here. Certainly in other Sun products the guarantee was that a
discontinued interface would continue to be supported for five years past the announcement date.
So, if you're using any of the deprecated methods, you're probably OK for some time to come, but
change your code next time you do a major release.
The Effect of Using a JIT
To maintain complete hardware independence, Java is always compiled to byte code, which is
then interpreted. (Yes, we know that some companies write full native compilers,
but that's not "proper" according to the rules for Java. We'll subsume those in the JIT discussion.)
Although the performance of the byte-code interpreters is quite impressive and is sufficient for many
I/O-bound programs, it still doesn't hold a candle to native code for computing.
A Just-In-Time (JIT) compiler loads the byte code and then compiles it down to native code (possibly
at load time, possibly at runtime). The CPU-intensive portions of your program will now run
much faster (by a factor of 5 or so). The I/O portions won't improve at all (they're either running
kernel code or they're blocked, waiting for I/O!). How does this affect your MT programming?
Probably not at all. The thread functions already run almost entirely inside the JVM; hence, a JIT
will not speed them up at all.
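The distinction can be sketched in code (the class and method names below are invented for illustration): a CPU-bound loop is exactly the kind of code a JIT compiles to native instructions, while a blocking call spends its time in the kernel, where a JIT has nothing to compile.

```java
// Sketch: a CPU-bound task (the part a JIT speeds up ~5x) versus an
// I/O-bound task (spends its time blocked, so a JIT gains nothing).
public class JitDemo {
    static long cpuBound(int n) {
        long sum = 0;
        // Pure computation: this loop is what a JIT turns into native code.
        for (int i = 0; i < n; i++) sum += (long) i * i;
        return sum;
    }

    static void ioBound() throws InterruptedException {
        // Stand-in for blocking I/O: the thread sits in kernel code or is
        // blocked; there is no user byte code here for a JIT to improve.
        Thread.sleep(100);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(cpuBound(1000));  // prints 332833500
        ioBound();
    }
}
```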
In the HotSpot compiler, only selected portions of code are compiled to native format. As the
program runs, HotSpot continues to monitor its progress, compiling other methods as it sees fit.
HotSpot has one enormous advantage over JIT compilers: it can compile many things in-line,
which JIT compilers cannot. To maintain full Java semantics, all programs must allow new
subclasses to be loaded at any point during computation. This dynamic loading may invalidate
some of the in-line calls that you would like the compiler to make for you. JIT compilers handle
this by not compiling in-line; HotSpot gets around the problem by recompiling those sections of
code affected by the new classes.
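The scenario can be sketched as follows (the class names are invented for illustration, and the comments describe HotSpot's expected behavior, which is internal to the VM and not observable from the program itself):

```java
// Sketch: a call site HotSpot can compile in-line while only one
// subclass exists, and the later subclassing that invalidates it.
class Shape {
    double area() { return 0.0; }
}

class Circle extends Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    @Override double area() { return Math.PI * r * r; }
}

public class InlineDemo {
    static double total(Shape[] shapes) {
        double sum = 0.0;
        for (Shape s : shapes) {
            // While Circle is the only override loaded, HotSpot may
            // compile this virtual call in-line.
            sum += s.area();
        }
        return sum;
    }

    public static void main(String[] args) {
        Shape[] circles = { new Circle(1.0), new Circle(2.0) };
        System.out.println(total(circles));

        // Defining another subclass at runtime can invalidate that
        // in-lining; HotSpot recompiles total() rather than forgoing
        // in-line calls altogether, as a plain JIT must.
        Shape unitSquare = new Shape() {
            @Override double area() { return 1.0; }
        };
        System.out.println(total(new Shape[] { unitSquare }));
    }
}
```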
HotSpot (or rather the ExactVM, on which it is based) also has a number of optimizations that
improve the speed and reduce the memory required for locks. Basically, instead of allocating locks
in permanent hashtables or the like, locks are allocated on the stack when first used and popped
from the stack when the owner releases them. Only when another thread blocks on a lock is it
copied off the stack and placed into permanent memory.
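From the programmer's side nothing changes; the same synchronized block takes the cheap stack-allocated path while uncontended and is moved to permanent memory only under contention. A minimal sketch (class names invented; the fast/slow path is the VM's business, noted here only in comments):

```java
// Sketch: one synchronized block, two lock behaviors.
public class LockDemo {
    private int count = 0;
    private final Object lock = new Object();

    void increment() {
        synchronized (lock) {  // uncontended case: cheap, stack-allocated lock
            count++;
        }
    }

    int getCount() { return count; }

    public static void main(String[] args) throws InterruptedException {
        LockDemo d = new LockDemo();

        // Single-threaded: every acquisition is uncontended, so the
        // lock lives on the stack and is popped on each release.
        for (int i = 0; i < 1000; i++) d.increment();

        // Two threads contending: once a thread actually blocks, the
        // VM copies the lock off the stack into permanent memory.
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) d.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) d.increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println(d.getCount());  // prints 3000
    }
}
```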
The thread functions are mostly in the JVM itself and will not benefit much from a JIT. You
should expect the percentage of time that thread overhead takes to increase by a factor of 10 or so
(because everything else is getting faster). As long as that can be held to a small percentage of
total processing time, you should have no problems.
APIs Used in This Chapter
The Class java.lang.Thread
public Thread(ThreadGroup group, String name)
public Thread(ThreadGroup group, Runnable run)
public Thread(ThreadGroup group, Runnable run, String name)
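A brief usage sketch of these constructors (the group and thread names are invented for illustration):

```java
// Sketch: constructing threads in an explicit ThreadGroup.
public class GroupDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadGroup group = new ThreadGroup("workers");

        Runnable task = () -> System.out.println(
            Thread.currentThread().getName() + " running in group "
            + Thread.currentThread().getThreadGroup().getName());

        Thread a = new Thread(group, "idle");            // name only; default run() does nothing
        Thread b = new Thread(group, task);              // Runnable; VM generates a name
        Thread c = new Thread(group, task, "worker-1");  // Runnable plus explicit name

        b.start();
        c.start();
        b.join();
        c.join();
    }
}
```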