Chapter 16. The Java Memory Model
Throughout this book, we've mostly avoided the low-level details of the Java Memory Model (JMM) and instead focused on higher-level design issues such as safe publication and the specification of, and adherence to, synchronization policies. These derive their safety from the JMM, and you may find it easier to use these mechanisms effectively when you understand why they work. This chapter pulls back the curtain to reveal the low-level requirements and guarantees of the Java Memory Model and the reasoning behind some of the higher-level design rules offered in this book.
16.1. What is a Memory Model, and Why Would I Want One?
Suppose one thread assigns a value to aVariable:
aVariable = 3;
A memory model addresses the question "Under what conditions does a thread that reads aVariable see the value 3?" This may sound like a dumb question, but in the absence of synchronization, there are a number of reasons a thread might not immediately, or ever, see the results of an operation in another thread. Compilers may generate instructions in a different order than the "obvious" one suggested by the source code, or store variables in registers instead of in memory; processors may execute instructions in parallel or out of order; caches may vary the order in which writes to variables are committed to main memory; and values stored in processor-local caches may not be visible to other processors. These factors can prevent a thread from seeing the most up-to-date value for a variable and can cause memory actions in other threads to appear to happen out of order, if you don't use adequate synchronization.
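To make the hazard concrete, here is a minimal sketch (the class and field names are illustrative, not taken from the text) of a reader thread that, without synchronization, is not guaranteed ever to see the writer's updates, or may see them out of order:

public class VisibilityExample {
    // No synchronization: the reader thread may never see these writes,
    // or may see them out of order (e.g., ready == true while number is still 0).
    private static boolean ready;
    private static int number;

    public static void main(String[] args) {
        Thread reader = new Thread(() -> {
            while (!ready) {
                Thread.yield();         // may spin forever
            }
            System.out.println(number); // may print 0 instead of 42
        });
        reader.start();
        number = 42;
        ready = true;
    }
}

Declaring ready as volatile, or guarding both fields with a common lock, is one way to restore the visibility guarantees discussed later in this chapter.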
In a single-threaded environment, all these tricks played on our program by the environment are hidden from us and have no effect other than to speed up execution. The Java Language Specification requires the JVM to maintain within-thread as-if-serial semantics: as long as the program has the same result as if it were executed in program order in a strictly sequential environment, all these games are permissible.
And that's a good thing, too, because these rearrangements are responsible for much of the improvement in computing performance in recent years. Certainly higher clock rates have contributed to improved performance, but so has increased parallelism: pipelined superscalar execution units, dynamic instruction scheduling, speculative execution, and sophisticated multilevel memory caches. As processors have become more sophisticated, so too have compilers, rearranging instructions to facilitate optimal execution and using sophisticated global register-allocation algorithms. And as processor