Modern JVMs can optimize uncontended lock acquisition and release fairly effectively, but if multiple threads request the lock at the same time the JVM enlists the help of the operating system. If it gets to this point, some unfortunate thread will be suspended and have to be resumed later. [1] When that thread is resumed, it may have to wait for other threads to finish their scheduling quanta before it is actually scheduled. Suspending and resuming a thread has a lot of overhead and generally entails a lengthy interruption. For lock-based classes with fine-grained operations (such as the synchronized collections classes, where most methods contain only a few operations), the ratio of scheduling overhead to useful work can be quite high when the lock is frequently contended.
Volatile variables are a lighter-weight synchronization mechanism than locking because they
do not involve context switches or thread scheduling. However, volatile variables have some
limitations compared to locking: while they provide similar visibility guarantees, they cannot
be used to construct atomic compound actions. This means that volatile variables cannot be
used when one variable depends on another, or when the new value of a variable depends on
its old value. This limits when volatile variables are appropriate, since they cannot be used to
reliably implement common tools such as counters or mutexes. [2]
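The lost-update hazard can be made concrete by spelling out the three steps hidden inside an increment. The sketch below simulates, in a single thread, the interleaving that can occur when two real threads increment the same field; the class and variable names are illustrative, and the point is that declaring the field `volatile` does nothing to make the three steps atomic:

```java
public class LostUpdateDemo {
    // volatile guarantees visibility of writes, but not atomicity
    static volatile int count = 0;

    public static void main(String[] args) {
        // Manually interleave the three steps of two increments,
        // mimicking an unlucky schedule of two concurrent threads.
        int threadA = count;   // "thread A": fetch current value (0)
        int threadB = count;   // "thread B": fetch current value (0)
        count = threadA + 1;   // A: write back 0 + 1
        count = threadB + 1;   // B: write back 0 + 1, clobbering A's update
        System.out.println(count); // prints 1, not 2: one update was lost
    }
}
```

With real threads the bad interleaving is intermittent rather than guaranteed, which is what makes such races so hard to detect by testing.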
For example, while the increment operation (++i) may look like an atomic operation, it is actually three distinct operations: fetch the current value of the variable, add one to it, and then write the updated value back. In order to not lose an update, the entire read-modify-write operation must be atomic. So far, the only way we've seen to do this is with locking, as in Counter on page 56.
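The Counter listing itself is not reproduced here, but a minimal sketch of such a lock-based counter, guarding its state with the object's intrinsic lock, might look like this (the overflow check is one plausible detail, not necessarily part of the original listing):

```java
public final class Counter {
    private long value = 0;

    // Both methods synchronize on the same intrinsic lock, so the
    // read-modify-write in increment() is atomic and updates to value
    // are visible to any thread that subsequently acquires the lock.
    public synchronized long getValue() {
        return value;
    }

    public synchronized long increment() {
        if (value == Long.MAX_VALUE)
            throw new IllegalStateException("counter overflow");
        return ++value;
    }
}
```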
Counter is thread-safe, and in the presence of little or no contention performs just fine. But
under contention, performance suffers because of context-switch overhead and scheduling
delays. When locks are held so briefly, being put to sleep is a harsh penalty for asking for the
lock at the wrong time.
Locking has a few other disadvantages. When a thread is waiting for a lock, it cannot do anything else. If a thread holding a lock is delayed (due to a page fault, scheduling delay, or the like), then no thread that needs that lock can make progress. This can be a serious problem if the blocked thread is a high-priority thread but the thread holding the lock is a lower-priority thread, a performance hazard known as priority inversion. Even though the higher-priority thread should have precedence, it must wait until the lock is released, and this effectively downgrades its priority to that of the lower-priority thread. If a thread holding a lock is permanently blocked (due to an infinite loop, deadlock, livelock, or other liveness failure), any threads waiting for that lock can never make progress.