The package java.util.concurrent and its two subpackages, java.util.concurrent.atomic and
java.util.concurrent.locks, contain very useful concurrency constructs. You will typically need these constructs
only when you are developing advanced multi-threaded programs. I will not cover all of the new concurrency
constructs in this section, because describing everything available in these packages could take more than a
hundred pages. Instead, I will briefly cover some of the most useful concurrency constructs in these packages.
We can broadly categorize these concurrency features into four categories:
Atomic variables
Locks
Synchronizers
Concurrent collections (Please refer to Chapter 12 for concurrent collections)
Atomic Variables
Typically, when you need to share an updatable variable among threads, you use synchronization. Traditionally,
synchronization among multiple threads was achieved using the synchronized keyword, which is based on an object's monitor.
If a thread cannot acquire an object's monitor, the thread is suspended and has to be resumed later. This way
of synchronization (suspending and resuming threads) uses a great deal of system resources. The problem is not in the locking
and unlocking mechanism of the monitor lock; rather, it is in suspending and resuming the threads. If there is no
contention for a lock, using the synchronized keyword to synchronize threads does not hurt much.
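For example, a counter shared among multiple threads is traditionally protected with the synchronized keyword. The following is a minimal sketch; the SynchronizedCounter class is my example, not part of the JDK:

// A counter protected by the object's monitor. A thread that cannot acquire
// the monitor is suspended until the lock becomes available.
public class SynchronizedCounter {
    private int count = 0;

    public synchronized void increment() {
        count++;             // only one thread at a time can execute this method
    }

    public synchronized int get() {
        return count;
    }
}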
An atomic variable provides lock-free synchronization for a single variable. Note that if your program needs to
synchronize on more than one shared variable, you still need to use the traditional synchronization methods. By lock-free
synchronization, I mean that multiple threads can access a shared variable safely without using an object's monitor lock.
The JDK takes advantage of a hardware instruction called compare-and-swap (CAS) to implement lock-free synchronization
for a single variable.
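The java.util.concurrent.atomic package contains classes such as AtomicInteger, AtomicLong, AtomicBoolean, and AtomicReference. The following is a minimal sketch of the same counter rewritten with an AtomicInteger; the AtomicCounter class is my example:

import java.util.concurrent.atomic.AtomicInteger;

// The same counter using an atomic variable. The incrementAndGet() method is
// implemented using CAS, so no monitor lock is acquired and no thread is suspended.
public class AtomicCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    public int increment() {
        return count.incrementAndGet();   // atomic, lock-free update
    }

    public int get() {
        return count.get();
    }
}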
CAS is based on three operands: a memory location M, an expected old value O, and a new value N. If the memory
location M contains the value O, CAS updates it atomically to N; otherwise, it does nothing. CAS always returns the
value that existed at location M before the CAS operation started. The pseudo code for CAS is as follows:
CAS(M, O, N) {
    currentValueAtM = get the value at location M;
    if (currentValueAtM == O) {
        set value at M to N;       // the expected value was found, so perform the update
    }
    return currentValueAtM;        // always return the value that was at M before the operation
}
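In Java, the compareAndSet() method of the atomic classes performs a CAS and returns true if the update succeeded. Since Java 9, the compareAndExchange() method also returns the value that was at the location before the operation, which mirrors the pseudo code above. The following minimal sketch shows both; the CasDemo class is my example:

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger m = new AtomicInteger(10);              // location M currently holds 10

        // Succeeds: the current value matches the expected value 10.
        System.out.println(m.compareAndSet(10, 20));          // true; M is now 20

        // Fails: the current value is 20, not the expected 10; nothing changes.
        System.out.println(m.compareAndSet(10, 30));          // false; M is still 20

        // Java 9+: returns the value at M before the operation.
        System.out.println(m.compareAndExchange(20, 40));     // 20; M is now 40
    }
}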
The CAS instruction is lock-free. It is directly supported by the hardware of most modern computers. However, a CAS
operation is not always guaranteed to succeed in a multi-threaded environment. CAS takes an optimistic approach by
assuming that no other thread is updating the value at location M: if location M contains the value O, it is updated to N;
otherwise, nothing is done. Therefore, if multiple threads attempt to update the value at location M to different values
simultaneously, only one thread will succeed and the others will fail.
Synchronization using locks takes a pessimistic approach: it assumes that other threads may be working
with location M, so a lock is acquired before work on location M starts, preventing other threads from accessing the
location while one thread is working with it. If a CAS fails, the caller thread may try the action again or give up; a thread
using CAS never blocks. With lock-based synchronization, however, the caller thread may have to be suspended
and resumed if it cannot acquire the lock. Using synchronization, you also run the risk of a deadlock, a livelock, and
other synchronization-related failures.
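A common pattern when a CAS fails is to read the current value again and retry. The following is a minimal sketch of such a retry loop; the doubleValue() method is my example, and the atomic classes also provide methods such as updateAndGet() that implement this loop for you:

import java.util.concurrent.atomic.AtomicInteger;

public class CasRetry {
    private final AtomicInteger value = new AtomicInteger(1);

    // Doubles the current value atomically, retrying until the CAS succeeds.
    public int doubleValue() {
        while (true) {
            int current = value.get();    // the expected old value O
            int next = current * 2;       // the new value N
            if (value.compareAndSet(current, next)) {
                return next;              // CAS succeeded
            }
            // CAS failed: another thread changed the value; read it again and retry.
        }
    }
}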