In a microbenchmark built around these code snippets that is run with only two threads, there
will be an enormous amount of contention on the shared resource. That isn't realistic either:
in a real application, it is quite unlikely that two threads will always be accessing the shared
resource simultaneously. Adding more threads simply adds more unrealistic contention to the
equation.
CONTENTION AND VOLATILE VARIABLES
Developers sometimes think of using volatile variables to reduce synchronization and hence reduce contention in their applications. It turns out that simultaneous writes to volatile variables are quite slow.
Earlier in this chapter, the example using the ForkJoinPool contained a loop designed to consume a lot of CPU cycles by writing nonsense values to a volatile variable:
for (int j = 0; j < d.length - i; j++) {
    for (int k = 0; k < 100; k++) {
        dummy = j * k + i; // dummy is volatile, so multiple writes occur
        d[i] = dummy;
    }
}
dummy is defined as an instance variable within the class defining this code, and although there are
four threads simultaneously executing in the example, they are operating on different instances of
the class. Hence, there is no contention around using the dummy variable, and the test in the example completed in 16 seconds.
Change the definition of dummy to be static, however, and things change. Now there are multiple threads accessing that volatile variable at the same time, and the same test requires 209 seconds.
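The excerpt does not show how dummy is declared, so here is a minimal sketch of the two variants being compared; the class name, array size, and method name are illustrative assumptions rather than the book's actual example code:

public class VolatileWriteExample {
    // One field per object: four threads operating on four separate instances
    // write to four different memory locations and do not contend.
    private volatile int dummy;

    // One field shared by every instance: four threads operating on four
    // separate instances all write to the same memory location and contend heavily.
    // private static volatile int dummy;

    private final double[] d = new double[10_000];

    // Each worker thread calls this on its own instance with a different i.
    public void spin(int i) {
        for (int j = 0; j < d.length - i; j++) {
            for (int k = 0; k < 100; k++) {
                dummy = j * k + i; // volatile write on every iteration
                d[i] = dummy;
            }
        }
    }
}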
As discussed in Chapter 2, microbenchmarks tend to greatly overstate the effect of synchronization bottlenecks on the test in question. This discussion hopefully elucidates that point. A much more realistic picture of the trade-off will be obtained if the code in this section is used in an actual application.
In the general case, the following guidelines apply to the performance of CAS-based utilities
compared to traditional synchronization:
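As background for that comparison, a CAS-based counter and a traditionally synchronized counter can be sketched as follows; the class and method names are illustrative, but AtomicLong and its incrementAndGet() method are standard parts of java.util.concurrent.atomic:

import java.util.concurrent.atomic.AtomicLong;

public class CounterComparison {
    private final AtomicLong casCounter = new AtomicLong();
    private long lockCounter;

    // CAS-based: incrementAndGet() retries an atomic compare-and-set
    // until the update succeeds, so no thread ever blocks.
    public long incrementWithCas() {
        return casCounter.incrementAndGet();
    }

    // Traditional synchronization: competing threads block on the
    // object's monitor until the lock is released.
    public synchronized long incrementWithLock() {
        return ++lockCounter;
    }
}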