Figure 11.3. Comparing Scalability of Map Implementations.
The numbers for the synchronized collections are not as encouraging. Performance for the
one-thread case is comparable to ConcurrentHashMap, but once the load transitions from
mostly uncontended to mostly contended (which happens here at two threads), the synchronized
collections suffer badly. This is common behavior for code whose scalability is limited by
lock contention. So long as contention is low, time per operation is dominated by the time to
actually do the work and throughput may improve as threads are added. Once contention becomes
significant, time per operation is dominated by context switch and scheduling delays, and
adding more threads has little effect on throughput.
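To make the comparison concrete, here is a rough throughput probe (a minimal sketch, not the
harness used to produce Figure 11.3; the class MapThroughput, its measure method, the workload
mix, and the one-second trial length are all illustrative choices). Each worker thread performs
a read-mostly mix of gets and puts against a shared Map for a fixed interval, and the combined
operation count approximates throughput.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;

public class MapThroughput {
    private static final long RUN_NANOS = 1_000_000_000L;   // one second per trial

    // Illustrative probe: nThreads workers hammer the given map for RUN_NANOS
    // and the combined number of completed operations is returned.
    static long measure(final Map<Integer, Integer> map, int nThreads)
            throws InterruptedException {
        final AtomicLong ops = new AtomicLong();
        final CountDownLatch startGate = new CountDownLatch(1);
        Thread[] workers = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            workers[i] = new Thread(() -> {
                try {
                    startGate.await();                       // release all workers together
                } catch (InterruptedException e) {
                    return;
                }
                ThreadLocalRandom rnd = ThreadLocalRandom.current();
                long deadline = System.nanoTime() + RUN_NANOS;
                long count = 0;
                while (System.nanoTime() < deadline) {
                    int key = rnd.nextInt(1024);
                    if (rnd.nextInt(10) == 0)
                        map.put(key, key);                   // occasional write
                    else
                        map.get(key);                        // read-mostly workload
                    count++;
                }
                ops.addAndGet(count);
            });
            workers[i].start();
        }
        startGate.countDown();
        for (Thread t : workers)
            t.join();
        return ops.get();
    }

    public static void main(String[] args) throws InterruptedException {
        for (int threads = 1; threads <= 8; threads *= 2) {
            long chm = measure(new ConcurrentHashMap<>(), threads);
            long sync = measure(
                    Collections.synchronizedMap(new HashMap<>()), threads);
            System.out.printf("%d thread(s): ConcurrentHashMap=%,d  synchronizedMap=%,d%n",
                    threads, chm, sync);
        }
    }
}

Run across 1, 2, 4, and 8 threads, a loop like this will generally show the ConcurrentHashMap
total continuing to grow with the thread count while the synchronizedMap total flattens out
once its single lock becomes contended, though any such micro-benchmark is sensitive to
warm-up, garbage collection, and the hardware it runs on.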
11.6. Reducing Context Switch Overhead
Many tasks involve operations that may block; transitioning between the running and blocked
states entails a context switch. One source of blocking in server applications is generating
log messages in the course of processing requests; to illustrate how throughput can be
improved by reducing context switches, we'll analyze the scheduling behavior of two logging
approaches.
Most logging frameworks are thin wrappers around println; when you have something to log,
just write it out right then and there. Another approach was shown in LogWriter on page 152:
the logging is performed in a dedicated background thread instead of by the requesting
thread. From the developer's perspective, both approaches are roughly equivalent. But there
may be a difference in performance, depending on the volume of logging activity, how many
threads are doing the logging, and the cost of a context switch.
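A minimal sketch of the second approach may help make the difference concrete. It is written
in the spirit of LogWriter rather than reproducing that listing: the class name
BackgroundLogger and the queue capacity are illustrative, and the orderly shutdown handling
discussed alongside LogWriter is omitted. Request threads only enqueue the message; a single
dedicated thread drains the queue and performs the (possibly blocking) I/O.

import java.io.PrintWriter;
import java.io.Writer;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BackgroundLogger {
    // Bounded queue so runaway logging exerts back pressure on callers
    // instead of consuming unbounded memory; the capacity is arbitrary.
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>(1000);
    private final Thread loggerThread;

    public BackgroundLogger(Writer sink) {
        final PrintWriter out = new PrintWriter(sink, true);
        loggerThread = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted())
                    out.println(queue.take());   // blocking I/O happens only here
            } catch (InterruptedException exitRequested) {
                // fall through and close the writer
            } finally {
                out.close();
            }
        });
        loggerThread.setDaemon(true);
        loggerThread.start();
    }

    // Called from request threads: hand off the message and return without
    // doing any I/O (may block briefly if the queue is full).
    public void log(String msg) throws InterruptedException {
        queue.put(msg);
    }
}

With this design, the cost of logging on a request thread is reduced to a queue insertion,
and the blocking associated with the actual write is confined to the single logger thread.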