how many threads are doing logging, and other factors such as the cost of context switch-
ing. [16]
The service time for a logging operation includes whatever computation is associated with
the I/O stream classes; if the I/O operation blocks, it also includes the duration for which
the thread is blocked. The operating system will deschedule the blocked thread until the I/O
completes, and probably a little longer. When the I/O completes, other threads are probably
active and will be allowed to finish out their scheduling quanta, and threads may already be
waiting ahead of us on the scheduling queue—further adding to service time. Alternatively,
if multiple threads are logging simultaneously, there may be contention for the output stream
lock, in which case the result is the same as with blocking I/O—the thread blocks waiting for
the lock and gets switched out. Inline logging involves I/O and locking, which can lead to
increased context switching and therefore increased service times.
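The inline pattern described above can be sketched as follows. This is an illustrative example, not code from the text; the class name `InlineLogger` is hypothetical. The key point is that the caller holds the stream lock for the full duration of the (potentially blocking) write:

```java
import java.io.PrintWriter;

// Hypothetical inline logger: every request thread performs the I/O
// itself, holding the writer's lock while the write runs.
public class InlineLogger {
    private final PrintWriter writer;

    public InlineLogger(PrintWriter writer) {
        this.writer = writer;
    }

    public void log(String msg) {
        // The lock is held across the I/O call; if the write blocks,
        // other logging threads contend for the lock and may be
        // descheduled, lengthening their service times too.
        synchronized (writer) {
            writer.println(msg);
        }
    }
}
```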
Increasing request service time is undesirable for several reasons. First, service time affects
quality of service: longer service times mean someone is waiting longer for a result. But more
significantly, longer service times in this case mean more lock contention. The “get in, get
out” principle of Section 11.4.1 tells us that we should hold locks as briefly as possible, be-
cause the longer a lock is held, the more likely that lock will be contended. If a thread blocks
waiting for I/O while holding a lock, another thread is more likely to want the lock while the
first thread is holding it. Concurrent systems perform much better when most lock acquisi-
tions are uncontended, because contended lock acquisition means more context switches. A
coding style that encourages more context switches thus yields lower overall throughput.
Moving the I/O out of the request-processing thread is likely to shorten the mean service time
for request processing. Threads calling log no longer block waiting for the output stream
lock or for I/O to complete; they need only queue the message and can then return to their
task. On the other hand, we've introduced the possibility of contention for the message queue,
but the put operation is lighter-weight than the logging I/O (which might require system
calls) and so is less likely to block in actual use (as long as the queue is not full). Because
the request thread is now less likely to block, it is less likely to be context-switched out in
the middle of a request. What we've done is turned a complicated and uncertain code path
involving I/O and possible lock contention into a straight-line code path.
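A minimal sketch of this queue-based design, assuming a `BlockingQueue` as the hand-off (the class name `QueuedLogger` and the capacity are illustrative, not from the text): callers merely enqueue the message, and a single background thread performs all the stream I/O.

```java
import java.io.PrintWriter;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: logging I/O moved to a dedicated thread. Request threads
// call log(), which only enqueues; the logger thread is the sole
// user of the output stream, so there is no stream-lock contention.
public class QueuedLogger {
    private static final int CAPACITY = 1000; // illustrative bound
    private final BlockingQueue<String> queue =
            new LinkedBlockingQueue<>(CAPACITY);
    private final Thread loggerThread;

    public QueuedLogger(PrintWriter writer) {
        loggerThread = new Thread(() -> {
            try {
                while (true) {
                    // Only this thread touches the stream.
                    writer.println(queue.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // exit on interruption
            }
        });
    }

    public void start() {
        loggerThread.start();
    }

    public void log(String msg) throws InterruptedException {
        // Cheap straight-line path: blocks only if the queue is full.
        queue.put(msg);
    }
}
```

A bounded queue keeps the hand-off lightweight while still applying back-pressure: if producers outrun the I/O, `put` blocks rather than letting messages pile up without limit.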
To some extent, we are just moving the work around, moving the I/O to a thread where its
cost isn't perceived by the user (which may in itself be a win). But by moving all the log-
ging I/O to a single thread, we also eliminate the chance of contention for the output stream
and thus eliminate a source of blocking. This improves overall throughput because fewer re-
sources are consumed in scheduling, context switching, and lock management.