11.3. Costs Introduced by Threads
Single-threaded programs incur neither scheduling nor synchronization overhead, and need
not use locks to preserve the consistency of data structures. Scheduling and interthread co-
ordination have performance costs; for threads to offer a performance improvement, the per-
formance benefits of parallelization must outweigh the costs introduced by concurrency.
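To make the trade-off concrete, here is a minimal sketch (class and method names are illustrative, not from the text) that computes the same sum sequentially and with a fixed thread pool. Whether the parallel version wins depends on the problem size and the coordination overhead the rest of this section describes; for small inputs the sequential version is often faster.

```java
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelSumDemo {
    // Sum the integers in [0, n) in the calling thread.
    static long sequentialSum(long n) {
        long sum = 0;
        for (long i = 0; i < n; i++) sum += i;
        return sum;
    }

    // Split [0, n) across nThreads workers and combine the partial sums.
    static long parallelSum(long n, int nThreads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        try {
            ExecutorCompletionService<Long> ecs =
                    new ExecutorCompletionService<>(pool);
            long chunk = n / nThreads;
            for (int t = 0; t < nThreads; t++) {
                long lo = t * chunk;
                long hi = (t == nThreads - 1) ? n : lo + chunk;
                ecs.submit(() -> {
                    long s = 0;
                    for (long i = lo; i < hi; i++) s += i;
                    return s;
                });
            }
            long total = 0;
            for (int t = 0; t < nThreads; t++) total += ecs.take().get();
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        long n = 10_000_000L;
        long t0 = System.nanoTime();
        long seq = sequentialSum(n);
        long t1 = System.nanoTime();
        long par = parallelSum(n, Runtime.getRuntime().availableProcessors());
        long t2 = System.nanoTime();
        System.out.println("sequential=" + seq
                + " (" + (t1 - t0) / 1_000_000 + " ms)");
        System.out.println("parallel=" + par
                + " (" + (t2 - t1) / 1_000_000 + " ms)");
        System.out.println("equal=" + (seq == par));
    }
}
```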
11.3.1. Context Switching
If the main thread is the only schedulable thread, it will almost never be scheduled out. On
the other hand, if there are more runnable threads than CPUs, eventually the OS will preempt
one thread so that another can use the CPU. This causes a context switch, which requires saving
the execution context of the currently running thread and restoring the execution context
of the newly scheduled thread.
Context switches are not free; thread scheduling requires manipulating shared data structures
in the OS and JVM. The OS and JVM use the same CPUs your program does; more CPU
time spent in JVM and OS code means less is available for your program. But OS and JVM
activity is not the only cost of context switches. When a new thread is switched in, the data
it needs is unlikely to be in the local processor cache, so a context switch causes a flurry of
cache misses, and thus threads run a little more slowly when they are first scheduled. This is
one of the reasons that schedulers give each runnable thread a certain minimum time quantum
even when many other threads are waiting: it amortizes the cost of the context switch and
its consequences over more uninterrupted execution time, improving overall throughput (at
some cost to responsiveness).
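A rough way to observe this cost is to force a handoff between two threads on every operation. The sketch below (not from the text; the names are illustrative) bounces a token between two threads through a pair of `SynchronousQueue`s, so each round trip requires the scheduler to switch threads at least twice. The reported time bundles queue overhead with the switches themselves and varies widely by OS and JVM, so treat it as an order-of-magnitude probe, not a measurement.

```java
import java.util.concurrent.SynchronousQueue;

public class HandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        final int HANDOFFS = 100_000;
        SynchronousQueue<Integer> ping = new SynchronousQueue<>();
        SynchronousQueue<Integer> pong = new SynchronousQueue<>();

        // Echo thread: take from ping, reply on pong. Each take/put
        // blocks until the partner arrives, forcing a thread switch.
        Thread echo = new Thread(() -> {
            try {
                for (int i = 0; i < HANDOFFS; i++) pong.put(ping.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        echo.start();

        long t0 = System.nanoTime();
        for (int i = 0; i < HANDOFFS; i++) {
            ping.put(i);   // hand the token to the echo thread
            pong.take();   // block until it comes back
        }
        long elapsed = System.nanoTime() - t0;
        echo.join();
        // Each round trip includes at least two thread handoffs.
        System.out.printf("~%d ns per round trip%n", elapsed / HANDOFFS);
    }
}
```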
When a thread blocks because it is waiting for a contended lock, the JVM usually suspends
the thread and allows it to be switched out. If threads block frequently, they will be unable to
use their full scheduling quantum. A program that does more blocking (blocking I/O, waiting
for contended locks, or waiting on condition variables) incurs more context switches than one
that is CPU-bound, increasing scheduling overhead and reducing throughput. (Nonblocking
algorithms can also help reduce context switches; see Chapter 15.)
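The contrast can be sketched with two shared counters (a hypothetical example, not from the text): one guarded by an intrinsic lock, where a thread that loses the race may be suspended and switched out, and one based on `AtomicLong`, where a losing compare-and-swap simply retries while the thread stays runnable.

```java
import java.util.concurrent.atomic.AtomicLong;

public class CounterDemo {
    static final Object lock = new Object();
    static long lockedCount = 0;                      // guarded by lock
    static final AtomicLong casCount = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        final int THREADS = 4, INCREMENTS = 250_000;
        Thread[] workers = new Thread[THREADS];
        for (int t = 0; t < THREADS; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < INCREMENTS; i++) {
                    // Blocking: a thread contending for the lock may be
                    // suspended, incurring a context switch.
                    synchronized (lock) { lockedCount++; }
                    // Nonblocking: a failed CAS retries in a loop; the
                    // thread stays on the CPU for its quantum.
                    casCount.incrementAndGet();
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        System.out.println("locked=" + lockedCount
                + " cas=" + casCount.get());
        // Both totals are THREADS * INCREMENTS = 1,000,000.
    }
}
```

Both counters end up with the same total; the difference is that under heavy contention the lock-based version spends time parking and unparking threads, while the CAS-based version keeps contending threads runnable.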