1.3.2. Liveness Hazards
It is critically important to pay attention to thread safety issues when developing concurrent code: safety cannot be compromised. The importance of safety is not unique to multithreaded programs—single-threaded programs also must take care to preserve safety and correctness—but the use of threads introduces additional safety hazards not present in single-threaded programs. Similarly, the use of threads introduces additional forms of liveness failure that do not occur in single-threaded programs.
While safety means “nothing bad ever happens”, liveness concerns the complementary goal that “something good eventually happens”. A liveness failure occurs when an activity gets into a state such that it is permanently unable to make forward progress. One form of liveness failure that can occur in sequential programs is an inadvertent infinite loop, where the code that follows the loop never gets executed. The use of threads introduces additional liveness risks. For example, if thread A is waiting for a resource that thread B holds exclusively, and B never releases it, A will wait forever. Chapter 10 describes various forms of liveness failures and how to avoid them, including deadlock (Section 10.1), starvation (Section 10.3.1), and livelock (Section 10.3.3). Like most concurrency bugs, bugs that cause liveness failures can be elusive because they depend on the relative timing of events in different threads, and therefore do not always manifest themselves in development or testing.
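The “A waits on B, B waits on A” scenario above can be sketched in a few lines of Java. This is a hypothetical illustration (the class and lock names are not from the text): two threads each acquire one monitor and then block forever trying to acquire the other. A latch is used only to make the timing deterministic, since in real programs this failure often depends on scheduling and may not appear in testing.

```java
import java.util.concurrent.CountDownLatch;

public class DeadlockDemo {
    private static final Object left = new Object();
    private static final Object right = new Object();

    public static void main(String[] args) throws InterruptedException {
        // Latch forces both threads to hold their first lock before
        // attempting the second, making the deadlock deterministic.
        CountDownLatch bothLocked = new CountDownLatch(2);

        Thread a = new Thread(() -> {
            synchronized (left) {                       // A holds left...
                bothLocked.countDown();
                try { bothLocked.await(); } catch (InterruptedException e) { return; }
                synchronized (right) { }                // ...and waits forever for right
            }
        });
        Thread b = new Thread(() -> {
            synchronized (right) {                      // B holds right...
                bothLocked.countDown();
                try { bothLocked.await(); } catch (InterruptedException e) { return; }
                synchronized (left) { }                 // ...and waits forever for left
            }
        });
        a.start();
        b.start();

        // Give the threads time to reach the deadlock, then observe that
        // neither can make progress: a liveness failure.
        a.join(500);
        b.join(500);
        System.out.println(a.isAlive() && b.isAlive() ? "deadlocked" : "completed");

        // Threads blocked on a monitor cannot be interrupted out of it,
        // so the demo must terminate the JVM explicitly.
        System.exit(0);
    }
}
```

A consistent lock-acquisition order (e.g., always `left` before `right`) would prevent this particular failure; Chapter 10 discusses such techniques.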
1.3.3. Performance Hazards
Related to liveness is performance. While liveness means that something good eventually happens, eventually may not be good enough—we often want good things to happen quickly. Performance issues subsume a broad range of problems, including poor service time, responsiveness, throughput, resource consumption, or scalability. Just as with safety and liveness, multithreaded programs are subject to all the performance hazards of single-threaded programs, and to others as well that are introduced by the use of threads.
In well-designed concurrent applications the use of threads is a net performance gain, but threads nevertheless carry some degree of runtime overhead. Context switches—when the scheduler suspends the active thread temporarily so another thread can run—are more frequent in applications with many threads, and have significant costs: saving and restoring execution context, loss of locality, and CPU time spent scheduling threads instead of running them. When threads share data, they must use synchronization mechanisms that can inhibit compiler optimizations, flush or invalidate memory caches, and create synchronization traffic on the shared memory bus. All these factors introduce additional performance costs; Chapter 11 covers techniques for analyzing and reducing these costs.
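One way to observe the overhead described above is a rough micro-benchmark: the same number of increments performed once under a contended lock by several threads, and once by a single thread with no synchronization. This is only an informal sketch (class name, thread count, and iteration counts are hypothetical, and the absolute timings will vary widely by JVM and hardware); it illustrates that the synchronized version pays for lock traffic and context switches while the single-threaded version does not.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SyncOverheadSketch {
    static long counter = 0;
    static final Object lock = new Object();
    static final int THREADS = 4;
    static final int PER_THREAD = 250_000;

    public static void main(String[] args) throws Exception {
        // Contended: several threads increment one shared counter under a
        // single lock, generating synchronization traffic and context switches.
        long t0 = System.nanoTime();
        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        for (int i = 0; i < THREADS; i++) {
            pool.execute(() -> {
                for (int j = 0; j < PER_THREAD; j++) {
                    synchronized (lock) { counter++; }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        long contendedNanos = System.nanoTime() - t0;

        // Uncontended baseline: the same total work in one thread,
        // with no synchronization at all.
        long t1 = System.nanoTime();
        long local = 0;
        for (long j = 0; j < (long) THREADS * PER_THREAD; j++) local++;
        long singleNanos = System.nanoTime() - t1;

        System.out.println("counter=" + counter + " local=" + local);
        System.out.println("contended ms=" + contendedNanos / 1_000_000
                + " single-threaded ms=" + singleNanos / 1_000_000);
    }
}
```

The point is not the exact numbers but the shape of the result: the lock keeps the shared counter correct (both totals equal 1,000,000), at a measurable cost the sequential loop never pays.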