from all stages. On the other hand, for some workloads, passing a request from
stage to stage will hurt cache hit rates compared to doing all of the processing
for a request on one processor.
Also note that for good performance, the processing in each stage must be
large enough to amortize the cost of sending and receiving messages.
Definition: event processing
The special case when there is exactly one thread per stage is called event processing. A special property of event processing architectures is that there is no concurrency within a stage, so no locking is required, and each message is processed atomically with respect to that stage's state.
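A single-threaded stage can be sketched in Python as follows; the message format and the per-stage state (a counter table) are illustrative assumptions, not from the text:

```python
import queue
import threading

class CounterStage:
    """One thread per stage: no locks protect the stage's state,
    because only the stage's own thread ever touches it."""

    def __init__(self):
        self.inbox = queue.Queue()
        self.counts = {}          # state private to the stage thread
        self.thread = threading.Thread(target=self._run)
        self.thread.start()

    def _run(self):
        while True:
            msg = self.inbox.get()        # block until a message arrives
            if msg is None:               # sentinel: shut the stage down
                return
            # Each message is processed atomically with respect to the
            # stage's state -- there is no other thread to race with.
            self.counts[msg] = self.counts.get(msg, 0) + 1

    def stop(self):
        self.inbox.put(None)
        self.thread.join()

stage = CounterStage()
for msg in ["a", "b", "a"]:
    stage.inbox.put(msg)
stage.stop()
print(stage.counts)   # {'a': 2, 'b': 1}
```

Earlier stages communicate with this one only by putting messages on `inbox`, which is the thread-safe boundary between stages.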
Overload. One challenge with staged architectures is dealing with overload.
The throughput of the system will be limited by that of the slowest stage.
If the system is overloaded, the slowest stage will fall behind and the queue
before it will grow. Depending on the system's implementation, two bad things
can happen. First, the queue can grow indefinitely, consuming more and more
memory until the system runs out of memory. Second, if the queue is limited to
a finite size, once that size is reached, earlier stages must either discard messages
they want to send to the overloaded stage or they must block until the queue
has room. Notice that if they block, then the backpressure will limit earlier
stages' throughput to that of the bottleneck stage, and their queues may begin
to grow.
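The two choices for a sender facing a full bounded queue can be sketched as follows; here the consumer stage is stalled, and the discard policy is shown (queue size and message values are illustrative assumptions):

```python
import queue

inbox = queue.Queue(maxsize=2)   # bounded queue in front of a slow stage
dropped = 0

# The consumer stage is stalled, so the queue fills and stays full.
for msg in range(5):
    try:
        inbox.put_nowait(msg)    # discard policy: never block the sender
    except queue.Full:
        dropped += 1             # blocking here instead (inbox.put(msg))
                                 # would apply backpressure to this stage

print(inbox.qsize(), dropped)    # 2 3
```

Replacing `put_nowait` with a blocking `put` is the backpressure alternative: the sender's throughput then drops to that of the bottleneck stage.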
One solution is to dynamically vary the number of threads per stage. If a stage's incoming queue is growing, shift processing resources to it by stopping one of the threads for a stage with a short queue and starting a new thread for the stage that is falling behind.
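The rebalancing decision might be sketched as below; the stage names, queue lengths, and the one-thread-at-a-time policy are illustrative assumptions:

```python
def rebalance(queue_lengths, thread_counts):
    """Shift one thread from the stage with the shortest queue to the
    stage with the longest one, if the donor has a thread to spare."""
    busiest = max(queue_lengths, key=queue_lengths.get)
    idlest = min(queue_lengths, key=queue_lengths.get)
    if busiest != idlest and thread_counts[idlest] > 1:
        thread_counts[idlest] -= 1    # stop one thread at the idle stage...
        thread_counts[busiest] += 1   # ...and start one at the bottleneck
    return thread_counts

threads = {"parse": 2, "render": 2, "reply": 2}
queues = {"parse": 1, "render": 40, "reply": 0}
print(rebalance(queues, threads))   # {'parse': 2, 'render': 3, 'reply': 1}
```

A real system would run this periodically against measured queue lengths rather than once against a snapshot.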
6.2 Deadlock
A challenge to constructing programs that include multiple shared objects is
deadlock.
Definition: Deadlock
Deadlock is a cycle of waiting among a set of threads, where each thread is waiting for some other thread in the cycle to take some action.
Figure 6.4 shows two examples of deadlock.
Definition: mutually recursive locking
In mutually recursive locking, code in each of two shared objects s1 and s2 holds a lock while calling into a method in the other shared object that uses that object's lock. Threads 1 and 2 can then deadlock if thread 1 calls a method in s1 that holds s1's lock and tries to call a method in s2 that needs s2's lock, while thread 2 calls a method in s2 that holds s2's lock and tries to call a method in s1 that needs s1's lock.
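The cycle can be sketched with two plain locks standing in for the objects' monitor locks (lock and thread names are illustrative); the timeouts replace waits that a real deadlock would make infinite:

```python
import threading

s1_lock, s2_lock = threading.Lock(), threading.Lock()
both_held = threading.Barrier(2)      # both threads hold their first lock
attempts_done = threading.Barrier(2)  # both acquisition attempts finished
results = {}

def worker(name, first, second):
    with first:
        both_held.wait()              # each thread now holds its own lock...
        # ...and tries to take the other's: a cycle of waiting. A real
        # deadlock blocks forever; the timeout lets the demo terminate.
        got = second.acquire(timeout=0.2)
        results[name] = got
        if got:
            second.release()
        attempts_done.wait()          # keep holding `first` until both
                                      # attempts have completed

t1 = threading.Thread(target=worker, args=("thread1", s1_lock, s2_lock))
t2 = threading.Thread(target=worker, args=("thread2", s2_lock, s1_lock))
t1.start(); t2.start(); t1.join(); t2.join()
print(results["thread1"], results["thread2"])   # False False
```

Because each thread holds its own lock for the full duration of the other's acquisition attempt, neither attempt can succeed, which is exactly the cycle in the definition above.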
Definition: nested waiting
In nested waiting, code in one shared object s1 calls a method of another shared object s2, which waits on a condition variable. The condition variable's wait() method releases s2's lock but not s1's, so the thread that would have done a signal in s2 may get stuck waiting for s1's lock.
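The same trap can be sketched with a condition variable (the object and lock names are illustrative): wait() releases only s2's lock, so the would-be signaler times out trying to enter s1.

```python
import threading

s1_lock = threading.Lock()            # s1's monitor lock
s2_cond = threading.Condition()       # condition variable inside s2
holding_s1 = threading.Event()
signaler_stuck = None

def waiter():
    with s1_lock:                     # a method of s1 holds s1's lock...
        holding_s1.set()
        with s2_cond:                 # ...then calls into s2 and waits.
            # wait() releases s2's lock but NOT s1's.
            s2_cond.wait(timeout=0.5)    # timeout stands in for forever

def signaler():
    global signaler_stuck
    holding_s1.wait()
    # To reach the signal in s2, this thread must first get through a
    # method of s1 -- but the waiter still holds s1's lock.
    signaler_stuck = not s1_lock.acquire(timeout=0.2)
    if not signaler_stuck:
        s1_lock.release()

t1 = threading.Thread(target=waiter)
t2 = threading.Thread(target=signaler)
t1.start(); t2.start(); t1.join(); t2.join()
print(signaler_stuck)   # True
```

The waiter sleeps inside s2 holding s1's lock the whole time, so the thread carrying the signal can never get in to deliver it.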
 