Blocking queues also provide an offer method, which returns a failure status if the item cannot be enqueued. This enables you to create more flexible policies for dealing with overload, such as shedding load, serializing excess work items and writing them to disk, reducing the number of producer threads, or throttling producers in some other manner.
Bounded queues are a powerful resource management tool for building reliable applications: they make your program more robust to overload by throttling activities that threaten to produce more work than can be handled.
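A minimal sketch of such an overload policy, assuming a hypothetical shedLoad handler (not part of the library): a bounded ArrayBlockingQueue caps outstanding work, and offer lets the caller react explicitly when the queue is full rather than blocking.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class OverloadAwareProducer {
    // A bounded queue caps the amount of outstanding work.
    private final BlockingQueue<Runnable> workQueue =
            new ArrayBlockingQueue<Runnable>(1000);

    public void submit(Runnable task) {
        // offer returns false instead of blocking when the queue is full,
        // so the caller can apply an explicit overload policy.
        if (!workQueue.offer(task))
            shedLoad(task);
    }

    private void shedLoad(Runnable task) {
        // Placeholder policy: a real application might serialize the task
        // to disk, drop it, or signal back-pressure to its producers.
    }
}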
While the producer-consumer pattern enables producer and consumer code to be decoupled from each other, their behavior is still coupled indirectly through the shared work queue. It is tempting to assume that the consumers will always keep up, so that you need not place any bounds on the size of work queues, but this is a prescription for rearchitecting your system later. Build resource management into your design early using blocking queues; it is a lot easier to do this up front than to retrofit it later. Blocking queues make this easy for a number of situations, but if blocking queues don't fit easily into your design, you can create other blocking data structures using Semaphore (see Section 5.5.3).
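For example, a Semaphore can bound an ordinary Set so that add blocks when the set is full. This is only a sketch of the approach that section describes, not library code:

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.Semaphore;

public class BoundedHashSet<T> {
    private final Set<T> set;
    private final Semaphore sem;

    public BoundedHashSet(int bound) {
        this.set = Collections.synchronizedSet(new HashSet<T>());
        this.sem = new Semaphore(bound);  // permits = remaining capacity
    }

    public boolean add(T o) throws InterruptedException {
        sem.acquire();                    // blocks while the set is full
        boolean wasAdded = false;
        try {
            wasAdded = set.add(o);
            return wasAdded;
        } finally {
            if (!wasAdded)
                sem.release();            // duplicate: no capacity consumed
        }
    }

    public boolean remove(Object o) {
        boolean wasRemoved = set.remove(o);
        if (wasRemoved)
            sem.release();                // give back a permit
        return wasRemoved;
    }
}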
The class library contains several implementations of BlockingQueue. LinkedBlockingQueue and ArrayBlockingQueue are FIFO queues, analogous to LinkedList and ArrayList but with better concurrent performance than a synchronized List. PriorityBlockingQueue is a priority-ordered queue, which is useful when you want to process elements in an order other than FIFO. Just like other sorted collections, PriorityBlockingQueue can compare elements according to their natural order (if they implement Comparable) or using a Comparator.
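As a brief illustration, here is a PriorityBlockingQueue ordered by a Comparator over a hypothetical Task type (the Task class and its priority field are inventions for this example):

import java.util.Comparator;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.PriorityBlockingQueue;

public class PriorityOrderDemo {
    // Hypothetical work item carrying an explicit priority.
    static class Task {
        final String name;
        final int priority;
        Task(String name, int priority) { this.name = name; this.priority = priority; }
    }

    public static void main(String[] args) throws InterruptedException {
        // Highest priority first; 11 is just the initial capacity
        // (the queue itself is unbounded).
        BlockingQueue<Task> queue = new PriorityBlockingQueue<Task>(
                11, Comparator.comparingInt((Task t) -> t.priority).reversed());

        queue.put(new Task("low", 1));
        queue.put(new Task("high", 10));

        System.out.println(queue.take().name);  // prints "high"
    }
}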
The last BlockingQueue implementation, SynchronousQueue, is not really a queue at all, in that it maintains no storage space for queued elements. Instead, it maintains a list of queued threads waiting to enqueue or dequeue an element. In the dish-washing analogy, this would be like having no dish rack, but instead handing the washed dishes directly to the next available dryer. While this may seem a strange way to implement a queue, it reduces the latency associated with moving data from producer to consumer because the work can be handed off directly. (In a traditional queue, the enqueue and dequeue operations must complete sequentially before a unit of work can be handed off.) The direct handoff also feeds back more information about the state of the task to the producer; when the handoff is accepted, it knows a consumer has taken responsibility for it, rather than simply letting it sit on a queue somewhere, much like the difference between handing a document to a colleague and merely putting it in her mailbox and hoping she gets it soon.
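A minimal sketch of such a direct handoff: put blocks until another thread arrives at take, so its return tells the producer that the work has been accepted.

import java.util.concurrent.SynchronousQueue;

public class HandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        final SynchronousQueue<String> handoff = new SynchronousQueue<String>();

        Thread dryer = new Thread(new Runnable() {
            public void run() {
                try {
                    // take blocks until a producer arrives with an element.
                    System.out.println("received: " + handoff.take());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        dryer.start();

        // put blocks until the consumer accepts the element; when it
        // returns, the producer knows a consumer has taken responsibility.
        handoff.put("washed dish");
        dryer.join();
    }
}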