can help reduce memory usage, CPU usage, and context switching, at the cost of potentially
constraining throughput.
For very large or unbounded pools, you can also bypass queueing entirely and instead hand off tasks directly from producers to worker threads using a SynchronousQueue. A SynchronousQueue is not really a queue at all, but a mechanism for managing handoffs between threads. In order to put an element on a SynchronousQueue, another thread must already be waiting to accept the handoff. If no thread is waiting but the current pool size is less than the maximum, ThreadPoolExecutor creates a new thread; otherwise the task is rejected according to the saturation policy. Using a direct handoff is more efficient because the task can be handed right to the thread that will execute it, rather than first placing it on a queue and then having the worker thread fetch it from the queue. SynchronousQueue is a practical choice only if the pool is unbounded or if rejecting excess tasks is acceptable. The newCachedThreadPool factory uses a SynchronousQueue.
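As a minimal sketch, a pool built around direct handoff can be constructed explicitly, roughly mirroring the configuration used by newCachedThreadPool (zero core threads, an effectively unbounded maximum, a 60-second idle timeout, and a SynchronousQueue); the class name and task are illustrative:

    import java.util.concurrent.SynchronousQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class DirectHandoffPool {
        public static void main(String[] args) {
            // Zero core threads, an unbounded maximum, and a SynchronousQueue:
            // each submitted task is handed directly to a waiting worker, or a
            // new worker is created for it; nothing is ever held in a queue.
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    0, Integer.MAX_VALUE,
                    60L, TimeUnit.SECONDS,
                    new SynchronousQueue<Runnable>());

            pool.execute(() ->
                    System.out.println("ran on " + Thread.currentThread().getName()));

            pool.shutdown();
        }
    }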
Using a FIFO queue like LinkedBlockingQueue or ArrayBlockingQueue causes
tasks to be started in the order in which they arrived. For more control over task execution
order, you can use a PriorityBlockingQueue, which orders tasks according to priority. Priority can be defined by natural order (if tasks implement Comparable) or by a Comparator.
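A minimal sketch of priority-ordered execution, assuming a hypothetical PrioritizedTask wrapper and a single worker thread so the queue ordering is easy to observe. Tasks are submitted with execute rather than submit, since submit would wrap them in a FutureTask that the comparator could not inspect:

    import java.util.Comparator;
    import java.util.concurrent.PriorityBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class PriorityPoolSketch {
        // Hypothetical task type carrying an explicit priority.
        static class PrioritizedTask implements Runnable {
            final int priority;
            final Runnable body;
            PrioritizedTask(int priority, Runnable body) {
                this.priority = priority;
                this.body = body;
            }
            public void run() { body.run(); }
        }

        public static void main(String[] args) {
            // Order waiting tasks so that lower priority numbers run first.
            Comparator<Runnable> byPriority =
                    Comparator.comparingInt(r -> ((PrioritizedTask) r).priority);

            // One worker thread; everything else waits in the priority queue.
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    1, 1, 0L, TimeUnit.MILLISECONDS,
                    new PriorityBlockingQueue<>(11, byPriority));

            for (int i = 5; i >= 1; i--) {
                final int p = i;
                pool.execute(new PrioritizedTask(p,
                        () -> System.out.println("priority " + p)));
            }
            pool.shutdown();
        }
    }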
The newCachedThreadPool factory is a good default choice for an Executor, providing better queuing performance than a fixed thread pool.[5] A fixed-size thread pool is a good choice when you need to limit the number of concurrent tasks for resource-management purposes, as in a server application that accepts requests from network clients and would otherwise be vulnerable to overload.
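For instance (a sketch, with the pool size chosen arbitrarily), such a server might bound its request-handling concurrency with the newFixedThreadPool factory:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class BoundedRequestPool {
        // Arbitrary bound on concurrently executing request-handling tasks;
        // excess tasks wait in the pool's work queue instead of each getting
        // their own thread.
        private static final int N_THREADS = 100;

        private final ExecutorService exec = Executors.newFixedThreadPool(N_THREADS);

        public void handle(Runnable request) {
            exec.execute(request);
        }
    }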
Bounding either the thread pool or the work queue is suitable only when tasks are independent. With tasks that depend on other tasks, bounded thread pools or queues can cause thread-starvation deadlock; instead, use an unbounded pool configuration like newCachedThreadPool.[6]
8.3.3. Saturation Policies
When a bounded work queue fills up, the saturation policy comes into play. The saturation
policy for a ThreadPoolExecutor can be modified by calling setRejectedExecutionHandler. (The saturation policy is also used when a task is submitted to an Executor that has been shut down.)
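A sketch of changing the saturation policy, assuming a bounded ArrayBlockingQueue and the built-in CallerRunsPolicy handler (the pool and queue sizes here are arbitrary):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class SaturationPolicyExample {
        private static final int N_THREADS = 4;    // assumed sizing
        private static final int CAPACITY = 10;    // assumed queue bound

        public static void main(String[] args) {
            ThreadPoolExecutor exec = new ThreadPoolExecutor(
                    N_THREADS, N_THREADS,
                    0L, TimeUnit.MILLISECONDS,
                    new ArrayBlockingQueue<Runnable>(CAPACITY));

            // When the bounded queue fills, run the task in the submitting
            // thread instead of rejecting it (one of the built-in policies).
            exec.setRejectedExecutionHandler(
                    new ThreadPoolExecutor.CallerRunsPolicy());
        }
    }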