in other situations. On the other hand, if you need a thread pool where the number of
threads is easy to tune, this is the better choice.
When the executor uses an unbounded queue (such as a LinkedBlockingQueue ), no
task will ever be rejected (since the queue size is unlimited). In this case, the executor
will use at most the number of threads specified by the core (i.e., minimum) thread
pool size: the maximum pool size is ignored. If the core and maximum pool sizes are the
same value, this choice comes closest to the operation of a traditional thread pool
configured with a fixed number of threads.
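A short sketch of that behavior (the pool sizes here are illustrative, not prescribed by the text): with a LinkedBlockingQueue, submitting far more blocking tasks than the core size never grows the pool past the core count.

```java
import java.util.concurrent.*;

public class UnboundedQueueDemo {
    public static void main(String[] args) throws Exception {
        // Core size 4, maximum size 8 (illustrative values); because the
        // LinkedBlockingQueue is unbounded, the maximum size is ignored.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                4, 8, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());
        CountDownLatch release = new CountDownLatch(1);
        // Submit 20 tasks that all block until released, so no thread frees up.
        for (int i = 0; i < 20; i++) {
            executor.execute(() -> {
                try { release.await(); } catch (InterruptedException e) { }
            });
        }
        int poolSize = executor.getPoolSize();   // 4: only the core threads run
        int queued = executor.getQueue().size(); // 16: everything else waits
        System.out.println("pool size = " + poolSize + ", queued = " + queued);
        release.countDown();
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

Because the queue always accepts another task, the branch of the executor that would start a thread beyond the core size is never reached.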
Executors that use a bounded queue (e.g., an ArrayBlockingQueue ) employ a quite
complicated algorithm to determine when to start a new thread. For example, say that the
pool's core size is 4, its maximum size is 8, and the maximum size of the
ArrayBlockingQueue is 10. As tasks arrive and are placed in the queue, the pool will
run a maximum of 4 threads (the core pool size). Even if the queue completely fills up,
so that it is holding 10 pending tasks, the executor will only utilize 4 threads.
An additional thread is started only when the queue is full and a new task is submitted.
Instead of rejecting that task (since the queue is full), the executor starts a new
thread, which runs the newly submitted task; the tasks already in the queue remain
pending until a thread becomes free to take them.
In this example, the only way the pool will end up with 8 threads (its specified
maximum) is if 7 threads are busy with tasks in progress, 10 tasks are in the queue,
and a new task is added to the queue.
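The walkthrough can be reproduced directly. The sizes 4/8/10 come from the example above; the latch-blocked tasks are simply a way to keep every thread busy so the pool sizes are predictable.

```java
import java.util.concurrent.*;

public class BoundedQueueDemo {
    public static void main(String[] args) throws Exception {
        // Core 4, maximum 8, bounded queue of 10: the sizes from the example.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                4, 8, 60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(10));
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException e) { }
        };

        // 14 tasks: 4 occupy the core threads, 10 fill the queue.
        for (int i = 0; i < 14; i++) executor.execute(blocker);
        int sizeWithFullQueue = executor.getPoolSize();   // still 4

        // The queue is full, so each further submission starts a new
        // thread, up to the maximum pool size of 8.
        for (int i = 0; i < 4; i++) executor.execute(blocker);
        int sizeAtMaximum = executor.getPoolSize();       // 8

        // With 8 busy threads and 10 queued tasks, the next task is rejected.
        boolean rejected = false;
        try {
            executor.execute(blocker);
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        System.out.println(sizeWithFullQueue + " -> " + sizeAtMaximum
                + ", rejected = " + rejected);
        release.countDown();
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

Note the rejection at the end: once the pool is at its maximum and the queue is full, the default policy throws RejectedExecutionException, which is the second throttle discussed next.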
The idea behind this algorithm is that the pool will operate with only the core threads
(four) most of the time, even if a moderate number of tasks are in the queue waiting to be
run. That allows the pool to act as a throttle (which is advantageous). If the backlog of re-
quests becomes too great, the pool then attempts to run more threads to clear out the
backlog (subject to a second throttle, the maximum number of threads).
If there are no external bottlenecks in the system and there are available CPU cycles, then
everything here works out: adding the new threads will process the queue faster and
likely bring it back to its desired size. So cases where this algorithm is appropriate can
certainly be constructed.