figured with (more on that in the next section); application servers usually have some tuning
parameter to adjust this value.
Like the maximum size of the thread pool, there is no universal rule regarding how this value
should be tuned. An application server with 30,000 items in its queue and four available
CPUs can clear the queue in 6 minutes if it takes only 50 ms to execute a task (assuming no
new tasks arrive during that time). That might be acceptable, but if each task requires 1
second to execute, it will take about 2 hours to clear the queue. Once again, measuring your
actual application is the only way to be sure of what value will give you the performance you require.
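The arithmetic here is simply queue size times task time divided by the number of CPUs. A quick sketch (the `drainMinutes` helper is illustrative, not part of any library):

```java
// Back-of-the-envelope estimate of how long a saturated pool takes to
// drain its queue, assuming no new tasks arrive and perfect CPU scaling.
public class DrainTime {
    static double drainMinutes(int queuedTasks, double taskMillis, int cpus) {
        return queuedTasks * taskMillis / cpus / 1000 / 60;
    }

    public static void main(String[] args) {
        // 30,000 tasks at 50 ms each on 4 CPUs: 6.25 minutes
        System.out.println(drainMinutes(30_000, 50, 4) + " minutes");
        // 30,000 tasks at 1 second each on 4 CPUs: 125 minutes (~2 hours)
        System.out.println(drainMinutes(30_000, 1000, 4) + " minutes");
    }
}
```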
In any case, when the queue limit is reached, attempts to add a task to the queue will fail. A
ThreadPoolExecutor has a RejectedExecutionHandler that handles that case (the default
handler throws a RejectedExecutionException ). Application servers will return some error to
the user: either an HTTP status code of 500 (for an internal error), or—in the best case—the
web application will catch the error and display a reasonable explanation to the user.
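The rejection path can be seen with a deliberately tiny pool. In this sketch, a single-thread pool with a one-slot bounded queue is saturated with tasks that block on a latch, so the third submission is guaranteed to hit the default handler (the `thirdTaskRejected` helper and latch setup are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    // Returns true if the third task is rejected by the saturated pool.
    static boolean thirdTaskRejected() {
        CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1));   // room for one pending task
        Runnable blocked = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };
        try {
            pool.execute(blocked);   // occupies the single thread
            pool.execute(blocked);   // fills the one-slot queue
            pool.execute(blocked);   // queue full: default handler throws
            return false;
        } catch (RejectedExecutionException e) {
            return true;             // a server maps this to an error response
        } finally {
            release.countDown();
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        System.out.println("third task rejected: " + thirdTaskRejected());
    }
}
```

A different policy can be installed by passing a RejectedExecutionHandler to the constructor, which is how a web application would substitute its own error page for the exception.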
Sizing a ThreadPoolExecutor
The general behavior for a thread pool is that it starts with a minimum number of threads,
and if a task arrives when all existing threads are busy, a new thread is started (up to the
maximum number of threads) and the task is executed immediately. Otherwise, the task is
queued, unless there is some large number of pending tasks already, in which case the task is
rejected. While that is the canonical behavior of a thread pool, the ThreadPoolExecutor can
behave somewhat differently.
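One way that difference shows up: with an unbounded LinkedBlockingQueue, the pool never grows past its core size, no matter how many tasks are pending, because extra tasks are queued rather than triggering new threads. A minimal sketch:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueBehavior {
    // Submits 10 tasks to a pool with core size 1 and max size 4.
    // Because the unbounded queue never rejects an offer, no thread
    // beyond the single core thread is ever started.
    static int observedPoolSize() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 4, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>());   // unbounded queue
        for (int i = 0; i < 10; i++) {
            pool.execute(() -> {
                try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            });
        }
        int size = pool.getPoolSize();  // stays at 1, despite a max of 4
        pool.shutdownNow();
        return size;
    }

    public static void main(String[] args) {
        System.out.println("pool size: " + observedPoolSize());
    }
}
```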
The ThreadPoolExecutor decides when to start a new thread based on the type of queue
used to hold the tasks. There are three possibilities.
When the executor uses a SynchronousQueue , the thread pool behaves as expected with
respect to the number of threads: new tasks will start a new thread if all existing threads
are busy and the pool has fewer than the maximum number of threads. However, this
queue has no way to hold pending tasks: if a task arrives when the maximum number of
threads are already busy, it is always rejected. So this choice is good for managing a
small number of tasks, but otherwise may be unsuitable. The documentation for this class
suggests specifying a very large maximum number of threads, which may be
OK if the tasks are completely CPU-bound, but as we've seen may be counterproductive