On the other hand, this algorithm has no idea why the queue size has suddenly increased.
If it is due to an external backlog, then adding more threads is the wrong thing to do. If
the pool is running on a machine that is CPU-bound, adding more threads is the wrong
thing to do. Adding threads makes sense only if the backlog occurred because additional
load came into the system (e.g., more clients started making HTTP requests).
(Yet if that is the case, why wait to add threads until the queue size has reached some
bound? If the additional resources are available to utilize additional threads, then adding
them sooner will improve the overall performance of the system.)
There are many arguments for and against each of these choices, but when attempting to
maximize performance, this is a time to apply the KISS principle: keep it simple, stupid.
Specify that the ThreadPoolExecutor has the same number of core and maximum threads
and utilize a LinkedBlockingQueue to hold the pending tasks (if an unbounded task list is
appropriate), or an ArrayBlockingQueue (if a bounded task list is appropriate).
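This simple configuration can be sketched as follows; the queue bound of 1024 is an arbitrary illustrative value, and the class and method names are hypothetical:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SimplePools {
    // Fixed-size pool: same core and maximum size, backed by an
    // unbounded LinkedBlockingQueue of pending tasks.
    static ThreadPoolExecutor unboundedQueuePool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());
    }

    // Fixed-size pool backed by a bounded ArrayBlockingQueue; once the
    // queue fills, further submissions are rejected (the default policy).
    static ThreadPoolExecutor boundedQueuePool(int nThreads, int queueCapacity) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(queueCapacity));
    }

    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = unboundedQueuePool(
                Runtime.getRuntime().availableProcessors());
        Future<Integer> f = pool.submit(new Callable<Integer>() {
            public Integer call() { return 42; }
        });
        System.out.println(f.get()); // prints 42
        pool.shutdown();
    }
}
```

Because the core and maximum sizes are equal, the pool never has to decide when to add or retire threads, which removes the guesswork described above.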
1. Thread pools are one case where object pooling is a good thing: threads are expensive to initialize, and a thread pool allows the number of threads on a system to be easily throttled.
2. Thread pools must be carefully tuned. Blindly adding new threads into a pool can,
in some circumstances, have a detrimental effect on performance.
3. Using simpler options for a ThreadPoolExecutor will usually provide the best
(and most predictable) performance.
Java 7 introduces a new thread pool: the ForkJoinPool class. This class looks just like any other thread pool; like the ThreadPoolExecutor class, it implements the Executor and ExecutorService interfaces. When those interfaces are used, the ForkJoinPool uses an internal unbounded list of tasks that will be run by the number of threads specified in its constructor (or the number of CPUs on the machine if the no-args constructor is used).
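Used this way, a ForkJoinPool is a drop-in ExecutorService; a minimal sketch (the class and helper-method names here are hypothetical):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.Future;

public class ForkJoinAsExecutor {
    // Submit a Callable through the plain ExecutorService interface;
    // the ForkJoinPool queues it internally and runs it on one of its threads.
    static long sumTo(ExecutorService pool, final int n) throws Exception {
        Future<Long> f = pool.submit(new Callable<Long>() {
            public Long call() {
                long sum = 0;
                for (int i = 1; i <= n; i++) {
                    sum += i;
                }
                return sum;
            }
        });
        return f.get();
    }

    public static void main(String[] args) throws Exception {
        // The no-args constructor sizes the pool to the number of CPUs.
        ExecutorService pool = new ForkJoinPool();
        System.out.println(sumTo(pool, 100)); // prints 5050
        pool.shutdown();
    }
}
```

Nothing here exercises the fork/join machinery itself; that requires submitting ForkJoinTask instances, which later sections cover.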