suming 100% of available CPU, and although the machine may not be running any other
user-level applications, there are various system-level processes that will kick in and use
some CPU, preventing the JVM from utilizing all 100% of the cycles.
Still, this application is doing a good job of scaling, and even if the number of threads in the
pool is overestimated, there is only a slight penalty to pay.
In other circumstances, though, that penalty can be larger. In the servlet version of the stock
history calculator, having too many threads has a bigger effect, as shown in Table 9-2. Here,
the application server is configured with the given number of threads, and a load generator
sends 20 simultaneous requests to the server.
Table 9-2. Operations per second for mock stock prices through a servlet
Number of threads | Operations per second | Percent of baseline
Given that the application server has four available CPUs, maximum throughput is achieved
with that many threads in the pool.
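The servlet benchmark itself isn't reproduced here, but the one-thread-per-CPU guideline for CPU-bound work can be sketched with the standard `java.util.concurrent` classes. This is an illustrative sketch, not the book's code; the class name and the stand-in task are assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CpuBoundPool {
    // Run CPU-bound tasks on a fixed pool sized to the machine's CPU count,
    // which is where throughput peaked in Table 9-2, and return the results.
    static List<Long> runAll(List<Callable<Long>> tasks)
            throws InterruptedException, ExecutionException {
        int nCpus = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(nCpus);
        try {
            List<Long> results = new ArrayList<>();
            // invokeAll() blocks until every task has completed.
            for (Future<Long> f : pool.invokeAll(tasks)) {
                results.add(f.get());
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the mock stock-price calculation: pure CPU work, no I/O.
        List<Callable<Long>> tasks = new ArrayList<>();
        for (int i = 0; i < 20; i++) {
            tasks.add(() -> {
                long sum = 0;
                for (long j = 0; j < 1_000_000; j++) sum += j;
                return sum;
            });
        }
        System.out.println(runAll(tasks).size() + " tasks completed");
    }
}
```

Adding more threads to such a pool cannot create more CPU cycles, which is why oversizing it only adds scheduling overhead.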
Chapter 1 discussed the need to determine where the bottleneck is when investigating performance issues. In this example, the bottleneck is clearly the CPU: with four CPUs, the CPU is 100% utilized. Still, the penalty for adding more threads in this case is fairly small, at least until there are eight times too many threads.
But what if the bottleneck is elsewhere? This example is also somewhat unusual in that the tasks are completely CPU-bound: they do no I/O. More typically, the threads might be expected to make calls to a database, write their output somewhere, or rendezvous with some other resource. In that case, the CPU won't necessarily be the bottleneck; that external resource might be.
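When tasks spend most of their time waiting on an external resource, the pool can usefully be larger than the CPU count. One common sizing heuristic (a general rule of thumb, not taken from this text) scales the core count by the ratio of wait time to compute time; the sample timings below are illustrative assumptions:

```java
public class IoBoundPoolSize {
    // Heuristic: threads = cores * (1 + waitTime / computeTime).
    // For purely CPU-bound tasks (waitTime == 0) this degenerates to
    // one thread per core, matching the result in Table 9-2.
    static int poolSize(int cores, double waitMillis, double computeMillis) {
        return (int) (cores * (1 + waitMillis / computeMillis));
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        // Assumed figures: 50 ms blocked on a database per 5 ms of CPU work.
        System.out.println(poolSize(cores, 50, 5) + " threads suggested");
    }
}
```

Such a formula is only a starting point: if the external resource itself saturates (say, the database can serve only so many concurrent queries), adding threads past that point again buys nothing.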