A throughput measurement is based on the amount of work that can be accomplished in a certain period of time. Although the most common examples of throughput measurements involve a server processing data fed by a client, that is not strictly necessary: a single, standalone application can measure throughput just as easily as it measures elapsed time.
In a client-server test, a throughput measurement means that clients have no think time. If there is a single client, that client sends a request to the server. When it receives a response, it immediately sends a new request. That process continues; at the end of the test, the client reports the total number of operations it achieved. Typically, the client has multiple threads doing the same thing, and the throughput is the aggregate measure of the number of operations all clients achieved. Usually this number is reported as the number of operations per second, rather than the total number of operations over the measurement period. This measurement is frequently referred to as transactions per second (TPS), requests per second (RPS), or operations per second (OPS).
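The loop described above can be sketched as a minimal throughput harness. This is an illustration, not a real load-testing tool: the class and method names are my own, and `doRequest()` stands in for whatever request the client would actually send to the server.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.LongAdder;

public class ThroughputTest {
    // Stand-in for one client request; a real test would call the server here.
    static void doRequest() {
        Math.sqrt(ThreadLocalRandom.current().nextDouble());
    }

    // Run clientThreads zero-think-time loops for the given duration and
    // report the aggregate throughput in operations per second.
    static double measureOps(int clientThreads, long durationMillis)
            throws InterruptedException {
        LongAdder ops = new LongAdder();
        ExecutorService pool = Executors.newFixedThreadPool(clientThreads);
        long deadline = System.currentTimeMillis() + durationMillis;
        for (int i = 0; i < clientThreads; i++) {
            pool.execute(() -> {
                // No think time: as soon as one request completes, send another.
                while (System.currentTimeMillis() < deadline) {
                    doRequest();
                    ops.increment();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(durationMillis + 1000, TimeUnit.MILLISECONDS);
        // Report operations per second rather than the raw operation count.
        return ops.sum() / (durationMillis / 1000.0);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.printf("Aggregate throughput: %.0f OPS%n", measureOps(4, 2000));
    }
}
```

Note that the result is the aggregate over all client threads, matching how TPS/RPS/OPS numbers are usually reported.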
All client-server tests run the risk that the client cannot send data quickly enough to the server. This may occur because there aren't enough CPU cycles on the client machine to run the desired number of client threads, or because the client has to spend a lot of time processing the request before it can send a new request. In those cases, the test is effectively measuring the client performance rather than the server performance, which is usually not the goal.
This risk depends on the amount of work that each client thread performs (and the size and configuration of the client machine). A zero-think-time (throughput-oriented) test is more likely to encounter this situation, since each client thread is performing a lot of work. Hence, throughput tests are typically executed with fewer client threads (less load) than a corresponding test that measures response time.
It is common for tests that measure throughput also to report the average response time of their requests. That is an interesting piece of information, but changes in that number aren't indicative of a performance problem unless the reported throughput is the same. A server that can sustain 500 OPS with a 0.5-second response time is performing better than a server that reports a 0.3-second response time but only 400 OPS.
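That comparison rule can be made concrete: throughput is the primary metric, and response time breaks ties only when throughput is equal. The sketch below encodes that ordering; the record and method names are hypothetical, chosen just for this illustration.

```java
public class CompareServers {
    // Hypothetical holder for one server's reported results.
    record Result(String name, double opsPerSec, double avgResponseSec) {}

    // Prefer higher throughput; fall back to lower average response time
    // only when the two servers sustain the same throughput.
    static Result better(Result a, Result b) {
        if (a.opsPerSec() != b.opsPerSec())
            return a.opsPerSec() > b.opsPerSec() ? a : b;
        return a.avgResponseSec() <= b.avgResponseSec() ? a : b;
    }

    public static void main(String[] args) {
        Result x = new Result("server A", 500, 0.5);
        Result y = new Result("server B", 400, 0.3);
        System.out.println(better(x, y).name());  // prints "server A"
    }
}
```

With the numbers from the text, server A (500 OPS at 0.5 seconds) is preferred over server B (400 OPS at 0.3 seconds) despite its longer response time.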
Throughput measurements are almost always taken after a suitable warm-up period, particularly since what is being measured is not a fixed set of work.
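In practice, the warm-up is handled by running the same zero-think-time loop twice and discarding the first set of results. A minimal sketch, with names of my own choosing and trivial stand-in work per operation:

```java
public class WarmedUpMeasurement {
    // Run the workload loop for the given duration and count operations.
    static long runFor(long millis) {
        long end = System.currentTimeMillis() + millis;
        long ops = 0;
        while (System.currentTimeMillis() < end) {
            Math.sqrt(ops + 1);   // trivial stand-in for one operation
            ops++;
        }
        return ops;
    }

    public static void main(String[] args) {
        runFor(500);                   // warm-up period: results deliberately discarded
        long measured = runFor(1000);  // measurement starts only after warm-up
        System.out.println("Measured " + measured + " ops in 1 second");
    }
}
```

Only the second run's count is reported, so the numbers reflect steady-state behavior rather than start-up effects.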