However, if you run the same benchmark again for 100 seconds, the throughput result is a more useful and reliable metric (for example, 30,000 rows per second) because any blips caused by another service running on the machine are averaged out over the longer period.
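To make the idea concrete, the following minimal Java sketch measures throughput over a fixed duration; fetchNextRow() is a hypothetical stand-in for the real database fetch, and the 100-second duration simply matches the example above.

// Minimal sketch of a duration-based throughput benchmark.
public final class ThroughputBenchmark {

    static final long RUN_MILLIS = 100_000; // 100 seconds

    public static void main(String[] args) {
        long rows = 0;
        long start = System.currentTimeMillis();
        while (System.currentTimeMillis() - start < RUN_MILLIS) {
            fetchNextRow(); // the measured unit of work
            rows++;
        }
        long elapsed = System.currentTimeMillis() - start;
        // Averaging over the whole run means a short blip from
        // another service contributes little to the final number.
        System.out.printf("%d rows in %d ms = %.0f rows/sec%n",
                rows, elapsed, rows * 1000.0 / elapsed);
    }

    private static void fetchNextRow() {
        // Placeholder for the real database fetch.
    }
}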
Similarly, a system clock used to measure a benchmark can experience blips in its timekeeping that cause the clock to drift suddenly. For example, suppose that you run a benchmark over 5 seconds and a blip occurs, causing the system clock to drift by 500 ms. That is a 10% error in the measured duration, a significant difference that you may not even realize occurred. Running the benchmark for a sufficient duration (100 seconds, for example) ensures that any system clock blips are averaged over a longer period.
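One complementary safeguard, not something the example above prescribes, is to time the run with a monotonic clock. In Java, System.nanoTime() reads a time source that is unaffected by adjustments to the system (wall) clock, unlike System.currentTimeMillis(), so a sudden clock blip cannot skew the elapsed-time measurement. A minimal sketch:

import java.util.concurrent.TimeUnit;

public final class MonotonicTiming {
    public static void main(String[] args) throws InterruptedException {
        // System.nanoTime() is monotonic: a sudden adjustment to the
        // wall clock during the run does not affect the reading.
        long start = System.nanoTime();
        TimeUnit.SECONDS.sleep(5); // stands in for the benchmark body
        long elapsedNanos = System.nanoTime() - start;
        System.out.printf("elapsed: %d ms%n",
                TimeUnit.NANOSECONDS.toMillis(elapsedNanos));
    }
}

Even with a monotonic clock, the longer run remains worthwhile, because it also averages out load blips that a better clock cannot remove.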
Other factors, such as Java class loaders and the .NET Just-in-Time (JIT) compiler, can skew results on short-running benchmarks. In Java, classes are loaded into the Java environment by class loaders when they are first referenced by name, often at the start of an application. Similarly, in ADO.NET environments, the JIT compiler is invoked the first time a method is called during an application's execution. These factors front-load some performance costs. For example, suppose we run a benchmark for only 10 seconds, as shown in Figure 9-1.
[Chart: throughput plotted over elapsed time; y-axis 0 to 70,000, x-axis 1 to 10 seconds]
Figure 9-1 Benchmark run for 10 seconds
Now let's look at the results of the same benchmark run over a longer duration, 100 seconds, as shown in Figure 9-2. Notice how the performance impact of these front-loaded costs is much less significant over time.
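A common mitigation for these front-loaded costs, sketched below as one possible approach rather than a prescribed harness, is to run an untimed warm-up phase so that class loading and JIT compilation happen before the timed run begins; unitOfWork() and the iteration counts are hypothetical placeholders.

public final class WarmupThenMeasure {

    public static void main(String[] args) {
        // Untimed warm-up: forces class loading and gives the JIT
        // compiler a chance to compile the hot path before we measure.
        for (int i = 0; i < 10_000; i++) {
            unitOfWork();
        }
        // The timed run starts only after the front-loaded costs are paid.
        long start = System.nanoTime();
        long rows = 0;
        while (System.nanoTime() - start < 100L * 1_000_000_000L) { // 100 s
            unitOfWork();
            rows++;
        }
        System.out.printf("%.0f rows/sec%n", rows / 100.0);
    }

    private static void unitOfWork() {
        // Placeholder for the real database operation being measured.
    }
}

How long the warm-up needs to be depends on the workload; the point is only that the costs visible at the left edge of Figure 9-1 are paid before measurement starts.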