You can see that the results are all over the map. It's difficult to say how to synthesize these
results without losing some information that might be important.
For comparison, here are the results with Criterium:
user=> (bench (mc-pi 1000000))
WARNING: Final GC required 1.6577782102371632 % of runtime
Evaluation count : 120 in 60 samples of 2 calls.
Execution time mean : 1.059337 sec
Execution time std-deviation : 61.159841 ms
Execution time lower quantile : 963.110499 ms ( 2.5%)
Execution time upper quantile : 1.132513 sec (97.5%)
Overhead used : 1.788607 ns
Found 1 outliers in 60 samples (1.6667 %)
low-severe 1 (1.6667 %)
Variance from outliers : 43.4179 % Variance is moderately inflated by outliers
The results are immediately clear (without having to type them into a spreadsheet, which I did
to create the chart), and there's a lot more information given.
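If you want to reproduce this run, a minimal setup might look like the following sketch. The
mc-pi function here is a stand-in Monte Carlo estimator of pi; the recipe's actual
implementation may differ.
(require '[criterium.core :refer [bench quick-bench]])

;; Stand-in Monte Carlo estimator of pi: sample n random points in the
;; unit square and count how many land inside the unit circle.
(defn mc-pi [n]
  (let [in-circle (count
                    (filter (fn [_]
                              (let [x (rand), y (rand)]
                                (<= (+ (* x x) (* y y)) 1.0)))
                            (range n)))]
    (* 4.0 (/ in-circle n))))

;; Full benchmark (about 60 samples); prints a report like the one above.
(bench (mc-pi 1000000))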
How it works…
So, how does Criterium help us? First, it runs the code several times, and just throws away
the results. This means that we don't have to worry about initial inconsistencies while the
JVM, memory cache, and disk cache get settled.
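As a rough sketch of what this warm-up amounts to (Criterium does it for us automatically;
the iteration count here is arbitrary):
;; Run the expression a few times and throw the results away so the JIT
;; compiler and the caches settle before any timing starts.
(dotimes [_ 10]
  (mc-pi 1000000))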
Second, it runs the code a lot more than five times. Quick benchmarking runs it six times.
Standard benchmarking runs it sixty times. This gives us a lot more data and a lot more
confidence in our interpretation.
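In code, the two modes are just two different macros from criterium.core; which one you reach
for depends on whether you want quick feedback or more confidence:
(require '[criterium.core :refer [bench quick-bench]])

;; Quick run: a handful of samples, useful while iterating on the code.
(quick-bench (mc-pi 1000000))

;; Standard run: the sixty-sample benchmark that produced the report above.
(bench (mc-pi 1000000))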
Third, it provides us with a lot more information about the runs. With the time macro, we have
to eyeball the results and go with our gut instinct for what all those numbers mean. If we want
to be more precise, we can retype all of the numbers into a spreadsheet and generate some
statistics. Criterium does that for us. It also analyzes the results to tell us whether some
outliers are throwing off the statistics. For instance, in the results mentioned previously,
we can see that there was one low outlier.
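If you would rather work with those statistics as data instead of printed output, Criterium
also exposes them programmatically. The sketch below assumes the criterium.core/benchmark
macro and report-result function; the exact option and result-map key names may vary between
Criterium versions, so check the docs for the version you are using.
(require '[criterium.core :as crit])

;; Run the benchmark and keep the raw result map instead of printing it.
(def results (crit/benchmark (mc-pi 1000000) {}))

;; Inspect some of the computed statistics (key names assumed from
;; Criterium 0.4.x).
(:mean results)
(:outliers results)

;; Or print the same kind of report that bench produces.
(crit/report-result results)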
Criterium gives us a much better basis on which to make decisions about how best to
optimize our code and improve its performance.
 