The summary file is the first one to look at. It shows the command-line parameters, the I/O sizes, and the I/O type, among other information. From a storage-backend perspective, the matrix data points are the most interesting:
Duration for each data point: 60 seconds
Small Columns:, 1, 2, ... 20
Large Columns:, 0
Total Data Points: 21
Here you see that the OLTP test does not use large I/Os at all. The names and sizes of the LUNs used are also recorded just before the performance figures:
Maximum Small IOPS=9899 @ Small=17 and Large=0
Small Read Latency: avg=1711 us, min=791 us, max=12316 us, std dev=454 us @ Small=17 and Large=0
Minimum Small Latency=123.58 usecs @ Small=1 and Large=0
Small Read Latency: avg=124 us, min=41 us, max=4496 us, std dev=76 us @ Small=1 and Large=0
Small Read / Write Latency Histogram @ Small=17 and Large=0
Following that information, you are shown the same latency histogram you were presented with at the end of the ORION run. In the above example, that's the histogram for the data point with 17 small I/Os and 0 large I/Os. All other histograms can be found in the test1_20130722_0953_hist.txt file. The other files contain the information listed in Table 3-4.
Table 3-4. Files Generated During an ORION Benchmark (taken from the file headers)

File name    Contents as per file header
*_hist.txt   Contains histograms of the latencies observed for each data point test. Each data point test used a fixed number of outstanding small and large I/Os. For each data point, histograms are printed for the latencies of small reads, small writes, large reads, and large writes. The value specifies the number of I/Os that were observed within the bucket's latency range.
*_iops.csv   Contains the rates sustained by small I/Os in IOPS. Each value corresponds to a data point test that used a fixed number of outstanding small and large I/Os.
*_lat.csv    Contains the average latency sustained by small I/Os in microseconds. Each value corresponds to a data point test that used a fixed number of outstanding small and large I/Os.
*_mbps.csv   Contains the rates sustained by large I/Os in MBps. Each value corresponds to a data point test that used a fixed number of outstanding small and large I/Os.
*_trace.txt  Raw data
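Since the *_iops.csv and *_lat.csv files describe the same data point matrix, it can be useful to join them programmatically, for example to find the highest small-I/O rate that still meets a latency target rather than the raw maximum reported in the summary. The following is a minimal Python sketch, assuming the CSV layout mirrors the matrix above (the first row lists the outstanding small I/O counts, and each subsequent row starts with the outstanding large I/O count); the file names follow the naming pattern of the example run, and the 5,000 us cap is purely illustrative:

import csv

def read_matrix(path):
    # Read an ORION CSV into {(large, small): value}. Assumes the first
    # row lists the outstanding small I/O counts and every following row
    # starts with the outstanding large I/O count; adjust if your ORION
    # version writes the files differently.
    with open(path, newline="") as f:
        rows = [r for r in csv.reader(f) if r]
    header, matrix = rows[0], {}
    for row in rows[1:]:
        if not row[0].strip().isdigit():
            continue  # skip stray non-numeric lines
        large = int(row[0])
        for small_cell, value in zip(header[1:], row[1:]):
            if small_cell.strip() and value.strip():
                matrix[(large, int(small_cell))] = float(value)
    return matrix

iops = read_matrix("test1_20130722_0953_iops.csv")
lat = read_matrix("test1_20130722_0953_lat.csv")

# Highest small-I/O rate that stays under an illustrative 5,000 us
# average latency cap -- often more useful than the raw maximum IOPS.
cap_us = 5000
usable = [p for p in iops if lat.get(p, float("inf")) < cap_us]
best = max(usable, key=iops.get)
print(f"{iops[best]:.0f} IOPS at {lat[best]:.0f} us "
      f"(small={best[1]}, large={best[0]})")

For the OLTP run above there is only the large=0 row, so this effectively walks the 1 to 20 outstanding small I/O columns and shows where extra concurrency stops buying throughput and only adds latency.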
The use of ORION should have given you a better understanding of the capabilities of your storage subsystem. Bear in mind that the figures do not represent a true Oracle workload, due to the lack of synchronization on the Oracle shared memory structures. Furthermore, ORION does not use the pread/pwrite calls Oracle employs for single-block I/O. However, as an initial test of your storage subsystem, it should be a good enough approximation.
There is a lot more to ORION than could be covered here, especially when it comes to testing the I/O performance of multiple LUNs. It is possible to simulate striping and mirroring, and even to approximate log-write behavior by instructing the software to stream data sequentially.
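For example, a sequential, write-only stream striped across multiple LUNs, as a rough stand-in for redo-log writes, could be requested with a hypothetical invocation along these lines (these are documented ORION options, but verify them against orion -help for your version):

./orion -run advanced -testname redo -type seq -write 100 -simulate raid0

Here, -type seq makes the large I/Os sequential streams rather than random, -write 100 turns the workload into pure writes, and -simulate raid0 stripes the I/Os across the LUNs listed in the redo.lun file. Remember that write tests destroy the data on the LUNs they touch.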
 