[root@prddb1 ~]# iostat -d -x 10 | grep 'emcpower.1'
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
emcpoweri1 3.81 0.03 6.51 0.09 1569.01 14.28 239.79 0.07 10.21 3.88 2.56
emcpowerj1 3.84 0.03 6.53 0.09 1581.08 14.27 240.71 0.07 10.23 3.87 2.57
emcpowerk1 3.83 0.03 6.64 0.10 1578.72 14.14 236.53 0.07 9.99 3.76 2.53
emcpowerl1 3.81 0.03 6.47 0.09 1568.73 13.96 241.12 0.07 10.26 3.86 2.53
emcpowerm1 3.86 0.03 6.61 0.10 1588.03 13.86 238.75 0.07 9.98 3.79 2.54
emcpowern1 3.82 0.03 6.50 0.10 1571.59 14.54 240.04 0.07 10.13 3.85 2.54
emcpowero1 3.81 0.03 7.99 0.13 1617.88 16.13 201.09 0.07 8.52 3.37 2.74
emcpowerp1 3.86 0.03 9.17 0.46 1670.98 26.49 176.18 0.07 7.49 3.09 2.98
emcpowera1 3.84 0.03 6.63 0.11 1581.65 14.48 236.65 0.07 10.23 3.83 2.58
emcpowerb1 3.85 0.03 6.55 0.10 1583.33 15.78 240.19 0.07 10.27 3.78 2.51
emcpowerc1 3.82 0.03 6.52 0.10 1572.85 15.02 240.10 0.07 10.42 3.88 2.56
emcpowerd1 3.85 0.03 6.52 0.10 1583.59 14.52 241.22 0.07 10.49 3.89 2.58
emcpowere1 3.83 0.03 6.51 0.09 1574.33 13.99 240.73 0.07 10.31 3.79 2.50
emcpowerf1 3.85 0.03 6.55 0.09 1585.99 13.95 240.70 0.07 10.32 3.84 2.55
emcpowerg1 3.86 0.03 6.52 0.09 1584.81 14.00 241.68 0.07 10.36 3.85 2.55
emcpowerh1 3.83 0.03 6.56 0.09 1577.65 14.83 239.39 0.07 10.26 3.84 2.56
where:
rrqm/s : The number of read requests merged per second that were queued to the hard disk
wrqm/s : The number of write requests merged per second that were queued to the hard disk
r/s : The number of read requests per second
w/s : The number of write requests per second
rsec/s : The number of sectors read from the hard disk per second
wsec/s : The number of sectors written to the hard disk per second
avgrq : The average size (in sectors) of the requests that were issued to the device
avgqu : The average queue length of the requests that were issued to the device
await : The average time (in milliseconds) for I/O requests issued to the device to be served.
This includes the time spent by the requests in the queue and the time spent servicing them
svctm : The average service time (in milliseconds) for I/O requests that were issued to the
device
%util : Percentage of CPU time during which I/O requests were issued to the device
(bandwidth utilization for the device). Device saturation occurs when this value is
close to 100%
The columns that are important for measuring queue depth utilization are rrqm/s, wrqm/s, avgrq-sz (avgrq), avgqu-sz (avgqu), and svctm.
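As a sanity check on the glossary above, the average request size can be recomputed from the raw columns: avgrq-sz is (rsec/s + wsec/s) divided by (r/s + w/s). A minimal sketch, using the emcpoweri1 values copied from the sample output (the small difference from the reported value is rounding in iostat's displayed figures):

```shell
# Recompute avgrq-sz = (rsec/s + wsec/s) / (r/s + w/s)
# from the emcpoweri1 line of the iostat output above.
echo "emcpoweri1 3.81 0.03 6.51 0.09 1569.01 14.28 239.79 0.07 10.21 3.88 2.56" |
awk '{ printf "%s avgrq-sz ~ %.2f (reported %.2f)\n", $1, ($6 + $7) / ($4 + $5), $8 }'
# -> emcpoweri1 avgrq-sz ~ 239.89 (reported 239.79)
```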
HBAs with larger queue depths benefit highly I/O-intensive SQL workloads, such as those in a data
warehouse, by increasing the number of I/O requests allowed to be in flight. However, this may not hold
for other kinds of applications, such as online transaction processing (OLTP), or for clustered environments.
In a RAC implementation, where uncommitted blocks can potentially be transferred between instances, a deep
queue may not be useful unless service-to-instance affinity is in place, because requests may arrive for
blocks that have not yet been committed.
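On Linux, the effective queue depth of a SCSI device can be inspected, and on most HBA drivers tuned, through sysfs. A hedged sketch, assuming a native path named sdc (EMC PowerPath pseudo-devices such as emcpowerX sit above the native sdX paths, so the setting is applied to the underlying paths, and the maximum accepted value depends on the HBA driver):

```shell
# Inspect the current queue depth of a native SCSI path (device name is assumed).
cat /sys/block/sdc/device/queue_depth

# Raise it; requires root, and the driver may cap the value it accepts.
echo 64 > /sys/block/sdc/device/queue_depth
```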
As we saw earlier in Figure 12-2, the controllers receive data from the HBA devices via the SAN switches
and write it to the logical disks. Similar to the queue depth setting on the HBA cards, there is a cache size
defined on the controllers that increases performance and improves scalability. The cache tier helps buffer I/O requests to