From the preceding output, the transfer times are significantly high for both the CR and the current blocks.
Further investigation showed that a few of the servers (Servers 1, 2, and 3) also had high CPU utilization.
However, the block prepare time on these servers was low, which likely indicates that the high CPU utilization
and system load influenced the latencies. Reasons for high interconnect latency include the following:
• Poorly written SQL statements cause large numbers of blocks to be read from disk and loaded
into the buffer cache. When users on other instances execute the same query, the blocks are transferred
over the interconnect, causing high traffic. This normally becomes a problem when the capacity of the
interconnect is exceeded by that traffic.
• Network bandwidth is insufficient, or a low-bandwidth network is used. A slow network
causes slower movement of blocks over the interconnect.
• The database is using the public network for cache fusion traffic. Public networks are normally
low-speed networks, and they also carry other user data, causing network contention and slower
data movement. A dedicated private network is required for cache fusion/interconnect traffic.
• Network buffers are incorrectly sized. Traffic on the private network can be bursty
and cause waits for receive buffers, which in turn cause high CPU utilization. Undersized
buffers can also result in packet loss at the O/S level or blocks lost inside the database.
Lost blocks and poorly configured interconnects are almost always the reason for a poorly
performing interconnect and for high latencies caused by the interconnect itself; the kernel
buffer settings can be checked as shown in the sketch after this list.
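On Linux, the socket buffer limits that govern these receive and send buffers can be inspected with sysctl. The net.core parameters below are standard Linux kernel settings; the values that are appropriate depend on your platform and Oracle documentation, so treat this only as an illustrative sketch:
[root@prddb1 ~]# sysctl net.core.rmem_default net.core.rmem_max
[root@prddb1 ~]# sysctl net.core.wmem_default net.core.wmem_max
If the reported maximums are small relative to the bursts arriving on the interconnect, receive buffer overruns and lost blocks become more likely.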
Step 3
The next step is to monitor the network utilization using O/S-level utilities such as netstat and IPTraf.
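For example, because cache fusion traffic on Linux clusters is typically carried over UDP, the protocol-level and per-interface counters reported by netstat are worth checking for receive errors and drops. The exact counter names vary by kernel version, so the following is only an illustrative sketch:
[oracle@prddb1]$ netstat -su    # protocol statistics; look for UDP packet receive errors
[oracle@prddb1]$ netstat -i     # per-interface packet, error, and drop counts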
ifconfig -a on the private NIC showed that there are no errors at the NIC level. In the following output, the errors,
dropped, overruns, and collisions counts all have zero values, indicating that there are probably no errors with the
NIC configuration:
[oracle@prddb1]$ /sbin/ifconfig -a
eth1 Link encap:Ethernet HWaddr 00:0C:29:3A:F1:6E
inet addr:172.35.1.11 Bcast:172.35.1.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe3a:f16e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:42232 errors:0 dropped:0 overruns:0 frame:0
TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3483899 (3.3 MiB) TX bytes:678 (678.0 b)
Base address:0x2440 Memory:d8960000-d8980000
Similar to ifconfig, which gives you the current configuration of the NIC and some of the statistics
collected, ethtool can also be helpful in getting this information:
[root@prddb1 ~]# ethtool -S eth24
NIC statistics:
rcvd bad skb: 0
xmit called: 2225012436
xmited frames: 0
xmit finished: 2225012436
bad skb len: 0
no cmd desc: 0
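In addition to the driver statistics shown previously, running ethtool against the interface without any options reports the negotiated link speed and duplex, which helps confirm whether the private interconnect is running at the expected speed. The interface name here is only an example:
[root@prddb1 ~]# ethtool eth1    # check the Speed, Duplex, and Link detected fields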
 