RS@SQL> select * from gv$cluster_interconnects order by inst_id;
INST_ID NAME IP_ADDRESS IS_ SOURCE
---------- -------------- ---------------- --- ------------------------------
1 ce0:1 169.254.72.158 NO
1 ce6:1 169.254.195.203 NO
2 ce6:2 169.254.60.160 NO
2 ce6:1 169.254.181.11 NO
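The 169.254.x.x addresses above are link local HAIP addresses that Clusterware assigns on top of the interfaces designated for the private interconnect. Which interfaces carry that designation can be checked with the oifcfg utility; the interface names and subnets below are illustrative:

$ oifcfg getif
ce0  10.1.0.0     global  public
ce6  192.168.1.0  global  cluster_interconnect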
You can also specify particular IP addresses using the cluster_interconnects initialization parameter. Changes to the cluster_interconnects parameter are usually unnecessary, because the database retrieves the link local IP addresses from the Clusterware processes, so this parameter is best left untouched from 11gR2 onward. The SOURCE column indicates where the IP address values came from; in the following example, the values are taken from the
cluster_interconnects database parameter.
cluster_interconnects string 172.29.1.11:172.29.2.12
RS @ RAC1> select * from gv$cluster_interconnects
INST_ID NAME IP_ADDRESS IS_ SOURCE
---------- --------------- ---------------- --- -------------------------------
1 aggr3 172.29.1.11 NO cluster_interconnects parameter
1 aggr4 172.29.2.12 NO cluster_interconnects parameter
3 aggr3 172.29.1.31 NO cluster_interconnects parameter
3 aggr4 172.29.2.32 NO cluster_interconnects parameter
2 aggr3 172.29.1.21 NO cluster_interconnects parameter
2 aggr4 172.29.2.22 NO cluster_interconnects parameter
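If you do need to pin the interconnect addresses, the parameter is static and is set per instance, with multiple addresses separated by colons; a sketch (instance names and addresses here mirror the example above and are illustrative):

RAC1> alter system set cluster_interconnects='172.29.1.11:172.29.2.12' scope=spfile sid='RAC1';
RAC1> alter system set cluster_interconnects='172.29.1.21:172.29.2.22' scope=spfile sid='RAC2';

Because the parameter is static (scope=spfile), the instances must be restarted for the change to take effect.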
Network Failover
Network failover can occur on both public and private network interfaces. Because Clusterware processes monitor the network interfaces, a failure is detected immediately, and appropriate action is taken to maintain high availability.
If an OS-based network high availability solution, such as Linux bonding, is already configured, then high availability is handled entirely by the OS. Clusterware does not implement its own high availability mechanism in this scenario and simply uses the OS solution.
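As an illustration, an active-backup Linux bond over two NICs might be configured as follows; device names are examples, and exact file locations and syntax vary by distribution:

/etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
IPADDR=192.168.1.11
NETMASK=255.255.255.0
BONDING_OPTS="mode=active-backup miimon=100"
ONBOOT=yes

/etc/sysconfig/network-scripts/ifcfg-eth1 (and similarly ifcfg-eth2):
DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes

Clusterware then sees only the single bond0 interface, and NIC failover within the bond is invisible to the cluster stack.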
If there is no OS-based solution, then HAIP can be used on the cluster interconnect to provide high availability.
If a private network interface fails, and multiple interfaces are configured for the cluster interconnect, then
the link local IP address from the failed interface fails over to an available interface. Database connections
might briefly encounter errors during the IP reconfiguration, but those errors should quickly disappear; in most
cases, foreground processes retry the request and succeed. After a link local IP fails over to a surviving interface,
Clusterware performs a re-ARP (gratuitous ARP), establishing a new mapping between the IP address and the MAC address of the surviving interface.
If all interfaces configured for the cluster interconnect fail, then the link local addresses cannot be configured on any
interface. This also causes heartbeat failure, so a rebootless restart or node eviction may result.
If a network adapter for the public network fails, and no other suitable adapter is configured, then the VIP
listener is shut down and the VIP address fails over to a surviving node, but no listener is restarted on the
failed-over VIP. Any new connection to that failed-over VIP immediately receives a connection reset, and the client then tries
the next address entry in its connection list.
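This retry behavior depends on the client connect descriptor listing multiple VIP addresses. A minimal tnsnames.ora sketch (host names and service name are illustrative):

RACDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = racdb))
  )

When rac1-vip has failed over and returns an immediate connection reset, the client moves on to rac2-vip without waiting for a TCP timeout.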
If a network adapter hosting a SCAN IP fails, then that SCAN IP and its SCAN listener fail over to a surviving node.
 