While it is technically possible to use any IP address, including a routable IP address, for a private network, it is bad practice to use routable IP addresses for private networks. If you use a routable IP address for cluster_interconnect, the cluster may function normally, as ARP ping will populate the MAC address of a local node in the ARP cache, but there is a possibility that packets will be intermittently routed to the Internet instead of to another node in the cluster. This incorrect routing can cause intermittent cluster heartbeat failures, possibly even node reboots due to heartbeat failures. Therefore, you should configure only non-routable IP addresses for private networks; essentially, the router or switch should not forward the packets out of the local network. In the case of a RAC cluster, since all cluster nodes are connected to a local network (notwithstanding Stretch RAC), you should keep all private network IP addresses non-routable.
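As a quick sanity check (a sketch for Linux; the interface names and addresses below are illustrative, not taken from any particular cluster), you can ask the routing table which path a packet to a remote node's private address would take, and compare it with the path for a routable destination:

# Traffic to a remote node's private address should stay on the private NIC...
$ ip route get 172.29.1.2
172.29.1.2 dev eth3 src 172.29.1.1
# ...while a routable destination goes out through the default gateway.
$ ip route get 8.8.8.8
8.8.8.8 via 10.5.12.1 dev eth1 src 10.5.12.10

If the first lookup ever resolves via the default gateway, interconnect traffic is being treated as routable and the private network should be moved to a non-routable (RFC 1918) subnet.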
Realizing the danger of routable IP addresses in the cluster interconnect configuration, Oracle Database version 11.2 introduced a feature for the cluster interconnect known as High Availability IP (HAIP). Essentially, a set of IP addresses, known as link local IP addresses, is configured for the cluster interconnect. Link local IP addresses are in the range of 169.254.1.0 through 169.254.254.255. A switch or router does not forward frames addressed to these link local IP addresses outside that network segment. So, even if routable IP addresses are set up for the private network on the interface, the link local IP addresses prevent the frames from being forwarded to the Internet.
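If you want to confirm that HAIP is actually up on a node, one way (a sketch; the resource output layout and node name shown here are illustrative and can differ between Grid Infrastructure versions) is to check the lower-stack Clusterware resource that manages it:

$ crsctl stat res ora.cluster_interconnect.haip -init
NAME=ora.cluster_interconnect.haip
TYPE=ora.haip.type
TARGET=ONLINE
STATE=ONLINE on node1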
Note
If you are configuring multiple interfaces for cluster_interconnect, use different subnets for those private network interfaces. For example, if eth3 and eth4 are configured for cluster_interconnect, then configure the 172.29.1.0 subnet on one interface and the 172.29.2.0 subnet on the second interface. A unique subnet per private network interface is a requirement from version 11.2 onward. If this configuration requirement is not followed, and a cable is removed from the first interface listed in the routing table, then ARP does not update the ARP cache properly, leading to an instance reboot even though the second interface is up and available. To avoid this problem, you must use different subnets when multiple interfaces are configured for cluster_interconnect; see the oifcfg sketch following this note.
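For instance, registering two interconnect interfaces on distinct subnets with oifcfg might look like the following sketch; the interface names and subnets are simply the assumptions carried over from the note above:

$ oifcfg setif -global eth3/172.29.1.0:cluster_interconnect
$ oifcfg setif -global eth4/172.29.2.0:cluster_interconnect
$ oifcfg getif
eth3  172.29.1.0  global  cluster_interconnect
eth4  172.29.2.0  global  cluster_interconnect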
The following ifconfig command output shows that Clusterware configured a link local IP address in each
interface configured for cluster_interconnect.
$ ifconfig -a | more
...
eth3:1: ...<UP,BROADCAST,RUNNING,MULTICAST,IPv4 > mtu 1500 index 4
inet 169.254.28.111 netmask ffffc000 broadcast 169.254.255.255
...
eth4:1: ...<UP,BROADCAST,RUNNING,MULTICAST,IPv4 > mtu 1500 index 4
inet 169.254.78.111 netmask ffffc000 broadcast 169.254.255.255
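You can cross-check these link local addresses against what the database instances actually use for the interconnect by querying v$cluster_interconnects from SQL*Plus (a sketch; the addresses shown simply mirror the illustrative ifconfig output above):

$ sqlplus / as sysdba
SQL> SELECT name, ip_address, is_public FROM v$cluster_interconnects;

NAME            IP_ADDRESS       IS_
--------------- ---------------- ---
eth3:1          169.254.28.111   NO
eth4:1          169.254.78.111   NO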
As we discussed earlier, the oifcfg command shows the Clusterware network configuration stored in the OCR. From Oracle RAC version 11.2 onward, cluster_interconnect details are needed for the Clusterware bootstrap process too; therefore, the cluster_interconnect details are captured in a local XML file, aptly named profile.xml. Further, this profile.xml file is propagated while adding a new node to a cluster or while a node joins the cluster.
# Grid Infrastructure ORACLE_HOME set in the session.
$ cd $ORACLE_HOME/gpnp/profiles/peer/
$ grep 172 profile.xml
...
<gpnp:Network id="net3" IP="172.18.1.0" Adapter="eth3" Use="cluster_interconnect"/>
<gpnp:Network id="net4" IP="172.18.2.0" Adapter="eth4" Use="cluster_interconnect"/>
...
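Alternatively, rather than reading profile.xml off disk, the same GPnP profile can be dumped with the gpnptool utility (a sketch; gpnptool also prints status messages around the XML, so the output is filtered here):

$ gpnptool get 2>/dev/null | grep cluster_interconnect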
 
 