Level 2—Active/Passive OMS and Data Guard Repository
To reduce OMS downtime during a planned or unplanned outage, some redundancy should be introduced into the configuration. A level 2 configuration uses a shared filesystem for the management service to achieve an active/passive (cold) failover cluster solution. The filesystem is shared between two or more hosts but is active on only one host at a time.
The following steps should be performed as prerequisites to a level 2 high-availability configuration:
1. The shared filesystem for the OMS can be placed on a general-purpose cluster file system such as NFS, Oracle Cluster File System (OCFS2), or Oracle Automatic Storage Management Cluster File System (ACFS). If NFS is used as the shared storage, ensure that the correct mount options are set in /etc/fstab (/etc/filesystems on AIX) to prevent potential I/O issues; in particular, rsize and wsize should be set.
The following example shows an entry in the /etc/fstab file on a Linux server; the NFS share, exported from a filer named filer1 under /vol1/oms_share, is mounted at /u01/app/oms_share:
filer1:/vol1/oms_share /u01/app/oms_share nfs rw,bg,rsize=32768,wsize=32768,hard,nointr,tcp,noac,vers=3,timeo=600 0 0
2. Install the OMS binaries, along with the Oracle inventory, on the shared filesystem (see the inventory example following this list).
3. Set up a virtual hostname and IP address (VIP) by using Oracle Clusterware or third-party software and hardware. Failover is achieved by addressing the OMS through the virtual hostname, which resolves to a dedicated virtual IP address (see the VIP example following this list).
4. Configure the repository database with a local physical standby maintained by Data Guard (see Figure 13-5 and the broker example following this list).
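For step 2, the Oracle inventory can be kept on the shared filesystem by pointing the installer at an inventory pointer file stored there. The following is only a minimal sketch; the paths and the oinstall group are assumptions based on the NFS mount shown in step 1, not values taken from this configuration.
inventory_loc=/u01/app/oms_share/oraInventory
inst_group=oinstall
With this file saved as, for example, /u01/app/oms_share/oraInst.loc, the installer is invoked with -invPtrLoc /u01/app/oms_share/oraInst.loc so that the inventory, like the binaries, follows the shared mount to whichever host is active.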
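For step 3, when Oracle Clusterware is used, an application VIP can be created with the appvipcfg utility and the virtual hostname published in DNS or /etc/hosts. This is a sketch under assumed values; the hostname oms-vip.example.com, the address 192.0.2.10, and the network number 1 are placeholders. An /etc/hosts or DNS entry maps the virtual hostname to the VIP:
192.0.2.10   oms-vip.example.com   oms-vip
The VIP is then created and started as root under Oracle Clusterware:
appvipcfg create -network=1 -ip=192.0.2.10 -vipname=oms-vip -user=root
crsctl start resource oms-vip
The OMS is installed and contacted through this virtual hostname (for example, by passing ORACLE_HOSTNAME=oms-vip.example.com to the installer), so the service address does not change after a failover.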
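For step 4, once a physical standby of the repository database exists (created, for example, with RMAN DUPLICATE ... FOR STANDBY) and DG_BROKER_START is set to TRUE on both databases, the Data Guard broker can manage the pair. This is a sketch only; the database names emrep and emrep_sb are assumed placeholders for the primary repository and its local standby:
DGMGRL> CREATE CONFIGURATION 'emrep_dg' AS
          PRIMARY DATABASE IS 'emrep' CONNECT IDENTIFIER IS emrep;
DGMGRL> ADD DATABASE 'emrep_sb' AS CONNECT IDENTIFIER IS emrep_sb
          MAINTAINED AS PHYSICAL;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> SHOW CONFIGURATION;
SHOW CONFIGURATION should report SUCCESS once redo transport and apply are working between the primary and the standby.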