perform a clustered installation across both nodes. The simple reason is that a shared database home reduces the possibility of missing configuration steps on the passive node. The use of ASM prevents time-consuming file system consistency checks when mounting the database files on the passive node, a common problem with clusters that don't use cluster-aware file systems.
The database can, by definition, be started on only one node at a time. For licensing reasons, it must not be a cluster database. In normal operations, the database is mounted and opened on the active node. Should the Clusterware framework detect that the database on the active node has failed, perhaps because the node itself went down, it will try to restart the database on the same node. Should that fail, the database will be restarted on the passive node. During that time, all user connections will be aborted. Thanks to the cluster logical volume manager (ASM), there is no requirement to forcefully unmount any file system from the failed node and mount it on the now-active node; the failed database can be started almost instantly on the former passive node. As soon as instance recovery is complete, users can reconnect. Oracle introduced a very useful feature in Clusterware 10.2, called an application virtual IP address. Such an address can be tied to the database resource in the form of a dependency and will migrate with it should the need arise. Application VIPs must be created and maintained manually, adding a little more complexity to the setup.
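As an illustration only, the following sketch shows how an application VIP might be created and tied to a database resource with the Grid Infrastructure command-line tools. The network number, IP address, and resource names (apps-vip, db_prod) are placeholders, and the exact syntax, in particular whether and how a database resource may be modified this way, varies between Clusterware releases; consult the documentation for your version.

# Create the application VIP as root; address, network number, and owner are examples
appvipcfg create -network=1 -ip=192.168.100.50 -vipname=apps-vip -user=root

# Allow the oracle user to start and stop the VIP
crsctl setperm resource apps-vip -u user:oracle:r-x

# Define start/stop dependencies so the VIP fails over together with the database resource
crsctl modify resource db_prod -attr "START_DEPENDENCIES=hard(apps-vip),STOP_DEPENDENCIES=hard(apps-vip)"

# Verify the resource state across the cluster
crsctl status resource apps-vip -t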
An easier-to-implement alternative is available in the form of the node virtual IP address, which is automatically created during the Clusterware installation. These so-called VIPs exist on every cluster node and were implemented to avoid waiting for lengthy TCP timeouts. If you try to connect to a net service name and the server has crashed, you may have to wait a long time for the operating system to report a timeout. The Clusterware VIP is a cluster resource, meaning it can be started on the passive node in the cluster to return a “this address no longer exists” message to the requestor, speeding up connection requests. A common net service name definition for an active/passive cluster is:
activepassive.example.com =
  (DESCRIPTION=
    (ADDRESS_LIST=
      (FAILOVER=YES)(LOAD_BALANCE=NO)
      (ADDRESS=(PROTOCOL=tcp)(HOST=activenode-vip.example.com)(PORT=1521))
      (ADDRESS=(PROTOCOL=tcp)(HOST=passivenode-vip.example.com)(PORT=1521)))
    (CONNECT_DATA=
      (SERVICE_NAME=activepassive.example.com)
    )
  )
This way, the active node, which should be up and running most of the time, is the preferred connection target. In case of a node failure, however, the active node's VIP migrates to the passive node, where it immediately rejects incoming connection attempts, causing the client to try the next address in the list. As a result, no change is required on the application servers when a node fails. Once the former active node is repaired, you should relocate the database back to its default location, as sketched below.
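Assuming the database is registered with Clusterware as a user-defined resource named db_prod (a placeholder name), the switch back could look roughly like this; verify the exact crsctl syntax for your release:

# Relocate the database resource (and its dependent VIP) back to the repaired node
crsctl relocate resource db_prod -n activenode -f

# Confirm where the resource is now running
crsctl status resource db_prod -t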
Installing a Shared Oracle RDBMS Home
This section assumes that Clusterware is already installed on your servers. The first step in creating the shared Oracle
RDBMS home is to create the ASM Cluster File System (ACFS). ACFS is a POSIX-compliant file system created on top of an ASM disk group. In many scenarios, it makes sense to create a dedicated disk group for the ACFS file system (the keyword is block-level replication). Once the new ASM disk group is created, you can create a so-called ASM volume on top of it. The volume is managed internally by an entity called the ASM Dynamic Volume Manager (ADVM); think of ADVM as a logical volume manager. The ASM dynamic volume does not need to be the same size as the ASM disk group, and ADVM volumes can be resized online, allowing for corrections if you are running out of space.
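As a rough illustration, the sequence below shows how such a file system could be created with asmcmd and operating system tools on Linux. The disk group name (ACFSDG), volume name, sizes, mount point, and the generated device name under /dev/asm are all placeholders that will differ on your system, and mkfs as well as the srvctl registration typically require root privileges.

# Create a 20 GB ADVM volume inside the dedicated disk group
asmcmd volcreate -G ACFSDG -s 20G orahomevol

# Display the volume device name, e.g. /dev/asm/orahomevol-123
asmcmd volinfo -G ACFSDG orahomevol

# Create the ACFS file system on the ADVM volume (as root)
mkfs -t acfs /dev/asm/orahomevol-123

# Register the file system with Clusterware and mount it on all nodes
srvctl add filesystem -d /dev/asm/orahomevol-123 -g ACFSDG -v orahomevol -m /u01/app/oracle/acfs
srvctl start filesystem -d /dev/asm/orahomevol-123

# Grow the mounted file system online if space runs out
acfsutil size +5G /u01/app/oracle/acfs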
ACFS was chosen for the simple reason that it guarantees a consistent configuration across nodes. In many active/passive clusters, changes are not properly applied to all nodes, leaving the passive node outdated and unsuitable for role transitions. It is very often the little configuration changes, such as an update to the local tnsnames.ora file to point to a different host, that turn a simple role reversal into a troubleshooting nightmare. If there is only one Oracle home, it is impossible to omit configuration changes on the passive cluster node.
 