When you specify the -silent option, the installer runs in silent (non-interactive) mode and therefore displays no interactive screens.
From any active node, verify the post-node deletion:
$cluvfy stage -post nodedel -n rac3 -verbose
$olsnodes -n -s -t
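Beyond eyeballing the olsnodes output, the absence of the dropped node can be checked mechanically. The following is a minimal sketch; the helper name node_absent is an assumption added for illustration, and it simply greps the olsnodes output for the node name:

```shell
# Hypothetical helper (not part of the Oracle tooling): confirm that a
# deleted node no longer appears in the cluster node list.
node_absent() {
  # $1 = node name; stdin = output of "olsnodes -n"
  ! grep -qw "$1"
}

# Usage on any surviving node:
#   olsnodes -n | node_absent rac3 && echo "rac3 removed from the cluster"
```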
Manually clean up the following files and directories on the node that was just dropped:
/etc/oraInst.loc, /etc/oratab, /etc/oracle/, /tmp/.oracle, /opt/ORCLmap
Also remove the filesystem locations where the Clusterware and RDBMS software was installed.
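The cleanup steps above can be collected into a small script. This is a hedged sketch, not Oracle-supplied tooling: the function name cleanup_node_leftovers and its root-prefix parameter are assumptions, added so the commands can be rehearsed against a scratch directory before being run as root against the real filesystem:

```shell
# Sketch only: remove the leftover Oracle files on the dropped node.
# The root-prefix argument is an assumption for safe rehearsal; pass "/"
# (as root) to act on the live system.
cleanup_node_leftovers() {
  local root="${1:?usage: cleanup_node_leftovers <root-prefix>}"
  rm -rf "$root/etc/oraInst.loc" "$root/etc/oratab" \
         "$root/etc/oracle" "$root/tmp/.oracle" "$root/opt/ORCLmap"
}
```

Removing the Grid and RDBMS home filesystems themselves is still a separate, deliberate step.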
Troubleshooting Common Clusterware Stack Start-Up Failures
Various factors can prevent the cluster stack from coming up automatically after a node eviction, failure, or reboot, or when cluster startup is initiated manually. This section covers key facts and guidelines that help with troubleshooting common causes of cluster stack startup failures. Though the symptoms discussed here are not exhaustive, the key points explained in this section provide a better perspective for diagnosing common start-up failures of the various cluster daemon processes.
Imagine that after a node failure or a manual cluster shutdown, the subsequent startup doesn't bring up the Clusterware as expected. Upon verifying the cluster or CRS health status, the DBA encounters one of the following error messages:
$GRID_HOME/bin/crsctl check cluster
CRS-4639: Could not contact Oracle High Availability Services
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Check failed, or completed with errors
OR
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager
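Which of the two error groups appears already narrows the diagnosis: CRS-4639 means the OHAS daemon itself never came up, while CRS-4535/4530/4534 mean OHAS is reachable but the higher CRS/CSS/EVM layers are failing. The following is a minimal sketch of that triage; the function name classify_crs_errors is an assumption added for illustration, and the CRS-xxxx codes are the ones shown above:

```shell
# Hypothetical triage helper: read "crsctl check cluster" output on stdin
# and suggest which daemon layer to investigate first.
classify_crs_errors() {
  local out
  out="$(cat)"  # capture the full crsctl output before matching
  case "$out" in
    *CRS-4639*)
      echo "ohasd is not running: check ohasd.log and the init.ohasd entry" ;;
    *CRS-4535*|*CRS-4530*|*CRS-4534*)
      echo "OHAS is up but CRS/CSS/EVM is failing: check the crsd/ocssd/evmd logs" ;;
    *)
      echo "no known startup error codes in output" ;;
  esac
}

# Usage:
#   $GRID_HOME/bin/crsctl check cluster 2>&1 | classify_crs_errors
```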
ohasd startup failures - This section explains how to diagnose common startup failures of the Oracle High Availability Services (OHAS) daemon process and provides workarounds for the following issues:
CRS-4639: Could not contact Oracle High Availability Services
OR
CRS-4124: Oracle High Availability Services startup failed
CRS-4000: Command Start failed, or completed with errors
First, review the Clusterware alert and ohasd.log files to identify the root cause of the daemon startup failures.
Then verify the existence of the ohasd pointer in the OS-specific file (/etc/init or /etc/inittab, depending on the platform), for example:
h1:3:respawn:/sbin/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
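The pointer check above can be scripted. This is a sketch under stated assumptions: the helper name check_ohasd_pointer is made up for illustration, and the file path is passed as a parameter so the check can be exercised against a copy before being pointed at the real /etc/inittab:

```shell
# Hypothetical check: does the OS-specific init file still contain the
# respawn entry for init.ohasd?
check_ohasd_pointer() {
  local inittab="${1:-/etc/inittab}"   # default path is an assumption
  grep -q 'init\.ohasd' "$inittab"
}

# Usage:
#   check_ohasd_pointer /etc/inittab || echo "ohasd pointer missing" >&2
# Also confirm init actually respawned the wrapper process:
#   ps -ef | grep '[i]nit.ohasd'
```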
 