This pointer should have been added automatically during cluster installation and upgrade. If no pointer is found, add the preceding entry toward the end of the file and, as the root user, either start the cluster manually or have init re-read the inittab so that the daemon is started automatically.
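On Linux systems that still use /etc/inittab, the ohasd pointer typically looks similar to the following entry (the identifier and run levels may differ slightly by platform and release):
h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
After adding the entry, running init q as the root user makes init re-read the inittab without a reboot.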
If the ohasd pointer exists, the next thing to check is the auto start configuration of the cluster high availability daemon. Use the following commands as the root user to confirm the auto startup configuration:
$ GRID_HOME/bin/crsctl config has -- High Availability Service
$ GRID_HOME/bin/crsctl config crs -- Cluster Ready Service
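For example, when auto start is enabled, the output is similar to the following (the exact message text depends on the release):
$ GRID_HOME/bin/crsctl config has
CRS-4622: Oracle High Availability Services autostart is enabled.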
Optionally, you can also check the files under the /var/opt/oracle/scls_scr/hostname/root or /etc/oracle/scls_scr/hostname/root location to identify whether auto start is enabled or disabled.
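On Linux, for instance, the ohasdstr file under that directory typically records the current setting; a quick check (the path and file name can vary by platform) might look like the following:
$ cat /etc/oracle/scls_scr/hostname/root/ohasdstr
enable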
As the root user, enable the auto start and bring up the cluster manually on the local node when auto startup is not configured. When auto start is disabled, the preceding crsctl config commands return the following message:
CRS-4621: Oracle High Availability Services autostart is disabled.
Use the following examples to enable has/crs auto start and to start the stack manually:
$ GRID_HOME/bin/crsctl enable has -- turns on the auto startup option of ohasd
$ GRID_HOME/bin/crsctl enable crs -- turns on the auto startup option of crs
$ GRID_HOME/bin/crsctl start has -- initiates OHASD daemon startup
$ GRID_HOME/bin/crsctl start crs -- initiates CRS daemon startup
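After the stack is started, the health of the daemons can be confirmed with the corresponding check command; on a healthy node the output resembles the following:
$ GRID_HOME/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online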
If, despite the preceding steps, the ohasd daemon process still doesn't start and the problem persists, you need to examine the component-specific trace files to troubleshoot and identify the root cause. Follow these guidelines:
Verify the existence of the ohasd daemon process on the OS. From the command-line prompt, execute the following:
ps -ef | grep init.ohasd
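On a healthy node this typically returns the respawned init.ohasd entry, for example (process IDs and timestamps will differ):
root      4040     1  0 10:15 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run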
Examine OS platform-specific log files to identify any errors (refer to the operating system logs section later in
this chapter for more details).
Refer to the ohasd.log trace file under the $GRID_HOME/log/hostname/ohasd location, as this file contains useful information about the symptoms.
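A simple way to narrow down recent problems in that trace file is to filter it with standard OS tools, for example:
$ grep -i error $GRID_HOME/log/hostname/ohasd/ohasd.log | tail -20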
Address any OLR issues reported in the trace file. If OLR corruption or inaccessibility is reported, repair or resolve the issue by taking appropriate action. If a restore is required, restore the OLR from a previous valid backup using the following command:
$ ocrconfig -local -restore $backup_location/backup_filename.olr
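Before restoring, it is worth verifying the OLR's integrity and, if your release supports it, listing the available local backups, for example as the root user:
$ GRID_HOME/bin/ocrcheck -local
$ GRID_HOME/bin/ocrconfig -local -showbackup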
Verify Grid Infrastructure directory ownership and permissions using OS-level commands.
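For example, a quick spot check of ownership on the Grid home and the ohasd binary might look like the following (the expected owner is typically the Grid Infrastructure software owner, such as grid or oracle, with the oinstall group):
$ ls -ld $GRID_HOME
$ ls -l $GRID_HOME/bin/ohasd.bin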
Additionally, remove the cluster startup socket files from the /var/tmp/.oracle, /usr/tmp/.oracle, or /tmp/.oracle directory and start up the cluster manually. Which of these directories exists depends on the operating system.
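For example, on Linux the cleanup and the manual restart could be performed as follows, but only after confirming that the entire stack on the node is down:
$ rm -rf /var/tmp/.oracle/*
$ GRID_HOME/bin/crsctl start crs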
CSSD startup issues - If the CSSD process fails to start up or is reported as unhealthy, the following guidelines help in identifying the root cause of the issue:
Error: CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
Review the Clusterware alert.log and ocssd.log file to identify the root cause of the issue.
Verify that the CSSD process is running on the OS:
ps -ef | grep cssd.bin
Examine the alert_hostname.log and ocssd.log logs to identify the possible causes that are preventing the
CSSD process from starting.
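In addition to reviewing the trace files, the daemon can be queried directly; when CSS is healthy the check returns a message similar to the following:
$ GRID_HOME/bin/crsctl check css
CRS-4529: Cluster Synchronization Services is online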
 