In addition, when Java-based Oracle tools (such as srvctl, dbca, dbua, cluvfy, and netca) fail for unknown reasons, the preceding setting also helps generate additional diagnostic information that can be used to troubleshoot the issue.
Example:
$srvctl status database -d
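The "preceding setting" is defined earlier in the chapter; assuming it is the SRVM_TRACE environment variable (the documented switch for Java-layer tracing in these tools), a minimal sketch of the workflow looks like this. The database name PRODDB is a placeholder:

```shell
# Assumption: SRVM_TRACE is the tracing setting referenced above.
# When set to true, the Java-based tools (srvctl, dbca, cluvfy, ...) emit
# verbose trace output, typically to stdout or the tool's cfgtoollogs area.
export SRVM_TRACE=true

# Re-run the failing tool with tracing enabled (guarded so this sketch
# is harmless on hosts without Grid Infrastructure installed).
if command -v srvctl >/dev/null 2>&1; then
    srvctl status database -d PRODDB   # PRODDB is a hypothetical database name
fi
echo "SRVM_TRACE=${SRVM_TRACE}"
```

Unset the variable again after collecting the trace, since the extra output slows the tools down.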
Note: When the basic information in the CRS logs does not provide sufficient feedback to determine the root cause of a cluster or RAC database issue, setting a higher trace level may produce useful additional information to resolve the problem. However, raising the debug level has an impact on overall cluster performance and can generate a huge amount of output in the respective log files. It is therefore highly advisable to seek the guidance of Oracle Support before changing the default trace settings of cluster components.
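As a sketch of what "setting a higher trace level" looks like in practice, the `crsctl set log` command raises the trace level of an individual Clusterware module (here the CSS daemon; levels run from 0, the default, up to more verbose settings). The chosen level and module are illustrative assumptions:

```shell
# Hypothetical example: raise the CSSD module trace level to 3.
# Requires Grid Infrastructure; guarded so the sketch is a no-op elsewhere.
LEVEL=3
if command -v crsctl >/dev/null 2>&1; then
    crsctl set log css "CSSD:${LEVEL}"   # raise the trace level
    crsctl get log css CSSD              # confirm the current level
fi
echo "requested CSSD log level: ${LEVEL}"
```

Remember to set the level back to its default once the diagnostic data has been collected, for the performance reasons noted above.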
Grid Infrastructure Component Directory Structure
Each Grid Infrastructure component maintains its own log file and records detailed information under both normal and critical circumstances. The information written to these log files assists in diagnosing and troubleshooting Clusterware components and cluster health-related problems. By examining the appropriate log files, the DBA can identify the root cause of frequent node evictions or other fatal Clusterware problems, as well as Clusterware installation and upgrade difficulties. In this section, we explain some of the important CRS logs that can be examined when various Clusterware issues occur.
alert<HOSTNAME>.log: Similar to a typical database alert log, Oracle Clusterware maintains an alert log under the $GRID_HOME/log/$hostname location and posts messages whenever an important event takes place: when a cluster daemon process starts, when a process aborts or fails to start a cluster resource, when a node eviction occurs, or when a voting or OCR disk becomes inaccessible on the node.
Whenever Clusterware encounters a serious issue, this should be the first file the DBA examines for additional information about the problem. The error message also points to a trace file location where more detailed information is available to troubleshoot the issue.
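A quick way to inspect the most recent events is to tail this file directly. The Grid home path below matches the one used in the log samples in this chapter but is an assumption; adjust it to your installation:

```shell
# Assumed Grid home; override via the environment if yours differs.
GRID_HOME=${GRID_HOME:-/u00/app/12.1.0/grid}
# The alert log lives under $GRID_HOME/log/<hostname>/alert<hostname>.log.
HOST=$(hostname -s)
ALERT_LOG="$GRID_HOME/log/$HOST/alert$HOST.log"

# Show the most recent entries if the log exists on this node.
if [ -f "$ALERT_LOG" ]; then
    tail -50 "$ALERT_LOG"
fi
echo "alert log path: $ALERT_LOG"
```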
The following sample messages, extracted from the alert log, illustrate events such as node eviction, CSSD termination, and a failure to automatically start the cluster:
[ohasd(10937)]CRS-1301:Oracle High Availability Service started on node rac1.
[/u00/app/12.1.0/grid/bin/oraagent.bin(11137)]CRS-5815:Agent
'/u00/app/12.1.0/grid/bin/oraagent_oracle' could not find any base type
entry points for type 'ora.daemon.type'. Details at (:CRSAGF00108:) {0:1:2} in
/u00/app/12.1.0/grid/log/rac1/agent/ohasd/oraagent_oracle/oraagent_oracle.log.
[cssd(11168)]CRS-1713:CSSD daemon is started in exclusive mode
[cssd(11168)]CRS-1605: CSSD voting file is online : /dev/rdsk/oracle/vote/ln1/ora_vote_002; details in
/u00/app/12.1.0/grid/log/rac1/cssd/ocssd.log.
[cssd(11052)]CRS-1656:The CSS daemon is terminating due to a fatal error; Details at (:CSSSC00012:)
in /u00/app/12.1.0/grid/log/rac1/cssd/ocssd.log
[cssd(3586)]CRS-1608: This node was evicted by node 1, rac1; details at (:CSSNM00005:) in
/u00/app/12.1.0/grid/log/rac2/cssd/ocssd.log.
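When reviewing a long-running cluster, it can help to filter the alert log for just these event classes rather than reading it end to end. This sketch greps for the CRS message codes shown in the samples above (CRS-1605, CRS-1608, CRS-1656); the Grid home path is the same assumed value as in the samples:

```shell
# Assumed Grid home; adjust to your installation.
GRID_HOME=${GRID_HOME:-/u00/app/12.1.0/grid}
HOST=$(hostname -s)
ALERT_LOG="$GRID_HOME/log/$HOST/alert$HOST.log"

# Extract voting-file, eviction, and CSSD-termination messages,
# keeping only the most recent few occurrences.
PATTERN='CRS-16(05|08|56)'
if [ -f "$ALERT_LOG" ]; then
    grep -E "$PATTERN" "$ALERT_LOG" | tail -20
fi
echo "searched $ALERT_LOG for $PATTERN"
```

The matched lines point to the relevant detail log (for example, ocssd.log), which is where the deeper troubleshooting continues.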