--coreanalyze    Unix only. Extracts information from core files and stores it in a text file.
Use the --clean argument with the script to clean up previously generated files.
Note: Ensure that enough free space is available at the location where the files are being generated. Furthermore, depending upon the level used to collect the information, the script might take a considerable amount of time to complete the job; hence, keep an eye on resource consumption on the node. The tool must be executed as the root user.
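For reference, here is a minimal sketch of collecting diagnostics and then cleaning up afterward; the Grid home path /u01/app/11.2.0/grid is assumed for illustration, and the # prompt indicates the root user:

# /u01/app/11.2.0/grid/bin/diagcollection.pl --collect
# /u01/app/11.2.0/grid/bin/diagcollection.pl --clean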
CHM
The Oracle CHM tool is designed to detect and analyze OS- and cluster resource-related degradations and failures. Formerly known as Instantaneous Problem Detector for Clusters (IPD/OS), this tool tracks OS resource consumption on each RAC node at the node, process, and device level, and also collects and analyzes the cluster-wide data. The tool stores real-time operating metrics in the CHM repository and reports an alert when certain metrics pass their resource utilization thresholds. It can also replay the historical data to trace back what was happening at the time of a failure, which makes it very useful for root cause analysis of many issues that occur in the cluster, such as node evictions.
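Historical data is replayed with the oclumon command-line utility that ships with CHM. A minimal sketch follows; the node name and time windows are illustrative, and the exact option syntax for your version can be verified with oclumon dumpnodeview -h:

$ oclumon dumpnodeview -allnodes -last "00:15:00"
$ oclumon dumpnodeview -n k2r720n1 -s "2012-11-14 21:00:00" -e "2012-11-14 21:30:00"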
For Oracle Clusterware 10.2 through 11.2.0.1, the CHM/OS tool is a standalone tool that you need to download and install separately. Starting with Oracle Grid Infrastructure 11.2.0.2, the CHM/OS tool is fully integrated with the Oracle Grid Infrastructure. In this section we focus on this integrated version of the CHM/OS.
The CHM tool is installed in the Oracle Grid Infrastructure home and is activated by default in Grid Infrastructure 11.2.0.2 and later for Linux and Solaris, and 11.2.0.3 and later for AIX and Windows. CHM consists of two services: osysmond and ologgerd. osysmond runs on every node of the cluster to monitor and collect the OS metrics and send the data to the cluster logger service. ologgerd receives the information from all the nodes and stores it in the CHM repository. ologgerd runs on one node as the master service and, if the cluster has more than one node, on another node as a standby. If the master cluster logger service fails, the standby takes over as the master service and selects a new node for the standby. The following example shows the two processes, osysmond.bin and ologgerd:
$ ps -ef | grep -E 'osysmond|ologgerd' | grep -v grep
root 3595 1 0 Nov14 ? 01:40:51 /u01/app/11.2.0/grid/bin/ologgerd -m k2r720n1 -r -d
/u01/app/11.2.0/grid/crf/db/k2r720n2
root 6192 1 3 Nov08 ? 1-20:17:45 /u01/app/11.2.0/grid/bin/osysmond.bin
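You can also ask CHM which node currently hosts the master cluster logger service by using the oclumon utility; the output shown below is illustrative for this cluster:

$ oclumon manage -get master
Master = k2r720n1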
The preceding ologgerd daemon was started with '-d /u01/app/11.2.0/grid/crf/db/k2r720n2', which specifies the directory where the CHM repository resides. The CHM repository is a Berkeley DB-based database stored as *.bdb files in that directory, and it requires 1GB of disk space per node in the cluster.
$ pwd
/u01/app/11.2.0/grid/crf/db/k2r720n2
$ ls *.bdb
crfalert.bdb crfclust.bdb crfconn.bdb crfcpu.bdb crfhosts.bdb crfloclts.bdb crfts.bdb
repdhosts.bdb
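The repository path and size can also be queried with oclumon; the values shown here are illustrative, and on 11.2.0.x releases the reported size corresponds to the data retention period in seconds:

$ oclumon manage -get reppath
CHM Repository Path = /u01/app/11.2.0/grid/crf/db/k2r720n2
$ oclumon manage -get repsize
CHM Repository Size = 61645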
Oracle Clusterware 12cR1 enhances CHM by providing a highly available server monitor service and support for the Flex Cluster architecture. The CHM in Oracle Clusterware 12cR1 consists of three components:
osysmond
ologgerd
Oracle Grid Infrastructure Management Repository
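In the integrated releases, the CHM daemons run under the ora.crf resource of the Oracle High Availability Services stack, so their status can be checked with crsctl; the output below is illustrative:

$ crsctl stat res ora.crf -init
NAME=ora.crf
TYPE=ora.crf.type
TARGET=ONLINE
STATE=ONLINE on k2r720n1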
 
 