To reference a column's value from the previous collection, prefix the column name with a single underscore (_). So if the current value is nice_t, the value from the previous collection is _nice_t. Consequently, the difference nice_t - _nice_t is the number of CPU ticks spent executing niced processes in user mode since the previous collection. Summing these deltas across all CPU modes gives the total number of CPU ticks elapsed since the last collection. The percentage of time the CPU spent executing niced user processes is then the ratio of niced CPU ticks to total ticks, normalized to a percentage by multiplying by 100. The same logic applies to all other CPU modes, such as system or idle.
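As a sketch, a compute expression for the niced-CPU percentage might look like the following. Note that the user_t, sys_t, and idle_t column names here are illustrative assumptions; a real host metric would reference whichever mode counter columns its fetchlet actually returns:

```xml
<!-- nice_t is the cumulative counter returned by the fetchlet;
     user_t, sys_t, and idle_t are hypothetical sibling counter columns -->
<ColumnDescriptor NAME="nice_t" TYPE="NUMBER" TRANSIENT="TRUE"/>
<ColumnDescriptor NAME="nice_pct" TYPE="NUMBER"
    COMPUTE_EXPR="((nice_t - _nice_t) * 100) /
                  ((user_t - _user_t) + (nice_t - _nice_t) +
                   (sys_t - _sys_t) + (idle_t - _idle_t))"/>
```

The delta in the numerator and the sum of deltas in the denominator both cover the same collection interval, so the interval length itself cancels out and __interval isn't needed here.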
Let's look at another example: calculating the rate of user rollbacks and commits. The Oracle database keeps counters of commits and rollbacks since instance startup, but a raw cumulative counter by itself isn't useful for monitoring. What's useful is the average number of commits and rollbacks per second between two collections. So if the previous commit count is c0, collected at time t0, and the current count is c1, collected at t1, then the number of commits per second is (c1 - c0)/(t1 - t0). I don't explicitly reference timestamps in metric definitions because EM12c adds the timestamp implicitly (there is a way to collect it explicitly, but that's rarely required). As a result, we don't have access to the timestamp value from a COMPUTE_EXPR definition. However, all we need is the interval.
Even though you define an interval in the default collection, you shouldn't hard-code that value in COMPUTE_EXPR, because (1) the user can customize it and (2) the real interval might differ slightly, since the configured interval is only a target. Fortunately, EM12c provides an embedded variable, __interval (note the double-underscore prefix), that returns the number of seconds since the last collection. Thus, a metric reporting the number of commits and rollbacks per second could look like the one in Listing 10-16.
Listing 10-16. Using __interval in COMPUTE_EXPR Columns
<Metric NAME="transactions_stat" TYPE="TABLE">
  <TableDescriptor>
    <ColumnDescriptor NAME="commits" TYPE="NUMBER" TRANSIENT="TRUE"/>
    <ColumnDescriptor NAME="commits_ratio" TYPE="NUMBER"
                      COMPUTE_EXPR="(commits - _commits) / __interval">
      <Display>...</Display>
    </ColumnDescriptor>
    <ColumnDescriptor NAME="rollbacks" TYPE="NUMBER" TRANSIENT="TRUE"/>
    <ColumnDescriptor NAME="rollbacks_ratio" TYPE="NUMBER"
                      COMPUTE_EXPR="(rollbacks - _rollbacks) / __interval">
      <Display>...</Display>
    </ColumnDescriptor>
  </TableDescriptor>
  <QueryDescriptor>...</QueryDescriptor>
</Metric>
The example in Listing 10-16 is not from the sample host target, but imagine that the QueryDescriptor uses the SQL fetchlet and returns two columns: the cumulative numbers of commits and rollbacks for a database instance since startup. Say the fetchlet returns 1,000 for the commits column at the first collection. Because it's the first collection, there are no previous _commits or __interval values yet, so the computed column isn't calculated. At the second collection, 60 seconds later, imagine the fetchlet returning 1,300 for commits. In this case, the commits_ratio column would calculate to (1,300 - 1,000)/60, or 5 commits per second. The same logic applies to rollbacks.
Note that the fetchlet defined in the QueryDescriptor should return values only for the non-computed columns. You aren't required to push all computed columns to the very end of the TableDescriptor; you can place a computed column right after the columns it references. Computed columns are calculated by the agent, and you can observe the process in the Metric Browser. The agent tracks the values of TRANSIENT columns but doesn't submit them to the OMS, so they don't bloat the EM12c repository.
Compute expressions are also useful for unit conversion, say from bytes to kilobytes or megabytes when precision isn't needed, or from blocks to actual bytes if a metric returns block counts.
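A unit conversion of this kind could be sketched as follows; the size_bytes column name is a hypothetical example, and the expression assumes 1 MB = 1,048,576 bytes:

```xml
<!-- size_bytes is a hypothetical transient column returned by the fetchlet -->
<ColumnDescriptor NAME="size_bytes" TYPE="NUMBER" TRANSIENT="TRUE"/>
<!-- report the size in megabytes instead of raw bytes -->
<ColumnDescriptor NAME="size_mb" TYPE="NUMBER"
                  COMPUTE_EXPR="size_bytes / 1048576"/>
```

Marking the raw column TRANSIENT keeps the unconverted value out of the repository while still making it available to the compute expression on the agent.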