In addition, as of version 12.1, you can see the parameters with which an operation was executed. For example,
the following query shows which parameters were used by the default gathering job during the last execution:
SQL> SELECT x.*
  2  FROM dba_optstat_operations o,
  3       XMLTable('/params/param'
  4                PASSING XMLType(notes)
  5                COLUMNS name VARCHAR2(20) PATH '@name',
  6                        value VARCHAR2(30) PATH '@val') x
  7  WHERE operation = 'gather_database_stats (auto)'
  8  AND start_time = (SELECT max(start_time)
  9                    FROM dba_optstat_operations
 10                    WHERE operation = 'gather_database_stats (auto)');
NAME                 VALUE
-------------------- ------------------------------
block_sample         FALSE
cascade              NULL
concurrent           FALSE
degree               NULL
estimate_percent     DEFAULT_ESTIMATE_PERCENT
granularity          DEFAULT_GRANULARITY
method_opt           DEFAULT_METHOD_OPT
no_invalidate        DBMS_STATS.AUTO_INVALIDATE
reporting_mode       FALSE
stattype             DATA
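Alternatively, as of version 12.1, the dbms_stats package can produce a formatted report about a logged operation through the report_single_stats_operation function. The following query is a minimal sketch that passes the id of the most recent automatic gathering job to that function (the detail_level and format parameters are left at their defaults):

SELECT dbms_stats.report_single_stats_operation(opid => id) AS report
FROM dba_optstat_operations
WHERE operation = 'gather_database_stats (auto)'
AND start_time = (SELECT max(start_time)
                  FROM dba_optstat_operations
                  WHERE operation = 'gather_database_stats (auto)');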
Be aware that the log information is purged by the same mechanism as the statistics history described earlier.
Both therefore have the same retention period.
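For example, you can check the current retention period (in days) and the timestamp up to which history is available with the get_stats_history_retention and get_stats_history_availability functions, and change the retention with the alter_stats_history_retention procedure. The following is a minimal sketch; the value of 31 days is only an example:

SELECT dbms_stats.get_stats_history_retention() AS retention_days,
       dbms_stats.get_stats_history_availability() AS availability
FROM dual;

BEGIN
  -- keep the statistics history (and hence the log) for 31 days
  dbms_stats.alter_stats_history_retention(retention => 31);
END;
/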
Strategies for Keeping Object Statistics Up-to-Date
The dbms_stats package provides many features for managing object statistics. The question is, how and when
should you use them to achieve a successful configuration? Answering this question is difficult; probably no definitive
answer exists. In other words, no single method can be applied in all situations. Let's examine how
to approach the problem.
The general rule, and probably the most important one, is that the query optimizer needs object statistics that
describe the data stored in the database. As a result, when data changes, object statistics should change as well. As you
may know, I am an advocate of gathering object statistics regularly. Those who oppose this practice argue that
if a database is running well, there is no need to regather object statistics. The problem with that approach is that, more often
than not, some object statistics depend on the actual data. For example, one statistic that commonly
changes is the low/high value of columns that contain data such as the timestamp associated with a transaction, a sale, or
a phone call. True, in a typical table only a few columns change in this way, but those that do are usually critical because
they're used over and over again by the application. In practice, I run into many more problems caused by object
statistics that aren't up-to-date than the other way around.
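To see how far the low/high values have drifted away from the actual data, you can decode the raw values stored in the data dictionary with the convert_raw_value procedure. The following PL/SQL block is a minimal sketch: the sales table and its sale_date column are hypothetical, and the DATE overload of convert_raw_value is assumed to be available in the release you're running.

DECLARE
  l_low  DATE;
  l_high DATE;
BEGIN
  FOR c IN (SELECT low_value, high_value
            FROM user_tab_col_statistics
            WHERE table_name = 'SALES'       -- hypothetical table
            AND column_name = 'SALE_DATE')   -- hypothetical column
  LOOP
    -- decode the raw dictionary representation into DATE values
    dbms_stats.convert_raw_value(c.low_value, l_low);
    dbms_stats.convert_raw_value(c.high_value, l_high);
    dbms_output.put_line('low: '||to_char(l_low, 'YYYY-MM-DD')||
                         '  high: '||to_char(l_high, 'YYYY-MM-DD'));
  END LOOP;
END;
/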
Obviously, it makes no sense to gather object statistics on data that never changes. Only stale object statistics
should be regathered. Therefore, it's essential to take advantage of the feature that logs the number of modifications
occurring to each table. In this way, you regather object statistics only for those tables experiencing substantial
modifications. By default, a table is considered stale when more than 10% of the rows change. This is a good default.
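The following statements are a minimal sketch of how to take advantage of that feature: flush the in-memory modification counters to the data dictionary, look at the number of modifications recorded for each table, and list the tables the database currently considers stale. The 10% threshold itself can be changed through the stale_percent preference (the sales table is hypothetical):

BEGIN
  -- write the in-memory modification counters to the data dictionary
  dbms_stats.flush_database_monitoring_info();
END;
/

SELECT table_name, inserts, updates, deletes
FROM user_tab_modifications;

SELECT table_name, stale_stats
FROM user_tab_statistics
WHERE stale_stats = 'YES';

BEGIN
  -- consider the (hypothetical) sales table stale when 5% of its rows change
  dbms_stats.set_table_prefs(ownname => user,
                             tabname => 'SALES',
                             pname   => 'STALE_PERCENT',
                             pvalue  => '5');
END;
/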