Truncate table t1;
...
nam='enq: RO - fast object reuse' ela= 976 name|mode=1380909062 2=65598 0=2 obj#=22
...
nam='enq: RO - fast object reuse' ela= 1022 name|mode=1380909062 2=65598 0=1 obj#=22
nam='enq: RO - fast object reuse' ela= 478 name|mode=1380909057 2=65598 0=2 obj#=22
...
nam='local write wait' ela= 270 file#=11 block#=280 p3=0 obj#=89956
nam='local write wait' ela= 1315 file#=11 block#=281 p3=0 obj#=89956
nam='local write wait' ela= 1702 file#=11 block#=282 p3=0 obj#=89956
nam='DFS lock handle' ela= 651 type|mode=1128857605 id1=13 id2=5 obj#=89956
nam='DFS lock handle' ela= 563 type|mode=1128857605 id1=13 id2=1 obj#=89956
nam='DFS lock handle' ela= 1492 type|mode=1128857605 id1=13 id2=2 obj#=89956
For further discussion about the DFS lock handle mechanism, see Chapter 11.
In most cases, developers use TRUNCATE or DROP commands on work tables that hold data temporarily while it is being processed. Global temporary tables (GTTs) are the preferred method for such transient data, rather than DDL commands. Rows in a GTT are session specific and not visible to any other session. By default, rows in a GTT are discarded after a commit. You can also choose to create a GTT with ON COMMIT PRESERVE ROWS to preserve rows in the GTT across commits.
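For example, a GTT can be created with either commit behavior (a minimal sketch; the table and column names here are illustrative):

-- Rows are discarded at commit (the default behavior, ON COMMIT DELETE ROWS)
CREATE GLOBAL TEMPORARY TABLE work_orders_gtt (
  order_id NUMBER,
  status   VARCHAR2(10)
) ON COMMIT DELETE ROWS;

-- Rows survive commits and are discarded when the session ends
CREATE GLOBAL TEMPORARY TABLE work_orders_keep_gtt (
  order_id NUMBER,
  status   VARCHAR2(10)
) ON COMMIT PRESERVE ROWS;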
Because GTT data is session specific, GTTs do not suffer from the ill effects of DDL commands. A DELETE command on a GTT does not trigger any checkpoint, and there is no need to invalidate parse locks, either. So, code developed using GTTs will scale better than code relying on DDL statements, in both single-instance and RAC databases.
I am not advocating that you never use TRUNCATE or other DDL statements in your code. Rather, be aware of the ill effects of DDL statements in a RAC database and use the method that best suits your requirements. For example, if you must delete all rows from a big table (100K+ rows) infrequently in your code (or ad hoc), then you should use a TRUNCATE command instead of DELETE statements.
A common reason developers shy away from GTTs is that the optimizer can choose an inefficient execution plan for SQL statements accessing a GTT. It is common for an application to gather schema-level statistics, thereby collecting statistics on GTTs as well. The problem is that the statistics collection process has not populated any rows in those GTTs, and therefore zero-row statistics are stored for them. The optimizer applies special considerations to zero-row statistics, leading to inefficient execution plans.
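A quick way to see this effect (a sketch; the schema and table names are illustrative):

-- Schema-level statistics gathering touches the (empty) GTT as well
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APPS');

-- The GTT now carries zero-row statistics
SELECT num_rows FROM dba_tab_statistics
WHERE owner = 'APPS' AND table_name = 'WORK_ORDERS_GTT';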
One option to resolve the GTT statistics issue is to delete the statistics on the GTT and then lock them; dynamic sampling is triggered when a table has no statistics. Another option is to collect statistics after populating a representative number of rows in the GTT, and then lock the table statistics. However, this method assumes that all sessions hold a uniform number of rows, which may not be accurate. Release 12c introduces a new feature, session-level private statistics for GTTs, to resolve the statistics issue. You can choose to collect session-level private statistics, shared statistics, or both on a GTT. If private statistics are available, the optimizer uses them to choose an optimal execution plan; if not, the optimizer uses shared statistics.
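Both approaches can be expressed with the DBMS_STATS package; the following is a sketch using the illustrative names from above:

-- Option 1: delete and lock statistics so that dynamic sampling kicks in
EXEC DBMS_STATS.DELETE_TABLE_STATS(ownname => 'APPS', tabname => 'WORK_ORDERS_GTT');
EXEC DBMS_STATS.LOCK_TABLE_STATS(ownname => 'APPS', tabname => 'WORK_ORDERS_GTT');

-- Release 12c: prefer session-level private statistics for the GTT
EXEC DBMS_STATS.SET_TABLE_PREFS('APPS', 'WORK_ORDERS_GTT', 'GLOBAL_TEMP_TABLE_STATS', 'SESSION');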
Sequence Cache
Sequences provide unique values, not necessarily strictly sequential values. Owing to a common misconception among developers and designers, however, applications are often designed on the assumption that sequences will provide strict sequential values.
At the first access to a sequence, a range of sequence values is cached in the instance SGA, 20 values by default. Subsequent accesses to that sequence retrieve values from the cache until the cache is depleted. After the cached values are exhausted, the next access to the sequence caches 20 more values in the instance SGA. The data dictionary table seq$ keeps a permanent record of the highest cached value for every sequence in the database. Replenishing the sequence cache updates the seq$ table, marking a new highest cached value for that sequence. In a single-instance database, such updates to a dictionary table require a row cache lock.
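As an illustration, the cache size is declared at sequence creation and can be changed later (the sequence name is illustrative):

-- Cache 20 values per instance (20 is also the default if CACHE is omitted)
CREATE SEQUENCE order_seq CACHE 20;

-- A larger cache reduces how often the seq$ row must be updated
ALTER SEQUENCE order_seq CACHE 1000;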