Tip
In the data collected under step 6, if gc buffer busy waits are high, one option to optimize is to consider
indexing features such as reverse key indexes. A reverse key index spreads insertions across the index leaf blocks, reducing concurrency wait
times. Once again, while reverse key indexes are ideal for queries that perform precise index-based retrieval, they can be
extremely inefficient for range-based operations, because reversing the key bytes destroys the logical ordering that range scans depend on.
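As a minimal sketch, a reverse key index is created with the REVERSE keyword. The table and column names below (ORDERS, ORDER_ID) are hypothetical:

```sql
-- Hypothetical example: a sequence-populated key on ORDERS.ORDER_ID sends
-- all concurrent inserts to the same right-most leaf block. The REVERSE
-- keyword stores each key with its bytes reversed, scattering consecutive
-- values across many leaf blocks.
CREATE INDEX orders_id_rix ON orders (order_id) REVERSE;

-- An existing index can also be rebuilt in reverse key form:
ALTER INDEX orders_id_rix REBUILD REVERSE;
```

Because the stored key bytes are reversed, a range predicate such as WHERE order_id BETWEEN 100 AND 200 can no longer be satisfied by an index range scan.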
Undo Block Considerations
Excessive undo block shipment and contention for undo buffers usually happen when index blocks containing active
transactions from multiple instances are read frequently.
When a SELECT statement needs to read a block containing active transactions, it has to undo the changes to create a
consistent read (CR) version of the block. If the active transactions in the block belong to more than one instance, local
and remote undo information must be combined for the CR operation. Depending on the number of index blocks changed by multiple
instances and the duration of the transactions, undo block shipment may become a bottleneck.
This usually happens in applications that read recently inserted data very frequently and commit frequently.
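One rough way to gauge how much CR traffic the interconnect is carrying is to query the global cache statistics in gv$sysstat; the statistic names below are standard RAC statistics, but thresholds are workload-dependent:

```sql
-- Per-instance view of consistent-read block traffic over the interconnect.
-- High and growing values here, together with gc buffer busy waits, point
-- toward the undo/CR-fabrication overhead described above.
SELECT inst_id, name, value
FROM   gv$sysstat
WHERE  name IN ('gc cr blocks received', 'gc cr block receive time')
ORDER  BY inst_id, name;
```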
Techniques that can reduce such situations include the following:
- Shorter transactions reduce the likelihood that an index block in the cache contains
uncommitted data, thereby reducing the need to access undo information for a consistent read.
- As explained earlier, increasing sequence cache sizes can reduce inter-instance concurrent
access to index leaf blocks. A CR version of index blocks modified by only one instance can be
fabricated without the need of remote undo information.
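As a minimal sketch of the second technique, the cache size of an existing sequence (ORDER_SEQ is a hypothetical name) can be increased with ALTER SEQUENCE:

```sql
-- A larger cache gives each instance its own wide range of sequence values,
-- so concurrent inserts from different instances land in different index
-- leaf blocks instead of competing for the same right-hand leaf block.
ALTER SEQUENCE order_seq CACHE 1000;
```

In RAC, combining a large CACHE with NOORDER (the default) lets each instance draw numbers from its own cached range, at the cost of values not being issued in strict global order.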
Similar to INSERT operations, users can experience high waits due to cluster overheads even with SELECT
statements. Unlike an INSERT operation, a SELECT is less stressful when it comes to the locking and sharing of data
blocks. However, the time taken to find a block can definitely increase when resources are scarce or when the volume
of requests is larger than the system can handle. Almost always, poor performance from SELECT
operations is directly related to poorly written SQL statements.
Hard Parses
Hard parses are very costly for Oracle's optimizer. The amount of validation that has to be performed during a parse
consumes a significant amount of resources. The primary reason for repeated hard parses is the uniqueness of the
queries present in the library cache (in the SGA). When a session executes a query, Oracle generates a hash value for the
query text, parses the statement, and loads it into the library cache. Subsequently, when another session executes
the same query and its text matches a statement already present in the library cache, the cached cursor is reused
and no hard parse is involved. However, if the query does not match the previous one, for example because values
hard coded in the WHERE clause make its text unique, it has to go through the full Oracle parsing algorithm; this is considered
a hard parse and is very costly. The total number of hard parses can be determined using the following query:
Script: MVRACPDnTap_hardparses.sql
SELECT pa.inst_id,
       pa.sid,
       pa.VALUE "Hard Parses",
       ex.VALUE "Execute Count"
FROM   gv$sesstat pa,
       gv$sesstat ex
WHERE  pa.sid = ex.sid
AND    pa.inst_id = ex.inst_id
AND    pa.statistic# = (SELECT statistic# FROM v$statname WHERE name = 'parse count (hard)')
AND    ex.statistic# = (SELECT statistic# FROM v$statname WHERE name = 'execute count')
ORDER  BY pa.inst_id, pa.sid;
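To illustrate why hard-coded literals defeat cursor sharing, consider this hypothetical pair of statements (the ORDERS table and ORDER_ID column are assumed names; the bind-variable syntax shown is SQL*Plus):

```sql
-- Each literal makes the statement text, and therefore its hash, unique:
-- every execution triggers a new hard parse.
SELECT order_total FROM orders WHERE order_id = 1001;
SELECT order_total FROM orders WHERE order_id = 1002;

-- With a bind variable the text is identical on every execution, so the
-- parsed cursor in the library cache is reused (at most a soft parse).
VARIABLE oid NUMBER
EXEC :oid := 1001
SELECT order_total FROM orders WHERE order_id = :oid;
```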
 
 