Of course, if you must do full table scans on huge tables, you should use parallelism to improve the performance
of those queries.
Application Affinity
Application affinity is an important tool to combat the performance impact of global cache workload delays.
Essentially, if an application component (such as multiple threads of a batch process) accesses a few objects
aggressively, then modify that application component to connect to a single instance. Access to the local buffer cache
is an order of magnitude faster than remote buffer cache access, so application components that access a few objects
aggressively will perform more efficiently if the blocks are cached in the local instance.
An effective application affinity scheme should translate to physical segment-level affinity, not just logical-level
separation. For example, a client designed a batch process to execute in multiple instances, with each instance
processing data for a disjoint list of organizations; that is, organizations 1 through 100 were processed by PROD1, 101
through 200 by PROD2, and so on. Although the batch program logically divided the data processing among multiple
instances, this logical separation did not translate to the segment level. All threads of the batch program were
accessing the same segments and transferring blocks aggressively between the instances, leading to a sharp increase
in global cache events. Partitioning avoids this problem: list partitioning the tables by organization produced
segment-level affinity, so that the batch processes running on each node accessed a disjoint set of partitions,
improving batch process performance.
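A minimal sketch of such a scheme follows. The table name, columns, and organization ranges are illustrative only; range partitioning is shown here because the organization sets are contiguous, but list partitioning achieves the same segment-level affinity for arbitrary sets:

```sql
-- Hypothetical table. Each partition holds the organizations processed
-- by one instance, so the batch threads on each node touch a disjoint
-- set of segments and avoid cross-instance block transfers.
CREATE TABLE batch_orders (
  org_id     NUMBER        NOT NULL,
  order_id   NUMBER        NOT NULL,
  order_data VARCHAR2(100)
)
PARTITION BY RANGE (org_id) (
  PARTITION orgs_prod1 VALUES LESS THAN (101),      -- processed on PROD1
  PARTITION orgs_prod2 VALUES LESS THAN (201),      -- processed on PROD2
  PARTITION orgs_rest  VALUES LESS THAN (MAXVALUE)
);
```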
Creating services is another excellent way to implement application affinity. For example, if you have two
major application groups (such as supply chain and manufacturing applications) in the same database, then you
can create a service for each application and separate the application workloads onto different instances. For example,
create two services, SCM and MANUFAC. Assign PROD1 as the preferred instance for the SCM service and PROD2
as the preferred instance for the MANUFAC service. With this service design, supply chain tables will be accessed
predominantly in the PROD1 instance and the manufacturing tables predominantly in the PROD2 instance, improving
block collocation. The dynamic remastering feature should also help, by remastering almost all supply chain tables
to the PROD1 instance and all manufacturing tables to the PROD2 instance, thereby reducing the global cache
impact further.
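A sketch of the corresponding service definitions using the srvctl utility, assuming a database named PROD with instances PROD1 and PROD2 (all names are illustrative):

```shell
# Create the SCM service with PROD1 preferred and PROD2 available for failover.
srvctl add service -db PROD -service SCM -preferred PROD1 -available PROD2

# Create the MANUFAC service with PROD2 preferred and PROD1 available.
srvctl add service -db PROD -service MANUFAC -preferred PROD2 -available PROD1

# Start the services and verify their placement.
srvctl start service -db PROD -service SCM
srvctl start service -db PROD -service MANUFAC
srvctl status service -db PROD -service SCM
```

Application connections then use the service name (rather than the instance SID) in their connect strings, so each workload lands on its preferred instance while retaining failover to the other node.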
If one database server has enough capacity to handle an application component, then keep all of that component's
connections on a single node. Of course, if one node cannot handle the workload, then you should design affinity
at the sub-component level.
Pipes
Database pipes are single-instance objects and do not span multiple instances. If your application uses pipes, then
the code must be modified to access pipes locally, such that all producers and consumers of a given pipe connect to
the same instance. Because pipes do not work well in a RAC database, use of the Advanced Queuing feature is
recommended instead.
Application Change Deployment
Application change deployment should be carefully controlled in a busy RAC database. Even in a single-instance
database, modifying a heavily used package while sessions are executing it leads to severe library cache lock and
pin waits.
In a RAC database, modifying heavily used packages (or their dependent objects) can also lead to global enqueue
waits, because library cache locks and pins are globalized as enqueues. Sessions can queue on these globalized locks,
leaving the database almost hung. The database is not actually hung, but the queuing causes serialization, because
each session must acquire the global lock, validate the package, and then proceed. Because the sessions are queued
 