Chapter 3
RAC Operational Practices
by Riyaj Shamsudeen
When running a RAC cluster, some operational practices can reduce overall administration costs and improve
manageability. The design of a RAC cluster is critical to implementing these practices effectively. This chapter
discusses numerous RAC design patterns; implementing them will lead to better operational results.
Workload Management
Workload management is an essential tool that simplifies the management of RAC clusters. Skillful use of workload
management techniques can make the administration of a production cluster much easier. It can also speed up the
detection of failures and accelerate the failover process. Services, VIP listeners, and SCAN listeners play important roles
in workload management.
Workload management is centered on two basic principles:

1. Application affinity: optimal placement of application resource usage to improve the
performance of an application.
2. Workload distribution: optimal distribution of resource usage among available nodes to
maximize the use of cluster nodes.
Application affinity is a technique that keeps intensive access to database objects localized to improve application
performance. Latency to access a buffer in the local buffer cache is on the order of microseconds (if not nanoseconds
with recent, faster CPUs), whereas latency to access a buffer resident in a remote SGA is on the order of milliseconds,
typically 1-3 ms.¹ Disk access latency is roughly 3 to 5 ms for single-block reads, which is nearly the same as
remote buffer cache access latency, whereas local buffer cache access is orders of magnitude faster. So, if an application
or a group of application components (such as a set of batch programs) accesses some objects aggressively, connect those
components to a single instance so that object access is mostly localized, improving application performance.
A complementary (though not necessarily competing) technique to application affinity is workload distribution.
For example, if there are four nodes in a cluster, you would want to distribute the workload evenly among all four
nodes so that no single node is overloaded. Services are the key tool for distributing workload evenly among all nodes
of a cluster.
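To spread work across the whole cluster instead, a service can list every instance as preferred, with clients connecting through the SCAN so that connect-time load balancing distributes sessions; the names below are again hypothetical:

```shell
# Hypothetical four-node database ORCL: all instances are preferred,
# so sessions using the OLTP service are balanced across the cluster.
srvctl add service -db ORCL -service OLTP \
    -preferred ORCL1,ORCL2,ORCL3,ORCL4
srvctl start service -db ORCL -service OLTP

# Clients connect through the SCAN name (assumed here to be
# rac-scan.example.com), which hands connections to the least-loaded
# node offering the OLTP service, e.g.:
#   jdbc:oracle:thin:@//rac-scan.example.com:1521/OLTP
```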
¹ On Exadata platforms, remote buffer cache access latency is about 0.5 ms. Both faster CPUs and the InfiniBand fabric hardware
provide lower latency.
 