Massive Data Changes
It is inevitable that every DBA will have to perform massive data changes in a production environment, for reasons such as application upgrades or data purges. In a RAC database, performing massive changes requires a careful strategy.
For massive updates, if you plan to use parallelism, consider setting the parallel_force_local parameter to true. This initialization parameter allocates all PX servers on the local node, reducing the overhead of global cache latency. However, this strategy assumes that a single server has enough resources to complete the task. On the other hand, if the activity is spread across many nodes, then CPU, I/O, and memory usage is spread among all nodes. So, the choice between one instance and many depends upon (a) whether a single node can handle the workload without incurring additional latency, and (b) whether the global cache latency is large enough that keeping the workload on one node will markedly improve performance.
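For example, the parameter can be enabled for just the instance where the update session connects, using the same sid clause shown later in this chapter (the instance name RAC1 is illustrative):

alter system set parallel_force_local=true scope=both sid='RAC1';

With this setting in place, a parallel UPDATE issued from a session on RAC1 allocates its PX servers only on that node.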
If you choose to use multiple instances, then table partitioning, or data partitioning through effective SQL tuning, is another effective strategy to reduce global cache transfers and improve application performance. Further, index blocks deserve close attention, as they can increase global cache transfers. For massive updates on indexed columns, or for massive inserts, it is preferable to drop the indexes before the change and re-create them after the updates complete.
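A sketch of that drop-and-rebuild approach (the table, index, and predicate are illustrative):

drop index sales_status_idx;
update sales set status = 'ARCHIVED' where sale_date < date '2012-01-01';
commit;
create index sales_status_idx on sales (status);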
Another typical production issue arises during massive index creation. Index creation is usually performed with many PX servers. Because PX servers read blocks directly into the PGA, bypassing the buffer cache, global cache latency is minimal. Still, PX servers distribute blocks among themselves, which can flood the interconnect with PX messages. So, allocating all PX servers within the same node is preferable if a single node has sufficient resources to complete the index creation. Refer to Chapter 12 for further discussion of parallel query execution.
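For example, a parallel index build can be confined to the connected node by forcing local PX allocation before issuing the DDL; the degree of parallelism is then typically reset after the build (object names and the degree of 16 are illustrative):

alter system set parallel_force_local=true scope=both sid='RAC1';
create index sales_date_idx on sales (sale_date) parallel 16 nologging;
alter index sales_date_idx noparallel;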
Further, if an index creation fails, always restart it while connected to the same instance. Temporary space allocated for an index (in a permanent tablespace) is soft reserved. If you restart the index creation connected to another instance, the subsequent build will trigger uncaching of free space in the other nodes, raising the possibility of SS enqueue contention and DFS lock handle contention.
Performance Metrics Collection
It is critical to collect performance metrics on a regular basis in a RAC cluster. In most cases, RAC node problems are side effects of some other OS issue: network problems, CPU starvation, or memory starvation can all lead to node failures and reboots. Root cause analysis of a node reboot requires a deep review of OS metrics, and it is almost impossible to identify the root cause of a node reboot without sufficient OS performance data.
At a minimum, OSWatcher or a similar tool should collect performance data on all nodes and retain those statistics for at least a few days. These OS statistics make root cause analysis of node failures possible.
Further, AWR reports or Statspack data are essential for understanding pattern changes in database performance, so at least a few weeks of history should be kept in the database for future analysis.
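For example, AWR retention and snapshot frequency can be adjusted with the DBMS_WORKLOAD_REPOSITORY package; both values are in minutes, and the figures shown (30-day retention, 30-minute snapshots) are illustrative:

execute dbms_workload_repository.modify_snapshot_settings(retention => 43200, interval => 30);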
Parameter File Management
The initialization parameter file is common to all instances. It is a good idea to keep the server parameter file (spfile) in ASM storage, as ASM storage is available to all RAC database instances.
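For example, an spfile can be created in an ASM disk group from an existing pfile (the disk group name, database name, and paths are illustrative):

create spfile='+DATA/MYDB/spfileMYDB.ora' from pfile='/tmp/initMYDB.ora';

Each instance's local init<SID>.ora then needs only a single entry pointing to the shared file, such as SPFILE='+DATA/MYDB/spfileMYDB.ora'.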
Initialization parameters can be specified at the scope of a single instance or of the entire database. For example, the following command modifies the pga_aggregate_target parameter for the RAC1 instance only.
alter system set pga_aggregate_target=10240M scope=both sid='RAC1';
The following command modifies the parameter for all instances. If a parameter is specified both at the database level and at the instance level, the instance-level setting overrides the database-level setting.
alter system set pga_aggregate_target=5120M scope=both sid='*';