Small- vs. Large-Scale Cluster Setups
It's an open secret that most IT requirements are derived from business needs, and those needs dictate whether your environment is built as a small- or a large-scale cluster setup. This part of the chapter focuses on some useful comparisons between small- and large-scale cluster setups and also addresses the complexity involved in large cluster setups.
It is difficult to say outright whether a small-scale cluster setup is better than a large-scale one, as the needs of one organization are totally dissimilar to those of another. However, once you thoroughly understand the benefits and risks of the two types of setup, you can decide which is the best option to proceed with.
Typically, in any cluster setup there is a high degree of coordination among the CRS, ASM, and instance processes across the nodes.
Typically, a large-scale cluster setup is a complex environment with a large deployment of resources. In a large-scale cluster setup, the following situations can be anticipated:
• When the current environment is configured with too many resources, for example, hundreds of databases, listeners, and application services across the nodes, there is a high probability of considerable delay in starting the resources automatically on a particular node after a node eviction. Irrespective of the number of resources configured or running on the local node, on node reboot Clusterware has to scan through the entire list of resources registered in the cluster, which can delay startup on that node (see the resource-enumeration sketch after this list).
• It can sometimes be time consuming and difficult to gather the required information from the various logs across all nodes in a cluster when Clusterware-related issues are confronted (see the log-collection sketch after this list).
• Any ASM disk- or diskgroup-related activity requires ASM communication and coordinated actions across all ASM instances in the cluster.
• If an ASM instance on a particular node suffers any operational issues, the other ASM instances across the nodes will be impacted; this might lead to performance degradation, instance crashes, and so on.
• When shared storage LUNs are prepared, they must be made available across all cluster nodes. If for any reason one node lacks ownership of or permissions on a LUN (disk), or simply cannot access it, the disk cannot be added to any diskgroup, and it can be quite difficult to track down which node has the problem (see the disk-visibility sketch after this list).
• Starting or stopping the cluster stack on multiple nodes in parallel will lock the GRD across the cluster and might cause issues bringing up the cluster stack subsequently on those nodes (see the rolling-restart sketch after this list).
• If you don't have an optimal configuration in place for a large-scale cluster implementation, frequent node evictions can be anticipated.
• You are likely to hit the upper limit on the maximum number of ASM diskgroups and disks when a huge number of databases is deployed in the environment (see the limit-check sketch after this list).
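To get a feel for how many resources Clusterware has to evaluate during a startup scan, you can enumerate everything registered in the cluster. The following is a minimal sketch using the standard crsctl commands, run as the Grid Infrastructure owner:

    # Show every resource registered with Clusterware and its state on each node
    crsctl stat res -t

    # Count the registered resources; each entry in the flat output starts with "NAME="
    crsctl stat res | grep -c '^NAME='

A count running into the hundreds is a hint that post-eviction restarts on a node may take noticeably longer.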
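For the log-collection problem, the Trace File Analyzer (TFA) utility bundled with recent Grid Infrastructure releases can pull diagnostics from every node in a single pass. This is a minimal sketch; whether tfactl is available depends on your version and installation:

    # Collect Clusterware, ASM, and database diagnostics from all nodes at once
    tfactl diagcollect

Without TFA, the alternative is to walk the Clusterware alert and agent logs node by node, which is exactly the tedium described above.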
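To track down which node cannot see or open a newly provisioned LUN, compare what ASM discovery reports on every node. A minimal sketch, assuming passwordless ssh as the Grid owner, the same Grid home path on every node, and placeholder node names; kfod is the disk-discovery utility shipped in the Grid home:

    # Compare ASM disk discovery output across the cluster (node names are placeholders)
    for node in node1 node2 node3; do
      echo "== $node =="
      ssh "$node" "$ORACLE_HOME/bin/kfod disks=all"
    done

A LUN that appears on some nodes but not others points straight at the node with the ownership, permission, or access problem.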
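To avoid locking the GRD from several nodes at once, stop and start the stack serially rather than in parallel. A minimal sketch, run as root, with a placeholder Grid home path and node names:

    # Restart the Clusterware stack one node at a time instead of in parallel
    GRID_HOME=/u01/app/12.1.0/grid    # placeholder; adjust to your installation
    for node in node1 node2 node3; do
      echo "== Restarting CRS on $node =="
      ssh "$node" "$GRID_HOME/bin/crsctl stop crs"
      ssh "$node" "$GRID_HOME/bin/crsctl start crs"
      # crsctl start crs returns before the stack is fully up, so poll until healthy
      until ssh "$node" "$GRID_HOME/bin/crsctl check crs"; do sleep 30; done
    done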
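To see how close an environment is to the ASM diskgroup and disk limits, count what is currently provisioned and compare against the limits documented for your release. A minimal sketch using standard asmcmd commands:

    # List the mounted diskgroups along with their space usage
    asmcmd lsdg

    # Count the ASM disks known to this instance (subtract one for the header line)
    asmcmd lsdsk | wc -l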
On the other hand, a smaller cluster setup with fewer nodes is easier to manage and involves less complexity. If it is possible to deploy several small clusters instead of one large-scale setup, that is often the better option, considering the complexity and effort the two types require.
 