Based on the planned growth rate of the enterprise, the expected growth in the number of users resulting from
increased business is determined. Based on these growth rates, the appropriate hardware configurations are selected.
Although capacity planning targets a future period, the planning is done based on the current workload
and system resources. The following factors influence the capacity of the servers (a brief sketch of how these
might be derived follows the list):
CPU utilization—CPU time consumed over a specific period of time
Transaction throughput—Number of transactions completed over a period of time
Service time—Average time taken to complete a transaction
Response time—Average time taken to respond to a user request
Transaction capacity—Number of transactions the server can handle
Queue length—Average number of transactions waiting to be serviced
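As a minimal sketch, assuming transactions are logged with their start and completion times, several of these factors can be derived directly from such a log; the Txn structure, field names, and the use of Little's Law to approximate queue length are illustrative assumptions, not a prescribed method.

from dataclasses import dataclass

@dataclass
class Txn:
    start: float   # seconds from the start of the measurement window
    end: float     # seconds from the start of the measurement window

def capacity_metrics(txns: list[Txn], window_seconds: float) -> dict:
    """Derive throughput, average service time, and approximate queue length."""
    completed = [t for t in txns if t.end <= window_seconds]
    throughput = len(completed) / window_seconds                  # transactions per second
    service_times = [t.end - t.start for t in completed]
    avg_service = sum(service_times) / len(service_times) if service_times else 0.0
    # Little's Law: the average number of transactions in the system is the
    # completion rate multiplied by the average time spent in the system.
    avg_queue_length = throughput * avg_service
    return {
        "throughput_tps": throughput,
        "avg_service_time_s": avg_service,
        "avg_queue_length": avg_queue_length,
    }

# Example: four transactions observed during a 10-second window.
sample = [Txn(0.0, 0.4), Txn(0.1, 0.6), Txn(0.5, 1.1), Txn(2.0, 2.3)]
print(capacity_metrics(sample, window_seconds=10.0))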
Planning normally starts when the business requests support for an increased user workload or product enhancements.
The business requirements, the current application and database configuration, and the growth requirements
should be analyzed carefully to quantify the benefits of switching to a RAC environment. In the case of a
new application and database configuration, a similar analysis should be performed to determine whether RAC would be
necessary to meet the requirements of current and future business needs.
The first step in the quantification process is to analyze the current business requirements such as the following:
Are there requirements that justify or require the systems to be up and running 24 hours a day,
every day of the year?
Are there sufficient business projections on the number of users that would be accessing the
system and what the user growth rate will be? (A simple growth projection is sketched after this list.)
Will there be a steady growth rate indicating that the current system configuration
might not be sufficient?
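As a minimal sketch of the growth question, assuming a compound annual growth rate for the user population, a projection such as the following can indicate when the user capacity of the current configuration would be exceeded; the function name, parameters, and figures are illustrative assumptions.

def years_until_capacity_exceeded(current_users, annual_growth_rate, capacity_users, horizon_years=10):
    # Project the user population forward at a compound annual growth rate and
    # return the first year in which it exceeds the capacity of the current
    # configuration, or None if it stays within capacity over the horizon.
    users = float(current_users)
    for year in range(1, horizon_years + 1):
        users *= 1.0 + annual_growth_rate
        if users > capacity_users:
            return year
    return None

# Example: 2,000 users today, 25 percent yearly growth, current system sized for 4,000 users.
print(years_until_capacity_exceeded(2000, 0.25, 4000))   # -> 4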
Once answers to these questions have been determined, a simulation model should be constructed to establish
the scalability requirements for the planning or requirements team. While developing the simulation model, the
architecture of the system and application should be taken into consideration.
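In its simplest form, such a simulation model might reduce to checking each candidate configuration's utilization against the projected workload. The sketch below assumes an M/M/c-style utilization estimate, an illustrative 70 percent utilization target, and made-up workload figures; it stands in for whatever modeling tool the planning team actually uses.

def utilization(arrival_rate_tps, service_time_s, servers):
    # Offered load (arrival rate x service time) divided by the number of servers:
    # the utilization (rho) of an M/M/c-style queueing model.
    return (arrival_rate_tps * service_time_s) / servers

def pick_configuration(arrival_rate_tps, service_time_s, candidates, max_utilization=0.7):
    # Return the smallest candidate server count that keeps utilization under the target.
    for servers in sorted(candidates):
        if utilization(arrival_rate_tps, service_time_s, servers) <= max_utilization:
            return servers
    return None   # none of the candidates can sustain the projected workload

# Example: 120 transactions per second at 40 ms of service time each;
# evaluate configurations with 4, 8, or 16 processing units.
print(pick_configuration(120.0, 0.04, [4, 8, 16]))   # -> 8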
The simulation should determine whether any specific hardware architecture (symmetric multiprocessing [SMP], non-
uniform memory access [NUMA], and so forth) would be required to implement the system. During this
initial hardware architecture evaluation, the question may arise as to whether a single-instance configuration would
be sufficient or a clustered solution would be required. If a single-instance configuration is deemed sufficient, it
must then be determined whether the system requires protection from disasters. If disaster protection is a
requirement, it may be implemented using the Oracle Data Guard (ODG) feature.
Applications intended to run in a clustered configuration (e.g., clustered SMP, NUMA clusters) should be cluster-aware,
so that the benefits can be measured in terms of overall performance, availability (such as failover), and load
balancing. (Availability here refers to the availability of the systems to service users.) More importantly, the application
should scale when additional resources are provided. From a performance perspective, the initial measurements
would determine the required throughput of the application. Under normal scenarios, performance is measured
by the number of transactions the system can process per second or by IOPS (input/output operations per second).
Performance can also be measured by the throughput of the system, utilizing a simple formula such as the following:
Throughput = number of operations performed by the application ÷ unit of time used for measurement
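For example, using illustrative numbers:

operations_completed = 180_000     # operations performed by the application
measurement_window_s = 3_600       # unit of time used for measurement (one hour)
throughput = operations_completed / measurement_window_s
print(throughput)                  # -> 50.0 operations per second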
There are two levels of throughput measurement: the minimum throughput expected and the maximum
throughput required. The tendency is to justify capacity with an average throughput (also called the ideal throughput),
which can be completely misleading. It is always in the best interest of the test to drive the system to the maximum
possible throughput, to the point where its resources are fully saturated.
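A minimal sketch of such a test, assuming a hypothetical run_load_test hook that drives the benchmark at a given concurrency and returns measured transactions per second, might step up the load until throughput stops improving; the 2 percent gain threshold and the synthetic response curve are assumptions for illustration.

def find_saturation(run_load_test, concurrency_levels, min_gain=0.02):
    # Step up concurrency; stop when throughput no longer improves by at least
    # min_gain (2 percent by default) and report the last productive level.
    best_tps = 0.0
    best_level = concurrency_levels[0]
    for level in concurrency_levels:
        tps = run_load_test(level)         # measured transactions per second
        if tps <= best_tps * (1.0 + min_gain):
            break                          # throughput has flattened: saturation
        best_tps, best_level = tps, level
    return best_level, best_tps

# Example with a synthetic response curve standing in for a real benchmark run.
curve = {10: 200.0, 20: 390.0, 40: 700.0, 80: 980.0, 160: 1000.0, 320: 995.0}
print(find_saturation(lambda c: curve[c], sorted(curve)))   # -> (160, 1000.0)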
 