In-Depth Information
SCALING WITH ADDITIONAL NODES
Every time a node is added to the cluster, performance metrics, load-balancing characteristics, and resource utilization should be monitored, and any bottlenecks that are noticed should be addressed. Most of the time, the queries executed by load testing tools are generic in nature; because they are not efficiently written, the bottlenecks they expose could be dismissed as false bottlenecks that will not occur with real production code. That would be a wrong assumption. Not all application code is well written; we frequently encounter bad pieces of code hitting the database. Ideally, bad or poorly performing code should be fixed, but this does not always happen, for several reasons:
1. The application was originally developed in-house, but the developer has not left behind any documentation with which to fix the code.
2. The application code is complex, with intertwined business logic, so fixing it would involve an in-depth study of the business rules and could be expensive.
3. The application code belongs to a third party, so fixing it would mean redoing the customization every time an upgrade or patch is received from the application owners.
Under these circumstances, the rule of tuning the application code first has to be broken, and alternative methods of tuning, such as tuning database parameters, should be employed. Normally these areas are considered for performance optimization during the last phases of the testing cycle.
Step 5
Step 4 load testing was based on the criteria defined in Table 4-2. In this test, shown in Table 4-3, the scalability factor is increased from 1 to 10. (Ideally, the scalability factor should also have been increased gradually; however, due to limitations in the tool, this was not done.)
Table 4-3. RAP Phase II—Scalability Load on Two Servers
Test #    Scalability Factor    No. of Users    No. of Nodes    Iterations    Duration
3         10                    40              4               5             1 hour
Twenty minutes into the run, there was high I/O contention, and the latency numbers started tripling, per the Cluster Health Monitor (CHM) output illustrated in Figure 4-6. Combined with the I/O waits were high CPU utilization and response times that were not acceptable.
Figure 4-6. 11g CHM disk latency alarms
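CHM reports the latency at the operating system level; as a cross-check, a query such as the following sketch (an assumed GV$ query, not taken from this test run) can confirm the corresponding I/O wait events from inside the database on every instance.

-- Sketch: average wait time (ms) for common I/O wait events on each instance,
-- to corroborate the disk latency alarms reported by CHM.
SELECT inst_id,
       event,
       total_waits,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AS avg_wait_ms
FROM   gv$system_event
WHERE  event IN ('db file sequential read', 'db file scattered read',
                 'log file parallel write')
ORDER  BY inst_id, event;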