How It Works
Processor capacity planning for Exadata is primarily a matter of understanding your current CPU demands. Data
from Oracle AWR is a good basis for forecasting Exadata CPU requirements and configuration details, and using
scripts to extract that data is a relatively straightforward way to accurately size your database workloads for
Exadata. As you can probably gather, the approach in this recipe is not limited to Exadata CPU capacity planning;
with slight adjustments to the last script in the solution of this recipe, you can conduct the same CPU
capacity-planning analysis for any Oracle target platform.
The code listings in this recipe select data from the Oracle AWR views DBA_HIST_SNAPSHOT, DBA_HIST_OSSTAT,
DBA_HIST_TIME_MODEL, and DBA_HIST_SYSSTAT. We used DBA_HIST_SNAPSHOT to provide a detailed listing of each
snapshot ID and its begin and end interval times. Using Oracle's LAG analytic function, we calculated the
minutes between intervals and used this duration to compute rate-based metrics.
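In simplified form, the LAG technique against DBA_HIST_SNAPSHOT looks like the following sketch. It is not the exact script from the solution, and it assumes a single-instance database (a RAC database would also need to filter or group by INSTANCE_NUMBER):

select snap_id,
       begin_interval_time,
       end_interval_time,
       -- Minutes since the prior snapshot: subtracting timestamps yields an
       -- INTERVAL DAY TO SECOND, which EXTRACT decomposes into day/hour/minute
       extract(day    from (end_interval_time -
               lag(end_interval_time) over (order by snap_id))) * 1440
     + extract(hour   from (end_interval_time -
               lag(end_interval_time) over (order by snap_id))) * 60
     + extract(minute from (end_interval_time -
               lag(end_interval_time) over (order by snap_id))) snap_minutes
from   dba_hist_snapshot
where  dbid = (select dbid from v$database)
order  by snap_id;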
DBA_HIST_OSSTAT was used to retrieve the per-snapshot aggregate processor core/thread counts as well as
host-based processor utilization metrics. These host-based utilization metrics are likely not relevant for an
Exadata processor sizing assessment: they can include CPU load from non-Oracle processes and from other Oracle
environments on the same host, and they also depend somewhat on the operating system and hardware that the
database runs on.
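A simplified sketch of the DBA_HIST_OSSTAT query follows; the stat names are as documented for that view, and note that BUSY_TIME is cumulative, so a LAG delta between snapshots is still needed to derive per-interval utilization:

select snap_id,
       max(decode(stat_name, 'NUM_CPU_CORES', value)) cpu_cores,
       max(decode(stat_name, 'NUM_CPUS',      value)) cpu_threads,
       max(decode(stat_name, 'BUSY_TIME',     value)) busy_time_cs  -- centiseconds, cumulative
from   dba_hist_osstat
where  stat_name in ('NUM_CPU_CORES', 'NUM_CPUS', 'BUSY_TIME')
group  by snap_id
order  by snap_id;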
DBA_HIST_TIME_MODEL can be used to measure and forecast Oracle-related CPU load. In the examples in this
recipe, we used the LAG analytic function to calculate differences between snapshot intervals for the DB time,
DB CPU, background cpu time, and RMAN cpu time statistics.
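A simplified version of that calculation is shown below. The time-model VALUE column is cumulative microseconds, and this sketch again assumes a single instance; the RMAN statistic appears in the time model under its full name, 'RMAN cpu time (backup/restore)':

select snap_id,
       stat_name,
       -- delta between consecutive snapshots, converted to seconds
       round((value - lag(value) over
              (partition by stat_name order by snap_id)) / 1000000, 1) secs_in_interval
from   dba_hist_time_model
where  stat_name in ('DB time', 'DB CPU', 'background cpu time',
                     'RMAN cpu time (backup/restore)')
order  by snap_id, stat_name;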
DBA_HIST_SYSSTAT was used to query logons per second and executions per second. This information is not
strictly required for processor capacity planning, but it is useful to see alongside Oracle CPU usage
statistics to correlate utilization with workload.
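In simplified form, that rate calculation divides each LAG delta by the snapshot interval length in seconds; this sketch is illustrative rather than the exact script from the solution:

select st.snap_id,
       st.stat_name,
       round((st.value - lag(st.value) over
              (partition by st.stat_name order by st.snap_id))
           -- casting the interval timestamps to DATE and subtracting gives
           -- elapsed days; multiply by 86400 for seconds
           / ((cast(sn.end_interval_time as date)
             - cast(sn.begin_interval_time as date)) * 86400), 2) rate_per_sec
from   dba_hist_sysstat  st
join   dba_hist_snapshot sn
  on   sn.snap_id         = st.snap_id
 and   sn.dbid            = st.dbid
 and   sn.instance_number = st.instance_number
where  st.stat_name in ('logons cumulative', 'execute count')
order  by st.snap_id, st.stat_name;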
We've attempted to make the approach relatively scientific, but there are certainly conditions under which a bit
of educated guesswork is required. Table 5-1 provides recommendations to help refine your assumptions and
perform your processor capacity planning with confidence.
Table 5-1. Refining our CPU planning assumptions

Risk or Assumption: Uncertain how many additional databases will be consolidated to Exadata.
Recommendation and Comments: As a first pass, strive to consolidate what makes sense from a storage capacity standpoint (see Recipe 5-5) and eliminate candidate databases when storage capacity becomes a limiter. CPU bandwidth is rarely a limiting factor when consolidating on Exadata; typically, disk capacity is the most important factor determining how many databases can fit on your database machine. Work with your management team to understand the consolidation goals for Exadata.

Risk or Assumption: Uncertain how to establish a resource load percentage for non-production databases.
Recommendation and Comments: Run the scripts in this recipe on all of your databases to arrive at a representative number, or run each step manually.

Risk or Assumption: Unsure how to establish a “margin of error” for calculations.
Recommendation and Comments: Be safe but not overly conservative; this is typically a judgment call by the administrator. Overestimating your margin of error could be the difference between a Quarter Rack and a Half Rack, or a Half Rack and a Full Rack, which equates to a significant amount of money.

Risk or Assumption: Historical AWR metrics are not representative of expected usage patterns.
Recommendation and Comments: Use the margin-of-error conditions to amplify your CPU requirement numbers. For example, if your business is planning an acquisition or a new application functionality rollout, use this information to your advantage.

Risk or Assumption: Unsure how to measure the core multiplier.
Recommendation and Comments: Refer to the TPC website (http://tpc.org) to find processor comparisons; a worked example follows this table.
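To make the last two rows concrete, consider a hypothetical worked example (the numbers here are illustrative, not from the recipe): suppose your AWR analysis shows a sustained peak of 24 busy CPU cores on your current platform, and published benchmark comparisons suggest each Exadata core delivers roughly 1.5 times the per-core throughput of your current processors. The raw requirement is then 24 / 1.5 = 16 Exadata cores, and applying a 25% margin of error yields 16 × 1.25 = 20 cores.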