Profiling analysis: A profiler is a performance-analysis tool that, for a running application,
records the executed operations, the time it takes to perform them, and the utilization of
system resources (for example, CPU and memory). Some profilers gather data at the call
level, others at the line level. The performance data is gathered either by sampling the
application state at specified intervals or by automatically instrumenting the code or the
executable. Although the overhead associated with the former is much smaller, the data
gathered with the latter is much more accurate.
Generally speaking, both methods are needed to investigate performance problems. However, if good
instrumentation is available, profiling analysis is less frequently used. Table 1-2 summarizes the pros and cons of
these two techniques.
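The call-level data gathering described above can be illustrated with Python's standard-library profiler. This is only a minimal sketch: `cProfile` is a deterministic (instrumenting) call-level profiler, and the `work` function is a made-up stand-in for real application code.

```python
import cProfile
import io
import pstats

def work():
    # Stand-in for an application operation worth profiling.
    total = 0
    for i in range(100_000):
        total += i * i
    return total

# Instrument every function call made while the profiler is enabled.
profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# Summarize the recorded calls, sorted by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The report lists, per function, how many times it was called and how much time was spent in it, which is exactly the kind of data a call-level profiler records.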
Table 1-2. Pros and Cons of Instrumentation and Profiling Analysis
Technique: Instrumentation
  Pros: Possible to add timing information to key business operations. When available, can be dynamically activated without deploying new code. Context information (for example, about the user or the session) can be made available.
  Cons: Must be manually implemented. Covers single components only; no end-to-end view of response time. Usually, the format of the output depends on the developer who wrote the instrumentation code.

Technique: Profiling analysis
  Pros: Always-available coverage of the whole application. Multitier profilers provide an end-to-end view of the response time.
  Cons: May be expensive, especially for multitier profilers. Cannot always be (quickly) deployed in production. Overhead associated with profilers working at the line level may be very high.
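The instrumentation pros listed in Table 1-2 (timing for a key business operation, plus context such as the user) can be sketched with a small decorator. This is an illustrative example, not code from the book; the operation name `place_order` and the `user` parameter are assumptions.

```python
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("instrumentation")

def instrumented(operation):
    """Record elapsed time and context for a key business operation."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, user=None, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                # Context information (here, the user) travels with the timing.
                log.info("op=%s user=%s elapsed_ms=%.1f",
                         operation, user, elapsed_ms)
        return wrapper
    return decorator

@instrumented("place_order")
def place_order(order_id):
    time.sleep(0.01)  # stand-in for real work
    return order_id

place_order(42, user="alice")
```

Note the cons as well: this covers only the decorated operation (no end-to-end view), and the log format is whatever the developer chose here.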
It goes without saying that you can take advantage of instrumentation only when it is available. Unfortunately, all too often in practice, profiling analysis is the only option available.
When you take steps to solve a particular problem, note that thanks to beneficial side effects, other problems might be fixed as well (for example, reducing CPU usage might benefit other CPU-intensive operations and make them perform acceptably). Of course, the opposite can also happen: measures taken may introduce new problems. It is therefore essential to carefully consider all the possible side effects a specific fix may have, and to cautiously evaluate the inherent risks of introducing it. Clearly, all changes have to be carefully tested before being implemented in production.
Note that problems are not necessarily solved in production according to their priority. Some measures take much longer to implement than others. For example, the fix for a high-priority problem could require downtime or an application modification. As a result, although some measures might be implemented straight away, others might take weeks, if not months or longer, to be implemented.
On to Chapter 2
This chapter describes key issues in dealing with performance problems: why it is essential to approach performance problems at the right moment and in a methodical way, why understanding business needs and problems is crucial, and why it is necessary to agree on what good performance means.
Before describing how to answer the three questions in Figure 1-4, I need to introduce some key concepts that I reference in the rest of the book. For that purpose, Chapter 2 describes the processing performed by the database engine to execute SQL statements. In addition, I provide some information on instrumentation and define several frequently used terms.
 
 