Sort memory. Sort memory is very challenging to tune. As sorts get large, if they fail
to fit into memory, they will spill to disk, and the disk is used as extended virtual mem-
ory. This is usually implemented through temporary tables. Some sorts are so large that
avoiding the spill to disk is impossible. Other sorts are so small that the size of sort
memory is really irrelevant. The major metrics to watch for are the rate of sort spills,
and the size of the spills. If sorts are spilling and the spill size is small enough that the
spill could be avoided by simply increasing the size of sort memory, then increasing the
sort memory is a good idea. The time spent in I/O is costly, and avoiding sort spills is
usually more important than savings in I/O achieved through reduced page misses in
the buffer pool. “Usually” is a loose term in this case, and this is another area where
automated tuning is really needed to do a good job. In most cases memory for sorting
is less than 5% of available memory on OLTP systems and less than 30% of memory
on decision support systems.
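The spill-size rule of thumb above can be sketched as a small heuristic. This is an illustrative sketch only: the function name, metric names, and the growth-factor threshold are assumptions, not any particular database's API.

```python
# Hypothetical heuristic: grow the sort heap only when sorts are spilling
# AND the spills are small enough that a modest heap increase could keep
# them in memory. Very large spills are unavoidable, so growing the heap
# for them just wastes memory.

def should_grow_sort_memory(spills_per_minute: float,
                            avg_spill_bytes: int,
                            sort_heap_bytes: int,
                            max_growth_factor: float = 4.0) -> bool:
    """Return True when a reasonable heap increase could avoid the spills."""
    if spills_per_minute == 0:
        return False  # no spills: the sort heap size is irrelevant here
    # The spill is avoidable if heap + spilled data would still fit within
    # a bounded multiple of the current heap size.
    needed = sort_heap_bytes + avg_spill_bytes
    return needed <= sort_heap_bytes * max_growth_factor

# Frequent small spills: worth growing the heap.
print(should_grow_sort_memory(120, 2 * 2**20, 32 * 2**20))   # → True
# Huge spills: avoiding disk is impossible, leave the heap alone.
print(should_grow_sort_memory(5, 10 * 2**30, 32 * 2**20))    # → False
```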
Hash join memory. Hash joins are a remarkably efficient method for joining tables,
provided the hashing structures fit in memory. When they don't, the result is an iterative
spilling scheme known as “hash loops.” The negative performance consequences of
hash loops are far more severe than spilled sorts. The metrics to watch are the number
and size of hash loops in the system. They are more important than spilled sorts, and
therefore usually more important than buffer pools. However, “usually” is a very strict
term. In most cases memory for hash joins is less than 5% of available memory on
OLTP systems and less than 15% of memory on decision support systems.
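A minimal sketch of why hash loops are so costly: when the build side's hash structures exceed the join heap, the probe input must be re-scanned once per heap-sized chunk of the build side. The function name and this simplified cost model are illustrative assumptions, not any specific database's implementation.

```python
import math

def probe_scans(build_bytes: int, join_heap_bytes: int) -> int:
    """Passes over the probe table: 1 if the build side fits in the join
    heap, otherwise one pass per heap-sized build chunk ("hash loops")."""
    return max(1, math.ceil(build_bytes / join_heap_bytes))

print(probe_scans(100 * 2**20, 256 * 2**20))  # build side fits: 1 scan
print(probe_scans(1 * 2**30, 256 * 2**20))    # hash loops: 4 scans
```

Under this model the I/O cost grows linearly with the deficit in join memory, which is why hash-loop metrics usually outrank both spilled sorts and buffer-pool misses.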
Communications buffers. Memory for communications, if constrained, injects a
huge latency into the system. As a general rule you'll want to feed this heap enough
memory so that none of the executing transactions or queries are stalled waiting for a
communication buffer to be available. Fortunately this memory is usually self-tuned.
For the databases where this matters, the vendors provide monitoring data you can
view to determine how much time is being spent waiting for this memory, and if the
number is high the heap should be increased. This is not usually a large percentage of
the system memory.
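The rule the text describes—grow the communications heap when transactions spend noticeable time waiting for a buffer—can be sketched as follows. The metric names and the 1% wait threshold are assumptions for illustration, not vendor-documented values.

```python
# Hedged sketch: recommend a larger communications buffer pool when the
# fraction of transaction time spent waiting for a buffer is significant.

def comm_heap_recommendation(wait_ms: float, total_ms: float,
                             current_buffers: int) -> int:
    """Return a recommended buffer count; unchanged when waits are negligible."""
    wait_fraction = wait_ms / total_ms if total_ms else 0.0
    if wait_fraction > 0.01:        # more than 1% of time stalled on buffers
        return current_buffers * 2  # illustrative doubling policy
    return current_buffers

print(comm_heap_recommendation(wait_ms=500, total_ms=10_000, current_buffers=64))  # → 128
print(comm_heap_recommendation(wait_ms=5, total_ms=10_000, current_buffers=64))    # → 64
```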
Lock memory. When lock memory is constrained the result can be catastrophic,
resulting in lock escalation and lock wait. Lock escalation means that due to the inabil-
ity to obtain further row locks, the database will start locking data at a much more
coarse level of granularity, such as locking pages or perhaps the entire table. When locks
escalate the impact on system concurrency is disastrous. In many databases this memory
is self-tuned, but if you need/want to tune it by hand the main metrics to watch are the
number and frequency of lock escalations and the amount of time transactions and que-
ries are spending waiting for locks (lock waits). If there is any significant degree of lock
wait occurring, you will usually need to increase the allocation of memory to the lock
structures. In most cases memory for locking structures is less than 7% of available system
memory, though to protect against peak requirements, this allocation may temporarily
need to be as high as 20% or more.
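Lock escalation can be illustrated with a toy model: when the lock list is full and no further row locks can be allocated, the database trades a table's row locks for a single coarse table lock, destroying concurrency on that table. This is a simplified sketch, not any specific database's locking implementation.

```python
class LockManager:
    """Toy lock manager that escalates row locks to a table lock when
    lock memory (modeled as a fixed lock count) is exhausted."""

    def __init__(self, max_locks: int):
        self.max_locks = max_locks
        self.row_locks: dict[str, set[int]] = {}  # table -> locked row ids
        self.table_locks: set[str] = set()
        self.escalations = 0

    def _total_locks(self) -> int:
        return sum(len(r) for r in self.row_locks.values()) + len(self.table_locks)

    def lock_row(self, table: str, row_id: int) -> None:
        if table in self.table_locks:
            return  # already covered by the coarse table lock
        if self._total_locks() >= self.max_locks:
            # Lock memory exhausted: escalate this table's row locks
            # to one table lock (coarser granularity, worse concurrency).
            self.row_locks.pop(table, None)
            self.table_locks.add(table)
            self.escalations += 1
            return
        self.row_locks.setdefault(table, set()).add(row_id)

mgr = LockManager(max_locks=3)
for rid in range(5):
    mgr.lock_row("orders", rid)
print(mgr.escalations)              # → 1: row locks traded for a table lock
print("orders" in mgr.table_locks)  # → True
```

Monitoring would watch `escalations` (here a simple counter) the same way the text suggests watching the database's escalation and lock-wait metrics.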