You have to realize that the first query to touch the data will incur some additional processing if the block needs
to be cleaned. Knowing this, you should "touch" the data yourself after the UPDATE. You just loaded or modified a ton
of data; you need to analyze it at the very least. Perhaps you need to run some reports to validate the load. Doing so will
clean the blocks out so the next query doesn't have to. Better yet, since you just bulk-loaded the data, you
now need to refresh the statistics anyway. Running the DBMS_STATS utility to gather statistics may well clean out all of
the blocks, since it simply uses SQL to query the information and would naturally clean the blocks out as it goes along.
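Putting that advice into practice might look like the following sketch. The table name BIG_TABLE is an assumption for illustration; any bulk-loaded table would do, and the exact parameters you pass to DBMS_STATS will depend on your environment.

```sql
-- After a bulk load into a hypothetical table BIG_TABLE, gather statistics.
-- The full scans DBMS_STATS performs will visit (and thereby clean) the
-- blocks the load left behind, so later queries don't pay that cost.
begin
    dbms_stats.gather_table_stats(
        ownname => user,
        tabname => 'BIG_TABLE',
        cascade => true );   -- gather index statistics as well
end;
/
```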
Log Contention
This, like the "cannot allocate new log" message, is something the DBA must fix, typically in conjunction with the
system administrator. However, a developer might detect it as well if the DBA isn't watching closely
enough.
If you are faced with log contention, what you might observe is a large wait time on the "log file sync" event
and long write times evidenced in the "log file parallel write" event in a Statspack report. If you see this, you may be
experiencing contention on the redo logs; they are not being written fast enough. This can happen for many reasons.
One application reason (one the DBA can't fix, but the developer must) is that you are committing too frequently,
such as committing inside a loop doing INSERTs. As demonstrated in the Chapter 6 section "What Does a
COMMIT Do?", committing too frequently, aside from being a bad programming practice, is a surefire way to
introduce lots of log file sync waits. Assuming all of your transactions are correctly sized (you are not committing more
frequently than your business rules dictate), the most common causes for log file waits that I've seen are as follows:
Putting redo on a slow device: The disks are just performing poorly. It is time to buy
faster disks.
Putting redo on the same device as other files that are accessed frequently: Redo is designed to
be written with sequential writes and to be on dedicated devices. If other components of your
system (even other Oracle components) are attempting to read and write to this device at the
same time as LGWR, you will experience some degree of contention. Here, you want to ensure
LGWR has exclusive access to these devices if at all possible.
Mounting the log devices in a buffered manner: Here, you are using a "cooked" file system
(not RAW disks). The operating system is buffering the data, and the database is also
buffering the data (redo log buffer). Double-buffering slows things down. If possible, mount
the devices in a "direct" fashion. How to do this varies by operating system and device, but it
is usually possible.
Putting redo on a slow technology, such as RAID-5: RAID-5 is great for reads, but it is
generally terrible for writes. As we saw earlier regarding what happens during a COMMIT, we
must wait for LGWR to ensure the data is on disk. Using any technology that slows this down
is not a good idea.
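The commit-inside-a-loop anti-pattern mentioned earlier can be sketched as follows. The table name T is an assumption; the sketch presumes T has the same shape as ALL_OBJECTS so the row-by-row and set-based forms are directly comparable.

```sql
-- Anti-pattern: each COMMIT forces a 'log file sync' wait, one per row.
begin
    for x in ( select * from all_objects )
    loop
        insert into t values x;
        commit;  -- thousands of log file sync waits
    end loop;
end;
/

-- Better: one set-based statement, one transaction, one wait at the end.
insert into t select * from all_objects;
commit;
```

Beyond the wait events, the single-statement form also generates less total redo and undo, so it is the right choice on performance grounds alone.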
If at all possible, you really want at least five dedicated devices for logging, and optimally six to mirror your
archives as well. In these days of 200GB, 300GB, 1TB and larger disks, this is getting harder, but if you can set aside
four of the smallest, fastest disks you can find and one or two big ones, you can affect LGWR and ARCn in a positive
fashion. To lay out the disks, you would break them into three groups (see Figure 7-1):
Redo log group 1: Disks 1 and 3
Redo log group 2: Disks 2 and 4
Archive: Disk 5 and optionally disk 6 (the big disks)
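That layout could be expressed to the database roughly as follows. The mount points (/disk1 through /disk5), file names, and the 1GB member size are assumptions for illustration; the point is that each group's members live on their own dedicated spindles and the archive destination sits on the large disk.

```sql
-- Hypothetical mount points: /disk1../disk4 are the small, fast disks;
-- /disk5 is a big disk for archives. Each redo log group is mirrored
-- across two dedicated disks, alternating so that while LGWR writes one
-- group, ARCn can read the other from different spindles.
alter database add logfile group 1
    ( '/disk1/redo01a.log', '/disk3/redo01b.log' ) size 1g;
alter database add logfile group 2
    ( '/disk2/redo02a.log', '/disk4/redo02b.log' ) size 1g;

-- Send archived redo to the big disk.
alter system set log_archive_dest_1 = 'location=/disk5/arch';
```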