For these reasons, having an enormous (hundreds or thousands of megabytes) redo log buffer is not practical;
Oracle will never be able to use it all, since it flushes the buffer pretty much continuously. The logs are written
with sequential writes, as compared to the scattered I/O DBWn must perform. Doing large sequential writes like this
is much more efficient than doing many scattered writes to various parts of a file. This is one of the main reasons
for having an LGWR and redo logs in the first place: the efficiency of writing out just the changed bytes using
sequential I/O outweighs the additional I/O incurred. Oracle could write database blocks directly to disk when you
commit, but that would entail a lot of scattered I/O of full blocks, which would be significantly slower than letting
LGWR write the changes out sequentially.
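You can observe this behavior from the statistics the instance maintains. The following queries are a sketch; V$SYSSTAT and the LOG_BUFFER parameter are standard, though the values will of course depend on your system and workload:

    -- Redo generated versus the number of writes LGWR has issued; the ratio
    -- illustrates LGWR writing modest amounts nearly continuously.
    SELECT name, value
      FROM v$sysstat
     WHERE name IN ('redo size', 'redo writes', 'redo blocks written');

    -- The size of the redo log buffer itself:
    SELECT value
      FROM v$parameter
     WHERE name = 'log_buffer';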
Note: Starting with Oracle 12c, Oracle will start additional Log Writer Worker (LGnn) processes on multiprocessor
machines to increase the performance of writing to the online redo log files.
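If you are curious whether your instance has started any of these workers, a quick illustrative query against V$PROCESS will show them (whether any LGnn workers appear depends on your platform and configuration):

    -- LGWR is the coordinating log writer; LG00, LG01, and so on are the
    -- worker processes, started only in some configurations.
    SELECT pname, pid, spid
      FROM v$process
     WHERE pname = 'LGWR'
        OR pname LIKE 'LG0%';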
ARCn: Archive Process
The job of the ARCn process is to copy an online redo log file to another location when LGWR fills it up. These archived
redo log files can then be used to perform media recovery. Whereas the online redo logs are used to fix the data
files in the event of a power failure (when the instance is terminated), archived redo logs are used to fix the data
files in the event of a hard disk failure. If we lose the disk drive containing the data file /u01/dbfile/ORA12CR1/system01.dbf, we can go
to our backups from last week, restore that old copy of the file, and ask the database to apply all of the archived and
online redo logs generated since that backup took place. This will catch up that file with the rest of the data files in our
database, and we can continue processing with no loss of data.
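As a sketch of that recovery in SQL*Plus terms (user-managed recovery shown for clarity; RMAN automates the same steps, and the exact procedure depends on which file was lost):

    -- A SYSTEM data file can only be restored with the database mounted, not open.
    STARTUP MOUNT
    -- (First restore last week's copy of the file from backup, outside the
    --  database, using OS tools or RMAN.)
    -- Apply all archived and online redo generated since that backup:
    RECOVER DATAFILE '/u01/dbfile/ORA12CR1/system01.dbf';
    ALTER DATABASE OPEN;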
ARCn typically copies online redo log files to at least two other locations (redundancy being a key to not losing data).
These other locations may be disks on the local machine; more appropriately, at least one will be located on another
machine altogether, to guard against a catastrophic failure. In many cases, these archived redo log files are copied by some
other process to some tertiary storage device, such as tape. They may also be sent to another machine to be applied to a
standby database, a failover option offered by Oracle. We'll discuss the processes involved in that shortly.
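The destinations ARCn writes to are visible from the database. V$ARCHIVE_DEST is a standard view; which rows are populated depends on your LOG_ARCHIVE_DEST_n settings:

    -- Show the configured archive destinations and their status.
    SELECT dest_name, status, destination
      FROM v$archive_dest
     WHERE status <> 'INACTIVE';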
DIAG: Diagnosability Process
In past releases, the DIAG process was used exclusively in a RAC environment. As of Oracle 11g, with the ADR
(Automatic Diagnostic Repository), it is responsible for monitoring the overall health of the instance, and it captures
information needed in the processing of instance failures. This applies both to single-instance configurations and to
multi-instance RAC configurations.
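The ADR locations the instance writes to can be seen from the database itself; V$DIAG_INFO is a standard view in Oracle 11g and above:

    -- Where the ADR base, ADR home, trace files, and alert log live.
    SELECT name, value
      FROM v$diag_info
     WHERE name IN ('ADR Base', 'ADR Home', 'Diag Trace', 'Diag Alert');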
FBDA: Flashback Data Archiver Process
This process is available with Oracle 11g Release 1 and above. It is the key component of the flashback data archive
capability: the ability to query data "as of" long periods of time ago (for example, to query data in a table as
it appeared one year ago, five years ago, and so on). This long-term historical query capability is achieved by
maintaining a history of the changes made to every row in a table over time. This history, in turn, is maintained
by the FBDA process in the background. The process works soon after a transaction commits: FBDA reads the UNDO
generated by that transaction and rolls back the changes the transaction made. It then records these rolled-back
rows (the original values) in the flashback data archive for us.
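As a minimal sketch of the feature in use (the archive name FBA1, the USERS tablespace, the EMP table, and the retention period are illustrative, not from the text above):

    -- Create an archive that retains five years of row history.
    CREATE FLASHBACK ARCHIVE fba1 TABLESPACE users RETENTION 5 YEAR;

    -- Ask that history be maintained for a table; FBDA does the work in the background.
    ALTER TABLE emp FLASHBACK ARCHIVE fba1;

    -- Query the table as it appeared one year ago.
    SELECT * FROM emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' YEAR);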
 
 