many programs work with certain key pieces of information over and over again, and the importance of information has little to do with how long ago the information was first accessed. Typically it is more important to know how many times the information has been accessed, or how recently the information was last accessed.
Another approach is called “least frequently used” (LFU). LFU tracks the number of accesses to each buffer in the buffer pool. When a buffer must be reused, the buffer that has been accessed the fewest number of times is considered to contain the “least important” information, and so it is used next. LFU, while it seems intuitively reasonable, has many drawbacks. First, it is necessary to store and update access counts for each buffer. Second, what was referenced many times in the past might now be irrelevant. Thus, some mechanism by which counts “expire” over time is often desirable. This also avoids the problem of buffers that slowly build up big counts because they get used just often enough to avoid being replaced. An alternative is to maintain counts for all sectors ever read, not just the sectors currently in the buffer pool. This avoids immediately replacing the buffer just read, which has not yet had time to build a high access count.
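As a rough illustration of the bookkeeping LFU requires, the following sketch (my own, not from the text) shows a tiny buffer pool that counts accesses and evicts the buffer with the smallest count. The class and method names (LFUBufferPool, acquireBuffer, readBlockFromDisk) and the 4096-byte block size are assumptions made only for the example; a real pool would also handle write-back of modified buffers and count expiration.

import java.util.HashMap;
import java.util.Map;

class LFUBufferPool {
    private static class Buffer {
        final int blockId;
        final byte[] data;
        long accessCount;                    // LFU bookkeeping: one count per buffer
        Buffer(int blockId, byte[] data) { this.blockId = blockId; this.data = data; }
    }

    private final Map<Integer, Buffer> pool = new HashMap<>();
    private final int capacity;

    LFUBufferPool(int capacity) { this.capacity = capacity; }

    /** Return the buffer holding blockId, reading it from disk on a miss. */
    byte[] acquireBuffer(int blockId) {
        Buffer buf = pool.get(blockId);
        if (buf == null) {
            if (pool.size() >= capacity) {
                evictLeastFrequentlyUsed();
            }
            buf = new Buffer(blockId, readBlockFromDisk(blockId));
            pool.put(blockId, buf);
        }
        buf.accessCount++;                   // every access updates the count
        return buf.data;
    }

    private void evictLeastFrequentlyUsed() {
        Buffer victim = null;
        for (Buffer b : pool.values()) {     // linear scan; a priority queue would scale better
            if (victim == null || b.accessCount < victim.accessCount) {
                victim = b;
            }
        }
        pool.remove(victim.blockId);         // a dirty victim would be written back here
    }

    private byte[] readBlockFromDisk(int blockId) {
        return new byte[4096];               // placeholder for a real disk read
    }
}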
The third approach is called “least recently used” (LRU). LRU simply keeps the buffers in a list. Whenever information in a buffer is accessed, this buffer is brought to the front of the list. When new information must be read, the buffer at the back of the list (the one least recently used) is taken and its “old” information is either discarded or written to disk, as appropriate. This is an easily implemented approximation to LFU and is often the method of choice for managing buffer pools unless special knowledge about information access patterns for an application suggests a special-purpose buffer management scheme.
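Because LRU simply keeps buffers ordered by most recent access, it maps naturally onto java.util.LinkedHashMap in access order. The sketch below is my own, not the book's code; the class name LRUBufferPool, the 4096-byte block size, and readBlockFromDisk are assumptions made only for the example.

import java.util.LinkedHashMap;
import java.util.Map;

class LRUBufferPool {
    private static final int BLOCK_SIZE = 4096;    // assumed block size for the example
    private final int capacity;
    private final LinkedHashMap<Integer, byte[]> pool;

    LRUBufferPool(int capacity) {
        this.capacity = capacity;
        // accessOrder = true: each get() moves the accessed buffer to the front of the list
        this.pool = new LinkedHashMap<Integer, byte[]>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, byte[]> eldest) {
                // Invoked after each put(); returning true discards the least
                // recently used buffer (a dirty buffer would be written back here).
                return size() > LRUBufferPool.this.capacity;
            }
        };
    }

    /** Return the buffer holding blockId, reading it from disk on a miss. */
    byte[] acquireBuffer(int blockId) {
        byte[] data = pool.get(blockId);     // a hit moves this buffer to the front
        if (data == null) {
            data = readBlockFromDisk(blockId);
            pool.put(blockId, data);         // may evict the buffer at the back of the list
        }
        return data;
    }

    private byte[] readBlockFromDisk(int blockId) {
        return new byte[BLOCK_SIZE];         // placeholder for a real disk read
    }
}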
The main purpose of a buffer pool is to minimize disk I/O. When the contents of a block are modified, we could write the updated information to disk immediately. But what if the block is changed again? If we write the block's contents after every change, we might perform many disk write operations that could have been avoided. It is more efficient to wait until either the file is to be closed, or the contents of the buffer containing that block are to be flushed from the buffer pool.
When a buffer's contents are to be replaced in the buffer pool, we only want to write the contents to disk if it is necessary. That is necessary only if the contents have changed since the block was originally read in from the file. The way to ensure that the block is written when necessary, but only when necessary, is to maintain a Boolean variable with the buffer (often referred to as the dirty bit) that is turned on when the buffer's contents are modified by the client. When the block is flushed from the buffer pool, it is written to disk if and only if the dirty bit has been turned on.
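One way to picture this mechanism is a buffer object that routes client writes through a method that sets the dirty bit, so that flushing performs the disk write only when the bit is on. The sketch below is mine, not the book's; the names Buffer, writeByte, flush, and writeBlockToDisk are made up for the example.

class Buffer {
    private final int blockId;
    private final byte[] data;
    private boolean dirty = false;    // the dirty bit: set when the client modifies the buffer

    Buffer(int blockId, byte[] data) {
        this.blockId = blockId;
        this.data = data;
    }

    /** All client modifications go through here so the dirty bit stays accurate. */
    void writeByte(int offset, byte value) {
        data[offset] = value;
        dirty = true;
    }

    byte readByte(int offset) {
        return data[offset];
    }

    /** Called when the buffer is flushed from the pool or the file is closed. */
    void flush() {
        if (dirty) {                  // write back if and only if the contents changed
            writeBlockToDisk(blockId, data);
            dirty = false;
        }
    }

    private void writeBlockToDisk(int blockId, byte[] data) {
        // placeholder for a real disk write
    }
}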
Modern operating systems support virtual memory. Virtual memory is a technique that allows the programmer to write programs as though there is more of the faster main memory (such as RAM) than actually exists. Virtual memory makes use