Freeing the Pages in a Data Cache
One obvious requirement before reading a page from disk into the data cache is that there has to be
memory available in the data cache to support the read. If there aren't any free pages, the read operation
will have nowhere to place the incoming data page. One option would be to search the data cache for a
"freeable" page every time an operation needed to put something into the cache, but that approach
would perform very poorly. Instead, SQL Server strives to keep a supply of
free pages ready for use. Thus, when an operation needs a data page, it can simply grab one from the set
of free cache pages.
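The benefit of keeping a ready supply of free pages can be sketched with a small example. The following is an illustrative model only (the names BufferPool and free_list are invented here, not SQL Server internals): grabbing a buffer from a free list is a constant-time operation, whereas hunting for a victim would mean scanning the cache on every read.

```python
from collections import deque

class BufferPool:
    """Toy buffer pool: each slot is either free or holding a cached disk page."""

    def __init__(self, size):
        # All slots start out free; SQL Server keeps such a list replenished
        # so a read never has to scan the whole cache for a reusable page.
        self.free_list = deque(range(size))
        self.cached = {}  # page_id -> buffer slot (the "hash table")

    def read_page(self, page_id):
        # O(1): take a buffer from the free list instead of searching the cache.
        if not self.free_list:
            raise RuntimeError("free list exhausted; replacement policy must run")
        slot = self.free_list.popleft()
        self.cached[page_id] = slot
        return slot

pool = BufferPool(4)
pool.read_page(101)  # uses a free buffer without searching the cache
```

When the free list runs dry, the read cannot proceed until the replacement policy described next frees some pages.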
A data page that has been placed in the data cache also carries some housekeeping information,
including when the page was last accessed. The page replacement policy, whose job is to determine
which pages can be freed, uses that information in a technique known as LRU-K. LRU-K examines both
how recently and how often a page has been accessed to decide whether the page can be freed.
If the page can't be freed, it simply remains in the data cache. If it can be freed, it is written to
disk if it's dirty (that is, the page in memory has changed since it was read from disk), removed
from the hash table, and put on the list of free cache pages.
This technique naturally keeps valuable (frequently referenced) pages in the data cache while
returning rarely used pages to the free list.
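The LRU-K decision can be sketched as follows. This is a simplified illustration, not SQL Server's actual implementation: the class name AccessTracker and the max_age cutoff are invented for the example, and K is set to 2 (LRU-2, a common choice).

```python
from collections import defaultdict

K = 2  # LRU-2: base the decision on the two most recent accesses

class AccessTracker:
    """Toy LRU-K bookkeeping: a sketch, not SQL Server's internal structures."""

    def __init__(self):
        # page_id -> up to K most recent access times (newest last)
        self.history = defaultdict(list)

    def touch(self, page_id, now):
        h = self.history[page_id]
        h.append(now)
        del h[:-K]  # keep only the K most recent accesses

    def freeable(self, page_id, now, max_age):
        # A page referenced fewer than K times, or whose K-th most recent
        # access is older than max_age, is a candidate for the free list.
        h = self.history[page_id]
        if len(h) < K:
            return True
        return now - h[0] > max_age

tracker = AccessTracker()
tracker.touch("page-A", now=100)
tracker.touch("page-A", now=105)   # hot page: two recent accesses
tracker.touch("page-B", now=10)    # touched once, long ago

print(tracker.freeable("page-A", now=110, max_age=50))  # False: stays cached
print(tracker.freeable("page-B", now=110, max_age=50))  # True: can be freed
```

Because the decision looks at the K-th most recent access rather than only the last one, a page that is touched once in a large scan does not displace pages that are referenced repeatedly.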
Previous versions of SQL Server allowed a table to be marked so that its pages would never be freed
and would therefore remain in the data cache permanently. This is known as pinning the table.
That option is no longer available in SQL Server 2005; Microsoft believes the cache management is
now sophisticated enough to make pinning obsolete.
The work of applying this technique falls to three operations: primarily the various user threads,
but also the lazywriter and checkpoint processes.
Lazywriter: The lazywriter process periodically examines the size of the free list. If the list is
below a threshold that depends on the size of the data cache, the lazywriter traverses the data
cache and replenishes the free list, writing any dirty pages to disk before adding them to the
list.
User process: A user process performs this function when it needs to read a page from disk into
the data cache, operating in much the same manner as the lazywriter process. Once the read
begins, the free list is checked to determine whether it has become too small; after all, the user
process has itself consumed cache pages from the free list for its own read.
Checkpoint: The checkpoint process also periodically examines the data cache, searching for
"freeable" data pages. However, unlike the lazywriter and user processes, the checkpoint
process never puts data pages back on the free list.
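The replenishment pass shared by the lazywriter and user processes can be sketched roughly as follows. This is a simplified model with invented names (Page, replenish_free_list) standing in for the real internals, and a plain age cutoff standing in for the LRU-K decision:

```python
class Page:
    """Toy cached page with the housekeeping the replacement policy needs."""

    def __init__(self, page_id, buffer, last_access, dirty=False):
        self.page_id, self.buffer = page_id, buffer
        self.last_access, self.dirty = last_access, dirty

    def freeable(self, now, max_age):
        # Stand-in for the LRU-K decision: pages not accessed recently
        # are candidates for the free list.
        return now - self.last_access > max_age

    def write_to_disk(self):
        self.dirty = False  # flush the changed page before reusing its buffer

def replenish_free_list(cache, free_list, threshold, now, max_age):
    """Toy lazywriter pass: if the free list is too small, scan the cache,
    flush freeable dirty pages, and move their buffers onto the free list."""
    if len(free_list) >= threshold:
        return  # free list is healthy; nothing to do
    for page in list(cache.values()):
        if not page.freeable(now, max_age):
            continue              # valuable page: leave it in the cache
        if page.dirty:
            page.write_to_disk()  # dirty pages are written out first
        del cache[page.page_id]   # remove from the hash table
        free_list.append(page.buffer)

cache = {
    "hot":  Page("hot",  buffer=0, last_access=95),
    "cold": Page("cold", buffer=1, last_access=10, dirty=True),
}
free_list = []
replenish_free_list(cache, free_list, threshold=2, now=100, max_age=50)
print(free_list)       # [1]: the cold dirty page was flushed and freed
print("hot" in cache)  # True: the recently accessed page stays cached
```

The checkpoint process, by contrast, would perform only the write_to_disk step for dirty pages and skip the removal and free-list steps entirely.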
Monitoring the Data Cache
A useful indicator of data cache performance is the Buffer Cache Hit Ratio
performance counter. It reports the percentage of data pages found in the data cache as opposed
to being read from disk. A value of 90 percent or greater is considered ideal: a value of 90 percent
means that pages were found in the data cache 90 percent of the time, whereas the remaining
10 percent of the pages had to be read from disk.
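As a back-of-the-envelope illustration of how such a ratio is derived (plain arithmetic on hypothetical numbers, not the counter's internal implementation):

```python
def buffer_cache_hit_ratio(cache_hits, disk_reads):
    """Percentage of page requests satisfied from the data cache
    rather than by a physical read from disk."""
    total = cache_hits + disk_reads
    return 100.0 * cache_hits / total

# 900 of 1,000 page requests found in the cache -> 90 percent hit ratio
print(buffer_cache_hit_ratio(cache_hits=900, disk_reads=100))  # 90.0
```

A sustained drop in this ratio suggests the working set no longer fits in the data cache, forcing more physical reads.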