Figure 1. Time overlapping in network communication
plenty of resources to do that, instead of the user nodes with limited available memory. Moreover, since the prefetching algorithm is executed by the memory node, potentially useful data blocks can be pushed without a prefetching command from the user node, so the extra communication cost is also avoided.
The basic operations of remote caching are "put page" and "get page" on the basic element, memory pages; they correspond to write and read operations on disk blocks of local disks. In most cases, a "write" or "put page" can be overlapped by an asynchronous operation, so its access latency can be ignored. Consequently, we only consider the "read" or "get page" operations and do not distinguish between the two terms.
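The asymmetry between the two operations can be illustrated with a minimal sketch: a "put page" is enqueued and handled by a background thread, so the caller returns immediately, while a "get page" must block for the data. All names here (`RemoteCache`, `put_page`, `get_page`) are illustrative, not taken from the chapter.

```python
# Sketch: "put page" latency is hidden by overlapping it with an
# asynchronous background operation; "get page" is synchronous.
import queue
import threading

class RemoteCache:
    def __init__(self):
        self.store = {}               # stands in for the remote memory node
        self._writes = queue.Queue()  # pending asynchronous puts
        threading.Thread(target=self._writer, daemon=True).start()

    def _writer(self):
        # Background thread drains the write queue; the (simulated) network
        # round trip happens here, overlapped with the caller's computation.
        while True:
            page_id, data = self._writes.get()
            self.store[page_id] = data        # simulated remote "put page"
            self._writes.task_done()

    def put_page(self, page_id, data):
        # Asynchronous: enqueue and return at once, so put latency is hidden.
        self._writes.put((page_id, data))

    def get_page(self, page_id):
        # Synchronous: the reader cannot proceed without the data, so this is
        # the operation whose latency the prefetching scheme targets.
        self._writes.join()  # (for the sketch only: wait for pending puts)
        return self.store.get(page_id)

cache = RemoteCache()
cache.put_page(7, b"block-7")  # returns immediately
assert cache.get_page(7) == b"block-7"
```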
SYSTEM DESIGN

VanderWiel et al. concluded that a data prefetching mechanism should address three basic questions (VanderWiel et al., 2000): 1) when is prefetching initiated, 2) where are prefetched data placed, and 3) what is prefetched? In this section, we mainly discuss these questions.

Prefetching Buffer

For each prefetched disk block, memory pages must be allocated to hold it before the actual reading starts. If the free physical memory is not enough, the operating system has to evict some of the obsolete memory pages. However, the prefetched memory pages may not be used at all, in which case the allocated memory pages are wasted. Therefore, we need to design a prefetching algorithm that maximizes the probability that a user node will use the prefetched memory pages pushed by a memory node.

We first propose a system policy by which a user node determines whether a prefetched memory page should be accepted. The policy is important because not all of the pushed memory pages should be accepted, otherwise the cache will be polluted; on the other hand, network bandwidth is wasted if the user node rejects too many pushed pages. In our scheme, a prefetching buffer is assigned to each user node. The prefetching buffer is a queue of free memory pages with a maximal size of k (0 < k < F), where F is the number of free memory pages in the user node. The system maintains the prefetching buffer as follows:

• If the prefetched page can be found in the file system cache, reject it; otherwise, accept it when k > 0;
• Else, if the size of the current prefetching buffer is less than k, allocate memory for the accepted page and add the page to the tail of the queue;
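The acceptance rules for pushed pages described above can be sketched as follows. The class and variable names (`PrefetchBuffer`, `fs_cache`) and the deque representation are assumptions for illustration; the chapter specifies only the rules, not an implementation, and any further maintenance rules (e.g., eviction) are outside this sketch.

```python
# Minimal sketch of the acceptance policy for pushed (prefetched) pages.
from collections import deque

class PrefetchBuffer:
    def __init__(self, k, fs_cache):
        assert k > 0              # 0 < k < F: buffer smaller than free memory
        self.k = k                # maximal number of buffered prefetched pages
        self.queue = deque()      # queue of prefetched pages (FIFO)
        self.fs_cache = fs_cache  # page ids already in the file system cache

    def offer(self, page_id, data):
        """Apply the maintenance rules to one pushed page; True if accepted."""
        if page_id in self.fs_cache:
            return False          # already cached locally: reject (avoid pollution)
        if len(self.queue) < self.k:
            self.queue.append((page_id, data))  # accept: add to tail of queue
            return True
        return False              # buffer full: reject (size has reached k)

buf = PrefetchBuffer(k=2, fs_cache={1})
assert buf.offer(1, b"a") is False  # in file system cache: rejected
assert buf.offer(2, b"b") is True   # accepted
assert buf.offer(3, b"c") is True   # accepted
assert buf.offer(4, b"d") is False  # buffer full (size == k)
```

Rejecting pages already present in the file system cache directly implements the anti-pollution rule, while the size bound k keeps the buffer from consuming all F free pages.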