in the cache. If so, the reference count for that set is incremented and the data
is moved from its present position in the LRU list to the top of the list. If the
data is not in the cache, it is fetched from disk and placed at the top of the
LRU list, with its reference count initialized to 1. The LFU list is an ordered
list of all the data in the cache, arranged in non-decreasing order of
reference count.
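The lookup path just described can be sketched as follows; this is a minimal illustration under our own assumptions (the names `cache` and `lookup` are not from the text), using the key order of an ordered dictionary as the LRU list.

```python
from collections import OrderedDict

# Hypothetical sketch of the lookup path: `cache` maps each data set to
# its reference count, and the key order doubles as the LRU list
# (last key = top of the list, i.e. most recently used).
cache = OrderedDict()

def lookup(data):
    if data in cache:               # cache hit
        cache[data] += 1            # increment the reference count
        cache.move_to_end(data)     # move the data to the top of the LRU list
    else:                           # cache miss: fetch from disk
        cache[data] = 1             # reference count initialized to 1
    return data
```

Eviction of a victim set on a miss is deliberately omitted here; it is the job of the LF-LRU replacement policy described next.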
The LF-LRU policy performs replacement as follows: when a piece of data is
not found in the cache, it is fetched from disk and replaces, in the cache, the
set of data that has the lowest reference count. Among the data sharing that
minimum reference count, the one that leaves the cache is the one at the
bottom of the LRU list, namely the one that, among them, has not been
requested for the longest time. In essence, the cache keeps the most recently
and most frequently used data. Each time a set of data leaves the cache, its
reference count is cleared. This mechanism allows the cache to adapt to
changes in the nature of the data users demand: it is therefore a kind of
adaptive algorithm, which allows the most frequently requested data, among
the M available, to change over time. Note that the scheduling policy for the
queue of requests at the server node remains the same as specified earlier.
The only thing that changes is the 'place' where the data requested by users
is searched (in this case first in the cache and then on disk, whereas
previously it was only on disk).
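The full LF-LRU policy can be sketched in a few lines; this is a simplified illustration under our own assumptions (the class name, `capacity`, and `access` are ours, not from the text), again letting the key order of an ordered dictionary serve as the LRU list.

```python
from collections import OrderedDict

# Hypothetical sketch of the LF-LRU replacement policy.
# Key order is the LRU order: first key = bottom of the LRU list
# (least recently used), last key = top (most recently used).
class LFLRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = OrderedDict()    # data set -> reference count

    def access(self, data):
        if data in self.counts:        # hit: bump count, move to top
            self.counts[data] += 1
            self.counts.move_to_end(data)
            return
        if len(self.counts) >= self.capacity:
            # Victim: lowest reference count; ties are broken in LRU
            # order, i.e. the first key among the minima is the one
            # at the bottom of the LRU list.
            min_count = min(self.counts.values())
            victim = next(k for k in self.counts
                          if self.counts[k] == min_count)
            del self.counts[victim]    # its count is cleared on eviction
        self.counts[data] = 1          # loaded from disk, count starts at 1
```

For example, with capacity 2, accessing a, a, b and then c evicts b: both b and the new entry start with count 1, but a has count 2, so b holds the minimum and leaves the cache.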
2.7 LRU-K algorithm
This is an evolution of the LRU algorithm. The LRU algorithm, as
mentioned earlier, deletes from the list of recent references the data that has
not been used for the longest time. The LRU-K algorithm numbers the
requests submitted to the server with a natural number n representing the
instant at which the request was made (the first request received by the
server is assigned the number 1, the second the number 2, the third 3, and so
on). Now, suppose that at a certain instant t the requests r1, r2, ..., rh have
been made, and let q be a generic set of data among the M sets present in
memory.
We define the K-distance Dt(q, K), relative to the set q at time t, as the
number of steps to execute 'backward', sweeping the index n starting from
h, until the K-th most recent request for the set q is reached. For example, if
100 requests r1, r2, ..., r100 have arrived by instant t, and requests
r7, r9, r17, r22, r36, r74, r78, and r91 refer to the set q, then with K = 5
(and h = 100) the K-distance Dt(q, 5) is 100 - 22 = 78: to reach the first of
the last 5 requests for the set q, we have to execute 78 backward steps.
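The worked example above can be reproduced in a few lines; this is a sketch under our own naming (the function `k_distance` and its parameters are assumptions, not from the text), computing h minus the index of the K-th most recent request for q.

```python
# Hypothetical sketch of the K-distance from the example above:
# h is the number of requests arrived so far, q_indices the indices
# of the requests that refer to the set q, and K the lookback depth.
def k_distance(h, q_indices, k):
    recent = sorted(q_indices)[-k:]   # the K most recent requests for q
    return h - recent[0]              # backward steps to the K-th most recent

q_requests = [7, 9, 17, 22, 36, 74, 78, 91]
print(k_distance(100, q_requests, 5))   # -> 78, i.e. 100 - 22
```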