2.4 The RxW/S algorithm
The RxW/S algorithm also belongs to the mixed category, and it is much simpler than the previous one. Given the queue of incoming requests at the server, the algorithm estimates, for each request i, the waiting time W, the number R of users (nodes) linked to request i, and the seek time Sstart,i, i.e. the time required to position the head on the requested object, starting from the location of the last set of data read for the last fulfilled request. The value RxW/Sstart,i is then calculated for each request in the queue; the first request to be satisfied is the one for which this quantity is maximum. The algorithm retains some of the features of RxW, while also taking into account the physical access time to resources in secondary memory. It is a one-step algorithm: it is executed every time a request is fulfilled (i.e. the requested information is read from the disk and sent). There is no grouping of requests, because each request is served individually [7].
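The selection rule can be sketched as follows; this is a minimal illustration, and the request fields (R, W, S_start) are assumed names, not taken from the source.

```python
def pick_next_request(queue):
    """Return the queued request maximizing R * W / S_start.

    Each request is a dict with (assumed field names):
      R       -- number of users (nodes) linked to the request
      W       -- waiting time of the request in the queue
      S_start -- seek time from the location of the last served data
    """
    return max(queue, key=lambda r: r["R"] * r["W"] / r["S_start"])

queue = [
    {"id": "a", "R": 3, "W": 10.0, "S_start": 5.0},  # score 6.0
    {"id": "b", "R": 5, "W": 4.0,  "S_start": 1.0},  # score 20.0
    {"id": "c", "R": 2, "W": 12.0, "S_start": 8.0},  # score 3.0
]
print(pick_next_request(queue)["id"])  # -> b
```

Note that the seek time in the denominator penalizes requests whose data lie far from the current head position, which is exactly what distinguishes RxW/S from plain RxW.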
2.5 Cache memory in a server node
A cache is simply a memory much smaller than a mass-storage device, but at the same time much faster: its access time is significantly lower than that of a disk. The basic idea is to use the cache to store the most recently and most frequently used data, so as to speed up the servicing of user requests. Let us see, then, how the characteristics of the algorithms examined earlier change when the server node includes a cache. In particular, algorithms for managing the cache must be taken into account. The scheduling policies under consideration combine two replacement algorithms, least recently used (LRU) and least frequently used (LFU), which are both efficient and simple. The combined use of the two techniques is more efficient than either taken individually, and also more advantageous than other cache management algorithms (such as LRU-K).
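As a reminder of one of the two building blocks, a plain LRU cache can be sketched in a few lines with Python's OrderedDict; this is a generic sketch, not the chapter's combined policy, which also tracks reference frequencies.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache: evicts the item untouched longest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)          # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")                # "a" becomes most recently used
cache.put("c", 3)             # capacity exceeded: "b" is evicted
print(list(cache.items))      # -> ['a', 'c']
```

An LFU variant would instead evict the item with the smallest reference count; the combined policies discussed here keep both kinds of bookkeeping.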
2.6 LF-LRU algorithm
This algorithm uses a cache buffer with a given capacity, expressed in terms of the sets of data it can store. The algorithm works with two ordered lists: an LRU list and an LFU list. Data entering the buffer are placed each time at the top of the LRU list. For each set of data that enters the buffer, the algorithm counts the number of references (here a reference means a request for that data from one or more users) during the time it is in the buffer. Given a request for a certain set of data, the algorithm first checks whether the data requested is already present
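The bookkeeping described so far can be sketched as follows; this is a partial, hypothetical sketch covering only the parts of LF-LRU introduced above (the two lists and the per-item reference counts), and all names are assumptions.

```python
class LFLRUBuffer:
    """Partial sketch of the LF-LRU buffer: sets of data tracked by recency
    (LRU list) and by the references accumulated while resident (LFU order)."""

    def __init__(self, capacity):
        self.capacity = capacity   # capacity in number of data sets
        self.lru = []              # LRU list: most recently entered/used at index 0
        self.refs = {}             # data id -> reference count while in the buffer

    def contains(self, data_id):
        """First step on a request: is the data already in the buffer?"""
        return data_id in self.refs

    def insert(self, data_id):
        """New data enters at the top of the LRU list with its first reference."""
        self.lru.insert(0, data_id)
        self.refs[data_id] = 1

    def reference(self, data_id):
        """A request for resident data counts as one more reference."""
        self.refs[data_id] += 1

    def lfu_order(self):
        """LFU list: resident data ordered from least to most referenced."""
        return sorted(self.refs, key=self.refs.get)
```

A short usage example: inserting "x" then "y" puts "y" at the top of the LRU list, and a second reference to "x" moves "x" behind "y" in the LFU order.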