The head starts from the inner part of the disk. Once the requests for data in the inner part have been satisfied, the algorithm looks for pending requests in the outermost part of the disk, and then the cycle begins again. (This technique exploits the fact that the rotational latency is shorter than the positioning time: the radial movements of the head are in fact reduced with the technique described.)
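The sweep-and-wrap service order described above can be illustrated with a minimal sketch (the function name and the representation of requests as cylinder numbers are illustrative assumptions, not part of the original text):

```python
def c_look_order(head, requests):
    """Order pending cylinder requests in a C-LOOK fashion.

    The head sweeps outward from its current position, serving
    requests at increasing cylinder numbers; it then jumps back to
    the innermost pending request and begins a new outward sweep,
    so radial (seek) movement stays in one direction per sweep.
    """
    outward = sorted(r for r in requests if r >= head)   # current sweep
    wrapped = sorted(r for r in requests if r < head)    # next sweep
    return outward + wrapped
```

For example, with the head at cylinder 50 and pending requests at 10, 70, 30 and 90, the service order is 70, 90, then 10, 30 after the wrap-around.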
Let us first consider the absence of a memory cache in the server node, and assume that every data request must be satisfied by fetching the data from secondary memory. We do this for two reasons: to evaluate the impact that each disk has on server performance, and because in some applications a cache may in fact not be used at all. Moreover, in some system configurations the cache may be so small that it has a negligible weight on the global characteristics of the server node. For these reasons we postpone the study of the cache's impact.
The scheduling mechanisms we illustrate here fall into two
categories:
those that use different scheduling algorithms for the main memory and the
secondary memory, and
those that are based on a single scheduling criterion, which takes decisions
on the basis of information gathered both from the queue waiting at the input
of the server and from the disk (e.g. the location of the data on the
disk) [7].
2.1 The ADoRe algorithm
The first algorithm we illustrate is ADoRe (active disk on request), which
combines and extends the RxW algorithm with a C-LOOK disk-scheduling
algorithm [8]. The plain RxW algorithm does not meet the requirements of
data exchange in a ubiquitous environment: it must be executed
as many times as there are requests in the queue at the input of the server.
If we assume that the queue holds n elements, the RxW algorithm
first computes the n R×W products corresponding to the requests and
then selects the one with the highest value. At this point, the data correspond-
ing to the selected request are retrieved from memory and sent to the users who
requested them. As a consequence, n − 1 requests remain in the queue: these
are subject to a new run of the algorithm, which reduces the queue
to n − 2 elements, and so on until all the data requests have been served. It is
impractical to adopt a technique of this kind, as it would place a major
burden on the server's CPU. Consider also that the queue is not
static but dynamic, i.e. it varies over time: while the pending requests are
reduced to n − 1, new ones may be added, bringing the queue
to an even larger size than the initial one. Regardless of the computing power
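The cost that makes plain RxW impractical comes from the full scan of the queue before every transmission. A minimal sketch of one such selection step follows; the function name and the representation of each queue entry as a dictionary with `R` (number of pending requests for the item) and `arrival` (arrival time of the oldest such request) are illustrative assumptions:

```python
def rxw_select(queue, now):
    """One RxW selection step: a full scan of the pending queue.

    Each entry groups the requests for one data item; its score is
    R x W, where R is the number of pending requests for the item
    and W is the waiting time of the oldest one.  Repeating this
    scan after every transmission is what yields the O(n) cost per
    selected item (O(n^2) over the whole queue) described above.
    """
    return max(queue, key=lambda e: e["R"] * (now - e["arrival"]))
```

For instance, an item requested twice and waiting 10 time units (score 20) is chosen over an item requested once and waiting 5 units (score 5); the server must then rescan the remaining entries before the next transmission.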