In this chapter, we investigate an alternative approach to disk-scheduler design: soft scheduling. Specifically, by designing the disk scheduler with statistical performance guarantees instead of deterministic performance guarantees, we can use the disk I/O bandwidth more efficiently and, at the same time, still satisfy the continuity requirement with high probability. In addition, by placing data randomly instead of sequentially across the disk surface, we can make use of the higher transfer rates of the outer zones in multi-zone disks to achieve higher disk throughput. To further increase the usable disk capacity, we present a Dual-Round Scheduling technique that schedules disk rounds in pairs so that overflows in the next service round can be absorbed in the current service round, and an Early-Admission Scheduling technique that enables the use of a larger media block size for better disk efficiency without adversely increasing the system response time. Finally, we present methods for detecting and recovering from service round overflow. Results from a detailed simulation of five disk drives will be used to explore the potential performance gains of the presented techniques over hard-scheduling approaches.
4.2 Statistical Capacity Dimensioning
The worst-case dimensioning technique in hard scheduling enables the disk to provide deterministic performance guarantees. However, as with any worst-case technique, the trade-off is lower disk utilization in practice, as the worst-case scenario occurs only very rarely. For example, ignoring rotational latency for the moment, the worst-case seek time under CSCAN for a disk with a total of N tracks occurs with probability
$$\Pr\left\{ n_i = \frac{N-1}{k+1},\; i = 1, 2, \ldots, k+1 \right\} = \left(\frac{1}{N-1}\right)^{k} \qquad (4.1)$$
where $n_i$ denotes the seek distance for the $i$th request (see Table 4.1 for a summary of notations).
For a disk with N = 5,001 tracks and k = 10, this computes into a probability of $1.024 \times 10^{-37}$. This is clearly negligible in practice, which motivates us to investigate soft scheduling to provide statistical rather than deterministic performance guarantees.
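As a quick check of this figure, the probability in equation (4.1) can be evaluated directly. A minimal sketch in Python, where N and k are simply the example values above:

```python
N = 5001  # total number of tracks
k = 10    # number of requests served per round

# Probability from equation (4.1) that all k seeks hit the
# worst-case seek distance exactly: (1/(N-1))^k.
p = (1.0 / (N - 1)) ** k
print(f"{p:.3e}")  # -> 1.024e-37
```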
In statistical capacity dimensioning, the objective is to find an operating point that provides higher usable disk capacity than deterministic capacity dimensioning, subject to a given overflow probability constraint. Let $F_{round}(t, k)$ be the cumulative distribution function (CDF) of the disk service round length, i.e.,

$$F_{round}(t, k) = \Pr\{ t_{round}(k) \le t \} \qquad (4.2)$$
We can then define an overflow probability constraint ε that specifies the maximum acceptable occurrence probability for violating the continuity condition in equation (3.4). Using this constraint and equation (4.2), we can then compute the usable disk capacity, denoted by C(ε), from

$$C(\varepsilon) = \max\left\{ k \mid \left(1 - F_{round}(T_r, k)\right) \le \varepsilon,\; k = 0, 1, \ldots \right\} \qquad (4.3)$$

where $T_r = Q/R$ is the maximum length of a service round. C(ε) is thus the maximum number of requests that can be served in each service round with an overflow probability no greater than ε.
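In practice $F_{round}$ rarely has a closed form, so it can be estimated empirically. The following Python sketch estimates $F_{round}(T_r, k)$ by Monte Carlo simulation and then searches for C(ε) as in equation (4.3). The round-length model (a concave square-root seek curve plus a fixed per-request overhead) and all parameter values are illustrative assumptions, not the drive models simulated later in this chapter:

```python
import math
import random

# Illustrative parameters (assumptions, not the chapter's drive models)
N_TRACKS = 5001        # total number of tracks, N
T_R = 1.0              # maximum round length T_r = Q/R, in seconds
PER_REQUEST = 0.015    # rotational latency + transfer time per request (s)
SEEK_COEF = 0.001      # seek-time coefficient (s per sqrt(track))

def sample_round_length(k, rng):
    """One Monte Carlo sample of t_round(k): a CSCAN sweep over k
    requests placed on uniformly random tracks."""
    tracks = sorted(rng.randrange(N_TRACKS) for _ in range(k))
    # Seek segments of one sweep, starting from track 0; each segment is
    # charged a concave (square-root) seek time, as for a real disk arm.
    segments = [b - a for a, b in zip([0] + tracks, tracks)]
    seek = sum(SEEK_COEF * math.sqrt(d) for d in segments if d > 0)
    return seek + k * PER_REQUEST

def usable_capacity(epsilon, trials=10000, seed=42):
    """C(epsilon): the largest k whose estimated overflow probability
    Pr{t_round(k) > T_r} does not exceed epsilon (equation (4.3))."""
    rng = random.Random(seed)
    k = 0
    while True:
        overflows = sum(sample_round_length(k + 1, rng) > T_R
                        for _ in range(trials))
        if overflows / trials > epsilon:  # 1 - F_round(T_r, k+1) > epsilon
            return k
        k += 1

print(usable_capacity(epsilon=1e-3))
```

Because the round length in this model grows stochastically with k, the linear search stops at the first k whose overflow estimate exceeds ε. Deterministic dimensioning would instead be limited by the worst case of $t_{round}(k)$, which is why C(ε) can be larger.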
Note that storage allocation for a media object must be pseudo-randomized under soft scheduling, i.e., available disk blocks are randomly selected to store a media title. This