This is an advantage in storage and data management, as the system operator can replace or update
the stored media data without worrying about adversely affecting the disk's streaming capacity.
Consider the Seagate ST12400N SCSI-2 hard disk as an example: the transfer rate given in
the disk's specification is 3.35 MBps. If we consider only the disk transfer rate, then with a
media stream bit-rate of 1.2 Mbps the disk will be able to support up to 22 concurrent streams.
However, if we account for worst-case disk seeks and other overheads, the resultant capacity
is only 12 streams. This illustrates the impact of disk seek overhead on streaming
capacity.
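This back-of-the-envelope calculation can be reproduced in a short script. This is only a sketch: the 55% worst-case efficiency factor below is an assumed figure chosen to be consistent with the 12-stream capacity quoted above, not a published ST12400N specification.

```python
def max_streams(usable_rate_mbps, stream_rate_mbps):
    """Number of concurrent streams a given sustained disk rate supports."""
    return int(usable_rate_mbps / stream_rate_mbps)

RAW_RATE_MBPS = 3.35 * 8   # ST12400N transfer rate: 3.35 MBps = 26.8 Mbps
STREAM_MBPS = 1.2          # media stream bit-rate in Mbps

# Transfer rate alone: 26.8 / 1.2 = 22.3, i.e., 22 concurrent streams.
print(max_streams(RAW_RATE_MBPS, STREAM_MBPS))          # 22

# With worst-case seek and other overheads only part of the raw rate is
# usable; the ~55% efficiency here is an assumption made to match the
# 12-stream figure quoted in the text.
print(max_streams(RAW_RATE_MBPS * 0.55, STREAM_MBPS))   # 12
```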
3.3 Improving Disk Throughput
Knowing the performance impact of disk seeks, the natural question is whether we can
reduce the disk seek overhead to achieve higher disk throughput. Let us revisit the equation
for computing the disk round time and normalize it by the number of data blocks retrieved:
\[
\frac{t^{max}_{round}(k)}{k} = \frac{k+1}{k}\left(\alpha + \beta\,\frac{N_{track}}{k+1}\right) + T_{latency} + \frac{Q}{R_{disk}} \qquad (3.7)
\]
which represents the per-request service time under the worst-case disk seek scenario.
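To get a feel for the numbers, equation (3.7) can be evaluated directly. The disk parameter values below (seek-time constants, track count, rotational latency) are illustrative assumptions for a disk of this class, not measured ST12400N figures:

```python
def per_block_time(k, Q, alpha=0.003, beta=2e-7, n_track=2700,
                   t_latency=0.0056, r_disk=3.35e6):
    """Worst-case per-block service time t_round(k)/k from equation (3.7).

    k: blocks retrieved per round; Q: block size (bytes); alpha, beta:
    fixed and per-track seek-time constants (s); n_track: number of
    tracks; t_latency: worst-case rotational latency (s); r_disk: raw
    transfer rate (bytes/s).  All default values are assumed.
    """
    seek = (k + 1) / k * (alpha + beta * n_track / (k + 1))
    return seek + t_latency + Q / r_disk

# The per-block seek overhead shrinks as more blocks are retrieved
# in each service round:
for k in (1, 2, 4, 8, 16):
    print(k, round(per_block_time(k, Q=65536) * 1000, 2), "ms")
```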
If we examine the system parameters in equation (3.7), we will find that there are three
non-configurable system parameters, namely the constant overhead α, the disk raw transfer rate
R disk , and the rotational latency T latency . These parameters are properties of the physical disk and
thus cannot be controlled by the server application. It is possible to eliminate rotational latency
by reading one full track of data at a time. However, this track-based retrieval technique has its
own problems, such as a large buffer requirement and delay (see below), and disk zoning (Section 3.6)
makes track-based retrieval very complicated.
By contrast, the remaining components in the equation, namely k, the number of data
blocks to retrieve in a service round, and Q, the size of each data block, can both be
controlled by the server application. As the disk overheads are relatively fixed, we can improve
disk throughput simply by increasing the retrieval block size Q and/or retrieving more data
blocks in a service round (i.e., increasing k). Indeed, this is a simple yet effective method of
improving the disk throughput, illustrated in Figure 3.5 for two disk models.
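A minimal sketch of this effect follows. The disk parameter values are again illustrative assumptions, not specifications of the two disk models in Figure 3.5:

```python
def round_time(k, Q, alpha=0.003, beta=2e-7, n_track=2700,
               t_latency=0.0056, r_disk=3.35e6):
    """Worst-case service round length: k times the per-block time of
    equation (3.7).  All disk parameter defaults are assumed values."""
    return (k + 1) * alpha + beta * n_track + k * (t_latency + Q / r_disk)

def throughput(k, Q):
    """Sustained disk throughput (bytes/s): k blocks of Q bytes per round."""
    return k * Q / round_time(k, Q)

# Larger blocks (and more blocks per round) amortize the fixed seek and
# latency overheads, pushing throughput toward the raw 3.35 MB/s rate,
# while the double-buffering requirement 2kQ grows linearly.
for Q in (16_384, 65_536, 262_144):
    k = 8
    print(f"Q={Q // 1024:>4} KB  throughput={throughput(k, Q) / 1e6:.2f} MB/s"
          f"  buffer={2 * k * Q // 1024} KB")
```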
There are, however, trade-offs. First, increasing Q and k will consume more memory
for buffering. Under the double-buffering scheme, the total buffer size is equal to 2kQ and,
thus, the buffer requirement grows with both the retrieval block size and the number of blocks
retrieved per round. Nevertheless, given the decreasing cost of physical memory, this may not
be the limiting factor in practice.
Besides buffer size, increasing Q and k will also lengthen the disk service round, as the
disk transfer time (i.e., Q/R disk ) in equation (3.7) increases proportionally. While this does
not affect on-going streams, it increases the admission delay experienced by new users.
To see why, consider a user who initiates a new streaming session by sending a request to
the server using some control protocol. Upon receiving the request, the server will first verify
the data availability, allocate the system resources (e.g., buffers, state variables, etc.), and then
start retrieving data from the disk for transmission. Now as the user request can arrive at any
time, it will likely arrive in the middle of a disk service round. In this case the server cannot
serve the request in the currently on-going service round as this could lead to additional disk