video decoders consume fixed-size data blocks only quasi-periodically. Given the average video data rate, R_V, and block size, Q, the average time for a video decoder to consume a single block is

\[ T_{avg} = \frac{Q}{R_V} \tag{12.16} \]
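For example, assuming an illustrative stream with R_V = 150 KB/s and Q = 64 KB (values chosen here for concreteness, not taken from the text), equation (12.16) gives

\[ T_{avg} = \frac{65{,}536 \text{ bytes}}{153{,}600 \text{ bytes/s}} \approx 0.43 \text{ s} \]

that is, the decoder consumes one block roughly every 0.43 seconds on average.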
To quantify the randomness of video block consumption time, we employ the consumption model proposed in Section 10.3.2, reproduced below for the sake of completeness.
Definition 12.1. Let T_i be the time the video decoder starts decoding the i-th video block; then the decoding-time deviation of video block i is defined as

\[ T_{DV}(i) = T_i - iT_{avg} - T_0 \tag{12.17} \]

and decoding is late if T_DV(i) > 0 and early if T_DV(i) < 0. The maximum lag in decoding, denoted by T_L, and the maximum advance in decoding, denoted by T_E, are defined as follows:
\[ T_L = \max\{\, T_{DV}(i) \mid \forall i \geq 0 \,\} \tag{12.18} \]

\[ T_E = \min\{\, T_{DV}(i) \mid \forall i \geq 0 \,\} \tag{12.19} \]
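As a minimal illustration (not part of the original text), the following Python sketch estimates T_DV(i), T_L and T_E from a measured trace of decoding start times; the trace values and variable names are assumptions made here for the example.

```python
# Illustrative sketch (assumed names and values, not from the text):
# estimate the decoding-time deviations and the bounds T_L, T_E
# from a trace of measured decoding start times T[i], given T_avg.

def decoding_deviations(T, T_avg):
    """Equation (12.17): T_DV(i) = T_i - i*T_avg - T_0, with T_0 = T[0]."""
    T0 = T[0]
    return [T[i] - i * T_avg - T0 for i in range(len(T))]

def lag_and_advance(T, T_avg):
    """Equations (12.18)-(12.19): maximum lag T_L and maximum advance T_E."""
    dev = decoding_deviations(T, T_avg)
    return max(dev), min(dev)

# Example trace: blocks nominally consumed every 0.5 s, with small drift.
trace = [0.00, 0.52, 0.98, 1.55, 2.01]
T_L, T_E = lag_and_advance(trace, 0.5)   # T_L ≈ 0.05, T_E ≈ -0.02
```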
The bounds T_L and T_E are implementation-dependent and can be obtained empirically. Knowing these two bounds, the playback instant for video block i, denoted by p(i), is then bounded by

\[ \max\{\,(T_0 + iT_{avg} + T_E),\ 0\,\} \leq p(i) \leq (T_0 + iT_{avg} + T_L) \tag{12.20} \]
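To make the bound concrete, here is a small sketch (names and numbers assumed for illustration) that evaluates the playback window of equation (12.20):

```python
# Sketch of equation (12.20): the playback instant p(i) of block i must lie
# in [earliest, latest], given T_0, T_avg and the measured bounds T_E, T_L.

def playback_window(i, T0, T_avg, T_E, T_L):
    earliest = max(T0 + i * T_avg + T_E, 0.0)
    latest = T0 + i * T_avg + T_L
    return earliest, latest

# With the assumed values T0 = 1.0 s, T_avg = 0.5 s, T_E = -0.02 s and
# T_L = 0.05 s, block 10 must be played back within [5.98, 6.05] seconds.
print(playback_window(10, 1.0, 0.5, -0.02, 0.05))
```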
Buffers are used at the client to absorb these variations to prevent buffer underflow (which leads to playback hiccups) and buffer overflow (which leads to packet dropping). Let L_C = (Y + Z) be the number of buffers (each of Q bytes) available at the client, organized as a circular buffer. The client prefills the first Y buffers before starting playback to prevent buffer underflow, and reserves the last Z buffers for incoming data to prevent buffer overflow.
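The following minimal sketch (structure and names assumed here, not from the text) captures the two conditions this buffer organization enforces:

```python
# Assumed illustration of the client's circular-buffer bookkeeping:
# L_C = Y + Z buffers of Q bytes each. Playback may start only after the
# first Y buffers are filled; a new block is accepted only if a free
# buffer remains, so occupancy never exceeds L_C.

class ClientBuffer:
    def __init__(self, Y, Z, Q):
        self.Y, self.Z, self.Q = Y, Z, Q
        self.L_C = Y + Z        # total buffers in the ring
        self.filled = 0         # blocks received but not yet played back

    def can_start_playback(self):
        return self.filled >= self.Y     # prefill condition (no underflow)

    def can_accept_block(self):
        return self.filled < self.L_C    # free slot available (no overflow)
```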
We first determine the lower bound for Y. Let t_0 be the time (with respect to the admission scheduler's clock) when the first block of a video session begins transmission. Let d_i be the clock jitter between the admission scheduler and server i. Without loss of generality, we can assume that the video title is striped with block zero at server zero. Then the time for block i to be completely received by the client, denoted by f(i), is bounded by
\[ ((i + 1)T_F + t_0 + f^- + d_{\mathrm{mod}(i,N_S)}) \leq f(i) \leq ((i + 1)T_F + t_0 + f^+ + d_{\mathrm{mod}(i,N_S)}) \tag{12.21} \]
where f^+ and f^- are used to model the maximum transmission time deviation due to randomness in the system, including transmission rate deviation, CPU scheduling, bus contention, etc.
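As an illustrative sketch (parameter names assumed here: T_F is the per-block transmission time, N_S the number of servers, and d[j] the clock jitter of server j), the reception-time window of equation (12.21) can be evaluated as follows:

```python
# Sketch of equation (12.21): bounds on the time f(i) at which block i is
# completely received. Block i is stored on server mod(i, N_S).

def reception_window(i, T_F, t0, d, N_S, f_minus, f_plus):
    j = i % N_S                           # server holding block i
    base = (i + 1) * T_F + t0 + d[j]      # nominal reception time
    return base + f_minus, base + f_plus  # lower and upper bounds on f(i)
```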
Since the client begins video playback after filling the first Y buffers, the playback time for video block 0 is simply equal to f(Y - 1). Setting T_0 = f(Y - 1) in equation (12.20), the
playback time for video block i is bounded by
\[ (f(Y - 1) + iT_{avg} + T_E) \leq p(i) \leq (f(Y - 1) + iT_{avg} + T_L) \tag{12.22} \]
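Putting equations (12.21) and (12.22) together numerically (all values below are assumptions chosen for illustration; in (12.22), f(Y - 1) is the actual reception time, here replaced by its worst-case upper bound from (12.21)):

```python
# Worked numeric example with assumed parameters.
T_F, t0, N_S = 0.5, 0.0, 4                 # per-block transmission time, start time, servers
d = [0.00, 0.01, -0.01, 0.02]              # clock jitter of each server (s)
f_minus, f_plus = -0.03, 0.05              # transmission-time deviation bounds (s)
Y, T_avg, T_E, T_L = 8, 0.5, -0.02, 0.05   # prefill depth and consumption-model parameters

# Equation (12.21): latest possible reception time of block Y - 1 = 7.
f_upper = Y * T_F + t0 + f_plus + d[(Y - 1) % N_S]               # ≈ 4.07 s

# Equation (12.22) with T_0 = f(Y - 1): playback window for block i = 20.
i = 20
window = (f_upper + i * T_avg + T_E, f_upper + i * T_avg + T_L)  # ≈ (14.05, 14.12) s
```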