[Figure 12.13 plot: system response time (seconds, 0-20) versus maximum server clock jitter (seconds, 0-1); curves for Concurrent Push w/AGSS & SSS and Staggered Push, showing maximum and average response times at 80% and 90% utilization]
Figure 12.13 System response time versus server clock jitter
12.6.5 Server Bandwidth Overhead
Figure 12.14 plots the ORT transmission rate versus server clock jitter for block sizes of Q = 64KB, 128KB, and 256KB. As clock jitter can be readily controlled to within 100ms by distributed software algorithms, the results show that over-rate transmission is applicable in all three cases. For example, with Q = 64KB, ORT will transmit at 1.556Mbps instead of the video bit-rate of 1.2Mbps, incurring a bandwidth overhead of 29.7%. Increasing the block size to 256KB reduces the ORT transmission rate to 1.273Mbps, or a bandwidth overhead of only 6%. Thus, the system designer can adjust the block size to balance between bandwidth cost and memory cost. In any case, compared to uncontrolled traffic overlapping, which results in a doubled transmission rate of 2.4Mbps, the bandwidth requirement under ORT is clearly substantially lower.
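As a quick check on these figures, the short sketch below recomputes the ORT bandwidth overhead relative to the video bit-rate. The rates are simply the values quoted above for Figure 12.14, hard-coded as inputs; the variable names are illustrative and not part of the original text.

```python
# Sketch: recompute ORT bandwidth overhead relative to the video bit-rate.
# Rates are the values quoted in the text for Figure 12.14 (assumed inputs).
VIDEO_BITRATE_MBPS = 1.2                 # video bit-rate (Mbps)

ort_rate_mbps = {
    "Q = 64KB":  1.556,                  # ORT transmission rate for Q = 64KB
    "Q = 256KB": 1.273,                  # ORT transmission rate for Q = 256KB
}

for block_size, rate in ort_rate_mbps.items():
    overhead = (rate - VIDEO_BITRATE_MBPS) / VIDEO_BITRATE_MBPS
    print(f"{block_size}: ORT rate = {rate} Mbps, overhead = {overhead:.1%}")

# Q = 64KB gives roughly 29.7% overhead and Q = 256KB roughly 6.1%,
# versus 100% overhead (2.4 Mbps) for uncontrolled traffic overlapping.
```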
12.7 Network Resource Reservations
As the results in the previous section show, the staggered-push architecture can be scaled up to any number of servers, provided that the network has sufficient capacity. Compared with the concurrent-push architecture, the staggered-push architecture achieves linear scalability at the expense of bursty network traffic (and slightly larger delay and client buffer requirements). In