reports runtime information, and enforces adaptation actions determined by
its controller. Specifically, the controller decides when and how to adapt the
application behavior, and the QoS manager focuses on enforcing these adap-
tations in a consistent and efficient manner. The effectiveness of this strategy
was demonstrated experimentally in Reference 5, which showed that it reduced
overheads on the simulation (to less than 5%) as well as buffer overflow and
data loss.
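
To make this separation of concerns concrete, the following is a minimal sketch, assuming hypothetical class names, fields, and thresholds (it does not reproduce the actual interfaces of the system in Reference 5), of a controller that decides adaptations and a QoS manager that enforces them:

```python
# Hypothetical sketch of the controller / QoS-manager split described above.
# Names and thresholds are illustrative assumptions, not taken from Reference 5.

from dataclasses import dataclass


@dataclass
class Adaptation:
    """An adaptation action decided by the controller."""
    action: str        # e.g., "increase_buffer_drain" or "reduce_output_rate"
    parameter: float   # new value for the adapted setting


class Controller:
    """Decides when and how to adapt, based on reported runtime information."""

    def decide(self, runtime_info: dict) -> list:
        adaptations = []
        if runtime_info.get("buffer_occupancy", 0.0) > 0.8:
            # Buffer close to overflowing: drain it faster.
            adaptations.append(Adaptation("increase_buffer_drain", 1.5))
        if runtime_info.get("available_bandwidth", 1.0) < 0.5:
            # Congested link: reduce the volume of data generated per step.
            adaptations.append(Adaptation("reduce_output_rate", 0.5))
        return adaptations


class QoSManager:
    """Reports runtime information and enforces the controller's decisions."""

    def __init__(self, controller: Controller):
        self.controller = controller

    def report_and_enforce(self, runtime_info: dict) -> None:
        for adaptation in self.controller.decide(runtime_info):
            self._enforce(adaptation)

    def _enforce(self, adaptation: Adaptation) -> None:
        # A real QoS manager would adjust buffering/streaming behavior here;
        # the sketch only records the action.
        print(f"enforcing {adaptation.action} -> {adaptation.parameter}")


# Example: the QoS manager reports a nearly full buffer and a congested link.
QoSManager(Controller()).report_and_enforce(
    {"buffer_occupancy": 0.9, "available_bandwidth": 0.3}
)
```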
5.3.2 QoS Management at In-Transit Nodes
In-transit data processing is achieved using a dynamic overlay of available
nodes (workstations, small to medium clusters, etc.) with heterogeneous
capabilities and loads; note that these nodes may be shared across multiple
application flows. The goal of in-transit processing is to opportunistically
process as much data as possible before the data reaches the sink. The in-
transit data processing service at each node performs three tasks, namely,
processing, buffering, and forwarding; the amount of processing performed
depends on the capacity and capability of the node and on how much processing
the data block at hand still requires. The basic idea is that the in-transit data
processing service at each node completes at least its share of the processing
(which may be predetermined or dynamically computed) and can perform ad-
ditional processing if the network is too congested for forwarding. Key aspects
of the in-transit QoS management include: (1) adaptive buffering and data
streaming that dynamically adjusts buffer input and buffer drainage rates,
(2) adaptive run-time management in response to network congestion, based
on dynamically monitoring the utility and tradeoffs of local computation versus
data transmission, and (3) signaling the application end-points about the net-
work state to achieve cooperative end-to-end self-management; that is, the
in-transit management reacts locally, while the application end-point manage-
ment responds more intelligently by adjusting its controller parameters to
alleviate the congestion.
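
The basic per-block behavior described above can be sketched as follows. This is a minimal illustration assuming a simple utilization threshold as the congestion signal; the helper names, the threshold value, and the processing-increment model are hypothetical and not taken from the cited work.

```python
# Minimal sketch of the per-block logic at an in-transit node:
# (1) complete the node's share of processing, (2) opportunistically process
# more while the outgoing link is congested, (3) buffer and forward.

import random
from collections import deque
from dataclasses import dataclass

CONGESTION_THRESHOLD = 0.8  # assumed link utilization treated as "congested"


@dataclass
class DataBlock:
    data: bytes
    fraction_processed: float = 0.0  # share of required processing already done


def process_step(block: DataBlock, step: float = 0.1) -> None:
    """Perform one increment of the processing still required for the block."""
    block.fraction_processed = min(1.0, block.fraction_processed + step)


def link_utilization() -> float:
    """Stand-in for a network monitor; returns current utilization in [0, 1]."""
    return random.random()


class InTransitNode:
    def __init__(self, processing_share: float):
        # Share of the total processing this node must complete; it may be
        # predetermined or computed dynamically from capacity and load.
        self.processing_share = processing_share
        self.buffer = deque()

    def handle(self, block: DataBlock) -> None:
        # 1. Complete at least this node's share of the processing.
        while block.fraction_processed < self.processing_share:
            process_step(block)
        # 2. Process further while the outgoing link is too congested to forward.
        while link_utilization() > CONGESTION_THRESHOLD and block.fraction_processed < 1.0:
            process_step(block)
        # 3. Buffer the block and forward whatever the link currently allows.
        self.buffer.append(block)
        self.forward_ready_blocks()

    def forward_ready_blocks(self) -> None:
        while self.buffer and link_utilization() <= CONGESTION_THRESHOLD:
            block = self.buffer.popleft()
            print(f"forwarding block ({block.fraction_processed:.0%} processed)")


# Example: a node responsible for 40% of the processing handles one 1 MB block.
node = InTransitNode(processing_share=0.4)
node.handle(DataBlock(data=bytes(1_000_000)))
```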
Experiments conducted using this cooperative end-to-end self-managing
data streaming with the GTC fusion application (References 5 and 35) have
shown that adaptive processing by the in-transit data processing service during
congestion decreases the average percent idle time per data block from 25% to
1%. Furthermore, coupling end-point and in-transit level management during
congestion reduces the average buffer occupancy at in-transit nodes from 80%
to 60.8%. Higher buffer occupancies at the in-transit nodes lead to failures,
result in in-transit data being dropped, and can impact the QoS of applications
at the sink. Finally, end-to-end cooperative management decreases the amount
of data lost due to congestion at intermediate in-transit nodes, increasing the
QoS at the sink. For example, if the average processing time per data block
(1 block is 1 MB) is 1.6 sec at the sink, cooperative management saves about
168 sec (approximately 3 minutes) of processing time at the sink.
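
As a rough consistency check (an inference from the numbers quoted above rather than a figure stated in the cited work), this saving corresponds to roughly 105 one-megabyte data blocks:

\[
\frac{168~\text{s}}{1.6~\text{s per block}} = 105~\text{blocks} \approx 105~\text{MB of data}
\]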