node to the entire display environment and is advantageous as it works with
most applications and may not require code to be recompiled. Unfortunately,
this approach generally does not work with textures and shaders and has poor
synchronization mechanisms, limiting its use when real-time data processing
and filtering are desirable, which should be considered core capabilities of
collaborative visual analytics spaces.
27.2.2.2 Pixel Streaming Another approach to drive tiled-display environ-
ments is to send pixel content to display nodes as outlined in the Scalable
Adaptive Graphics Environment (SAGE) [20-22]. In this approach, one
system renders content into a buffer that is mapped to the tiled-display envi-
ronment. This buffer is segmented to match the configuration of the wall and
is streamed out via the network. The advantage of this approach is that data
have to be processed only once on the streaming node, being the only node
required to have access to data files and applications. Consequently, the rendering nodes need only minimal computational power, as they are only
tasked with receiving and rendering this content. The drawback of this concept
is that it requires a very low-latency, high-bandwidth network. Larger buffer sizes or higher frame rates incur increased network costs, ruling out native-resolution rendering on large tiled-display systems. Even for content that fits within the available network bandwidth, the read-back and splitting operations carry performance costs of their own, thereby increasing latency. Finally, as the burden of rendering content falls to a single
machine for a uniform environment, the performance of this system sets the
upper bound for the capabilities of the tiled-display environment.
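To make the bandwidth constraint concrete, the segmentation and streaming cost can be sketched as follows. The wall layout, resolutions, and uncompressed 24-bit RGB framing are illustrative assumptions, not SAGE's actual protocol.

```python
# Sketch: split a rendered frame into per-node tiles (as a SAGE-style
# streamer might) and estimate the uncompressed network bandwidth.
# Wall geometry and pixel format are illustrative assumptions.

def tile_regions(frame_w, frame_h, cols, rows):
    """Return (x, y, w, h) regions mapping the frame onto a cols x rows wall."""
    tw, th = frame_w // cols, frame_h // rows
    return [(c * tw, r * th, tw, th)
            for r in range(rows) for c in range(cols)]

def bandwidth_gbps(frame_w, frame_h, fps, bytes_per_pixel=3):
    """Uncompressed streaming cost in gigabits per second."""
    return frame_w * frame_h * bytes_per_pixel * 8 * fps / 1e9

# A hypothetical 4x2 wall of 1920x1080 tiles driven by one streaming node:
regions = tile_regions(7680, 2160, cols=4, rows=2)
print(len(regions))                                # 8 tiles, one per render node
print(round(bandwidth_gbps(7680, 2160, fps=30), 1))  # 11.9 (Gb/s)
```

At roughly 12 Gb/s for even a modest eight-tile wall at 30 frames per second, uncompressed streaming quickly exceeds commodity network capacity, which is the limitation described above.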
27.2.2.3 Macroblock Forwarding Chen, interested in distributing the
computational workload for playing large-scale media, derived a method for
segmenting and forwarding compressed video information. As most compres-
sion schemes contain global motion vectors and progressive frame decoding,
they do not work for region-of-interest decoding. In the MPEG2 standard,
motion vectors are confi ned to macroblocks, allowing for the possibility of
partial frame decoding [23, 24]. Unfortunately, the MPEG2 standard only
allows video sizes up to 1920 × 1152 [25], meaning that videos of greater
resolution cannot be encoded using common encoders. Furthermore, this
approach requires a second tier of nodes between the head node and the
render nodes to negotiate macroblock forwarding. These routing nodes must
receive and resend information, including header data, which incurs an
additional 20% bandwidth cost. While this method is useful for
ultra-large-resolution video data, it requires additional hardware, is limited
in its playback ability, and still demands substantial network resources to
operate.
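Because MPEG2 confines motion vectors to macroblocks, a forwarding layer can map each render node's screen region to just the macroblocks that region needs. A minimal sketch of that mapping, using hypothetical tile geometry, might look like:

```python
# Sketch: determine which 16x16 MPEG2 macroblocks intersect a render
# node's region of the wall, so only those blocks need to be forwarded.
# The region offsets and sizes below are illustrative assumptions.

MB = 16  # MPEG2 macroblock edge length in pixels

def macroblocks_for_region(x, y, w, h):
    """Inclusive ranges of macroblock (columns, rows) covering a pixel region."""
    col_range = (x // MB, (x + w - 1) // MB)
    row_range = (y // MB, (y + h - 1) // MB)
    return col_range, row_range

# A render node displaying the 640x512 region at offset (1280, 0):
cols, rows = macroblocks_for_region(1280, 0, 640, 512)
print(cols)  # (80, 119): macroblock columns this node must decode
print(rows)  # (0, 31):   macroblock rows this node must decode
```

A routing node holding such a mapping for every render node can forward each compressed macroblock only to the nodes whose regions it intersects, which is the workload distribution the section describes.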
27.2.2.4 Distributed Application The distributed application approach as
shown in VRJuggler [26] and the Cross-Platform Cluster Graphics Library (CGLX)