31.3.2.2 Compression Techniques for Remote Rendering Systems
To overcome the bandwidth limitations of remote rendering environments mentioned previously, much research has focused on efficiently compressing and streaming the data generated on the remote host to the client application. Compression algorithms greatly reduce the bandwidth required for network transmission, but they introduce additional latency for compression and decompression. The choice of compression technique therefore depends strongly on the available network bandwidth and on the time needed to compress and decompress the data. This section presents techniques for compressing image, depth, and geometry data, the main data types typically generated on the remote side.
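To make this tradeoff concrete, the total per-frame latency can be estimated as codec time plus time on the wire. The following back-of-the-envelope sketch uses hypothetical numbers (frame size, link speed, compression ratio, and codec time are illustrative assumptions, not figures from the text):

```python
def transfer_latency(size_bytes, bandwidth_bps, codec_seconds=0.0):
    """Total per-frame latency: (de)compression time plus transmission time."""
    return codec_seconds + size_bytes * 8 / bandwidth_bps

# Hypothetical numbers: a 1080p RGB frame (~6.2 MB raw) over a 100 Mbit/s link
raw = transfer_latency(1920 * 1080 * 3, 100e6)
# The same frame compressed 50:1, paying 5 ms for encoding and decoding
compressed = transfer_latency(1920 * 1080 * 3 / 50, 100e6, codec_seconds=0.005)
```

Here compression pays off easily; on a very fast link or with a very slow codec, the inequality can flip, which is why the choice depends on both factors.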
Color images, e.g. from graphics-card framebuffers, are mostly compressed with standard image or video codecs to reduce bandwidth [2]. One major problem with standard video codecs such as MPEG or H.264 is that they rely on previous frames, which introduces an additional delay that is unacceptable for real-time applications. A solution by Pajak et al. uses an adapted H.264 coding algorithm: motion vectors are recovered directly from the 3D rendering, avoiding the costly motion-estimation step of video encoding [38]. Another popular tool that uses image streaming for remote rendering is VirtualGL, which intercepts OpenGL framebuffers for transmission and compresses them with the high-performance JPEG library 'libjpeg-turbo' (a derivative of the standard 'libjpeg' library). Light fields or lumigraphs typically consist of hundreds of high-resolution images, which can consume a significant amount of bandwidth. Magnor and Girod [29] proposed two coders for light-field compression. The first is based on video-compression techniques modified to compress the four-dimensional light-field data structure efficiently. The second relies entirely on disparity-compensated image prediction, establishing a hierarchical structure among the light-field images. Both techniques reduce the size of light fields significantly.
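The idea of recovering motion vectors from the rendering itself, rather than searching for them, can be sketched as a depth-based reprojection between two frames. This is an illustrative sketch, not Pajak et al.'s implementation; the function name and its conventions (NDC depth in [0, 1], column-vector view-projection matrices) are assumptions:

```python
import numpy as np

def motion_vectors(depth, vp_curr, vp_prev, width, height):
    """Recover per-pixel screen-space motion vectors from the depth buffer
    and the view-projection matrices of two consecutive frames, instead of
    running a block-based motion search.

    depth   : (H, W) array of NDC depth values in [0, 1]
    vp_curr : 4x4 view-projection matrix of the current frame
    vp_prev : 4x4 view-projection matrix of the previous frame
    returns : (H, W, 2) array of (dx, dy) motion in pixels
    """
    ys, xs = np.mgrid[0:height, 0:width]
    # Pixel centers -> normalized device coordinates in [-1, 1]
    ndc_x = (xs + 0.5) / width * 2.0 - 1.0
    ndc_y = (ys + 0.5) / height * 2.0 - 1.0
    ndc_z = depth * 2.0 - 1.0
    clip = np.stack([ndc_x, ndc_y, ndc_z, np.ones_like(ndc_x)], axis=-1)

    # Unproject each pixel to world space using the current frame's matrix
    world = clip @ np.linalg.inv(vp_curr).T
    world /= world[..., 3:4]

    # Reproject the world points into the previous frame
    prev_clip = world @ vp_prev.T
    prev_ndc = prev_clip[..., :2] / prev_clip[..., 3:4]
    prev_x = (prev_ndc[..., 0] + 1.0) * 0.5 * width - 0.5
    prev_y = (prev_ndc[..., 1] + 1.0) * 0.5 * height - 0.5

    # Motion vector = current pixel position minus its previous position
    return np.stack([xs - prev_x, ys - prev_y], axis=-1)
```

Because the depth and camera matrices are already available on the rendering side, this replaces the most expensive stage of a conventional video encoder with a few matrix multiplications.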
Compressing depth images with traditional image or video codecs, which focus on maintaining perceived visual quality, is not optimal. Many of these algorithms smooth depth values to increase compression performance at the cost of precision. In hybrid rendering solutions, where local and remote images have to be composited, this loss of precision leads to unpleasant artifacts (cf. Fig. 31.12). To avoid this, lossless compression algorithms such as run-length encoding can be used instead. However, most simple lossless compression algorithms do not meet the bandwidth constraints, because their compression ratios are lower than those of standard image or video codecs. Therefore, much research has focused on efficient depth compression [17, 24]. Bao et al. [1] presented a remote rendering environment based on three-dimensional image warping and depth compression that exploits the context statistics present in depth views. Pajak et al. [38] developed a method that allows a tradeoff between quality and compression of depth images.
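A minimal sketch of the run-length idea for a row of depth values (illustrative only, not a codec from any of the cited works) shows why it is lossless and why it works well on depth data, which contains large constant regions such as background and flat surfaces:

```python
def rle_encode(depth_row):
    """Losslessly run-length encode one row of a depth image.
    Returns a list of (value, run_length) pairs."""
    values, counts = [], []
    for v in depth_row:
        if values and values[-1] == v:
            counts[-1] += 1          # extend the current run
        else:
            values.append(v)         # start a new run
            counts.append(1)
    return list(zip(values, counts))

def rle_decode(pairs):
    """Exactly reconstruct the original row; no precision is lost."""
    out = []
    for value, run in pairs:
        out.extend([value] * run)
    return out
```

The round trip reproduces every depth value bit-exactly, which is what hybrid compositing needs, but on depth images with few long runs the compression ratio quickly falls behind lossy codecs, motivating the specialized schemes above.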
Geometry compression algorithms [8] are convenient for reducing bandwidth requirements when model-based rendering techniques are used in a client/server architecture. On the server side, the extracted geometry can either be compressed as a