3.2.1 Scalable coding techniques
At present, video production and streaming are ubiquitous, as more and more devices are able to produce and distribute video sequences. This brings the increasingly compelling requirement of sending an encoded representation of a sequence that is adapted to the user, device and network characteristics, in such a way that coding is performed only once while decoding may take place several times at different resolutions, frame rates and qualities. Scalable video coding allows decoding of appropriate subsets of the bitstream to generate complete pictures whose size and quality depend on the proportion of the total bitstream decoded. A number of existing video compression standards support scalable coding, such as MPEG-2 Video and MPEG-4 Visual. However, due to reduced compression efficiency, increased decoder complexity and the characteristics of traditional transmission systems, these scalable profiles are rarely used in practical implementations. Recent approaches to scalable video coding are based on the motion-compensated 3D wavelet transform and on motion-compensated temporal differential pulse code modulation (DPCM) together with spatial de-correlating transformations [38-41].
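To make the subset-decoding idea concrete, the following is a minimal sketch in Python. The layered bitstream structure, the layer identifiers and the extract_substream helper are purely illustrative assumptions and do not correspond to any particular standard.

```python
# Hypothetical layered bitstream: each unit carries a layer id and a payload.
# Layer 0 is the base layer; higher layers refine frame rate, resolution or quality.
bitstream = [
    {"layer": 0, "payload": b"base picture data"},
    {"layer": 1, "payload": b"extra frame-rate data"},
    {"layer": 2, "payload": b"extra resolution data"},
    {"layer": 3, "payload": b"extra quality (refinement) data"},
]

def extract_substream(units, max_layer):
    """Keep only the units a given receiver can use: encoding happened once,
    but each receiver decodes a different subset of the same bitstream."""
    return [u for u in units if u["layer"] <= max_layer]

phone_stream = extract_substream(bitstream, max_layer=1)  # small screen, low rate
tv_stream = extract_substream(bitstream, max_layer=3)     # full resolution and quality
print(len(phone_stream), "units for the phone,", len(tv_stream), "units for the TV")
```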
The wavelet transform has proved to be a successful tool in the area of scalable video coding, since it enables a video sequence to be decomposed into several spatio-temporal subbands. Usually the wavelet analysis is applied in both the temporal and the spatial dimensions, hence the term 3D wavelet. The decoder may receive a subset of these subbands and reconstruct the sequence at a reduced spatio-temporal resolution and at any quality. The open-loop structure of this scheme avoids the drift problems typical of DPCM-based schemes whenever there is a mismatch between the encoder and the decoder. Scalable video coding based on the 3D wavelet transform is addressed in recent research activities [38], [39]. The scalable video coding profiles of existing video coding standards are based on DCT methods. Unfortunately, due to the closed loop, these coding schemes have to address the problem of drift that arises whenever the encoder and the decoder work on different versions of the reconstructed sequence. This typically leads to a loss of coding efficiency when compared with non-scalable single-layer encoding.
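As an illustration of the open-loop subband idea, the sketch below performs one level of temporal Haar decomposition without motion compensation; it is not any standardized codec, and a real 3D wavelet coder would add motion-compensated temporal filtering and a spatial wavelet on each subband. Discarding the high-pass subband yields a half-rate reconstruction without drift, because no prediction loop ties the decoder state to the encoder state.

```python
import numpy as np

def temporal_haar_analysis(frames):
    """One level of temporal Haar analysis on an even-length list of frames.
    Each frame pair (f0, f1) yields a temporal low-pass frame (average)
    and a temporal high-pass frame (half-difference)."""
    low, high = [], []
    for f0, f1 in zip(frames[0::2], frames[1::2]):
        low.append((f0 + f1) / 2.0)   # temporal low-pass subband
        high.append((f0 - f1) / 2.0)  # temporal high-pass subband
    return low, high

def temporal_haar_synthesis(low, high):
    """Inverse transform: perfect reconstruction when both subbands are kept."""
    frames = []
    for l, h in zip(low, high):
        frames.append(l + h)  # f0
        frames.append(l - h)  # f1
    return frames

# Toy sequence: eight random 16x16 "frames".
rng = np.random.default_rng(0)
frames = [rng.random((16, 16)) for _ in range(8)]

low, high = temporal_haar_analysis(frames)

# Full decoding: both subbands give back the original frame rate (lossless here).
rec = temporal_haar_synthesis(low, high)
assert all(np.allclose(a, b) for a, b in zip(frames, rec))

# Scalable decoding: drop the high-pass subband to halve the temporal resolution.
half_rate = low
print(len(frames), "frames ->", len(half_rate), "frames at half temporal resolution")
```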
In 2007, the Joint Video Team (JVT) of the ITU-T VCEG and the ISO/IEC MPEG standardized a Scalable Video Coding (SVC) extension of the H.264/AVC standard [40]. This SVC standard is capable of providing temporal, spatial and quality scalability, with a base layer that remains compatible with H.264/AVC. Furthermore, it incorporates an improved DPCM
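The temporal scalability offered by such layered schemes can be pictured with a small sketch. The dyadic layer assignment and the helper names below are assumptions made for illustration, mirroring hierarchical prediction structures in spirit: frames are tagged with temporal layer identifiers, and a lower frame rate is obtained simply by discarding all frames above a chosen layer.

```python
def temporal_layer_id(poc, gop_size=8):
    """Dyadic temporal layer assignment within a GOP (hypothetical helper).
    For gop_size=8:
      poc:   0 1 2 3 4 5 6 7
      layer: 0 3 2 3 1 3 2 3
    """
    pos = poc % gop_size
    if pos == 0:
        return 0
    layer, step = 1, gop_size // 2
    while pos % step != 0:
        step //= 2
        layer += 1
    return layer

def extract_temporal_substream(pocs, max_layer, gop_size=8):
    """Keep only frames whose temporal layer does not exceed max_layer."""
    return [p for p in pocs if temporal_layer_id(p, gop_size) <= max_layer]

pocs = list(range(16))
for max_layer in range(4):
    kept = extract_temporal_substream(pocs, max_layer)
    print(f"layer <= {max_layer}: {len(kept)}/{len(pocs)} frames -> {kept}")
```

Each additional layer doubles the decoded frame rate, so a receiver can pick the highest layer its display and network allow while the encoder produces a single bitstream.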