8.2.3.4 Termination
For termination of the arithmetic codeword in the M coder, a special, non-adapting
probability state is reserved. The corresponding probability state index is given
by n = 63, and the corresponding entries of TabRangeLPS deliver a constant
value of R_LPS = 2. As a consequence, for each terminating syntax element, such
as end_of_slice_segment_flag, end_of_sub_stream_one_bit, or
pcm_flag, 7 bits of output are generated in the renormalization process. Two
more bits need to be flushed in order to properly terminate the arithmetic
codeword. Note that the least significant bit in this flushing procedure, i.e., the
last written bit at the encoder, is always equal to 1 and thus represents the so-
called rbsp_stop_one_bit. Before packaging of the bitstream, the arithmetic
codeword is filled up for byte alignment with zero-valued alignment bits.
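As an illustration, the termination and flushing behavior described above can be sketched in simplified form. The class below is a hypothetical, heavily reduced model (10-bit low register, no carry or outstanding-bit handling) and is not the normative HEVC flush procedure; it only shows why a terminating bin with R_LPS = 2 yields 7 renormalization bits, followed by 2 flush bits ending in the rbsp_stop_one_bit, and zero-valued alignment bits:

```python
class ArithmeticEncoder:
    """Simplified sketch of M coder termination (not normative HEVC code)."""

    def __init__(self):
        self.low = 0
        self.range = 510   # initial interval width, as in CABAC
        self.bits = []     # output bit buffer

    def put_bit(self, b):
        self.bits.append(b)

    def encode_terminate(self, bin_val):
        # Terminating bins use the fixed, non-adapting R_LPS = 2.
        self.range -= 2
        if bin_val:                  # terminating value: flush the codeword
            self.low += self.range
            self.range = 2
            self._flush()
        else:                        # non-terminating value: renormalize if needed
            self._renorm()

    def _renorm(self):
        # With range = 2, this loop runs 7 times (2 -> 4 -> ... -> 256),
        # producing the 7 output bits mentioned in the text.
        while self.range < 256:
            self.put_bit((self.low >> 9) & 1)   # simplified: no carry handling
            self.low = (self.low << 1) & 0x3FF
            self.range <<= 1

    def _flush(self):
        self._renorm()
        # Two more bits terminate the codeword; the last written bit is
        # always 1 and represents the rbsp_stop_one_bit.
        self.put_bit((self.low >> 9) & 1)
        self.put_bit(1)              # rbsp_stop_one_bit
        # Byte alignment with zero-valued alignment bits.
        while len(self.bits) % 8 != 0:
            self.put_bit(0)
```

Running `encode_terminate(1)` on a fresh encoder emits 9 codeword bits (7 from renormalization plus 2 flush bits, the last being 1) and then pads to the next byte boundary with zeros.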
8.3 Design Considerations
Most of the proposals submitted to the joint Call for Proposals on HEVC in April
2010 already included some form of advanced entropy coding. Some of those
techniques were based on improved versions of CAVLC or CABAC, others used
alternative methods of statistical coding, such as V2V (variable-to-variable)
codes [31] or PIPE (probability interval partitioning entropy) codes [50, 51, 102],
and a third category introduced increased capabilities for parallel processing on
a bin level [84], syntax element level [94, 96], or slice level [25, 28, 105]. In
addition, improved techniques for coding of transform coefficients, such as
zero-tree representations [2], alternate scanning schemes [40], or template-based
context models [55, 102], were proposed.
After an initial testing phase of video coding technology from the best performing
HEVC proposals, it was decided to start the first HEVC test model (HM1.0)
[53] with two alternative configurations similar to what was given for entropy
coding in H.264/AVC: a high efficiency configuration based on CABAC and a
low-complexity configuration based on LCEC as a CAVLC surrogate. Interestingly
enough, the CABAC-based entropy coding of HM1.0 already included techniques
for improving both coding efficiency and throughput relative to its H.264/AVC-
related predecessor. To be more specific, a template-based context modeling scheme
for larger transform block sizes [49, 55] and a parallel context processing technique
for selected syntax elements of transform coefficient coding [10] were already
part of HM1.0. During the subsequent collaborative HEVC standardization phase,
more techniques covering both aspects of coding efficiency and throughput were
integrated, as will be discussed in more detail in the following.
While CABAC inherently targets high coding efficiency, its data dependencies
can cause it to become a throughput bottleneck, especially at high bit rates, as was
already analyzed in the context of H.264/AVC [95]. This means that, without any
further provision, it might have been difficult to support the growing throughput