Entropy coding is performed at the last stage of video encoding (and first stage
of video decoding), after the video signal has been reduced to a series of syntax
elements. Syntax elements describe how the video signal can be reconstructed at the
decoder. This includes the method of prediction (e.g., spatial or temporal prediction)
along with its associated prediction parameters as well as the prediction error signal,
also referred to as the residual signal. Note that in HEVC only the syntax elements
belonging to the slice segment data are CABAC encoded. All other high level
syntax elements are coded either with zero-order Exponential (Exp)-Golomb codes
or fixed-pattern bit strings. Table 8.1 shows the syntax elements that are encoded
with CABAC in HEVC and H.264/AVC. For HEVC, these syntax elements describe
properties of the coding tree unit (CTU), prediction unit (PU), and transform unit
(TU), while for H.264/AVC, the equivalent syntax elements have been grouped
together along the same categories in Table 8.1. For a CTU, the related syntax
elements describe the block partitioning of the CTU into coding units (CU), whether
the CU is intra-picture (i.e., spatially) predicted or inter-picture (i.e., temporally)
predicted, the quantization parameters of the CU, and the type (edge or band) and
offsets for sample adaptive offset (SAO) in-loop filtering performed on the CTU.
For a PU, the syntax elements describe the intra prediction mode or the motion data.
For a TU, the syntax elements describe the residual signal in terms of frequency
position, sign and magnitude of the quantized transform coefficients.
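As a concrete illustration of the zero-order Exp-Golomb codes used for the high level syntax elements mentioned above, the following short C sketch prints the ue(v) codeword for small non-negative values. It is only a minimal sketch of the codeword construction (a prefix of leading zeros followed by the binary representation of value + 1); the function name write_ue and the bit-string output are illustrative assumptions and are not taken from the HEVC reference software.

#include <stdio.h>
#include <stdint.h>

/* Print the zero-order Exp-Golomb (ue(v)) codeword for a non-negative value:
 * k leading zeros followed by the (k+1)-bit binary form of (value + 1),
 * where k = floor(log2(value + 1)). */
static void write_ue(uint32_t value)
{
    uint32_t code = value + 1;
    int num_bits = 0;
    for (uint32_t tmp = code; tmp > 0; tmp >>= 1)
        num_bits++;                           /* bits needed for (value + 1) */

    for (int i = 0; i < num_bits - 1; i++)
        putchar('0');                         /* prefix: num_bits - 1 zeros  */
    for (int i = num_bits - 1; i >= 0; i--)
        putchar(((code >> i) & 1) ? '1' : '0'); /* suffix: binary of code    */
    putchar('\n');
}

int main(void)
{
    /* values 0..5 map to the codewords: 1, 010, 011, 00100, 00101, 00110 */
    for (uint32_t v = 0; v <= 5; v++)
        write_ue(v);
    return 0;
}

The structured prefix/suffix form is what makes these codes attractive for high level syntax: they can be parsed with simple logic and require no context modeling, in contrast to the CABAC-coded slice segment data.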
This chapter describes how CABAC entropy coding has evolved from
H.264/AVC to HEVC. While high coding efficiency is important for reducing
the transmission and storage cost of video, processing speed and area cost also need
to be considered in the development of HEVC in order to handle the demand for
higher resolutions and frame rates in future video coding systems. Accordingly,
both coding efficiency and throughput improvement tools are discussed. Section 8.2
provides an overview of CABAC entropy coding. Section 8.3 explains the design
considerations and techniques used to address both coding efficiency and throughput
requirements. Sections 8.4-8.7 describe how these techniques were applied to
coding tree unit coding, prediction unit coding, transform unit coding and context
initialization, respectively. Section 8.8 compares the coding efficiency, throughput
and memory requirements of HEVC and H.264/AVC for both common conditions
and worst case conditions.
8.2 CABAC Overview
The CABAC algorithm was originally developed within the joint H.264/AVC
standardization process of ITU-T Video Coding Experts Group (VCEG) and ISO/IEC
Moving Picture Experts Group (MPEG). In a first preliminary version, the new
entropy-coding method of CABAC was introduced as a standard contribution [44]
to the ITU-T VCEG meeting in January 2001. CABAC was adopted as one of two
alternative methods of entropy coding within the H.264/AVC standard. The other
method specified in H.264/AVC was a low-complexity entropy coding technique