CG is divided into two regions, as shown in Fig. 3.8. The intra prediction modes and
CG regions are used as context in the coding of syntax elements including the last
CG position, the last coefficient position, and the run value.
3.2.6 In-Loop Filtering
Artifacts such as blocking, ringing, color bias, and blurring are quite common in
compressed video, especially at medium and low bit rates. To suppress these artifacts,
deblocking filtering, sample adaptive offset (SAO) filtering (Chen et al. 2013), and
adaptive loop filtering (ALF) (Zhang et al. 2014) are applied to the reconstructed
pictures sequentially.
The deblocking filter aims at removing the blocking artifacts caused by block transform
and quantization. The basic unit for the deblocking filter is an 8 × 8 block. For
each 8 × 8 block, the deblocking filter is applied to a boundary only if that boundary
is also a CU, PU, or TU boundary.
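The boundary condition above can be sketched as a simple membership test. This is a hypothetical illustration, not the AVS2 data structures: the boundary sets and pixel positions below are invented for the example.

```python
# Hypothetical sketch of the deblocking boundary test: an 8x8 edge is filtered
# only if it coincides with a CU, PU, or TU boundary. The set-of-positions
# representation is an assumption made for illustration.

def should_deblock(edge_pos, cu_bounds, pu_bounds, tu_bounds):
    """Return True if the edge at edge_pos lies on a CU, PU, or TU boundary."""
    return edge_pos in cu_bounds or edge_pos in pu_bounds or edge_pos in tu_bounds

# Example: vertical edge x-positions inside a 32-pixel-wide area.
cu_bounds = {0, 16}        # two 16x16 CUs side by side
pu_bounds = {0, 8, 16}     # the first CU is split into two 8x16 PUs
tu_bounds = {0, 16}        # TUs coincide with the CUs here

filtered_edges = [x for x in (0, 8, 16, 24)
                  if should_deblock(x, cu_bounds, pu_bounds, tu_bounds)]
print(filtered_edges)  # the edge at x = 24 is interior to a block, so it is skipped
```

The edge at x = 8 is filtered only because the PU split introduces an internal boundary there; without that split it would be skipped like x = 24.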
After the deblocking filter, an SAO filter is applied to reduce the mean sample
distortion of a region: an offset is added to the reconstructed samples to reduce
ringing and contouring artifacts. There are two kinds of offset: Edge
Offset (EO) mode and Band Offset (BO) mode. For EO mode, the encoder can select
and signal a vertical, horizontal, downward-diagonal, or upward-diagonal filtering
direction. For BO mode, an offset value that directly depends on the amplitudes of
the reconstructed samples is added to the reconstructed samples.
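The BO mode can be sketched as follows. The 32-band split and the offset values are illustrative assumptions for this sketch, not figures taken from the AVS2 specification.

```python
# Sketch of SAO Band Offset (BO): the amplitude of a reconstructed sample
# selects a band, and the offset signalled for that band is added. Band count
# and offset values here are assumptions made for illustration.

def band_offset(sample, offsets, bit_depth=8):
    band = sample >> (bit_depth - 5)           # 32 equal bands (width 8 for 8-bit)
    corrected = sample + offsets.get(band, 0)  # most bands carry a zero offset
    return max(0, min((1 << bit_depth) - 1, corrected))  # clip to sample range

offsets = {12: 2, 13: 1, 14: -1, 15: -2}       # hypothetical signalled offsets
print(band_offset(100, offsets))  # 100 lies in band 12, so 100 + 2 = 102
print(band_offset(40, offsets))   # band 5 carries no offset; sample unchanged
```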
ALF is the last stage of in-loop filtering. There are two stages in this process. The
first stage is filter coefficient derivation. To train the filter coefficients, the encoder
classifies reconstructed pixels of the luminance component into 16 categories, and
one set of filter coefficients is trained for each category using Wiener-Hopf equations
to minimize the mean squared error between the original frame and the reconstructed
frame. To reduce the redundancy between these 16 sets of filter coefficients, the
encoder will adaptively merge them based on the rate-distortion performance. At its
maximum, 16 different filter sets can be assigned for the luminance component and
only one for the chrominance components. The second stage is the filter decision,
made at both the frame level and the LCU level. First, the encoder decides whether
frame-level adaptive loop filtering is performed; if frame-level ALF is on, the
encoder further decides whether ALF is performed for each LCU.
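The coefficient-derivation stage can be sketched as a least-squares (Wiener-Hopf) fit that minimizes the mean squared error between original and filtered reconstructed samples. As an assumption for this sketch, a 1-D 3-tap filter over a synthetic signal stands in for the real 2-D ALF filter shape and the 16 luma categories.

```python
import numpy as np

# Sketch of ALF coefficient derivation: solve the Wiener-Hopf normal equations
# A^T A c = A^T b in least-squares form, where each row of A is a window of
# reconstructed samples and b holds the corresponding original samples.

def train_wiener(rec, orig, taps=3):
    half = taps // 2
    rows = [rec[i - half:i + half + 1] for i in range(half, len(rec) - half)]
    A = np.asarray(rows, dtype=float)            # windows of the reconstruction
    b = np.asarray(orig[half:len(rec) - half])   # targets: the original samples
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

rng = np.random.default_rng(0)
orig = rng.normal(size=2000)
rec = orig + rng.normal(scale=0.3, size=2000)    # "reconstruction" = orig + noise
coeffs = train_wiener(rec, orig)
print(coeffs.round(3))
```

Applying the trained coefficients to the reconstruction lowers its MSE against the original, which is exactly the criterion the text says the encoder minimizes per category before merging the 16 coefficient sets.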
3.3 Scene Video Coding
More and more videos are captured in specific scenes, such as surveillance video
and videos from classrooms, homes, courts, etc., and are characterized by a temporally
stable background. The redundancy originating from the background can be further
reduced. AVS2 developed a background picture model-based coding method (Dong