Table 10.12 SRAMs for neighboring pixels

  SRAM           Bits
  VPB top        3072
  VPB left       1024
  VPB top-left    768
  Total          4864
Table 10.13 Gate-count (in kgates) breakdown for intra prediction

  Module                                   Logic area
  Reference pixel registers and padding    12.1
  Reference pixel preparation               1.3
  Prediction                                8.1
  Control                                   5.5
  Total                                    27.0
reference preparation and prediction. Another factor is that the three operations require different amounts of computation. For an N×N TU, reference padding and preparation require O(N) computation while prediction is O(N²); for example, a 32×32 TU has only 4N + 1 = 129 reference samples to pad and prepare but N² = 1024 pixels to predict.
The reference preparation operation in HEVC varies depending on the prediction mode. DC mode requires the accumulation of the reference pixels in order to compute the DC value. An angular extension of the reference pixels may be required before prediction can begin. A mode-dependent intra smoothing (MDIS) filter may be applied to the reference pixels for TU sizes 8, 16, and 32, depending on the intra mode.
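These preparation steps are simple enough to express in a few lines. The C sketch below models the smoothing filter and the DC accumulation for a single N×N TU; the function name, the reference-array layout, and the apply_smoothing flag are assumptions made for illustration, and the MDIS mode decision, the angular reference extension, and strong smoothing are not reproduced. It is a behavioral model, not the hardware described in this chapter.

#include <stdint.h>

/* Prepare the 4N+1 padded reference samples of an NxN TU (N = 8, 16 or 32)
 * and return the DC value used by DC mode. Assumed layout:
 * ref[0..2N-1]  = left column, bottom-up
 * ref[2N]       = corner sample
 * ref[2N+1..4N] = top row, left-to-right */
int prepare_references(uint8_t *ref, int N, int apply_smoothing)
{
    int num_ref = 4 * N + 1;

    if (apply_smoothing) {
        /* [1 2 1]/4 smoothing across the reference array; the two end
         * samples are left unfiltered. 'prev' keeps the unfiltered value
         * of ref[i-1] so the filter reads only original samples. */
        uint8_t prev = ref[0];
        for (int i = 1; i < num_ref - 1; i++) {
            uint8_t cur = ref[i];
            ref[i] = (uint8_t)((prev + 2 * cur + ref[i + 1] + 2) >> 2);
            prev = cur;
        }
    }

    /* DC mode: average of the N left and N top neighbours adjacent to the
     * TU (corner excluded), i.e. (sum + N) / (2N) with rounding. */
    int sum = 0, log2N = 0;
    while ((1 << log2N) < N)
        log2N++;
    for (int i = 0; i < N; i++)
        sum += ref[N + i];          /* left neighbours directly beside the TU */
    for (int i = 0; i < N; i++)
        sum += ref[2 * N + 1 + i];  /* top neighbours directly above the TU */
    return (sum + N) >> (log2N + 1);
}

Both loops touch only the 4N + 1 reference samples, so the cost of this step grows linearly with N, in line with the O(N) versus O(N²) observation above.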
10.7.3 Implementation Results
Table 10.13 shows the synthesis results for the intra prediction architecture in 40 nm CMOS. The reference pixel registers and their read-out take the most area. The area for reference preparation, which is a new feature in HEVC, is about 1.3 kgates. The design is synthesized at 200 MHz and can support 4K Ultra-HD decoding at 30 fps.
10.8 In-Loop Filters
HEVC uses two in-loop filters, the deblocking filter and sample adaptive offset (SAO), that attempt to reduce compression artifacts and improve coding efficiency. The deblocking filter in HEVC processes edges on an 8-pixel grid and thus has lower computational complexity than the H.264/AVC deblocking filter, which uses a 4-pixel grid. SAO involves selecting an offset type for each pixel based on its neighboring pixels and adding the offset. Deblocking and SAO can be implemented in a single pipeline stage as described in [30].
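As a rough illustration of the per-pixel SAO operation, the C sketch below classifies one pixel against its two neighbours along a chosen edge-offset direction and applies the corresponding offset. The function and parameter names are illustrative, the band-offset type and the CTB-level parameter signaling are omitted, and the sketch models the per-pixel arithmetic rather than the pipelined design of [30].

#include <stdint.h>

static int sign3(int x) { return (x > 0) - (x < 0); }

static uint8_t clip8(int v) { return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v)); }

/* Edge-offset SAO for one 8-bit pixel: 'left' and 'right' are the two
 * neighbours along the selected edge class (horizontal, vertical or
 * diagonal), and offsets[1..4] hold the per-category offsets for the CTB. */
uint8_t sao_edge_offset(uint8_t cur, uint8_t left, uint8_t right,
                        const int offsets[5])
{
    /* Raw index 0..4 from the two sign comparisons: 0 = local minimum,
     * 2 = monotonic region, 4 = local maximum. */
    int edge = 2 + sign3((int)cur - (int)left) + sign3((int)cur - (int)right);

    /* Remap to categories 1..4; a monotonic pixel (category 0) is unchanged. */
    static const int category[5] = { 1, 2, 0, 3, 4 };
    int cat = category[edge];
    if (cat == 0)
        return cur;
    return clip8((int)cur + offsets[cat]);
}

Band offset works similarly but indexes the offset table by the pixel's intensity band rather than by its neighbours.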
 