as described above, mitigates the effect of such a rigorous partitioning, the loss in
coding efficiency is still too large to be acceptable for certain applications.
Wavefront parallel processing (WPP) is such a technique for picture partitioning
with the focus on improving the capabilities for parallel processing at virtually
no loss in coding efficiency [27, 34]. According to the WPP scheme, a picture
is partitioned into rows of CTUs with each row being represented by its own
CABAC bitstream which, however, is not fully independently parsable, except for
the bitstream belonging to the first row of CTUs in a picture. Nevertheless, independent
parsing and decoding of the WPP bitstreams is possible, if the processing from
one CTU row to the next complies with an offset of two consecutive CTUs. This
offset guarantees, on the one hand, that all spatial dependencies for the decoding
process are preserved and, on the other hand, it permits inheritance of the adapted
probability models from the first two CTUs in the preceding row of CTUs. The
latter functionality, however, requires storing the content of all probability models
after decoding the second CTU in a row. As already discussed above, the required
memory depends on the slice type: 134 bytes are needed for I slices and 154 bytes
for P and B slices. Note, however, that with proper scheduling
and synchronization at the decoder, only one instance of such an additional context
memory is required in addition to the N context memories required for parsing and
decoding N CTU rows in parallel.
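The wavefront schedule described above can be sketched as follows. This is a minimal illustration, not decoder code: row and CTU counts are hypothetical, the actual CTU decoding is replaced by a placeholder, and the two-CTU offset is enforced with a condition variable so that each row only advances once the row above is far enough ahead.

```python
import threading

NUM_ROWS = 4      # hypothetical number of CTU rows in the picture
CTUS_PER_ROW = 8  # hypothetical number of CTUs per row
WPP_OFFSET = 2    # WPP requires the row above to be two CTUs ahead

progress = [0] * NUM_ROWS  # CTUs decoded so far in each row
cond = threading.Condition()
log = []                   # global decode order, for checking the invariant

def decode_row(row):
    for ctu in range(CTUS_PER_ROW):
        with cond:
            # Wait until the row above is at least WPP_OFFSET CTUs ahead
            # (row 0 has no such dependency). This preserves all spatial
            # dependencies and makes the adapted probability models of the
            # first two CTUs of the row above available for inheritance.
            need = min(ctu + WPP_OFFSET, CTUS_PER_ROW)
            while row > 0 and progress[row - 1] < need:
                cond.wait()
        # ... actual CTU decoding of (row, ctu) would happen here ...
        with cond:
            progress[row] += 1
            log.append((row, ctu))
            cond.notify_all()

threads = [threading.Thread(target=decode_row, args=(r,))
           for r in range(NUM_ROWS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Check the wavefront invariant on the recorded order: when CTU (r, c)
# was decoded, row r-1 had already decoded at least c + WPP_OFFSET CTUs.
seen = [0] * NUM_ROWS
for row, ctu in log:
    assert row == 0 or seen[row - 1] >= min(ctu + WPP_OFFSET, CTUS_PER_ROW)
    seen[row] = ctu + 1
print("wavefront order valid:", len(log) == NUM_ROWS * CTUS_PER_ROW)
```

Note that only the synchronization pattern is modeled here; in a real decoder each row additionally parses its own CABAC bitstream and, per the text, one extra context memory suffices when the decoder schedules the rows accordingly.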
The same context memory handling applies also to the concept of dependent
slice segments [69]. In HEVC, slices are composed of one initial independent
slice segment and zero or more dependent slice segments, all of which contain an
integer number of CTUs. Compared to regular slices or independent slice segments,
dependent slice segments do not break the coding dependencies within the picture
area to which the corresponding CTUs belong. Although each dependent slice
segment has its own CABAC bitstream, the parsing of this bitstream cannot start
before the parsing of the preceding dependent or independent slice segment has been
finished. In particular, the content of all adapted probability models after parsing the
last CTU in the preceding slice segment needs to be stored and propagated to the
current dependent slice segment. Therefore, the same amount of additional context
memory is required as in the WPP case. Note, however, that WPP and dependent
slices, even though most often used together, are different concepts. While WPP
targets parallel processing, dependent slices cannot be processed in parallel and
are most useful in applications requiring ultra-low delay, since each dependent slice
segment can be put into a separate transport packet. Please refer to Chap. 3 for more
details.
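The sequential context propagation across slice segments can be sketched as follows. This is an illustrative model only, assuming a toy "context memory" (a counter array sized per the byte counts given above) and a trivial stand-in for CABAC adaptation; the segment fields and function names are hypothetical, not HEVC syntax.

```python
def init_contexts(slice_type):
    # An independent slice segment (re)initializes all probability models.
    # Sizes follow the chapter: 134 bytes for I, 154 for P and B slices.
    size = 134 if slice_type == "I" else 154
    return {"state": [0] * size}

def parse_slice_segment(segment, inherited_contexts):
    # A dependent segment continues with the context memory stored after
    # the last CTU of the preceding segment; parsing therefore cannot
    # start before that segment has finished.
    if segment["dependent"]:
        contexts = inherited_contexts  # propagated, not reinitialized
    else:
        contexts = init_contexts(segment["slice_type"])
    for _ in range(segment["num_ctus"]):
        contexts["state"][0] += 1  # stand-in for probability adaptation
    return contexts  # stored for a possible following dependent segment

segments = [
    {"dependent": False, "slice_type": "P", "num_ctus": 3},
    {"dependent": True,  "slice_type": "P", "num_ctus": 2},
]
ctx = None
for seg in segments:
    ctx = parse_slice_segment(seg, ctx)
print(ctx["state"][0])  # 5: adaptation carried across the dependent segment
```

The single inherited context memory in this sketch corresponds to the one additional context memory mentioned for the WPP case; the dependent segment never resets it, which is exactly why dependent segments do not break coding dependencies.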
8.8 Overall Performance
This section analyzes the improvements of CABAC in HEVC relative to CABAC
in H.264/AVC. In the first part of this section, the impact of all relevant CABAC
changes in terms of coding efficiency is experimentally evaluated, while in the