4. The difference is transformed using the DCT and quantized.
The quantization also uses a scaling factor to regulate the
average number of bits in compressed video frames.
5. The quantizer output, motion vectors, and header information are entropy (variable length) coded, resulting in the compressed video bitstream (a simple variable-length coding example appears after this list).
6. In a feedback path, the quantized macroblocks are rescaled and transformed using the IDCT to generate the same difference, or residual, that the decoder will produce. It contains the same artifacts as the decoder's version because the quantization processing is irreversible.
7. The quantized difference is added to the motion-compensated prediction macroblock (see step two above) to form the reconstructed frame, which can be used as the reference frame when encoding the next frame. It is worth remembering that the decoder only has access to reconstructed video frames, not the original video frames, to use as reference frames. A sketch of this feedback path follows the list.
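As a rough illustration of steps 4, 6 and 7, the sketch below applies a 2-D DCT to an 8 x 8 residual block, quantizes the coefficients with a single mquant scale factor, then rescales and inverse transforms them to rebuild the same lossy residual the decoder will see, and adds it back to the prediction. The 8 x 8 block size, the flat uniform quantizer and the mquant value are assumptions made for illustration only; real encoders use quantization matrices, zig-zag scanning and per-macroblock rate control.

    import numpy as np

    N = 8  # transform block size assumed for this sketch

    def dct_matrix(n=N):
        # Orthonormal DCT-II basis matrix (n x n).
        k = np.arange(n)
        c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        c[0, :] = np.sqrt(1.0 / n)
        return c

    C = dct_matrix()

    def quantize(coeffs, mquant):
        # Uniform quantizer; mquant scales the step size to regulate bit rate.
        return np.round(coeffs / mquant).astype(np.int32)

    def dequantize(levels, mquant):
        return levels.astype(np.float64) * mquant

    # Hypothetical data: a motion-compensated prediction and the current block.
    rng = np.random.default_rng(0)
    prediction = rng.integers(0, 256, (N, N)).astype(np.float64)
    original = prediction + rng.normal(0.0, 4.0, (N, N))

    residual = original - prediction                      # difference
    levels = quantize(C @ residual @ C.T, mquant=8)       # DCT + quantize (step 4)
    recon_residual = C.T @ dequantize(levels, 8) @ C      # rescale + IDCT (step 6)
    reconstructed = prediction + recon_residual           # reference data (step 7)

Because the feedback path works from the quantized levels, the reconstructed block matches what the decoder will compute, quantization artifacts included.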
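For step 5, the lines below show one simple form of variable-length coding, using exponential-Golomb codes purely as an illustration; the symbol values are hypothetical, and actual standards define their own Huffman-style VLC tables or arithmetic coders for coefficients, motion vectors and header fields.

    def exp_golomb(n):
        # Unsigned exponential-Golomb code: (n + 1) in binary, preceded by
        # one leading zero per extra bit, so small values get short codes.
        bits = bin(n + 1)[2:]
        return "0" * (len(bits) - 1) + bits

    def signed_index(v):
        # Map a signed value (quantized level or motion vector component)
        # to the non-negative index the code expects: 0, 1, -1, 2, -2, ...
        return 2 * v - 1 if v > 0 else -2 * v

    # Hypothetical macroblock data: mostly small values, a few larger ones.
    symbols = [3, -1, 0, 0, 2, 0, -4]
    bitstream = "".join(exp_golomb(signed_index(v)) for v in symbols)
    print(bitstream)  # frequent small values receive the shortest codewords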
A simplified block diagram of the video decoder is shown in Figure 14.7. The following steps take place in the video decoder:
Figure 14.7. Video compression decoder. (Block diagram: the compressed video bitstream is entropy decoded; frames are buffered and re-ordered; coefficients are dequantized and rescaled using mquant and header information, then inverse transformed with the IDCT; the result is added to the motion-compensated prediction taken from the previous reconstructed frame and written to the reconstructed frame buffer, producing the uncompressed video output.)
1. The input compressed stream is entropy decoded. This extracts header information, coefficients, and motion vectors.
2. The data is assembled into video frames of the different types (I, P, B), buffered, and re-ordered.
3. For each frame, at the macroblock level, the coefficients are rescaled and transformed using the IDCT to produce the difference, or residual, block.
4. The decoded motion vector is used to extract the macroblock data from a previously decoded video frame. This motion-compensated prediction is then added to the residual block to form the reconstructed macroblock (a sketch of this step follows the list).
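The sketch below illustrates decoder steps 3 and 4 for one macroblock, assuming a 16 x 16 block, integer-pel motion vectors that stay inside the frame, and 8-bit samples; the names motion_compensate and decode_macroblock are made up for this example, and real decoders add sub-pixel interpolation and edge padding.

    import numpy as np

    MB = 16  # macroblock size assumed for this sketch

    def motion_compensate(prev_frame, mb_row, mb_col, mv):
        # Fetch the prediction block that the motion vector (dy, dx) points to
        # in the previous reconstructed frame.
        y = mb_row * MB + mv[0]
        x = mb_col * MB + mv[1]
        return prev_frame[y:y + MB, x:x + MB]

    def decode_macroblock(prev_frame, cur_frame, mb_row, mb_col, mv, residual):
        # Add the rescaled/IDCT residual to the motion-compensated prediction
        # and store the result in the current reconstructed frame buffer.
        prediction = motion_compensate(prev_frame, mb_row, mb_col, mv)
        block = np.clip(prediction + residual, 0, 255)
        cur_frame[mb_row * MB:(mb_row + 1) * MB,
                  mb_col * MB:(mb_col + 1) * MB] = block

    # Toy usage: one macroblock of a 32 x 32 frame, zero motion, flat residual.
    prev = np.full((32, 32), 128.0)     # previous reconstructed frame
    cur = np.zeros_like(prev)           # current reconstructed frame buffer
    decode_macroblock(prev, cur, mb_row=0, mb_col=1, mv=(0, 0),
                      residual=np.full((MB, MB), 3.0))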