Now we need two things: to know how to find the closest output point (remember, not
all lattice points are output points), and to find a way of assigning a binary codeword to the
output point and recovering the output point from the binary codeword. This can be done by
again making use of the specific structures of the lattices. While the necessary procedures are
simple, explaining them is lengthy and involved (see [154, 152] for details).
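For concreteness, here is a minimal Python sketch (our own, not from the text) of the standard fast rounding rule for finding the nearest point of one commonly used lattice, the D_n lattice of integer vectors with even coordinate sum. Checking whether that lattice point is actually an output point of the truncated codebook, and the codeword assignment itself, require the additional machinery cited above.

    import numpy as np

    def nearest_point_Dn(x):
        # Round every coordinate to the nearest integer; if the rounded
        # coordinates sum to an odd number (so the point is not in D_n),
        # re-round the worst-rounded coordinate in the other direction.
        f = np.rint(x)
        if int(f.sum()) % 2 == 0:
            return f                         # even sum: already in D_n
        err = x - f
        k = int(np.argmax(np.abs(err)))      # largest rounding error
        f[k] += 1.0 if err[k] > 0 else -1.0  # round it the other way
        return f

For example, nearest_point_Dn(np.array([0.6, 0.2])) returns [0., 0.]: naive rounding gives [1., 0.], which is not in D_2, and among D_2 points [0., 0.] is closer to the input than [1., 1.].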
10.7 Variations on the Theme
Because of its capability to provide high compression with relatively low distortion, vector
quantization has been one of the more popular lossy compression techniques over the last
decade in such diverse areas as video compression and low-rate speech compression. During
this period, several people have come up with variations on the basic vector quantization
approach. We briefly look at a few of the more well-known variations here, but this is by no
means an exhaustive list. For more information, see [136, 155].
10.7.1 Gain-Shape Vector Quantization
In some applications such as speech, the dynamic range of the input is quite large. One effect of
this is that, in order to be able to represent the various vectors from the source, we need a very
large codebook. This requirement can be reduced by normalizing the source output vectors,
then quantizing the normalized vector and the normalization factor separately [156, 145]. In
this way, the variation due to the dynamic range is represented by the normalization factor or
gain, while the vector quantizer is free to do what it does best, which is to capture the structure
in the source output. Vector quantizers that function in this manner are called gain-shape
vector quantizers. The pyramid quantizer discussed earlier is an example of a gain-shape
vector quantizer.
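As a sketch of how such a quantizer might be organized (the function names and codebook layout are our own assumptions, not from the text), the encoder separates each input vector into a scalar gain and a unit-norm shape and quantizes the two independently:

    import numpy as np

    def gain_shape_encode(x, shape_codebook, gain_levels):
        # shape_codebook: rows are unit-norm codewords;
        # gain_levels: scalar reconstruction levels for the gain.
        gain = np.linalg.norm(x)
        shape = x / gain if gain > 0 else np.zeros_like(x)
        # For unit-norm codewords, the nearest codeword in Euclidean
        # distance is the one with the largest inner product.
        shape_idx = int(np.argmax(shape_codebook @ shape))
        gain_idx = int(np.argmin(np.abs(gain_levels - gain)))
        return gain_idx, shape_idx

    def gain_shape_decode(gain_idx, shape_idx, shape_codebook, gain_levels):
        # Reconstruct as quantized gain times quantized shape.
        return gain_levels[gain_idx] * shape_codebook[shape_idx]

Note that the shape codebook only has to cover the unit sphere, not the full dynamic range, which is why the split reduces the codebook size the source demands.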
10.7.2 Mean-Removed Vector Quantization
If we were to generate a codebook from an image, differing amounts of background illumination
would result in vastly different codebooks. This effect can be significantly reduced if we remove
the mean from each vector before quantization. The mean and the mean-removed vector can
then be quantized separately. The mean can be quantized using a scalar quantization scheme,
while the mean-removed vector can be quantized using a vector quantizer. Of course, if this
strategy is used, the vector quantizer should be designed using mean-removed vectors as well.
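A minimal sketch of this two-part scheme follows (the names are hypothetical; residual_codebook is assumed to have been designed from mean-removed vectors, as noted above):

    import numpy as np

    def mean_removed_encode(x, residual_codebook, mean_levels):
        # Scalar-quantize the sample mean of x; vector-quantize the
        # mean-removed residual against the mean-removed codebook.
        m = x.mean()
        mean_idx = int(np.argmin(np.abs(mean_levels - m)))
        residual = x - m
        dists = np.sum((residual_codebook - residual) ** 2, axis=1)
        res_idx = int(np.argmin(dists))
        return mean_idx, res_idx

    def mean_removed_decode(mean_idx, res_idx, residual_codebook, mean_levels):
        # Reconstruct by adding the quantized mean back to the
        # quantized residual.
        return mean_levels[mean_idx] + residual_codebook[res_idx]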
Example 10.7.1:
Let us encode the Sinan image using a codebook generated by the Sena image, as we did in
Figure 10.16. However, this time we will use a mean-removed vector quantizer. The result is