of the first two tasks, with the performance measure being the mean squared quantization
error. In this section we look at accomplishing the third task: assigning codewords to the
quantization intervals. Recall that this becomes an issue when we use variable-length codes;
in that situation, the rate is the performance measure.
We can take two approaches to the variable-length coding of quantizer outputs. We can
redesign the quantizer by taking into account the fact that the selection of the decision bound-
aries will affect the rate, or we can keep the design of the quantizer the same (i.e., Lloyd-Max
quantization) and simply entropy-code the quantizer output. Since the latter approach is by
far the simpler one, let's look at it first.
9.7.1 Entropy Coding of Lloyd-Max Quantizer Outputs
The process of trying to find the optimum quantizer for a given number of levels and rate is a
rather difficult task. An easier approach to incorporating entropy coding is to design a quantizer
that minimizes the msqe, that is, a Lloyd-Max quantizer, then entropy-code its output.
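The output entropy in question can be computed directly from the quantizer's decision boundaries and the input density: if p_i is the probability of the input falling in the ith quantization interval, the entropy is -sum_i p_i log2 p_i. The following sketch does this for a unit-variance Laplacian input; the step size used is only illustrative, not the msqe-optimal value, so the result will not match Table 9.7 exactly.

import math

def laplace_cdf(x, b=1 / math.sqrt(2)):
    # CDF of a zero-mean Laplacian with variance 2*b**2 (unit variance here)
    if x < 0:
        return 0.5 * math.exp(x / b)
    return 1.0 - 0.5 * math.exp(-x / b)

def output_entropy(boundaries):
    # Entropy (bits/sample) of the quantizer output, given the interior
    # decision boundaries; the outermost intervals extend to +/- infinity
    edges = [-math.inf] + sorted(boundaries) + [math.inf]
    probs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        p_hi = 1.0 if hi == math.inf else laplace_cdf(hi)
        p_lo = 0.0 if lo == -math.inf else laplace_cdf(lo)
        probs.append(p_hi - p_lo)
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Illustrative 4-level uniform quantizer with step size 1.0 (not the optimal step)
step = 1.0
print(output_entropy([-step, 0.0, step]))

Substituting the msqe-optimal decision boundaries into the same calculation reproduces the entries of Table 9.7.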
In Table 9.7 we list the output entropies of uniform and nonuniform Lloyd-Max quantizers.
Notice that while the difference in rate for lower levels is relatively small, for a larger number
of levels, there can be a substantial difference between the fixed-rate and entropy-coded cases.
For example, for 32 levels a fixed-rate quantizer would require 5 bits per sample. However,
the entropy of a 32-level uniform quantizer for the Laplacian case is 3.779 bits per sample,
which is more than 1 bit less. Notice that the difference between the fixed rate and the uniform
quantizer entropy is generally greater than the difference between the fixed rate and the entropy
of the output of the nonuniform quantizer. This is because the nonuniform quantizers have
smaller step sizes in high-probability regions and larger step sizes in low-probability regions,
which brings the probabilities of the input falling in the different quantization intervals closer
together. This, in turn, raises the output entropy of the nonuniform quantizer with respect to
the uniform quantizer (a numerical illustration appears after Table 9.7). Finally, the closer the
distribution is to being uniform, the smaller the difference in the rates. Thus, the difference in
rates is much smaller for the quantizer for the Gaussian source than for the quantizer for the
Laplacian source.
TABLE 9.7   Output entropies in bits per sample for minimum mean squared error quantizers.

                           Gaussian                   Laplacian
Number of Levels      Uniform    Nonuniform      Uniform    Nonuniform
        4              1.904       1.911          1.751       1.728
        6              2.409       2.442          2.127       2.207
        8              2.759       2.824          2.394       2.479
       16              3.602       3.765          3.063       3.473
       32              4.449       4.730          3.779       4.427
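To see numerically why the nonuniform quantizer ends up with the higher output entropy, the sketch below estimates the output entropy by Monte Carlo for two 8-level partitions of a unit-variance Laplacian source: one with equal step sizes and one with smaller steps near zero and larger steps in the tails. The boundary values are hand-picked for illustration and are not the msqe-optimal ones.

import numpy as np

rng = np.random.default_rng(0)
# One million samples from a zero-mean, unit-variance Laplacian source
samples = rng.laplace(scale=1 / np.sqrt(2), size=1_000_000)

def estimated_entropy(x, boundaries):
    # Monte Carlo estimate of the quantizer-output entropy in bits/sample
    idx = np.digitize(x, boundaries)
    counts = np.bincount(idx, minlength=len(boundaries) + 1)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

# Hand-picked 8-level partitions (illustrative, not msqe-optimal boundaries)
uniform_edges = [-2.25, -1.50, -0.75, 0.0, 0.75, 1.50, 2.25]   # equal steps
nonuniform_edges = [-2.5, -1.3, -0.5, 0.0, 0.5, 1.3, 2.5]      # small steps near 0

print("uniform   :", estimated_entropy(samples, uniform_edges))     # roughly 2.37 bits
print("nonuniform:", estimated_entropy(samples, nonuniform_edges))  # roughly 2.56 bits

The smaller steps near zero reduce the probability of the most likely intervals, so the interval probabilities become more nearly equal and the entropy rises, which is the behavior reflected in the nonuniform columns of Table 9.7.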
 