to clustering. This clustering is exploited by the LBG algorithm by placing output points at
the location of these clusters. However, in Example 10.3.2, we saw that even when there is
no correlation between samples, there is a kind of probabilistic structure that becomes more
evident as we group the random inputs of a source into larger and larger blocks or vectors.
In Example 10.3.2, we changed the position of the output point in the top-right corner. All
four corner points have the same probability, so we could have chosen any of these points. In
the case of the two-dimensional Laplacian distribution in Example 10.3.2, all points that lie
on the contour described by |x| + |y| = constant have equal probability. These are called
contours of constant probability. For spherically symmetrical distributions like the Gaussian
distribution, the contours of constant probability are circles in two dimensions, spheres in three
dimensions, and hyperspheres in higher dimensions.
We mentioned in Example 10.3.2 that the points away from the origin have very little
probability mass associated with them. Based on what we have said about the contours of
constant probability, we can be a little more specific and say that the points on constant
probability contours farther away from the origin have very little probability mass associated
with them. Therefore, we can get rid of all of the points outside some contour of constant
probability without incurring much of a distortion penalty. In addition, as the number of
reconstruction points is reduced, there is a decrease in rate, thus improving the rate distortion
performance.
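To get a feel for how little probability mass sits outside a contour far from the origin, the following sketch evaluates P(|X| + |Y| > c) for a pair of independent Laplacian random variables. It assumes a unit-variance Laplacian (a choice made here for concreteness; the text does not fix the variance) and checks the closed-form tail against a Monte Carlo estimate.

```python
import numpy as np

# Tail mass of |X| + |Y| for i.i.d. Laplacian X, Y.  Assumes unit variance,
# i.e. density (lam/2) * exp(-lam * |x|) with lam = sqrt(2).
lam = np.sqrt(2.0)

def mass_outside(c):
    # |X| and |Y| are i.i.d. exponential(lam), so |X| + |Y| is Erlang-2:
    # P(|X| + |Y| > c) = exp(-lam * c) * (1 + lam * c).
    return np.exp(-lam * c) * (1.0 + lam * c)

rng = np.random.default_rng(0)
samples = rng.laplace(scale=1.0 / lam, size=(500_000, 2))  # numpy's scale = 1/lam
for c in (1.0, 2.0, 4.0):
    mc = np.mean(np.abs(samples).sum(axis=1) > c)
    print(f"c = {c}: closed form {mass_outside(c):.4f}, Monte Carlo {mc:.4f}")
```

The mass outside the contour falls off roughly exponentially with c, which is why trimming the outermost output points costs very little distortion.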
Example 10.6.1:
Let us design a two-dimensional uniform quantizer by keeping only the output points in the
quantizer of Example 10.3.2 that lie on or within the contour of constant probability given
by |x1| + |x2| = 5Δ. If we count all the points that are retained, we get 60 points. This is
close enough to 64 that we can compare it with the eight-level uniform scalar quantizer. If we
simulate this quantization scheme with a Laplacian input, and the same step size as the scalar
quantizer, that is, Δ = 0.7309, we get an SNR of 12.22 dB. Comparing this to the 11.44 dB
obtained with the scalar quantizer, we see that there is a definite improvement. We can get
slightly more improvement in performance if we modify the step size.
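A small simulation sketch of this example follows. The step size Δ = 0.7309 and the contour |x1| + |x2| = 5Δ are taken from the example, but the exact placement of the eight added boundary points is an assumption based on the contour geometry (the text only shows them in Figure 10.22), so the simulated SNR should only roughly match the 12.22 dB quoted above.

```python
import numpy as np

delta = 0.7309                                # step size quoted in the example
levels = (np.arange(8) - 3.5) * delta         # 8 uniform scalar output levels

# Product codebook: all 64 pairs of scalar output levels.
grid = np.array([(a, b) for a in levels for b in levels])

# Keep the points on or inside the contour |x1| + |x2| = 5*delta (52 points) ...
kept = grid[np.abs(grid).sum(axis=1) <= 5 * delta + 1e-9]

# ... and add eight points on the contour.  Placing them at
# (+/-9*delta/2, +/-delta/2) and (+/-delta/2, +/-9*delta/2) is an assumption.
extra = np.array([(s1 * 4.5 * delta, s2 * 0.5 * delta)
                  for s1 in (-1, 1) for s2 in (-1, 1)])
codebook = np.vstack([kept, extra, extra[:, ::-1]])
print("codebook size:", len(codebook))        # 52 + 8 = 60

# Monte Carlo SNR for unit-variance Laplacian input, nearest-neighbor encoding.
rng = np.random.default_rng(1)
x = rng.laplace(scale=1 / np.sqrt(2), size=(50_000, 2))
nearest = codebook[np.argmin(((x[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)]
snr = 10 * np.log10(np.mean(x ** 2) / np.mean((x - nearest) ** 2))
print(f"SNR ~ {snr:.2f} dB")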
Notice that the improvement in the previous example is obtained only by restricting the
outer boundary of the quantizer. Unlike Example 10.3.2, we did not change the shape of
any of the inner quantization regions. This gain is referred to in the quantization literature as
boundary gain. In terms of the description of quantization noise in Chapter 8, we reduced
the overload error by reducing the overload probability, without a commensurate increase in
the granular noise. In Figure 10.22, we have marked the 12 output points that belonged to
the original 64-level quantizer, but do not belong to the 60-level quantizer, by drawing circles
around them. Removal of these points results in an increase in overload probability. We also
marked the eight output points that belong to the 60-level quantizer, but were not part of the
original 64-level quantizer, by drawing squares around them. Adding these points results in a
decrease in the overload probability. If we calculate the increases and decreases (see Problem
5 at the end of this chapter), we find that the net result is a decrease in overload probability.
This overload probability is further reduced as the dimension of the vector is increased.
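The net change in overload probability can be estimated along the following lines (in the spirit of Problem 5). Treating the granular region as the union of Δ × Δ squares centred on the output points is a simplifying assumption, as is the placement of the eight added points carried over from the sketch above; with those caveats, the estimate below shows the 60-point quantizer ending up with the smaller overload probability.

```python
import numpy as np

delta = 0.7309
levels = (np.arange(8) - 3.5) * delta
grid64 = np.array([(a, b) for a in levels for b in levels])        # original 64 points
kept = grid64[np.abs(grid64).sum(axis=1) <= 5 * delta + 1e-9]      # 52 retained points
extra = np.array([(s1 * 4.5 * delta, s2 * 0.5 * delta)             # assumed added points
                  for s1 in (-1, 1) for s2 in (-1, 1)])
grid60 = np.vstack([kept, extra, extra[:, ::-1]])                  # 60-point codebook

def overload_prob(codebook, n=100_000):
    # A sample overloads if no output point lies within delta/2 of it in both
    # coordinates, i.e. it falls outside every delta-by-delta granular cell.
    rng = np.random.default_rng(2)                 # same samples for both codebooks
    x = rng.laplace(scale=1 / np.sqrt(2), size=(n, 2))
    cheb = np.abs(x[:, None, :] - codebook[None]).max(axis=2)
    return np.mean(cheb.min(axis=1) > delta / 2)

print("overload, 64-point:", overload_prob(grid64))
print("overload, 60-point:", overload_prob(grid60))
```

Under these assumptions the eight added points, which reach further out along the axes where the Laplacian density is still relatively large, remove more overload mass than the 12 discarded corner points put back, which is the net decrease described above.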