$$d_1 = x_1 - \hat{x}_0 \qquad (10)$$
$$\hat{d}_1 = Q[d_1] = d_1 + q_1 \qquad (11)$$
$$\hat{x}_1 = \hat{x}_0 + \hat{d}_1 = \hat{x}_0 + d_1 + q_1 = x_1 + q_1 \qquad (12)$$
$$d_2 = x_2 - \hat{x}_1 \qquad (13)$$
$$\hat{d}_2 = Q[d_2] = d_2 + q_2 \qquad (14)$$
$$\hat{x}_2 = \hat{x}_1 + \hat{d}_2 = \hat{x}_1 + d_2 + q_2 \qquad (15)$$
$$\phantom{\hat{x}_2} = x_2 + q_2 \qquad (16)$$
At the $n$th iteration we have
$$\hat{x}_n = x_n + q_n \qquad (17)$$
and there is no accumulation of the quantization noise. In fact, the quantization noise in the $n$th reconstructed sequence is the quantization noise incurred by the quantization of the $n$th difference. The quantization error for the difference sequence is substantially less than the quantization error for the original sequence. Therefore, this procedure leads to an overall reduction of the quantization error. If we are satisfied with the quantization error for a given number of bits per sample, then we can use fewer bits with a differential encoding procedure to attain the same distortion.
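Equation (17) is easy to check numerically. The following is a minimal sketch (not from the text): a closed-loop differential encoder that records each quantization error $q_n$ and confirms that the reconstruction error $\hat{x}_n - x_n$ equals exactly that single $q_n$, with no accumulation. The uniform quantizer and the sinusoidal test signal are illustrative assumptions.

```python
import math

# Illustrative sketch (not from the text): verify numerically that in
# closed-loop differential encoding the reconstruction error equals the
# single quantization error q_n of the n-th difference (Equation 17).

def quantize(d, step):
    """Uniform quantizer: round to the nearest multiple of step."""
    return step * round(d / step)

x = [math.sin(0.3 * n) for n in range(50)]   # illustrative input signal
step = 0.2

xhat = [x[0]]        # assume x_0 is transmitted exactly
q = [0.0]            # q_n = quantization error of the n-th difference
for n in range(1, len(x)):
    d = x[n] - xhat[-1]         # difference against the reconstruction
    d_hat = quantize(d, step)   # d_hat = d + q_n
    q.append(d_hat - d)
    xhat.append(xhat[-1] + d_hat)

# Identity (17): xhat_n - x_n == q_n for every n (up to float rounding).
ok = all(abs((xhat[n] - x[n]) - q[n]) < 1e-12 for n in range(len(x)))
print(ok)
```

Running this prints `True`: the error in each reconstructed sample is exactly the quantization error of the current difference, never a sum of past errors.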
Example 11.3.2:
Let us try to quantize and then reconstruct the sinusoid of Example 11.2.1 using the two different differencing approaches. Using the first approach, we get a dynamic range of differences from $-0.2$ to $0.2$. Therefore, we use a quantizer step size of 0.1.
In the second approach, the differences lie in the range $[-0.4, 0.4]$. In order to cover this range, we use a step size in the quantizer of 0.2. The reconstructed signals are shown in Figure 11.4.
Notice in the first case that the reconstruction diverges from the signal as we process more and more of the signal. Although the second differencing approach uses a larger step size, this approach provides a more accurate representation of the input.
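The divergence of the first approach is easy to reproduce. The sketch below is illustrative (the sinusoid parameters of Example 11.2.1 are not reproduced in this excerpt, so a ramp is used instead): one pass quantizes differences of the original samples, the other quantizes differences against the reconstruction, and only the open-loop version lets the quantization errors accumulate.

```python
# Illustrative sketch (not the book's code): contrast the two
# differencing approaches on a ramp, where every true difference is 0.13.

def quantize(d, step):
    """Uniform quantizer: round to the nearest multiple of step."""
    return step * round(d / step)

x = [0.13 * n for n in range(100)]
step = 0.1

# Approach 1 (open loop): difference the *original* samples.
# Every difference quantizes to 0.1, so each step loses 0.03.
open_loop = [x[0]]
for n in range(1, len(x)):
    open_loop.append(open_loop[-1] + quantize(x[n] - x[n - 1], step))

# Approach 2 (closed loop): difference against the *reconstruction*,
# so each quantization error is corrected at the next step.
closed_loop = [x[0]]
for n in range(1, len(x)):
    closed_loop.append(closed_loop[-1] + quantize(x[n] - closed_loop[-1], step))

err_open = abs(x[-1] - open_loop[-1])      # grows linearly with n
err_closed = abs(x[-1] - closed_loop[-1])  # stays within step / 2
```

After 100 samples the open-loop error has grown to roughly $99 \times 0.03 \approx 2.97$, while the closed-loop error remains below half the step size, mirroring the behavior in Figure 11.4.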
A block diagram of the differential encoding system as we have described it to this point is shown in Figure 11.5. We have drawn a dotted box around the portion of the encoder that mimics the decoder. The encoder must mimic the decoder in order to obtain a copy of the reconstructed sample used to generate the next difference.
We would like our difference value to be as small as possible. For this to happen, given the system we have described to this point, $\hat{x}_{n-1}$ should be as close to $x_n$ as possible. However, $\hat{x}_{n-1}$ is the reconstructed value of $x_{n-1}$; therefore, we would like $x_{n-1}$ to be close to $x_n$. Unless $x_{n-1}$ is always very close to $x_n$, some function of past values of the reconstructed sequence can often provide a better prediction of $x_n$. We will look at some of these predictor functions later in this chapter. For now, let's modify Figure 11.5 and replace the delay block with a predictor block to obtain our basic differential encoding system as shown in Figure 11.6.
The output of the predictor is the prediction sequence $\{p_n\}$ given by
$$p_n = f(\hat{x}_{n-1}, \hat{x}_{n-2}, \ldots, \hat{x}_0) \qquad (18)$$
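As an illustration of the system in Figure 11.6, here is a hedged sketch in which the predictor function $f$ of Equation (18) is taken to be a simple first-order predictor $p_n = a\,\hat{x}_{n-1}$; the coefficient $a$ and the test signal are assumptions made for the example, not values from the text.

```python
import math

# Illustrative sketch of the basic differential encoding system of
# Figure 11.6: the delay block is replaced by a first-order predictor
# p_n = a * xhat_{n-1}. The coefficient a = 0.9 is an assumption.

def quantize(d, step):
    """Uniform quantizer: round to the nearest multiple of step."""
    return step * round(d / step)

def dpcm(x, step, a=0.9):
    """Differentially encode/decode x; return the reconstruction."""
    xhat = [x[0]]                         # assume x_0 sent exactly
    for n in range(1, len(x)):
        p = a * xhat[-1]                  # predictor output p_n (Eq. 18)
        d_hat = quantize(x[n] - p, step)  # quantized prediction error
        xhat.append(p + d_hat)            # decoder forms xhat_n = p_n + d_hat
    return xhat

signal = [math.sin(0.1 * n) for n in range(100)]
recon = dpcm(signal, step=0.1)
worst = max(abs(s - r) for s, r in zip(signal, recon))
# As in Equation (17), each reconstruction error is just the current
# quantization error, so it stays within step / 2.
```

Because the encoder predicts from reconstructed values, the decoder (which only sees the quantized prediction errors) can form the identical predictions, so the same no-accumulation property as in Equation (17) holds.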