In the following sections, we will look at various predictor and quantizer designs and see how
they function together in a differential encoding system.
11.4 Prediction in DPCM
Differential encoding systems like DPCM gain their advantage by the reduction in the variance
and dynamic range of the difference sequence. How much the variance is reduced depends on
how well the predictor can predict the next symbol based on the past reconstructed symbols. In
this section we will mathematically formulate the prediction problem. The analytical solution
to this problem will give us one of the more widely used approaches to the design of the
predictor. In order to follow this development, some familiarity with the mathematical concepts
of expectation and correlation is needed. These concepts are described in Appendix A.
Define σ_d^2, the variance of the difference sequence, as

    σ_d^2 = E[(x_n − p_n)^2]        (19)

where E[·] is the expectation operator. As the predictor outputs p_n are given by (18), the design
of a good predictor is essentially the selection of the function f(·) that minimizes σ_d^2. One
problem with this formulation is that x̂_n is given by

    x̂_n = x_n + q_n

and q_n depends on the variance of d_n. Thus, by picking f(·), we affect σ_d^2, which in turn
affects the reconstruction x̂_n, which then affects the selection of f(·). This coupling makes
an explicit solution extremely difficult for even the most well-behaved source [170]. As most
real sources are far from well behaved, the problem becomes computationally intractable in
most applications.

We can avoid this problem by making an assumption known as the fine quantization assumption.
We assume that the quantizer step sizes are so small that we can replace x̂_n by x_n, and
therefore

    p_n = f(x_{n−1}, x_{n−2}, ..., x_0)        (20)

Once the function f(·) has been found, we can use it with the reconstructed values x̂_n to
obtain p_n. If we now assume that the output of the source is a stationary process, from the study
of random processes [171] we know that the function that minimizes σ_d^2 is the conditional
expectation E[x_n | x_{n−1}, x_{n−2}, ..., x_0]. Unfortunately, the assumption of stationarity is
generally not true, and even if it were, finding this conditional expectation would require
knowledge of nth-order conditional probabilities, which would generally not be available.

Given the difficulty of finding the best solution, in many applications we simplify the
problem by restricting the predictor function to be linear. That is, the prediction p_n is given
by

    p_n = Σ_{i=1}^{N} a_i x̂_{n−i}        (21)
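As a concrete illustration, the closed-loop recursion described above, in which the encoder predicts from past reconstructed values, quantizes the difference d_n, and forms x̂_n from the prediction plus the quantized difference, can be sketched in Python. The first-order autoregressive source, the single predictor coefficient a_1 = 0.95, and the step size 0.1 below are illustrative choices for the sketch, not values from the text:

```python
import random

def dpcm_encode(x, a, step):
    """Closed-loop DPCM sketch: predict from past *reconstructed*
    samples, quantize the difference, and keep the encoder's state
    identical to what a decoder would reconstruct."""
    N = len(a)
    past = [0.0] * N            # x̂_{n-1}, x̂_{n-2}, ..., x̂_{n-N}
    codes, recon, diffs = [], [], []
    for xn in x:
        pn = sum(a[i] * past[i] for i in range(N))  # p_n = Σ a_i x̂_{n-i}
        dn = xn - pn                                # difference d_n
        qn = round(dn / step)                       # uniform quantizer index
        xr = pn + qn * step                         # x̂_n = p_n + quantized d_n
        codes.append(qn)
        recon.append(xr)
        diffs.append(dn)
        past = [xr] + past[:-1]
    return codes, recon, diffs

def variance(s):
    m = sum(s) / len(s)
    return sum((v - m) ** 2 for v in s) / len(s)

# Hypothetical correlated source: a first-order autoregressive process.
random.seed(0)
x, prev = [], 0.0
for _ in range(5000):
    prev = 0.95 * prev + random.gauss(0.0, 1.0)
    x.append(prev)

codes, recon, diffs = dpcm_encode(x, a=[0.95], step=0.1)
print(variance(x))      # variance of the source
print(variance(diffs))  # variance of the difference sequence
```

Running this on a few thousand samples shows the variance of the difference sequence falling well below the variance of the source, which is exactly the reduction in σ_d^2 that makes differential encoding worthwhile; the reconstruction error for each sample is also bounded by half the quantizer step size.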