Digital Signal Processing Reference
In-Depth Information
Substituting

$$\rho = \frac{\sum_{n=1} x(n)\,x(n-1)}{\sum_{n=1} x^{2}(n)}$$

(the first-order normalized autocorrelation coefficient) in (3.34) gives,

$$\sigma_r^{2} = \sigma_x^{2} + a^{2}\sigma_x^{2} - 2a\sigma_x^{2}\rho \qquad (3.35)$$
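As a numerical aside, the coefficient $\rho$ can be estimated directly from the samples using the definition above. A minimal sketch, assuming a synthetic first-order autoregressive test signal (the AR coefficient 0.9 is an illustrative choice, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic correlated test signal: x(n) = 0.9 x(n-1) + w(n).
# Its true first-order normalized autocorrelation is close to 0.9.
N = 100_000
x = np.zeros(N)
w = rng.standard_normal(N)
for n in range(1, N):
    x[n] = 0.9 * x[n - 1] + w[n]

# First-order normalized autocorrelation coefficient, per the definition above.
rho = np.sum(x[1:] * x[:-1]) / np.sum(x ** 2)
print(rho)  # close to 0.9 for this signal
```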
The prediction gain $G_p$ is then found as,

$$G_p = \frac{\sigma_x^{2}}{\sigma_r^{2}} = \frac{1}{1 + a^{2} - 2a\rho} \qquad (3.36)$$
To maximize the prediction gain, the denominator of equation (3.36) should be minimized with respect to $a$; hence,
$$\frac{\partial\left(1 + a^{2} - 2a\rho\right)}{\partial a} = (0 + 2a - 2\rho) = 0 \qquad (3.37)$$
which gives,

$$a = \rho \qquad (3.38)$$
Substituting $a = \rho$ in (3.36),

$$G_p = \frac{1}{1 + \rho^{2} - 2\rho\rho} = \frac{1}{1 - \rho^{2}} \qquad (3.39)$$
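The optimality of $a = \rho$ can also be checked numerically by sweeping the predictor coefficient and locating the minimum of the denominator of (3.36). A quick sketch (the value $\rho = 0.85$ is an arbitrary assumption for illustration):

```python
import numpy as np

rho = 0.85  # assumed first-order correlation coefficient

# Residual variance ratio sigma_r^2 / sigma_x^2 = 1 + a^2 - 2*a*rho,
# evaluated over a grid of candidate predictor coefficients.
a = np.linspace(-1.0, 1.0, 2001)
denominator = 1 + a ** 2 - 2 * a * rho

# The minimum (i.e. maximum prediction gain) occurs at a = rho.
a_opt = a[np.argmin(denominator)]
print(round(a_opt, 3))  # -> 0.85

# Prediction gain with a = rho matches 1 / (1 - rho^2), equation (3.39).
gp = 1.0 / (1.0 - rho ** 2)
print(gp)
```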
The above result shows that if the correlation between adjacent samples is high, then a differential quantizer will perform significantly better than a nondifferential quantizer. In fact, if the signal to be quantized is a nonvarying DC signal, where $\rho = 1$, the gain of the prediction process will be infinite, i.e. no residual error will be left and, hence, no residual information will need to be transmitted. A typical $\rho$ for speech is between 0.8 and 0.9, which may result in a 4-7 dB signal reduction before quantization, achieving a significant increase in quantization performance.
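The quoted 4-7 dB range follows directly from evaluating (3.39) in decibels at the two endpoints; a short check:

```python
import math

# Prediction gain in dB, 10*log10(1 / (1 - rho^2)), per equation (3.39),
# for typical speech correlation values.
for rho in (0.8, 0.9):
    gp_db = 10 * math.log10(1.0 / (1.0 - rho ** 2))
    print(rho, round(gp_db, 1))
# rho = 0.8 gives about 4.4 dB and rho = 0.9 about 7.2 dB,
# consistent with the 4-7 dB range quoted above.
```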
3.4 Vector Quantization
When a set of discrete-time amplitude values is quantized jointly as a single
vector, the process is known as vector quantization (VQ), also known as block
quantization or pattern-matching quantization. A block diagram of a simple
vector quantizer is shown in Figure 3.9.
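The encoding step of such a quantizer amounts to a nearest-neighbour search over a codebook. A minimal sketch, assuming a hypothetical hand-picked 2-D codebook (in practice the codebook is trained; `vq_encode` is an illustrative helper, not from the text):

```python
import numpy as np

# Hypothetical 2-D codebook with four codevectors; a real codebook
# would be trained from data rather than chosen by hand.
codebook = np.array([[0.0, 0.0],
                     [1.0, 1.0],
                     [-1.0, 1.0],
                     [0.0, -1.0]])

def vq_encode(x, codebook):
    """Return the index of the codevector nearest to x
    in squared Euclidean distance."""
    distances = np.sum((codebook - x) ** 2, axis=1)
    return int(np.argmin(distances))

# Quantize one input vector: the encoder transmits only the index,
# and the decoder looks the codevector up in the same codebook.
x = np.array([0.9, 1.2])
idx = vq_encode(x, codebook)
print(idx, codebook[idx])  # codevector [1.0, 1.0] is nearest
```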
If we assume $\mathbf{x} = [x_1, x_2, \ldots, x_N]^{T}$ is an $N$-dimensional vector with real-valued, continuous-amplitude (a short or float representation is assumed to be continuous amplitude) randomly varying components $x_k$, $1 \le k \le N$ (the