Digital Signal Processing Reference
quantizer, and the resulting error signal is then used as the input to a $B_2$-bit, $L_2$-level second vector quantizer. The sum of the two quantized vectors gives the quantized value of the input vector $x$.
The computation and storage costs for a $k$-stage cascaded vector quantization are, respectively,

$$Com_{cc} = N(L_1 + L_2 + \cdots + L_k) \ \text{multiply-adds per input vector} \quad (3.57)$$

$$M_{cc} = N(L_1 + L_2 + \cdots + L_k) \ \text{locations} \quad (3.58)$$
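The cascaded structure described above can be sketched in a few lines. This is a minimal two-stage example with tiny hypothetical codebooks; the names `nearest` and `cascaded_quantize` are illustrative, not from the text:

```python
import numpy as np

def nearest(codebook, x):
    # Full search: N multiply-adds per entry, over all L codebook entries.
    dists = np.sum((codebook - x) ** 2, axis=1)
    return codebook[np.argmin(dists)]

def cascaded_quantize(x, cb1, cb2):
    # Stage 1 quantizes x; stage 2 quantizes the resulting error signal;
    # the sum of the two quantized vectors is the quantized value of x.
    y1 = nearest(cb1, x)
    y2 = nearest(cb2, x - y1)
    return y1 + y2

rng = np.random.default_rng(0)
cb1 = rng.normal(size=(4, 2))                      # L1 = 4 first-stage entries, N = 2
cb2 = np.vstack([np.zeros(2),
                 0.1 * rng.normal(size=(3, 2))])   # L2 = 4 error-stage entries
x = rng.normal(size=2)
xq = cascaded_quantize(x, cb1, cb2)
```

Because the second-stage codebook here contains the zero vector, the two-stage result is never worse than the first stage alone; in general the error stage is trained on the first stage's residuals.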
Assuming $L_1 = 2^{B_1}$, $L_2 = 2^{B_2}$, ..., $L_k = 2^{B_k}$, and the total number of bits per input vector $B = B_1 + B_2 + \cdots + B_k$, we can see that the number of candidate vectors searched in a cascaded codebook for each input vector is less than in a full search codebook,

$$\sum_{n=1}^{k} 2^{B_n} < 2^{B} \quad \text{if } B = \sum_{n=1}^{k} B_n \text{ and } k > 1 \quad (3.59)$$
We can also see that the storage of a cascaded codebook is less than that required by a binary codebook,

$$N \sum_{n=1}^{k} 2^{B_n} < N \sum_{i=1}^{B} 2^{i} \quad \text{for } k > 1 \quad (3.60)$$
Given that the total number of bits used across the stages of a cascaded codebook is fixed at $B$, both the computation and storage requirements decrease as the number of stages increases.
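As a concrete check of this trade-off, the counts in (3.57)–(3.60) can be evaluated for a hypothetical 12-bit quantizer; the helper name below is illustrative:

```python
def cascaded_counts(bits_per_stage, N):
    """Candidate vectors searched, and storage locations, for a cascaded
    codebook per (3.57)/(3.58), with L_n = 2**B_n entries at stage n."""
    total_entries = sum(2 ** b for b in bits_per_stage)
    return total_entries, N * total_entries

N, B = 10, 12
full_search = 2 ** B                            # single full-search codebook: 4096 candidates
one_stage, _ = cascaded_counts([12], N)         # k = 1: identical to full search
two_stage, _ = cascaded_counts([6, 6], N)       # 2^6 + 2^6 = 128 candidates
three_stage, _ = cascaded_counts([4, 4, 4], N)  # 3 * 2^4 = 48 candidates
```

Splitting the same 12 bits over more stages shrinks both the search and (by the same factor, via the $N$ multiplier) the storage, as eq. (3.59) predicts.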
Split Codebooks
In all of the above codebook types, an $N$-dimensional input vector is directly matched with $N$-dimensional codebook entries. In a split vector quantization scheme, an $N$-dimensional input vector is first split into $P$ parts, where $P > 1$. For each part of the split vector a separate codebook is used, and each part may be vector quantized independently of the other parts using $B_p$ bits. Assuming a vector is split into $P$ equal parts and vector quantized using $B_p$ bits for each part, the computation and storage requirements can be calculated as follows:
$$Com_{ss} = \frac{N}{P}(L_1 + L_2 + \cdots + L_P) \ \text{multiply-adds per input vector} \quad (3.61)$$

where $L_p = 2^{B_p}$ for $p = 1, 2, \ldots, P$. Similarly, the storage is given by:
$$M_{ss} = \frac{N}{P}(L_1 + L_2 + \cdots + L_P) \ \text{locations} \quad (3.62)$$
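The savings from splitting can be computed directly from (3.61)/(3.62); the function name below is illustrative, and the worked numbers assume a hypothetical 10-dimensional vector quantized with 14 bits in total:

```python
def split_vq_cost(N, bits_per_part):
    """Multiply-adds per input vector, eq. (3.61), which here equals the
    storage in locations, eq. (3.62): (N/P)(L_1 + ... + L_P), L_p = 2**B_p."""
    P = len(bits_per_part)
    assert N % P == 0, "vector assumed split into P equal parts"
    return (N // P) * sum(2 ** b for b in bits_per_part)

full = 10 * 2 ** 14                  # unsplit full search: 10 * 16384 = 163840
split2 = split_vq_cost(10, [7, 7])   # P = 2: 5 * (128 + 128) = 1280
```

As with the cascaded case, spending the same total bit budget over independent sub-vectors reduces both search effort and storage, at the cost of ignoring correlation between the parts.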