Digital Signal Processing Reference
In-Depth Information
2.6.1. Nearest neighbor rule
Two codebooks are used, as shown in Figure 2.6. The first codebook $x_1, \dots, x_{L_1}$ comprises $L_1$ vectors of dimension $N$, possibly normed, and the second codebook $g_1, \dots, g_{L_2}$ comprises $L_2$ scalars. Minimizing the squared error with respect to the two indices $i$ and $j$ can be done by exhaustive search. However, the preferred method, at the expense of an approximation, performs the minimization in two steps. First, the vector in the first codebook that most closely represents $x$ is found. Next, the gain is quantized using a scalar method.
[Figure: block diagram. A nearest neighbor rule (NNR) maps the input $x(m)$ to an index $i(m)$ over the shape codebook $x_1, \dots, x_L$, and a table lookup (LT) produces the gain index $j(m)$ over the gain codebook $g_1, \dots, g_P$.]
Figure 2.6. Gain-shape vector quantizer
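The two-step search described above can be sketched in a few lines. This is a minimal illustration, not the book's implementation: the names `encode`, `shape_codebook`, and `gain_codebook` are hypothetical, and the shape selection uses the projection criterion derived later in this section.

```python
def dot(a, b):
    """Scalar product of two vectors given as lists of floats."""
    return sum(u * v for u, v in zip(a, b))

def encode(x, shape_codebook, gain_codebook):
    """Two-step gain-shape quantization (illustrative sketch):
    1. pick the shape vector x_i maximizing <x, x_i>^2 / ||x_i||^2
       (the nearest neighbor rule for the optimal-gain criterion);
    2. quantize the resulting projection gain with the scalar codebook."""
    # Step 1: nearest neighbor rule over the (possibly normed) shape codebook.
    i = max(range(len(shape_codebook)),
            key=lambda k: dot(x, shape_codebook[k]) ** 2
                          / dot(shape_codebook[k], shape_codebook[k]))
    xi = shape_codebook[i]
    # Orthogonal-projection gain of x onto the selected shape vector.
    g_opt = dot(x, xi) / dot(xi, xi)
    # Step 2: scalar quantization of the gain.
    j = min(range(len(gain_codebook)),
            key=lambda k: (gain_codebook[k] - g_opt) ** 2)
    return i, j

# Example: x = [1, 2]; the shape [1, 1] scores 9/2 = 4.5, beating
# [1, 0] (score 1) and [0, 1] (score 4); the projection gain is 1.5.
print(encode([1.0, 2.0],
             [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
             [0.5, 1.6, 3.0]))  # → (2, 1)
```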
Let us consider the system depicted in Figure 2.7.
[Figure: the vector $x$ and its orthogonal projection $g_1 x_1$ onto the codebook vector $x_1$; a second codebook vector $x_2$ is also shown.]
Figure 2.7. Gain-shape vector quantization
The gain $g(j)$ is defined so that the vector $g(j)\,x_j$ is the orthogonal projection of $x$ on $x_j$, i.e. $\langle x - g(j)\,x_j,\; x_j \rangle = 0$. This gain is given by:
$$g(j) = \frac{\langle x, x_j \rangle}{\|x_j\|^2}$$
where $\langle x, y \rangle$ is the scalar product of the two vectors $x$ and $y$. Since:
$$\|x - g(j)\,x_j\|^2 = \|x\|^2 - 2\,g(j)\,\langle x, x_j \rangle + g^2(j)\,\|x_j\|^2$$
substituting the optimal gain gives:
$$\min_j \|x - g(j)\,x_j\|^2 = \min_j \left( \|x\|^2 - \frac{\langle x, x_j \rangle^2}{\|x_j\|^2} \right)$$
Since $\|x\|^2$ does not depend on $j$, the nearest neighbor rule reduces to selecting the index $j$ that maximizes $\langle x, x_j \rangle^2 / \|x_j\|^2$.
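The identity behind this simplification can be checked numerically. The vectors below are arbitrary examples chosen for the check, not values from the text: at the projection gain, the expanded error equals $\|x\|^2 - \langle x, x_j \rangle^2 / \|x_j\|^2$.

```python
def dot(a, b):
    """Scalar product of two vectors given as lists of floats."""
    return sum(u * v for u, v in zip(a, b))

# Arbitrary example vectors (not from the text).
x = [3.0, 1.0, 2.0]
xj = [1.0, 2.0, 2.0]

g = dot(x, xj) / dot(xj, xj)                      # projection gain g(j)
residual = [u - g * v for u, v in zip(x, xj)]
lhs = dot(residual, residual)                     # ||x - g(j) x_j||^2
rhs = dot(x, x) - dot(x, xj) ** 2 / dot(xj, xj)   # ||x||^2 - <x,x_j>^2/||x_j||^2
assert abs(lhs - rhs) < 1e-12                     # both sides agree
```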