Digital Signal Processing Reference
M (i.e. 8-16). The complexity of this search is given by:

$$ C = N\left(2^{B_1} + M \sum_{k=2}^{K} 2^{B_k}\right) \tag{5.73} $$

Obviously, for M = 1 this equates to the complexity of the sequential search.
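As a quick numerical check, the complexity count of Eq. (5.73) can be sketched in Python. The function names below are illustrative, not from the text:

```python
# Sketch: search complexity of an M-best MSVQ search, Eq. (5.73).
# N is the vector dimension, bits[k] the number of bits of stage k+1.

def msvq_mbest_complexity(N, bits, M):
    """C = N * (2**B1 + M * sum of 2**Bk over stages k >= 2)."""
    return N * (2 ** bits[0] + M * sum(2 ** b for b in bits[1:]))

def sequential_complexity(N, bits):
    """The sequential search visits every entry of every stage once."""
    return N * sum(2 ** b for b in bits)

# For M = 1 the M-best search reduces to the sequential search:
assert msvq_mbest_complexity(10, [9, 8, 8], 1) == sequential_complexity(10, [9, 8, 8])
```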
It can be seen that the M factor does not apply to the complexity of the
first codebook search. This can be exploited in designing the structure of
the codebook. For example, if we have three stages for a total of 25 bits, it is
significantly less complex to have a {9, 8, 8} structure than an {8, 9, 8} structure,
whereas storage is the same and performance is expected to be similar. One
interesting improvement to the M-best search strategy is to use a complex
perceptual measure in the final stage only, to select which of the M final paths
are the best. Since this computation only needs to be performed M times, it
is possible to use much more complex distortion measures than the WMSE
normally used. It is also possible to compute this measure on only a subset
of the M best final paths, i.e. the ones which give the lowest WMSE. This
procedure significantly enhances the performance of the quantizer, partly
solving the problem that the WMSE is not as good a distortion measure as,
for example, the SD.
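The M-best search with an optional refined final-stage measure can be sketched as follows, assuming NumPy and illustrative names (`mbest_msvq_search`, `refined_measure` are not from the text):

```python
import numpy as np

def mbest_msvq_search(x, codebooks, M, w, refined_measure=None):
    """M-best (tree) search through an MSVQ.

    x: (N,) target vector; codebooks: list of (2**Bk, N) arrays;
    w: (N,) perceptual weights for the WMSE; refined_measure: optional,
    more complex distortion applied to the M final reconstructions only.
    """
    def wmse(recs):
        return np.sum(w * (recs - x) ** 2, axis=-1)

    # Stage 1: keep the M best codewords (the M factor does not apply here).
    keep = np.argsort(wmse(codebooks[0]))[:M]
    paths = [(int(i),) for i in keep]
    recs = codebooks[0][keep]

    # Later stages: expand every surviving path by every codeword, keep M best.
    for cb in codebooks[1:]:
        cand = (recs[:, None, :] + cb[None, :, :]).reshape(-1, x.size)
        order = np.argsort(wmse(cand))[:M]
        paths = [paths[i // len(cb)] + (int(i % len(cb)),) for i in order]
        recs = cand[order]

    # Final selection: the complex measure is computed only M times.
    scores = ([refined_measure(r) for r in recs] if refined_measure
              else wmse(recs))
    best = int(np.argmin(scores))
    return paths[best], recs[best]
```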
5.7.4 MSVQ Codebook Training
The basic codebook training algorithms usually cater for single-stage
codebooks. It is however possible to adapt the algorithm for MSVQ codebook
training. The most basic technique is called sequential optimization. In this
method, the codebook for stage 1 of the MSVQ is first designed. The
quantization errors for the training database are then computed and the codebook
for stage 2 is trained over the error vectors. This is then repeated for each
stage, until reaching the final codebook.
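Sequential optimization can be sketched as below, with a plain k-means (generalized Lloyd) routine standing in for the single-stage trainer; all names here are illustrative:

```python
import numpy as np

def train_codebook(vectors, size, iters=20, seed=0):
    """Single-stage codebook design via plain k-means."""
    rng = np.random.default_rng(seed)
    cb = vectors[rng.choice(len(vectors), size, replace=False)]
    for _ in range(iters):
        d = ((vectors[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        nearest = d.argmin(1)
        for j in range(size):
            if np.any(nearest == j):  # keep empty cells unchanged
                cb[j] = vectors[nearest == j].mean(0)
    return cb

def sequential_msvq_training(vectors, sizes):
    """Sequential optimization: stage k is trained on the quantization
    errors left over by stages 1..k-1."""
    codebooks, residual = [], vectors.astype(float)
    for size in sizes:
        cb = train_codebook(residual, size)
        codebooks.append(cb)
        # Quantize the training set with this stage and pass on the errors.
        d = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        residual = residual - cb[d.argmin(1)]
    return codebooks
```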
However, sequential optimization does not provide the best performance, as
each codebook is optimized as if it were the last stage of the MSVQ quantizer. A
better alternative is iterative sequential optimization, where an initial codebook
is chosen for each stage. Each codebook is then optimized by assuming all the
other stages to be fixed and known, i.e. the quantization error is computed
using all the other stages except the current one, and training is used to
obtain an updated version of the current codebook. This process can then be
repeated until all of the codebooks have converged.
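A minimal sketch of this refinement loop, assuming a simple greedy sequential search for quantization (the function name and fixed round count are illustrative):

```python
import numpy as np

def iterative_sequential_update(vectors, codebooks, rounds=5):
    """Iterative sequential optimization: refine each stage codebook while
    all other stages are held fixed, repeating for several rounds."""
    for _ in range(rounds):
        for s in range(len(codebooks)):
            # Quantize every training vector with the current codebooks,
            # recording stage s's indices and the other stages' contribution.
            contrib = np.zeros_like(vectors, dtype=float)
            idx_s = np.zeros(len(vectors), dtype=int)
            residual = vectors.astype(float)
            for k, cb in enumerate(codebooks):
                d = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
                idx = d.argmin(1)
                if k == s:
                    idx_s = idx
                else:
                    contrib += cb[idx]
                residual = residual - cb[idx]
            # Stage s should quantize the error of all other stages combined.
            targets = vectors - contrib
            for j in range(len(codebooks[s])):
                if np.any(idx_s == j):  # centroid (Lloyd) update
                    codebooks[s][j] = targets[idx_s == j].mean(0)
    return codebooks
```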
It is also possible to jointly optimize all codebooks using simultaneous
joint codebook design. This method gives slightly better results than the
previous methods but has a high computational cost, which is described
in [13].