FIGURE 10.14 Original Sinan image.
Example 10.4.6:
Let us quantize the Sinan image shown in Figure 10.14 using a 16-dimensional quantizer. The input vectors are constructed using 4 × 4 blocks of pixels. The codebook was trained on the Sinan image.
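The construction of the input vectors can be sketched in a few lines of Python (a minimal illustration; the small test image below is a placeholder, not the Sinan data, and the image dimensions are assumed divisible by the block size):

```python
def blocks_to_vectors(image, n=4):
    """Read non-overlapping n x n blocks of a 2-D image in raster order
    and flatten each block into an (n*n)-dimensional training vector."""
    h, w = len(image), len(image[0])
    vectors = []
    for r in range(0, h, n):
        for c in range(0, w, n):
            vec = [image[r + i][c + j] for i in range(n) for j in range(n)]
            vectors.append(vec)
    return vectors

# Placeholder 8 x 8 "image": yields four 16-dimensional vectors.
img = [[(r * 8 + c) % 256 for c in range(8)] for r in range(8)]
vecs = blocks_to_vectors(img)
print(len(vecs), len(vecs[0]))  # 4 vectors, each of dimension 16
```

For a 256 × 256 image, this blocking produces 4096 training vectors of dimension 16.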
The results of the quantization using codebooks of size 16, 64, 256, and 1024 are shown in Figure 10.15. The rates and compression ratios are summarized in Table 10.7. To see how these quantities were calculated, recall that if we have K vectors in a codebook, we need ⌈log₂ K⌉ bits to inform the receiver which of the K vectors is the quantizer output. This quantity is listed in the second column of Table 10.7 for the different values of K. If the vectors are of dimension L, this means that we have used ⌈log₂ K⌉ bits to send the quantized value of L pixels. Therefore, the rate in bits per pixel is ⌈log₂ K⌉/L. (We have assumed that the codebook is available to both transmitter and receiver, and therefore we do not have to use any bits to transmit the codebook from the transmitter to the receiver.) This quantity is listed in the third column of Table 10.7. Finally, the compression ratio, given in the last column of Table 10.7, is the ratio of the number of bits per pixel in the original image to the number of bits per pixel in the compressed image. The Sinan image was digitized using 8 bits per pixel. Using this information and the rate after compression, we can obtain the compression ratios.
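The rate and compression-ratio calculation just described can be scripted directly (a minimal Python sketch; the vector dimension L = 16 and the 8 bits per pixel of the original come from the example):

```python
import math

ORIGINAL_BPP = 8    # the Sinan image was digitized at 8 bits per pixel
L = 16              # vector dimension: 4 x 4 blocks of pixels

def rate_and_ratio(K):
    """Rate (bits/pixel) and compression ratio for a codebook of size K."""
    index_bits = math.ceil(math.log2(K))  # bits to identify one of K vectors
    rate = index_bits / L                 # bits per pixel
    return rate, ORIGINAL_BPP / rate      # compression ratio

for K in (16, 64, 256, 1024):
    rate, ratio = rate_and_ratio(K)
    print(f"K = {K:4d}: rate = {rate:.3f} bits/pixel, "
          f"compression ratio = {ratio:.2f}:1")
```

For example, K = 256 requires 8 bits per index, giving a rate of 8/16 = 0.5 bits per pixel and a compression ratio of 8/0.5 = 16:1.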
Looking at the images, we see that reconstruction using a codebook of size 1024 is very
close to the original. At the other end, the image obtained using a codebook with 16 reconstruction vectors contains a lot of visible artifacts. The utility of each reconstruction depends
on the demands of the particular application.