as the data is transmitted. In this case, it is referred to as streaming video. If the decompression is not fast
enough to support streaming video, the compressed animation is transmitted in its entirety to the compute
box, decompressed into the complete animation, and then played back. In either case, the compression not
only saves space on the storage device but also allows animation to be transferred to the computer much
faster. Several of the codecs are proprietary and are used in workstation-based video products. The
different schemes have various strengths and weaknesses and thus involve trade-offs, the most important
of which are compression level, quality of video, and compression/decompression speed [21].
The amount of compression is usually traded off for image quality. With some codecs, the amount
of compression can be set by a user-supplied parameter so that, with a trial-and-error process, the best
compression level for the particular images at hand can be selected. Codecs with greater compression
levels usually incorporate interframe compression as well as intraframe compression. Intraframe compression means that each frame is compressed individually. Interframe compression refers to the temporal compression possible when one processes a series of similar still images using techniques such as
image differencing. However, when one edits a sequence, interframe compression means that more
frames must be decompressed and recompressed to perform the edits.
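
As a rough illustration of the difference, the following Python sketch (NumPy is assumed to be available; the function names are invented for this example and do not come from any particular codec) stores the first frame intact and only the pixel differences for the frames that follow, which is the simplest form of interframe compression by image differencing.

    import numpy as np

    def interframe_encode(frames):
        # Intraframe part: the first (key) frame is kept whole and can be
        # compressed on its own. Interframe part: every later frame is stored
        # only as its pixel-by-pixel difference from the previous frame.
        key = frames[0].astype(np.int16)
        deltas = [b.astype(np.int16) - a.astype(np.int16)
                  for a, b in zip(frames, frames[1:])]
        return key, deltas

    def interframe_decode(key, deltas):
        # Rebuild every frame by accumulating the stored differences.
        frames = [key.copy()]
        for d in deltas:
            frames.append(frames[-1] + d)
        return [f.astype(np.uint8) for f in frames]

Because the difference frames of similar images are mostly zeros, a subsequent entropy coder compresses them far better than it could compress each frame on its own; the price, as noted above, is that editing a frame in the middle of a sequence forces neighboring frames to be decompressed and recompressed.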
The quality of the video after decompression is, of course, a big concern. The most fundamental
feature of a compression scheme is whether it is lossless or lossy. With lossless compression, in which the final image is identical to the original, only nominal amounts of compression can be realized, usually in the range of 2:1. The codecs commonly used for video are lossy in order to attain the 500:1
compression levels necessary to realize the transmission speeds for pumping animations over the
Web or from a CD-ROM. To get these levels of compression, the quality of the images must be compromised. Various compression schemes might do better than others with such image features as edges,
large monochrome areas, or complex outdoor scenes. The amount of color resolution supported by the
compression is also a concern. Some, such as the animated GIF format, support only 8-bit color.
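
A small back-of-the-envelope calculation shows why ratios of this magnitude are needed; the frame size, color depth, and frame rate below are assumptions chosen only for the illustration.

    # Assumed example parameters: 640 x 480 pixels, 24-bit color, 30 frames per second.
    width, height, bytes_per_pixel, fps = 640, 480, 3, 30

    raw_rate = width * height * bytes_per_pixel * fps   # ~27.6 million bytes/s, uncompressed
    for ratio in (2, 500):                               # roughly lossless vs. aggressively lossy
        print(f"{ratio}:1 compression -> {raw_rate / ratio / 1000:.0f} KB/s")

At 2:1 the stream still needs roughly 14,000 KB per second, while 500:1 brings it down to about 55 KB per second.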
The compression and decompression speed is a concern for obvious reasons, but decompression
speed is especially important in some applications, for example, streaming video. On the other hand,
real-time compression is useful for applications that store compressed images as they are captured. If
compression and decompression take the same amount of time, the codec is said to be symmetric. In some applications such as streaming video, it is acceptable to take a relatively long time for compression as long as the resulting decompression is very fast. In the case of unequal times for compression and decompression, the codec is said to be asymmetric. To attain acceptable decompression speeds on
typical compute boxes, some codecs require hardware support.
A variety of compression techniques form the basis of the codec products. Run-length encoding is one
of the oldest and most primitive schemes that have been applied to computer-generated images. Whenever
a value repeats itself in the input stream, the string of repeating values is replaced by a single occurrence of
the value along with a count of how many times it occurred. This was sufficient for early graphic images,
which were simple and contained large areas of uniform color. This technique does not perform well with today's complex imagery.
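
A minimal run-length encoder, written here in Python purely as an illustration of the idea (not the storage format of any particular codec), might look like this:

    def rle_encode(values):
        # Collapse each run of repeated values into a [count, value] pair.
        runs = []
        for v in values:
            if runs and runs[-1][1] == v:
                runs[-1][0] += 1
            else:
                runs.append([1, v])
        return runs

    def rle_decode(runs):
        # Expand each [count, value] pair back into the original run.
        return [v for count, v in runs for _ in range(count)]

    print(rle_encode([7, 7, 7, 7, 7, 0, 0, 3]))   # [[5, 7], [2, 0], [1, 3]]

A scanline of uniform color collapses dramatically, but a noisy scanline can actually grow, since every non-repeating value still carries a count; this is exactly why the technique struggles with complex imagery.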
The Lempel-Ziv-Welch (LZW) technique was developed for compressing text.
As the input is read, a dictionary of strings and their associated codes is built and then used to encode the rest of the input.
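
The dictionary-building step can be sketched as follows; this shows only the encoding side, and real LZW implementations add details (limited code widths, dictionary resets) that are omitted here.

    def lzw_encode(data: bytes):
        # Start the dictionary with every single byte, then grow it with each
        # new string seen, emitting one code per longest dictionary match.
        codes = {bytes([b]): b for b in range(256)}
        out, current = [], b""
        for b in data:
            candidate = current + bytes([b])
            if candidate in codes:
                current = candidate            # keep extending the match
            else:
                out.append(codes[current])     # emit the longest match found
                codes[candidate] = len(codes)  # remember the new string
                current = bytes([b])
        if current:
            out.append(codes[current])
        return out

    print(lzw_encode(b"ABABABAB"))   # [65, 66, 256, 258, 66] -- five codes for eight bytes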
Vector quantization simply refers to any scheme that uses a sample value to approximate a range of values. YUV-9 is a technique in which the color is undersampled so that, for example, a single color value is recorded for each 4 × 4 block of pixels.
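
A sketch of that kind of undersampling, assuming NumPy and plane dimensions that are multiples of four, is shown below; in a YUV-9-style scheme only the two chrominance planes would be reduced this way, while luminance is kept at full resolution.

    import numpy as np

    def subsample(plane, block=4):
        # Keep a single (averaged) value for each block x block tile of the plane.
        h, w = plane.shape
        tiles = plane.reshape(h // block, block, w // block, block)
        return tiles.mean(axis=(1, 3))

    def upsample(small, block=4):
        # Reconstruct a full-size plane by replicating each stored value.
        return np.repeat(np.repeat(small, block, axis=0), block, axis=1)

Each chrominance plane shrinks to 1/16 of its original size, and because the eye is far less sensitive to color detail than to brightness detail, the loss is usually hard to notice.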
Discrete cosine transform (DCT) is a very
popular technique that breaks a signal into a sum of cosine functions of various frequencies at specific
amplitudes. The signal can be compressed by throwing away low-amplitude and/or high-frequency components.
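
The following one-dimensional sketch writes out the DCT-II basis directly with NumPy (a real codec typically applies a fast two-dimensional transform to 8 × 8 pixel blocks); it transforms a short signal, zeroes the low-amplitude coefficients, and inverts the transform to get an approximation of the original.

    import numpy as np

    def dct(signal):
        # DCT-II: express the signal as a weighted sum of cosines of increasing frequency.
        n = len(signal)
        k = np.arange(n).reshape(-1, 1)
        basis = np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
        return basis @ signal

    def idct(coeffs):
        # Inverse transform (DCT-III with the usual 1/N and 2/N scaling).
        n = len(coeffs)
        basis = np.cos(np.pi * (2 * np.arange(n).reshape(-1, 1) + 1) * np.arange(n) / (2 * n))
        weights = np.full(n, 2.0 / n)
        weights[0] = 1.0 / n
        return basis @ (weights * coeffs)

    signal = np.array([8.0, 8.0, 9.0, 9.0, 10.0, 10.0, 9.0, 9.0])
    coeffs = dct(signal)
    coeffs[np.abs(coeffs) < 1.0] = 0.0     # throw away the low-amplitude components
    print(np.round(idct(coeffs), 2))       # close to, but not exactly, the original signal

Which coefficients are discarded, and how aggressively, is precisely the quality-versus-compression trade-off discussed earlier.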
DCT is an example of the more general wavelet compression in which the form of the base