Figure 5.73: (a) A simple wavelet encoder. (b) The corresponding decoder.
MPEG-4 supports scalable wavelet coding in multi-quant and bi-level quant modes. Multi-quant mode allows
resolution and noise scalability. In the first layer, a low-resolution and/or noisy texture is encoded. The decoder
buffers this texture and may optionally display it. Higher layers may add resolution by delivering higher sub-band
coefficients, or reduce noise by delivering the low-order bits removed by the quantizing of the first layer. These
refinements are added to the contents of the buffer.
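The layered refinement can be sketched in a few lines. This is a minimal illustration of the principle, not the MPEG-4 bitstream syntax: the shift amount, function names and the assumption of non-negative coefficients are all invented for the example.

```python
# Sketch of noise scalability: the base layer carries coarsely quantized
# coefficients; the enhancement layer carries the low-order bits removed
# by the base-layer quantizer. Non-negative values assumed for brevity.

BASE_SHIFT = 3  # base layer discards the 3 low-order bits (illustrative)

def encode_layers(coeffs):
    base = [c >> BASE_SHIFT for c in coeffs]                     # coarse layer
    refinement = [c & ((1 << BASE_SHIFT) - 1) for c in coeffs]   # dropped bits
    return base, refinement

def decode(base, refinement=None):
    if refinement is None:
        # Base-only decode: reconstruct to the coarse precision.
        return [b << BASE_SHIFT for b in base]
    # The enhancement layer restores the removed low-order bits.
    return [(b << BASE_SHIFT) | r for b, r in zip(base, refinement)]

coeffs = [37, 200, 5, 91]
base, ref = encode_layers(coeffs)
print(decode(base))        # coarse reconstruction: [32, 200, 0, 88]
print(decode(base, ref))   # exact reconstruction:  [37, 200, 5, 91]
```

The decoder adds the refinement to its buffered base-layer texture exactly as the text describes: the buffer contents are completed bit by bit rather than replaced.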
In bi-level quant mode the coefficients are not sent as complete binary numbers, but bitplane-by-bitplane. In other
words, the MSBs of all coefficients are sent first, followed by all the second bits, and so on. The decoder would
initially reveal a very noisy picture, in which the noise floor falls as each enhancement layer arrives. With enough
enhancement layers the compression can be lossless.
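Bitplane delivery can be sketched as follows. The word length and coefficient values are invented for illustration, and coefficient signs are ignored; a real codec would also entropy-code each plane.

```python
# Sketch of bitplane-by-bitplane delivery: the MSB plane of every
# coefficient is sent first, then the next plane, and so on. Each extra
# plane halves the quantizing noise; with all planes sent, the
# reconstruction is lossless.

NBITS = 8  # illustrative word length

def to_bitplanes(coeffs):
    """Split coefficients into NBITS planes, most significant first."""
    return [[(c >> b) & 1 for c in coeffs] for b in range(NBITS - 1, -1, -1)]

def from_bitplanes(planes):
    """Reconstruct from however many planes have arrived so far."""
    coeffs = [0] * len(planes[0])
    for i, plane in enumerate(planes):
        shift = NBITS - 1 - i
        coeffs = [c | (bit << shift) for c, bit in zip(coeffs, plane)]
    return coeffs

coeffs = [200, 37, 91, 5]
planes = to_bitplanes(coeffs)
print(from_bitplanes(planes[:2]))  # after 2 planes (noisy): [192, 0, 64, 0]
print(from_bitplanes(planes))      # after all 8 planes: lossless
```

Truncating the plane list at any point yields a valid, coarser picture, which is what makes the scheme naturally scalable.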
5.28 Three-dimensional mesh coding
In computer-generated images used, for example, in simulators and virtual reality, the goal is to synthesize the
image which would have been seen by a camera or a single eye at a given location with respect to the virtual
objects. Figure 5.74 shows that this uses a process known as ray tracing. From a fixed point in the virtual camera,
a ray is projected outwards through every pixel in the image to be created. These rays will strike either an object or
the background. Rays which strike an object must result in pixels which represent that object. Objects which are
three-dimensional will reflect the ray according to their geometry and reflectivity, and the reflected ray has to be
followed in case it falls on a source of light which must appear as a reflection in the surface of the object. The
reflection may be sharp or diffuse according to the surface texture of the object. The colour of the reflection will be
a function of the spectral reflectivity of the object and the spectrum of the incident light.
Figure 5.74: In ray tracing, imaginary rays are projected from the viewer's eye through each screen pixel in turn to
points on the scene to be rendered. The characteristics of the point in the scene are transferred to the pixel
concerned.
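The per-pixel projection in Figure 5.74 can be reduced to a minimal ray-casting sketch. The scene (a single sphere), the resolution and all numeric values here are invented for illustration; reflection, texture and lighting are omitted.

```python
# Minimal ray casting: a ray is projected from the eye (at the origin)
# through each pixel of the image plane; pixels whose rays strike the
# hypothetical sphere represent that object, the rest the background.
import math

WIDTH, HEIGHT = 8, 8
SPHERE_C = (0.0, 0.0, 5.0)   # sphere centre, in front of the eye
SPHERE_R = 1.5

def hit_sphere(direction):
    """Does a ray from the origin along `direction` strike the sphere?"""
    # Solve |t*d - c|^2 = r^2 for t; a hit needs a real, positive root.
    dx, dy, dz = direction
    cx, cy, cz = SPHERE_C
    a = dx * dx + dy * dy + dz * dz
    b = -2.0 * (dx * cx + dy * cy + dz * cz)
    c = cx * cx + cy * cy + cz * cz - SPHERE_R ** 2
    disc = b * b - 4 * a * c
    return disc >= 0 and (-b + math.sqrt(disc)) > 0

def render():
    rows = []
    for j in range(HEIGHT):
        row = ""
        for i in range(WIDTH):
            # Map pixel (i, j) to a ray direction through the image
            # plane at z = 1, then test it against the scene.
            x = (i + 0.5) / WIDTH * 2 - 1
            y = (j + 0.5) / HEIGHT * 2 - 1
            row += "#" if hit_sphere((x, y, 1.0)) else "."
        rows.append(row)
    return rows

print("\n".join(render()))  # prints a small disc of '#' on '.'
```

A full ray tracer would follow the reflected ray from each hit point, as the text goes on to describe, rather than stopping at the first intersection.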
If the geometry, surface texture and colour of all objects and light sources are known, the image can be computed
in a process called rendering. If any of the objects are moving, or if the viewpoint of the virtual camera changes,
then a new image will need to be rendered for each video frame. If such synthetic video were to be compressed by,