As indicated by the description of the PPM image format, pixel values in standard formats are often nonlinearly related to physical values; we'll encounter this again when we discuss gamma correction in Section 28.12.
17.5 Other Image Types
The rectangular array of values that represents an image can contain more than
just red, green, and blue values, as we've already seen with images that record
opacity (α) as well. What else can be stored in the pixels of an image? Almost
anything! A good example is depth . There are now cameras that can record a depth
image as well as a color image, where the depth value at each pixel represents the
distance from the camera to the item shown in the pixel. And during rendering,
we typically compute depth values in the course of determining other information
about a pixel (such as “What object is visible here?”), so we can get a depth image
at no additional cost.
With this additional information, we can consider compositing an actor into a
scene in which there are objects between the actor and the camera, and others that
are behind the actor. The compositing rule becomes “If the actor pixel is nearer
than the scene pixel, composite the actor pixel over the scene; if it's farther away,
composite the scene pixel over the actor pixel.” But how should we associate a depth value with the new pixel? It's clear that blending depths is not the correct
answer. Indeed, for a blended pixel, there's evidently no single correct answer;
blending of colors works properly because when we see light of multiple colors,
we perceive it as blended. But when we see multiple depths in an area, we don't
perceive the area as having a depth that's a blend of these depths. Probably the best solution is to say that when you composite two images that each have associated depths, the result does not have depths, although using the minimum of the two depths is also a fairly safe approach. An alternative is to say that if
you want to do multiple composites of depth images, you should do them all at
once so that the relative depths are all available during the composition process.
Duff [Duf85] addresses these and related questions.
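The depth-based compositing rule above can be sketched per pixel as follows. This is a minimal illustration, not code from the text: it assumes premultiplied-alpha RGBA tuples, and it adopts the "minimum of the two depths" convention that the text describes as a fairly safe (though not uniquely correct) choice for the result's depth.

```python
def over(fg, bg):
    """Porter-Duff 'over' for premultiplied (r, g, b, a) tuples."""
    fr, fg_, fb, fa = fg
    br, bg_, bb, ba = bg
    k = 1.0 - fa
    return (fr + k * br, fg_ + k * bg_, fb + k * bb, fa + k * ba)

def composite_with_depth(p1, d1, p2, d2):
    """Composite the nearer pixel over the farther one.

    The result's depth is taken as the minimum of the two input depths;
    as discussed above, there is no single correct answer for a blended
    pixel, so this is only one reasonable convention."""
    if d1 <= d2:
        color = over(p1, p2)   # p1 is nearer: p1 over p2
    else:
        color = over(p2, p1)   # p2 is nearer: p2 over p1
    return color, min(d1, d2)
```

For example, an opaque red pixel at depth 1 composited with an opaque green pixel at depth 5 yields red at depth 1, regardless of argument order.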
Depths are just one instance of the notion of adding new channels to an image.
Image maps are often used in web browsers as interface elements: An image
is displayed, and a user's click on some portion of the image invokes some particular action. For example, an international corporation might display a world
map; when you click on your country you are taken to a country-specific website.
In an image map, each pixel has not only RGB values but also an “action” value (typically a small integer). When you click on pixel (42, 17), the action value stored at that pixel is looked up in the image and used to dispatch the associated action.
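The lookup-and-dispatch step can be sketched as follows. The action codes, the dispatch table, and all names here are illustrative assumptions, not from the text; the point is only that the "image" of action values is indexed exactly like the color image.

```python
# Hypothetical dispatch table: small-integer action codes -> actions.
actions = {
    0: lambda: None,                        # background: no action
    1: lambda: "go to site for country 1",  # illustrative placeholder
    2: lambda: "go to site for country 2",
}

def on_click(action_map, x, y):
    """Look up the action code stored at pixel (x, y) and dispatch it.

    action_map is a row-major 2D array of small integers, one per pixel,
    stored alongside the RGB image."""
    code = action_map[y][x]
    return actions[code]()

# A tiny 2x2 action map for illustration:
action_map = [[0, 1],
              [2, 0]]
```

Clicking pixel (1, 0) finds code 1 and runs that entry of the table; clicking a background pixel (code 0) does nothing.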
Many surfaces created during rendering involve texture maps (see Chapter 20), where every point of the surface has not only x-, y-, and z-coordinates,
but also additional texture coordinates, often called u and v. We can make an image in which these u- and v-coordinates are also recorded for each pixel (with some special value for places in the image where there's no object, hence no uv-coordinates).
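A uv-image of this kind might be sketched as below. The sentinel value and all names are illustrative assumptions: any value outside the valid texture-coordinate range can mark "no object here."

```python
# Sentinel for pixels where no object is visible; valid uv-coordinates
# are assumed to lie in [0, 1], so (-1, -1) cannot be confused with them.
NO_OBJECT = (-1.0, -1.0)

width, height = 4, 3
uv_image = [[NO_OBJECT for _ in range(width)] for _ in range(height)]

# During rendering we would record, at each covered pixel, the texture
# coordinates of the visible surface point:
uv_image[1][2] = (0.25, 0.75)

def uv_at(img, x, y):
    """Return the (u, v) pair at pixel (x, y), or None if no object."""
    uv = img[y][x]
    return None if uv == NO_OBJECT else uv
```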
There are also images that contain, at each pixel, an object identifier telling
which object is visible at this pixel; such object IDs are often meaningful only
in the context of the program that creates the image, but we often (especially in