The tremendous drawback of analytic coverage is that by computing a single
partial-coverage value per fragment, it loses the information about which part of
the pixel was covered. Thus, the coverage for two separate fragments within the
pixel cannot be accurately combined. This problem is compounded when occlu-
sion between fragments is considered, because that alters the net coverage mask of
each. Porter and Duff's seminal paper [PD84] on this topic enumerates the ways
that coverage can combine and explains the problem in depth (see Chapter 17). In
practice, their OVER operator is commonly employed to combine fragment col-
ors within a pixel under analytic antialiasing. In this case, there is a single depth
sample per pixel, a single shade, and continuous estimation of coverage. Let α
represent the partial coverage of a new fragment, s be its shading value, and d
be the shading value previously stored at the pixel. If the new fragment's depth
indicates that it is closer to the viewer than the fragment that previously
shaded the pixel, then the stored shade is overwritten by αs + (1 − α)d. This
result produces correct shading on average, provided that two conditions are
met. First, fragments with α < 1 must be rendered in farthest-to-nearest order
so that the shade at a pixel can be updated without knowledge of the fragments
that previously contributed to it.
Second, all of the fragments with nonunit coverage that contribute to a pixel must
have uncorrelated coverage areas. If this does not hold, then it may be the case,
for example, that some new fragment with α = 0.1 entirely occludes a previous
fragment with the same coverage, so the shade of the new one should overwrite
the contribution of the former one, not combine with it.
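
A minimal sketch of this update rule, assuming fragments arrive in
farthest-to-nearest order with uncorrelated coverage; the Color struct and the
over function are illustrative names, not taken from the text:

#include <cstdio>

struct Color { float r, g, b; };

// Porter-Duff OVER: a new fragment with shade s and partial coverage alpha
// replaces the stored shade d by alpha*s + (1 - alpha)*d.
Color over(Color s, float alpha, Color d) {
    return { alpha * s.r + (1.0f - alpha) * d.r,
             alpha * s.g + (1.0f - alpha) * d.g,
             alpha * s.b + (1.0f - alpha) * d.b };
}

int main() {
    Color pixel    = {0.0f, 0.0f, 0.0f};  // background shade already at the pixel
    Color farFrag  = {1.0f, 0.0f, 0.0f};  // farther fragment, submitted first
    Color nearFrag = {0.0f, 1.0f, 0.0f};  // nearer fragment, submitted second

    pixel = over(farFrag, 0.5f, pixel);   // 0.5*far + 0.5*background
    pixel = over(nearFrag, 0.5f, pixel);  // 0.5*near + 0.25*far + 0.25*background

    std::printf("%.2f %.2f %.2f\n", pixel.r, pixel.g, pixel.b);
    return 0;
}

Each call assumes that the fragment being composited covers a part of the pixel
independent of the parts covered by earlier fragments; when that assumption
fails, the weights above no longer reflect the true visible areas.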
In the case of 2.5D presentation graphics, it is easy to ensure the back-to-front
ordering. The uncorrelated property is hard to ensure. When it is violated, the
pixels at the edges between adjacent primitives in the same layer are miscolored.
This can also occur at edges between primitives in different layers, although the
effect is frequently less noticeable in that case.
Inline Exercise 36.7: Give an example, using specific coverage values and
geometry, of a case where the monochrome shades from two fragments com-
bine incorrectly at a pixel under analytic occlusion despite correct ordering.
36.9.2 Defocus (uv)
For a lens camera, there are many transport paths to each point on the image plane.
The last segment of each path is between a point on the aperture and a point on
the image plane. The “rays” between points on the image plane and points in the
scene are not simple geometric rays, since they refract at the lens. However, we
only need to model visibility between the aperture and the scene, since we know
that there are no occluders inside the camera body.
For a scene point P there is a pencil of rays that radiate toward the aperture.
For example, if the aperture is shaped like a disk, these rays lie within a cone. We
can apply the binary visibility function to the rays within the pencil.
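
A minimal sketch of that estimate, assuming a disk-shaped aperture sampled
uniformly; the names Point3, defocusCoverage, and the caller-supplied visible
predicate are illustrative, not taken from the text:

#include <cmath>
#include <functional>
#include <random>

struct Point3 { float x, y, z; };

// Estimate the fraction of the pencil of rays from scene point P that reaches
// a disk aperture of the given radius, centered at apertureCenter and assumed
// to lie in a plane of constant z. The caller supplies visible(P, A), the
// binary visibility test between P and an aperture point A.
float defocusCoverage(const Point3& P,
                      const Point3& apertureCenter,
                      float apertureRadius,
                      int numRays,
                      const std::function<bool(const Point3&, const Point3&)>& visible) {
    std::mt19937 rng(0);
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    int unoccluded = 0;
    for (int i = 0; i < numRays; ++i) {
        // Uniform sample on the disk aperture.
        float theta = 6.2831853f * u(rng);
        float rho   = apertureRadius * std::sqrt(u(rng));
        Point3 A = { apertureCenter.x + rho * std::cos(theta),
                     apertureCenter.y + rho * std::sin(theta),
                     apertureCenter.z };
        if (visible(P, A)) ++unoccluded;
    }
    return static_cast<float>(unoccluded) / static_cast<float>(numRays);
}

If no ray in the pencil is blocked, the estimate is 1, matching the
full-coverage case described next; blocking part of the pencil yields a
fractional value.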
If there are no occluding objects in the scene and the camera is focused on
that point, the lens refracts all of these rays to a single point on the image plane
(assuming no chromatic aberration; see Chapter 26). The point Q on the image
plane to which P projects in a corresponding pinhole camera thus receives full
coverage from the light transported along the original pencil of rays.
If the point is still in focus, but an occluder lies between the scene point and the