the pixel. When an object is rendered, a pixel is only drawn if the depth value
is less than the existing pixel depth. The objects can therefore be rendered in
any order. This approach, also called a depth buffer, has proven very effective
and is now a standard part of graphics hardware. However, it is not without its
drawbacks. The extra depth channel does require extra memory, and it must have
enough precision to adequately distinguish the depths of scene objects. When
points on different objects project to the same pixel at nearly the same depth, the
results can be unpredictable. This phenomenon, known as "Z-fighting," can cause
unwanted visible artifacts in the image as incorrect pixels are drawn.
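The depth-buffer test described above can be sketched in a few lines. This is a minimal illustration, not a hardware implementation: the framebuffer size, the fragment representation, and the function names are assumptions for the example.

```python
# Minimal sketch of a depth (Z) buffer. Each pixel stores the depth of the
# nearest fragment drawn so far; a new fragment is kept only if it is closer.
import math

WIDTH, HEIGHT = 4, 4

# Initialize every pixel to "infinitely far" with a background color.
depth = [[math.inf] * WIDTH for _ in range(HEIGHT)]
color = [["background"] * WIDTH for _ in range(HEIGHT)]

def draw_fragment(x, y, z, c):
    """Draw the fragment only if it is nearer than the stored depth."""
    if z < depth[y][x]:
        depth[y][x] = z
        color[y][x] = c

# Fragments may arrive in any order; the nearest one wins regardless.
draw_fragment(1, 1, 5.0, "far object")
draw_fragment(1, 1, 2.0, "near object")
draw_fragment(1, 1, 9.0, "even farther object")

print(color[1][1])  # near object
```

Because each fragment is tested independently against the stored depth, the scene objects need no sorting before rendering, which is why the method maps so well to hardware.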
1.3.3 Ray Tracing
As described above, ray tracing works by following the ray from the viewpoint
through each pixel and coloring the pixel according to the shading of the ob-
ject. Ray tracing naturally performs hidden surface elimination, because a pixel
is shaded according to the first object the ray hits. Furthermore, reflected and
transmitted light can be captured by tracing secondary reflected and refracted rays
when a ray hits a reflective or refractive surface. Multiple rays can be fired through
each pixel and the results averaged to produce better-looking images, a technique
known as antialiasing. However, ray tracing is computationally expensive, pri-
marily because of the cost of computing the intersection of rays with the scene
objects. Many of the rendering methods discussed in this book use some form of
ray tracing, so the basic method is briefly described here.
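The dominant cost mentioned above, intersecting rays with scene objects, can be illustrated with the simplest case: a ray against a single sphere. This is a sketch under assumed conventions (ray given as origin and direction, sphere as center and radius); the function name is illustrative.

```python
# Sketch of a ray-sphere intersection test, the basic primitive of ray
# casting. Substituting the ray o + t*d into the sphere equation gives a
# quadratic in t; the smallest positive root is the visible hit.
import math

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive t with origin + t*direction on the
    sphere, or None if the ray misses."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # no real roots: the ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # nearer of the two roots
    return t if t > 0 else None

# A ray from the origin straight down the z-axis toward a unit sphere at z = 5:
t = ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
print(t)  # 4.0 -- the ray hits the front of the sphere
```

In a full ray caster this test is repeated against every object (or against an acceleration structure), and the pixel is shaded from the closest hit, which is exactly how hidden surfaces are eliminated for free.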
Light emitted from a light source can be regarded as a collection of rays, each
carrying a radiance value. These light rays are reflected and scattered between
objects, and some of the light ends up reaching the eye of a human observer (or
the lens of a camera making a photographic record of the scene). Rendering by
tracing rays from the light source ends up being extremely wasteful, because the
probability that a tracked ray hits the eye or camera is small. Rendering methods
based on ray tracing therefore usually start from the viewpoint or camera and trace
rays in the reverse direction. This way, light paths are followed that are known to
hit the viewpoint. The approach is physically sensible because of the reversibility
property of light propagation: essentially the same physical laws apply if the
direction of light travel is reversed. The bidirectional requirement of a BRDF
assures this is true for surface reflection.
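Tracing rays in the reverse direction starts by generating, for each pixel, the ray from the viewpoint through that pixel. The sketch below assumes a simple pinhole camera with the eye at the origin and the image plane one unit away spanning [-1, 1]; these conventions and the function name are assumptions for illustration.

```python
# Sketch of generating primary ("eye") rays traced backward from the
# viewpoint through each pixel of the image.
import math

def primary_ray(px, py, width, height):
    """Unit direction from an eye at the origin through pixel (px, py)."""
    # Map the pixel center to [-1, 1] on an image plane at z = 1,
    # flipping y so that row 0 is the top of the image.
    x = 2.0 * (px + 0.5) / width - 1.0
    y = 1.0 - 2.0 * (py + 0.5) / height
    d = (x, y, 1.0)
    norm = math.sqrt(sum(c * c for c in d))
    return tuple(c / norm for c in d)

# The center pixel of a 3x3 image looks straight down the z-axis:
print(primary_ray(1, 1, 3, 3))  # (0.0, 0.0, 1.0)
```

Each such ray is then intersected with the scene, and the pixel receives the radiance carried back along the ray, as the next paragraph describes.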
In ray tracing, the value of a pixel is determined from the radiance carried by
the ray (in the reverse direction) from the viewpoint through the pixel. When the
ray hits an object, the radiance is the outgoing surface radiance reflected from the
light source according to the BRDF (Figure 1.9). Ray tracing that stops at the
first object is known as ray casting . General recursive ray tracing involves tracing