Occasionally we will have several incoming vectors at a point $P$, and we'll need to index them with names like $\mathbf{v}_1, \mathbf{v}_2, \ldots$ When we want to refer to a generic vector in this list, we'll use $\mathbf{v}_j$, avoiding the subscript $i$ to prevent confusion with the previous use. You will have to infer, from context, that these vectors are all being used to describe incoming light directions, that is, serving in the role of $\mathbf{v}_i$.
As we discussed in Chapter 26, many terms, and associated units, are used
to describe light. In an attempt to avoid problems, we'll use just a few: power
(in watts), flux (in watts per square meter), radiance (in watts per square meter
steradian), and occasionally spectral radiance (in watts per square meter steradian
nanometer).
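To keep these units straight, it may help to recall from Chapter 26 how the quantities are related by integration: integrating spectral radiance over wavelength gives radiance, integrating radiance against cosine-weighted solid angle gives the flux at a point (watts per square meter), and integrating that over area gives power,
$$\Phi = \int_U \int_{S^2} L(P, \mathbf{v})\, |\mathbf{v} \cdot \mathbf{n}_P|\; d\mathbf{v}\, dP,$$
with units $[\mathrm{W}] = [\mathrm{m^2}]\,[\mathrm{sr}]\,[\mathrm{W\,m^{-2}\,sr^{-1}}]$.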
31.9 What Do We Need to Compute?
Much of the work in rendering falls into a few categories:
• Developing data structures to make the ray-casting operation fast, which
we discuss in Chapter 36
• Choosing representations for the function f s that are general enough to
capture the sorts of reflectivity exhibited by a wide class of surfaces, yet
simple enough to allow clever optimizations in rendering, which we've
already seen in Chapter 27
• Determining methods to approximate the solution of the rendering equation
It is this last topic that concerns us in this chapter.
The rendering equation characterizes the function L that describes the radiance
in a scene. Do we really need to know everything about L ? Presumably radiance
that's scattered off into outer space (or toward some completely absorbing surface)
is of no concern to us—it cannot possibly affect the picture we're making. In fact,
if we're trying to make a picture seen from a pinhole camera whose pinhole is
at some point C , the only values we really care about computing are of the form
$L(C, \mathbf{v})$. To compute these we may need to compute other values $L(P, \mathbf{v})$ in order to better estimate the values we care about.
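As a concrete illustration of the pinhole case, here is a minimal sketch (not code from this book): one ray per pixel, whose value is a sample of $L(C, \mathbf{v})$. The names renderPinhole and incomingRadiance, and the image-plane setup, are illustrative assumptions; a real renderer would estimate the incoming radiance by ray tracing rather than with the stand-in gradient used here so the sketch compiles.

```cpp
// A minimal sketch: one ray per pixel through a pinhole at C.
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return Vec3{v.x / len, v.y / len, v.z / len};
}

// Hypothetical stand-in for the radiance L(C, v) arriving at C from direction v;
// a sky-like gradient so the sketch is self-contained.
static double incomingRadiance(Vec3 /*C*/, Vec3 v) {
    return 0.5 * (v.y + 1.0);
}

// Camera looks down -z; the virtual image plane sits at distance d from C and
// has width w and height h in world units; the image is nx-by-ny pixels.
std::vector<double> renderPinhole(Vec3 C, double d, double w, double h,
                                  int nx, int ny) {
    std::vector<double> image(nx * ny);
    for (int j = 0; j < ny; ++j) {
        for (int i = 0; i < nx; ++i) {
            double px = ((i + 0.5) / nx - 0.5) * w;   // pixel center on image plane
            double py = ((j + 0.5) / ny - 0.5) * h;
            Vec3 v = normalize(Vec3{px, py, -d});     // direction from C into the scene
            image[j * nx + i] = incomingRadiance(C, v);   // a sample of L(C, v)
        }
    }
    return image;
}
```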
Suppose, however, that we want to simulate an actual camera, with a lens and
with a sensor array like the CCD array in many digital cameras. To compute the
sensor response at a pixel P, we need to consider all rays that convey light to P: rays from any point of the lens to any point of the sensor cell corresponding to P
(see Figure 31.8).
Figure 31.8: Light along any ray from the lens to the sensor cell contributes to the measured value at that cell.
As we said in Chapter 29, light arriving along different rays may have different
effects: Light arriving orthogonal to the film plane may provoke a greater response
than light arriving at an angle, and light arriving near the center of a cell may
matter more than light arriving near an edge—it all depends on the structure of
the sensor. The measurement equation, Equation 29.15, says that
$$m_{ij} = \int_{U \times S^2} M_{ij}(P, \mathbf{v})\, L^{\mathrm{in}}(P, \mathbf{v})\, |\mathbf{v} \cdot \mathbf{n}_P|\, dP\, d\mathbf{v}, \qquad (31.31)$$
where $M_{ij}$ is a sensor-response function that tells us the response of pixel $(i, j)$ to radiance along the ray through $P$ in direction $\mathbf{v}$.
One perfectly reasonable idealization is that the pixel area is a tiny square, and that $M_{ij}$ is 1.0 for any ray through the lens that meets this square, and 0 otherwise.
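Under that idealization, Equation 31.31 can be estimated numerically by sampling rays from the pixel square to the lens. The sketch below is only an illustration, not the book's code: the function names estimatePixelMeasurement and Lin, and the geometry (sensor cell in the plane z = 0, lens a parallel disk), are assumptions. To sample only the directions where $M_{ij}$ is nonzero, the integral over $S^2$ is rewritten as an integral over the lens area, which is what introduces the cosine-over-squared-distance factor.

```cpp
// A minimal Monte Carlo sketch of m_ij from Equation 31.31 with the box-filter
// idealization: M_ij = 1 for rays through the lens that meet the pixel square.
#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Hypothetical stand-in for the incoming radiance L_in(P, v) at sensor point P.
static double Lin(Vec3 /*P*/, Vec3 /*v*/) { return 1.0; }

// Sensor cell (i, j): a square of side s centered at cellCenter in the plane
// z = 0, with normal n_P = (0, 0, 1). Lens: a disk of radius r centered at
// lensCenter in the plane z = zLens, facing the sensor.
double estimatePixelMeasurement(Vec3 cellCenter, double s,
                                Vec3 lensCenter, double r, double zLens,
                                int N) {
    const double kPi = 3.14159265358979323846;
    std::mt19937 rng(12345);
    std::uniform_real_distribution<double> u(-0.5, 0.5), u01(0.0, 1.0);
    double sum = 0.0;
    for (int k = 0; k < N; ++k) {
        // Uniform point P on the pixel square (where M_ij = 1).
        Vec3 P{cellCenter.x + s * u(rng), cellCenter.y + s * u(rng), 0.0};
        // Uniform point Q on the lens disk.
        double rho = r * std::sqrt(u01(rng)), phi = 2.0 * kPi * u01(rng);
        Vec3 Q{lensCenter.x + rho * std::cos(phi),
               lensCenter.y + rho * std::sin(phi), zLens};
        Vec3 d{Q.x - P.x, Q.y - P.y, Q.z - P.z};
        double dist2 = dot(d, d), dist = std::sqrt(dist2);
        Vec3 v{d.x / dist, d.y / dist, d.z / dist};     // direction from P toward Q
        double cosP = std::fabs(v.z);   // |v . n_P| at the sensor
        double cosQ = std::fabs(v.z);   // cosine at the lens (parallel planes)
        sum += Lin(P, v) * cosP * cosQ / dist2;
    }
    // The pixel area accounts for dP; the lens area comes from the change of
    // variables from solid angle to lens area; averaging completes the estimator.
    return (s * s) * (kPi * r * r) * sum / N;
}
```

With a few hundred samples per pixel this converges to the lens-integrated, box-filtered value; a different choice of $M_{ij}$ would simply change the per-sample weight rather than the structure of the estimator.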
 
 