limiting us to those points that lie on surfaces—we define radiance at $(P, \mathbf{v})$ for a nonsurface point $P$ by letting $Q = R(P, -\mathbf{v})$, and setting

$$L(P, \mathbf{v}) = L_{\text{out}}(Q, \mathbf{v}), \tag{29.13}$$

which results in radiance that's constant along rays in empty space. In Equation 29.13, we defined $L(P, \mathbf{v})$ rather than $L_{\text{in}}$ or $L_{\text{out}}$ because at points of empty space, these two functions agree; they only differ at points of $\mathcal{M}$.
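In a ray tracer, this definition is exactly what lets radiance be queried anywhere in space. The following is a minimal sketch of that lookup; `raycast` and `outgoing_radiance` are hypothetical stand-ins for a renderer's ray-intersection and shading routines, and radiance is treated as a scalar for brevity.

```python
def raycast(P, d):
    """Return the first surface point hit by the ray from P in direction d,
    or None if the ray escapes the scene (hypothetical stub)."""
    ...

def outgoing_radiance(Q, v):
    """Return L_out(Q, v), the radiance leaving surface point Q in
    direction v (hypothetical stub)."""
    ...

def radiance(P, v):
    """Radiance L(P, v) at a nonsurface point P, per Equation 29.13.

    Radiance is constant along rays in empty space, so we look backward
    along -v for the surface point Q the light left from.
    """
    Q = raycast(P, tuple(-c for c in v))
    if Q is None:
        return 0.0                    # ray leaves the scene; nothing arrives at P
    return outgoing_radiance(Q, v)    # L(P, v) = L_out(Q, v)
```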
The rendering equation now becomes

$$L_{\text{out}}(P, \mathbf{v}_o) = L_e(P, \mathbf{v}_o) + \int_{\mathbf{v}_i \in S^2(P)} L_{\text{in}}(P, \mathbf{v}_i)\, f_s(P, \mathbf{v}_i, \mathbf{v}_o)\, |\mathbf{v}_i \cdot \mathbf{n}_P| \, d\mathbf{v}_i. \tag{29.14}$$
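A brute-force Monte Carlo estimator makes the structure of Equation 29.14 concrete: sample directions uniformly on the full sphere, evaluate the integrand, and divide by the sample density. In this sketch, `incoming_radiance` and `bsdf` are hypothetical callables standing in for $L_{\text{in}}$ and $f_s$, and radiance is again a scalar.

```python
import math
import random

def sample_sphere():
    """Uniform random direction on the unit sphere; pdf = 1 / (4*pi)."""
    z = 2.0 * random.random() - 1.0
    phi = 2.0 * math.pi * random.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def estimate_outgoing_radiance(P, n_P, v_o, L_e, incoming_radiance, bsdf,
                               n_samples=64):
    """Monte Carlo estimate of Equation 29.14 at surface point P.

    L_e: emitted radiance L_e(P, v_o).
    incoming_radiance(P, v_i): hypothetical lookup of L_in.
    bsdf(P, v_i, v_o): hypothetical evaluation of f_s.
    """
    pdf = 1.0 / (4.0 * math.pi)     # uniform density over the whole sphere
    total = 0.0
    for _ in range(n_samples):
        v_i = sample_sphere()
        integrand = (incoming_radiance(P, v_i)
                     * bsdf(P, v_i, v_o)
                     * abs(dot(v_i, n_P)))
        total += integrand / pdf
    return L_e + total / n_samples
```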
The changes we've made to incorporate transmission seem fussy and likely to lead to code with multiple cases. In practice, however, they have almost no effect. That's partly because of the restricted model of scattering we use in representing materials in Chapter 32: Scattering at a surface point consists of a small number of impulses and an otherwise diffuse or glossy reflectance-scattering pattern. (Recall that an impulse is a phenomenon like mirror reflection or Snell–Fresnel refraction, where radiance arriving along one ray scatters out along just one or two other rays.) In particular, in the general rendering equation, the part of the integral representing transmission degenerates to something far simpler: We look at the radiance arriving along one particular ray, multiply it by a constant representing how much light is transmitted, and add the result to the outgoing radiance, as sketched below.
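An impulse needs no sampling at all. Here is a sketch of that degenerate transmissive case, with a hypothetical `refract_direction` helper and the `incoming_radiance` callable from the estimator above; the constant `transmittance` is an assumption, where a real renderer would derive it from the Fresnel equations.

```python
def refract_direction(v_o, n_P, eta):
    """Hypothetical Snell-law refraction of v_o about normal n_P for
    relative index of refraction eta (details omitted)."""
    ...

def impulse_transmission(P, n_P, v_o, transmittance, incoming_radiance,
                         eta=1.5):
    """The transmission part of the integral, collapsed to a single ray:
    fetch radiance along the one refracted direction, scale it by a
    constant, and add the result to the outgoing radiance."""
    v_i = refract_direction(v_o, n_P, eta)            # the one incoming ray
    return transmittance * incoming_radiance(P, v_i)
```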
29.4.1 The Measurement Equation
Typically a renderer takes a scene description as input and produces an image—a rectangular array of values—as output. These values might just be RGB triples in some fixed range, or they might be RGB radiance values representing radiance in $\mathrm{W\,m^{-2}\,sr^{-1}}$, or something else. In general, a particular pixel value represents the result of a measurement process. For a typical digital camera, the red measurement, for one pixel, represents the total charge accumulated in one cell of a CCD device. For a synthetic camera, it might represent the integral of irradiance in the red portion of the spectrum over the rectangle corresponding to one pixel on the image plane. Or it might represent a weighted integral of this irradiance over a disk slightly larger than the rectangle usually associated to a pixel, so that radiance along a single ray contributes to the value of more than one pixel in the final image. We express this idea by associating to each pixel $ij$ a sensor response $M_{ij}$, which converts radiance along any ray into a numerical value that can be summed over all rays to get the sensor value. That is to say, we posit that the measurement $m_{ij}$ associated to pixel $ij$ is computed as
$$m_{ij} = \int_{U \times S^2} M_{ij}(P, \mathbf{v})\, L_{\text{in}}(P, \mathbf{v})\, |\mathbf{v} \cdot \mathbf{n}_P| \, dP \, d\mathbf{v}, \tag{29.15}$$
where $U$ is the image plane. This is a purely formal description of the measurement process. The critical thing is that $M_{ij}$ is zero except for points in a small area and directions in a small solid angle. For a camera with a small pinhole aperture, for instance, $M_{ij}(P, \mathbf{v})$ is nonzero only if both of the following are true.
• $P$ lies within the rectangle associated to pixel $ij$ on the image plane.
• $\mathbf{v}$ is the direction from $P$ toward the pinhole.
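Given those two conditions, an estimate of Equation 29.15 for the pinhole camera collapses to averaging over points in the pixel's rectangle, since $M_{ij}$ vanishes everywhere else. The sketch below treats $M_{ij}$ as a constant box filter over the pixel and reuses the `dot` and `incoming_radiance` helpers from the earlier sketches; `sample_pixel_rectangle` and the image-plane geometry are assumptions.

```python
import math
import random

def sample_pixel_rectangle(i, j, pixel_size=1.0):
    """Hypothetical: a uniformly random point P in pixel ij's rectangle
    on the image plane U (taken here to be the plane z = 0)."""
    return (pixel_size * (i + random.random()),
            pixel_size * (j + random.random()),
            0.0)

def measurement(i, j, pinhole, incoming_radiance, n_samples=16):
    """Estimate m_ij (Equation 29.15) for a pinhole camera.

    M_ij is nonzero only for points P in pixel ij's rectangle with v
    pointing from P to the pinhole, so the double integral collapses
    to an average over sampled points in the pixel.
    """
    n_P = (0.0, 0.0, 1.0)          # image-plane normal (assumed geometry)
    total = 0.0
    for _ in range(n_samples):
        P = sample_pixel_rectangle(i, j)
        d = tuple(a - b for a, b in zip(pinhole, P))   # from P toward pinhole
        norm = math.sqrt(sum(c * c for c in d))
        v = tuple(c / norm for c in d)
        total += incoming_radiance(P, v) * abs(dot(v, n_P))
    return total / n_samples
```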