Figure 8.25 The light stage. (Left image courtesy of Paul Debevec.)
lit individually in sequence; a camera captures images of the subject's face
illuminated by each spotlight. Several exposures are recorded, which are later
combined into an HDR image to represent the reflectance field for the incident
direction of the particular spotlight. 13
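The merge of bracketed exposures into a single HDR image can be sketched as follows. This is a generic weighted-average HDR merge, not the exact procedure of the original paper; the array shapes, exposure times, and the hat-shaped weighting function are all assumptions for illustration, and the sensor response is assumed to be linear.

```python
import numpy as np

# Assumed inputs: a stack of exposures of the same lighting condition,
# as linear sensor values in [0, 1], with their exposure times in seconds.
rng = np.random.default_rng(0)
exposures = [rng.random((480, 640)) for _ in range(3)]
times = [1 / 250, 1 / 60, 1 / 15]

def merge_hdr(exposures, times):
    """Merge bracketed exposures into one HDR radiance image.

    Each exposure contributes its pixel value divided by its exposure
    time (an estimate of scene radiance), weighted by a hat function
    that trusts mid-range pixels and down-weights under- and
    over-exposed ones.
    """
    num = np.zeros_like(exposures[0])
    den = np.zeros_like(exposures[0])
    for img, t in zip(exposures, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight, peak at 0.5
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-8)

hdr = merge_hdr(exposures, times)
```

One such HDR image is produced per spotlight position, giving the set of images from which the reflectance field is assembled.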
Next, the collection of pixel values corresponding to a sample point on the
physical surface is extracted from each captured HDR image. The pixel values
are then arranged into a 2D array by the θ_i and φ_i values of the spotlight in the
HDR image from which they came, to create what is called a "reflectance function"
in the paper. Figure 8.26 illustrates the process. A reflectance function is
essentially a slice of R(θ_i, φ_i; u_r, v_r, θ_r, φ_r) with u_r and v_r (the pixel
location) as well as θ_r and φ_r (the camera position) fixed. A separate
reflectance function is defined for each of a set of fixed sample points on the
physical surface. For each sample point, the reflectance function expresses how
the reflectance at that point changes with different lighting directions.
The reflectance functions can be combined into a tiled mosaic texture map,
similar to an image produced by the light field camera described in Chapter 5,
although the arrangement is different. Each tile in this texture map contains the
reflectance function for a single pixel, and thus shows how the appearance of that
surface point changes with the position of the light source. Figure 8.27 shows a
reflectance function mosaic created for a face. The right side of Figure 8.27 shows
the combined tiled texture map; the left side shows the original image. Near the
center of each tile is the reflectance for the spotlight near the camera, so the values
there correspond to the front-lit image. This is why the face is discernible in a
kind of "fractured" form.
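The tiling itself is a simple axis rearrangement: the mosaic places each point's full reflectance function in its own block. A sketch under assumed shapes (small sizes chosen only so the example runs quickly; the layout, not the resolution, is the point):

```python
import numpy as np

# Assumed inputs: per-pixel reflectance functions R[v, u, theta_i, phi_i],
# one (n_theta x n_phi) tile for each of (H x W) surface sample points.
H, W, n_theta, n_phi = 4, 5, 8, 8
rng = np.random.default_rng(2)
refl = rng.random((H, W, n_theta, n_phi))

def tile_mosaic(refl):
    """Arrange the reflectance functions into a tiled mosaic texture map.

    Tile (v, u) of the mosaic holds the complete reflectance function of
    the surface point imaged at pixel (v, u), so neighboring tiles show
    how neighboring points respond to the same set of light directions.
    """
    h, w, nt, nphi = refl.shape
    # Reorder axes so each (nt x nphi) block lands in its own tile,
    # then flatten to a single 2D texture.
    return refl.transpose(0, 2, 1, 3).reshape(h * nt, w * nphi)

mosaic = tile_mosaic(refl)
```

Reading out one tile recovers the reflectance function of one sample point, which is why the front-lit image reappears, fractured, when only the tile centers are viewed.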
13 The light stage has evolved since the reflectance field was first introduced in 2000. The system
described in the original paper actually used a single spotlight attached to a rotating arm. A video
camera recorded images as the spotlight rotated around the subject. Other devices for capturing and
displaying reflectance fields have since emerged. For example, a paper entitled “Towards Passive 6D
Reflectance Field Displays” by Martin Fuchs, Ramesh Raskar, Hans-Peter Seidel, and Hendrik P. A.
Lensch [Fuchs et al. 08] describes a prototype of a flat display that is view- and illumination-dependent.
It works by discretizing the incident light field using a lens array and modulating it with a coded
pattern.