in the local coordinate system of the vertex, and is projected via an orthographic
projection onto the texture coordinate plane (the circles in Figure 5.20 are the
bounds of this projection). The surface map and view map values are multiplied
directly; the surface map includes the value of the weighting function (which is
shown as a gradation in Figure 5.20), so the result can be added directly to the
accumulated sum for the triangle. These computations can all be done efficiently
on a GPU.
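As a rough illustration of this accumulation (a CPU sketch in NumPy rather than the GPU implementation the paper describes; the array names, shapes, and rank-2 approximation below are assumptions for the example, not the paper's data structures), each fragment sums, over the triangle's three vertices and over the terms of each vertex approximation, the product of a surface-map sample and a view-map sample:

import numpy as np

def shade_fragment(surface_maps, view_maps, surf_uv, view_uv):
    """Accumulate one fragment's radiance from per-vertex surface and view maps.

    surface_maps[j][k] and view_maps[j][k] are 2D arrays holding the k-th term
    of vertex j's approximation; surf_uv and view_uv are (row, col) texel
    indices (a GPU renderer would use filtered texture lookups instead).
    """
    radiance = 0.0
    for j in range(3):                       # the triangle's three vertices
        for g, h in zip(surface_maps[j], view_maps[j]):
            # The surface map already contains the vertex weighting function,
            # so each product is added straight into the triangle's sum.
            radiance += g[surf_uv] * h[view_uv]
    return radiance

# Toy data: three vertices, a rank-2 approximation, 16 x 16 maps.
rng = np.random.default_rng(0)
smaps = [[rng.random((16, 16)) for _ in range(2)] for _ in range(3)]
vmaps = [[rng.random((16, 16)) for _ in range(2)] for _ in range(3)]
print(shade_fragment(smaps, vmaps, (4, 7), (9, 3)))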
The surface representation of the light field presented in the 2002 “Light Field
Mapping” paper performs particularly well on objects having intricate surface geometry detail, but does have some drawbacks. The construction of the surface and
view maps, i.e., the construction of the vertex function approximations, requires
a huge number of radiance samples. Furthermore, the representation of the maps
requires less storage than the spatial light field, but it takes up significant space
nonetheless.
5.4.5 Light Field Photography
In 2005, Ren Ng and his colleagues at Stanford University published two papers that established a new connection between the light field and photography. “Light Field Photography with a Hand-held Plenoptic Camera,” a technical report written by Ng, Levoy, and Hanrahan along with Mathieu Bredif, Gene Duval, and Mark Horowitz, describes a digital camera modified with a microlens array that can capture a portion of the light field in a single exposure. The authors show how the acquired light field can be used to digitally refocus an image. The paper “Fourier Slice Photography,” presented at SIGGRAPH 2005, develops a mathematical framework for imaging from the light field [Ng 05]. The main result is a new theorem that expresses an image as a slice of the 4D Fourier transform of the light field.
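To make this concrete, here is a paraphrase of the result with constant factors omitted (see [Ng 05] for the exact statement), written in terms of the in-camera light field L_F(x, y, u, v) defined in the next paragraph: the image E_{\alpha F} refocused at a virtual sensor depth \alpha F is an integral over the lens coordinates, and its 2D Fourier transform is a 2D slice of the 4D Fourier transform of L_F.

E_{\alpha F}(x, y) \;\propto\; \iint
    L_F\!\left(u + \frac{x - u}{\alpha},\; v + \frac{y - v}{\alpha},\; u,\; v\right)
    \, du \, dv

\widehat{E_{\alpha F}}(k_x, k_y) \;\propto\;
    \widehat{L_F}\bigl(\alpha k_x,\; \alpha k_y,\; (1 - \alpha) k_x,\; (1 - \alpha) k_y\bigr)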
The light incident on the surface of the main lens of a camera is a light field:
at each point on the lens, radiance from a set of ray directions enters the lens.
The set of directions is further limited by the aperture, and the rays are refracted by
the optics in the lens system, but each point on the sensor plane ultimately receives
radiance from a set of directions. The set of light rays reaching the sensor plane
is called the in-camera light field. Figure 5.21 illustrates the concept. A point (x, y) on the sensor plane receives radiance from a set of points on the main lens, parameterized by (u, v), although this correspondence depends on the optics and focus state of the camera. Because of other optical elements, including the aperture stop, there may not be a light path from every point on the lens to a given point on the sensor. If F is the distance from the lens plane to the sensor plane, the radiance arriving at (x, y) from point (u, v) is denoted by L_F(x, y, u, v).
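A minimal numerical sketch of the digital refocusing mentioned above, assuming the in-camera light field has been sampled into an array LF[x, y, u, v] (the array layout, the lens-coordinate vectors, and the nearest-neighbour resampling are assumptions of this example, not details from the papers):

import numpy as np

def refocus(LF, alpha, u_coords, v_coords):
    """Synthesize an image focused at virtual sensor depth alpha*F from a
    sampled in-camera light field LF[x, y, u, v] by shift-and-add.

    u_coords and v_coords give the lens-plane position of each (u, v) sample,
    expressed in the same units as the (x, y) pixel spacing.
    """
    nx, ny, nu, nv = LF.shape
    image = np.zeros((nx, ny))
    xs = np.arange(nx)
    ys = np.arange(ny)
    for iu, u in enumerate(u_coords):
        for iv, v in enumerate(v_coords):
            # Radiance leaving lens point (u, v) toward output pixel (x, y)
            # crosses the original sensor plane at u + (x - u)/alpha, so
            # resample this sub-aperture image accordingly.
            src_x = u + (xs - u) / alpha
            src_y = v + (ys - v) / alpha
            # Nearest-neighbour resampling keeps the sketch short; a real
            # implementation would interpolate.
            ix = np.clip(np.round(src_x).astype(int), 0, nx - 1)
            iy = np.clip(np.round(src_y).astype(int), 0, ny - 1)
            image += LF[:, :, iu, iv][np.ix_(ix, iy)]
    return image / (nu * nv)

# Example: a random 32 x 32 light field with 5 x 5 lens samples,
# refocused with alpha = 0.8.
rng = np.random.default_rng(1)
LF = rng.random((32, 32, 5, 5))
u = np.linspace(-2.0, 2.0, 5)
v = np.linspace(-2.0, 2.0, 5)
img = refocus(LF, alpha=0.8, u_coords=u, v_coords=v)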