Digital refocusing is therefore a matter of evaluating this integral, given the in-camera light field L_F.
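The evaluation can be sketched as a shift-and-add computation: each sub-aperture image of the light field is translated in proportion to its position on the main-lens aperture and the results are averaged. The array layout, function name, and the integer-shift simplification below are assumptions for illustration, not the text's implementation.

```python
import numpy as np

def refocus(lf, slope):
    """Shift-and-add refocusing of a 4D in-camera light field.

    lf    : array of shape (U, V, H, W) -- one H x W sub-aperture image
            per (u, v) sample on the main-lens aperture (assumed layout).
    slope : refocus parameter; each sub-aperture image is translated in
            proportion to its (u, v) offset from the aperture center
            before averaging, which simulates focusing at another depth.
    """
    U, V, H, W = lf.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = slope * (u - (U - 1) / 2.0)
            dv = slope * (v - (V - 1) / 2.0)
            # Integer shifts for brevity; a real implementation would
            # interpolate sub-pixel shifts.
            out += np.roll(lf[u, v],
                           (int(round(du)), int(round(dv))),
                           axis=(0, 1))
    return out / (U * V)
```

With slope set to zero the average leaves the conventional photograph; other slopes move the synthetic plane of focus.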
Light field photography with a plenoptic camera. In order to capture
the in-camera light field, Ng et al. constructed a device which they call a “light
field camera” based on a camera proposed in the paper “Single Lens Stereo with
a Plenoptic Camera” published in 1992 by Adelson and Wang [Adelson and
Wang 92]. 5 An ordinary camera focuses light from a distant plane of focus so
that light rays from this plane converge at corresponding points on the sensor ar-
ray (or the film plane). Suppose that a thin opaque surface perforated by an array
of pinholes is placed just in front of the sensor array, and the image is focused
onto this surface instead. Each pinhole spreads out the convergent light onto the
sensor plane in the form of a disc-shaped image, which represents what can be
seen through the pinhole. The sensor plane thus records these disc images as a single composite image, which looks like an array of small images of the scene, each captured from a slightly different viewpoint. Judicious placement of the pinhole surface maximizes the sizes of these images so that they just touch one another.
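The spacing condition admits a small calculation. Light converging through a main lens of f-number N forms a cone of angular extent roughly 1/N, so a pinhole placed a distance s in front of the sensor spreads it into a disc of diameter about s/N; a pinhole pitch equal to that diameter makes neighboring discs just touch. The function name and the numbers below are illustrative assumptions, not values from the text.

```python
def disc_diameter(separation_mm, f_number):
    """Approximate diameter of each pinhole's disc image on the sensor.

    The converging cone from an f/N main lens has angular extent
    about 1/N, so a pinhole a distance `separation_mm` in front of
    the sensor spreads it into a disc of about separation_mm / N.
    (Simplified geometric model for illustration.)
    """
    return separation_mm / f_number

# Assumed example: an f/4 main lens with the pinhole plane 0.5 mm in
# front of the sensor gives discs about 0.125 mm across, so a pinhole
# pitch of 0.125 mm makes the discs just touch.
pitch = disc_diameter(0.5, 4.0)
```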
The light field camera described by Ng and his collaborators uses this princi-
ple, except that an array of microlenses is used in place of a pinhole surface. 6 In
the light field camera, the image is focused on the microlens array, and each microlens forms a small, sharp image on the sensor array of what is seen through the main lens. This image, called a "microlens image," is recorded by the group of pixels
corresponding to the microlens. The image captured by the light field camera
therefore has the appearance of a collection of small images of the object, each
with a field of view several times larger than that corresponding to the original
group of pixels (Figure 5.23).
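The mosaic structure just described can be sketched in code: the raw sensor image is a grid of microlens images, and selecting the same pixel offset inside every microlens image assembles one view of the scene as seen through one subregion of the main lens. The idealized grid layout and the names below are assumptions; a real camera requires calibration and resampling.

```python
import numpy as np

def subaperture_view(raw, k, du, dv):
    """Extract one sub-aperture image from an idealized plenoptic photograph.

    raw    : 2D sensor image laid out as a grid of k x k microlens
             images (height and width multiples of k; assumed layout).
    k      : pixels per microlens in each direction.
    du, dv : pixel offset inside each microlens image (0 <= du, dv < k);
             each offset selects light from one subregion of the main lens.
    """
    H, W = raw.shape
    ny, nx = H // k, W // k
    # Reshape to (ny, nx, k, k): the first two axes index microlenses
    # (spatial position), the last two index position under each
    # microlens (direction).
    lf = raw[:ny * k, :nx * k].reshape(ny, k, nx, k).transpose(0, 2, 1, 3)
    return lf[:, :, du, dv]   # one (ny, nx) view of the scene
```

Sweeping (du, dv) over the microlens image yields the family of shifted viewpoints that the text describes.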
Each microimage contains a small 2D slice of the in-camera light field. The pinhole model helps illustrate why. The microlens array is placed where the sensor array would be in an ordinary camera, so the (x, y) parameters correspond to
5 The idea of “plenoptic” photography has a long history, which can trace its roots back more
than a century to the “integral photography” method pioneered by M. G. Lippmann [Lippmann 08].
Adelson and Wang's camera incorporates a single main lens along with an array of small lens-like elements
placed at the sensor plane. This arrangement allows for the simultaneous capture of a set of images
corresponding to shifted views of a scene seen through subregions of the main lens. Their work
includes a method they call “single lens stereo” for reconstruction based on the parallax of the captured
images. The light field camera constructed by Ng et al. had a design similar to Adelson and Wang's,
but the microlens array was more precisely constructed and was moved away from the sensor plane.
6 Cameras with similar microlens arrays had been used before for similar purposes. The 2001
paper “3-D computer graphics based on integral photography” by T. Naemura, T. Yoshida, and H.
Harashima [Naemura et al. 01] is representative of this work. The system described in the paper
enables interactive recovery and display of objects from an arbitrary viewpoint. The light field camera
developed by Ng et al. was novel in the sense that it was designed specifically for capturing the in-
camera light field, and its primary purpose was to study digital refocusing.