this period. The common feature of the techniques developed was that the pixels
were selected out of the image data set by depth.
These view-interpolation techniques assume that the depth of existing images is precomputed or already known. The problem of rendering from general photographs that have no range data remained unsolved. In the middle of the 1990s, CG researchers began to use stereo reconstruction to assign appropriate depths to images in the absence of range information. The method of recovering depth by using a form of stereo reconstruction is called depth from stereo. With depth from stereo, a collection of images can be re-rendered at an arbitrary viewpoint without any a priori knowledge of the underlying geometry. Doing this has come to be known as image-based rendering (IBR). An early IBR technique was developed in 1994 at INRIA, the French national laboratory [Laveau and Faugeras 94]. Basically, the technique works using stereo correspondence, but the depth is determined by measuring the "disparity" (the shift in image position of corresponding points) between the primary image and a secondary image. The underlying geometry never needs to be constructed explicitly.
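The relationship between disparity and depth can be illustrated with the standard rectified-stereo relation, depth = f B / d, where f is the focal length in pixels, B the baseline between the two cameras, and d the measured disparity. This is a generic textbook sketch, not the specific Laveau-Faugeras formulation; the function name and parameters are illustrative.

```python
import numpy as np

def depth_from_disparity(disparity, focal_length_px, baseline_m, eps=1e-6):
    """Convert a stereo disparity map (pixels) to a depth map (meters).

    Standard rectified-stereo geometry: depth = f * B / d. Points with
    (near-)zero disparity are effectively at infinity.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > eps            # guard against division by zero
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Example: a feature shifted 50 px between two cameras 0.1 m apart,
# imaged with an 800 px focal length, lies 1.6 m away.
print(depth_from_disparity([[50.0]], 800.0, 0.1))
```

Note that larger disparities correspond to nearer points, which is why close objects appear to shift more between the two views.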
In 1995, Leonard McMillan and Gary Bishop published the paper "Plenoptic Modeling" [McMillan and Bishop 95], in which they argued that image-based rendering ultimately amounts to determining the plenoptic function.3 The plenoptic function gives the spectral radiance at each point in space, in each direction, at each point in time. It is therefore a function of seven variables, P(x, y, z, θ, φ, λ, t), assuming an underlying coordinate system. The plenoptic function is an abstract construct; ultimately much of photorealistic rendering can be formulated as computing values of the plenoptic function. McMillan and Bishop's 1995 paper did not try to actually reconstruct the plenoptic function; rather, it cast image-based rendering in the framework of computing a slice of the plenoptic function. The particular depth-from-stereo method employed in the paper uses cylindrical projections of panoramic images to represent the plenoptic function at specific points in space. Rendering from an arbitrary viewpoint is accomplished by warping these images. The environment is assumed to be static, so there is no variation in the images over time.
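The idea of a panorama as a slice of the plenoptic function can be sketched as follows: fixing the center of projection fixes (x, y, z), RGB samples stand in for the wavelength λ, and a static scene drops t, leaving a 2-D function of direction (θ, φ) stored as a cylindrical image. The function name and panorama layout below are illustrative assumptions, not the paper's actual data structures.

```python
import numpy as np

def sample_cylindrical_panorama(panorama, theta, phi, v_fov=np.pi / 2):
    """Look up the radiance seen in direction (theta, phi) from the
    panorama's center of projection.

    panorama : H x W x 3 array; columns span azimuth theta in [0, 2*pi),
               rows span elevation phi across a vertical field of view
               v_fov (an assumed, common cylindrical layout).
    """
    h, w, _ = panorama.shape
    # Azimuth wraps around the cylinder.
    col = int((theta % (2 * np.pi)) / (2 * np.pi) * w) % w
    # On a unit-radius cylinder, vertical position is proportional
    # to tan(elevation); normalize to -1..1 within the field of view.
    t = np.tan(phi) / np.tan(v_fov / 2)
    row = int(np.clip((t + 1) / 2 * (h - 1), 0, h - 1))
    return panorama[row, col]

pano = np.zeros((64, 256, 3))
pano[31, 128] = [1.0, 0.5, 0.25]   # one bright sample on the horizon
print(sample_cylindrical_panorama(pano, np.pi, 0.0))
```

Warping such a panorama to a new viewpoint, as in plenoptic modeling, additionally requires a per-pixel disparity so that each sample can be reprojected rather than merely looked up.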
3 The term "plenoptic function" was coined by vision researchers Adelson and Bergen [Adelson and Bergen 91]. The term is a combination of the Latin plenus, meaning "full" or "complete," and the Greek optikos, meaning "of sight."

5.3 Image-Based Modeling and Rendering
In 1996, Paul E. Debevec, Camillo J. Taylor, and Jitendra Malik published "Modeling and Rendering Architecture from Photographs: A Hybrid Geometry- and