A paper by Ko Nishino, Yoichi Sato, and Katsushi Ikeuchi [Nishino et al. 01] described a related approach for more general object surfaces. In the eigen-texture method, PCA is applied to a set of captured images of a small patch on an object surface under various viewing directions. This yields a small set of basis images that best represent the appearance variation of the patch over all viewing directions. The technique has had a significant influence on light field research in general.
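The core idea, PCA over a stack of patch images, can be sketched as follows. All sizes and the random "captured" data are illustrative assumptions, not values from the paper; the basis images are the leading right singular vectors of the mean-centered image stack.

```python
import numpy as np

# Hypothetical setup: 60 images of one surface patch, each 16x16
# pixels (grayscale here for simplicity), captured under varying
# viewing directions.
rng = np.random.default_rng(0)
n_views, h, w = 60, 16, 16
images = rng.random((n_views, h * w))  # each row = one flattened patch image

# PCA: center the data, then take the leading right singular
# vectors as basis images.
mean = images.mean(axis=0)
centered = images - mean
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

k = 8                        # keep a small set of basis images
basis = Vt[:k]               # k basis images, each of length h*w
coeffs = centered @ basis.T  # per-view coefficients in the basis

# Reconstruct the appearance of view 0 from the compact representation.
recon = mean + coeffs[0] @ basis
```

Each view is then represented by k coefficients instead of a full image, which is what makes the representation compact.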
In light field rendering (Chapter 5), the radiance leaving an object is captured in a collection of directions to construct the light field; the appearance of the object from any viewpoint can then be reconstructed by interpolating the light field radiance values. The reconstruction does not depend on the specific geometry of the object, which is one of the strengths of the method. However, it has the drawback of regarding all parts of the object as having equal visual importance. Furthermore, the lack of any specific geometric representation can cause problems with the appearance of interpolated shadows. The goal of the eigen-texture method was to construct a representation similar to the light field that also accounts for changes in the lighting/viewing direction and the geometry of the particular object. The paper considered a restricted version of the problem, in which the viewpoint and light direction remain fixed and the object is rotated. Consequently, the lighting/viewing directions are functions of a single parameter, the rotation angle.
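Because appearance is parameterized by a single rotation angle, synthesizing an intermediate view amounts to one-dimensional interpolation between captured samples. A minimal sketch, with made-up angles and image sizes, blending the two nearest captured images:

```python
import numpy as np

# Illustrative data: one captured image per rotation angle (degrees).
angles = np.arange(0.0, 360.0, 30.0)          # 12 captured rotation angles
rng = np.random.default_rng(1)
captured = rng.random((len(angles), 16, 16))  # one image per angle

def interpolate_view(theta):
    """Linearly blend the two captured images nearest to angle theta."""
    theta = theta % 360.0
    step = 30.0
    i = int(theta // step) % len(angles)
    j = (i + 1) % len(angles)       # wrap around at 360 degrees
    t = (theta - angles[i]) / step  # blend weight in [0, 1)
    return (1.0 - t) * captured[i] + t * captured[j]

# At a captured angle, the blend returns that image exactly.
view = interpolate_view(60.0)
```

This per-pixel blending is the naive baseline; the eigen-texture method instead interpolates in the much smaller coefficient space of the PCA basis.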
In the eigen-texture method, the object surface is divided into small triangular
patches, and the analysis is done on each patch. Images of the object are captured
from the various rotation angles. Then these images are registered so that the
parts of the images corresponding to each patch can be extracted. Each triangular
patch is warped to a fixed-size right triangle, called a “cell” in the paper, and the
appearance of a cell from a particular viewpoint is a “cell image.” A sequence
of cell images represents the appearance of a patch for all viewing directions
(Figure 9.10). Synthesizing a cell image for an arbitrary rotation angle could be done by interpolating the appropriate cell images separately in each cell image sequence, but the authors were interested in something better. A model typically
consists of thousands of patches, each of which requires a large number of cell
images. The storage requirements are therefore quite large, and one goal of the
method was to develop a compact representation of the view-dependent texture.
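A back-of-envelope estimate makes the storage pressure concrete. All of the numbers below are hypothetical, chosen only to illustrate the scale, not taken from the paper:

```python
# Hypothetical model: 5000 triangular patches, 120 captured rotation
# angles, 16x16-pixel cells, RGB color.
n_patches = 5000
n_views = 120
cell_pixels = 16 * 16
channels = 3

# Raw storage: every cell image for every patch and view
# (bytes, assuming one byte per channel per pixel).
raw = n_patches * n_views * cell_pixels * channels

# Eigen-texture storage per patch: k basis images plus k coefficients
# per view (4-byte floats).
k = 5
compressed = n_patches * 4 * (k * cell_pixels * channels + k * n_views)

ratio = raw / compressed  # roughly 5x savings with these assumptions
```

Even with modest assumptions, the raw cell-image sequences run to hundreds of megabytes, while the eigen-texture representation shrinks with the number of retained basis images k.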
The eigen-texture method, as the name suggests, employs eigenvector-based PCA to compress each sequence of cell images. The color channels of the cell image pixels are included in the PCA analysis. Some interpolation methods for color images work on each color component separately; in the eigen-texture method, the color channels are separated and concatenated into a single column vector per cell image.
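The channel layout can be sketched as follows. Sizes are hypothetical; the point is that the flattened R, G, and B planes of each cell image are stacked into one long column vector, so that PCA operates on all three channels jointly rather than per channel:

```python
import numpy as np

# Hypothetical cell image sequence: 40 views of one 8x8 RGB cell.
rng = np.random.default_rng(2)
n_views, h, w = 40, 8, 8
cells = rng.random((n_views, h, w, 3))

def to_column(cell):
    """Concatenate the flattened R, G, B planes into one vector."""
    r, g, b = cell[..., 0], cell[..., 1], cell[..., 2]
    return np.concatenate([r.ravel(), g.ravel(), b.ravel()])

# One column per cell image; PCA is then applied to this matrix, so
# all three channels share a single set of eigen-images.
X = np.stack([to_column(c) for c in cells], axis=1)  # shape (3*h*w, n_views)
```

Jointly analyzing the channels lets correlations between them (e.g., a highlight brightening all three) be captured by a single basis vector instead of three.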