If explicit 3-D models, and with them surface-normal information, are available,
more accurate estimates of the illumination parameters can be obtained (Stauder,
1995; Deshpande et al., 1996; Brunelli, 1997; Eisert et al., 1997). These
approaches assume Lambertian reflection in combination with directional and
ambient light. Given the surface normals, the illumination parameters are
estimated using neural networks (Brunelli, 1997), linear optimization (Deshpande
et al., 1996; Eisert et al., 1997), or non-linear optimization (Stauder, 1995).
Rather than using explicit light source and reflection models to describe
illumination effects, multiple images captured from the same viewing position
but under varying illumination can also be exploited. Hallinan et al. showed
(Hallinan et al., 1994; Epstein et al., 1995) that five eigen images computed from
a set of differently illuminated facial images are sufficient to approximate
arbitrary lighting conditions by linearly blending between the eigen images. An
analytic method for the derivation of the eigen components can be found in
Ramamoorthi (2002). This low-dimensional space of face appearances can be
represented as an illumination cone, as shown by Belhumeur et al. (1998). In
Ramamoorthi et al. (2001), the reflection of light was described theoretically as a
convolution in a signal-processing framework; illumination analysis, or inverse
rendering, can then be treated as deconvolution. Besides the creation of
arbitrarily illuminated face images, the use of multiple input images also allows
the estimation of facial shape and thus a change of head pose in 2-D images
(Georghiades et al., 1999). Using eigen light maps of explicit 3-D models (Eisert
et al., 2002) instead of blending between eigen images also extends the
applicability of the approach to locally deforming objects such as human faces in
image sequences.
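The eigen-image idea amounts to a principal component analysis over an image stack. A minimal sketch follows; the function names and the use of a plain SVD are assumptions of this illustration, not the method of any single cited paper.

```python
import numpy as np

def eigen_images(images, k=5):
    """Compute the mean image and the first k eigen images (principal
    components) from a stack of images of the same scene under varying
    illumination. Requires k <= number of images.

    images: array of shape (num_images, height, width)
    """
    X = images.reshape(images.shape[0], -1).astype(float)
    mean = X.mean(axis=0)
    # Rows of Vt are the principal directions in pixel space.
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return (mean.reshape(images.shape[1:]),
            Vt[:k].reshape((k,) + images.shape[1:]))

def relight(target, mean, basis):
    """Approximate a differently lit image of the same scene by a linear
    blend of the eigen images; returns the blend coefficients and the
    reconstruction."""
    B = basis.reshape(basis.shape[0], -1).T          # (pixels, k)
    coeffs, *_ = np.linalg.lstsq(B, target.ravel() - mean.ravel(), rcond=None)
    approx = mean.ravel() + B @ coeffs
    return coeffs, approx.reshape(target.shape)
```

Choosing k = 5 in this sketch mirrors the observation of Hallinan et al. that five components already approximate arbitrary lighting conditions well for faces.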
For the special application of 3-D model-based motion estimation, relatively few
approaches have been proposed that incorporate photometric effects. In Bozdagi
et al. (1994), the illuminant direction is first estimated according to Zheng et al.
(1991), without exploiting the 3-D model. Given the illumination parameters, the
optical flow constraint is extended to explicitly account for intensity changes
caused by object motion. For that purpose, surface normals are required, which
are derived from the 3-D head model. The approach proposed in Stauder (1995,
1998) makes explicit use of normal information for both illumination estimation
and compensation. Rather than determining the illuminant direction from a single
frame, the changes of surface shading between two successive frames are
exploited to estimate the parameters. The intensities of both ambient and
directional light, as well as the direction of the incident light, are determined by
minimizing a non-linear cost function. Experiments performed for both
approaches show that the consideration of photometric effects can significantly
improve the accuracy of the estimated motion parameters and the reconstruction
quality of the motion-compensated frames (Bozdagi et al., 1994; Stauder, 1995).
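A two-frame estimation of this kind can be posed as a small non-linear least-squares problem. The sketch below is in the spirit of Stauder's formulation but does not reproduce it: it fits the ambient intensity, directional intensity, and illuminant direction to the shading of two frames whose normals are taken from a tracked 3-D model. The function name, the parameterization by spherical angles, and the use of the SciPy solver are assumptions of this illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_light_two_frames(I1, I2, n1, n2):
    """Non-linear estimate of ambient intensity a, directional intensity d,
    and illuminant direction l from two frames of a moving Lambertian surface.

    I1, I2: per-pixel intensities, shape (N,); n1, n2: unit surface normals,
    shape (N, 3), before and after the motion (e.g. from a 3-D head model).
    The light direction is parameterized by spherical angles (theta, phi)
    so that it remains a unit vector during optimization.
    """
    def direction(theta, phi):
        return np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])

    def residuals(p):
        a, d, theta, phi = p
        l = direction(theta, phi)
        # Lambertian shading residuals, stacked over both frames.
        r1 = I1 - (a + d * np.maximum(n1 @ l, 0.0))
        r2 = I2 - (a + d * np.maximum(n2 @ l, 0.0))
        return np.concatenate([r1, r2])

    p0 = np.array([I1.mean(), I1.std() + 1e-3, np.pi / 4, 0.0])
    sol = least_squares(residuals, p0)
    a, d, theta, phi = sol.x
    return a, d, direction(theta, phi)
```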