is in general not constant (Verri et al., 1989). On the contrary, intensity changes
due to varying illumination conditions can dominate the effects caused by object
motion (Pentland, 1991; Horn, 1986; and Tarr, 1998). For accurate and robust
extraction of motion information, lighting effects must be taken into account.
In spite of the relevance of illumination effects, they are rarely addressed in the
area of 3-D motion estimation. In order to allow the use of the optical flow
constraint for varying brightness, higher order differentials (Treves et al., 1994)
or pre-filtering of the images (Moloney, 1991) have been applied. Similarly,
lightness algorithms (Land et al., 1971; Ono et al., 1993; and Blohm, 1997)
make use of the different spectral distributions of texture and intensity changes
due to shading, in order to separate irradiance from reflectance. If the influence
of illumination cannot be suppressed sufficiently by filtering as, e.g., in image
regions depicting highlights caused by specular reflections, the corresponding
parts are often detected (Klinker et al., 1990; Stauder, 1994; and Schluens et al.,
1995) and classified as outliers for the estimation.
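For reference, the optical flow constraint referred to above is the classical
brightness constancy assumption; the notation below (image intensity $I$, flow
field $(u, v)$, partial derivatives $I_x$, $I_y$, $I_t$) is generic and not taken
from any particular cited work:
\[
I_x\, u + I_y\, v + I_t = 0 .
\]
Temporal illumination changes add a nonzero term to this equation, which is what
the higher-order and pre-filtering approaches try to compensate. Similarly, the
lightness algorithms mentioned above rest on the common multiplicative image
model, stated here again in generic notation,
\[
I(x, y) = E(x, y)\, R(x, y),
\]
in which the smoothly varying illumination component $E$ is separated from the
more rapidly varying reflectance component $R$.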
Rather than removing the disturbing effects, explicit information about the
illumination changes can be estimated. This not only improves the motion
estimation but also allows the manipulation and visual enhancement of the
illumination situation in an image afterwards (Blohm, 1997). Under controlled
conditions with, e.g., known object shape, light source position (Sato et al., 1997;
Sato et al., 1996; and Baribeau et al., 1992), and homogeneous non-colored
surface properties (Ikeuchi et al., 1991; Tominaga et al., 2000), parameters of
sophisticated reflection models like the Torrance-Sparrow model (Torrance et
al., 1967; Nayar et al., 1991; and Schlick, 1994), which also includes specular
reflection, can be estimated from camera views. Since the difficulty of param-
eter estimation increases significantly with model complexity, the analysis of
global illumination scenarios (Heckbert, 1992) with, e.g., inter-reflections (Forsyth
et al., 1991) is only addressed for very restricted applications (Wada et al., 1995).
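As a point of reference only (the exact formulations in the cited works differ
in their parameterization), a frequently used simplified form of the
Torrance-Sparrow model describes the reflected intensity as the sum of a
Lambertian and a specular lobe,
\[
I = k_d \cos\theta_i
  + \frac{k_s}{\cos\theta_r}\,
    \exp\!\left(-\frac{\alpha^2}{2\sigma^2}\right),
\]
where $\theta_i$ denotes the angle between surface normal and light direction,
$\theta_r$ the angle between surface normal and viewing direction, $\alpha$ the
angle between the surface normal and the bisector of the light and viewing
directions, and $\sigma$ a surface roughness parameter; $k_d$ and $k_s$ are
generic diffuse and specular coefficients chosen here for illustration. The
growth of this unknown parameter set with model complexity is what restricts
such estimation to controlled conditions.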
In the context of motion estimation, where the exact position and shape of an
object are often not available, mostly simpler models are used that account for
the dominant lighting effects in the scene. The simplest scenario is the assump-
tion of pure ambient illumination (Foley et al., 1990). Other approaches (Gennert
et al., 1987; Moloney et al., 1991; and Negahdaripour et al., 1993) extend the
optical flow constraint by a two-parameter function to allow for a global
intensity scaling and a global intensity shift between the two frames (a sketch
of such an extended constraint is given at the end of this section). Local
shading effects can be modeled using additional directional light sources (Foley
et al., 1990). For the estimation of the illuminant direction, surface-normal
information is required. If this information is not available, as is the case, e.g., for the large
class of shape-from-shading algorithms (Horn et al., 1989; Lee et al., 1989),
assumptions about the surface-normal distribution are exploited to derive the
direction of the incident light (Pentland, 1982; Lee et al., 1989; Zheng et al., 1991;
and Bozdagi et al., 1994).
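As referenced above, a minimal sketch of such a two-parameter extension, in
generic notation that does not follow any single cited paper: the brightness
constancy assumption $I(x+u,\, y+v,\, t+\Delta t) = I(x, y, t)$ is relaxed to
\[
I(x+u,\, y+v,\, t+\Delta t) = m\, I(x, y, t) + c ,
\]
where the multiplier $m$ accounts for a global intensity scaling and the offset
$c$ for a global intensity shift between the two frames; linearization then
yields an optical flow equation with two additional unknowns. For the local
shading effects mentioned above, the corresponding simple model is a Lambertian
surface under an ambient term and a single directional source, e.g.
\[
I = \rho \left( E_a + E_d \max(0,\, \mathbf{n}\cdot\mathbf{l}) \right),
\]
with surface albedo $\rho$, ambient and directional irradiances $E_a$ and $E_d$,
surface normal $\mathbf{n}$, and light direction $\mathbf{l}$; estimating
$\mathbf{l}$ without known normals is exactly the problem addressed by the
illuminant-direction approaches cited above.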