The basic assumption is that V is a subset of the column space (the linear space
spanned by the columns) of the response matrix R ; i.e., each vector in V is a linear
combination of the columns of R . But at the same time, the column space of R
greatly exceeds V —many linear combinations of the columns of R do not repre-
sent the appearance of the object from a real lighting/viewing direction. After all,
the set of lighting/viewing directions comprises four parameters, so V is in some
sense a four-parameter space. The columns of R themselves lie in that space,
as each column is the response vector of a particular lighting/viewing direction.
Consequently, it is likely that the column space of R itself has a basis of fewer
than n elements. A similar line of reasoning was applied in the “data-driven” ap-
proach to BRDF representation of Matusik et al. described in Chapter 8, although
that work looks to nonlinear basis elements [Matusik et al. 03].
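As a rough, hypothetical illustration of this claim (not from the book), the sketch below builds a toy response matrix whose columns come from a smooth two-parameter family of lighting angles and checks its numerical rank; even with many columns, the rank stays small.

```python
import numpy as np

# Toy stand-in for a measured response matrix R: each column is a flattened
# 32x32 "image" generated from a smooth two-parameter family of lighting
# angles.  The image formula and sample counts here are hypothetical.
rng = np.random.default_rng(0)
pix = np.linspace(0.0, 1.0, 32)
u, v = np.meshgrid(pix, pix)

def toy_image(theta, phi):
    # Appearance varies smoothly with the two lighting angles.
    return np.cos(theta) * u + np.sin(phi) * v + np.cos(theta + phi) * u * v

angles = rng.uniform(0.0, np.pi / 2, size=(100, 2))   # 100 sampled directions
R = np.column_stack([toy_image(t, p).ravel() for t, p in angles])

# Although R has 100 columns, its columns are drawn from a low-parameter
# family, so the numerical rank is far smaller than 100.
print(R.shape, np.linalg.matrix_rank(R))               # (1024, 100) 3
```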
The basic idea of algebraic analysis is to find a small set of basis vectors that
approximates V , the space of all valid object appearance images. Given a carefully
chosen set of basis vectors, any response vector can be approximated as a linear
combination of them, which reduces the representation of an image under a new
lighting/viewing direction to a set of linear coefficients. That is, the image for each
synthesized lighting/viewing direction is a weighted sum of the representative
basis vectors. Linear approximation is especially useful for real-time rendering.
For example, large matrix multiplications are not performed efficiently on GPUs
(4 × 4 matrix multiplications are normally very fast), but linear combinations of
large vectors are a natural GPU operation if each vector is stored as a texture map.
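As a sketch of this idea, the snippet below reconstructs an appearance image as a weighted sum of basis images on the CPU with NumPy; the basis matrix B, the coefficients c, and the image size are hypothetical placeholders, and on a GPU each column of B would be stored as a texture map.

```python
import numpy as np

def reconstruct_image(B, c, shape):
    """Approximate an appearance image as a linear combination of basis images.

    B     : (num_pixels, k) matrix whose columns are the k basis images.
    c     : (k,) coefficients for the new lighting/viewing direction.
    shape : (height, width) used to reshape the flattened result.
    """
    return (B @ c).reshape(shape)

# Hypothetical example: three 4x4 basis images and one coefficient vector.
rng = np.random.default_rng(1)
B = rng.random((16, 3))
c = np.array([0.6, 0.3, 0.1])
image = reconstruct_image(B, c, (4, 4))
print(image.shape)   # (4, 4)
```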
9.3.2 SVD and PCA
An approximate basis of a set is a collection of vectors or functions that can be
combined to approximate any element of the set. One way of
constructing such a basis is to use principal component analysis (PCA), which
provides, in some sense, the most significant directions in a data set. PCA was
originally conceived as a method of data analysis used to reveal the covariance
structure of a set of data points. However, the technique has many other applica-
tions.
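As a concrete illustration (not taken from the text), the following NumPy sketch recovers the most significant direction of a small synthetic 2D point cloud; the principal directions are computed from the SVD of the centered data matrix, which is one common way to perform PCA. The data itself is invented for the example.

```python
import numpy as np

# Hypothetical data: 200 points scattered mostly along the direction (2, 1).
rng = np.random.default_rng(0)
t = rng.normal(size=200)
noise = 0.1 * rng.normal(size=(200, 2))
points = np.outer(t, [2.0, 1.0]) + noise            # shape (200, 2)

# PCA via SVD: center the data, then the right singular vectors give the
# principal directions and the singular values their relative importance.
centered = points - points.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
print("principal direction:", Vt[0])                # roughly +/- (2, 1)/sqrt(5)
print("relative importance:", s / s.sum())
```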
Originally, PCA was done using eigenvector analysis. The basic theory of eigen-
values and eigenvectors is a standard part of an elementary linear algebra course.
An eigenvector x of a square matrix A is a kind of fixed point of the matrix A:

    Ax = λx.    (9.4)

That is, transforming an eigenvector x by A yields a scalar multiple of x. The
scalar λ is an eigenvalue of the matrix A. The span of all eigenvectors is the
eigenspace of the matrix. A nonsingular n × n matrix A has at most n real
eigenvalues.
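The following NumPy sketch (an illustration, not from the text) builds an arbitrary symmetric matrix and verifies Equation (9.4) for each eigenvalue/eigenvector pair returned by the solver; the matrix values are made up for the example.

```python
import numpy as np

# Arbitrary symmetric 3x3 matrix (symmetric so the eigenvalues are real).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Eigendecomposition: w holds the eigenvalues, the columns of V the eigenvectors.
w, V = np.linalg.eigh(A)

# Verify Equation (9.4): A x = lambda x for each eigenpair.
for lam, x in zip(w, V.T):
    assert np.allclose(A @ x, lam * x)
print("eigenvalues:", w)
```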