and used to instantiate the models. In the following, we briefly introduce the statistical learning
methods that have been applied to non-rigid shape recovery.
2.2.1 STATISTICAL LEARNING METHODS
Many surface parameterizations involve a large number of degrees of freedom. This, for example,
is the case when specifying the shape of a triangulated mesh in terms of its vertex coordinates.
However, these degrees of freedom are often coupled and therefore lie on a much lower-dimensional
manifold. Rather than explicitly adding constraints to the problem at hand, the core idea behind
statistical learning is to discover this manifold and express the problem in terms of its low-dimensional
representation, thus implicitly enforcing the constraints.The different methods are divided into linear
and nonlinear ones.
In the linear dimensionality reduction case, an example x is linked to its latent, possibly
low-dimensional, representation c through the linear relationship
x = x_0 + Sc + ε ,   (2.4)
where x_0 is the mean data value, ε accounts for noise, usually taken to be Gaussian distributed, and the
matrix S contains the new basis vectors. Typically, S is obtained by Principal Component Analysis
(PCA) Jolliffe [ 1986 ]. More specifically, the columns of S are taken to be the eigenvectors of the data
covariance matrix. For non-rigid surfaces, this naturally sorts the deformations from low to high fre-
quencies, as was the case with modal analysis. In fact, when applied to surfaces for which stiffness ma-
trices are also available and modal decomposition can be performed, the resulting deformation modes
often look very similar. A probabilistic interpretation of PCA was introduced by Tipping and Bishop
[ 1999 ] and used to build the distribution of the data in the new space from the eigenvalues of the
data covariance matrix. To obtain the basis S , PCA can also be replaced by Independent Component
Analysis (ICA) Comon [ 1994 ]. Instead of yielding uncorrelated components, the basis found by
ICA minimizes the dependencies between its potentially non-orthonormal components.
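To make the linear model concrete, the following is a minimal NumPy sketch (the function names and dimensions are ours, not part of the original text) of building the basis S by PCA from a matrix of training shapes and mapping a low-dimensional code c back to a full shape via Eq. (2.4), ignoring the noise term:

```python
import numpy as np

def learn_linear_shape_model(X, n_modes):
    """Learn x ~ x0 + S c from training data.

    X: (n_samples, n_dims) matrix whose rows are, e.g., flattened vertex
       coordinates of the training meshes.
    Returns the mean x0, the basis S (n_dims x n_modes) whose columns are
    eigenvectors of the data covariance, and the per-mode variances.
    """
    x0 = X.mean(axis=0)
    Xc = X - x0
    # Eigenvectors of the covariance matrix via SVD of the centered data.
    _, sing_vals, Vt = np.linalg.svd(Xc, full_matrices=False)
    S = Vt[:n_modes].T                              # basis vectors as columns
    variances = (sing_vals[:n_modes] ** 2) / (len(X) - 1)
    return x0, S, variances

def reconstruct(x0, S, c):
    """Map a code c back to a full shape: x = x0 + S c."""
    return x0 + S @ c

def project(x0, S, x):
    """Recover the code of a shape; S has orthonormal columns, so c = S^T (x - x0)."""
    return S.T @ (x - x0)
```

Sorting the modes by decreasing eigenvalue is what orders the deformations from low to high frequencies, as noted above.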
In many cases, however, the low-dimensional manifold on which the training examples lie is not linear. A linear model then assigns high probability to truly unlikely data, or vice versa. As a result, several nonlinear dimensionality reduction techniques, such as Ker-
nel PCA Schoelkopf et al. [ 1999 ], Isomap Tenenbaum et al. [ 2000 ], Locally Linear Embed-
ding Roweis and Saul [ 2000 ], Laplacian Eigenmaps Belkin and Niyogi [ 2001 ], and Maximum Vari-
ance Unfolding Weinberger and Saul [ 2004 ] were introduced. However, these techniques are not
very well-suited to the problem of non-rigid reconstruction, since they do not provide a mapping from
the low-dimensional representation to the high-dimensional one. Such a mapping must be learned
separately, in terms of Radial Basis Functions (RBF) for example, which makes these nonlinear
techniques prone to errors both in the direct and the inverse mappings.
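The two-step procedure can be sketched as follows; this is an illustration rather than a prescribed recipe, with scikit-learn's Isomap and RBF kernel ridge regression standing in for whichever embedding and RBF regressor one prefers, and with placeholder data:

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.kernel_ridge import KernelRidge

# X: (n_samples, n_dims) training shapes, as in the linear case above.
X = np.random.randn(200, 60)                     # placeholder data

# Step 1: the nonlinear embedding yields latent codes, but no map back to shapes.
embedding = Isomap(n_components=2)
C = embedding.fit_transform(X)

# Step 2: the inverse mapping (latent code -> shape) must be learned separately,
# here with an RBF kernel ridge regressor.
back_map = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.5)
back_map.fit(C, X)

# Errors made by the embedding are compounded by the learned inverse map.
x_rec = back_map.predict(C[:1])                  # approximate reconstruction
```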
As an alternative, one can use the Gaussian Process Latent Variable Model
(GPLVM) Lawrence [ 2004 ], which was originally introduced as a generalization of probabilis-
tic PCA. The advantage of the GPLVM over the previous nonlinear techniques is that it directly provides a mapping from the low-dimensional latent space to the high-dimensional data space.
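A minimal usage sketch, assuming the GPy library's GPLVM implementation; the data matrix, latent dimension, and kernel choice below are illustrative rather than taken from the text:

```python
import numpy as np
import GPy

# Y: (n_samples, n_dims) training shapes; the latent dimension is chosen by hand.
Y = np.random.randn(200, 60)
latent_dim = 2

model = GPy.models.GPLVM(Y, input_dim=latent_dim,
                         kernel=GPy.kern.RBF(latent_dim, ARD=True))
model.optimize(messages=False, max_iters=500)

# Learned latent positions of the training shapes.
C = np.asarray(model.X)

# Unlike the techniques above, the GPLVM itself provides the generative
# mapping from latent space to shape space.
mean, variance = model.predict(C[:1])
```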