15.3 Singular Value Decomposition (SVD)
Here we discuss a powerful tool, the singular value decomposition (SVD), to solve homogeneous linear equations of the type

$$O^T \psi = 0, \quad \text{where } O \text{ is } M \times K, \qquad (15.35)$$

in the TLS error sense, which appear in numerous image analysis problems. This is an important class of problems for computer vision for many reasons, including the following.
First, in the direction estimation formulation, the error $\|O^T \psi\|^2$ is to be made as small as possible, possibly zero, by a certain solution $\psi$. The minimum achievable error is $\lambda(M)$, which can be zero in the ideal case or small in a successful hyperplane fitting, if and only if the solution satisfies

$$OO^T \psi(M) = \lambda(M)\, \psi(M) \qquad (15.36)$$
where $\lambda(M)$ is the least eigenvalue of the scatter matrix $OO^T$. With this construction, we can thus transfer the quadratic minimization problem to the homogeneous equation problem:

$$\min_{\psi} \; \psi^T OO^T \psi \quad \Longleftrightarrow \quad O^T \psi = 0 \qquad (15.37)$$

where we search for the TLS error solution with $\|\psi\| = 1$.
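As a concrete sketch of this recipe (not from the book), the TLS direction can be computed numerically: the minimizer of $\psi^T OO^T \psi$ under $\|\psi\| = 1$ is the eigenvector of $OO^T$ belonging to its least eigenvalue, equivalently the left singular vector of $O$ for its smallest singular value. The data below are synthetic, and all variable names are my own illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: K = 200 points lying near a plane through the origin in
# R^3, stored as columns of O (M = 3, K = 200), so O^T psi ≈ 0 for the normal.
true_normal = np.array([1.0, -2.0, 0.5])
true_normal /= np.linalg.norm(true_normal)
basis = np.linalg.svd(true_normal.reshape(1, 3))[2][1:]   # 2 x 3 plane basis
O = (rng.normal(size=(200, 2)) @ basis
     + 0.01 * rng.normal(size=(200, 3))).T                # O is 3 x 200

# TLS solution: eigenvector of the scatter matrix OO^T for the least eigenvalue
scatter = O @ O.T
eigvals, eigvecs = np.linalg.eigh(scatter)   # eigenvalues in ascending order
psi = eigvecs[:, 0]                          # least-eigenvalue eigenvector, ||psi|| = 1

# Equivalent route via the SVD of O: left singular vector of the smallest
# singular value (same vector up to sign)
U, s, Vt = np.linalg.svd(O)
psi_svd = U[:, -1]

print(abs(psi @ psi_svd))       # agreement of the two routes, close to 1
print(abs(psi @ true_normal))   # recovered plane normal, close to 1
```

The eigenvalue route forms the $M \times M$ scatter matrix explicitly, which is cheap when $M \ll K$; the SVD route avoids squaring the condition number and is usually preferred numerically.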
Second, studying lemmas 15.2 and 15.1 shows that the quadratic minimization problem at which we arrived, first in the linear symmetry direction problem, then in feature extraction including corner features, group direction features, motion estimation, world geometry estimation, and dimension reduction problems, is a very fundamental problem for vision. One can therefore conclude that the problems of vision can be effectively modeled as a direction estimation problem, which is in turn equivalent to a homogeneous equation problem. Third, because a linear equation $O^T \psi = b$ can be rewritten as

$$[O^T, -b] \begin{pmatrix} \psi \\ 1 \end{pmatrix} = 0 \qquad (15.38)$$

even nonhomogeneous linear equations can be easily treated within the homogeneous equation formalism. There are straightforward extensions of this idea employing
polynomials as well as other nonlinear transformations as the elements of $O$, affording nonlinear modeling too. Typically, $O$ and $b$ represent the explanatory variables and the response variables, respectively. Other names for these variables are input and output, respectively. One searches for $\psi$, representing the model variables, which are also known as the regression coefficients. The advantage of a homogeneous equation is that it can model noise in both $O$ and $b$ in the TLS sense, making the solution independent of measurement coordinates, that is, a tensor solution; see Sect. 10.10. In computer vision, homogenization is in itself an important
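A minimal sketch of this homogenization trick (my own illustration, with synthetic data and assumed names): stack the negated responses as an extra column, take the homogeneous TLS solution as before, and de-homogenize by the last component.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical regression data: K = 100 equations O^T psi = b with M = 3
# unknowns, and noise in both O and b (errors-in-variables setting).
psi_true = np.array([0.5, -1.0, 2.0])
O = rng.normal(size=(3, 100))
b = O.T @ psi_true
O_noisy = O + 0.01 * rng.normal(size=O.shape)
b_noisy = b + 0.01 * rng.normal(size=b.shape)

# Homogenize: [O^T, -b] (psi, 1)^T = 0, as in Eq. (15.38), then solve the
# homogeneous TLS problem via the SVD of the augmented K x (M + 1) matrix.
A = np.hstack([O_noisy.T, -b_noisy[:, None]])
_, _, Vt = np.linalg.svd(A)
v = Vt[-1]                     # right singular vector of the smallest sigma
psi_tls = v[:3] / v[3]         # de-homogenize by the last component

print(np.round(psi_tls, 2))    # close to psi_true
```

Unlike ordinary least squares, which attributes all noise to $b$, this solution treats errors in $O$ and $b$ symmetrically, which is the TLS property the text refers to.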