The vector $\psi$ with $\|\psi\| = 1$ that minimizes the TLS error $\|O^T \psi\|^2$, where $O$ is an $M \times K$ matrix, is given by the least significant eigenvector of $OO^T$. The solution coincides with the normal direction of the hyperplane having the least orthogonal distance to the points given by the columns of $O$. If the multiplicity of the least eigenvalue is $\nu$, the dimension of the fitted hyperplane is $M - \nu$.
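As an illustration of this result, here is a minimal NumPy sketch (our own, not from the text) that recovers the hyperplane normal as the least significant eigenvector of $OO^T$; the synthetic setup and all variable names are assumptions.

```python
import numpy as np

# Sketch: TLS hyperplane fit via the least significant eigenvector of O O^T.
# The data are synthetic: columns of O are points scattered about a
# hyperplane through the origin with a known normal direction.
rng = np.random.default_rng(0)
M, K = 3, 200
normal = np.array([1.0, -2.0, 0.5])
normal /= np.linalg.norm(normal)

O = rng.standard_normal((M, K))
O -= np.outer(normal, normal @ O)        # project the points onto the hyperplane
O += 0.01 * rng.standard_normal((M, K))  # add small off-plane noise

# eigh returns eigenvalues in ascending order, so column 0 of V is the
# least significant eigenvector, i.e., the TLS estimate of the normal.
w, V = np.linalg.eigh(O @ O.T)
print(abs(V[:, 0] @ normal))             # close to 1 (the sign is arbitrary)
```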
15.2 PCA for Rare Observations in Large Dimensions
If the dimension of the observation vector space is very large, it is very likely that $S$ will be singular. The scatter matrix is guaranteed to be singular when the number of observed vectors is smaller than the dimension, $K < M$. This happens frequently in image analysis.
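To see why, note that the scatter matrix is built from $K$ rank-one outer products, as in Eq. (15.26), so its rank is at most $K$. A minimal NumPy sketch of this fact (our own illustration; the normalization of $S$ by $K$ is an assumption):

```python
import numpy as np

# Sketch: with fewer observations than dimensions (K < M), the scatter
# matrix is a sum of K rank-one outer products and is therefore singular.
rng = np.random.default_rng(0)
M, K = 100, 10                   # dimension M, number of observations K

O = rng.standard_normal((M, K))  # columns are mean-removed observations
S = (O @ O.T) / K                # scatter matrix, M x M

print(np.linalg.matrix_rank(S))  # at most K = 10, far below M = 100
```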
A well-known example of large dimensionality in features is obtained when the observed feature vectors consist of entire images, e.g., face images, which we will also illustrate in an example below. The image becomes an observation vector by scanning it in a certain fashion, for example in the reading direction. For a $1000 \times 1000$ image, $M$ equals 1 million; for a $100 \times 100$ image the dimensionality is 10,000, etc. To obtain a reliable estimate of the scatter matrix, one has to collect tens of thousands of observations even for small, low-resolution images, whereas for high-resolution images this is simply not practicable. As a result, one has to use the available observations, even though they are almost never sufficient to yield a good estimate of the scatter matrix. Another difficulty when dealing with large-dimensional PCA is that building the scatter matrix grows from a trivial task in few dimensions to an impractical one in large dimensions. This is because every outer (tensor) product in the sum of Eq. (15.26) needs $M(M+1)/2$ multiplications, and there are $K$ such matrices in the summation. In the ideal case, $K$ should furthermore be at least as large as the number of scatter matrix coefficients, $M(M+1)/2$. It follows that the total number of multiplications is $KM(M+1)/2$, which approaches $O(M^4)$.
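As a sanity check on these counts, here is a small back-of-the-envelope computation (our own, not from the text) for a $100 \times 100$ image:

```python
# Sketch: operation counts for building the scatter matrix of a 100 x 100 image.
M = 100 * 100                 # dimensionality of one vectorized image
coeffs = M * (M + 1) // 2     # distinct scatter-matrix coefficients (~5e7)
K = coeffs                    # ideal case: as many observations as coefficients
mults = K * M * (M + 1) // 2  # total multiplications (~2.5e15, of order M^4)

print(f"{coeffs:.2e} coefficients, {mults:.2e} multiplications")
```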
In this section we will discuss how to obtain the KL coefficients when $K \ll M$
without performing large numbers of arithmetic operations. The symmetric matrix
$OO^T$, and thereby $S$, has the size $M \times M$, whereas the matrix $O^T O$, also symmetric, has the size $K \times K$. Although different, these two matrices share their nonzero eigenvalues because
$$OO^T \psi_m = \lambda_m \psi_m \quad\Longrightarrow\quad O^T O\,(O^T \psi_m) = \lambda_m\,(O^T \psi_m), \qquad (15.32)$$
where $\lambda_m > 0$. Also, inspecting (15.32), we see that if $\psi_m$ is an eigenvector of $OO^T$, then

$$\psi'_m = O^T \psi_m \qquad (15.33)$$

is an eigenvector of $O^T O$. Thus the two matrices have eigenvectors that differ only by a fixed projection. Multiplying this equation with $O$ yields

$$O \psi'_m = OO^T \psi_m = \lambda_m \psi_m. \qquad (15.34)$$
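In practice this identity is exploited in the opposite direction, as in the eigenface approach to face images: one eigendecomposes the small $K \times K$ matrix $O^T O$ and maps its eigenvectors through $O$, which by (15.34) yields eigenvectors of $OO^T$ without ever forming the $M \times M$ matrix. A minimal NumPy sketch under that reading (the function and variable names are our own):

```python
import numpy as np

def pca_small_k(O):
    """Eigenpairs of O O^T (M x M) obtained via the small K x K matrix O^T O.

    O is an M x K matrix with mean-removed observations as columns, K << M.
    Returns eigenvalues and unit-norm eigenvectors, largest eigenvalue first.
    """
    lam, V = np.linalg.eigh(O.T @ O)     # K x K eigenproblem, ascending order
    lam, V = lam[::-1], V[:, ::-1]       # largest first
    keep = lam > 1e-12 * lam[0]          # drop numerically zero modes
    lam, V = lam[keep], V[:, keep]
    # By (15.34), O v_m is an eigenvector of O O^T with eigenvalue lam_m;
    # since ||O v_m|| = sqrt(lam_m), divide to obtain unit-norm columns.
    Psi = (O @ V) / np.sqrt(lam)
    return lam, Psi

# Usage: K = 20 observations of dimension M = 10000; only a 20 x 20
# eigenproblem is solved, and O O^T is never formed explicitly.
rng = np.random.default_rng(0)
O = rng.standard_normal((10_000, 20))
O -= O.mean(axis=1, keepdims=True)       # remove the mean observation
lam, Psi = pca_small_k(O)
print(np.allclose(O @ (O.T @ Psi[:, 0]), lam[0] * Psi[:, 0]))  # True
```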