is close to LDA but is particularly appealing because of its good performance behavior and flexibility of implementation, especially in the case of very large dimensionalities [8, 2].
In this paper, incremental formulations corresponding to basic (batch) implementations of the DCV method are proposed. The derived algorithms follow previously published ideas about (incrementally) modifying subspaces [9,10], but in the particular context of DCV. Both subspace projections and explicit vectors are efficiently recomputed, allowing the application of these algorithms to interactive and dynamic problems.
2 Discriminant Common Vectors for Image Characterization and Recognition
The DCV method has been recently proposed for face recognition problems in which the input data dimension is much higher than the training set size [2]. In particular, the method looks for a linear projection that maximizes class separability by considering a criterion very similar to the one used by LDA-like algorithms, and it also uses the within-class scatter matrix, $S_w$. In short, the method consists of constructing a linear mapping onto the null space of $S_w$ in which all training data gets collapsed into the so-called discriminant common vectors. Classification of new data can then be accomplished by first projecting it and then measuring similarity to the DCVs of each class with an appropriate distance measure.
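As a rough illustration of this classification rule, the following sketch assigns a new image to the class whose projected common vector is closest in Euclidean distance. The names are hypothetical: W stands for whatever linear mapping is finally used and projected_dcvs for the already projected common vector of each class.

import numpy as np

def classify(x, W, projected_dcvs):
    """x: d-dim image vector; W: d x k projection matrix;
    projected_dcvs: dict mapping class label -> k-dim projected common vector."""
    z = W.T @ x                                   # project the new image
    # the nearest projected common vector decides the class
    return min(projected_dcvs, key=lambda j: np.linalg.norm(z - projected_dcvs[j]))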
Let $X \in \mathbb{R}^{d\times M}$ be a given training set consisting of $M$ $d$-dimensional (column) vector-shaped images, $x_j^i \in \mathbb{R}^d$, where $i = 1,\ldots,M_j$ refers to images of any of the $c$ given classes, $j = 1,\ldots,c$, and $M = \sum_{j=1}^{c} M_j$. Let $S_X$ be their corresponding within-class scatter matrix and let $\bar{x}_j$ be the $j$-th class mean vector from $X$.
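A minimal sketch of these quantities, assuming the training images have already been arranged as the columns of a matrix X with a parallel array of class labels (the helper name and data layout are assumptions of the sketch, not code from the paper):

import numpy as np

def scatter_and_means(X, labels):
    """X: d x M matrix of column images; labels: length-M array of class ids.
    Returns the within-class scatter matrix S_X, the class means and the
    within-class centered data matrix."""
    means = {j: X[:, labels == j].mean(axis=1, keepdims=True) for j in np.unique(labels)}
    Xw = np.hstack([X[:, labels == j] - means[j] for j in means])  # within-class centered data
    S_X = Xw @ Xw.T                                                # d x d within-class scatter
    return S_X, means, Xw

When $d$ is very large, $S_X$ itself is rarely needed explicitly; the centered data matrix Xw already determines its range and is reused in the sketch of Eq. (1) below.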
2.1 DCV through Eigendecomposition
Let $U \in \mathbb{R}^{d\times r}$ and $\bar{U} \in \mathbb{R}^{d\times n}$ be matrices formed with the eigenvectors corresponding to non-zero and zero eigenvalues, respectively, computed from the eigenvalue decomposition (EVD) of $S_X$, where $r$ and $n = d - r$ are the dimensions of its range and null spaces, respectively. The $j$-th class common vector can be computed as the orthonormal projection of the $j$-th class mean vector onto this null space, $\bar{U}\bar{U}^T \bar{x}_j$, or, equivalently, as the residue of $\bar{x}_j$ with regard to $U$. That is,

$$x_{com}^j = \bar{x}_j - U U^T \bar{x}_j \qquad (1)$$
In both expressions, the mean vector $\bar{x}_j$ may in fact be substituted by any other $j$-class training vector [2]. Note that it is much easier and more convenient to use $U$ rather than $\bar{U}$, partially because in the context of image recognition usually $r \ll n$.
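The computation of Eq. (1) can be sketched as follows, reusing the hypothetical scatter_and_means helper above. As an implementation choice of this sketch (not a step prescribed here), the range basis $U$ is obtained from the thin SVD of the centered data matrix rather than from an explicit EVD of $S_X$; both span the same range, and the SVD is much cheaper when $d \gg M$.

import numpy as np

def common_vectors(X, labels, tol=1e-10):
    """Return {class label: d-dimensional common vector} following Eq. (1)."""
    _, means, Xw = scatter_and_means(X, labels)
    U, s, _ = np.linalg.svd(Xw, full_matrices=False)
    U = U[:, s > tol * s.max()]     # orthonormal basis of the range of S_X (r columns)
    # Eq. (1): x_com^j = xbar_j - U U^T xbar_j, i.e. the residue of the class mean
    # with respect to the range of S_X (its projection onto the null space).
    return {j: means[j] - U @ (U.T @ means[j]) for j in means}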
These $d$-dimensional common vectors constitute a set of size $c$ to which standard PCA can be applied. The combination of this with the previous mapping gives rise to a linear mapping onto a reduced space, $W : \mathbb{R}^d \to \mathbb{R}^{(c-1)}$. Reduced