dimensionality discriminative common vectors (DCVs) can then be computed as $\Omega_j = W^T x_j$. When new (test) data, $x$, is to be classified, it can be projected as $W^T x$ and then appropriately compared to the $\Omega_j$ in order to be recognized.
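As an illustration, here is a minimal sketch of this recognition step in Python/NumPy, assuming the projection $W$ and the per-class DCVs $\Omega_j$ have already been computed (all names here are illustrative, not from the original):

```python
import numpy as np

def classify_dcv(x, W, Omegas):
    """Assign x to the class whose DCV is nearest in the reduced space.

    W:      d x (c-1) projection matrix (assumed precomputed).
    Omegas: list with the c discriminative common vectors Omega_j = W^T x_j.
    """
    z = W.T @ x                                   # project the test vector
    dists = [np.linalg.norm(z - om) for om in Omegas]
    return int(np.argmin(dists))                  # index of the closest DCV
```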
Even after several improvements that can be applied [11,2], the computational burden associated with this procedure is dominated by the eigendecomposition of $S_X$ and leads to a cost in $O(M^3 + dM^2)$, where the constant hidden in the cubic term is related to the iterative methods used for EVD.
2.2 DCV through Orthonormalization
An alternative and more efficient way of computing an equivalent projection
requires the use of Gram-Schmidt orthonormalization (GSO) instead of EVD.
Let $B_X \in \mathbb{R}^{d \times (M-c)}$ be a matrix whose columns are given by the difference vectors $x_i^j - x_1^j$, where $j = 1,\ldots,c$ and $i = 2,\ldots,M_j$. It can be shown that the range subspace of $S_X$ and the subspace spanned by $B_X$ are the same. Therefore, a mapping $\Theta \in \mathbb{R}^{d \times r}$ can be computed using the $r$ base vectors obtained from $B_X$ through GSO. This mapping can equivalently substitute the mapping $U$ in Equation 1 to compute the same common vectors.
The difference common vectors, $B_{com} = [(x_{com}^2 - x_{com}^1) \;\cdots\; (x_{com}^c - x_{com}^1)] \in \mathbb{R}^{d \times (c-1)}$,
can now be computed and a linear mapping to a reduced space is obtained from
$B_{com}$ using GSO. The composition of this mapping with the previous one leads
to a linear mapping, $\Psi$, equivalent (but different in general) to the composite mapping in the previous section, $W$. This mapping represents the same subspace as $W$, given that $W W^T = \Psi \Psi^T$.
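To make the two GSO steps concrete, the following sketch follows the construction above, assuming Equation 1 has the form $x_{com}^j = x_1^j - \Theta \Theta^T x_1^j$ and that the difference vectors are linearly independent (the usual small-sample-size setting, $d \gg M$); `numpy.linalg.qr` stands in here for an explicit Gram-Schmidt pass:

```python
import numpy as np

def dcv_gso(classes):
    """GSO-based DCV sketch. `classes` is a list of c arrays of shape
    (d, M_j), one per class, with samples as columns."""
    # B_X: within-class difference vectors x_i^j - x_1^j, shape (d, M - c).
    B_X = np.hstack([Xj[:, 1:] - Xj[:, [0]] for Xj in classes])
    Theta, _ = np.linalg.qr(B_X)           # orthonormal basis of span(B_X)

    # Common vectors: remove the component of x_1^j lying in range(S_X).
    x_com = [Xj[:, [0]] - Theta @ (Theta.T @ Xj[:, [0]]) for Xj in classes]

    # B_com: difference common vectors, shape (d, c - 1).
    B_com = np.hstack([xc - x_com[0] for xc in x_com[1:]])
    Psi, _ = np.linalg.qr(B_com)           # reduced mapping via a second GSO

    Omegas = [Psi.T @ xc for xc in x_com]  # one DCV per class
    return Psi, Omegas
```

QR factorization is used because it yields the same orthonormal basis a Gram-Schmidt pass would, with better numerical behavior.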
As in the previous case, the cost of obtaining the reduced mapping is negligible compared with the cost of computing the projection $\Theta$, which amounts to $O(dM^2)$.
3 Incrementally Computing Discriminative Common Vectors
Both basic algorithms in the previous section have a first phase in which projections ($U$ or $\Theta$) are obtained in order to apply Equation 1, and a second phase in which a definitive mapping ($W$ or $\Psi$) is obtained. From an algorithmic viewpoint, the second phase mimics the first one at a much smaller scale in both cases. Consequently, only details about the first phase will be given here.
Let $X$ be as defined in Section 2, and let $Y \in \mathbb{R}^{d \times N}$ be the currently available training set (new incoming data) consisting of $N_j$ vectors from each of the $c$ classes. And let $Z = [X\; Y] \in \mathbb{R}^{d \times (M+N)}$.
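In code, this incremental setup is just a column-wise concatenation (the shapes below are illustrative placeholders, not values from the original):

```python
import numpy as np

d, M, N = 1024, 60, 12            # illustrative dimensions only
X = np.random.randn(d, M)         # previously processed training data
Y = np.random.randn(d, N)         # newly arrived training data
Z = np.hstack([X, Y])             # Z in R^{d x (M + N)}
```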
3.1 Incremental DCV through EVD
The basic DCV method on $Z$ would require eigendecomposing $S_Z$ first in order to use Equation 1 to compute the common vectors and go forward. To avoid this,