database according to its global size (in the range from 6 to 10%). The particular M and N values used are also shown in Figure 1.
The accuracy of the minimum distance classifier using DCVs in the projected
subspace has been considered [2]. In addition, the relative CPU time of each incremental algorithm at each iteration, with regard to the basic DCV algorithm, has been adopted to highlight the relative computational benefit of using incremental versus batch algorithms. Both classification and efficiency results are shown in
Figure 2.
As expected, the accuracy plots clearly show that there is no significant difference between incremental and batch classification results. In the case of IDCV-GSO, the classification results are exactly the same as with DCV, since this incremental procedure is less prone to numerical errors. On the other hand, a small decrease is observed in all cases when comparing IDCV-EVD to DCV due to
numerical inaccuracies when computing eigendecompositions. Note also that a stricter tolerance level could have been set in the numerical procedures, but this would have had an impact on the computation times. It is worth noting that, with our current implementation, the observed degradation in performance remains at an insignificant level as the training set grows.
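The trade-off between tolerance level and accuracy mentioned above can be illustrated with a minimal sketch (not the paper's actual implementation; the function name and tolerance value are illustrative): the null space of a scatter matrix is taken as the span of the eigenvectors whose eigenvalues fall below a relative threshold, so a looser threshold is cheaper to satisfy numerically but admits more directions as "null".

```python
import numpy as np

def null_space_basis(S, tol=1e-8):
    # Eigendecompose a symmetric scatter matrix S and keep the eigenvectors
    # whose eigenvalues fall below tol times the largest eigenvalue; these
    # span the numerical null space. A stricter (smaller) tol keeps fewer
    # directions but demands more accurate eigenvalues.
    w, V = np.linalg.eigh(S)            # eigenvalues in ascending order
    mask = w < tol * w.max()
    return V[:, mask]

# Toy example: 3 centered samples in R^5 give a scatter matrix of rank 2,
# so its null space has dimension 3.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 5))
Xc = X - X.mean(axis=0)
S = Xc.T @ Xc
B = null_space_basis(S)
print(B.shape)                          # (5, 3)
```

A stricter `tol` only changes which near-zero eigenvalues are trusted as exact zeros; the cost of the eigendecomposition itself is what the iterative numerical routines trade against accuracy.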
More interestingly, the relative-time plots in Figure 2 exhibit savings ranging from about 30% up to 95% of the time spent by the basic DCV algorithm. As expected, the relative CPU time decreases with M while N is kept fixed.
Several interesting facts can be pointed out. First, the IDCV-GSO algorithm achieves significantly higher savings than the IDCV-EVD one in the first iteration (smallest value of M). This situation holds only partially for the smallest database. Conversely, IDCV-EVD is more efficient than IDCV-GSO (with regard to its batch counterpart) for larger values of M. This behavior becomes more evident in the case of the largest database.
Both incremental algorithms are able to cut computational cost to 25% or less
of their corresponding batch algorithm. The preference for one or the other will also depend on absolute computation times, which in turn may depend on the particular implementation. For example, in our unoptimized implementation, GSO is roughly 5 times slower than EVD; with a more careful and efficient implementation this situation could be reversed [2]. Regardless of computational cost, IDCV-EVD may offer an additional benefit: it permits controlling the size of the null space, as in [15], which may help to increase the generalization ability of the incremental algorithm.
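The core update behind the Gram-Schmidt variant discussed above can be sketched as follows (an illustrative fragment under our own naming, not the paper's exact routine): when a new difference vector arrives, its projection onto the current orthonormal basis is subtracted, and the normalized residual is appended only if it is numerically nonzero.

```python
import numpy as np

def gso_update(Q, d, tol=1e-10):
    # One incremental Gram-Schmidt step: remove from the new vector d the
    # components already spanned by the orthonormal columns of Q, then
    # append the normalized residual if it exceeds the tolerance.
    r = d - Q @ (Q.T @ d) if Q.shape[1] else d.copy()
    nrm = np.linalg.norm(r)
    if nrm > tol:
        Q = np.hstack([Q, (r / nrm)[:, None]])
    return Q

# Grow a basis one vector at a time; the third vector is a linear
# combination of the first two and is therefore discarded.
Q = np.empty((4, 0))
for d in (np.array([1.0, 0, 0, 0]),
          np.array([1.0, 1, 0, 0]),
          np.array([2.0, 1, 0, 0])):
    Q = gso_update(Q, d)
print(Q.shape)                          # (4, 2)
```

Because each update touches only the new vector and the existing basis, the cost per iteration stays far below re-running the batch orthonormalization, which matches the observed savings.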
5 Concluding Remarks and Further Work
Incremental algorithms to compute DCVs and corresponding subspaces have
been proposed. The algorithms use incremental eigendecomposition and Gram-Schmidt orthonormalization, respectively, as in the original (batch) algorithms. Dramatic computational savings are observed while the performance behavior of DCV is preserved.
Further work is directed towards the implementation of more general common vector based subspace algorithms, using extended null space and kernels, in an
 