additionally added normal noise (a procedure for this is described in [10]). Since a different number of exemplars is available for different signs, such a strategy allows each pattern to be trained with a different number of prototypes. Finally, the training stage ends with the computation of $T_h^i$ for each $T_i$, in accordance with (7).
Recognition is done by testing the approximation of a given pattern $P_x$ in each of the spaces spanned by the sets of bases $T_h^i$ given in (6). This is done by solving the following minimization problem
$$\min_{c_h^i} \left\| P_x - \sum_{h=1}^{N} c_h^i \, T_h^i \right\|^2 , \qquad (11)$$
where $c_h^i$ are the coordinates of $P_x$ in the manifold spanned by $T_h^i$. Due to the orthogonality of the tensors $T_h^i$, the optimal coordinates are simply the scalar products $c_h^i = \langle T_h^i, P_x \rangle$, so the above reduces to the maximization of the following parameter [19]
$$\rho_i = \sum_{h=1}^{N} \left\langle \hat{T}_h^i, \hat{P}_x \right\rangle^2 , \qquad (12)$$
where the $\langle\cdot,\cdot\rangle$ operator denotes the scalar product of the tensors. The classifier returns the class $i$ for which the corresponding $\rho_i$ from (12) is the largest. In our system we set a threshold ($\tau = 0.85$); below this threshold the system answers "don't know". Such a situation arises if a wrong pattern is provided by the detector, or if a sign is presented for which the system was not trained. The number $N$ of components in (12) was set from 3 to 9. The higher $N$, the better the fit, though at the expense of computation time.
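The classification rule above can be sketched in a few lines of C++. This is a minimal illustration, not the paper's implementation: it assumes each basis tensor $\hat{T}_h^i$ and the pattern $\hat{P}_x$ have been flattened to unit-norm vectors, so the tensor scalar product becomes an ordinary dot product; the names `Tensor`, `rho`, and `classify` are illustrative only.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A flattened, unit-norm tensor is represented here as a plain vector.
using Tensor = std::vector<double>;

double dot(const Tensor& a, const Tensor& b) {
    double s = 0.0;
    for (std::size_t k = 0; k < a.size(); ++k) s += a[k] * b[k];
    return s;
}

// rho_i = sum over h = 1..N of <T_h^i, P_x>^2, as in (12),
// where basis_i holds the N basis tensors of class i.
double rho(const std::vector<Tensor>& basis_i, const Tensor& Px) {
    double r = 0.0;
    for (const Tensor& Th : basis_i) {
        const double p = dot(Th, Px);
        r += p * p;
    }
    return r;
}

// Returns the class index with the largest rho_i, or -1 ("don't know")
// when that maximum does not exceed the threshold tau (0.85 in the text).
int classify(const std::vector<std::vector<Tensor>>& bases,
             const Tensor& Px, double tau = 0.85) {
    int best = -1;
    double bestRho = tau;
    for (std::size_t i = 0; i < bases.size(); ++i) {
        const double r = rho(bases[i], Px);
        if (r > bestRho) { bestRho = r; best = static_cast<int>(i); }
    }
    return best;
}
```

With orthonormal bases, $\rho_i \le 1$, so the threshold acts as a minimum fraction of the pattern's energy that must be captured by the class subspace.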
4 Computer Representation of the Flat Tensors
Many platforms have been developed for efficient tensor representation. However, they sometimes lack sufficient flexibility in handling different data types, or they do not fit into existing programming platforms [2][5]. In this paper we address the problem of efficient tensor representation and manipulation in software implementations. Our main assumptions can be summarized as follows.
1. Flexibility in accessing tensors as multidimensional arrays and flat data
representations at the same time without additional copies.
2. Efficient software and/or hardware processing.
3. Flexible element type selection and specializations for tensors.
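The three assumptions above can be sketched as a small C++ template class. This is a hypothetical illustration, not the HIL classes from the text: one flat buffer backs the tensor, a `flat()` accessor exposes it without copying (assumption 1), row-major index arithmetic provides the multidimensional view over the same memory (assumptions 1 and 2), and the element type is a template parameter (assumption 3).

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative flat tensor: a single contiguous buffer plus the mode
// dimensions needed to compute multidimensional offsets into it.
template <typename T>
class FlatTensor {
public:
    explicit FlatTensor(std::vector<std::size_t> dims)
        : dims_(std::move(dims)), data_(totalSize(), T{}) {}

    // Flat view: direct access to the single underlying buffer (no copy).
    T* flat() { return data_.data(); }
    std::size_t size() const { return data_.size(); }

    // Multidimensional view: row-major offset over the same buffer.
    T& at(const std::vector<std::size_t>& idx) {
        std::size_t off = 0;
        for (std::size_t d = 0; d < dims_.size(); ++d)
            off = off * dims_[d] + idx[d];
        return data_[off];
    }

private:
    std::size_t totalSize() const {
        std::size_t n = 1;
        for (std::size_t d : dims_) n *= d;
        return n;
    }
    std::vector<std::size_t> dims_;  // declared before data_: totalSize()
    std::vector<T> data_;            // is called during data_'s init
};
```

For example, a `FlatTensor<int>` of dimensions 2x3x4 stores 24 elements once; writing through `at({1, 2, 3})` is visible at flat offset 23 of the same buffer.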
A proposed class hierarchy for storage and manipulation of tensors is shown in Fig. 3.
The base template class TImageFor<> comes from the HIL library [14]. The
library is optimized for image processing and computer vision tasks, as well as for
fast matrix operations [10]. TFlatTensorFor<> is the base class for tensor
representation. Thus, in our framework a tensor is represented as a specialized version
of a matrix class. This does not follow the usual way in which a matrix is seen as a
special two-dimensional tensor. This follows from the fact that tensors in our system
are always stored in the flattened representation for a given mode. This also follows a