linear organization of the computer memory. In these terms a tensor is just a data
structure with a tensor mode governed by the more specialized objects in the
hierarchy. At the same time, flexible and efficient methods developed for
manipulation of the matrix data objects [14] are retained. In other words, the "is a" relationship turned out to be a more practical solution than the previously tried "has a", in which a tensor object contained a matrix that stored its data in one mode. In effect, TFlatTensorFor<> has two pairs of principal methods for accessing its elements. The first pair, Get/SetElement, takes as an argument a TensorIndex, which is a vector of indices whose length equals the dimension of the tensor. The second pair, Get/SetPixel, is inherited from the base TImageFor<>. The latter allow access to the matrix data simply by providing its row and column (r, c) indices.
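As an illustration of this "is a" design, the hierarchy and its two accessor pairs might be sketched as follows. The class and member names follow the text and Fig. 3, but the internal layout shown here is our assumption, not the actual HIL code:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Sketch only: names follow the text (TImageFor, TFlatTensorFor, TensorIndex);
// the member layout and constructor signatures are assumptions for illustration.
using TensorIndex = std::vector<int>;

template <typename T>
class TImageFor {                         // base class: matrix view of linear data
public:
    TImageFor(int rows, int cols) : fCols(cols), fData(rows * cols) {}
    T    GetPixel(int r, int c) const { return fData[r * fCols + c]; }
    void SetPixel(int r, int c, T v)  { fData[r * fCols + c] = v; }
protected:
    int fCols;
    std::vector<T> fData;                 // one linear buffer for all elements
};

template <typename T>
class TFlatTensorFor : public TImageFor<T> {   // a tensor "is a" matrix
public:
    TFlatTensorFor(TensorIndex dims, int mode, int rows, int cols)
        : TImageFor<T>(rows, cols), fDims(std::move(dims)), fTensorMode(mode) {}
    // Second accessor pair: a full index tuple instead of (r, c).
    T    GetElement(const TensorIndex& i) const { return this->fData[Offset(i)]; }
    void SetElement(const TensorIndex& i, T v)  { this->fData[Offset(i)] = v; }
protected:
    int Offset(const TensorIndex& i) const {    // index tuple -> linear offset
        int q = 0;
        for (int d = static_cast<int>(fDims.size()) - 1; d >= 0; --d)
            q = q * fDims[d] + i[d];            // first index varies fastest
        return q;
    }
    TensorIndex fDims;
    int fTensorMode;                            // the fTensorMode of Fig. 3
};
```

Both accessor pairs address the same linear buffer, so a tensor element written through SetElement can be read back through GetPixel once the offset is split into row and column.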
[Figure: UML class diagram of the HIL Library tensor classes, parameterized by the element type. TFlatTensorFor (fields fTensorMode, fData; methods Get/SetElement(TensorIndex), Get/SetPixel(matrix_index), and Offset_ForwardCyclic / Offset_BackwardCyclic taking (TensorIndex, MatrixIndex, Mode)) derives from TImageFor. TFlatTensorProxyFor (fields fMotherTensor, fIndexVector; the same accessor and offset methods) holds a reference to a TFlatTensorFor.]

Fig. 3. A tensor class hierarchy
The TFlatTensorProxyFor<> class is a simplified proxy pattern to the TFlatTensorFor<> [10]. Proxies are useful in all cases in which tensor representations in different flat n-modes are necessary; they allow this without creating a copy of the input tensor, which could easily consume significant memory and time. An example is the already discussed HOSVD decomposition. In each step of this algorithm the n-mode flat tensor needs to be created from the initial tensor T, for all n's [16]. In our realization these two-way index transformations are carried out by the Offset_ForwardCyclic / Offset_BackwardCyclic methods, which recompute tensor-matrix indices in both directions, in two cyclic modes (backward and forward), and for different n-modes. More specifically, an index of an element in a tensor T of dimension k is given by a tuple (i_1, i_2, …, i_k) of k indices. This maps into an offset q of a linear memory
q = i_1 + n_1 ( i_2 + n_2 ( i_3 + … + n_{k-1} i_k ) ) ,   (13)
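The nested form above can be evaluated with a Horner-style loop over the index tuple. A minimal sketch, where the helper name LinearOffset and the 0-based indexing are our assumptions:

```cpp
#include <cassert>
#include <vector>

// Evaluates q = i1 + n1*(i2 + n2*(i3 + ... + n_{k-1}*ik)) as in Eq. (13).
// idx holds the index tuple (i1..ik), dims the dimension tuple (n1..nk),
// both 0-based here; the first index varies fastest in memory.
int LinearOffset(const std::vector<int>& idx, const std::vector<int>& dims) {
    int q = 0;
    for (int d = static_cast<int>(idx.size()) - 1; d >= 0; --d)
        q = q * dims[d] + idx[d];   // Horner evaluation of the nested form
    return q;
}
```

For dimensions (2, 3, 4) the index tuple (1, 2, 3) yields q = 1 + 2·(2 + 3·3) = 23, the last cell of the 24-element buffer.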
where the tuple (n_1, n_2, …, n_k) gives the dimensions of T. On the other hand, a matrix representation always involves a selection of two dimensions

(r, c) = ( n_m , ∏_{z=1, z≠m}^{k} n_z ),

where m equals a mode of T. In consequence, an element at index q has to fit into such a matrix.
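The shape of that matrix follows directly from the dimension tuple: the mode dimension supplies the rows, and the product of all remaining dimensions supplies the columns. A minimal sketch, where the name FlatMatrixShape is ours and the mode is 0-based:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Shape of the mode-m flattening: r = n_m rows, c = product of all other
// dimensions, so that r * c equals the total number of tensor elements.
std::pair<int, int> FlatMatrixShape(const std::vector<int>& dims, int mode) {
    long long total = 1;
    for (int n : dims) total *= n;             // n1 * n2 * ... * nk
    int r = dims[mode];                        // the selected mode dimension
    int c = static_cast<int>(total / r);       // product of the remaining ones
    return {r, c};
}
```

For dimensions (2, 3, 4) and mode m = 2 (1-based), the flat matrix is 3 × 8, and every offset q in [0, 23] fits into it.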
In the tensor proxy pattern the problem is inverted: given a matrix offset q, a corresponding tensor index tuple has to be determined for the different modes of the
 