After the tensors have been defined, we can perform operations on them. This is
illustrated in the next example.
Example 12.13 Suppose we have created two static tensor objects t1 and t2. We
demonstrate some tensor operations, then create the corresponding sparse tensors,
and use them again for tensor operations:
// Element-wise multiplication T1 x T1:
Tensor tt1 = t1.mult( t1 );
// Tensor multiplication T1 x T2:
Tensor t12 = t1.tensorMult( t2 );
// N-mode multiplication T1 x T2 (N = 1):
Tensor tn12 = t1.nModeMult( t2.toMatrix(), 1 );
// Inner product T1 x T2:
Tensor ti12 = t1.innerProduct( t2, 0, 0 );
// Matricization of T1 (n-mode = 1):
Matrix m1 = t1.matrice( 1 );
// Create the sparse tensors:
SparseTensor sp1 = new SparseTensor( t1 );
SparseTensor sp2 = new SparseTensor( t2 );
// SparseTensor1 + SparseTensor2:
SparseTensor sp12 = sp1.plus( sp2 );
// N-mode multiplication SparseTensor1 x T2 (N = 1):
SparseTensor spn12 = sp1.nModeMult( t2.toMatrix(), 1 ); ■
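To make the matricization step above concrete, the following is a minimal, self-contained sketch of a mode-1 unfolding of a 3-way tensor in plain Java. The class and method names here are illustrative only and are not part of the package described in the text; it merely shows the kind of matrix a call such as t1.matrice( 1 ) produces, assuming the common convention in which row i collects all elements T(i, j, k) and the column index combines the remaining modes.

```java
import java.util.Arrays;

// Illustrative sketch of mode-1 matricization (unfolding) of a 3-way tensor.
// Assumes the convention column = k * J + j, so columns run over mode 2
// fastest; other packages may order the columns differently.
public class Unfold {

    // Unfold a tensor T[I][J][K] along mode 1: the result has I rows
    // and J * K columns.
    static double[][] mode1(double[][][] t) {
        int I = t.length, J = t[0].length, K = t[0][0].length;
        double[][] m = new double[I][J * K];
        for (int i = 0; i < I; i++)
            for (int j = 0; j < J; j++)
                for (int k = 0; k < K; k++)
                    m[i][k * J + j] = t[i][j][k]; // combine modes 2 and 3
        return m;
    }

    public static void main(String[] args) {
        // A small 2 x 2 x 2 tensor given by its two frontal slices.
        double[][][] t = {
            { {1, 2}, {3, 4} },  // slice i = 0
            { {5, 6}, {7, 8} }   // slice i = 1
        };
        double[][] m = mode1(t);
        System.out.println(Arrays.deepToString(m));
        // prints [[1.0, 3.0, 2.0, 4.0], [5.0, 7.0, 6.0, 8.0]]
    }
}
```

Each row of the result is a vector of length J * K, so standard matrix routines (multiplication, SVD, and so on) can then be applied to the unfolded tensor.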
12.1.4.2 Factorizations
The package Factorizations contains two subpackages:
Matrix: matrix factorizations (Sects. 8.3 and 8.4)
Tensor: tensor factorizations (Sects. 9.1, 9.2, and 9.3)
Matrix Factorizations
The matrix factorization package contains basic decompositions from JAMA for dense matrices, namely, Cholesky, LU, QR, eigenvalues, and SVD. Further decompositions are for sparse matrices of large dimensions. These include Lanczos for eigenvalues, SVD and Lanczos vectors, the adaptive SVD of Sect. 8.3, an SVD based on a gradient descent method as in Sect. 8.5, different ALS versions,