The Chebyshev polynomials T_k(x) are orthogonal over the interval [y, z] with respect to the density function 1/√(1 − x²). By choosing the interval [y, z] as small as possible so that it contains all the undesired eigenvalues of a matrix S, the ordinary Chebyshev iteration method will converge to a basis of the eigensubspace associated with the remaining eigenvalues outside [y, z]: in particular, in multidimensional TLS problems, where the singular subspace associated with the p smallest singular values of C is sought, [y, z] must contain the n − p largest eigenvalues of S = C^T C = [A; B]^T [A; B]. The inverse Chebyshev iteration is applied to the matrix S = (C^T C)^{−1}, and then the interval [z^{−1}, y^{−1}] must be chosen as small as possible such that it contains all undesired eigenvalues of S (i.e., the inverses of the squares of the singular values of [A; B]). Only OCI does not alter the matrix C.
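The recurrence behind OCI can be sketched as follows (a minimal illustration, not from the source; the matrix, eigenvalues, and interval bounds are assumed for the example). After mapping the spectrum so that [y, z] corresponds to [−1, 1], the three-term Chebyshev recurrence keeps every eigencomponent inside [y, z] bounded while amplifying those outside it:

```python
import numpy as np

def chebyshev_iteration(S, v0, y, z, iters=100):
    """Ordinary Chebyshev iteration (OCI) sketch for a symmetric S.

    [y, z] is assumed to enclose all undesired (largest) eigenvalues;
    the iterate converges toward the eigensubspace outside [y, z].
    """
    c = 0.5 * (z + y)                        # center of the damping interval
    e = 0.5 * (z - y)                        # half-width of the interval
    v_prev = v0 / np.linalg.norm(v0)         # T_0 term
    v = (S @ v_prev - c * v_prev) / e        # T_1 term
    for _ in range(iters):
        # three-term recurrence: T_{k+1}(t) = 2 t T_k(t) - T_{k-1}(t)
        v_next = (2.0 / e) * (S @ v - c * v) - v_prev
        n = np.linalg.norm(v_next)
        v_prev, v = v / n, v_next / n        # same scale keeps recurrence valid
    return v / np.linalg.norm(v)

# Illustrative example (assumed data): eigenvalues {0.1, 4, 5, 6}; choosing
# [y, z] = [3.5, 6.5] covers the three undesired large eigenvalues, so the
# iterate converges to the eigenvector of the remaining eigenvalue 0.1.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
S = Q @ np.diag([0.1, 4.0, 5.0, 6.0]) @ Q.T
v = chebyshev_iteration(S, rng.standard_normal(4), y=3.5, z=6.5)
rayleigh = float(v @ S @ v)    # Rayleigh quotient of the converged iterate
```

Renormalizing both iterates by the same factor preserves the direction of the Chebyshev polynomial applied to v0 while avoiding overflow, since T_k grows rapidly outside the interval.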
Usually, for small motor gaps [typically, for ratios σ_r/σ_{r+1} < 10 for the gap between the singular values associated with the desired (σ_{r+1}) and undesired (σ_r) singular subspaces of the matrix C], ICI is recommended. Indeed, this method is proven [98, p. 160] always to converge faster than OCI, and the gain in speed is generally very significant. In contrast to OCI, and provided that the spread of the undesired singular value spectrum is not too small (e.g., σ_1/σ_r ≥ 2), the convergence rate of ICI is hardly influenced by the spread of this spectrum or by the quality of its lower bound z^{−1} (an estimate of σ_1^{−2}) [98, p. 152]. Moreover, ICI is proved [98, pp. 152–157] always to converge faster than II (inverse iteration), provided that an optimal estimate of the bound y is given; the smaller the motor gap, the larger the gain in speed. OCI is shown [98, p. 139] to be efficient only in problems characterized by a very dense singular value spectrum, which is not the case for most TLS problems. However, it is recommended for solving very large, sparse, or structured problems because it does not alter the matrix C. Table 1.1 summarizes these comparisons.
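A hedged sketch of the ICI variant discussed above (illustrative values, not the book's implementation): the same Chebyshev recurrence is run on S = (C^T C)^{−1} without ever forming the inverse, by solving a linear system with C^T C at each step.

```python
import numpy as np

def inverse_chebyshev_iteration(C, v0, y, z, iters=50):
    """Inverse Chebyshev iteration (ICI) sketch.

    Works on S = (C^T C)^{-1} implicitly via linear solves. [y, z] is
    assumed to enclose the undesired (largest) squared singular values of C,
    so [1/z, 1/y] encloses the undesired eigenvalues of S.
    """
    CtC = C.T @ C
    lo, hi = 1.0 / z, 1.0 / y                # damping interval for S
    c = 0.5 * (hi + lo)
    e = 0.5 * (hi - lo)
    v_prev = v0 / np.linalg.norm(v0)
    v = (np.linalg.solve(CtC, v_prev) - c * v_prev) / e
    for _ in range(iters):
        # each application of S is one linear solve with C^T C
        v_next = (2.0 / e) * (np.linalg.solve(CtC, v) - c * v) - v_prev
        n = np.linalg.norm(v_next)
        v_prev, v = v / n, v_next / n
    return v / np.linalg.norm(v)

# Illustrative example (assumed data): singular values {0.1, 2, 3};
# [y, z] = [4, 9] covers the squares of the undesired singular values, and
# the iterate converges to the right singular vector of the smallest one.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((6, 3)))
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))
C = U @ np.diag([0.1, 2.0, 3.0]) @ V.T
v = inverse_chebyshev_iteration(C, rng.standard_normal(3), y=4.0, z=9.0)
smallest_sv = float(np.linalg.norm(C @ v))   # estimates the smallest singular value
```

A production version would factor C^T C once (e.g., by Cholesky) rather than calling a dense solve per iteration; the sketch keeps the dependence on the interval bounds explicit instead.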
1.13.4 Lanczos Methods
Lanczos methods are iterative routines that bidiagonalize an arbitrary matrix C or tridiagonalize a symmetric matrix S directly, without the orthogonal updates used in the Householder approach to the classical SVD. The original matrix structure is not destroyed, and little storage is needed; thus, these methods work well for large, sparse, or structured matrices [98]. They are used for TLS problems with a small motor gap; if the Lanczos process is applied to S (LZ), its convergence is only marginally faster than that of OCI with optimal bounds. Like OCI, the convergence
Table 1.1 Link between the singular value spectrum and the best convergence^a

                        II       OCI      ICI      RQI      LZ       ILZ
Motor gap               large    small    small    small    small    small
Undesired SV spread     indep.   small    indep.   indep.   small    indep.
Bounds                  no       optimal  optimal  no       no       no

^a indep., independent.
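The bidiagonalization used by the Lanczos methods of Section 1.13.4 can be sketched with the Golub-Kahan recurrence (a minimal assumed version, without the reorthogonalization a robust implementation needs); note that C is only applied to vectors and never modified:

```python
import numpy as np

def lanczos_bidiag(C, k, seed=0):
    """Golub-Kahan Lanczos bidiagonalization sketch (no reorthogonalization).

    Builds U (m x k) and V (n x k) with C @ V = U @ B, where B is upper
    bidiagonal; C itself is untouched, accessed only via C @ v and C.T @ u.
    """
    m, n = C.shape
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)                   # random unit starting vector
    U = np.zeros((m, k))
    V = np.zeros((n, k))
    alpha = np.zeros(k)                      # diagonal of B
    beta = np.zeros(max(k - 1, 0))           # superdiagonal of B
    u = np.zeros(m)
    b = 0.0
    for j in range(k):
        V[:, j] = v
        r = C @ v - b * u                    # alpha_j u_j = C v_j - beta_{j-1} u_{j-1}
        alpha[j] = np.linalg.norm(r)
        u = r / alpha[j]
        U[:, j] = u
        if j < k - 1:
            s = C.T @ u - alpha[j] * v       # beta_j v_{j+1} = C^T u_j - alpha_j v_j
            b = np.linalg.norm(s)
            v = s / b
            beta[j] = b
    B = np.diag(alpha) + np.diag(beta, 1)
    return U, B, V

# Illustrative check on a small dense matrix (assumed data): for k = n the
# singular values of the bidiagonal B match those of C up to roundoff.
rng = np.random.default_rng(2)
C = rng.standard_normal((6, 4))
U, B, V = lanczos_bidiag(C, k=4)
```

In exact arithmetic the columns of U and V stay orthonormal; in floating point that orthogonality degrades as Ritz values converge, which is why practical Lanczos codes add selective or full reorthogonalization.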