Let $e_0 := x - A^{-1}b$ denote the error vector of the current iterate before the coarse
grid correction. Equation (6.7) reveals that the error vector immediately after the
coarse grid correction is given by

$$e_1 = \bigl(I - L(L^T A L)^{-1} R A\bigr)\, e_0.$$

But $I - L(L^T A L)^{-1} R A$ is the orthogonal projector along the range of $L$ in terms
of the energetic inner product. Hence, the following relation holds:
$$\|e_1\|_A = \min_z \|e_0 - Lz\|_A, \qquad (6.8)$$
that is, the coarse grid correction is the best approximation to the smooth error
vector in terms of the energetic inner product. In particular, this implies that
$$\|e_1\|_A \le \|e_0\|_A,$$
that is, the coarse grid correction can never increase the error, no matter which
coarse grid and interpolation operator have been chosen. Furthermore, Eq. (6.8)
gives a decisive hint as to how to establish the prolongation operator: to eliminate
smooth error components, the range of L should be a good approximation to the
space spanned by smooth eigenvectors of the iteration matrix S . In the geometric
setting with a regular grid, this may be achieved by coarsening the grid and using,
for example, piecewise linear interpolation.
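The projection property and the non-increase of the energy norm can be checked numerically. The following is a minimal sketch, assuming a 1-D Poisson matrix as the fine-grid operator, piecewise-linear interpolation as the prolongation $L$, and the Galerkin choice $R = L^T$; all variable names here are illustrative, not taken from the text.

```python
import numpy as np

# 1-D Poisson matrix (symmetric positive definite) on n interior fine-grid points.
n = 7
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Piecewise-linear prolongation L from a coarse grid of (n-1)//2 points:
# each coarse point maps to its fine-grid location with weight 1 and to the
# two neighbors with weight 1/2.
nc = (n - 1) // 2
L = np.zeros((n, nc))
for j in range(nc):
    i = 2 * j + 1          # fine-grid index of the j-th coarse point
    L[i, j] = 1.0
    L[i - 1, j] = 0.5
    L[i + 1, j] = 0.5

R = L.T                    # Galerkin restriction
Ac = R @ A @ L             # coarse-grid (Galerkin) operator

def energy_norm(v):
    """Energetic norm ||v||_A = sqrt(v^T A v)."""
    return np.sqrt(v @ A @ v)

rng = np.random.default_rng(0)
e0 = rng.standard_normal(n)                       # error before the correction
e1 = e0 - L @ np.linalg.solve(Ac, R @ A @ e0)     # coarse grid correction

# e1 is A-orthogonal to range(L), i.e. the correction is the best
# approximation of e0 from range(L) in the energetic inner product ...
assert np.allclose(L.T @ A @ e1, 0.0)
# ... and hence the correction never increases the energy norm:
assert energy_norm(e1) <= energy_norm(e0)
```

Running the sketch for any random `e0` confirms both assertions, mirroring Eq. (6.8) and the bound $\|e_1\|_A \le \|e_0\|_A$.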
In the algebraic setting, however, there remains the question of the construction
of a suitable grid hierarchy and corresponding inter-level operators. Interestingly,
by means of what we call coarsening, an interpolator can be extracted automatically
from Eq. (6.3) (and from it, in turn, the restrictor and, via the Galerkin
operator, the stiffness matrix on the coarse grid). This might again sound like Baron
Münchhausen, but we do exactly the same thing when solving most operator
equations: their structure is analyzed in a preprocessing step, and the best solution
method is derived from it.
Apart from this, it is also possible to establish a grid hierarchy and inter-level
operators intellectually by exploiting certain structural properties arising from the
underlying model. We shall do so to devise AMG schemes dedicated to RL in the
next section.
Sadly enough, the coefficient matrices arising from RL for recommendation
matrices are typically not symmetric, let alone positive definite. Lacking a notion
of an energetic inner product, the nonsymmetric case defies a mathematical analysis as
profound and simple as the one outlined above. Moreover, the eigenvalues of nonsymmetric
iteration matrices may be complex, there need not be a basis of eigenvectors, and, even
if one exists, it is generally not orthogonal with respect to any sensible inner product.
Nevertheless, if we can guarantee that the matrix of the coarse Eq. (6.6) is
non-singular, the AMG scheme is at least well defined. Moreover, it is supported by
numerical evidence, and, as we shall see in the next section, results on convergence
rates, though not as neat as those for the symmetric case, may be obtained.
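The well-definedness condition can be checked directly: the coarse-grid operator $RAL$ of Eq. (6.6) must be non-singular even when $A$ is nonsymmetric. The following sketch assumes a small nonsymmetric fine-grid matrix (as might arise from an upwind discretization) and an illustrative, hand-chosen prolongation; none of these specific matrices come from the text.

```python
import numpy as np

# Hypothetical nonsymmetric fine-grid matrix: dominant diagonal, but
# unequal sub- and super-diagonals, so A != A^T.
n = 6
A = 2 * np.eye(n) - 1.5 * np.eye(n, k=-1) - 0.5 * np.eye(n, k=1)
assert not np.allclose(A, A.T)

# An illustrative prolongation aggregating the fine points into two groups,
# with the transpose used as restriction.
L = np.zeros((n, 2))
L[[0, 1, 2], 0] = [1.0, 1.0, 0.5]
L[[3, 4, 5], 1] = [0.5, 1.0, 1.0]
R = L.T

Ac = R @ A @ L   # coarse-grid operator of the coarse equation

# The AMG scheme is well defined only if Ac is non-singular:
assert np.linalg.matrix_rank(Ac) == Ac.shape[0]
```

For a symmetric positive definite $A$ and full-rank $L$, non-singularity of $L^T A L$ is automatic; in the nonsymmetric case it must be verified (or guaranteed by construction) as above.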