Analysis of the CG method as an Optimal Method
The CG method finds a minimum of a quadratic function on $\mathbb{R}^n$ in (at most) $n$ steps. However, at the time of its discovery, the fact that this iterative method finds a solution in $n$ steps was overemphasized. Indeed, for problems with 1000 or more unknowns, this property is of little significance. What is much more important is the fact that a very good approximation can already be found with many fewer than $n$ steps.

First we note that for all iterative methods of the form
$$x_{k+1} = x_k + \alpha_k (b - A x_k), \quad \text{with } \alpha_k \in \mathbb{R},$$
- and thus also for the simplest gradient method - the approximation $x_k$ lies in $x_0 + V_k$, where $V_k$ is defined as in 3.5(2). It follows from 3.5(2) that among all of these methods, the CG method is the one which gives the smallest error $\|x_k - x^*\|_A$.
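This optimality can be checked numerically. The sketch below is a small pure-Python experiment (the matrix, right-hand side, and step count are illustrative assumptions, not from the text): it runs the steepest-descent update $x_{k+1} = x_k + \alpha_k(b - Ax_k)$ and the CG method for the same number of steps and compares the errors in the $A$-norm.

```python
# Compare steepest descent (a method of the stated form) with CG
# in the A-norm after the same number of steps.
# The matrix, solution, and step count are illustrative assumptions.
from math import sqrt

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def axpy(a, x, y):  # returns a*x + y
    return [a * xi + yi for xi, yi in zip(x, y)]

def err_A(A, x, x_star):
    e = [xi - si for xi, si in zip(x, x_star)]
    return sqrt(dot(e, matvec(A, e)))

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]  # SPD
x_star = [1.0, 1.0, 1.0]
b = matvec(A, x_star)          # so the exact solution is known
x0 = [0.0, 0.0, 0.0]
steps = 2                      # fewer than n = 3

# Steepest descent: x_{k+1} = x_k + alpha_k (b - A x_k)
x = x0
for _ in range(steps):
    g = [bi - gi for bi, gi in zip(b, matvec(A, x))]   # residual b - A x
    alpha = dot(g, g) / dot(g, matvec(A, g))
    x = axpy(alpha, g, x)
err_sd = err_A(A, x, x_star)

# Conjugate gradient method
x = x0
r = [bi - gi for bi, gi in zip(b, matvec(A, x))]
p = r[:]
for _ in range(steps):
    Ap = matvec(A, p)
    alpha = dot(r, r) / dot(p, Ap)
    x = axpy(alpha, p, x)
    r_new = axpy(-alpha, Ap, r)
    beta = dot(r_new, r_new) / dot(r, r)
    p = axpy(beta, p, r_new)
    r = r_new
err_cg = err_A(A, x, x_star)

print(err_cg, err_sd)  # the CG error is never larger
```

Since both iterates lie in $x_0 + V_k$, the CG error in the $A$-norm can never exceed the steepest-descent error, which is what the final comparison shows.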
As usual, the spectrum $\sigma(A)$ of a matrix $A$ is the set of its eigenvalues.
3.6 Lemma. Suppose there exists a polynomial $p \in \mathcal{P}_k$ with
$$p(0) = 1 \quad \text{and} \quad |p(z)| \le r \ \text{for all } z \in \sigma(A). \qquad (3.9)$$
Then for arbitrary $x_0 \in \mathbb{R}^n$, the CG method satisfies
$$\|x_k - x^*\|_A \le r\, \|x_0 - x^*\|_A. \qquad (3.10)$$
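The lemma can be illustrated with a tiny worked case (all numbers here are illustrative assumptions, not from the text): for $A = \mathrm{diag}(1, 2, 4)$, the degree-1 polynomial $p(z) = 1 - 2z/5$ satisfies $p(0) = 1$ and $|p(\lambda)| \le 3/5$ on the spectrum, so (3.10) predicts that the first CG step reduces the $A$-norm error at least by the factor $r = 3/5$. For $k = 1$ the CG step coincides with a steepest-descent step, which the sketch below exploits.

```python
# Check the bound (3.10) for k = 1 on a diagonal SPD matrix.
# lam, x0, and p are illustrative assumptions; p(z) = 1 - 2z/5 gives r = 3/5.
from math import sqrt

lam = [1.0, 2.0, 4.0]        # eigenvalues of A = diag(lam)
x_star = [0.0, 0.0, 0.0]     # take b = 0, so the exact solution is 0
x0 = [1.0, 1.0, 1.0]

def err_A(x):
    return sqrt(sum(l * (xi - si) ** 2 for l, xi, si in zip(lam, x, x_star)))

# r = max |p(lambda)| over the spectrum, for p(z) = 1 - 2z/5
p = lambda z: 1.0 - 2.0 * z / 5.0
r = max(abs(p(l)) for l in lam)      # = 3/5

# One CG step (= steepest-descent step): g = A x0 - b, alpha = g.g / g.Ag
g = [l * xi for l, xi in zip(lam, x0)]
alpha = sum(gi * gi for gi in g) / sum(l * gi * gi for l, gi in zip(lam, g))
x1 = [xi - alpha * gi for xi, gi in zip(x0, g)]

print(err_A(x1), r * err_A(x0))  # bound (3.10) holds
```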
Proof. Set $q(z) = (p(z) - 1)/z$. Using the same notation as in 3.5, we have
$$y := x_0 + q(A) g_0 \in x_0 + V_k,$$
and $g_0 = A(x_0 - x^*)$ implies
$$y - x^* = x_0 - x^* + y - x_0 = x_0 - x^* + q(A) g_0 = p(A)(x_0 - x^*).$$
Now let $\{z_j\}_{j=1}^n$ be a complete system of orthonormal eigenvectors with $A z_j = \lambda_j z_j$, and write $x_0 - x^* = \sum_j c_j z_j$. Then
$$y - x^* = \sum_j c_j\, p(A) z_j = \sum_j c_j\, p(\lambda_j) z_j. \qquad (3.11)$$
The orthogonality of the eigenvectors implies
$$\|x_0 - x^*\|_A^2 = \sum_j \lambda_j |c_j|^2 \qquad (3.12)$$
and
$$\|y - x^*\|_A^2 = \sum_j \lambda_j |c_j\, p(\lambda_j)|^2 \le r^2 \sum_j \lambda_j |c_j|^2.$$
Thus, $\|y - x^*\|_A \le r\, \|x_0 - x^*\|_A$. Combining this with $y \in x_0 + V_k$ and the minimal property 3.5(4) for $x_k$, we get (3.10). ∎
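The key identity (3.12), that an orthonormal eigenvector expansion diagonalizes the $A$-norm, can be spot-checked directly. In the sketch below the symmetric matrix $A = [[2, 1], [1, 2]]$ and the test vector are illustrative assumptions; the eigenpairs of this $A$ are known in closed form.

```python
# Verify ||e||_A^2 = sum_j lambda_j |c_j|^2 for an orthonormal eigenbasis.
# The 2x2 matrix and the vector e are illustrative assumptions.
from math import sqrt

A = [[2.0, 1.0], [1.0, 2.0]]
# Orthonormal eigenvectors of A, with eigenvalues 3 and 1:
s = 1.0 / sqrt(2.0)
eigenpairs = [(3.0, [s, s]), (1.0, [s, -s])]

e = [1.0, 0.0]  # e plays the role of x0 - x*

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

lhs = dot(e, matvec(A, e))                              # ||e||_A^2
rhs = sum(l * dot(e, z) ** 2 for l, z in eigenpairs)    # sum lambda_j c_j^2

print(lhs, rhs)  # the two sides agree
```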