Thus, when using conjugate vectors - in contrast to using an arbitrary basis - we can compute the coefficients $\alpha_i$ in the expansion of $x$ directly from the given vector $b$.

Let $x_0$ be an arbitrary vector in $\mathbb{R}^n$. Then an expansion of the desired correction vector $x - x_0$ in terms of conjugate directions can be computed recursively directly from the gradients $g_k := \nabla f(x_k)$.
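Spelled out (assuming, as in the surrounding discussion, that $A$ is symmetric positive definite, $x = A^{-1}b$, and $x - x_0 = \sum_i \alpha_i d_i$), conjugacy makes all cross terms vanish, so each coefficient is determined by quantities that are actually available:
\[
d_k^\top A (x - x_0) \;=\; \sum_i \alpha_i \, d_k^\top A d_i \;=\; \alpha_k \, d_k^\top A d_k ,
\qquad
A (x - x_0) \;=\; b - A x_0 ,
\]
so $\alpha_k$ can be evaluated from $b$ and $x_0$ without knowing $x$ itself.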
3.2 Lemma on Conjugate Directions. Let $d_0, d_1, \ldots, d_{n-1}$ be conjugate directions, and let $x_0 \in \mathbb{R}^n$. Then the sequence generated by the recursion
\[
x_{k+1} = x_k + \alpha_k d_k
\qquad\text{with}\qquad
\alpha_k = -\frac{d_k^\top g_k}{d_k^\top A d_k}, \quad g_k := A x_k - b,
\]
gives a solution $x_n = A^{-1} b$ after (at most) $n$ steps.
Proof. Writing $x - x_0 = \sum_i \alpha_i d_i$, from (3.1) we immediately have
\[
\alpha_k = \frac{d_k^\top A (x - x_0)}{d_k^\top A d_k}
         = -\frac{d_k^\top (A x_0 - b)}{d_k^\top A d_k} .
\]
Since the vector $d_k$ is assumed to be conjugate to the other directions, we have $d_k^\top A (x_k - x_0) = d_k^\top A \sum_{i=0}^{k-1} \alpha_i d_i = 0$. Hence,
\[
\alpha_k = -\frac{d_k^\top (A x_k - b)}{d_k^\top A d_k}
         = -\frac{d_k^\top g_k}{d_k^\top A d_k} .
\]
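As a complement to the lemma, here is a minimal numerical sketch of the recursion (not taken from the text): it assumes a symmetric positive definite $A$, builds one possible set of conjugate directions by Gram-Schmidt orthogonalization in the $A$-inner product, and checks that $n$ updates reach the solution. The test problem and all variable names are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # symmetric positive definite test matrix
b = rng.standard_normal(n)
x = rng.standard_normal(n)         # arbitrary starting vector x_0

# Build A-conjugate directions by Gram-Schmidt in the A-inner product
# (any conjugate set would do; this construction is only for illustration).
D = []
for e in np.eye(n):
    d = e.copy()
    for dj in D:
        d = d - (dj @ A @ e) / (dj @ A @ dj) * dj
    D.append(d)

# The recursion of Lemma 3.2: x_{k+1} = x_k + alpha_k d_k
for d in D:
    g = A @ x - b                          # gradient g_k = A x_k - b
    alpha = -(d @ g) / (d @ A @ d)         # alpha_k = -(d_k^T g_k)/(d_k^T A d_k)
    x = x + alpha * d

print(np.allclose(A @ x, b))               # True: x_n solves A x = b
```

Up to rounding error, the final iterate satisfies $Ax = b$ after exactly $n$ updates, as the lemma asserts.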
3.3 Corollary. Under the hypotheses of 3.2, $x_k$ minimizes the function $f$ not only on the line $\{ x_{k-1} + \alpha d_{k-1} \,;\; \alpha \in \mathbb{R} \}$, but also over $x_0 + V_k$, where $V_k := \operatorname{span}[\, d_0, d_1, \ldots, d_{k-1} ]$. In particular,
\[
d_i^\top g_k = 0 \qquad \text{for } i < k . \tag{3.2}
\]
Proof. It suffices to establish the relation (3.2). The choice of $\alpha_i$ ensures that
\[
d_i^\top g_{i+1} = 0 . \tag{3.3}
\]
Thus, (3.2) holds for $k = 1$. Assume that the assertion has been proved for $k - 1$. From $x_k - x_{k-1} = \alpha_{k-1} d_{k-1}$ it follows that $g_k - g_{k-1} = A(x_k - x_{k-1}) = \alpha_{k-1} A d_{k-1}$. Since the directions $d_0, d_1, \ldots, d_{k-1}$ are conjugate, we have $d_i^\top (g_k - g_{k-1}) = 0$ for $i < k - 1$. Combining this with the induction hypothesis, i.e. (3.2) for $k - 1$, we conclude that the formula is also correct for $k$ and $i \le k - 2$. The remaining equation for $k$ and $i = k - 1$ is a direct consequence of (3.3).
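For completeness, the short computation behind (3.3), which the proof uses but does not write out: from $x_{i+1} = x_i + \alpha_i d_i$ we get $g_{i+1} = g_i + \alpha_i A d_i$, and inserting the value of $\alpha_i$ from 3.2 gives
\[
d_i^\top g_{i+1}
  = d_i^\top g_i + \alpha_i \, d_i^\top A d_i
  = d_i^\top g_i - \frac{d_i^\top g_i}{d_i^\top A d_i}\, d_i^\top A d_i
  = 0 .
\]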
 