Suppose that we look for a solution a = (a_1, a_2, ..., a_m) to equation (1.30) with a_m ≠ 0. Then equation (1.30) can be written in the form

$$
\begin{pmatrix}
a_{11} & \cdots & a_{m-1,1} \\
\vdots & \ddots & \vdots \\
a_{1n} & \cdots & a_{m-1,n}
\end{pmatrix}
\begin{pmatrix}
a_1 \\ \vdots \\ a_{m-1}
\end{pmatrix}
= -a_m
\begin{pmatrix}
a_{m1} \\ \vdots \\ a_{mn}
\end{pmatrix}.
$$
There is actually no loss of generality in assuming that a_m = 1. This equation is again of the form (1.27), and if we assume that its right-hand side is not the zero vector, then we can again apply Theorem 1.11.6 and get what we want in the same sense as before. The only problem, however, is that we could have assumed different coordinates of a to be nonzero, and each such choice yields a different solution. The general point to remember is that the approach to the linear least squares problem described above works well, but the answer we get depends on the assumptions we make.
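As a concrete illustration, here is a minimal numerical sketch of this approach, assuming NumPy and made-up sample data: we fix a_m = 1, move the last column of the data matrix to the right-hand side, and solve the resulting overdetermined system in the least squares sense.

```python
import numpy as np

# Hypothetical data matrix for illustration: n = 5 equations in R^3 (m = 3);
# row j holds the coefficients (a_{1j}, a_{2j}, a_{3j}) of the j-th equation.
X = np.array([[1.0, 2.0, 3.1],
              [2.0, 0.5, 2.4],
              [0.5, 1.5, 2.0],
              [3.0, 1.0, 4.1],
              [1.5, 2.5, 3.9]])

# Fix a_m = 1: solve  X[:, :-1] @ a' = -X[:, -1]  in the least squares
# sense, i.e., compute the solution characterized by the normal equations
# of Theorem 1.11.6.
a_prime, *_ = np.linalg.lstsq(X[:, :-1], -X[:, -1], rcond=None)

a = np.append(a_prime, 1.0)          # full solution (a_1, ..., a_{m-1}, 1)
print("coefficients:", a)
print("residual norm:", np.linalg.norm(X @ a))
```

Fixing a different coordinate of a instead (say a_1 = 1) and repeating the computation generally produces a different least squares solution, which is exactly the dependence on assumptions noted above.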
We finish this section with two results about decompositions of matrices.
1.11.7. Theorem. Let A be a real m × n matrix of rank r. Then there exist an m × m orthogonal matrix U, an n × n orthogonal matrix V, and a diagonal m × n matrix

$$
D =
\begin{pmatrix}
\sigma_1 & & & 0 \\
& \ddots & & \\
& & \sigma_r & \\
0 & & & 0
\end{pmatrix},
$$

where σ_1 ≥ σ_2 ≥ ... ≥ σ_r > 0, so that

$$
A = U D V^{T}. \qquad (1.31)
$$
Proof. See [ForM67] or [RaoM71].
Definition. The decomposition of A in equation (1.31) is called the singular value decomposition of A. The σ_i are called the singular values of A.
The singular value decomposition of a matrix has useful applications. One interpretation of Theorem 1.11.7 is that, up to change of coordinates, every linear transformation T : R^m → R^n of rank r has the form T(e_i) = σ_i e_i, 1 ≤ i ≤ r. More precisely, one can find orthonormal bases u_1, u_2, ..., u_m of R^m and v_1, v_2, ..., v_n of R^n, so that T(u_i) = σ_i v_i, 1 ≤ i ≤ r.
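As a quick numerical sanity check of Theorem 1.11.7, here is a minimal sketch assuming NumPy; the matrix A below is made up for illustration.

```python
import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0]])      # a 2 x 3 matrix of rank 2

# NumPy returns U (m x m), the singular values, and V^T (n x n).
U, sigma, Vt = np.linalg.svd(A)

# Assemble the diagonal m x n matrix D of Theorem 1.11.7.
D = np.zeros(A.shape)
D[:len(sigma), :len(sigma)] = np.diag(sigma)

print(sigma)                         # sigma_1 >= sigma_2 > 0
print(np.allclose(A, U @ D @ Vt))    # verifies A = U D V^T, equation (1.31)
```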
1.11.8. Theorem. Let A be a real m × n matrix of rank r. If A has the singular value decomposition shown in equation (1.31), then

$$
A^{+} = V D^{+} U^{T},
$$

and the n × m matrix D^+ is given by

$$
D^{+} =
\begin{pmatrix}
\sigma_1^{-1} & & & 0 \\
& \ddots & & \\
& & \sigma_r^{-1} & \\
0 & & & 0
\end{pmatrix}.
$$
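Continuing the same sketch (again assuming NumPy and a made-up matrix), D^+ simply inverts the nonzero singular values, and the resulting V D^+ U^T can be checked against NumPy's built-in pseudoinverse, np.linalg.pinv.

```python
import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0]])      # same 2 x 3, rank-2 example

U, sigma, Vt = np.linalg.svd(A)

# D+ is n x m: invert the nonzero singular values, zeros elsewhere.
D_plus = np.zeros(A.shape[::-1])
D_plus[:len(sigma), :len(sigma)] = np.diag(1.0 / sigma)

A_plus = Vt.T @ D_plus @ U.T          # A+ = V D+ U^T, as in Theorem 1.11.8
print(np.allclose(A_plus, np.linalg.pinv(A)))
```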