so that T(w) ∈ W⊥. Clearly, S = T|W⊥ is a self-adjoint transformation on the (n − 1)-dimensional vector space W⊥. The inductive hypothesis applied to S means that there is an orthonormal basis u₂, u₃, . . . , uₙ for W⊥ consisting of eigenvectors for S (and hence for T). The vectors uᵢ are obviously what we wanted, proving the theorem.
In the case of Theorem 1.8.10, the name “Principal Axes Theorem” comes from
its role in finding the principal axes of ellipses. The matrix form of Theorem 1.8.10 is
1.8.11. Theorem. If A is a real symmetric n × n matrix, then there exists an orthogonal matrix P so that D = P⁻¹AP is a diagonal matrix. In particular, every real symmetric matrix is similar to a diagonal one.
Proof. Simply let the columns of P be the vectors of the orthonormal basis of eigenvectors guaranteed by Theorem 1.8.10; P is then orthogonal precisely because its columns are orthonormal.
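To see the theorem in action, here is a minimal numerical sketch (Python with NumPy; the code and the sample matrix are illustrative and not part of the text). For a real symmetric matrix, numpy.linalg.eigh returns an orthonormal basis of eigenvectors, which serve as the columns of P just as in the proof.

```python
import numpy as np

# A sample real symmetric matrix (any real symmetric A works).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is specialized to symmetric matrices: it returns real eigenvalues
# and an orthonormal basis of eigenvectors (the columns of P).
eigenvalues, P = np.linalg.eigh(A)

# P is orthogonal, so P^{-1} = P^T.
assert np.allclose(P.T @ P, np.eye(3))

# D = P^{-1} A P is diagonal, with the eigenvalues on the diagonal.
D = P.T @ A @ P
assert np.allclose(D, np.diag(eigenvalues))
```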
Note that Theorem 1.8.11 gives only a sufficient condition for a matrix to be similar to a diagonal one; nonsymmetric matrices can also be diagonalizable. For necessary and sufficient conditions for a matrix to be diagonalizable, see Theorem C.4.10.
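As a quick illustration of this remark (a sketch with a hypothetical matrix, not an example from the text), the nonsymmetric matrix below has distinct eigenvalues and is therefore diagonalizable, but the diagonalizing matrix is not orthogonal:

```python
import numpy as np

# A nonsymmetric matrix with distinct eigenvalues 1 and 2, hence diagonalizable.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])

eigenvalues, Q = np.linalg.eig(B)  # eig handles general square matrices
D = np.linalg.inv(Q) @ B @ Q       # Q^{-1} B Q is diagonal
assert np.allclose(D, np.diag(eigenvalues))

# Q is not orthogonal here; orthogonal diagonalization is special to
# symmetric matrices.
assert not np.allclose(Q.T @ Q, np.eye(2))
```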
In Theorem 1.8.11, the number s of positive diagonal entries of D is uniquely deter-
mined by A. We may assume that the diagonal of D has the s positive entries first, fol-
lowed by r - s negative entries, followed by n - r zeros, where r is the rank of A.
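In other words, s, r − s, and n − r are just the numbers of positive, negative, and zero eigenvalues of A. A small sketch of reading them off numerically (the matrix is hypothetical; a tolerance stands in for exact zero tests in floating point):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])  # eigenvalues 3 and -1, so s = 1, r = 2

eigenvalues = np.linalg.eigvalsh(A)           # real, since A is symmetric
s = int(np.sum(eigenvalues > 1e-12))          # positive entries of D
r = int(np.sum(np.abs(eigenvalues) > 1e-12))  # rank of A
print(s, r - s, A.shape[0] - r)               # -> 1 1 0
```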
1.8.12. Example. Let

$$A = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}.$$
We want to find an orthogonal matrix P so that P⁻¹AP is a diagonal matrix.
Solution. Consider A to be the matrix of a linear transformation T on R². Now, the roots of the characteristic polynomial

$$\det(tI_2 - A) = t^2 - 4t + 3$$
are 1 and 3, which are the eigenvalues of T. To find the corresponding eigenvectors,
we must solve
$$(x, y)(I_2 - A) = (0, 0)$$

and

$$(x, y)(3I_2 - A) = (0, 0).$$
This leads to two pairs of equations
-+ =
-=
xy
xy
0
0
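Solving the first pair gives eigenvectors proportional to (1, 1), and solving the second gives eigenvectors proportional to (1, −1); normalizing them and taking them as the columns of P finishes the example. The following sketch (Python with NumPy; illustrative code, not part of the text) checks the result:

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

# Normalized eigenvectors (1, 1)/sqrt(2) and (1, -1)/sqrt(2) as columns.
P = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

assert np.allclose(P.T @ P, np.eye(2))                # P is orthogonal
assert np.allclose(P.T @ A @ P, np.diag([1.0, 3.0]))  # P^{-1}AP = diag(1, 3)
```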