In the final analysis, it will turn out that a transformation is diagonalizable if the
matrix associated with it is symmetric or Hermitian. Unfortunately, those properties
of a matrix are not independent of the basis that is used to define the matrix. For
example, it is possible to find a transformation and two bases, so that the matrix is
symmetric with respect to one basis and not symmetric with respect to the other. The
definition that captures the essence of the symmetry that we need is that of the
“adjoint” transformation.
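The basis dependence mentioned above is easy to see numerically. The following sketch (the matrices S and P are made-up examples, not from the text) conjugates a symmetric matrix by a non-orthogonal change-of-basis matrix and checks that symmetry is lost:

```python
# Sketch of the basis-dependence claim: a symmetric matrix representing a
# transformation in one basis need not stay symmetric after a change of
# basis, unless the change of basis is orthogonal. S and P are made-up.
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 3.0]])       # symmetric matrix of a transformation T

P = np.array([[1.0, 1.0],
              [0.0, 1.0]])       # non-orthogonal change-of-basis matrix

M = np.linalg.inv(P) @ S @ P     # matrix of the same T in the new basis

print(np.allclose(S, S.T))       # prints True:  symmetric in the old basis
print(np.allclose(M, M.T))       # prints False: not symmetric in the new one
```

Had P been orthogonal (P⁻¹ = Pᵀ), symmetry would have been preserved, which is one reason orthonormal bases are singled out in what follows.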
1.8.1. Lemma. Let V be an n-dimensional vector space over a field k. If α: V → k is
a nonzero linear functional, then

dim ker(α) = n − 1.
Proof. Since α is nonzero, dim im(α) = 1, and so the lemma is an immediate consequence of Theorem B.10.3.
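The cited theorem is the rank-nullity relation dim ker(α) + dim im(α) = n, and the lemma can be checked numerically. In the sketch below (the vector a is a made-up example), the functional α(v) = a • v is represented as a 1 × n matrix, so the dimension of its kernel is n minus its rank:

```python
# Numeric illustration of Lemma 1.8.1 via rank-nullity. The functional
# alpha(v) = a . v on R^4 is a made-up example; any nonzero a works.
import numpy as np

n = 4
a = np.array([2.0, -1.0, 0.0, 3.0])   # nonzero, so alpha is a nonzero functional

A = a.reshape(1, n)                   # alpha as a 1 x n matrix
rank = np.linalg.matrix_rank(A)       # rank 1, since alpha is nonzero

print(n - rank)                       # prints 3, i.e. dim ker(alpha) = n - 1
```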
If the vector space V has an inner product •, then it is easy to check that for each
u ∈ V the map

u*: V → k

defined by

u*(v) = v • u

is a linear transformation, that is, a linear functional. There is a converse.
1.8.2. Theorem. If α is a linear functional on an n-dimensional vector space V with
inner product •, then there is a unique u in V, so that

α(v) = v • u

for all v in V.
Proof. If α is the zero map, then u is clearly the zero vector. Assume that α is
nonzero. Then by Lemma 1.8.1, the subspace X = ker(α) has dimension n − 1. Let u₀
be any unit vector in the one-dimensional orthogonal complement X⊥ of X. We show
that

u = ᾱ(u₀) u₀

is the vector we are looking for. (The bar denotes complex conjugation of the scalar α(u₀); it is needed in case we are dealing with vector spaces over the complex numbers.) If v is an arbitrary
vector in V, then V = X ⊕ X⊥ implies that v = x + c u₀, for some x in X and some scalar
c. But then α(v) = α(x) + c α(u₀) = c α(u₀), while

v • u = v • (ᾱ(u₀) u₀) = α(u₀)(v • u₀) = α(u₀)(x • u₀ + c (u₀ • u₀)) = c α(u₀),

since x • u₀ = 0 and u₀ is a unit vector. Hence α(v) = v • u for all v in V. As for uniqueness, if u′ were another such vector, then v • (u − u′) = 0 for all v in V; taking v = u − u′ shows that u = u′.
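The construction in the proof can be carried out numerically. The sketch below works over the reals (so the complex conjugate drops out) with the made-up functional α(v) = a • v; for such a functional the orthogonal complement of the kernel is spanned by a itself, so u₀ = a/|a|:

```python
# A sketch of the proof's construction over R for alpha(v) = a . v,
# where a is a made-up example vector. It recovers the vector u with
# alpha(v) = v . u and checks the identity on a random v.
import numpy as np

rng = np.random.default_rng(0)
a = np.array([2.0, -1.0, 0.0, 3.0])
alpha = lambda v: a @ v               # a nonzero linear functional on R^4

# u0: a unit vector spanning the orthogonal complement of ker(alpha).
u0 = a / np.linalg.norm(a)
u = alpha(u0) * u0                    # the u produced by the proof (real case)

v = rng.standard_normal(4)
print(np.allclose(alpha(v), v @ u))   # prints True: alpha(v) = v . u
```

For this α the construction returns u = a, as it must: the functional v ↦ a • v is represented by a itself.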