Ux^T = y^T,
which are easy to solve. If row interchanges are involved, then another factor comes
into play and A can be written in the form
A = PLU,
where P is a permutation matrix, that is, a matrix obtained from an identity matrix
by a sequence of column and/or row interchanges. See [NobD77].
Definition.
The representations A = LU or A = PLU are called LU-decompositions
of A.
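The PLU form can be computed by Gaussian elimination with partial pivoting. The following is a minimal sketch in pure Python for small dense matrices; the function name and interface are illustrative, not from any particular library.

```python
# Sketch of an LU-decomposition with partial pivoting, returning P, L, U
# with A = PLU as in the definition above. Illustrative code only.

def plu(A):
    n = len(A)
    U = [row[:] for row in A]                         # working copy of A
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    perm = list(range(n))        # perm[i] = original index of the row now at i
    for k in range(n):
        # Partial pivoting: move the largest entry in column k into the pivot.
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        if p != k:
            U[k], U[p] = U[p], U[k]
            perm[k], perm[p] = perm[p], perm[k]
            for j in range(k):   # keep the computed part of L consistent
                L[k][j], L[p][j] = L[p][j], L[k][j]
        for i in range(k + 1, n):                     # eliminate below pivot
            m = U[i][k] / U[k][k]
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    # Elimination gives QA = LU, where Q permutes the rows; hence
    # A = Q^T L U, so P = Q^T below satisfies A = PLU.
    P = [[1.0 if perm[j] == i else 0.0 for j in range(n)] for i in range(n)]
    return P, L, U
```

Once P, L, and U are in hand, a linear system involving A reduces to a permutation plus two triangular systems, each solvable by simple forward or back substitution.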
Next, let T : V → V be a linear transformation and assume that v_1, v_2, ..., v_n is a
basis for V. If v_i ∈ V, then
T(v_i) = ∑_{j=1}^{n} a_ij v_j.
The fact that the vectors v_i form a basis implies that the coefficients a_ij are unique.
Definition. The matrix A = (a_ij) is called the matrix for the linear transformation T
with respect to the basis v_1, v_2, ..., v_n.
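Concretely, the entries a_ij are found by expressing each T(v_i) in the given basis. Here is a small illustrative helper (not from the book) for the case V = R^2, using Cramer's rule to solve the 2 × 2 system for each row of coefficients.

```python
# Hypothetical helper: the matrix (a_ij) of a linear transformation
# T : R^2 -> R^2 with respect to a basis v1, v2, obtained by writing
# T(v_i) = a_i1 v1 + a_i2 v2 and solving for the coefficients.

def matrix_of(T, v1, v2):
    def coords(w):
        # Solve a*v1 + b*v2 = w by Cramer's rule; det != 0 since v1, v2
        # form a basis.
        det = v1[0] * v2[1] - v1[1] * v2[0]
        a = (w[0] * v2[1] - w[1] * v2[0]) / det
        b = (v1[0] * w[1] - v1[1] * w[0]) / det
        return [a, b]
    return [coords(T(v1)), coords(T(v2))]

# Example: T(x, y) = (2x, x + 3y) with respect to the basis (1,1), (0,1).
def T(v):
    return (2 * v[0], v[0] + 3 * v[1])

M = matrix_of(T, (1, 1), (0, 1))   # -> [[2.0, 2.0], [0.0, 3.0]]
```

For instance, T(1, 1) = (2, 4) = 2·(1, 1) + 2·(0, 1), which gives the first row [2, 2] of M.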
Clearly, the matrix for a linear transformation depends on the basis of the vector
space that is used in the definition. It is easy to describe this dependence.
Definition. Two n × n matrices A and B are similar if there exists a nonsingular
matrix P so that

B = PAP^{-1}.
Similarity of matrices is an equivalence relation.
C.3.4. Theorem. Let A be the matrix for a linear transformation T with respect to
a given basis. A matrix B represents T with respect to some other basis if and only if
B is similar to A.
Proof.
See [John67].
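The theorem can be checked numerically. The sketch below (an illustration, with matrices chosen by hand) uses the row-vector convention discussed next: the matrix of T(x, y) = (2x, x + 3y) in the standard basis is A, the rows of P are the new basis vectors (1, 1) and (0, 1), and the matrix of T in that basis comes out as PAP^{-1}.

```python
# Numeric sanity check that a change of basis produces a similar matrix.
# All matrices here are hand-picked examples, not from the book.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Standard-basis matrix of T(x, y) = (2x, x + 3y), row-vector convention.
A = [[2.0, 1.0], [0.0, 3.0]]
# Rows of P are the new basis vectors (1, 1) and (0, 1); P_inv was
# computed by hand for this 2x2 example.
P = [[1.0, 1.0], [0.0, 1.0]]
P_inv = [[1.0, -1.0], [0.0, 1.0]]

B = matmul(matmul(P, A), P_inv)   # matrix of T in the new basis
```

One can verify directly that B is the matrix of T with respect to the basis (1, 1), (0, 1): for example, T(1, 1) = (2, 4) = 2·(1, 1) + 2·(0, 1), matching the first row of B.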
One reason for defining a matrix for a linear transformation is that it allows us
to evaluate that transformation using matrix multiplication. It is very important that
one use the correct matrix and not its transpose. With our choice and the fact that
our vectors in R^n are row vectors (1 × n matrices), given a linear transformation

T : R^n → R^n,    (C.2a)
then