(a) Explain why, in this situation, k ≤ n.
(b) Prove the claim that M^T M = I_k.
Exercise 10.6: An image (i.e., an array of grayscale values between 0 and 1, say) can be thought of as a large matrix, M (indeed, this is how we usually represent images in our programs). Use a linear algebra library to compute the SVD M = UDV^T of some image M. According to the decomposition theorem described in Exercise 10.3, this describes the image as a sum of outer products of many vectors. If we replace the last 90% of the diagonal entries of D with zeroes to get a new matrix D', then the product M' = UD'V^T deletes 90% of the terms in this sum of outer products. In doing so, however, it deletes the smallest 90% of the terms. Display M' and compare it to M. Experiment with values other than 90%. At what level do the two images become indistinguishable? You may encounter values less than 0 and greater than 1 during the process described in this exercise. You should simply clamp these values to the interval [0, 1].
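The experiment can be sketched as follows in NumPy (a minimal sketch; the synthetic gradient `img` is a stand-in for a real image, which you would normally load with an imaging library and display with a plotting one):

```python
import numpy as np

# Synthetic "image": a smooth gradient plus a little noise, values in [0, 1].
n = 64
img = np.fromfunction(lambda i, j: (i + j) / (2 * (n - 1)), (n, n))
img += 0.05 * np.random.default_rng(0).standard_normal((n, n))
img = np.clip(img, 0.0, 1.0)

U, d, Vt = np.linalg.svd(img, full_matrices=False)

# Zero out the last 90% of the diagonal entries (keep the largest 10%).
k = max(1, int(0.1 * len(d)))
d_trunc = d.copy()
d_trunc[k:] = 0.0

approx = U @ np.diag(d_trunc) @ Vt
approx = np.clip(approx, 0.0, 1.0)   # clamp stray values to [0, 1]

# The kept terms are the largest, so the relative error should be small.
err = np.sqrt(np.sum((img - approx) ** 2)) / np.sqrt(np.sum(img ** 2))
print(k, err)
```

Replacing `0.1` with other fractions lets you hunt for the level at which the original and the truncated reconstruction become indistinguishable.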
Exercise 10.7: The rank of a matrix is the number of linearly independent columns of the matrix.
(a) Explain why the outer product of two nonzero vectors always has rank one.
(b) The decomposition theorem described in Exercise 10.3 expresses a matrix M as a sum of rank one matrices. Explain why the sum of the first p such outer products has rank p (assuming d_1, d_2, ..., d_p ≠ 0). In fact, this sum M_p is the rank p matrix that's closest to M, in the sense that the sum of the squares of the entries of M − M_p is as small as possible. (You need not prove this.)
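Both claims are easy to check numerically (a NumPy sketch; the random test matrix is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# (a) The outer product of two nonzero vectors has rank one.
u = rng.standard_normal(5)
v = rng.standard_normal(4)
assert np.linalg.matrix_rank(np.outer(u, v)) == 1

# (b) The sum of the first p outer products from the SVD has rank p,
# and it is the closest rank-p matrix in the sum-of-squared-entries sense.
M = rng.standard_normal((6, 5))
U, d, Vt = np.linalg.svd(M, full_matrices=False)
p = 2
Mp = sum(d[i] * np.outer(U[:, i], Vt[i, :]) for i in range(p))
assert np.linalg.matrix_rank(Mp) == p

# The squared-entry error of M - Mp equals the sum of the
# squares of the discarded diagonal entries of D.
err2 = np.sum((M - Mp) ** 2)
assert np.isclose(err2, np.sum(d[p:] ** 2))
```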
Exercise 10.8: Suppose that T : R^2 → R^2 is a linear transformation represented by the 2 × 2 matrix M, that is, T(x) = Mx. Let K = max_{x ∈ S^1} ||T(x)||^2; that is, K is the maximum squared length of all unit vectors transformed by M.
(a) If the SVD of M is M = UDV^T, show that K = d_1^2.
(b) What is the minimum squared length among all such vectors, in terms of D?
(c) Generalize to R^3.
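One way to convince yourself of claims (a) and (b) before proving them is to sample unit vectors densely (a NumPy sketch with a random 2 × 2 matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((2, 2))
_, d, _ = np.linalg.svd(M)   # diagonal of D, with d[0] >= d[1] >= 0

# Sample many unit vectors on S^1 and transform each by M.
theta = np.linspace(0.0, 2.0 * np.pi, 10_000)
X = np.stack([np.cos(theta), np.sin(theta)])   # columns are unit vectors
sq_len = np.sum((M @ X) ** 2, axis=0)          # squared lengths of T(x)

# Max squared length ~ d_1^2; min squared length ~ d_2^2.
assert np.isclose(sq_len.max(), d[0] ** 2, rtol=1e-3)
assert np.isclose(sq_len.min(), d[1] ** 2, rtol=1e-3)
```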
Exercise 10.9: Show that three distinct points P, Q, and R in the Euclidean plane are collinear if and only if the corresponding vectors (v_P = (P_x, P_y, 1), etc.) are linearly dependent, by showing that if α_P v_P + α_Q v_Q + α_R v_R = 0 with not all the αs being 0, then
(a) none of the αs are 0, and
(b) the point Q is an affine combination of P and R; in particular, Q = −(α_P/α_Q) P − (α_R/α_Q) R, so Q must lie on the line between P and R.
(c) Argue why dependence and collinearity are trivially the same if two or more of the points P, Q, and R are identical.
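The equivalence gives a practical collinearity test: the homogeneous vectors are linearly dependent exactly when the 3 × 3 determinant they form vanishes (a sketch; the helper name and tolerance are my own choices):

```python
import numpy as np

def collinear(P, Q, R, eps=1e-12):
    """True when P, Q, R lie on one line: the homogeneous vectors
    (x, y, 1) are rows of a 3x3 matrix whose determinant is zero
    exactly when the vectors are linearly dependent."""
    v = np.array([[P[0], P[1], 1.0],
                  [Q[0], Q[1], 1.0],
                  [R[0], R[1], 1.0]])
    return abs(np.linalg.det(v)) < eps

print(collinear((0, 0), (1, 1), (2, 2)))   # three points on y = x: True
print(collinear((0, 0), (1, 1), (2, 3)))   # not on one line: False
```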
Exercise 10.10: It's good to be able to recognize the transformation represented by a matrix by looking at the matrix; for instance, it's easy to recognize a 3 × 3 matrix that represents a translation in homogeneous coordinates: its last row is [0 0 1] and its upper-left 2 × 2 block is the identity. Given a 3 × 3 matrix representing a transformation in homogeneous coordinates,
(a) how can you tell whether the transformation is affine or not?
(b) How can you tell whether the transformation is linear or not?
(c) How can you tell whether it represents a rotation about the origin?
(d) How can you tell if it represents a uniform scale?
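The kinds of tests the exercise is after can be phrased directly as predicates on the matrix entries (a sketch of one reasonable set of answers; the helper names are mine):

```python
import numpy as np

def is_affine(M, eps=1e-9):
    # Affine maps of the plane have last row [0 0 1] in homogeneous coordinates.
    return np.allclose(M[2], [0.0, 0.0, 1.0], atol=eps)

def is_linear(M, eps=1e-9):
    # Linear maps are affine maps that fix the origin: no translation part.
    return is_affine(M, eps) and np.allclose(M[:2, 2], 0.0, atol=eps)

def is_rotation(M, eps=1e-9):
    # Rotation about the origin: linear, with an orthogonal
    # upper-left 2x2 block of determinant +1.
    A = M[:2, :2]
    return (is_linear(M, eps)
            and np.allclose(A.T @ A, np.eye(2), atol=eps)
            and np.isclose(np.linalg.det(A), 1.0, atol=eps))

def is_uniform_scale(M, eps=1e-9):
    # Uniform scale: linear, upper-left block a multiple of the identity.
    A = M[:2, :2]
    return is_linear(M, eps) and np.allclose(A, A[0, 0] * np.eye(2), atol=eps)

t = np.pi / 3
rot = np.array([[np.cos(t), -np.sin(t), 0.0],
                [np.sin(t),  np.cos(t), 0.0],
                [0.0,        0.0,       1.0]])
assert is_rotation(rot) and not is_uniform_scale(rot)
```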
Exercise 10.11: Suppose we have a linear transformation T : R^2 → R^2, and two coordinate systems with bases {u_1, u_2} and {v_1, v_2}; all four basis vectors are