Appendix B
Projection Techniques
When the number of samples in the feature vector is large, the computational complexity of computing the decision boundary between the different clusters increases. It is therefore desirable to map the feature vector into a lower-dimensional space, thereby reducing the computational complexity. This mapping is achieved using the transformation matrix $W^T$, obtained as described below.
B.1 Principal Component Analysis
The covariance matrix of the vectors in the vector space is given as $C_X = E\big((X - \mu_X)(X - \mu_X)^T\big)$, where $X$ is the random vector associated with the vectors in the vector space $V$ and $\mu_X$ is the mean vector of the vector space. The diagonal elements of the matrix $C_X$ give the variances of the individual elements of the random vector $X$. Let the size of the random vector $X$ be $m \times 1$, and let the transformation vector $W_1$ of size $m \times 1$ be used to transform the random vector $X$ into the random variable $Y_1$ as $Y_1 = W_1^T X$. Note that the random variable $Y_1$ is of size $1 \times 1$. Let us formulate the objective function to optimize the vector $W_1$ such that the variance of the random variable $Y_1$ is maximized, subject to the constraint that $W_1^T W_1 = 1$. Note that the variance of the random variable $Y_1$ is given as $C_{Y_1} = W_1^T C_X W_1$.
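As a quick numerical check of this identity, the following sketch (an illustration using NumPy, not part of the original text; the data matrix and the unit vector are arbitrary) estimates the variance of the projected samples directly and compares it with $W_1^T C_X W_1$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 500 samples of an m = 3 dimensional random vector X.
X = rng.normal(size=(500, 3)) @ np.array([[2.0, 0.3, 0.0],
                                          [0.3, 1.0, 0.0],
                                          [0.0, 0.0, 0.5]])

C_X = np.cov(X, rowvar=False)      # sample covariance matrix C_X

w1 = rng.normal(size=3)
w1 /= np.linalg.norm(w1)           # unit-norm transformation vector W_1

Y1 = X @ w1                        # projected samples Y_1 = W_1^T X
print(np.var(Y1, ddof=1))          # sample variance of Y_1
print(w1 @ C_X @ w1)               # W_1^T C_X W_1 -- matches the above
```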
The solution is obtained as follows. The Lagrangian is given as

$$J = W_1^T C_X W_1 + \lambda\,(1 - W_1^T W_1) \qquad (B.1)$$

Differentiating with respect to $W_1$ (exploiting the symmetry of the covariance matrix $C_X$) gives $\partial J / \partial W_1 = 2\,C_X W_1 - 2\lambda W_1$; equating this to zero and solving for $W_1$, we get $C_X W_1 = \lambda W_1$. This indicates that $W_1$ is an eigenvector of $C_X$ corresponding to the eigenvalue $\lambda$, and the variance attained is $W_1^T C_X W_1 = \lambda$. To maximize the objective function, we therefore have to choose the eigenvector corresponding to the largest eigenvalue.
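A small check of this result (again an illustrative sketch with an arbitrary synthetic covariance matrix, not from the original text): among unit vectors, the projected variance $W^T C_X W$ is largest for the eigenvector of $C_X$ with the largest eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary symmetric positive-definite covariance matrix C_X.
A = rng.normal(size=(4, 4))
C_X = A @ A.T

# eigh returns eigenvalues in ascending order for symmetric matrices.
eigvals, eigvecs = np.linalg.eigh(C_X)
w1 = eigvecs[:, -1]                # eigenvector of the largest eigenvalue

# C_X W_1 = lambda W_1 holds for the chosen eigenvector.
assert np.allclose(C_X @ w1, eigvals[-1] * w1)

# No random unit direction achieves a larger projected variance.
best = 0.0
for v in rng.normal(size=(1000, 4)):
    w = v / np.linalg.norm(v)
    best = max(best, w @ C_X @ w)
print(best, "<=", w1 @ C_X @ w1)   # the eigenvector attains the maximum
```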
Thus, in general, the eigenvectors of the covariance matrix $C_X$ corresponding to the $n$ largest eigenvalues are arranged column-wise to obtain the transformation matrix $W$, and the lower-dimensional representation of $X$ is computed as $Y = W^T X$.
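Putting the pieces together, a minimal PCA sketch along these lines might look as follows (illustrative code; the data matrix, its dimensions, and the choice $n = 2$ are assumptions, not taken from the text). The mean is subtracted before projecting, consistent with the covariance definition above.

```python
import numpy as np

def pca_transform(X, n):
    """Project m-dimensional samples (rows of X) onto the n eigenvectors
    of the covariance matrix with the largest eigenvalues."""
    mu = X.mean(axis=0)                      # mean vector mu_X
    C_X = np.cov(X, rowvar=False)            # covariance matrix C_X
    eigvals, eigvecs = np.linalg.eigh(C_X)   # ascending eigenvalue order
    W = eigvecs[:, ::-1][:, :n]              # top-n eigenvectors, column-wise
    return (X - mu) @ W                      # Y = W^T (X - mu_X) per sample

# Example: reduce 1000 samples from m = 5 dimensions down to n = 2.
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5)) @ rng.normal(size=(5, 5))
Y = pca_transform(X, n=2)
print(Y.shape)                               # (1000, 2)
```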