Let $q_1, q_2, \ldots, q_r$ be $r$ vectors of size $m \times 1$ obtained as linear combinations of $X_1, X_2, \ldots, X_N$, that is,

$$q_j = \sum_{i=1}^{N} a_{ij} X_i = X a_j, \qquad j = 1, 2, \ldots, r \tag{3.157}$$
where $a_j = [a_{1j} \; a_{2j} \; \cdots \; a_{Nj}]^T$ is a unit-norm vector and $X = [X_1 \; X_2 \; \cdots \; X_N]$ is a data matrix.
The sample variance of $q_j$ is given by

$$s_j^2 = \frac{1}{N}\,(q_j - m a_j)^T (q_j - m a_j) = \frac{1}{N}\,(X a_j - m a_j)^T (X a_j - m a_j) = a_j^T \left[ \frac{1}{N}\,(X - m)^T (X - m) \right] a_j = a_j^T R a_j \tag{3.158}$$

where $m$ denotes the sample mean of the data and $R = \frac{1}{N}(X - m)^T (X - m)$ is the sample covariance matrix.
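As a sanity check, the chain of equalities in (3.158) can be verified numerically. The sketch below (illustrative names, synthetic data) builds a data matrix whose columns hold observations of the $X_i$, forms $q_j = X a_j$ for a random unit-norm $a_j$, and confirms that the sample variance of $q_j$ equals $a_j^T R a_j$:

```python
import numpy as np

rng = np.random.default_rng(0)

num_vars = 5      # number of variables X_1, ..., X_N
num_obs = 1000    # observations per variable

# Data matrix whose i-th column holds the observations of X_i.
X = rng.normal(size=(num_obs, num_vars))

# Random unit-norm coefficient vector a_j.
a = rng.normal(size=num_vars)
a /= np.linalg.norm(a)

# Linear combination q_j = X a_j, as in (3.157).
q = X @ a

# Sample covariance matrix R of the variables (columns of X).
R = np.cov(X, rowvar=False, bias=True)

# Both sides of (3.158) agree to floating-point precision.
lhs = q.var()       # sample variance of q_j
rhs = a @ R @ a     # a_j^T R a_j
assert np.isclose(lhs, rhs)
```

The check holds for any $a_j$, not only the eigenvectors of $R$; the eigenvectors are singled out in the next step because they maximize this variance.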
We now maximize $s_j^2 = a_j^T R a_j$ with respect to the unit-norm vector $a_j$. The solution is the normalized eigenvector corresponding to the largest eigenvalue of the sample covariance matrix $R$. Therefore, the $r$ principal components $q_1, q_2, \ldots, q_r$ are the normalized eigenvectors corresponding to the $r$ largest eigenvalues of $R$. Note that the principal components are pairwise orthogonal. Once the principal components are established, any random vector in this class can be approximated by linear combinations of these principal components, that is,

$$X = \sum_{i=1}^{r} a_i q_i \tag{3.159}$$

where

$$a_i = X^T q_i, \qquad i = 1, 2, \ldots, r \tag{3.160}$$
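Equations (3.159) and (3.160) say that, once the eigenvectors of $R$ are in hand, a vector is approximated by projecting it onto the leading eigenvectors and summing the projections back up. A minimal NumPy sketch under stated assumptions (synthetic data; centering by the mean, which the text leaves implicit):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic correlated data: 3 variables that all track one latent signal,
# so a single principal component captures almost all of the variance.
n = 2000
latent = rng.normal(size=n)
data = np.stack([latent + 0.05 * rng.normal(size=n) for _ in range(3)])  # 3 x n

mean = data.mean(axis=1, keepdims=True)
centered = data - mean
R = centered @ centered.T / n            # 3 x 3 sample covariance matrix

# Normalized eigenvectors of R, sorted by decreasing eigenvalue.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
r = 1
Q = eigvecs[:, order[:r]]                # principal components q_1, ..., q_r

# (3.160): coefficients a_i = X^T q_i, then (3.159): X ~ sum_i a_i q_i.
coeffs = Q.T @ centered                  # r x n
approx = Q @ coeffs                      # rank-r reconstruction

# Relative reconstruction error stays small: one component suffices here.
rel_err = np.linalg.norm(centered - approx) / np.linalg.norm(centered)
```

The quality of the rank-$r$ approximation is governed by the discarded eigenvalues: the squared relative error equals the fraction of total variance carried by the $N - r$ smallest eigenvalues of $R$.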
Example 3.41
Figure 3.18 shows 2-D scatter data obtained by plotting two neighboring pixels $x = f(i, j)$ and $y = f(i+1, j)$ of the LENA image. The estimated covariance matrix of this data set is

$$R = \begin{bmatrix} 0.2529 & 0.2512 \\ 0.2512 & 0.2525 \end{bmatrix}$$
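The principal directions of this example follow from the eigen-decomposition of $R$. A quick NumPy check (the matrix entries are those given above; the eigenvalues are computed from them, not quoted from the text):

```python
import numpy as np

# Covariance matrix of neighboring LENA pixels, as estimated in the text.
R = np.array([[0.2529, 0.2512],
              [0.2512, 0.2525]])

# eigh returns eigenvalues in ascending order for a symmetric matrix.
eigvals, eigvecs = np.linalg.eigh(R)

# The strong correlation between neighboring pixels gives one dominant
# eigenvalue (~0.5039 vs ~0.0015); its eigenvector lies close to the
# 45-degree line, i.e., approximately (1, 1) / sqrt(2).
dominant = eigvecs[:, -1]
```

Nearly all of the variance thus lies along $x \approx y$, which is exactly the redundancy between neighboring pixels that PCA-based decorrelation exploits.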