associated with the largest eigenvalue $s_1$, where $E\{\cdot\}$ is the expectation operator, $\mathbf{x} = \{x_1, x_2, \ldots, x_n\}$ is the variables' random vector, and $\boldsymbol{\mu}_x$ is the mean vector of $\mathbf{x}$. Instead of using the expectation, we can estimate the covariance matrix as a sample covariance matrix

$$\hat{C}_x = \frac{1}{p} X^T X \qquad (16.19)$$
where the sample mean has been removed from each variable, and $p$ is the number of observations. The remaining PCs are found by means of the eigenvectors of the covariance matrix, ordered such that their associated eigenvalues are in decreasing order. The eigenvalues equal the variance explained by the corresponding PC, and the sum of the eigenvalues equals the variance in the original observations.
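As a minimal sketch of how Equation 16.19 and the eigenvalue ordering can be computed, assuming NumPy and a data layout of $p$ observations (rows) by $n$ variables (columns); the example data are synthetic:

```python
import numpy as np

# Hypothetical example data: p observations (rows) of n variables (columns).
rng = np.random.default_rng(0)
p, n = 100, 5
X = rng.standard_normal((p, n))

# Remove the sample mean from each variable, as the text requires.
X = X - X.mean(axis=0)

# Sample covariance matrix, Eq. (16.19): C_hat = (1/p) X^T X.
C_hat = (X.T @ X) / p

# eigh suits symmetric matrices; it returns eigenvalues in ascending
# order, so reverse to get them in decreasing order as in the text.
eigvals, eigvecs = np.linalg.eigh(C_hat)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Each eigenvalue is the variance explained by the corresponding PC.
print("Variance explained per PC:", eigvals)
```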
So, we can write

$$\sum_{i=1}^{n} s_i = \sum_{i=1}^{n} \sigma_i^2,$$

where $\sigma_i^2$ is the variance of the $i$th variable. In Figure 16.5, a bidimensional distribution of data points is shown, along with the directions of the eigenvectors of the data matrix. In the same figure, the eigenvalues associated with each PC are shown. If we consider the voxels as random variables, then the matrix $\hat{C}_x$ accounts for the spatial covariance structure in the data set, i.e., the covariances among the voxels. The $n$-dimensional eigenvectors $u_i$ are eigenimages and span the column space of the data matrix. On the other hand, if we consider the time points as variables, the covariance matrix becomes

$$C_x = \frac{1}{n} X X^T \qquad (16.20)$$
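A short sketch of the two dual covariance computations, assuming NumPy and an fMRI-like layout of $p$ time points (rows) by $n$ voxels (columns); the dimensions and data are illustrative only. It also checks the variance identity above numerically, since the sum of the eigenvalues and the sum of the per-variable variances both equal the trace of the covariance matrix:

```python
import numpy as np

# Hypothetical fMRI-like data: p time points (rows) by n voxels (columns).
rng = np.random.default_rng(1)
p, n = 50, 200
X = rng.standard_normal((p, n))
X = X - X.mean(axis=0)  # remove each voxel's mean

# Voxels as variables, Eq. (16.19): n x n spatial covariance whose
# eigenvectors are the n-dimensional eigenimages.
C_spatial = (X.T @ X) / p

# Time points as variables, Eq. (16.20): p x p temporal covariance.
C_temporal = (X @ X.T) / n

# Sum of eigenvalues equals the total variance of the original variables.
s = np.linalg.eigvalsh(C_spatial)
print(np.allclose(s.sum(), np.trace(C_spatial)))        # True
print(np.allclose(s.sum(), (X**2).mean(axis=0).sum()))  # True
```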
FIGURE 16.5 Bidimensional distribution of data points, shown along with the principal components and the associated eigenvalues ($s_1 = 0.9532$, $s_2 = 0.16$).
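A sketch that mirrors Figure 16.5, assuming NumPy and Matplotlib; the synthetic distribution, and therefore the eigenvalues it produces, are illustrative and will not reproduce the figure's values:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic bidimensional distribution with correlated coordinates.
rng = np.random.default_rng(2)
X = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 0.4]], size=500)
X = X - X.mean(axis=0)

# PCs from the sample covariance, eigenvalues in decreasing order.
C = (X.T @ X) / len(X)
eigvals, eigvecs = np.linalg.eigh(C)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

plt.scatter(X[:, 0], X[:, 1], s=5, alpha=0.5)
for val, vec in zip(eigvals, eigvecs.T):
    # Draw each PC direction scaled by the standard deviation it explains.
    plt.arrow(0, 0, *(3 * np.sqrt(val) * vec), width=0.02, color="red")
plt.axis("equal")
plt.show()
```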
 