However, one disadvantage of 2D-PCA (compared to PCA) is that more coefficients are needed to represent an image. From (48), it is clear that the dimension of the 2D-PCA principal component matrix Y (m × K) is always much higher than that of PCA. To reduce the dimension of matrix Y, the conventional PCA is used for further dimensionality reduction after 2D-PCA.
Now, let the training set consist of M training images $\{I_1, \ldots, I_M\}$, with SDFs $\{U_1, \ldots, U_M\}$. All images are binary, pre-aligned, and normalized to the same resolution. As in [42], we obtain the mean level set function of the training shapes, $\bar{U}$, as the average of these M signed distance functions. To extract the shape variabilities, $\bar{U}$ is subtracted from each of the training SDFs. The obtained mean-offset functions can be represented as $\{\tilde{U}_1, \ldots, \tilde{U}_M\}$. These new functions are used to measure the variabilities of the training images. We use 80 training VB images of 120 × 120 pixels in our experiment. According to (46), the constructed matrix G will be:
$$G = \frac{1}{M} \sum_{i=1}^{M=80} \tilde{U}_i^{t}\, \tilde{U}_i \qquad (49)$$
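As a concrete illustration of this construction, the following NumPy sketch builds the mean level set function, the mean-offset functions, and the matrix G of (49), then extracts the leading eigenvectors. The array names (sdfs, offsets, B) and the random placeholder data are our own assumptions for illustration, not the implementation of [42, 44]:

```python
import numpy as np

# Illustrative stand-in for the training set: M = 80 SDFs of size 120 x 120.
# In practice these would be the signed distance functions of the aligned,
# binary VB training images.
M, m = 80, 120
sdfs = np.random.randn(M, m, m)      # placeholder for {U_1, ..., U_M}

U_bar = sdfs.mean(axis=0)            # mean level set function
offsets = sdfs - U_bar               # mean-offset functions {U~_1, ..., U~_M}

# Matrix G of Eq. (49): G = (1/M) * sum_i U~_i^t U~_i  (a 120 x 120 matrix)
G = sum(Ui.T @ Ui for Ui in offsets) / M

# Eigendecomposition; eigh returns eigenvalues in ascending order, so the
# optimal K eigenvectors are the last K columns (reversed to descending order).
eigvals, eigvecs = np.linalg.eigh(G)
K = 10
B = eigvecs[:, -K:][:, ::-1]         # b_1, ..., b_10
```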
The goal of 2D-PCA is to find the optimal K eigenvectors of G corresponding to the K largest eigenvalues. The value of K helps to capture the necessary shape variation with minimum information. Experimentally, we find that the minimum suitable value is K = 10 [44]; below this value, the accuracy of our segmentation algorithm falls drastically below that of other alternatives. After choosing the eigenvectors corresponding to the 10 largest eigenvalues, $b_1, b_2, \ldots, b_{10}$, we obtained the principal component matrix $Y_i$ (of size m × K, with m = 120 and K = 10) for each SDF of our training set (i = 1, 2, …, 80).
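Continuing the same illustrative sketch, each mean-offset SDF is projected onto these eigenvectors to obtain its m × K principal component matrix, together with the vectorized form used in the next step (Y and Y_star are assumed names):

```python
# Project each mean-offset SDF onto the K eigenvectors: Y_i = U~_i B,
# giving an m x K (120 x 10) principal component matrix per training SDF.
Y = np.stack([Ui @ B for Ui in offsets])   # shape (80, 120, 10)

# Vector representation Y*_i used by the subsequent conventional PCA step.
Y_star = Y.reshape(M, -1)                  # shape (80, 1200)
```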
For further dimensionality reduction, the conventional PCA is applied on the principal components $\{Y^*_1, \ldots, Y^*_M\}$. It should be noted that $Y^*_i$ is the vector representation of $Y_i$. The reconstructed components (after retransforming to matrix representation) will be:
$$Y_{\{l,h\}} = U\, e_{\{l,h\}}, \qquad (50)$$
where U is the matrix which contains the L eigenvectors corresponding to the L largest eigenvalues $\lambda_l$ (l = 1, 2, …, L), and $e_{\{l,h\}}$ is the set of model parameters, which can be described as [44]:
$$e_{\{l,h\}} = h\,\sqrt{\lambda_l}, \qquad (51)$$
where l = {1, …, L}, h = {−µ, …, µ}, and µ is a constant which can be chosen arbitrarily (in our experiments, we chose L = 4 and µ = 3).
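The sweep over l and h in (50) and (51) could be realized as below, continuing the sketch above. The conventional-PCA variables (Y_mean, U_pca, lam) are assumed names for the quantities around (50), and the centering convention is our own assumption, since (50) is stated without a mean term:

```python
# Conventional PCA on the vectorized components Y*_i (Y_star from above).
Y_mean = Y_star.mean(axis=0)
C = np.cov(Y_star, rowvar=False)           # 1200 x 1200 covariance matrix
lam_all, V = np.linalg.eigh(C)
L_modes, mu = 4, 3
lam = lam_all[-L_modes:][::-1]             # L largest eigenvalues
U_pca = V[:, -L_modes:][:, ::-1]           # corresponding L eigenvectors

Y_new = []
for l in range(L_modes):
    for h in range(-mu, mu + 1):
        e = np.zeros(L_modes)
        e[l] = h * np.sqrt(lam[l])         # Eq. (51): e_{l,h} = h * sqrt(lambda_l)
        y = U_pca @ e                      # Eq. (50): Y_{l,h} = U e_{l,h}
        Y_new.append(y.reshape(m, K))      # back to matrix representation
# len(Y_new) == N == L * (2*mu + 1) == 28 new principal component matrices
```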
The new principal components of the training SDFs are represented as $\{Y_1, \ldots, Y_N\}$ instead of $\{Y_1, \ldots, Y_M\}$, where N is the product of L and the number of elements in h, i.e. N = L(2µ + 1) [42]; with L = 4 and µ = 3, N = 28. Given the set $\{Y_1, \ldots, Y_N\}$, the new projected training SDFs are obtained as follows: