eigenvalues. Let $\lambda_n$ (where $n = 1, 2, \ldots, N \times G$) represent the eigenvalues in decreasing order. We select the subspace dimension (i.e. the number of eigenvectors) so as to retain 90% of the energy and project the Contourlet coefficients onto this subspace.
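As a concrete illustration, the 90%-energy rule can be sketched as follows in NumPy; the helper name `select_subspace_dim` and its interface are our own, not from the paper.

```python
import numpy as np

def select_subspace_dim(eigenvalues, energy=0.90):
    """Smallest L whose leading eigenvalues retain the requested
    fraction of the total energy; `eigenvalues` is assumed sorted
    in decreasing order, as in the text."""
    cumulative = np.cumsum(eigenvalues) / np.sum(eigenvalues)
    return int(np.searchsorted(cumulative, energy)) + 1
```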
If $U^{L}_{sk}$ represents the first $L$ eigenvectors of $U_{sk}$, then the subspace Contourlet coefficients at scale $s$ and orientation $k$ are given by

$$B_{sk} = (U^{L}_{sk})^{T}\,(A_{sk} - \mu_{sk}\,p), \qquad (4)$$

where $U^{L}_{sk}$ represents the subspace for the Contourlet coefficients at scale $s$ and orientation $k$, and $\mu_{sk}$ is the corresponding mean vector. Note that $p$ is a row vector of all 1's, equal in dimension to the number of columns of $A_{sk}$, so that $\mu_{sk}\,p$ subtracts the mean from every column of $A_{sk}$.
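Continuing the sketch above, Eq. (4) might be implemented as follows, assuming each column of `A_sk` is the vectorized coefficient set of one training image and reusing `select_subspace_dim`:

```python
def contourlet_subspace(A_sk, energy=0.90):
    """Eq. (4): B_sk = (U^L_sk)^T (A_sk - mu_sk p).
    A_sk: d x m array, one vectorized coefficient set per column."""
    mu_sk = A_sk.mean(axis=1, keepdims=True)   # mean coefficient vector
    A0 = A_sk - mu_sk                          # mu_sk p replicates the mean across columns
    U, s, _ = np.linalg.svd(A0, full_matrices=False)
    lam = s ** 2                               # proportional to the covariance eigenvalues
    L = select_subspace_dim(lam, energy)       # retain 90% of the energy
    return U[:, :L].T @ A0, lam[:L]            # subspace coefficients and their eigenvalues
```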
Similar subspaces are calculated for different scales and orientations using the training data, and each time the subspace dimension is chosen so as to retain 90% of the energy. In our experiments, we considered three scales and a total of 15 orientations along with the low-pass sub-band image. Fig. 3 shows samples of a sub-band image and Contourlet coefficients at two scales and seven orientations.
The subspace Contourlet coefficients were normalized so that the variance along each of the $L$ dimensions becomes equal. This is done by dividing the subspace coefficients by the square root of the respective eigenvalues. The normalized subspace Contourlet coefficients at three scales and 15 orientations of each image are stacked to form a matrix of feature vectors $B$, where each column is a feature vector of the concatenated subspace Contourlet coefficients of an image. These features are once again projected to a linear subspace, however this time without subtracting the mean.
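The normalization and stacking step might look like the following sketch, continuing the code above; `subbands` is a hypothetical list holding one output pair per scale/orientation from `contourlet_subspace`:

```python
def stack_features(subbands):
    """Divide each subspace coefficient matrix by the square root of
    its eigenvalues and concatenate over all scales and orientations.
    subbands: list of (B_sk, lam) pairs, B_sk of shape L x m."""
    blocks = [B_sk / np.sqrt(lam)[:, None] for B_sk, lam in subbands]
    return np.vstack(blocks)   # matrix B: one feature column per image
```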
Since the feature dimension is usually large compared to the size of the training data, $BB^{T}$ is very large. Moreover, at most $N \times G - 1$ orthogonal dimensions (eigenvectors and eigenvalues) can be calculated for training data of size $N \times G$. The $(N \times G)$th eigenvalue is always zero. Therefore, we calculate the covariance matrix $C = B^{T}B$ instead and find the $N \times G - 1$ dimensional subspace as follows:

$$USV^{T} = C, \qquad (5)$$

$$U = BU \,/\, \sqrt{\mathrm{diag}(S)}. \qquad (6)$$
In Eqn. 6, each dimension (i.e. column of $BU$) is divided by the square root of the corresponding eigenvalue so that the eigenvectors in $U$ (i.e. its columns) are of unit magnitude. The last column of $BU$ is ignored to avoid division by zero. Thus $U$ defines an $N \times G - 1$ dimensional linear subspace. The feature vectors are projected to this subspace and used for classification:

$$F = U^{T}B. \qquad (7)$$
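Eqs. (5)-(7) amount to the standard small-covariance (eigenfaces-style) trick; a minimal sketch, assuming `B` holds one feature column per image:

```python
import numpy as np

def second_stage_projection(B):
    """Eqs. (5)-(7): subspace via the small covariance C = B^T B.
    B: d x (N*G) feature matrix with d >> N*G."""
    C = B.T @ B                    # small (N*G) x (N*G) covariance
    U, S, _ = np.linalg.svd(C)     # Eq. (5): U S V^T = C
    U, S = U[:, :-1], S[:-1]       # ignore the last (zero-eigenvalue) column
    U = (B @ U) / np.sqrt(S)       # Eq. (6): unit-norm eigenvectors of B B^T
    F = U.T @ B                    # Eq. (7): projected feature vectors
    return F, U
```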
3 Classification
We tested three different classification approaches. In the first approach, the cor-
relation between the features of the query and the training images was calculated
by
$$\gamma = \frac{n\sum t\,q - \left(\sum t\right)\left(\sum q\right)}{\sqrt{\left[n\sum t^{2} - \left(\sum t\right)^{2}\right]\left[n\sum q^{2} - \left(\sum q\right)^{2}\right]}}, \qquad (8)$$

where $t$ and $q$ denote a training and a query feature vector respectively, and the sums run over their $n$ components.
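A sketch of Eq. (8) as a matching score; this is the Pearson correlation written out explicitly (`np.corrcoef` would give the same value):

```python
import numpy as np

def correlation_score(t, q):
    """Eq. (8): normalized correlation between training features t
    and query features q (1-D arrays of equal length n)."""
    n = t.size
    num = n * np.sum(t * q) - np.sum(t) * np.sum(q)
    den = np.sqrt((n * np.sum(t ** 2) - np.sum(t) ** 2) *
                  (n * np.sum(q ** 2) - np.sum(q) ** 2))
    return num / den
```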