Digital Signal Processing Reference
In-Depth Information
1) The image is rotated to the keypoint's main direction. We choose a 41×41 patch around each keypoint and calculate the horizontal and vertical gradients, which form a 2×39×39 = 3042-dimensional vector. These vectors form a matrix A of size n×3042, where n is the number of keypoints. After subtracting the mean, A = A − mean(A), the eigenvalues and eigenvectors of the covariance matrix, cov = AᵀA, are calculated. The first r eigenvectors are used to form a projection matrix w of size 3042×r. This projection is computed once and stored.
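The offline eigenspace computation described above can be sketched as follows (a minimal NumPy sketch; the exact gradient stencil and normalization are assumptions, since the text only specifies the patch size and the resulting dimensions):

```python
import numpy as np

def build_projection_matrix(patches, r=20):
    """Sketch of the PCA-SIFT eigenspace computation.

    patches: array of shape (n, 41, 41), one rotated patch per keypoint.
    Returns w, a (3042, r) projection matrix.
    """
    rows = []
    for p in patches:
        # Horizontal and vertical gradients on the 39x39 interior
        # (central differences are an assumption)
        gx = p[1:-1, 2:] - p[1:-1, :-2]   # 39 x 39
        gy = p[2:, 1:-1] - p[:-2, 1:-1]   # 39 x 39
        v = np.concatenate([gx.ravel(), gy.ravel()])  # 2*39*39 = 3042
        rows.append(v / (np.linalg.norm(v) + 1e-12))  # normalize
    A = np.asarray(rows)                  # n x 3042
    A = A - A.mean(axis=0)                # subtract the mean
    cov = A.T @ A                         # 3042 x 3042 covariance (up to scale)
    # eigh returns eigenvalues in ascending order; keep the top r eigenvectors
    vals, vecs = np.linalg.eigh(cov)
    w = vecs[:, ::-1][:, :r]              # 3042 x r
    return w
```

Because w is computed once from training patches and stored, the per-keypoint cost at matching time is only one 3042×r projection.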
2) Build the descriptor: choose the 41×41 patch around the keypoint to form a 3042-dimensional normalized gradient vector x. We project x into our feature space using the stored eigenspace w (3042×r, with r = 20) and generate the PCA-SIFT descriptor, an r-dimensional vector. Finally, the similarity between feature vectors determines the matching pairs.
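A minimal sketch of the projection and matching steps (the text only says that similarity between feature vectors decides the matches; the Euclidean distance and nearest-neighbour ratio test used below are assumptions):

```python
import numpy as np

def pca_sift_descriptor(x, w):
    """Project a 3042-dim normalized gradient vector x onto the stored
    eigenspace w (3042 x r, r = 20) to get an r-dim descriptor."""
    return x @ w

def match_descriptors(desc1, desc2, ratio=0.8):
    """Match two descriptor sets by Euclidean distance with a
    nearest-neighbour ratio test (the exact similarity criterion
    is an assumption here)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches
```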
2DPCA-SIFT
PCA requires the decomposition of a non-sparse mn×mn matrix: the high-dimensional tensor-model data are converted into vector form, which to some extent results in the curse of dimensionality. Compared with PCA, 2DPCA is a good solution: it is a subspace learning method based on the matrix pattern and does not need to transform the image (m×n) into a one-dimensional vector (1×mn) [11].
In this paper, we draw inspiration from 2DPCA and PCA-SIFT and propose a feature-point matching algorithm based on 2DPCA-SIFT. The main idea: use SIFT to find keypoints and extract location-, scale-, and rotation-invariant features, then use 2DPCA to reduce the dimensionality and establish a more compact feature descriptor.
Establish the 2DPCA-SIFT Descriptor:
1) Choose the 41×41 patch around each keypoint, giving a total of M image blocks, where M is the number of keypoints.
2) Rotate the image to the main direction for each keypoint, and calculate the horizontal and vertical gradients to form a 39×78 feature matrix.
3) Use 2DPCA to reduce the dimension.
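Steps 1 and 2 above can be sketched as follows (a minimal sketch; the gradient stencil and the side-by-side layout of the two gradient maps are assumptions consistent with the stated 39×78 size):

```python
import numpy as np

def feature_matrix(patch):
    """2DPCA-SIFT steps 1-2 (sketch): from a rotated 41x41 patch,
    form the 39 x 78 feature matrix of horizontal and vertical
    gradients placed side by side (the layout is an assumption)."""
    gx = patch[1:-1, 2:] - patch[1:-1, :-2]   # 39 x 39 horizontal gradient
    gy = patch[2:, 1:-1] - patch[:-2, 1:-1]   # 39 x 39 vertical gradient
    return np.hstack([gx, gy])                # 39 x 78
```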
The idea of 2DPCA is that the image A (m×n) is projected onto X through the linear transformation Y = AX, so we get an m-dimensional column vector Y. 2DPCA's core task is to find the optimal projection matrix X.
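The projection Y = AX is a single matrix product, illustrated here with the feature-matrix dimensions used above (d = 1 recovers the m-dimensional column vector described in the text):

```python
import numpy as np

# Sketch of the 2DPCA projection Y = A X: an m x n feature matrix A
# projected through an n x d matrix X yields an m x d matrix Y.
m, n, d = 39, 78, 1
A = np.random.default_rng(1).standard_normal((m, n))
X = np.random.default_rng(2).standard_normal((n, d))
Y = A @ X
assert Y.shape == (m, d)
```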
The specific process is as follows:
a) Suppose the reference image has M feature points, A_1, A_2, …, A_M, each A_i denoted by an m×n matrix. The mean matrix of the training samples is

    Ā = (1/M) Σ_{i=1}^{M} A_i                (11)
b) Get the projection matrix X by establishing the objective function. The total scatter of the projected samples can be characterized by the trace of the covariance matrix of the projected feature vectors.
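The excerpt ends before the eigen-solution step, but in the standard 2DPCA formulation the trace criterion is maximized by the top eigenvectors of the image scatter matrix G = (1/M) Σ (A_i − Ā)ᵀ(A_i − Ā). A sketch under that assumption:

```python
import numpy as np

def two_dpca(samples, d):
    """Sketch of 2DPCA training: samples is an (M, m, n) stack of
    feature matrices. Returns the n x d projection matrix X whose
    columns are the top-d eigenvectors of the image scatter matrix,
    the standard solution to the trace criterion described above."""
    A_bar = samples.mean(axis=0)                       # mean matrix, Eq. (11)
    centered = samples - A_bar
    # Image scatter matrix G = (1/M) * sum_i (A_i - A_bar)^T (A_i - A_bar)
    G = sum(C.T @ C for C in centered) / len(samples)  # n x n
    vals, vecs = np.linalg.eigh(G)                     # ascending eigenvalues
    X = vecs[:, ::-1][:, :d]                           # top-d eigenvectors
    return X
```

Note that G is only n×n (here 78×78), which is why 2DPCA avoids the mn×mn decomposition that vector-based PCA requires.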
 