are called orthogonal if $\langle x, x_q \rangle = 0$, in which case we write $x \perp x_q$, i.e., they are geometrically perpendicular. Given two vectors, the smaller the magnitude of their inner product, the less similar they are.
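As a small NumPy illustration of this point (the vectors here are hypothetical examples, not taken from the text):

```python
import numpy as np

# Two orthogonal feature vectors: their inner product is zero.
x = np.array([1.0, 0.0, 2.0])
x_q = np.array([0.0, 3.0, 0.0])
print(np.dot(x, x_q))  # 0.0, so x and x_q are orthogonal

# A vector pointing in nearly the same direction as x yields a
# large inner product, reflecting high similarity.
x_similar = np.array([0.9, 0.1, 1.8])
print(np.dot(x, x_similar))
```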
The modified query vector $\hat{x}_q$ discussed in Eq. (2.7) is obtained from the training samples as:

$$\hat{x}_q = \alpha x_q + \frac{\beta}{N_p} \sum_{i=1}^{N_p} x_i - \frac{\varepsilon}{N_n} \sum_{i=1}^{N_n} x_i \qquad (2.8)$$
where $x_q = [x_{q1}, \ldots, x_{qP}]^t$ denotes the original query vector, and $(\alpha, \beta, \varepsilon)$ are suitable parameters [13]. The new query is obtained by adjusting the positive and negative terms of the original query. Adding the positive terms to the query moves the modified query close to the mean of the positive samples (i.e., $\hat{x}_q = \bar{x}$, and the inner product $\langle \bar{x}, \hat{x}_q \rangle = 1$). On the other hand, subtracting the negative terms from the query makes the modified query more dissimilar to the negative samples.
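A minimal NumPy sketch of the query update in Eq. (2.8); the sample vectors and the parameter values for $(\alpha, \beta, \varepsilon)$ are hypothetical choices for illustration, not values prescribed by the text:

```python
import numpy as np

def modify_query(x_q, positives, negatives, alpha=1.0, beta=0.75, eps=0.25):
    """Update the query per Eq. (2.8): move it toward the mean of the
    positive (relevant) samples and away from the mean of the negative
    (non-relevant) samples."""
    pos_mean = np.mean(positives, axis=0)  # (1/N_p) * sum of positive x_i
    neg_mean = np.mean(negatives, axis=0)  # (1/N_n) * sum of negative x_i
    return alpha * x_q + beta * pos_mean - eps * neg_mean

# Hypothetical 2-D feature vectors.
x_q = np.array([0.2, 0.8])
positives = np.array([[0.9, 0.1], [0.7, 0.3]])   # relevant samples
negatives = np.array([[0.1, 0.9]])               # non-relevant samples
print(modify_query(x_q, positives, negatives))
```

The updated query is pulled toward the positive cluster and pushed away from the negative one, exactly the geometric behavior the paragraph above describes.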
The query modification method has been widely used in information retrieval [13, 107] and image retrieval systems [14, 103]. However, one disadvantage of this model is that, for greater effectiveness, it requires the indexing structure to follow a term-weighting model, as in text retrieval. Such models assume that the query index terms are sparse, usually in a binary vector representation. Compared to text indexing, however, image feature vectors are mostly real-valued. Thus, a large number of terms must be applied to characterize images in order to overcome this problem [103], which also increases computational complexity.
2.2.3 Metric Adaptation Method
The Euclidean inner product may be extended to the Mahalanobis inner product

$$K(x, x_q) = \langle x, x_q \rangle_M = x^t M x_q \qquad (2.9)$$
with a weight matrix $M$. The Euclidean inner product is a special case of the Mahalanobis inner product with $M = I$. In this case, we assume that all the features are equally weighted in their importance, and there exists no inter-feature dependence. However, when the features are mutually independent but not isotropic, the Mahalanobis matrix takes the following form:

$$M = \mathrm{Diag}\{w_i\}, \quad i = 1, \cdots, P \qquad (2.10)$$
where the weights $\{w_i, i = 1, \cdots, P\}$ reflect the importance of the respective features.
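A short NumPy sketch of Eqs. (2.9) and (2.10); the feature weights chosen here are hypothetical:

```python
import numpy as np

def mahalanobis_inner_product(x, x_q, M):
    """K(x, x_q) = x^t M x_q, per Eq. (2.9)."""
    return x @ M @ x_q

P = 3
x = np.array([1.0, 2.0, 3.0])
x_q = np.array([0.5, 0.5, 0.5])

# Euclidean special case: M = I weights all features equally.
assert np.isclose(mahalanobis_inner_product(x, x_q, np.eye(P)), x @ x_q)

# Independent but non-isotropic features: M = Diag{w_i}, Eq. (2.10).
w = np.array([2.0, 1.0, 0.5])  # per-feature importance weights
M = np.diag(w)
print(mahalanobis_inner_product(x, x_q, M))
```

With the diagonal $M$, each coordinate's contribution to the inner product is simply scaled by its weight $w_i$, which is how the metric encodes per-feature importance.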