RBF width model 2:
The feature relevance is also related to the sample variance in the positive set $\{x_{mi}\}_{m=1}^{N_p}$, and thus the RBF width can also be obtained by

$$\sigma_i = \exp\left(\beta\,\mathrm{Std}_i\right) \qquad (2.36)$$

$$\mathrm{Std}_i = \left[\frac{1}{N_p}\sum_{m=1}^{N_p}\left(x_{mi}-\bar{x}_i\right)^2\right]^{1/2} \qquad (2.37)$$

where $\mathrm{Std}_i$ is the standard deviation of the members in the set $\{x_{mi}\}_{m=1}^{N_p}$ (computed about the per-feature mean $\bar{x}_i$), which is inversely proportional to their density (Gaussian distribution), and $\beta$ is a positive constant. The parameter $\beta$ can be chosen to maximize or minimize the influence of $\mathrm{Std}_i$ on the RBF width. For example, when $\beta$ is large, a change in $\mathrm{Std}_i$ is exponentially reflected in the RBF width $\sigma_i$.
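As a quick illustration of Eqs. (2.36) and (2.37), the sketch below computes the per-feature widths from a matrix of positive (relevant) samples. The function name, the NumPy-based interface, and the default value of beta are illustrative assumptions, not part of the original algorithm description.

```python
import numpy as np

def rbf_widths_model2(positive_samples, beta=1.0):
    """Per-feature RBF widths from the positive set (Eqs. 2.36-2.37).

    positive_samples : array of shape (N_p, n_features) holding the
                       feature vectors the user marked as relevant.
    beta             : positive constant controlling how strongly the
                       spread of the positive set influences the width.
    """
    x = np.asarray(positive_samples, dtype=float)
    # Eq. (2.37): standard deviation of each feature over the positive set
    std = np.sqrt(np.mean((x - x.mean(axis=0)) ** 2, axis=0))
    # Eq. (2.36): the width grows exponentially with the spread, so tightly
    # clustered (i.e., consistently relevant) features obtain small widths
    return np.exp(beta * std)
```

For instance, calling `rbf_widths_model2(X_pos, beta=2.0)` doubles the exponent relative to `beta=1.0`, making the widths considerably more sensitive to the spread of the positive set.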
Both models provide a small value of $\sigma_i$ if the $i$-th feature is highly relevant. This allows higher sensitivity to any change in the distance $d_i = |x_i - z_i|$. In contrast, a high value of $\sigma_i$ is assigned to the non-relevant feature, so that the corresponding
vector component can be disregarded when determining the similarity. Table 2.3
summarizes the RBF-based relevance feedback algorithm using RBF center model
1 and RBF width model 1.
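The role of the widths in the similarity measure can be sketched as follows. The exact Gaussian-shaped similarity used by the algorithm is the one summarized in Table 2.3, which is not reproduced in this excerpt; the example therefore assumes a simple sum-of-Gaussians form purely to show how a small $\sigma_i$ heightens sensitivity to $d_i = |x_i - z_i|$ while a large $\sigma_i$ effectively disregards the feature.

```python
import numpy as np

def rbf_similarity(x, z, sigma):
    """Sum-of-Gaussians RBF similarity between a database vector x and a
    query/center z, with one width sigma_i per feature (assumed form).
    """
    x, z, sigma = (np.asarray(a, dtype=float) for a in (x, z, sigma))
    d = np.abs(x - z)          # per-feature distance d_i = |x_i - z_i|
    # small sigma_i -> term reacts sharply to changes in d_i (relevant feature)
    # large sigma_i -> term stays near 1 regardless of d_i (feature ignored)
    return float(np.sum(np.exp(-(d ** 2) / (2.0 * sigma ** 2))))
```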
2.3.3 Experimental Results
This section reports the experimental results [ 35 , 329 ] of the nonlinear RBF
approach in comparison with linear adaptive retrieval methods. Table 2.4
describes the database and the feature extraction methods used in the experiment.
The Laplacian mixture model (LMM) demonstrated in [ 35 ] is applied to the texture
images for feature characterization. Table 2.5 summarizes the learning procedures
of all methods of comparison, which comprise the RBF method, the query
adaptation method (QAM), and the metric adaptation method (MAM). Table 2.6
summarizes the retrieval results in terms of average precision. An initial precision
of 76.7 %, averaged over all queries, was obtained. The precision improved significantly
once the weighting functions were updated. During relevance feedback, most of
the performance gain was achieved after the first iteration, with only a slight
improvement after the second. Employing the nonlinear RBF method yielded a
significant improvement in retrieval efficiency.
The final results, after learning, show that RBF-1 gave the best performance with
88.12 % correct retrievals, followed by RBF-2 (87.37 %), and MAM (80.74 %) at a
distant third. The QAM result is also included for benchmarking purposes.
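The evaluation protocol described above, precision averaged over all queries and tracked across feedback iterations, can be sketched as below. The top-k cutoff, the data layout, and the function names are assumptions made for illustration; the actual settings are those of the experiment reported in [ 35 , 329 ].

```python
import numpy as np

def precision_at_k(ranking, relevant_ids, k):
    """Fraction of the top-k retrieved items that are relevant."""
    top = list(ranking)[:k]
    return sum(1 for item in top if item in relevant_ids) / float(k)

def average_precision_per_iteration(runs, relevant, k=16):
    """Precision averaged over all queries, one value per feedback iteration.

    runs     : dict query_id -> list of rankings, one ranking (ordered list
               of database item ids) per relevance feedback iteration.
    relevant : dict query_id -> set of ground-truth relevant item ids.
    """
    n_iterations = len(next(iter(runs.values())))
    return [
        float(np.mean([precision_at_k(runs[q][t], relevant[q], k) for q in runs]))
        for t in range(n_iterations)
    ]
```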
Figure 2.1 illustrates retrieval examples with and without similarity learning.
It shows some of the difficult patterns analyzed, which clearly illustrate the
superiority of the RBF method.