degree of correlation. Its performance, however, degraded after two iterations as
the retrieved samples became more strongly correlated. This suggests that the EDLS
may not be suitable for constructing the RBF network under this learning condition.
With the same RBF widths, the OLS learning strategy was more stable and performed
considerably better than the EDLS.
It was observed that the RBF centers critically influenced the performance
of the RBF classifier, and that the RBF classifier constructed by matching all
retrieved samples exactly to the RBF centers degraded the retrieval performance.
The OLS algorithm was fairly successful at resolving this problem by choosing a
subset of the retrieved samples as the RBF centers. However, the OLS produced
a less adequate RBF network than the ARBFN. In ARBFN learning, every
available positive sample was treated as important, and the centers were
shifted away from negative samples while the weighted norm parameters were updated
during the adaptive cycles. The ARBFN also coped well with the small sample
sets encountered. The ARBFN is thus the most adequate model for the current
application.
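The anti-reinforcement step described above, in which centers are shifted away from negative samples, might be sketched as follows. The nearest-negative rule and the step size `eta` are illustrative assumptions, not the exact ARBFN update:

```python
import numpy as np

def adapt_centers(centers, negatives, eta=0.1):
    """Shift each RBF center a small step away from its nearest
    negative sample (a hedged sketch of the anti-reinforcement
    idea; the update rule and eta are illustrative assumptions)."""
    adapted = centers.copy()
    for i, c in enumerate(adapted):
        # find the negative sample closest to this center
        d = np.linalg.norm(negatives - c, axis=1)
        x_neg = negatives[np.argmin(d)]
        # move the center away from that negative sample
        adapted[i] = c + eta * (c - x_neg)
    return adapted
```

In this sketch each adaptive cycle would call `adapt_centers` once with the negative samples gathered from the user's latest feedback round.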
The retrieval performance of the ARBFN was next compared to the single-
class model discussed in Sects. 2.2-2.3, using a new query set containing
59 images randomly selected from different categories. The methods compared
were ARBFN, single-RBF, OPT-RF, and MAM. Two criteria were employed as
performance measures: first, the precision Pr measured over the top N_c retrieved
images, with N_c set to 10, 16, 25, and 50; and second, a precision-versus-recall
graph. Relevance feedback, however, was applied only to the top 16 retrieved images.
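For reference, both measures can be computed directly from a ranked result list. The function and variable names below are illustrative, not taken from the text:

```python
def precision_at_n(ranked_ids, relevant_ids, n):
    """Fraction of the top-n retrieved images that are relevant
    (the Pr measure at cutoff N_c; a generic sketch)."""
    top = ranked_ids[:n]
    return sum(1 for i in top if i in relevant_ids) / n

def recall_at_n(ranked_ids, relevant_ids, n):
    """Fraction of all relevant images that appear in the top n."""
    top = ranked_ids[:n]
    return sum(1 for i in top if i in relevant_ids) / len(relevant_ids)
```

Averaging `precision_at_n` over all 59 queries at each cutoff would reproduce the kind of figures reported in Table 2.14.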
Table 2.14 summarizes the precision results averaged over all queries, measured
from the top 10, 16, 25, and 50 retrieved images. It can be seen that the learning
methods provided a significant improvement in each of the first three iterations. The
ARBFN achieved the best precision results in all conditions, compared to the other
methods discussed. At N_c = 10, the ARBFN reached a near-perfect precision of
100 % after three iterations; that is, all of the top ten retrieved images were
relevant. The results also show that, at N_c = 16, more than 14 relevant images
appeared in the top-16 ranking set. The most important precision results are
perhaps those after the first iteration, since users would likely provide only
one round of relevance feedback. It was observed that the ARBFN delivered a
larger improvement than the other methods in this respect.
Figure 2.5a-c illustrates the average precision-versus-recall curves after one, two,
and three iterations, respectively. The behavior of the system without learning,
and the strong improvements with adaptive learning, can easily be seen. In all
cases, the precision at 100 % recall drops close to zero, indicating that it was
not possible to retrieve all the relevant images in the database, which had been
pre-classified by the Corel professionals. It is observed from Fig. 2.5a that the
ARBFN was superior to the single-RBF at the higher recall levels, while both
provided similar precision at the lower recall levels. The ARBFN also achieved
better precision than the single-RBF by up to 8.6 %, 7.3 %, and 6.5 % at one,
two, and three iterations, respectively.
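Curves of this kind can be generated by sweeping the rank cutoff over the full ranked list and recording a (recall, precision) point at each cutoff. A minimal, generic sketch (names are illustrative):

```python
def precision_recall_curve(ranked_ids, relevant_ids):
    """Return (recall, precision) points for every rank cutoff,
    sweeping from the top of the ranked list to the bottom
    (a generic sketch, not code from the text)."""
    points = []
    hits = 0
    for n, image_id in enumerate(ranked_ids, start=1):
        if image_id in relevant_ids:
            hits += 1
        points.append((hits / len(relevant_ids), hits / n))
    return points
```

When some relevant images are never retrieved, recall never reaches 1.0 and the curve's tail falls toward zero precision, matching the behavior seen in Fig. 2.5.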