The comparison shown in Fig. 2.8c, d serves the same purpose of performance evaluation as that described above; the difference lies in the approach to content analysis for the likelihood computation, which here is based on the output of the SVM employed for the active learning-based STRF. In this case, N_R = 20 was adopted for the evaluation of precision as a function of the number of STRF iterations, and N_C = 50 for the evaluation of the PRC. Since the initial retrieval is simply a random ranking, precision was evaluated starting from the first STRF iteration.
Still, we can observe the improvement resulting from the integration through the
Bayesian framework.
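As a concrete illustration of the evaluation metric, precision at a cutoff N_R is the fraction of relevant images among the top-N_R retrieved after each STRF iteration. The following is a minimal sketch; the names `ranked_ids` and `relevant_ids` and the toy data are illustrative, not from the text:

```python
def precision_at_n(ranked_ids, relevant_ids, n=20):
    """Fraction of the top-n retrieved images that are relevant (precision at N_R)."""
    top = ranked_ids[:n]
    hits = sum(1 for img in top if img in relevant_ids)
    return hits / n

# Toy example: a ranking of 20 image ids, of which ids 0-11 are relevant.
ranking = list(range(20))
relevant = set(range(12))
print(precision_at_n(ranking, relevant, n=20))  # 0.6
```

In an experiment such as the one above, this quantity would be recomputed after each feedback iteration to trace precision as a function of the number of STRF iterations.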
An interface with NN-CLBIR enabled has been implemented to demonstrate the effectiveness of the system in terms of the performance improvement gained by accumulating user history. Illustrated in Fig. 2.9a, b are the top 20 images retrieved using NN-CLBIR. The result on the left was obtained with a system whose a priori knowledge was extracted from the data of 1,000 users, while the result on the right is based on a priori knowledge learned from the data of 1,400 users. The query is selected from the semantic class of the theme soldier. In Fig. 2.9a, the last four images do not belong to this class, whereas in Fig. 2.9b all of the top 20 images are relevant to the query.
2.6 Summary
The kernel approach makes use of a nonlinear kernel-induced inner product, instead of the traditional Euclidean inner product, to measure the similarity of two vectors. In a relevance feedback session, the nonlinear kernel approach implements the nonlinear mapping function to analyze the role of the users in perceiving image similarity. The result is a high-performance learning machine that can cope with the small size of the training sample set while converging quickly. The new learning algorithms for nonlinear kernel-based RF can be categorized into two groups. The first group includes the single-class RBF, the adaptive RBF, and gradient-descent-based learning, in which hard constraints are used to force a clear separation of the RF samples. In the second group, soft constraints are used to allow more support vectors to be included, leading to the so-called fuzzy RBF formulations. Much of the chapter is meant to build the theoretical footing for the machine learning models in the subsequent chapters.
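The kernel-induced similarity can be illustrated with a Gaussian RBF kernel, one common choice (the chapter's specific formulations may differ). This minimal sketch contrasts it with the plain Euclidean inner product; the function names and the `sigma` width parameter are illustrative:

```python
import math

def euclidean_inner(x, y):
    """Traditional Euclidean inner product of two vectors."""
    return sum(a * b for a, b in zip(x, y))

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian RBF kernel: an inner product in the kernel-induced feature space."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

x, y = [1.0, 0.0], [0.0, 1.0]
print(euclidean_inner(x, y))  # 0.0 -- orthogonal, so no similarity signal
print(rbf_kernel(x, y))       # exp(-1), approximately 0.3679
```

The kernel value depends nonlinearly on the distance between the vectors, which is what allows an RBF-based RF machine to model nonlinear decision boundaries from a small set of feedback samples.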
In addition, the nonlinear-kernel approach in STRF is extended to a Bayesian fusion model. The STRF provides a content component that can be combined with a context component from LTRF through a Bayesian framework. The result can be regarded as a retrieval system with a memory, which incrementally accumulates high-level semantic knowledge and thereby helps to bridge the semantic gap in future retrievals performed by prospective users.
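The fusion idea can be sketched as a per-image product of a short-term content likelihood (e.g., from the SVM/RBF analysis) and a long-term context prior (from accumulated user history), normalized into a posterior used for re-ranking. All names and numbers below are illustrative, not from the text:

```python
def bayesian_fuse(likelihood, prior):
    """Posterior score per image, proportional to likelihood (STRF) x prior (LTRF)."""
    unnorm = {img: likelihood[img] * prior[img] for img in likelihood}
    z = sum(unnorm.values())  # normalizing constant
    return {img: s / z for img, s in unnorm.items()}

likelihood = {"img_a": 0.6, "img_b": 0.3, "img_c": 0.1}  # content analysis scores
prior = {"img_a": 0.2, "img_b": 0.5, "img_c": 0.3}       # accumulated user history
posterior = bayesian_fuse(likelihood, prior)
print(max(posterior, key=posterior.get))  # img_b: promoted by the context prior
```

The toy example shows the memory effect: an image the content analysis alone would not rank first can be promoted once the context prior learned from earlier users is folded in.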