The $t$-th topical influence strength $\psi_{m_1,m_2}(t)$ from user $U_{m_1}$ to user $U_{m_2}$ can be estimated by the number of words/visual descriptors for user $U_{m_2}$ which are influenced by $U_{m_1}$ in the $t$-th topic, i.e., $N_{U,C,S,Z}(U_{m_2}, U_{m_1}, 0, Z_t)$:

$$
\psi_{m_1,m_2}(t) = \frac{N_{U,C,S,Z}(U_{m_2}, U_{m_1}, 0, Z_t) + \alpha\gamma}{N_{U,S,Z}(U_{m_2}, 0, Z_t) + |C_{U_{m_2}}|\,\alpha\gamma} \qquad (4.8)
$$
This equation is quite intuitive: if a user $U_{m_2}$ has more tag words or visual images from topic $Z_t$ that are likely to be influenced by the contact user $U_{m_1}$, then $U_{m_1}$ is assumed to influence $U_{m_2}$ strongly in the $t$-th topic.
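As a concrete illustration, Eq. (4.8) can be computed directly from the Gibbs-sampling counts. The function and parameter names below are hypothetical; `alpha_gamma` stands for the smoothing term $\alpha\gamma$ in the equation.

```python
def influence_strength(n_ucsz: int, n_usz: int,
                       n_contacts: int, alpha_gamma: float) -> float:
    """Topical influence strength of Eq. (4.8).

    n_ucsz:      N_{U,C,S,Z}(U_m2, U_m1, 0, Z_t) -- tokens of U_m2 in
                 topic Z_t attributed to influence from contact U_m1.
    n_usz:       N_{U,S,Z}(U_m2, 0, Z_t) -- all influenced tokens of
                 U_m2 in topic Z_t.
    n_contacts:  |C_{U_m2}|, the number of contacts of U_m2.
    alpha_gamma: the smoothing hyperparameter term (alpha * gamma).
    """
    return (n_ucsz + alpha_gamma) / (n_usz + n_contacts * alpha_gamma)
```

With zero counts the estimate falls back to the uniform prior $1/|C_{U_{m_2}}|$, which is exactly the role of the smoothing term.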
4.4 General Framework for Personalized Image Search
In this section, we propose a risk minimization-based framework that incorporates the derived topic-sensitive influences into the multimedia application of personalized image search. Risk minimization is a popular information retrieval framework with a solid theoretical foundation [ 14 ]. It represents both queries and documents with language models (LM) [ 22 ], each drawn from its own generative process. Risk minimization views the retrieval of relevant documents from the perspective of Bayesian decision theory, so the goal is equivalent to minimizing the expected loss.
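The Bayesian decision-theoretic view above can be sketched in a few lines. This is an illustrative example, not the chapter's exact formulation: it assumes a simple 0/1 loss, under which minimizing expected loss reduces to ranking by the posterior probability of relevance.

```python
def expected_loss(p_relevant: float, loss_if_rel: float = 0.0,
                  loss_if_nonrel: float = 1.0) -> float:
    # E[loss | retrieve d] = L(retrieve, rel) P(rel | q, d)
    #                      + L(retrieve, nonrel) P(nonrel | q, d)
    return loss_if_rel * p_relevant + loss_if_nonrel * (1.0 - p_relevant)

def rank_by_risk(posteriors: dict) -> list:
    # Retrieve documents in order of increasing expected loss; for a
    # 0/1 loss this equals ranking by decreasing relevance probability.
    return sorted(posteriors, key=lambda d: expected_loss(posteriors[d]))
```

Richer loss functions (e.g., penalizing redundancy) change the ranking rule, which is what makes the framework flexible enough to absorb personalization signals.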
4.4.1 Risk Minimization Framework
In the context of personalized image search, we view the query as generated from a probabilistic process associated with the searcher $u$, and each image as generated from a probabilistic process associated with the candidate image set. Specifically, the query is the result of first choosing a language model and then generating the query from that model; the generative process of each image is similar. Note that the query and image language models provide an entry point for incorporating rich information. For example, the query model can encode detailed user information and the search context when the query is issued (time, location, the searcher's mood, etc.); the image model can encode information about the uploader, the annotators, and the comments of the candidate images. In this chapter, we incorporate user interest into the query model and annotator information into the image model.
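One common way to fold user interest into a query language model is linear interpolation with the query's own term distribution. The sketch below illustrates this idea under that assumption; the chapter's exact construction may differ, and all names here are hypothetical.

```python
def personalized_query_model(query_lm: dict, user_interest_lm: dict,
                             lam: float = 0.7) -> dict:
    """Interpolate a query term distribution with a user-interest
    distribution: p(w | q, u) = lam * p(w | q) + (1 - lam) * p(w | u)."""
    vocab = set(query_lm) | set(user_interest_lm)
    return {w: lam * query_lm.get(w, 0.0)
               + (1.0 - lam) * user_interest_lm.get(w, 0.0)
            for w in vocab}
```

Because the interpolation is convex, the result remains a valid probability distribution whenever both inputs are.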
We assume each candidate image d is associated with visual content v d and
annotation w d . The generative process of query q and image d is illustrated in Fig. 4.4 .
$\theta_Q^u$ denotes the parameter of the query model by user $u$; $\theta_D^v$ and $\theta_D^w$ denote the parameters of the image model for generating the visual content and the textual annotation, respectively. For each image $d$, there are two hidden relevance variables $r(q, u, v_d)$ and $r(q, u, w_d)$, which