Intuitively, if the models $\theta_Q^u$ and $\theta_D$ are similar, $\Delta(\theta_Q^u, \theta_D)$ should be small. With this assumption, we have
\[
\begin{aligned}
R(q,u,d) \propto{}& \mu \iint p(\theta_Q^u \mid q,u)\, p(\theta_D^v \mid d_v)\, \Delta(\theta_Q^u, \theta_D^v)\, \mathrm{d}\theta_D^v\, \mathrm{d}\theta_Q^u \\
&+ (1-\mu) \iint p(\theta_Q^u \mid q,u)\, p(\theta_D^w \mid d_w)\, \Delta(\theta_Q^u, \theta_D^w)\, \mathrm{d}\theta_D^w\, \mathrm{d}\theta_Q^u \\
\propto{}& \mu\, p(\hat{\theta}_Q^u \mid q,u)\, p(\hat{\theta}_D^v \mid d_v)\, \Delta(\hat{\theta}_Q^u, \hat{\theta}_D^v) \\
&+ (1-\mu)\, p(\hat{\theta}_Q^u \mid q,u)\, p(\hat{\theta}_D^w \mid d_w)\, \Delta(\hat{\theta}_Q^u, \hat{\theta}_D^w),
\end{aligned}
\]
where $\hat{\theta}_Q^u$, $\hat{\theta}_D^v$, and $\hat{\theta}_D^w$ are the posterior point estimates of the model parameters:
\[
\hat{\theta}_Q^u = \operatorname*{argmax}_{\theta_Q^u}\, p(\theta_Q^u \mid q, u), \qquad
\hat{\theta}_D^v = \operatorname*{argmax}_{\theta_D^v}\, p(\theta_D^v \mid d_v), \qquad
\hat{\theta}_D^w = \operatorname*{argmax}_{\theta_D^w}\, p(\theta_D^w \mid d_w).
\]
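For illustration only: under a multinomial LM with a flat prior, a posterior point estimate $\operatorname{argmax}_\theta p(\theta \mid \cdot)$ reduces to the maximum-likelihood unigram model, i.e. relative term frequencies. This is a generic sketch with made-up tokens, not the chapter's instantiation (which follows in the next subsection); the function and data names are ours.

```python
from collections import Counter

def mle_unigram(tokens):
    """Point estimate argmax_theta p(theta | tokens) under a flat prior:
    the maximum-likelihood unigram LM, p(t) = count(t) / total."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

# Hypothetical annotation tokens for one image d.
theta_d = mle_unigram(["sunset", "beach", "sunset", "sea"])
# theta_d["sunset"] == 0.5
```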
Since $p(\theta_Q^u \mid q,u)$ does not depend on $d$, and we further assume $p(\hat{\theta}_D^v \mid d_v)$ and $p(\hat{\theta}_D^w \mid d_w)$ are the same for all $d$, these factors can be ignored for ranking. The risk minimization framework finally reduces to the measurement of the similarity between LMs. We employ the Kullback-Leibler divergence to measure $\Delta(\cdot)$:
\[
\begin{aligned}
R(q,u,d) \propto{}& \mu\, \Delta(\hat{\theta}_Q^u, \hat{\theta}_D^v) + (1-\mu)\, \Delta(\hat{\theta}_Q^u, \hat{\theta}_D^w) \\
\propto{}& \mu \sum_t p(t \mid \hat{\theta}_Q^u) \log \frac{p(t \mid \hat{\theta}_Q^u)}{p(t \mid \hat{\theta}_D^v)}
+ (1-\mu) \sum_t p(t \mid \hat{\theta}_Q^u) \log \frac{p(t \mid \hat{\theta}_Q^u)}{p(t \mid \hat{\theta}_D^w)}
\end{aligned}
\tag{4.10}
\]
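A minimal sketch of computing the mixture risk in (4.10), assuming the three point-estimate LMs are given as term-probability dictionaries over a shared, smoothed vocabulary; `mu`, the helper names, and the toy probabilities are ours, not the chapter's:

```python
import math

def kl_divergence(p, q):
    """KL(p || q) over a shared vocabulary of terms.

    p, q: dicts mapping term -> probability. Terms with p(t) = 0
    contribute nothing; q is assumed smoothed so q(t) > 0 wherever
    p(t) > 0.
    """
    return sum(pt * math.log(pt / q[t]) for t, pt in p.items() if pt > 0)

def risk(theta_q, theta_d_v, theta_d_w, mu=0.5):
    """Eq. (4.10): R(q,u,d) = mu * KL(q||d_v) + (1-mu) * KL(q||d_w)."""
    return mu * kl_divergence(theta_q, theta_d_v) + \
           (1 - mu) * kl_divergence(theta_q, theta_d_w)

# Toy LMs over a three-term vocabulary (hypothetical numbers).
theta_q   = {"sunset": 0.6, "beach": 0.3, "city": 0.1}
theta_d_v = {"sunset": 0.5, "beach": 0.4, "city": 0.1}
theta_d_w = {"sunset": 0.2, "beach": 0.2, "city": 0.6}

r = risk(theta_q, theta_d_v, theta_d_w, mu=0.7)
```

Note that a smaller `r` means the image's LMs sit closer to the query LM, which is exactly what the ranking in the next equation rewards.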
With $R(q,u,d)$ denoting the expected risk for returning an individual image, we make a simplification by further assuming that the risk of returning each image is independent of returning the others. It can then be easily derived that the final rank of each image, $\operatorname{rank}(q,u,d)$, which minimizes the overall risk, is inversely proportional to its individual risk:
\[
\operatorname{rank}(q,u,d) \propto \frac{1}{R(q,u,d)}
\tag{4.11}
\]
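By (4.11), producing the final ranking amounts to sorting the candidate images by ascending individual risk. A sketch with hypothetical precomputed risk values (the image identifiers are ours):

```python
# Hypothetical per-image risks R(q, u, d) for one query/user pair.
risks = {"img_17": 0.42, "img_03": 0.08, "img_99": 0.25}

# Eq. (4.11): rank(q,u,d) is proportional to 1/R(q,u,d),
# so the best-ranked image is the one with the lowest risk.
ranking = sorted(risks, key=risks.get)
# ranking == ["img_03", "img_99", "img_17"]
```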
In the following subsection, we will instantiate the query and image LMs by incorporating annotation confidence and topic-sensitive influences.