6.1 General Relational Ranking Framework
6.1.1 Relational Ranking SVM
By considering the relationship in the ranking process, the ranking function can be refined as follows, where the function h is named the relational ranking function [6]:

    h(x) = h(f(x), R).                                        (6.1)
Here f can still be understood as a scoring function operating on each single document, and the matrix R describes the relationship between documents. There are then several ways of parameterizing h(x). For example, one can let f contain a parameter w, while the way of integrating f and R is pre-defined. One can also regard f as a fixed function, while h has its own parameter. The most complex case is that both f and h contain parameters. In [6], the first case is discussed, and it is assumed that

    h(x) = h(f(w, x), R) = arg min_z { l_1(f(w, x), z) + β l_2(R, z) },    (6.2)
where l_1 is an objective function guaranteeing that the ranking results given by h are not too different from those given by f, and l_2 is another objective function guaranteeing that the ranking results are as consistent as possible with the relationship requirement R.
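As an illustration of Eq. (6.2), the sketch below refines a small score vector by minimizing the combined objective numerically. The particular quadratic choices for l_1 and l_2, and all the numbers, are illustrative assumptions, not something prescribed by the framework:

```python
import numpy as np

# Illustrative losses (assumed, not from the framework):
#   l1(f, z) = ||z - f||^2                      keeps z close to the base scores
#   l2(R, z) = sum_{i,j} R[i,j] * (z_i - z_j)^2 makes related docs score alike
def refine_scores(f, R, beta, steps=2000, lr=0.01):
    z = f.copy()
    for _ in range(steps):
        grad = 2.0 * (z - f)                     # gradient of l1
        diff = z[:, None] - z[None, :]           # diff[i, j] = z_i - z_j
        grad += beta * 2.0 * ((R + R.T) * diff).sum(axis=1)  # gradient of l2
        z -= lr * grad
    return z

f = np.array([1.0, 0.2, 0.1])            # made-up base relevance scores
R = np.array([[0.0, 1.0, 0.0],           # documents 0 and 1 are related
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
z = refine_scores(f, R, beta=0.5)
# the related pair's scores are pulled toward each other,
# while the isolated document's score is untouched
```

The trade-off is visible directly: larger β pulls the scores of related documents closer together at the cost of moving further from the base scores f.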
Then the relational ranking function h is further instantiated and discussed for topic distillation and pseudo relevance feedback. Conveniently, the relational ranking function has a closed-form solution in both of these cases. For pseudo relevance feedback, one eventually obtains

    h(f(w, x), R) = (I + β(D − R))^{−1} f(w, x),              (6.3)

where I denotes the identity matrix, and D is a diagonal matrix with D_{i,i} = Σ_j R_{i,j}.
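The closed form in Eq. (6.3) amounts to solving one linear system. The sketch below evaluates it on toy data; the base scores and the relationship matrix are made up for illustration:

```python
import numpy as np

# Pseudo-relevance-feedback closed form, Eq. (6.3):
#   h = (I + beta * (D - R))^{-1} f,  with D[i,i] = sum_j R[i,j]
def relational_scores_prf(f, R, beta):
    n = len(f)
    D = np.diag(R.sum(axis=1))
    # solving the linear system is preferable to forming the inverse explicitly
    return np.linalg.solve(np.eye(n) + beta * (D - R), f)

# toy data: documents 0 and 1 are related, document 2 is isolated
f = np.array([1.0, 0.2, 0.1])
R = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
h = relational_scores_prf(f, R, beta=0.5)
# the related pair's scores move toward each other (1.0 -> 0.8, 0.2 -> 0.4),
# while the isolated document keeps its score of 0.1
```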
For topic distillation, one has

    h(f(w, x), R) = (2I + β(2D − R − R^T))^{−1} (2f(w, x) − βr),    (6.4)

where r_k = Σ_i R_{i,k} − Σ_j R_{k,j}.
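The topic-distillation closed form in Eq. (6.4) can be evaluated the same way. The sketch below assumes the same definition of D as in Eq. (6.3); the two-document example with a single hyperlink is made up:

```python
import numpy as np

# Topic-distillation closed form, Eq. (6.4):
#   h = (2I + beta * (2D - R - R^T))^{-1} (2f - beta * r),
# with D[i,i] = sum_j R[i,j] and r[k] = sum_i R[i,k] - sum_j R[k,j]
def relational_scores_td(f, R, beta):
    n = len(f)
    D = np.diag(R.sum(axis=1))
    r = R.sum(axis=0) - R.sum(axis=1)   # column sums minus row sums
    A = 2.0 * np.eye(n) + beta * (2.0 * D - R - R.T)
    return np.linalg.solve(A, 2.0 * f - beta * r)

# toy example: document 0 links to document 1
f = np.array([0.6, 0.6])
R = np.array([[0.0, 1.0],
              [0.0, 0.0]])
h = relational_scores_td(f, R, beta=0.5)
# with equal base scores, the document with the outgoing link
# ends up ranked higher under this toy R
```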
One may have noticed that in the relational ranking framework there are two different inputs: one is the features of the documents, x, and the other is the relationship matrix R. This seems quite different from the setting of conventional learning to rank introduced in previous chapters. However, once one notices that x = φ(q, d) and R = ψ(d, d), one realizes that the inputs are still queries and documents. The only difference is that now we also care about the inter-dependency between documents, whereas previously each document was dealt with independently.
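To make the two kinds of input concrete, the sketch below builds a toy relationship matrix R = ψ(d, d) from term-count vectors, with ψ assumed to be cosine similarity. This particular ψ (and the toy data) is purely illustrative; the framework leaves φ and ψ open:

```python
import numpy as np

# made-up term-count vectors for three documents
docs = np.array([[3.0, 1.0, 0.0],
                 [2.0, 1.0, 1.0],
                 [0.0, 0.0, 4.0]])

# R = psi(d, d): cosine similarity between document vectors
norms = np.linalg.norm(docs, axis=1)
R = (docs @ docs.T) / np.outer(norms, norms)
np.fill_diagonal(R, 0.0)   # no self-relationship
# documents 0 and 1 share vocabulary, so R[0, 1] is large;
# document 2 shares nothing with document 0, so R[0, 2] is zero
```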
The relational ranking function can be substituted into any learning-to-rank algorithm introduced in this book for learning. In particular, the demonstration of