First, one defines the score distribution. Given $\mathbf{x} = \{x_j\}_{j=1}^{m}$ associated with a training query $q$, the score $s_j$ of document $x_j$ is no longer treated as a deterministic value but as a random variable. The random variable is governed by a Gaussian distribution whose variance is $\sigma_s$ and whose mean is $f(x_j)$, the original score output by the scoring function. That is,

$$p(s_j) = \mathcal{N}\bigl(s_j \mid f(x_j), \sigma_s\bigr). \qquad (4.2)$$
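As a concrete illustration of this smoothing (a minimal sketch only; SoftRank works with the distribution analytically rather than by sampling, and the function name below is purely illustrative), one realization of the random scores can be drawn as follows:

```python
import random

def sample_scores(deterministic_scores, sigma_s):
    """Draw one realization of s_j ~ N(f(x_j), sigma_s) as in Eq. (4.2).

    Here sigma_s is interpreted as the variance, following the text.
    """
    std = sigma_s ** 0.5
    return [random.gauss(f_xj, std) for f_xj in deterministic_scores]
```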
Second, one defines the rank distribution. Due to the randomness in the scores, every document has some probability of being ranked at any position. Specifically, based on the score distribution, the probability of a document $x_u$ being ranked before another document $x_v$ can be deduced as follows:
$$P_{u,v} = \int_{0}^{\infty} \mathcal{N}\bigl(s \mid f(x_u) - f(x_v),\, 2\sigma_s\bigr)\, ds. \qquad (4.3)$$
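Since the difference of two independent Gaussian scores is itself Gaussian, the integral in (4.3) reduces to a standard normal CDF evaluation. Below is a minimal sketch of this computation, assuming (as in the text) that $\sigma_s$ denotes the variance of each score; the function name is illustrative only:

```python
import math

def pairwise_prob(f_u, f_v, sigma_s):
    """P_{u,v}: probability that x_u is ranked before x_v (Eq. 4.3).

    The score difference s_u - s_v ~ N(f(x_u) - f(x_v), 2 * sigma_s),
    so the integral over s > 0 equals Phi(mean_diff / sqrt(2 * sigma_s)),
    with Phi the standard normal CDF (written via erf below).
    """
    mean_diff = f_u - f_v
    std = math.sqrt(2.0 * sigma_s)
    return 0.5 * (1.0 + math.erf(mean_diff / (std * math.sqrt(2.0))))
```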
On this basis, the rank distribution can be derived in an iterative manner, by adding the documents to the ranked list one after another. Suppose document $x_j$ is already in the ranked list. When a document $x_u$ is added, if $x_u$ beats $x_j$, the rank of $x_j$ is increased by one; otherwise, the rank of $x_j$ remains unchanged. Mathematically, the probability of $x_j$ being ranked at position $r$ (denoted as $P_j(r)$) can be computed as follows:
$$P_j^{(u)}(r) = P_j^{(u-1)}(r-1)\, P_{u,j} + P_j^{(u-1)}(r)\,\bigl(1 - P_{u,j}\bigr). \qquad (4.4)$$
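The recursion in (4.4) can be implemented by sweeping over the documents and updating the position probabilities of $x_j$. A minimal sketch follows, with rank positions indexed from 0 and the matrix of pairwise probabilities $P_{u,j}$ assumed to be precomputed from (4.3); the helper name is illustrative:

```python
def rank_distribution(pairwise, j, m):
    """P_j(r) for r = 0..m-1, via the recursion in Eq. (4.4).

    pairwise[u][j] holds P_{u,j}, the probability that x_u beats x_j.
    Starting with x_j alone in the list (rank 0 with probability 1),
    each added document x_u either pushes x_j down one position
    (prob. P_{u,j}) or leaves its rank unchanged (prob. 1 - P_{u,j}).
    """
    p = [1.0] + [0.0] * (m - 1)
    for u in range(m):
        if u == j:
            continue
        p = [(p[r - 1] * pairwise[u][j] if r > 0 else 0.0)
             + p[r] * (1.0 - pairwise[u][j])
             for r in range(m)]
    return p
```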
Third, with the rank distribution, one computes the expectation of NDCG (referred to as SoftNDCG$^2$), and uses $(1 - \text{SoftNDCG})$ as the loss function in learning to rank:
$$L(f; \mathbf{x}, \mathbf{y}) = 1 - \frac{1}{Z_m} \sum_{j=1}^{m} \bigl(2^{y_j} - 1\bigr) \sum_{r=1}^{m} \eta(r)\, P_j(r). \qquad (4.5)$$
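Putting the three steps together, a minimal sketch of the loss in (4.5) is given below. It reuses the illustrative helpers pairwise_prob and rank_distribution from the sketches above, and it assumes the usual NDCG discount $\eta(r) = 1/\log_2(r+2)$ (ranks counted from 0) and a maximum-DCG normalizer $Z_m$ computed from the ideal ordering of the labels; these details are not spelled out in this excerpt:

```python
import math

def soft_ndcg_loss(scores, labels, sigma_s):
    """1 - SoftNDCG for one query (Eq. 4.5).

    scores : model outputs f(x_j), one per document
    labels : relevance grades y_j
    """
    m = len(scores)
    pairwise = [[pairwise_prob(scores[u], scores[v], sigma_s) for v in range(m)]
                for u in range(m)]
    eta = [1.0 / math.log2(r + 2) for r in range(m)]  # assumed discount
    ideal = sorted(labels, reverse=True)
    Z_m = sum((2 ** y - 1) * eta[r] for r, y in enumerate(ideal))
    expected_dcg = 0.0
    for j in range(m):
        p_j = rank_distribution(pairwise, j, m)
        expected_dcg += (2 ** labels[j] - 1) * sum(eta[r] * p_j[r] for r in range(m))
    return 1.0 - expected_dcg / Z_m
```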
In order to learn the ranking model $f$ by minimizing the above loss function, one can use a neural network as the model and gradient descent as the optimization algorithm.
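SoftRank derives the gradient of SoftNDCG analytically and back-propagates it through the neural network scorer; the toy sketch below only illustrates the optimization idea, using a linear scoring function, central finite differences in place of the analytic gradient, and the illustrative soft_ndcg_loss defined above:

```python
def train_linear_scorer(X, y, sigma_s=1.0, lr=0.1, steps=200, eps=1e-4):
    """Toy gradient descent on 1 - SoftNDCG for a single query.

    X : list of feature vectors (one per document); y : relevance grades.
    """
    d = len(X[0])
    w = [0.0] * d

    def loss(weights):
        scores = [sum(wk * xk for wk, xk in zip(weights, x)) for x in X]
        return soft_ndcg_loss(scores, y, sigma_s)

    for _ in range(steps):
        grad = []
        for k in range(d):
            w_plus, w_minus = w[:], w[:]
            w_plus[k] += eps
            w_minus[k] -= eps
            grad.append((loss(w_plus) - loss(w_minus)) / (2 * eps))
        w = [wk - lr * gk for wk, gk in zip(w, grad)]
    return w
```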
In [14], the Gaussian process is used to further enhance the SoftRank algorithm, where $\sigma_s$ is no longer a pre-specified constant but a learned parameter. In [35], the SoftRank method is further extended to approximate P@k and AP. The corresponding objective functions are named SoftPC and SoftAP, respectively.
4.2.1.2 Decision Theoretic Framework for Ranking
In [37], a similar idea to that of SoftRank is proposed, which uses a decision theoretic framework to optimize expected evaluation measures. First, a ranking utility is
2 For ease of reference, we also refer to objective functions like SoftNDCG as surrogate measures.