
ε satisfies

    ε < (m₀ − 1)/((exp(δα) + 1) ln 2).

For SVM^map [11], SVM^ndcg [2], and PermuRank [10], one can obtain the result

as shown in the following theorem. Basically, one can always find an infinite number

of samples on which the tendency correlations between their surrogate measures and

the evaluation measures cannot be as strong as expected.

Theorem 5.6 Let M̃ represent the surrogate measure used in SVM^map [11], SVM^ndcg [2], or PermuRank [10], and M the corresponding evaluation measure.

Then for ∀w₁ ≠ 0 and ∀ε₀ ∈ (0, 1), there exist w₂ ≠ 0 and an infinite number of (x*, y*) which satisfy

    sgn(M̃(w₂; x*, y*) − M̃(w₁; x*, y*)) ≠ sgn(M(w₂; x*, y*) − M(w₁; x*, y*)),

and

    |M(w₁; x*, y*) − M(w₂; x*, y*)| ≥ ε₀.

And therefore ε ≥ ε₀.
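The intuition behind the theorem can be illustrated with a small numerical sketch (a hypothetical toy construction, not the samples used in the actual proof; the pairwise-accuracy measure and hinge surrogate below merely stand in for the structured measures and surrogates of these algorithms). Because hinge-style surrogates are sensitive to score magnitudes while evaluation measures depend only on the induced ordering, the two differences can take opposite signs:

```python
import numpy as np

# Toy setting: two document pairs; a linear model w scores them.
# margin_k(w) = w . a_k is the score gap of pair k (positive = correctly ordered).
a1, a2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def eval_measure(w):
    """Pairwise accuracy: fraction of pairs ordered correctly (scale-invariant)."""
    return np.mean([w @ a1 > 0, w @ a2 > 0])

def surrogate(w):
    """Negated sum of hinge losses, SVM-style (sensitive to margin magnitudes)."""
    return -(max(0.0, 1.0 - w @ a1) + max(0.0, 1.0 - w @ a2))

w1 = np.array([5.0, -0.1])   # pair 1 right with a big margin, pair 2 slightly wrong
w2 = np.array([0.1, 0.1])    # both pairs right, but with tiny margins

dM  = eval_measure(w2) - eval_measure(w1)   # positive: w2 is better for the measure
dMs = surrogate(w2)    - surrogate(w1)      # negative: the surrogate prefers w1
print(dM, dMs)  # opposite signs: the tendency correlation breaks down here
```

Since the evaluation measure is invariant to rescaling w while the hinge terms are not, such sign disagreements can be manufactured for infinitely many samples, which is the essence of the construction.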

Based on the aforementioned theorems, we can make the following observations.

1. The surrogate measures optimized by SoftRank and Approximate Rank can have

arbitrarily strong tendency correlation with the evaluation measures on any kind

of data, when the parameters in these algorithms are appropriately set. Therefore,

the evaluation measures can be effectively optimized by such methods and the

corresponding ranking performance can be high.

2. The surrogate measures optimized by SVM^map [11], SVM^ndcg [2], and PermuRank [10] cannot have an arbitrarily strong tendency correlation with the evaluation measures on certain kinds of data. Note that the distribution of data is usually unknown in practical learning-to-rank settings. Thus, in practice, the ranking performance of these methods may not be very high, and may vary across datasets.

3. Considering that the evaluation measures usually contain many local optima with

respect to the ranking model, a surrogate measure will also have many local

optima when it has very strong tendency correlation with the evaluation mea-

sure. As a result, robust strategies that are able to find the global optimum are

required when performing optimization. In other words, there is a trade-off be-

tween choosing a surrogate measure that has a strong tendency correlation with

the evaluation measure and choosing a surrogate measure that can be easily optimized. This trade-off is especially critical for algorithms like SoftRank and Approximate Rank, since they utilize gradient descent techniques for optimization, which can easily be trapped in a local optimum.
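To make the first observation concrete, here is a minimal sketch of the smoothing idea used by measures in the SoftRank/Approximate Rank family (the sigmoid-based rank approximation below is an assumed simplified form, with α playing the role of the scaling parameter): as α grows, the smoothed ranks approach the true ranks, so a measure built on them can track the evaluation measure arbitrarily closely.

```python
import numpy as np

def approx_ranks(scores, alpha):
    """Smoothed ranks: pi_i ~ 1 + sum_{j != i} sigmoid(alpha * (s_j - s_i))."""
    diff = scores[None, :] - scores[:, None]        # diff[i, j] = s_j - s_i
    sig = 1.0 / (1.0 + np.exp(-alpha * diff))       # soft indicator of s_j > s_i
    np.fill_diagonal(sig, 0.0)                      # a document does not outrank itself
    return 1.0 + sig.sum(axis=1)

def true_ranks(scores):
    """Exact ranks (1 = highest score)."""
    order = np.argsort(-scores)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(scores) + 1)
    return ranks

scores = np.array([2.0, 0.5, 1.3, -0.4])
for alpha in (1.0, 10.0, 100.0):
    gap = np.abs(approx_ranks(scores, alpha) - true_ranks(scores)).max()
    print(alpha, gap)   # the gap shrinks as alpha grows
```

This is precisely why an appropriately set parameter lets these surrogates achieve arbitrarily strong tendency correlation on any data, in contrast to the hinge-based surrogates above.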
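The trade-off in the third observation can also be sketched numerically (a contrived one-dimensional objective standing in for a tightly correlated surrogate, not a real ranking measure): plain gradient ascent from a bad starting point is trapped in a poor local optimum, while a simple multi-start strategy recovers the global one.

```python
import numpy as np

# A stand-in surrogate with two local optima: a poor optimum near w = -1.5
# (height ~0.4) and the global optimum near w = 1.5 (height ~1.0).
def f(w):
    return 0.4 * np.exp(-(w + 1.5) ** 2) + 1.0 * np.exp(-(w - 1.5) ** 2)

def grad(w, h=1e-5):
    """Central-difference gradient."""
    return (f(w + h) - f(w - h)) / (2 * h)

def ascend(w, lr=0.5, steps=500):
    """Plain gradient ascent from a single starting point."""
    for _ in range(steps):
        w += lr * grad(w)
    return w

print(f(ascend(-2.0)))                      # trapped: ends up near the ~0.4 bump
starts = np.linspace(-3, 3, 7)              # multi-start: try several initializations
print(max(f(ascend(w)) for w in starts))    # recovers the global ~1.0 optimum
```

Multi-start is only one of the robust strategies alluded to above; the point is that the stronger the tendency correlation, the more the surrogate inherits the evaluation measure's rugged landscape, and the more such machinery the optimizer needs.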