
algorithms are referred to as SVM^{ndcg} and SVM^{mrr}. Basically, in these extensions, different feature maps or different strategies for searching for the most violated constraints are used, but the key idea remains the same as that of SVM^{map}.

Further analysis in [33] shows that the loss functions in the aforementioned works are actually convex upper bounds of the following quantity, and this quantity is in turn an upper bound of the corresponding measure-based ranking error:

\max_{y^c \neq y} \bigl(1 - M(y^c, y)\bigr) \cdot I_{\{w^T \Psi(y, x) \le w^T \Psi(y^c, x)\}},    (4.19)

where M(y^c, y) represents the value of an evaluation measure M when the ranking result is y^c and the ground truth label is y.
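To make the quantity in (4.19) concrete, here is a minimal Python sketch, assuming rankings are represented as tuples of document indices, w and the outputs of the feature map are numpy vectors, and `psi` (the joint feature map Ψ) and `M` (the evaluation measure) are available as callables; all names are hypothetical placeholders, not the book's notation.

```python
import numpy as np

def measure_bounded_error(w, psi, M, x, y_true, candidates):
    """Sketch of the quantity in Eq. (4.19): over all candidate rankings
    y^c != y, take the largest measure gap (1 - M(y^c, y)) among those
    that the model scores at least as high as the ground truth."""
    score_true = w @ psi(y_true, x)
    worst = 0.0
    for y_c in candidates:
        if y_c == y_true:
            continue
        # Indicator fires when the wrong ranking is not scored below
        # the ground-truth ranking.
        if score_true <= w @ psi(y_c, x):
            worst = max(worst, 1.0 - M(y_c, y_true))
    return worst
```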

In [5, 6, 19, 36] the following convex upper bound of the above quantity is minimized:

\max_{y^c \neq y} \bigl[1 - M(y^c, y) + w^T \Psi(y^c, x) - w^T \Psi(y, x)\bigr]_+.    (4.20)
In another method, called PermuRank [33], a different convex upper bound of the aforementioned quantity is employed in the optimization process, as shown below.

\max_{y^c \neq y} \bigl(1 - M(y^c, y)\bigr) \bigl[1 - w^T \Psi(y, x) + w^T \Psi(y^c, x)\bigr]_+.    (4.21)
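As a sketch of how the two bounds differ, the hypothetical functions below compute (4.20) and (4.21) over a candidate set: (4.20) adds the measure gap to the score margin (margin rescaling), while (4.21) multiplies the hinge on the margin by the measure gap (slack rescaling). The `psi` and `M` callables are the same assumed placeholders as above.

```python
def margin_rescaling_bound(w, psi, M, x, y_true, candidates):
    """Upper bound of Eq. (4.20): add the measure gap to the score
    difference, then take the hinge [.]_+ and max over wrong rankings."""
    s_true = w @ psi(y_true, x)
    return max(
        max(0.0, 1.0 - M(y_c, y_true) + w @ psi(y_c, x) - s_true)
        for y_c in candidates if y_c != y_true
    )

def slack_rescaling_bound(w, psi, M, x, y_true, candidates):
    """Upper bound of Eq. (4.21), used in PermuRank: scale the hinge
    on the score margin by the measure gap (1 - M)."""
    s_true = w @ psi(y_true, x)
    return max(
        (1.0 - M(y_c, y_true)) * max(0.0, 1.0 - s_true + w @ psi(y_c, x))
        for y_c in candidates if y_c != y_true
    )
```

Both expressions upper-bound (4.19): whenever the indicator in (4.19) fires, the score difference w^T Ψ(y^c, x) − w^T Ψ(y, x) is non-negative, so the bracketed term in either bound is at least 1 − M(y^c, y).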

4.2.3 Non-smooth Optimization

Different from the methods introduced in the previous two subsections, there are also some other methods that directly optimize evaluation measures using non-smooth optimization techniques. For example, in AdaRank [32], the boosting algorithm is used to optimize the exponential function of the evaluation measure. Note that since the exponential function is monotonic, the optimization of the objective function in AdaRank is equivalent to the optimization of the evaluation measure itself. As another example, in RankGP [34], the evaluation measure is used as the fitness function of a genetic algorithm.
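To make these two ideas concrete, here is a minimal sketch (not the authors' implementations) of the objectives just described; `f`, `queries`, and a per-query `measure(f, q)` returning a value in [0, 1] are hypothetical placeholders.

```python
import math

def adarank_objective(f, queries, measure):
    """AdaRank-style objective: the exponential of the negative
    evaluation measure, summed over queries.  Since exp(-t) is
    monotonically decreasing, minimizing this sum amounts to
    maximizing the measure, as argued in the text."""
    return sum(math.exp(-measure(f, q)) for q in queries)

def rankgp_fitness(f, queries, measure):
    """RankGP-style fitness: the evaluation measure itself, averaged
    over training queries and handed to the genetic algorithm as-is."""
    return sum(measure(f, q) for q in queries) / len(queries)
```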

It should be noted that although these non-smooth optimization techniques are very general and can deal with the optimization of any non-smooth function, they also have their limitations. That is, although the evaluation measures are used directly as the objective function, there is no guarantee that their optima can actually be found.

4.2.3.1 AdaRank

Xu and Li [32] point out that evaluation measures can be plugged into the framework of Boosting and effectively optimized. This process does not require the