
Theorem 5.2 The pairwise losses in Ranking SVM, RankBoost, and RankNet are all upper bounds of the essential loss, i.e.,

$$L_\beta(f;\mathbf{x},\Omega_y) \le \max_{1\le t\le m-1}\beta(t)\,L^{\mathrm{pairwise}}(f;\mathbf{x},\mathbf{y}),$$

where $L^{\mathrm{pairwise}}(f;\mathbf{x},\mathbf{y})$ is defined as in (5.4).

Therefore, the minimization of the loss functions in the aforementioned pairwise ranking algorithms will all lead to the minimization of the essential loss. Further considering the relationship between the essential loss and (1 − NDCG) as well as (1 − MAP), we have

$$1-\mathrm{NDCG}(\pi,\mathbf{y}) \le \frac{G(K-1)\,\eta(1)}{Z_m}\,L^{\mathrm{pairwise}}(f;\mathbf{x},\mathbf{y});\qquad(5.9)$$

$$1-\mathrm{MAP}(\pi,\mathbf{y}) \le \frac{1}{\sum_{i=k^*}^{m}\frac{1}{i}}\,L^{\mathrm{pairwise}}(f;\mathbf{x},\mathbf{y}).\qquad(5.10)$$

In other words, optimizing these pairwise loss functions can minimize (1 − NDCG) and (1 − MAP).
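As a concrete illustration, the bound in (5.9) can be checked numerically on a toy query. The sketch below is illustrative only: it assumes the hinge-form pairwise loss of Ranking SVM, the exponential gain $G(k)=2^k-1$, and the logarithmic position discount $\eta(t)=1/\log_2(t+1)$; the scores and labels are made-up data, not from the book.

```python
import math

def dcg(gains):
    """DCG of gains listed in rank order, with discount eta(t) = 1/log2(t+1)."""
    return sum(g / math.log2(t + 2) for t, g in enumerate(gains))

def one_minus_ndcg(scores, labels):
    """Return (1 - NDCG, Z_m) for a ranking induced by descending scores."""
    gain = lambda k: 2 ** k - 1                       # G(k) = 2^k - 1
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
    z_m = dcg(sorted((gain(y) for y in labels), reverse=True))  # ideal DCG
    return 1 - dcg([gain(labels[i]) for i in ranked]) / z_m, z_m

def pairwise_hinge_loss(scores, labels):
    """Ranking-SVM-style pairwise loss: hinge over all label-ordered pairs."""
    return sum(max(0.0, 1.0 - (scores[i] - scores[j]))
               for i in range(len(labels)) for j in range(len(labels))
               if labels[i] > labels[j])

# Toy query: three documents with ratings in {0, 1, 2}, so K = 3 levels.
labels = [2, 1, 0]
scores = [1.0, 3.0, 2.0]           # a deliberately poor scoring function

lhs, z_m = one_minus_ndcg(scores, labels)
K = 3
coeff = (2 ** (K - 1) - 1) * 1.0 / z_m   # G(K-1) * eta(1) / Z_m, eta(1) = 1
rhs = coeff * pairwise_hinge_loss(scores, labels)
print(lhs, rhs)                    # bound (5.9): lhs <= rhs
assert lhs <= rhs
```

As the theorem predicts, the scaled pairwise loss dominates 1 − NDCG; the bound is loose here, which is typical of surrogate-loss bounds.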

5.4 The Listwise Approach

5.4.1 Non-measure-Specific Loss

We take ListMLE as an example of the listwise ranking algorithms in this sub-category. The following results have been proven in [3], which characterize the relationship between the likelihood loss and (1 − NDCG) as well as (1 − MAP).

Theorem 5.3 The loss function in ListMLE is an upper bound of the essential loss, i.e.,

$$L_\beta(f;\mathbf{x},\Omega_y) \le \frac{1}{\ln 2}\max_{1\le t\le m-1}\beta(t)\,L^{\mathrm{ListMLE}}(f;\mathbf{x},\Omega_y).\qquad(5.11)$$

Further considering the relationship between the essential loss and (1 − NDCG) as well as (1 − MAP), one can come to the following conclusions:

$$1-\mathrm{NDCG}(\pi,\mathbf{y}) \le \frac{G(K-1)\,\eta(1)}{Z_m\ln 2}\,L^{\mathrm{ListMLE}}(f;\mathbf{x},\Omega_y);\qquad(5.12)$$

$$1-\mathrm{MAP}(\pi,\mathbf{y}) \le \frac{1}{\ln 2\,\sum_{i=k^*}^{m}\frac{1}{i}}\,L^{\mathrm{ListMLE}}(f;\mathbf{x},\Omega_y).\qquad(5.13)$$

Note that the proof of the above theorem depends heavily on the form of the likelihood loss. Extensive work is needed to generalize this result to other listwise ranking methods.
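The ListMLE bound (5.12) can likewise be checked numerically. The sketch below assumes the standard Plackett-Luce likelihood loss of ListMLE and, since the toy labels are tie-free, the permutation set $\Omega_y$ reduces to the single permutation obtained by sorting labels in decreasing order; the gain, discount, and data are the same illustrative choices as before, not values from the book.

```python
import math

def listmle_loss(scores, target_perm):
    """ListMLE likelihood loss: negative log Plackett-Luce probability
    of the target permutation under the model scores."""
    loss = 0.0
    for t in range(len(target_perm)):
        denom = sum(math.exp(scores[i]) for i in target_perm[t:])
        loss -= math.log(math.exp(scores[target_perm[t]]) / denom)
    return loss

def one_minus_ndcg(scores, labels):
    """Return (1 - NDCG, Z_m) with G(k) = 2^k - 1 and eta(t) = 1/log2(t+1)."""
    gain = lambda k: 2 ** k - 1
    disc = lambda t: 1 / math.log2(t + 2)            # t is 0-based rank
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
    z_m = sum(g * disc(t) for t, g in
              enumerate(sorted((gain(y) for y in labels), reverse=True)))
    dcg = sum(gain(labels[i]) * disc(t) for t, i in enumerate(ranked))
    return 1 - dcg / z_m, z_m

labels = [2, 1, 0]                 # distinct ratings: Omega_y is a singleton
scores = [1.0, 3.0, 2.0]
target = sorted(range(len(labels)), key=lambda i: -labels[i])  # [0, 1, 2]

lhs, z_m = one_minus_ndcg(scores, labels)
K = 3
rhs = (2 ** (K - 1) - 1) / (z_m * math.log(2)) * listmle_loss(scores, target)
print(lhs, rhs)                    # bound (5.12): lhs <= rhs
assert lhs <= rhs
```

Here the likelihood loss, scaled by $G(K-1)\eta(1)/(Z_m \ln 2)$, again upper-bounds 1 − NDCG, consistent with combining Theorem 5.3 and the essential-loss bounds.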