documents, i.e., ∑_{j=1}^{m} I{y_j ≠ ŷ_j} = 4. Furthermore, if we compute the bound given by (5.2), we obtain

$$\frac{15}{Z_m}\sqrt{2\sum_{j=1}^{m}\frac{1}{\log^{2}(j+1)}}\cdot\sqrt{\sum_{j=1}^{m} I_{\{y_j \neq \hat{y}_j\}}} = \frac{24.49}{21.35} = 1.15.$$
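To make the computation concrete, here is a minimal sketch that evaluates a bound of this form numerically. The labels, predictions, gain function (2^y − 1), and base-2 logarithmic discount are all assumptions chosen for illustration; they are not the example data behind the 24.49/21.35 figure.

```python
import math

# Hypothetical example: five documents with 5-level relevance labels (0-4).
# These labels/predictions are invented for illustration; they are NOT the
# example used in the text above.
y_true = [3, 2, 1, 0, 0]   # true labels
y_pred = [2, 1, 0, 1, 0]   # predicted labels: 4 of 5 documents misclassified

m = len(y_true)
errors = sum(1 for yt, yp in zip(y_true, y_pred) if yt != yp)

# Z_m: the ideal DCG normalizer, assuming gain 2^y - 1 and discount 1/log2(j+1).
gains = sorted((2 ** y - 1 for y in y_true), reverse=True)
Z_m = sum(g / math.log2(j + 1) for j, g in enumerate(gains, start=1))

# Evaluate the classification-based bound of (5.2) for this toy query.
discount_sq = sum(1 / math.log2(j + 1) ** 2 for j in range(1, m + 1))
bound = (15 / Z_m) * math.sqrt(2 * discount_sq) * math.sqrt(errors)

print(bound)  # well above 1, i.e., the bound is vacuous for this query
```

With these assumed labels the bound evaluates to roughly 6.4, again illustrating how easily it exceeds one.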
It is clear that the bound is meaningless since it is larger than one. Actually, this looseness is not difficult to understand. The left-hand side of inequality (5.2)
contains positional information, while the right-hand side does not. When the same
amount of classification loss occurs in different positions, the ranking error will be
quite different. In order to make the inequality always hold, the price one has to pay
is that the bound must be very loose.
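The positional argument can be made concrete with a small sketch: misplacing the same single document costs far more NDCG at the top of the list than near the bottom, even though the amount of classification loss is identical. The gains and permutations below are hypothetical.

```python
import math

def dcg(gains):
    # DCG with discount 1/log2(position + 1), positions starting at 1.
    return sum(g / math.log2(j + 1) for j, g in enumerate(gains, start=1))

# Hypothetical gains (2^y - 1 for labels [3, 2, 1, 0]); ideal order first.
ideal = [7, 3, 1, 0]
Z = dcg(ideal)

# One misplacement near the bottom: the two least relevant docs swapped.
near_bottom = [7, 3, 0, 1]
# One misplacement at the top: the most relevant doc pushed to last place.
at_top = [3, 1, 0, 7]

ndcg_bottom = dcg(near_bottom) / Z
ndcg_top = dcg(at_top) / Z
print(ndcg_bottom, ndcg_top)  # the top-position error hurts far more
```

One misplacement near the bottom leaves NDCG above 0.99, while the same kind of single misplacement at the top drops it to about 0.71.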
Note that there are no results yet in the literature to connect the aforementioned
regression loss and classification loss to the MAP-based ranking error.
5.3 The Pairwise Approach
In [ 6 ], the following theoretical result has been obtained, in addition to the design
of the Ranking SVM algorithm.
$$1 - \mathrm{MAP}(\pi, y) \le 1 - \frac{1}{m_1}\left(\sum_{j=1}^{m_1}\sqrt{j}\right)^{2}\left(C_{m_1+1}^{2} + L_{\mathrm{pairwise}}(f; \mathbf{x}, y)\right)^{-1}, \qquad (5.3)$$

with m₁ the number of relevant documents and $C_{m_1+1}^{2} = \binom{m_1+1}{2}$,
where
$$L_{\mathrm{pairwise}}(f; \mathbf{x}, y) = \sum_{t=1}^{m}\sum_{i=1,\, y_i < y_t}^{m} \phi\bigl(f(x_t) - f(x_i)\bigr), \qquad (5.4)$$
which corresponds to the query-level sum of the pairwise loss function in Ranking
SVM, RankBoost, and RankNet, when the φ function is the hinge function (φ(z) = (1 − z)₊), exponential function (φ(z) = e^{−z}), and logistic function (φ(z) = log(1 + e^{−z})), respectively.
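As a check on how these pieces fit together, the sketch below computes the hinge-based pairwise loss of (5.4) on a toy query and verifies inequality (5.3) for it; the binary labels and scores are invented for illustration.

```python
import math

# Toy query: binary relevance labels and model scores (hypothetical data).
y = [1, 0, 1, 0]
f = [0.9, 0.8, 0.7, 0.6]

def pairwise_loss(f, y, phi):
    # Sum phi(f(x_t) - f(x_i)) over all pairs with y_i < y_t, as in (5.4).
    return sum(phi(f[t] - f[i])
               for t in range(len(y)) for i in range(len(y)) if y[i] < y[t])

hinge = lambda z: max(0.0, 1.0 - z)   # Ranking SVM's surrogate
L = pairwise_loss(f, y, hinge)

# Average precision of this single query, ranking by descending score.
order = sorted(range(len(f)), key=lambda i: -f[i])
hits, precisions = 0, []
for rank, i in enumerate(order, start=1):
    if y[i] == 1:
        hits += 1
        precisions.append(hits / rank)
ap = sum(precisions) / sum(y)

# Right-hand side of (5.3): m1 relevant docs, C(m1+1, 2) pairs term.
m1 = sum(y)
rhs = 1 - (1 / m1) * sum(math.sqrt(j) for j in range(1, m1 + 1)) ** 2 \
        / (math.comb(m1 + 1, 2) + L)
print(1 - ap, "<=", rhs)
```

Here the hinge loss is 3.6 and AP is 5/6, so (1 − MAP) ≈ 0.17 against a right-hand side of roughly 0.56; the inequality holds but, as discussed next, only loosely.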
The above result basically says that there is a connection between (1 − MAP) and the loss functions in several pairwise ranking algorithms. When minimizing the pairwise loss functions, (1 − MAP) can also be minimized. However, this result has the following limitations.
The connection revealed in the above result is somewhat indirect. It is not clear whether the pairwise loss functions and (1 − MAP) have a direct relationship, e.g., whether the pairwise loss functions are upper bounds of (1 − MAP).