[Fig. 1.4 Retrieval result for query "learning to rank"]
Consider the example as shown in Fig. 1.4. Since the first document in the retrieval result is relevant, $r_1 = 1$. Therefore, the MRR for this query equals 1.
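As a concrete illustration (not code from the book), the following minimal Python sketch computes the reciprocal rank of a single ranked list, assuming the judgments are given as a list of binary labels in ranked order; MRR is then the mean of this quantity over all test queries.

def reciprocal_rank(labels):
    # labels[j] == 1 iff the document at rank j+1 is relevant.
    for j, label in enumerate(labels, start=1):
        if label == 1:
            return 1.0 / j
    return 0.0  # no relevant document was retrieved

# Fig. 1.4: the first document is relevant, so r_1 = 1 and RR = 1/1 = 1.
print(reciprocal_rank([1, 0, 1]))  # 1.0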
Mean Average Precision (MAP) To define MAP [2], one needs to define Precision at position $k$ ($P@k$) first. Suppose we have binary judgments for the documents, i.e., the label is one for relevant documents and zero for irrelevant documents. Then $P@k$ is defined as
$$P@k(\pi, l) = \frac{\sum_{t=1}^{k} I_{\{l_{\pi^{-1}(t)} = 1\}}}{k}, \qquad (1.6)$$

where $I_{\{\cdot\}}$ is the indicator function, and $\pi^{-1}(j)$ denotes the document ranked at position $j$ of the list $\pi$.

Then the Average Precision (AP) is defined by

$$\mathrm{AP}(\pi, l) = \frac{\sum_{k=1}^{m} P@k \cdot I_{\{l_{\pi^{-1}(k)} = 1\}}}{m_1}, \qquad (1.7)$$

where $m$ is the total number of documents associated with query $q$, and $m_1$ is the number of documents with label one.
The mean value of AP over all the test queries is called mean average precision
(MAP).
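The following Python sketch implements Eqs. (1.6) and (1.7) directly, again assuming each query's judgments come as a list of binary labels in ranked order; the function names are illustrative, not from the book.

def precision_at_k(labels, k):
    # Eq. (1.6): fraction of relevant documents among the top k.
    return sum(labels[:k]) / k

def average_precision(labels):
    # Eq. (1.7): average of P@k over the positions k that hold
    # relevant documents, divided by m_1 (the number of relevant docs).
    m1 = sum(labels)
    if m1 == 0:
        return 0.0
    return sum(precision_at_k(labels, k)
               for k, label in enumerate(labels, start=1)
               if label == 1) / m1

def mean_average_precision(labels_per_query):
    # MAP: the mean of AP over all test queries.
    return sum(average_precision(l) for l in labels_per_query) / len(labels_per_query)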
Consider the example as shown in Fig. 1.4. Since the first document in the retrieval result is relevant, it is clear that $P@1 = 1$. Because the second document is irrelevant, we have $P@2 = \frac{1}{2}$. Then for $P@3$, since the third document is relevant, we obtain $P@3 = \frac{2}{3}$. Then $\mathrm{AP} = \frac{1}{2}\left(1 + \frac{2}{3}\right) = \frac{5}{6}$.
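Using the sketch above, the Fig. 1.4 computation can be checked numerically (assuming the relevant/irrelevant/relevant pattern described in the text):

labels = [1, 0, 1]  # Fig. 1.4: relevant, irrelevant, relevant
print(precision_at_k(labels, 2))  # 0.5        (= 1/2)
print(precision_at_k(labels, 3))  # 0.666...   (= 2/3)
print(average_precision(labels))  # 0.833...   (= 5/6)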
Discounted Cumulative Gain (DCG) DCG [39, 40] is an evaluation measure that can leverage relevance judgments in terms of multiple ordered categories, and has an explicit position discount factor in its definition. More formally, suppose the ranked list for query $q$ is $\pi$; then the DCG at position $k$ is defined as follows:
$$\mathrm{DCG}@k(\pi, l) = \sum_{j=1}^{k} G\bigl(l_{\pi^{-1}(j)}\bigr)\,\eta(j), \qquad (1.8)$$

where $G(\cdot)$ is the rating of a document (one usually sets $G(z) = 2^z - 1$), and $\eta(j)$ is a position discount factor (one usually sets $\eta(j) = 1/\log(j + 1)$).
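A minimal sketch of Eq. (1.8) with the usual choices $G(z) = 2^z - 1$ and $\eta(j) = 1/\log(j+1)$, assuming the common base-2 logarithm convention; the graded labels in the example are hypothetical.

import math

def dcg_at_k(ratings, k):
    # Eq. (1.8): ratings[j] is the graded label of the document at rank j+1.
    # Gain G(z) = 2**z - 1, discount eta(j) = 1 / log2(j + 1).
    return sum((2 ** z - 1) / math.log2(j + 1)
               for j, z in enumerate(ratings[:k], start=1))

# Labels on a 0-2 scale: 3/log2(2) + 0/log2(3) + 1/log2(4) = 3.5
print(dcg_at_k([2, 0, 1], 3))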