learning the relational ranking function with Ranking SVM is given in [6]. The corresponding algorithm is named "Relational Ranking SVM" (RRSVM). The matrix-form expressions of RRSVM are given below.
RRSVM for pseudo relevance feedback can be represented as the following optimization problem:

\[
\min_{w,\xi^{(i)}} \; \frac{1}{2}\|w\|^2 + \lambda \sum_{i=1}^{n} \mathbf{1}^{(i)T}\xi^{(i)},
\]
\[
\text{s.t.}\quad C^{(i)} h\big(f(w, x^{(i)}), R^{(i)}\big) \geq \mathbf{1}^{(i)} - \xi^{(i)},\quad \xi^{(i)} \geq 0, \tag{6.5}
\]
\[
h\big(f(w, x^{(i)}), R^{(i)}\big) = \big(I + \beta(D^{(i)} - R^{(i)})\big)^{-1} f(w, x^{(i)}),
\]
where \(C^{(i)}\) denotes a constraint matrix for query \(q_i\), and \(\mathbf{1}^{(i)}\) denotes a vector with all elements equal to 1 whose dimension is the same as that of \(\xi^{(i)}\). Each row of \(C^{(i)}\) represents a pairwise constraint: one element is 1, one element is \(-1\), and the other elements are all 0. For example, for query \(q_i\), if the ground truth indicates that \(y_1 > y_3\) and \(y_2 > y_4\), then we have
\[
C^{(i)} = \begin{pmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \end{pmatrix}.
\]
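The construction of such a constraint matrix can be sketched as follows (a minimal illustration; the function name and 0-based indexing are our own, not from [6]):

```python
import numpy as np

def constraint_matrix(pairs, n_docs):
    """Build a pairwise constraint matrix C: one row per preference
    (j, k), meaning document j should be ranked above document k."""
    C = np.zeros((len(pairs), n_docs))
    for row, (j, k) in enumerate(pairs):
        C[row, j] = 1.0   # preferred document
        C[row, k] = -1.0  # less-preferred document
    return C

# Ground truth y1 > y3 and y2 > y4 (0-based indices: 0 > 2, 1 > 3):
C = constraint_matrix([(0, 2), (1, 3)], n_docs=4)
```

With this `C`, the slack vector \(\xi^{(i)}\) in (6.5) has one entry per preference pair, i.e., per row of the matrix.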
RRSVM for topic distillation can be represented as the following optimization
problem:
\[
\min_{w,\xi^{(i)}} \; \frac{1}{2}\|w\|^2 + \lambda \sum_{i=1}^{n} \mathbf{1}^{(i)T}\xi^{(i)},
\]
\[
\text{s.t.}\quad C^{(i)} h\big(f(w, x^{(i)}), R^{(i)}\big) \geq \mathbf{1}^{(i)} - \xi^{(i)},\quad \xi^{(i)} \geq 0, \tag{6.6}
\]
\[
h\big(f(w, x^{(i)}), R^{(i)}\big) = \big(2I + \beta(2D^{(i)} - R^{(i)} - R^{(i)T})\big)^{-1}\big(2f(w, x^{(i)}) + \beta r^{(i)}\big).
\]
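The relational ranking function in (6.6) can be evaluated numerically as below. This is a minimal sketch: it assumes \(D^{(i)}\) is the diagonal degree matrix of the relation \(R^{(i)}\), and the value of \(\beta\) and the inputs are illustrative, not taken from [6]:

```python
import numpy as np

def h_topic_distillation(f, R, r, beta=0.1):
    """Relational ranking scores for topic distillation:
    h = (2I + beta*(2D - R - R^T))^{-1} (2f + beta*r),
    where D is taken to be the diagonal row-sum (degree) matrix of R
    (an assumption in this sketch)."""
    n = len(f)
    D = np.diag(R.sum(axis=1))
    A = 2.0 * np.eye(n) + beta * (2.0 * D - R - R.T)
    # Solve the linear system instead of forming an explicit inverse.
    return np.linalg.solve(A, 2.0 * f + beta * r)
```

Note that when the relation matrix is zero, the scores reduce to \(f + \beta r / 2\), so the hyperlink relation only perturbs the base ranking scores.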
Further discussions indicate that although the ranking function becomes more complicated, one only needs to pre-process the features of all the documents associated with a given query, and can still use standard Ranking SVM toolkits to perform the learning and testing.1 Therefore the new algorithm is feasible for practical use.
Experimental results in [ 6 ] have shown that for particular learning tasks, RRSVM
can significantly outperform Ranking SVM and other heuristic methods such as
the pseudo relevance feedback method provided in the Lemur toolkit 2 and sitemap-
based relevance propagation [ 5 ].
1 For example, http://olivier.chapelle.cc/primal/ and http://www.cs.cornell.edu/People/tj/svm_light/svm_rank.html.
2 http://www.lemurproject.org/ .