11.5 Exercises
11.1 Implement all the learning-to-rank algorithms introduced in previous chapters and test them on the LETOR datasets (a data-loading sketch follows this exercise list).
11.2 Study the contribution of each feature to the ranking performance. Supposing a linear scoring function is used in the learning-to-rank algorithms, study how the feature weights learned by different algorithms differ, and explain these differences (one possible setup is sketched below).
11.3 Perform case studies to explain the experimental results observed in this chapter.
11.4 Analyze the computational complexity of each learning-to-rank algorithm (a pair-counting sketch is given below).
11.5 Select one learning-to-rank algorithm and train a linear and a nonlinear ranking model with it, respectively. Compare the ranking performance of the two models and discuss their respective advantages and disadvantages (see the sketch below).
11.6 Study how the ranking performance changes across different evaluation measures. For example, as the cutoff k increases, how do the ranking performances of different algorithms compare with each other in terms of NDCG@k? (An NDCG@k sketch closes this section.)
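For Exercise 11.1, a minimal sketch for loading a LETOR-style data file, assuming the usual SVMlight-like line format <label> qid:<qid> 1:<v1> 2:<v2> ... # comment; the file name train.txt is a placeholder:

```python
from collections import defaultdict

def load_letor(path):
    """Return {qid: [(relevance_label, feature_vector), ...]} for one file."""
    queries = defaultdict(list)
    with open(path) as f:
        for line in f:
            line = line.split("#")[0].strip()  # drop the trailing comment
            if not line:
                continue
            tokens = line.split()
            label = int(tokens[0])             # graded relevance judgment
            qid = tokens[1].split(":")[1]      # "qid:10" -> "10"
            feats = [float(t.split(":")[1]) for t in tokens[2:]]
            queries[qid].append((label, feats))
    return queries

queries = load_letor("train.txt")              # placeholder path
```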
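For Exercise 11.2, a rough sketch assuming scikit-learn is available and reusing load_letor from above: two generic linear learners (stand-ins for the book's algorithms, not reimplementations of them) are fit on RankSVM-style difference vectors, and their normalized weight vectors are printed for feature-by-feature comparison.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression

def pairwise_transform(queries):
    """Build difference vectors x_i - x_j labeled by sign(l_i - l_j)."""
    X, y = [], []
    for docs in queries.values():
        for (li, xi), (lj, xj) in combinations(docs, 2):
            if li != lj:                       # skip ties: no preference
                X.append(np.subtract(xi, xj))
                y.append(1 if li > lj else -1)
    return np.array(X), np.array(y)

X, y = pairwise_transform(queries)             # queries from load_letor
for model in (LinearSVC(), LogisticRegression(max_iter=1000)):
    w = model.fit(X, y).coef_.ravel()
    print(type(model).__name__, np.round(w / np.linalg.norm(w), 3))
```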
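For Exercise 11.4, a starting point is to count the training instances each family of algorithms actually touches. The sketch below (again reusing the queries dictionary) contrasts the O(n) cost of one pointwise pass with the roughly O(sum over q of m_q squared) cost of enumerating within-query preference pairs.

```python
def pointwise_instances(queries):
    """One instance per document: cost grows linearly in n."""
    return sum(len(docs) for docs in queries.values())

def pairwise_instances(queries):
    """One instance per within-query document pair: m_q * (m_q - 1) / 2."""
    return sum(len(docs) * (len(docs) - 1) // 2 for docs in queries.values())

print(pointwise_instances(queries), pairwise_instances(queries))
```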
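For Exercise 11.5, one simple (pointwise) way to obtain a linear and a nonlinear scoring function over the same features, assuming scikit-learn; per-query rankings from the two models can then be compared with the NDCG@k function given next:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

X_pt = np.array([x for docs in queries.values() for _, x in docs])
y_pt = np.array([l for docs in queries.values() for l, _ in docs])

linear = LinearRegression().fit(X_pt, y_pt)              # linear scorer
nonlinear = GradientBoostingRegressor().fit(X_pt, y_pt)  # tree-ensemble scorer
```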
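For Exercise 11.6, a compact NDCG@k using the common 2^rel - 1 gain and log2 position discount; conventions vary across papers, so treat this as one standard choice rather than the only definition:

```python
import numpy as np

def ndcg_at_k(labels_in_ranked_order, k):
    """labels_in_ranked_order: relevance grades sorted by model score."""
    def dcg(labels):
        labels = np.asarray(labels, dtype=float)[:k]
        discounts = np.log2(np.arange(2, labels.size + 2))
        return float(np.sum((2.0 ** labels - 1.0) / discounts))
    ideal = dcg(sorted(labels_in_ranked_order, reverse=True))
    return dcg(labels_in_ranked_order) / ideal if ideal > 0 else 0.0

print(ndcg_at_k([1, 0, 2, 1], k=3))   # a grade-2 document ranked third
```

Sweeping k and tabulating the per-algorithm scores then makes the comparison asked for in the exercise direct.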