and output spaces, and make detailed case studies. We will show some such examples [5, 6, 9, 12, 13] in Chap. 18. Despite the added difficulty, the analysis for ranking is actually more interesting and content-rich than that for classification. For example, when the top-k loss is used as the true loss [12], one can further analyze how statistical consistency changes with respect to different values of k. This question is unique to ranking and has no counterpart in the classification literature.
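To make the notion of a top-k loss concrete, the following is a minimal sketch of a 0-1 variant: the prediction is penalized whenever the set of k highest-scored documents differs from the set of k most relevant ones. This is an illustrative assumption for exposition, not necessarily the exact definition used in the cited work.

```python
def top_k_loss(true_relevance, predicted_scores, k):
    """Return 1 if the top-k document sets disagree, 0 otherwise.

    Illustrative 0-1 top-k loss: only membership in the top-k set
    matters, not the order within it (an assumption for this sketch).
    """
    # Indices sorted by decreasing ground-truth relevance
    order_true = sorted(range(len(true_relevance)),
                        key=lambda i: true_relevance[i], reverse=True)
    # Indices sorted by decreasing predicted score
    order_pred = sorted(range(len(predicted_scores)),
                        key=lambda i: predicted_scores[i], reverse=True)
    return int(set(order_true[:k]) != set(order_pred[:k]))

# The same scoring function can incur zero loss for one k but not
# another, which is why consistency can depend on the choice of k.
relevance = [3, 2, 1, 0]          # ground-truth labels
scores    = [0.9, 0.2, 0.8, 0.1]  # hypothetical model scores

print(top_k_loss(relevance, scores, 1))  # top-1 sets agree -> 0
print(top_k_loss(relevance, scores, 2))  # top-2 sets differ -> 1
```

In this toy case, the scoring function is optimal under the top-1 loss but suboptimal under the top-2 loss, illustrating how the target of consistency analysis shifts with k.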
In the next three chapters, we will give detailed introductions to existing work on the statistical ranking framework, generalization analysis for ranking, and statistical consistency for ranking. It is unavoidable that a fair amount of mathematics will be used in these chapters in order to make the discussion rigorous and clear. It is safe, however, to skip this entire part if one is only interested in the algorithmic development of learning to rank.
15.4 Exercises
15.1 List the differences between ranking and classification that pose potential challenges to the theoretical analysis of ranking.
15.2 Enumerate the major problems investigated in statistical learning theory, and the relationships among them.
References
1. Agarwal, S.: Generalization bounds for some ordinal regression algorithms. In: Proceedings
of the 19th International Conference on Algorithmic Learning Theory (ALT 2008), pp. 7-21
(2008)
2. Agarwal, S., Graepel, T., Herbrich, R., Har-Peled, S., Roth, D.: Generalization bounds for the area under the ROC curve. Journal of Machine Learning Research 6, 393-425 (2005)
3. Agarwal, S., Niyogi, P.: Stability and generalization of bipartite ranking algorithms. In: Pro-
ceedings of the 18th Annual Conference on Learning Theory (COLT 2005), pp. 32-47 (2005)
4. Chen, W., Liu, T.Y., Ma, Z.M.: Two-layer generalization analysis for ranking using Rademacher average. In: Lafferty, J., Williams, C.K.I., Shawe-Taylor, J., Zemel, R., Culotta, A. (eds.) Advances in Neural Information Processing Systems 23 (NIPS 2010), pp. 370-378 (2011)
5. Clémençon, S., Lugosi, G., Vayatis, N.: Ranking and empirical minimization of U-statistics. The Annals of Statistics 36(2), 844-874 (2008)
6. Cossock, D., Zhang, T.: Subset ranking using regression. In: Proceedings of the 19th Annual
Conference on Learning Theory (COLT 2006), pp. 605-619 (2006)
7. Freund, Y., Iyer, R., Schapire, R., Singer, Y.: An efficient boosting algorithm for combining
preferences. Journal of Machine Learning Research 4 , 933-969 (2003)
8. Lan, Y., Liu, T.Y.: Generalization analysis of listwise learning-to-rank algorithms. In: Proceed-
ings of the 26th International Conference on Machine Learning (ICML 2009), pp. 577-584
(2009)
9. Lan, Y., Liu, T.Y., Ma, Z.M., Li, H.: Statistical consistency of ranking methods. Tech. rep.,
Microsoft Research (2010)
10. Lan, Y., Liu, T.Y., Qin, T., Ma, Z.M., Li, H.: Query-level stability and generalization in learning to rank. In: Proceedings of the 25th International Conference on Machine Learning (ICML 2008), pp. 512-519 (2008)