References
1. Vapnik, V.: The Nature of Statistical Learning Theory. Springer, New York (1995)
2. Vapnik, V.: Statistical Learning Theory: Inference from Small Samples. Wiley, New York (1998)
3. Vapnik, V.: Estimation of Dependences Based on Empirical Data. Information Science and Statistics. Springer, New York (2006)
4. Cherkassky, V., Mulier, F.: Learning from Data. John Wiley & Sons, Inc. (1998)
5. Hellman, M., Raviv, J.: Probability of error, equivocation, and the Chernoff bound. IEEE Transactions on Information Theory IT-16, 368-372 (1970)
6. Schmidt, J., Siegel, A., Srinivasan, A.: Chernoff-Hoeffding bounds for applications with limited independence. SIAM Journal on Discrete Mathematics 8, 223-250 (1995)
7. Shawe-Taylor, J., et al.: A framework for structural risk minimization. In: COLT, pp. 68-76 (1996)
8. Devroye, L., Györfi, L., Lugosi, G.: A Probabilistic Theory of Pattern Recognition. Springer-Verlag, New York (1996)
9. Anthony, M., Shawe-Taylor, J.: A result of Vapnik with applications. Discrete Applied Mathematics 47, 207-217 (1993)
10. Krzyzak, A., et al.: Application of structural risk minimization to multivariate smoothing spline regression estimates. Bernoulli 8, 475-489 (2002)
11. Holden, S.: Cross-validation and the PAC learning model. Technical Report RN/96/64, Department of Computer Science, University College London (1996)
12. Holden, S.: PAC-like upper bounds for the sample complexity of leave-one-out cross-validation. In: 9th Annual ACM Workshop on Computational Learning Theory, pp. 41-50 (1996)
13. Kearns, M., Ron, D.: Algorithmic stability and sanity-check bounds for leave-one-out cross-validation. Neural Computation 11, 1427-1453 (1999)
14. Kearns, M.: A bound on the error of cross-validation, with consequences for the training-test split. In: Advances in Neural Information Processing Systems, vol. 8. MIT Press (1995)
15. Kearns, M.: An experimental and theoretical comparison of model selection methods. In: 8th Annual ACM Workshop on Computational Learning Theory, pp. 21-30 (1995)
16. Bartlett, P., Kulkarni, S., Posner, S.: Covering numbers for real-valued function classes. IEEE Transactions on Information Theory 43, 1721-1724 (1997)
17. Bartlett, P.: The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. IEEE Transactions on Information Theory 44, 525-536 (1998)
18. Ng, A.: Feature selection, L1 vs. L2 regularization, and rotational invariance. In: 21st International Conference on Machine Learning (ICML). ACM International Conference Proceeding Series, vol. 69 (2004)
19. Vapnik, V., Chervonenkis, A.: The necessary and sufficient conditions for the consistency of the method of empirical risk minimization. Yearbook of the Academy of Sciences of the USSR on Recognition, Classification and Forecasting 2, 217-249 (1989)
20. Kohavi, R.: A study of cross-validation and bootstrap for accuracy estimation and model selection. In: International Joint Conference on Artificial Intelligence (IJCAI) (1995)
21. Efron, B., Tibshirani, R.: An Introduction to the Bootstrap. Chapman & Hall, London (1993)
22. Hjorth, J.: Computer Intensive Statistical Methods: Validation, Model Selection, and Bootstrap. Chapman & Hall, London (1994)