13. Asuncion, A., Newman, D.: UCI machine learning repository, University of
California, School of Information and Computer Science (2010),
http://www.ics.uci.edu/~mlearn/MLRepository.html
14. Auda, G., Kamel, M.: Modular neural network classifiers: A comparative
study. Journal of Intelligent and Robotic Systems 21, 117-129 (1998)
15. Balakrishnan, N., Nevzorov, V.: A Primer on Statistical Distributions. John
Wiley & Sons, Inc. (2003)
16. Banarer, V., Perwass, C., Sommer, G.: Design of a Multilayered Feed-Forward
Neural Network Using Hypersphere Neurons. In: Petkov, N., Westenberg,
M.A. (eds.) CAIP 2003. LNCS, vol. 2756, pp. 571-578. Springer, Heidelberg
(2003)
17. Banarer, V., Perwass, C., Sommer, G.: The hypersphere neuron. In: European
Symposium on Artificial Neural Networks, pp. 469-474 (2003)
18. Batchelor, B.: Practical Approach to Pattern Classification. Plenum Press,
Plenum Pub. Co. Ltd. (1974)
19. Batchelor, B.: Classification and data analysis in vector spaces. In: Batchelor,
B. (ed.) Pattern Recognition: Ideas in Practice, Plenum Press, Plenum Pub.
Co. Ltd. (1978)
20. Battiti, R.: Accelerated backpropagation learning: Two optimization methods.
Complex Systems 3, 331-342 (1989)
21. Bauer, E., Kohavi, R.: An empirical comparison of voting classification algo-
rithms: Bagging, boosting, and variants. Machine Learning 36(1-2), 105-139
(1999)
22. Baum, E., Wilczek, F.: Supervised learning of probability distributions by
neural networks. In: NIPS 1987, pp. 52-61 (1987)
23. Beirlant, J., Dudewicz, E.J., Györfi, L., van der Meulen, E.C.: Nonparametric
Entropy Estimation: An Overview. Int. J. Math. Stat. Sci. 6(1), 17-39 (1997)
24. Beiu, V., Pauw, T.: Tight Bounds on the Size of Neural Networks for Classifi-
cation Problems. In: Cabestany, J., Mira, J., Moreno-Díaz, R. (eds.) IWANN
1997. LNCS, vol. 1240, pp. 743-752. Springer, Heidelberg (1997)
25. Berkhin, P.: Survey of clustering data mining techniques. Tech. rep., Accrue
Software, San Jose, CA (2002)
26. Bishop, C.: Neural Networks for Pattern Recognition. Oxford University Press
(1995)
27. Bishop, C.: Pattern Recognition and Machine Learning, 2nd edn. Springer
(2007)
28. Balakrishnan, N., Nevzorov, V.B.: A Primer on Statistical Distributions. John
Wiley & Sons (2003)
29. Blum, A., Rivest, R.: Training a 3-node Neural Network is NP-Complete. In:
Hanson, S.J., Rivest, R.L., Remmele, W. (eds.) MIT-Siemens 1993. LNCS,
vol. 661, pp. 9-28. Springer, Heidelberg (1993)
30. Bousquet, O., Boucheron, S., Lugosi, G.: Introduction to statistical learning
theory. In: Bousquet, O., von Luxburg, U., Rätsch, G. (eds.) Machine Learning
2003. LNCS (LNAI), vol. 3176, pp. 169-207. Springer, Heidelberg (2004)
31. Bowman, A., Azzalini, A.: Applied Smoothing Techniques for Data Analysis.
Oxford University Press (1997)
32. Brady, M., Raghavan, R., Slawny, J.: Gradient descent fails to separate. In:
IEEE Int. Conf. on Neural Networks, vol. 1, pp. 649-656 (1988)
33. Breiman, L., Friedman, J., Olshen, R., Stone, C.: Classification and Regression
Trees. Chapman & Hall/CRC (1993)
34. Buntine, W.: A theory of learning classification rules. Ph.D. thesis, Univ. of
Technology, Sydney (1990)
35. Buntine, W., Niblett, T.: A further comparison of splitting rules for decision-
tree induction. Machine Learning 8, 75-85 (1992)