105. Huang, G.B., Babri, H.: Upper bounds on the number of hidden neurons in
feedforward networks with arbitrary bounded nonlinear activation functions.
IEEE Trans. on Neural Networks 9(1), 224-229 (1998)
106. Huber, P.: Robust estimation of a location parameter. The Annals of Mathe-
matical Statistics 35(1), 73-101 (1964)
107. Hubert, L., Arabie, P.: Comparing partitions. Journal of Classification 2(1),
193-218 (1985)
108. Hyafil, L., Rivest, R.L.: Constructing Optimal Binary Decision Trees is NP-
complete. Information Processing Letters 5(1), 15-17 (1976)
109. Jacobs, R.: Increased rates of convergence through learning rate adaptation.
Neural Networks 1, 295-307 (1988)
110. Jacobs, R., Jordan, M., Nowlan, S., Hinton, G.: Adaptive mixtures of local
experts. Neural Computation 3(1), 79-87 (1991)
111. Jacobs, R., Peng, F., Tanner, M.: A Bayesian approach to model selection in
hierarchical mixtures-of-experts architectures. Neural Networks 10(2), 231-
241 (1997)
112. Jain, A., Dubes, R.: Algorithms for Clustering Data. Prentice Hall Int. (1988)
113. Jain, A., Murty, M., Flynn, P.: Data clustering: a review. ACM Computing
Surveys 31(3), 264-323 (1999)
114. Jain, A., Topchy, A., Law, M., Buhmann, J.: Landscape of clustering algo-
rithms. In: 17th Int. Conf. on Pattern Recognition, vol. 1, pp. 260-263 (2004)
115. Janzura, M., Koski, T., Otáhal, A.: Minimum entropy of error principle in
estimation. Information Sciences 79, 123-144 (1994)
116. Jensen, D., Oates, T., Cohen, P.: Building simple models: A case study with
decision trees. In: Liu, X., Cohen, P., Berthold, M. (eds.) IDA 1997. LNCS,
vol. 1280, pp. 211-222. Springer, Heidelberg (1997)
117. Jenssen, R., Hild II, K., Erdogmus, D., Principe, J., Eltoft, T.: Clustering
using Rényi's entropy. In: Int. Joint Conf. on Neural Networks, pp. 523-528
(2003)
118. Jeong, K.H., Liu, W., Han, S., Hasanbelliu, E., Principe, J.: The correntropy
MACE filter. Pattern Recognition 42(5), 871-885 (2009)
119. Jirina Jr., M., Jirina, M.: Neural network classifier based on growing hyper-
spheres. Neural Network World 10(3), 417-428 (2000)
120. Johnson, E., Mehrotra, A., Nemhauser, G.: Min-cut clustering. Mathematical
Programming 62, 133-151 (1993)
121. Johnson, N.L., Kotz, S., Balakrishnan, N.: Continuous Univariate Distributions,
vol. 1. John Wiley & Sons (1994)
122. Johnson, N.L., Kotz, S., Balakrishnan, N.: Continuous Univariate Distributions,
vol. 2. John Wiley & Sons (1995)
123. Jordan, M., Jacobs, R.: Hierarchical mixtures of experts and the EM algorithm.
Neural Computation 6, 181-214 (1994)
124. Kannan, R., Vempala, S., Vetta, A.: On clusterings: Good, bad, and spectral.
In: Annual Symposium on Foundations of Computer Science, pp. 367-380
(2000)
125. Kapur, J.: Maximum-Entropy Models in Science and Engineering. John Wiley
& Sons, New York (1993)
126. Karypis, G.: CLUTO: Software package for clustering high-dimensional datasets,
Version 2.1.1 (2003)
127. Karypis, G., Han, E.H., Kumar, V.: Chameleon: Hierarchical clustering using
dynamic modeling. IEEE Computer 32(8), 68-75 (1999)
128. Kaufman, L., Rousseeuw, P.: Finding Groups in Data: An Introduction to
Cluster Analysis. John Wiley & Sons, New York (1990)
129. Kittler, J., Hatef, M., Duin, R., Matas, J.: On combining classifiers. IEEE
Trans. Pattern Analysis and Machine Intelligence 20(3), 226-239 (1998)