49. Y. Le Cun, P.Y. Simard, and B. Pearlmutter. Automatic learning rate maximization by on-line estimation of the Hessian's eigenvectors. In S.J. Hanson, J.D. Cowan, and C.L. Giles, eds., Advances in Neural Information Processing Systems, Vol. 5, pp. 156–163. Morgan Kaufmann, San Mateo, CA, 1993.
50. C.E. Davila. An efficient recursive total least squares algorithm for FIR adaptive filtering. IEEE Trans. Signal Process., 42:268–280, 1994.
51. R.D. DeGroat and E. Dowling. The data least squares problem and channel equalization. IEEE Trans. Signal Process., 41(1):407–411, Jan. 1993.
52. A.J. Van der Veen and A. Paulraj. An analytical constant modulus algorithm. IEEE Trans. Signal Process., 44(5):1136–1155, May 1996.
53. K.I. Diamantaras and S.Y. Kung. An unsupervised neural model for oriented principal component extraction. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1049–1052, 1991.
54. C. Eckart and G. Young. The approximation of one matrix by another of lower rank. Psychometrika, 1:211–218, 1936.
55. S.E. Fahlman. Faster-learning variations on back-propagation: an empirical study. In D. Touretzky, G.E. Hinton, and T.J. Sejnowski, eds., Proceedings of the 1988 Connectionist Models Summer School, pp. 38–51. Morgan Kaufmann, San Mateo, CA, 1988.
56. D.Z. Feng, Z. Bao, and L.C. Jiao. Total least mean squares algorithm. IEEE Trans. Signal Process., 46(8):2122–2130, Aug. 1998.
57. K.V. Fernando and H. Nicholson. Identification of linear systems with input and output noise: the Koopmans–Levin method. IEE Proc. D, 132:30–36, 1985.
58. G.W. Fisher. Matrix analysis of metamorphic mineral assemblages and reactions. Contrib. Mineral. Petrol., 102:69–77, 1989.
59. R. Fletcher. Practical Methods of Optimization, 2nd ed. Wiley, New York, 1987.
60. W.A. Fuller. Measurement Error Models. Wiley, New York, 1987.
61. G.G. Cirrincione, S. Van Huffel, A. Premoli, and M.L. Rastello. An iteratively re-weighted total least-squares algorithm for different variances in observations and data. In V.P. Ciarlini, M.G. Cox, E. Filipe, F. Pavese, and D. Richter, eds., Advanced Mathematical and Computational Tools in Metrology, pp. 78–86. World Scientific, Hackensack, NJ, 2001.
62. P.P. Gallo. Consistency of regression estimates when some variables are subject to error. Commun. Stat. Theory Methods, 11:973–983, 1982.
63. K. Gao, M.O. Ahmad, and M.N. Swamy. Learning algorithm for total least-squares adaptive signal processing. Electron. Lett., 28(4):430–432, Feb. 1992.
64. K. Gao, M.O. Ahmad, and M.N. Swamy. A constrained anti-Hebbian learning algorithm for total least-squares estimation with applications to adaptive FIR and IIR filtering. IEEE Trans. Circuits Syst. II, 41(11):718–729, Nov. 1994.
65. C.L. Giles and T. Maxwell. Learning, invariance, and generalization in high-order neural networks. Appl. Opt., 26:4972–4978, 1987.
66. P.E. Gill, W. Murray, and M.H. Wright. Practical Optimization. Academic Press, New York, 1980.
67. R.D. Gitlin, J.E. Mazo, and M.G. Taylor. On the design of gradient algorithms for digitally implemented adaptive filters. IEEE Trans. Circuits Syst., 20, Mar. 1973.