Chapter 3
Learning Mixtures of Independent
Component Analysers
This chapter presents a new procedure for learning mixtures of independent
component analyzers. The procedure includes non-parametric estimation of the
source densities; supervised, semi-supervised, or unsupervised learning of the
model parameters; incorporation of any independent component analysis (ICA)
algorithm into the learning of the ICA mixtures; and estimation of residual
dependencies after training in order to correct the posterior probability of each
class given the testing observation vector. We demonstrate the performance of
the procedure in the classification of ICA mixtures of two, three, and four
classes of synthetic data. Applying the proposed posterior probability correction
improves the classification accuracy. Semi-supervised learning shows that
unlabelled data can degrade the performance of the classifier when they do not
fit the generative model. Comparative results for the proposed method and
standard ICA algorithms for blind source separation in one and multiple ICA
data mixtures show the suitability of the non-parametric ICA mixture-based
method for data modelling.
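To make the classification setting concrete, the following is a minimal sketch, not the chapter's actual algorithm, of how posterior class probabilities can be computed in an ICA mixture: each class k models the data as x = A_k s + b_k, the class-conditional likelihood factorizes over the recovered independent sources s = W_k (x - b_k) with W_k = A_k^{-1}, and each source density is estimated non-parametrically with a Gaussian kernel. All function names, the bandwidth value, and the toy data are illustrative assumptions.

```python
# Hypothetical sketch of posterior computation in an ICA mixture model (ICAMM).
# For class k: x = A_k s + b_k, so s = W_k (x - b_k), and
# log p(x | k) = log|det W_k| + sum_i log p_i(s_i),
# with each p_i estimated by a 1-D Gaussian kernel density estimate.
import numpy as np

def kde_log_density(value, samples, bandwidth=0.3):
    """Log of a 1-D Gaussian kernel density estimate at a scalar `value`,
    built from training `samples` of one recovered source."""
    diff = (value - samples) / bandwidth
    kernels = np.exp(-0.5 * diff**2) / (bandwidth * np.sqrt(2 * np.pi))
    return np.log(kernels.mean() + 1e-300)

def class_log_likelihood(x, W, b, source_samples):
    """log p(x | class) under the linear ICA model of that class."""
    s = W @ (x - b)
    ll = np.log(abs(np.linalg.det(W)))
    for i in range(len(s)):
        ll += kde_log_density(s[i], source_samples[i])
    return ll

def posteriors(x, params, priors):
    """Bayes rule over classes: p(k | x) proportional to p(x | k) p(k)."""
    log_post = np.array([
        class_log_likelihood(x, W, b, srcs) + np.log(pk)
        for (W, b, srcs), pk in zip(params, priors)
    ])
    log_post -= log_post.max()  # shift for numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Toy two-class example: two 2-D ICA models with different mixing and bias.
rng = np.random.default_rng(0)
params = []
for b in (np.zeros(2), np.array([4.0, 4.0])):
    A = rng.normal(size=(2, 2))              # mixing matrix of the class
    S = rng.laplace(size=(2, 500))           # recovered training sources
    params.append((np.linalg.inv(A), b, S))
p = posteriors(np.array([4.0, 4.0]), params, priors=[0.5, 0.5])
```

The chapter's posterior-probability correction would then adjust these values to account for residual dependencies among the recovered sources; the sketch above assumes exact independence.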
Let us discuss a possible comparison of our method with other ICAMM
methods, such as those explained in Chap. 2. Note that the main objective of our
approach is to pursue generalization in the ICAMM framework. The proposed
method is a maximum-likelihood approach; therefore, it can be related to the first
method proposed for ICAMM by Lee et al. [1], which is also based on
maximum-likelihood estimation. There are significant differences that allow the
proposed method to outperform Lee's method: (i) non-parametric source density
estimation allows a wider range of densities to be modelled (e.g., complex
multimodal densities), instead of a simple switching between Laplacian and
bimodal densities; (ii) different kinds of supervision in learning are allowed
(unsupervised, semi-supervised, and supervised learning), compared with the
single kind of learning supported by Lee's method; (iii) the proposed method
allows correction of residual dependencies after the learning stage; and
(iv) different methods for ICAMM parameter updating, which are not supported
by Lee's method, can be incorporated.
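Difference (i) can be illustrated with a small numeric experiment, a sketch under assumed toy data rather than a result from the chapter: a kernel density estimate follows a bimodal source density that a single best-fit Laplacian cannot, so it assigns a higher average log-likelihood to held-out samples. The bandwidth and mode locations below are arbitrary choices for demonstration.

```python
# Illustrative contrast: non-parametric kernel density estimate vs. a single
# maximum-likelihood Laplacian on a bimodal source (toy data, assumed values).
import numpy as np

rng = np.random.default_rng(1)
train = np.concatenate([rng.normal(-3, 0.5, 1000), rng.normal(3, 0.5, 1000)])
test = np.concatenate([rng.normal(-3, 0.5, 500), rng.normal(3, 0.5, 500)])

# Maximum-likelihood Laplacian fit: location = median, scale = mean |x - mu|.
mu = np.median(train)
scale = np.mean(np.abs(train - mu))
laplace_ll = np.mean(-np.log(2 * scale) - np.abs(test - mu) / scale)

# Gaussian-kernel density estimate of the same source (bandwidth h assumed).
h = 0.3
diff = (test[:, None] - train[None, :]) / h
kde = np.exp(-0.5 * diff**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))
kde_ll = np.mean(np.log(kde + 1e-300))

# The kernel estimate places mass on both modes; the Laplacian, centred
# between them, wastes mass where the data never falls.
better = kde_ll > laplace_ll
```

This is the sense in which non-parametric estimation "allows a wider range of densities to be modelled": the model shape is driven by the data rather than chosen from a fixed pair of alternatives.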