In addition, the estimation of multiple ICA parameters in the proposed method provides an important advantage in flexibility over other non-parametric and standard ICA algorithms when data with linear/nonlinear dependencies and complex structures are processed.
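For concreteness, the classification rule implied by an ICA mixture model can be illustrated with a minimal Python sketch. It assumes the standard formulation in which each class k is modelled by an unmixing matrix W_k and a bias b_k with a factorial source prior; the Laplacian prior and all identifiers below are illustrative assumptions, not the exact implementation discussed here.

import numpy as np

def log_source_density(s):
    # Assumed Laplacian prior: log p(s) = -sum(|s_i|) - n*log(2)
    return -np.sum(np.abs(s)) - s.size * np.log(2.0)

def class_log_likelihood(x, W, b):
    # log p(x | C_k) = log p(s) + log|det W_k|, with s = W_k (x - b_k)
    s = W @ (x - b)
    return log_source_density(s) + np.log(np.abs(np.linalg.det(W)))

def class_posteriors(x, params, priors):
    # Bayes' rule over the mixture components, stabilised in the log domain
    logp = np.array([class_log_likelihood(x, W, b) + np.log(p)
                     for (W, b), p in zip(params, priors)])
    logp -= logp.max()
    post = np.exp(logp)
    return post / post.sum()

A sample x is then assigned to the class that maximises class_posteriors(x, params, priors).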
The correction of the posterior probability has proven useful for improving classification accuracy on ICA mixtures with nonlinear dependencies. The correction is practical in low and medium dimensions; in higher dimensions it is constrained by the limited availability of data. Using few parameters (such as the ICA model parameters) yields an efficient representation of the data mixture, at the cost of assuming linear dependencies among the latent variables of the generative data model. This restriction, however, is relaxed by the proposed correction. Thus, the proposed method can be applied to a range of classification problems with data generated from underlying ICA models with residual dependencies, and it may be suitable for the analysis of real-world mixtures.
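One plausible way to realise such a correction, sketched below in Python, is to replace the factorial source density with a joint kernel density estimate of the recovered sources, so that residual dependencies between sources enter the likelihood. This is an assumed illustration of the idea rather than the exact correction procedure, and the joint KDE is precisely what becomes impractical as the dimension grows, matching the limitation noted above.

import numpy as np
from scipy.stats import gaussian_kde

def corrected_log_likelihood(x, W, b, S_train):
    # S_train: recovered training sources for this class, shape (d, n).
    # The joint KDE captures dependencies that a factorial prior ignores.
    # (In practice the KDE would be fitted once per class and cached.)
    s = W @ (x - b)
    kde = gaussian_kde(S_train)
    density = kde(s.reshape(-1, 1))[0]
    return np.log(density + 1e-300) + np.log(np.abs(np.linalg.det(W)))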
The role of unlabelled data in the training and classification stages of semi-supervised learning of ICA mixtures has been demonstrated, showing that unlabelled data can degrade the performance of the classifier when they do not fit the generative model of the data. In addition, the fuzziness of the data mixture (the strength of the membership of the data in the classes) contributes to determining the role of the unlabelled data.
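The influence of unlabelled data can be made explicit in the semi-supervised objective, sketched below in Python (reusing class_log_likelihood from the sketch above): labelled samples contribute the likelihood of their known class, while unlabelled samples contribute a marginal over all classes, so samples that do not fit the generative model pull the parameter estimates in misleading directions. The formulation is an assumed illustration, not the exact objective used here.

import numpy as np

def semi_supervised_log_likelihood(X_lab, y_lab, X_unlab, params, priors):
    ll = 0.0
    # Labelled samples: likelihood of their known class only.
    for x, k in zip(X_lab, y_lab):
        W, b = params[k]
        ll += class_log_likelihood(x, W, b) + np.log(priors[k])
    # Unlabelled samples: marginal over all classes (log-sum-exp).
    for x in X_unlab:
        logp = np.array([class_log_likelihood(x, W, b) + np.log(p)
                         for (W, b), p in zip(params, priors)])
        m = logp.max()
        ll += m + np.log(np.sum(np.exp(logp - m)))
    return ll

When the posteriors of the unlabelled samples are nearly uniform (high fuzziness), their marginal term is flat and they carry little information; when the generative model is misspecified, the same term actively biases the estimates, which is the degradation noted above.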