of any ICA algorithm for parameter updating; and (iv) correction of residual
dependencies in classification after estimating the parameters of the model.
Thus, the algorithm balances the parametric ICAMM formulation with
increased flexibility by using non-parametric source density estimation.
Furthermore, no prior knowledge (e.g., mixture proportions or density
parameters) is required, but any available priors, even fragmented
knowledge, can be incorporated in the learning stage. This is an
advantage over other methods, such as Bayesian learning, which can
require tuning an extensive number of prior parameters.
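The non-parametric source density estimation mentioned above can be illustrated with a Gaussian-kernel estimator; the function name and the Silverman bandwidth rule below are assumptions for the sketch, not the thesis implementation.

```python
import numpy as np

def kernel_density(s, samples, h=None):
    """Gaussian-kernel estimate of a source density p(s) at point `s`.

    `samples` holds recovered values of one source; `h` is the kernel
    bandwidth (Silverman's rule of thumb if not given).  An illustrative
    sketch of non-parametric density estimation, not the thesis code.
    """
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    if h is None:
        h = 1.06 * samples.std() * n ** (-0.2)   # Silverman's rule of thumb
    u = (s - samples) / h
    return float(np.exp(-0.5 * u**2).sum() / (n * h * np.sqrt(2.0 * np.pi)))
```

Because the estimate adapts to whatever source densities the data exhibit, no parametric density assumption needs to be imposed during learning.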
The application of the proposed ICAMM technique has been demonstrated
for several ICA mixtures and ICA datasets. The non-parametric approach of
the procedure clearly yielded better source-separation results than standard
ICA algorithms, indicating promising adaptive properties in learning source
densities, even with small sample sizes. In addition, the estimation of
multiple ICA parameters by the proposed method provides an important
flexibility advantage over other non-parametric and standard ICA algorithms
when processing data with linear/nonlinear dependencies and complex
structures. The correction of the posterior probability has proven to be
useful in the improvement of classification accuracy for ICA mixtures with
nonlinear dependencies. The use of few parameters (such as the ICA model
parameters) yields an efficient representation of the data mixture, at the
cost of assuming linear dependencies in the latent variables of the
generative data model. However, the proposed correction relaxes this
restriction. Thus, the proposed method can be applied to a range of
classification problems with data generated from underlying ICA models
with residual dependencies, and it may be suitable for the analysis of real-world
mixtures. In addition, the role of unlabelled data in training and classification
for semi-supervised learning of ICA mixtures has been demonstrated,
showing that unlabelled data can degrade the performance of the classifier
when they do not fit the generative model of the data. The Mixca algorithm
was exhaustively validated in the thesis by applying it to diverse
simulations and real-world applications. In addition, it was compared with
standard classifiers (MLP, LDA), non-parametric ICA algorithms (Npica,
Radical, Kernel-ICA), and standard ICA methods (InfoMax, extended
InfoMax, JADE, TDSEP, FastICA).
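As a rough illustration of classification in an ICA mixture model, class posteriors can be computed from the de-mixing matrices, bias terms, and source densities via Bayes' rule. All names here (`icamm_posteriors`, `source_logpdf`) are assumptions, and the residual-dependency correction itself is omitted from this sketch.

```python
import numpy as np

def icamm_posteriors(x, demixing, biases, priors, source_logpdf):
    """Class posteriors p(C_k | x) under an ICA mixture model.

    For class k, sources are s = W_k (x - b_k) and the likelihood is
    p(x | C_k) = |det W_k| * prod_i p_i(s_i).  `source_logpdf` returns
    the summed log marginal source densities (e.g. a kernel estimate).
    Illustrative sketch; not the thesis API.
    """
    log_post = []
    for W, b, prior in zip(demixing, biases, priors):
        s = W @ (x - b)                              # recovered sources for class k
        log_like = np.log(abs(np.linalg.det(W))) + source_logpdf(s)
        log_post.append(np.log(prior) + log_like)
    log_post = np.array(log_post)
    log_post -= log_post.max()                       # avoid underflow
    p = np.exp(log_post)
    return p / p.sum()
```

A classifier then assigns x to argmax_k p(C_k | x); the posterior correction discussed above would adjust these values when residual nonlinear dependencies remain between the sources.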
• A novel method for agglomerative hierarchical clustering from mixtures of
ICAs has been proposed (Sects. 4.2 and 4.3). The procedure includes
two stages: learning the parameters of the mixtures (basis vectors and bias
terms) using the Mixca algorithm and clustering the ICA mixtures following
a bottom-up agglomerative scheme to construct a hierarchy for classification.
The source probability density function is estimated non-parametrically,
and the minimum Kullback-Leibler distance is used as the criterion for
merging clusters at each level of the hierarchy.
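A minimal sketch of the bottom-up agglomerative scheme, assuming cluster source densities are available as histograms on a common grid; merging by averaging the two densities is an illustrative simplification, not the thesis procedure.

```python
import numpy as np

def symmetric_kl(p, q, eps=1e-12):
    """Symmetrized Kullback-Leibler divergence between two discretized densities."""
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def agglomerate(densities):
    """Bottom-up merging: repeatedly fuse the pair of clusters with minimum
    symmetric KL distance, recording the merge order (the hierarchy).
    `densities` are histogram estimates on a common grid.  Sketch only."""
    clusters = [(i,) for i in range(len(densities))]
    dens = list(densities)
    merges = []
    while len(clusters) > 1:
        # distance between every remaining pair of clusters
        pairs = [(symmetric_kl(dens[i], dens[j]), i, j)
                 for i in range(len(dens)) for j in range(i + 1, len(dens))]
        _, i, j = min(pairs)
        merges.append((clusters[i], clusters[j]))
        merged = 0.5 * (dens[i] + dens[j])           # fuse the two densities
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] \
                   + [clusters[i] + clusters[j]]
        dens = [d for k, d in enumerate(dens) if k not in (i, j)] + [merged]
    return merges
```

The recorded merge order defines the levels of the hierarchy, from the individual ICA mixture components at the bottom to a single cluster at the top.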
The hierarchical clustering method was validated through several simulations
and by processing real data (image processing). Simulations showed the capability of