source extraction and signal classification are performed simultaneously, which
could contribute to obtaining deeper insight into the underlying physical phenomena
from observations originating in real-world applications. Thus, this theoretical
foundation was appropriate for dealing with the outlined problems. However, there
were unsolved issues with the existing methods that were researched in this work:
support for different kinds of learning; greater flexibility in source modelling;
support for different strategies of model parameter updating; and correction of
residual dependencies. The formulation and testing of a general ICAMM framework
that solves these important open issues represents a true challenge. It is
particularly difficult in the context of real applications, where a priori
knowledge of the data is incomplete and it is therefore complicated to assign a
meaning to the parameters of the model (sources, bias terms, and mixing
matrices).
Chapter 3 presented the first method of the research, which addressed obtaining a
general procedure by incorporating new features into ICAMM. This method
seeks a balance between parametric and non-parametric estimation.
Thus, the ICA mixtures were modelled by a small set of parameters, maintaining
simplicity, while flexibility in estimating sources with complex distributions
was achieved by using a non-parametric kernel-based technique. The non-Gaussianity
of the data was preserved, since no assumptions were imposed on the source model.
The method allows unsupervised, supervised, and semi-supervised
learning scenarios to be modelled in order to deal with different kinds of
fragmented knowledge in specific applications. The advantages supplied by different
kinds of ICA algorithms can be exploited, since any ICA algorithm can be used for
model parameter updating. In addition, a procedure was formulated to estimate
residual dependencies after training and to use them to correct the posterior
probability of each class given the test observation vector. The capabilities of
the method were demonstrated by means of an extensive set of simulations. Thus,
ICA and ICA mixture data with several sample sizes and different kinds of
distributions, such as Laplacian, uniform, Gaussian, Rayleigh, and K-type, were
considered. The method was compared with standard ICA algorithms (InfoMax,
Extended InfoMax, FastICA, JADE, and TDSEP) as well as with non-parametric
algorithms (NpICA, RADICAL, and Kernel-ICA). The results show competitive
performance of the proposed method in accurately recovering the sources even for
small sample sizes; improved classification accuracy for data with non-linear
dependencies when the proposed correction is used; and consistent learning from
labelled-unlabelled data.
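To make the classification step more concrete, the following is a minimal sketch (in Python, assuming NumPy and SciPy are available) of the kind of posterior computation described above: each class is modelled by a demixing matrix, a bias term, and kernel density estimates of its recovered sources, and an observation is assigned to the class with the highest posterior. The function and variable names (icamm_posteriors, class_log_likelihood, etc.) are illustrative rather than taken from the method; in the actual procedure the demixing matrices would be obtained with any ICA algorithm, and the posteriors would be further corrected for residual dependencies, which is not shown here.

```python
import numpy as np
from scipy.stats import gaussian_kde

def class_log_likelihood(x, W, b, source_kdes):
    # log p(x | C_k) = log|det W_k| + sum_m log p_m(s_m), with s = W_k (x - b_k)
    # and each source density p_m estimated non-parametrically from training data.
    s = W @ (x - b)
    log_det = np.log(np.abs(np.linalg.det(W)))
    log_sources = sum(np.log(kde(v)[0] + 1e-300) for kde, v in zip(source_kdes, s))
    return log_det + log_sources

def icamm_posteriors(x, classes, priors):
    # Posterior probability of each class for one observation vector x.
    log_lik = np.array([class_log_likelihood(x, c["W"], c["b"], c["kdes"])
                        for c in classes])
    log_post = np.log(priors) + log_lik
    log_post -= log_post.max()            # stabilise before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

# Toy usage: two synthetic 2-D classes with Laplacian sources. The demixing
# matrix of each class would normally be estimated with any ICA algorithm;
# here the true inverse mixing matrix is used as a stand-in.
rng = np.random.default_rng(0)
classes = []
for bias in (np.zeros(2), 3.0 * np.ones(2)):
    A = rng.normal(size=(2, 2))                       # mixing matrix
    X = A @ rng.laplace(size=(2, 500)) + bias[:, None]
    W = np.linalg.inv(A)                              # stand-in ICA estimate
    S_est = W @ (X - bias[:, None])                   # recovered sources
    kdes = [gaussian_kde(S_est[m]) for m in range(2)]
    classes.append({"W": W, "b": bias, "kdes": kdes})

x_test = np.array([3.1, 2.9])                         # near the second class
print(icamm_posteriors(x_test, classes, np.array([0.5, 0.5])))
```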
The second method of the research, explained in Chap. 4, consisted of an
agglomerative hierarchical clustering procedure that creates higher levels of
classification from a basic level of clusters formed by an ICA mixture model. This
kind of organization allows the flexibility of the data model to increase through
the hierarchical levels. Thus, the limited flexibility of the individual ICA
projection models is compensated by the overall flexibility of the mixture of
ICAs, which in turn is relaxed by the complete hierarchy. The optimum number of
clusters is estimated using the partition coefficient and partition entropy. The
use of a hierarchy of relatively