where $m$ and $l$ are indices that correspond to the training vectors belonging to class $k$ and to the training vectors not belonging to class $k$, respectively.
In totally supervised training, $p(C_k \mid \mathbf{x}(m); W(i)) = 1 \;\forall m$ and $p(C_k \mid \mathbf{x}(l); W(i)) = 0 \;\forall l$; hence the residual increment $r_k(i) = 0 \;\forall k$. In other words, only the training vectors corresponding to class $k$ affect the increment, and the convergence properties are the same as those of the underlying ICA algorithm: every ICA model of the mixture is learned separately. In the case of semi-supervised learning, the lower the amount of supervision, the higher the perturbation introduced by $r_k(i)$ in the learning of class $k$. It is clear that, close to the optimum, $p(C_k \mid \mathbf{x}(m); W(i)) \approx 1 \;\forall m$ and $p(C_k \mid \mathbf{x}(l); W(i)) \approx 0 \;\forall l$; that is, the convergence properties are essentially the same as those of the underlying ICA algorithm.
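As a rough illustration of this weighting effect, the sketch below clamps the posteriors of labeled points to hard 0/1 values while unlabeled points keep soft model posteriors, so the supervision ratio directly controls how much out-of-class probability mass can leak into the update for class $k$. All names are hypothetical; this is a minimal sketch of the idea, not the actual Mixca implementation.

```python
import numpy as np

def responsibilities(soft_post, labels, supervised_mask):
    """Mix hard (supervised) and soft (unsupervised) class posteriors.

    soft_post       : (N, K) posteriors p(C_k | x(n); W(i)) from the current model
    labels          : (N,) true class index of each training vector
    supervised_mask : (N,) boolean, True where the label is actually used
    """
    post = soft_post.copy()
    n_sup = np.flatnonzero(supervised_mask)
    post[n_sup] = 0.0
    post[n_sup, labels[n_sup]] = 1.0   # labeled points: p = 1 for own class, 0 elsewhere
    return post

# Toy example: 3 classes, posteriors standing in for some current model state.
rng = np.random.default_rng(0)
N, K = 1000, 3
soft = rng.dirichlet(np.ones(K), size=N)
labels = rng.integers(0, K, size=N)

for ratio in (1.0, 0.5, 0.1):          # fraction of labeled vectors
    mask = rng.random(N) < ratio
    post = responsibilities(soft, labels, mask)
    # Posterior mass that out-of-class vectors contribute to class 0:
    # the source of the perturbation r_0(i). Zero when fully supervised.
    leak = post[labels != 0, 0].sum()
    print(f"supervision {ratio:4.0%}: out-of-class mass for class 0 = {leak:.1f}")
```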
Since it is very difficult to undertake a general convergence analysis of what happens far from the optimum in ICAMM, we have included some simulations in the next section. These simulations demonstrate how the percentage of semi-supervision modifies the learning curves with respect to the totally supervised case, and how this percentage modifies the approximate number of observation vectors required to achieve a particular mean SIR (signal-to-interference ratio), which is defined in the next section.
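For orientation, here is a minimal sketch of a mean-SIR computation, assuming the common definition $\mathrm{SIR} = 10\log_{10}(\text{signal power}/\text{interference power})$ after matching each true source to its best-correlated estimate; the precise definition used in this book is the one given in the next section.

```python
import numpy as np

def mean_sir_db(s_true, s_est):
    """Mean SIR in dB over sources, assuming the usual definition:
    for each true source, take the best-matching estimate and split it
    into the component aligned with the true source (signal) and the
    residual (interference)."""
    # Normalize rows to unit power so scaling ambiguities do not matter.
    s_true = s_true / np.linalg.norm(s_true, axis=1, keepdims=True)
    s_est = s_est / np.linalg.norm(s_est, axis=1, keepdims=True)
    corr = np.abs(s_true @ s_est.T)        # |cosine| between all source pairs
    sirs = []
    for i in range(s_true.shape[0]):
        c = corr[i, np.argmax(corr[i])]    # greedy matching (fine for a sketch)
        sirs.append(10 * np.log10(c**2 / max(1 - c**2, 1e-12)))
    return np.mean(sirs)
```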
Finally, we have defined the demixing matrix for every class as $W_k = A_k^{-1}$ $(k = 1, \ldots, K)$; that is, a prewhitening step is not included. In several ICA algorithms, the data are first whitened, as explained in Chap. 2. Prewhitening is an optional step that has been used in many ICA algorithms to reduce the computational burden [11, 12]. However, there are also algorithms that avoid the prewhitening phase; see, for instance, [13, 14]. In the context of semi-supervised ICAMM, it is not possible to prewhiten the data because the allocation of the points to each class $k$ is not totally known a priori [15]. Specifically, in our Mixca procedure for unsupervised or semi-supervised learning, the estimation of the whitening matrix can be incorporated into each of the parameter updating steps. In the case of supervised learning, the prewhitening step can be performed prior to the Mixca procedure. We performed several simulations with Laplacian- and uniform-distributed sources, which demonstrated that there are no major differences in BSS performance (SIR differences below 1 dB) whether or not a prewhitening step is used.
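As a sketch of the optional step being discussed, the snippet below performs standard PCA (ZCA) whitening: it transforms the data so that its covariance becomes the identity. Doing this per class would require knowing which points belong to class $k$, which is exactly what semi-supervised ICAMM lacks a priori. This is an illustration of the generic technique, not the Mixca code itself.

```python
import numpy as np

def whitening_matrix(x):
    """Standard PCA whitening: returns V such that cov(V @ x_centered) = I.

    x : (dims, N) data matrix. For per-class whitening, all points of the
    class would be needed, which is unknown a priori in (semi-)unsupervised
    ICAMM.
    """
    xc = x - x.mean(axis=1, keepdims=True)
    cov = xc @ xc.T / x.shape[1]
    eigval, eigvec = np.linalg.eigh(cov)              # symmetric covariance
    V = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T   # ZCA variant
    return V, xc

# Usage: whiten, then run ICA on z; cov(z) is the identity by construction.
x = np.random.default_rng(1).standard_normal((4, 5000))
V, xc = whitening_matrix(x)
z = V @ xc
assert np.allclose(z @ z.T / z.shape[1], np.eye(4), atol=1e-8)
```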
3.4 Simulations
In this section, we demonstrate the performance of the algorithms proposed in the previous sections. Several simulations with different kinds and numbers of source densities and numbers of classes (ICA mixtures) are presented to evaluate the proposed technique in BSS, classification of ICA mixtures, convergence properties, classification of ICA mixtures with nonlinear dependencies, and semi-supervised learning. The parameters of Mixca were configured as follows: $\alpha = 5 \times 10^{-5}$ (Eq. 3.6); $a = 1$ (Eq. 3.7); $h = 1.06\,\sigma N^{-1/5}$, with $\sigma = \operatorname{std}(s_m)$ (Silverman's rule-of-thumb bandwidth for kernel density estimation).
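The bandwidth above is Silverman's rule of thumb; the sketch below shows how such a bandwidth feeds a Gaussian-kernel density estimate of a source. It is illustrative only and not necessarily the exact non-parametric estimator used inside Mixca.

```python
import numpy as np

def silverman_bandwidth(s):
    """h = 1.06 * std(s) * N**(-1/5), Silverman's rule of thumb."""
    return 1.06 * np.std(s) * len(s) ** (-1 / 5)

def kde(s, grid):
    """Gaussian-kernel density estimate of source samples s on a grid."""
    h = silverman_bandwidth(s)
    u = (grid[:, None] - s[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(s) * h * np.sqrt(2 * np.pi))

# Example: estimate a Laplacian source density, as in the simulations below.
rng = np.random.default_rng(2)
s = rng.laplace(size=2000)
grid = np.linspace(-6, 6, 200)
p_hat = kde(s, grid)   # approximates the Laplacian pdf on the grid
```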