This observation holds for AMICA and JADER for channel numbers from 8 to 32, but the rule is not verified for 48 channels in this particular case of the random data set. A closer look at figure 7(e) (PI vs. sample size for 48 channels) shows a break in the JADER curves around 6 s to 8 s, with 9 s being required to reach a PI below 0.1. The rule thus seems inadequate for the JADER algorithm when the number of channels reaches 48.
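The definition of the performance index (PI) is not restated in this excerpt; a common choice in ICA benchmarks is the Amari index of the gain matrix G = W·A, which is zero for perfect separation up to the usual permutation and scaling ambiguities. A minimal sketch under that assumption (the study's exact PI may differ):

```python
import numpy as np

def amari_index(W, A):
    """Amari-style performance index of the gain matrix G = W @ A.

    Returns 0 when G is a scaled permutation matrix (perfect
    separation). This is a common PI in the ICA literature and is
    used here only as an illustrative assumption.
    """
    G = np.abs(W @ A)
    n = G.shape[0]
    # For each row/column, sum the entries relative to the largest one.
    row = (G / G.max(axis=1, keepdims=True)).sum(axis=1) - 1
    col = (G / G.max(axis=0, keepdims=True)).sum(axis=0) - 1
    return (row.sum() + col.sum()) / (2 * n * (n - 1))

# A perfect unmixing matrix (the exact inverse of the mixing) gives PI = 0.
A = np.array([[0.0, 2.0], [4.0, 0.0]])
W_perfect = np.linalg.inv(A)
print(amari_index(W_perfect, A))  # → 0.0
```

Because the index normalizes each row and column by its largest entry, it is invariant to the sign, scale, and ordering of the recovered sources.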
Extended InfoMax performs poorly at these data lengths, requiring a much larger sample size to converge, which confirms the results presented in [23]. This phenomenon stems from our choice of simulated data: because sub-Gaussian sources are used, the algorithm (even in its extended version) needs more data points to give reliable results. Empirically, the 30n² rule seems appropriate for the Extended InfoMax algorithm, but appears too conservative for the other three.
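For reference, the 30n² heuristic (with n the number of channels) translates into the following minimum sample counts; the conversion to seconds assumes a 256 Hz sampling rate, which is an illustrative choice not stated in this excerpt:

```python
# Minimum data length under the 30*n^2 heuristic, where n is the
# number of channels. FS = 256 Hz is an assumed sampling rate for
# illustration only; the study's actual rate may differ.
FS = 256  # Hz (assumed)

for n in (8, 16, 24, 32, 48):
    samples = 30 * n * n
    print(f"{n:2d} channels: {samples:6d} samples ≈ {samples / FS:6.1f} s")
```

The quadratic growth in n explains why a rule that is reasonable at 8 channels quickly becomes demanding at 48.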
In the case of AMICA, the initialization parameter must be considered to fully understand why the algorithm fails to satisfy our minimum data-length rule on the random data set with 48 channels (see below the analysis for the plausible data set).
Impact of Initialization. Our second objective is to analyse the sensitivity of the ICA algorithms to the initialization step, performed with either whitening or sphering. The curves of figure 7 show the evolution of PI with data sample size for the five channel-number configurations considered. The whitening and sphering curves are difficult to distinguish for FastICA and JADER, which suggests that these methods are not sensitive to the decorrelation step. This can be explained by their optimization strategies, based respectively on a fixed-point and a Jacobi technique, both known to converge faster and more reliably than gradient techniques [9].
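The whitening/sphering distinction can be made concrete. The exact conventions used in the study are not given in this excerpt; a common reading is PCA whitening (rotate onto the eigenbasis, then rescale) versus ZCA sphering (the symmetric square root C^{-1/2}, the whitening transform closest to the identity). A sketch under those assumed definitions:

```python
import numpy as np

def pca_whiten(X):
    """PCA whitening of channels x samples data X (assumed zero-mean):
    project onto the covariance eigenvectors and scale to unit variance.
    Textbook definition; the study's exact convention may differ."""
    d, E = np.linalg.eigh(np.cov(X))
    return np.diag(d ** -0.5) @ E.T @ X

def zca_sphere(X):
    """ZCA sphering: symmetric (zero-phase) whitening, C^{-1/2} @ X."""
    d, E = np.linalg.eigh(np.cov(X))
    return E @ np.diag(d ** -0.5) @ E.T @ X

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 5000))
M = np.array([[2., 0, 0, 0], [1, 1, 0, 0], [0, 0, 3, 0], [0, 0, 1, 1]])
X = M @ S
X -= X.mean(axis=1, keepdims=True)

# Both decorrelate the data to unit variance...
print(np.allclose(np.cov(pca_whiten(X)), np.eye(4), atol=1e-6))  # → True
print(np.allclose(np.cov(zca_sphere(X)), np.eye(4), atol=1e-6))  # → True
```

Both produce identity-covariance data, but they differ by a rotation, so a gradient-based ICA algorithm starts its descent from a different point in each case, which is exactly where the sensitivity discussed here comes from.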
AMICA performs better than FastICA and JADER in most configurations when enough data is available. This algorithm fits an extended Gaussian model (a mixture of scaled Gaussians) to each source time course, and therefore needs more data and execution time for accurate estimation and convergence (see the note below on convergence time). However, the results are rather disappointing for AMICA in the 48-channel configuration, with a PI around 0.06 for lengths greater than 10 s with whitening (with a large standard deviation, around 0.05), but a PI well above 0.1 with sphering. In this case initialization has a noticeable impact on AMICA. The same observation holds for Extended InfoMax in all five channel-size cases. For this specific data set of randomly mixed sources, whitening initialization (solid curves) globally yields a better PI than sphering initialization (dashed curves).
As pointed out in [5], Extended InfoMax and AMICA rely on a natural gradient descent optimization scheme, so initialization is a major issue for these algorithms: the farther the initialization is from the solution, the longer the optimization procedure will take. In the case of random mixing matrices, the solutions are distributed widely over the optimization space, making it difficult to define an adequate initialization point. In this context, whitening seems on average more appropriate than sphering. Note that in our simulations no iteration or convergence-criterion parameter was changed in the Extended InfoMax algorithm, while the maximum number of iterations was set to 300 for the AMICA procedure (numerical issues were experienced with the default value of 100 for short data lengths (<= 2 s)).
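To illustrate the kind of iteration whose starting point matters, here is a deliberately simplified natural-gradient Infomax update with a tanh score (valid for super-Gaussian sources); the actual Extended InfoMax and AMICA implementations are considerably more elaborate, and all names and parameter values below are illustrative:

```python
import numpy as np

def natural_gradient_ica(X, n_iter=1000, lr=0.02, seed=0):
    """Simplified natural-gradient Infomax for super-Gaussian sources.

    X is channels x samples. W starts near the identity; a start far
    from the solution means more iterations, which is the point made
    in the text. Not the study's actual implementation.
    """
    n, T = X.shape
    rng = np.random.default_rng(seed)
    W = np.eye(n) + 0.01 * rng.standard_normal((n, n))
    for _ in range(n_iter):
        Y = W @ X
        # Natural-gradient update: dW = (I - 2*tanh(Y) @ Y.T / T) @ W
        W += lr * (np.eye(n) - 2.0 * np.tanh(Y) @ Y.T / T) @ W
    return W

# Demo on two mildly mixed Laplacian (super-Gaussian) sources.
rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 2000))
A = np.array([[1.0, 0.5], [0.3, 1.0]])
W = natural_gradient_ica(A @ S)
G = np.abs(W @ A)  # close to a scaled permutation after convergence
```

With sub-Gaussian sources this fixed tanh score is unstable, which is precisely why the extended version switches the sign of the nonlinearity per source, and why it needs more data to decide which regime applies.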