1. Serial estimation: In this approach, the components are estimated sequentially. This means that each extracting vector w_i must necessarily be orthogonal to the previously obtained vectors. For this purpose, one can employ the Gram-Schmidt orthogonalization method [128]. This serial approach is also known in the literature as the deflation approach [89].
2. Parallel estimation: In this case, a certain number of sources is estimated at once, with the vectors w_i being adapted in parallel. However, since all vectors are required to be mutually orthogonal, an additional orthonormalization step is needed, and the Gram-Schmidt procedure can be employed once more [148]. A code sketch of both strategies is given after this list.
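As a rough illustration of the two strategies, the sketch below combines Gram-Schmidt-based orthogonalization with a FastICA-style kurtosis update. Neither the specific update rule nor the prewhitening assumption comes from this section; they are placeholder choices so that the serial (deflation) and parallel orthonormalization steps have something to act on, and all function names are illustrative.

```python
import numpy as np

def gram_schmidt_deflate(w, previous):
    """Remove from w its projections onto the previously obtained vectors
    (one Gram-Schmidt step), then renormalize."""
    for w_j in previous:
        w = w - (w @ w_j) * w_j
    return w / np.linalg.norm(w)

def serial_estimation(x, n_sources, n_iter=200, seed=0):
    """Deflation (serial) approach: the vectors w_i are extracted one at a time,
    each kept orthogonal to those already found. x is assumed prewhitened, shape (N, T)."""
    rng = np.random.default_rng(seed)
    extracted = []
    for _ in range(n_sources):
        w = rng.standard_normal(x.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            y = w @ x                                  # current output y_i(n)
            w = (x * y**3).mean(axis=1) - 3 * w        # kurtosis-based fixed-point step (placeholder contrast)
            w = gram_schmidt_deflate(w, extracted)     # enforce orthogonality to previous vectors
        extracted.append(w)
    return np.array(extracted)

def parallel_estimation(x, n_sources, n_iter=200, seed=0):
    """Parallel approach: all vectors are adapted at once, and the whole matrix
    is re-orthonormalized after each iteration (QR is a Gram-Schmidt-type step)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_sources, x.shape[0]))
    for _ in range(n_iter):
        Y = W @ x
        W = (Y**3 @ x.T) / x.shape[1] - 3 * W          # same update, applied to all rows
        Q, _ = np.linalg.qr(W.T)                       # orthonormalize the set of vectors
        W = Q.T
    return W
```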
6.2.2.5 The Infomax Principle and the Maximum Likelihood Approach
Another interesting approach to performing ICA is the so-called Infomax principle, introduced in the context of BSS by Bell and Sejnowski [31], even though key results had already been established in a different context [212]. The approach is based on concepts originating in the field of neural networks. Neural networks will be discussed in more detail in Chapter 7, but, for the moment, it suffices to consider that one possible structure of a neural network is composed of a linear portion followed by a set of nonlinearities.
Let us consider the structure depicted in Figure 6.4, where A represents the mixing system. The separating system is an artificial neural network composed of a linear part (the matrix W) and a set of nonlinearities f_i(·), each one applied to a particular output y_i, so that we define the vector
f(y) = [ f_1(y_1)  f_2(y_2)  ···  f_N(y_N) ]^T        (6.37)
The nonlinear functions f_i(·) are monotonically increasing, with f_i(−∞) = 0 and f_i(+∞) = 1.
According to the Infomax principle, the coefficients of the neural network should be adjusted in order to maximize the amount of information that flows from the inputs to the outputs, which means that W should be chosen to maximize the mutual information between x and z, thus leading to
FIGURE 6.4
Structure of an artificial neural network: the sources s_1(n), ..., s_N(n) are mixed by A to produce x_1(n), ..., x_N(n); the linear part W yields y_1(n), ..., y_N(n), and the nonlinearities f_1(·), ..., f_N(·) give the outputs z_1(n), ..., z_N(n).
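As a simple illustration of the structure in Figure 6.4, the sketch below computes y = Wx followed by z = f(y) as in (6.37), using the logistic function as one possible monotonically increasing f_i with limits 0 and 1; the function names and the use of NumPy are assumptions for illustration only.

```python
import numpy as np

def logistic(y):
    """A monotonically increasing nonlinearity with f(-inf) = 0 and f(+inf) = 1."""
    return 1.0 / (1.0 + np.exp(-y))

def separating_network(x, W, f=logistic):
    """Forward pass of the separating structure in Figure 6.4:
    linear part y = W x, then componentwise nonlinearities z = f(y), as in (6.37)."""
    y = W @ x          # candidate source estimates y_i(n)
    z = f(y)           # outputs z_i(n) = f_i(y_i(n))
    return y, z

# Usage with toy dimensions (illustrative only):
# x of shape (N, T) holds the mixtures, and W is an N x N separating matrix.
```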
 