The separating structure is shown in Figure 6.5, which is a fully interconnected neural network composed of linear neurons.

FIGURE 6.5
Fully interconnected linear neural network used as separating structure (inputs $x_1(n)$ and $x_2(n)$, outputs $y_1(n)$ and $y_2(n)$, cross-coupling weights $m_{12}$ and $m_{21}$).

From Figure 6.5, the output signals can be expressed as
$$y_1 = x_1 - m_{21}\, y_2 \qquad (6.45)$$

$$y_2 = x_2 - m_{12}\, y_1 \qquad (6.46)$$
or, equivalently, using matrix notation,

$$\mathbf{y} = \mathbf{x} - \mathbf{M}\mathbf{y} \qquad (6.47)$$

where $\mathbf{M}$ is the weight matrix composed of elements $m_{ij}$, with $m_{ij} = 0$ for $i = j$.
The update law for $\mathbf{M}$ is given by

$$m_{ij} \leftarrow m_{ij} + \mu\, E\{ f(y_i)\, g(y_j) \} \qquad (6.48)$$
This update law employs the idea of nonlinear decorrelation discussed in Section 6.2: the algorithm stops updating the weights when the nonlinear correlation between the outputs is null.
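To make the mechanics concrete, the following sketch implements the feedback structure (6.47) together with a stochastic (per-sample) version of the update (6.48). The nonlinearities $f(y) = y^3$ and $g(y) = \tanh(y)$, the step size, and the number of passes are illustrative assumptions; the text does not fix them.

```python
import numpy as np

def herault_jutten(x, f=lambda u: u**3, g=np.tanh, mu=1e-3, n_passes=20):
    """Herault-Jutten separation of n mixtures; x has shape (n, T).

    Returns the matrix of source estimates y with the same shape.
    """
    n, T = x.shape
    M = np.zeros((n, n))  # cross-coupling weights, with m_ii = 0
    I = np.eye(n)
    y = np.zeros_like(x, dtype=float)
    for _ in range(n_passes):
        for k in range(T):
            # Feedback structure (6.47): y = x - M y  <=>  (I + M) y = x
            y[:, k] = np.linalg.solve(I + M, x[:, k])
            # Stochastic form of (6.48): m_ij <- m_ij + mu f(y_i) g(y_j), i != j
            dM = mu * np.outer(f(y[:, k]), g(y[:, k]))
            np.fill_diagonal(dM, 0.0)  # keep the diagonal of M at zero
            M += dM
    return y
```

Note that every sample requires solving the feedback loop $(\mathbf{I} + \mathbf{M})\mathbf{y} = \mathbf{x}$; for two sources this amounts to solving (6.45) and (6.46) simultaneously.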
As discussed earlier, nonlinear decorrelation does not guarantee, in all
cases, that the signals are mutually independent. In practice, the effective use
of the algorithm is restricted to scenarios with a limited number of sources,
and there may be convergence problems even for the case with two sources,
as studied in [72, 93, 278]. Nevertheless, Herault and Jutten's algorithm is one of the simplest BSS algorithms, and it was of major importance in the development of the BSS research field.
6.3.2 The Infomax Algorithm
The Infomax algorithm, also known as the Bell-Sejnowski (BS) algorithm,
is derived from the Infomax principle, discussed in Section 6.2.2.5, and
employs a steepest-descent approach to update the free parameters. Thus, the first step in building the adaptation rule is to obtain the gradient of the cost function:
$$\frac{\partial J_{\text{Infomax}}(\mathbf{W})}{\partial \mathbf{W}} = E\left\{ \mathbf{g}(\mathbf{W}\mathbf{x})\,\mathbf{x}^T \right\} + \left(\mathbf{W}^T\right)^{-1} \qquad (6.49)$$
where $\mathbf{g}(\cdot) = [\, g_1(\cdot) \;\ldots\; g_N(\cdot)\,]^T$ is a vector of functions such that

$$g_i(x) = \frac{d \log f_i(x)}{dx} \qquad (6.50)$$
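As a concrete illustration, the sketch below runs a batch gradient ascent on (6.49), replacing the expectation with a sample average. Assuming a logistic sigmoid model for the outputs, the score functions of (6.50) become $g_i(u) = 1 - 2\sigma(u)$; this model choice, the step size, and the iteration count are assumptions made for the example, not prescriptions of the text.

```python
import numpy as np

def infomax(x, mu=0.01, n_iter=500):
    """Bell-Sejnowski (Infomax) separation; x has shape (n, T).

    Returns the separating matrix W and the source estimates y = W x.
    """
    n, T = x.shape
    W = np.eye(n)
    for _ in range(n_iter):
        y = W @ x
        # Score function from (6.50) under a logistic sigmoid model:
        # g_i(u) = 1 - 2*sigmoid(u)
        g_y = 1.0 - 2.0 / (1.0 + np.exp(-y))
        # Sample-average version of the gradient (6.49)
        grad = (g_y @ x.T) / T + np.linalg.inv(W.T)
        # Ascend the gradient to maximize J_Infomax
        W += mu * grad
    return W, W @ x
```

Because this rule inverts $\mathbf{W}^T$ at every step, natural-gradient variants that avoid the matrix inversion are often preferred in practice.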