i.e., the relative gradient is given by
∂_R J(W) = [∂J(W)/∂W] W^T    (6.71)
For a given cost function J(y) = E{f(y)}, it is possible to build the
following update rule from the relative gradient:

W ← W − λ E{f(y) y^T} W,    (6.72)
where f(y) = [f(y_1) ... f(y_N)]^T. In addition to that, by including an
orthogonality constraint on W, the update rule becomes

W ← W − λ E{yy^T − I + f(y) y^T − y f(y)^T} W    (6.73)
which corresponds to the so-called Equivariant Adaptive Source Separation
(EASI) algorithm.
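The update in (6.73) can be sketched as a single batch iteration, with the expectation replaced by a sample mean. This is a minimal NumPy sketch, not the original EASI implementation; the nonlinearity f(y) = y^3 and the step size are assumed choices for illustration.

```python
import numpy as np

def easi_step(W, x, lam=0.01, f=lambda y: y**3):
    """One EASI update: W <- W - lam * E{yy^T - I + f(y)y^T - y f(y)^T} W.

    W : (N, N) current separating matrix
    x : (N, T) batch of mixture samples (the expectation in (6.73) is
        approximated by a sample mean over the T columns)
    f : component-wise nonlinearity (f(y) = y**3 is an assumed choice)
    """
    N, T = x.shape
    y = W @ x                          # current source estimates
    fy = f(y)
    # sample-mean estimate of the bracketed expectation in (6.73)
    G = (y @ y.T) / T - np.eye(N) + (fy @ y.T) / T - (y @ fy.T) / T
    return W - lam * G @ W
```

Note that when the outputs y are already independent with unit variance, the sample estimate of the bracketed term is close to zero, so the update leaves W essentially unchanged, which is the equilibrium condition of the algorithm.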
It is interesting to notice that the notion of natural gradient, developed
by Amari [8], is very similar to that of the relative gradient, leading to
expressions similar to those obtained by Cardoso and Laheld. In [8], Amari
defines the gradient taking into account the structure underlying the param-
eter space, i.e., the space of nonsingular matrices. In this space, the gradient
direction is given by
∂_Natural J(W) = [∂J(W)/∂W] W^T W    (6.74)
which is very similar to the relative gradient.
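For one cost function the direction in (6.74) admits a closed form that is easy to check numerically. The example below uses J(W) = −log|det W| (an assumed illustrative cost, not one from the text), whose Euclidean gradient is −(W^{-1})^T, so the natural gradient simplifies to −W:

```python
import numpy as np

def natural_gradient(euclid_grad, W):
    """Natural-gradient direction per (6.74): (dJ/dW) W^T W."""
    return euclid_grad @ W.T @ W

rng = np.random.default_rng(1)
W = rng.standard_normal((3, 3))
g = -np.linalg.inv(W).T             # Euclidean gradient of J(W) = -log|det W|
ng = natural_gradient(g, W)
# analytically: -W^{-T} W^T W = -W, so no matrix inversion is needed
print(np.allclose(ng, -W))          # True
```

This inversion-free form is one practical appeal of the natural/relative gradient: the update direction stays well behaved even where the Euclidean gradient requires W^{-1}.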
6.3.6 The FastICA Algorithm
Another contribution whose innovation is mainly related to the optimiza-
tion method is the FastICA algorithm [147, 149], a very popular tool for
ICA. The formal derivation of the algorithm assumes that the signals have
been prewhitened, and employs the following approximation of the negentropy
of a zero-mean, unit-variance signal:
J(y) = [E{f(y)} − E{f(ν)}]^2    (6.75)
where f(·) represents a nonquadratic function and ν represents a zero-mean,
unit-variance Gaussian variable.
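The approximation in (6.75) can be estimated directly from samples. This is a minimal sketch, assuming the common choice f(u) = log cosh(u) and estimating the Gaussian reference term E{f(ν)} by Monte Carlo; neither choice is prescribed by the text.

```python
import numpy as np

def negentropy_approx(y, f=lambda u: np.log(np.cosh(u)),
                      n_mc=200000, seed=0):
    """Negentropy approximation per (6.75): J(y) = [E{f(y)} - E{f(nu)}]^2.

    y is assumed zero-mean and unit-variance; nu is a zero-mean,
    unit-variance Gaussian reference, sampled here by Monte Carlo.
    f = log cosh is an assumed (and widely used) nonquadratic function.
    """
    rng = np.random.default_rng(seed)
    nu = rng.standard_normal(n_mc)        # Gaussian reference samples
    return (np.mean(f(y)) - np.mean(f(nu)))**2
```

As expected from the definition, a Gaussian input yields a value close to zero, while a super-Gaussian input (e.g., Laplacian samples scaled to unit variance) yields a clearly larger value, which is what FastICA exploits as a measure of non-Gaussianity.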