Digital Signal Processing Reference
In-Depth Information
Fig. 9.2 Schematic comparison of the successive points obtained with gradient and projected gradient algorithms. For k = 0, 1, 2, 3, the sequence u_k is generated by a gradient algorithm, whereas w_k is generated by a projected gradient update before renormalization.
convergence. This drift is illustrated by Fig. 9.2, and may become unacceptable due
to numerical overflow when a large number of gradient iterations is required.
This undesired phenomenon can be prevented by a normalizing step after the gradi-
ent update, e.g., by projecting the extracting vector on the unit sphere, thus yielding
the so-called projected gradient algorithm summarized below:
Projected gradient algorithm for kurtosis optimization
Set an initial value w(0) for the extracting vector.
For k = 1, 2, ..., k_max, do:
1. Compute the gradient direction d(k−1) = ∇J(w(k−1)) from Eq. (9.14).
2. Compute an appropriate step size μ and update w = w(k−1) + μ d(k−1).
3. Project the update as w(k) = w / ‖w‖, or any other suitable form of normalization.
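The iteration above can be sketched in a few lines of NumPy. Since Eq. (9.14) is not reproduced here, the sketch uses the normalized kurtosis J(w) = E{y⁴}/E{y²}² − 3 of the extractor output y = wᵀx as a stand-in contrast, with its analytic gradient; the fixed step size and function names are illustrative assumptions, not the book's exact formulation.

```python
import numpy as np

def kurtosis_contrast(w, X):
    # Normalized kurtosis of y = w^T x (columns of X are observations).
    y = w @ X
    m2 = np.mean(y**2)
    return np.mean(y**4) / m2**2 - 3.0

def contrast_gradient(w, X):
    # Gradient of the normalized-kurtosis contrast with respect to w
    # (a stand-in for Eq. (9.14), which is not reproduced here).
    y = w @ X
    m2 = np.mean(y**2)
    m4 = np.mean(y**4)
    g4 = 4 * (X * y**3).mean(axis=1)   # d m4 / d w
    g2 = 2 * (X * y).mean(axis=1)      # d m2 / d w
    return g4 / m2**2 - 2 * m4 * g2 / m2**3

def projected_gradient(X, w0, mu=0.1, k_max=500):
    # Steps 1-3 of the box: gradient update followed by projection
    # onto the unit sphere, which prevents the norm drift of Fig. 9.2.
    w = w0 / np.linalg.norm(w0)
    for _ in range(k_max):
        d = contrast_gradient(w, X)    # step 1: gradient direction
        w = w + mu * d                 # step 2: gradient update
        w = w / np.linalg.norm(w)      # step 3: renormalization
    return w
```

On a two-channel mixture of a Laplacian (super-Gaussian) source and a Gaussian one, ascending this contrast drives the extractor toward the Laplacian source, since the Gaussian contributes zero kurtosis.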
It is important to remark that, thanks to the contrast's scale invariance, the nor-
malization step does not affect the contrast function value attained at the gradient
update step.
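This scale invariance is easy to verify numerically, assuming a normalized-kurtosis contrast of the form J(w) = E{y⁴}/E{y²}² − 3 (the data and variable names below are illustrative): rescaling w scales both E{y⁴} and E{y²}² by the same factor, so the ratio, and hence the contrast value, is unchanged by the projection.

```python
import numpy as np

def kurtosis_contrast(w, X):
    # Normalized kurtosis of y = w^T x; invariant to any rescaling of w,
    # since numerator and denominator both scale as ||w||^4.
    y = w @ X
    m2 = np.mean(y**2)
    return np.mean(y**4) / m2**2 - 3.0

rng = np.random.default_rng(1)
X = rng.laplace(size=(3, 5000))
w = rng.normal(size=3)
J_before = kurtosis_contrast(w, X)                       # unnormalized w
J_after = kurtosis_contrast(w / np.linalg.norm(w), X)    # after projection
```

The two values agree up to floating-point error, confirming that the normalization step leaves the contrast attained by the gradient update untouched.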
9.4.3 Gradient Algorithm with Filter Parametrization
In the instantaneous case, the rank of the observation covariance matrix decreases
by one after each deflation step and, consequently, the dimension of the observation
space can be reduced without losing information. By performing dimensionality
reduction, the search for the next source can be carried out in a lower-dimensional