For fixed v, the term v^H R v is just an irrelevant constant factor, and Eq. (9.39) becomes essentially a Rayleigh quotient in the extracting vector w. The maximization of this quotient is a well-known problem in array signal processing and matrix algebra that can be solved, e.g., via the generalized EVD of the matrix pencil (C_v, R), and accepts an SVD-based solution [16]. Despite these interesting features, it has been observed that using this contrast function within a deflation procedure is not robust, since the rank of R decreases when performing deflation, as noted in Sect. 9.4.3. As a consequence of the unknown rank of R, the performance of the SVD-based optimization seriously degrades as more sources are recovered. The following section details an alternative method avoiding this drawback.
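For fixed v, the generalized-EVD route mentioned above can be sketched in a few lines. The snippet below maximizes a Rayleigh quotient w^T C_v w / w^T R w via `scipy.linalg.eigh` applied to the pencil (C_v, R); the matrices here are arbitrary toy examples, not the chapter's cumulant and covariance matrices, and R is taken to be positive definite, which is precisely the assumption that fails under deflation.

```python
import numpy as np
from scipy.linalg import eigh

def max_rayleigh_quotient(Cv, R):
    """Maximize w^T Cv w / w^T R w via the generalized EVD of the pencil (Cv, R).

    Cv is assumed symmetric and R symmetric positive definite (full rank);
    when R becomes rank-deficient, as happens during deflation, this
    formulation breaks down.
    """
    eigvals, eigvecs = eigh(Cv, R)       # generalized eigenvalues, ascending
    return eigvecs[:, -1], eigvals[-1]   # eigenvector of the largest eigenvalue

# Toy illustration with arbitrary matrices:
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Cv = A + A.T                             # symmetric "cumulant-like" matrix
B = rng.standard_normal((4, 4))
R = B @ B.T + 4 * np.eye(4)              # positive-definite "covariance"
w, lam = max_rayleigh_quotient(Cv, R)
```

The returned eigenvalue equals the attained quotient, and no other vector can exceed it.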
9.4.6.2 Monotonically Convergent Algorithms Based on Quadratic Contrasts
As mentioned above, the SVD-based optimization of the quadratic contrast (9.38)-(9.39) is not robust and not recommended when R is of unknown and non-maximal rank, which always occurs in a deflation scenario. As a first alternative, maximizing J_r(w, v) by a gradient algorithm has been proposed in [11], at the cost of an increased computational burden. In this section, we show that an intermediate approach is possible.
One can note that criteria (9.9) and (9.38) are linked by J_kappa(w) = J_r(w, w). Based on this fact and on the symmetry property J_r(w, v) = J_r(v, w), an alternative algorithm has recently been proposed in [13] for the optimization of J_kappa(w).
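Both properties are easy to verify numerically for one common form of the reference-based contrast. The definition used below is an assumption taken from the reference-based contrast literature; the chapter's exact J_r in (9.38) may differ in normalization.

```python
import numpy as np

def Jr(w, v, X):
    """Empirical reference-based contrast for real zero-mean data.

    Assumed form (the chapter's (9.38) may normalize differently):
        J_r(w, v) = cum(y, y, z, z) / (E{y^2} E{z^2}),
    with y = X w, z = X v and, for real zero-mean signals,
        cum(y, y, z, z) = E{y^2 z^2} - E{y^2} E{z^2} - 2 E{y z}^2.
    """
    y, z = X @ w, X @ v
    Ey2, Ez2, Eyz = np.mean(y * y), np.mean(z * z), np.mean(y * z)
    c = np.mean(y * y * z * z) - Ey2 * Ez2 - 2.0 * Eyz**2
    return c / (Ey2 * Ez2)

# Numerical check of the two properties used in the text:
rng = np.random.default_rng(1)
X = rng.laplace(size=(5000, 3)) @ rng.standard_normal((3, 3))
X -= X.mean(axis=0)
w, v = rng.standard_normal(3), rng.standard_normal(3)

sym_gap = abs(Jr(w, v, X) - Jr(v, w, X))          # symmetry: J_r(w,v) = J_r(v,w)
y = X @ w
kurt = np.mean(y**4) / np.mean(y**2) ** 2 - 3.0   # normalized kurtosis of y
link_gap = abs(Jr(w, w, X) - kurt)                # link: J_kappa(w) = J_r(w,w)
```

With this definition the symmetry is immediate from the formula, and setting v = w reduces the cumulant to E{y^4} - 3 E{y^2}^2, i.e., the kurtosis contrast.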
The idea is to perform the iterative maximization of J_r with respect to the extracting filter after initializing the latter with a given reference filter. The reference filter is then updated with the extracting filter obtained after maximization, and so forth. In the summary given below, the gradient operator with respect to the first argument of J_r is denoted by nabla_1 J_r.
Algorithm for kurtosis maximization based on reference signals

Initialize the reference filter v_0(n) and compute the corresponding reference signal z_0(n) = v_0(n) x(n).
For k = 0, 1, ..., (k_max - 1), initialize the extracting filter as w_0 = v_k and do:
- For l = 0, 1, 2, ..., (l_max - 1), do exact line search along the w-dimension:
  1. Compute the gradient direction d = nabla_1 J_r(w_l, v_k).
  2. Compute the optimal step size mu_opt = arg max_mu J_r(w_l + mu d, v_k).
  3. Update: w = w_l + mu_opt d.
  4. Normalize: w_{l+1} = w / (E{|w(n) x(n)|^2})^{1/2}.
- Update the reference filter: v_{k+1} = w_{l_max}.
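The loop above can be sketched as follows. Several choices here are assumptions, not fixed by the text: real zero-mean data with instantaneous (non-convolutive) filters, the empirical contrast J_r(w, v) = cum(y, y, z, z)/(E{y^2} E{z^2}) with y = Xw and z = Xv, a bounded 1-D numerical search in place of the exact closed-form line search, and a finite-difference approximation of nabla_1 J_r.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def extract_by_reference(X, v0, k_max=5, l_max=20):
    """Sketch of the reference-based kurtosis maximization summarized above.

    X: (N, d) array of real zero-mean observations (samples in rows).
    v0: initial reference filter. Returns the extracting filter w.
    """
    def Jr(w, v):
        y, z = X @ w, X @ v
        Ey2, Ez2, Eyz = np.mean(y * y), np.mean(z * z), np.mean(y * z)
        return (np.mean(y * y * z * z) - Ey2 * Ez2 - 2 * Eyz**2) / (Ey2 * Ez2)

    def grad1(w, v, eps=1e-5):
        # Finite-difference nabla_1 J_r; an analytic gradient is cheaper.
        g = np.zeros_like(w)
        for i in range(w.size):
            e = np.zeros_like(w)
            e[i] = eps
            g[i] = (Jr(w + e, v) - Jr(w - e, v)) / (2 * eps)
        return g

    v = v0 / np.sqrt(np.mean((X @ v0) ** 2))        # unit output power
    for _ in range(k_max):                          # outer loop over k
        w = v.copy()                                # step: w_0 = v_k
        for _ in range(l_max):                      # inner loop over l
            d = grad1(w, v)                         # step 1: gradient direction
            res = minimize_scalar(lambda mu: -Jr(w + mu * d, v),
                                  bounds=(0.0, 10.0), method="bounded")
            if Jr(w + res.x * d, v) > Jr(w, v):     # step 2-3: accept only
                w = w + res.x * d                   # improving steps
            w = w / np.sqrt(np.mean((X @ w) ** 2))  # step 4: normalize
        v = w.copy()                                # update: v_{k+1} = w_{l_max}
    return w
```

Since J_r is scale-invariant in each argument, the normalization of step 4 leaves the contrast value unchanged, and the improvement guard keeps J_r(., v_k) non-decreasing within each inner loop, mimicking the monotone behavior of the exact line search.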