where a subsequent normalization step is used to ensure $\|\mathbf{w}\| = 1$, as in the gradient updates.
1.6.3 Mutual Information Minimization: Connections to ML and MN
As discussed in Sections 1.6.1 and 1.6.2, we can solve the complex ICA problem by
maximizing the log likelihood function given by
$$
L(\mathbf{W}) = \sum_{t=1}^{T}\sum_{n=1}^{N} \log p_{S_n}\bigl(\mathbf{w}_n \mathbf{x}(t)\bigr) + T \log \lvert \det \mathbf{W} \rvert \qquad (1.65)
$$
The weight matrix W that maximizes the log likelihood can be computed using the relative gradient update equation given in (1.54).
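Since (1.54) is not reproduced in this excerpt, the sketch below assumes the commonly used complex relative-gradient form $\Delta\mathbf{W} \propto (\mathbf{I} - E\{\boldsymbol{\psi}(\mathbf{u})\mathbf{u}^H\})\mathbf{W}$; the score function passed in is a placeholder whose exact definition is given by (1.51) in the text.

```python
import numpy as np

def log_likelihood(W, x, log_psource):
    """Evaluate the log likelihood (1.65) for a given source density model log p_S."""
    u = W @ x                          # source estimates, one column per sample
    T = x.shape[1]
    return log_psource(u).sum() + T * np.log(np.abs(np.linalg.det(W)))

def relative_gradient_step(W, x, score, mu=0.01):
    """One relative-gradient ascent step on (1.65), assuming the standard form of (1.54).

    W     : (N, N) complex demixing matrix
    x     : (N, T) complex observations (T samples)
    score : elementwise score function psi(u) (placeholder for (1.51))
    """
    u = W @ x
    T = x.shape[1]
    # Sample-average relative-gradient direction: (I - E{psi(u) u^H}) W
    G = np.eye(W.shape[0]) - (score(u) @ u.conj().T) / T
    return W + mu * G @ W

# Example with a hypothetical circular super-Gaussian model p_S(u) ∝ exp(-|u|):
#   log p_S(u) = -|u| (constants dropped),  psi(u) = u / (2 |u|)
```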
When using negentropy maximization as the objective, all sources can be estimated
by maximizing the cost function
$$
J(\mathbf{W}) = \sum_{n=1}^{N} E\bigl\{\log p_{S_n}(\mathbf{w}_n \mathbf{x})\bigr\} \approx \frac{1}{T}\sum_{t=1}^{T}\sum_{n=1}^{N} \log p_{S_n}\bigl(\mathbf{w}_n \mathbf{x}(t)\bigr) \qquad (1.66)
$$
under the unitary constraint for W. The mean ergodic theorem is used to write the sample-average form in (1.66), and when compared to the ML formulation given in (1.65), it is clear that the two objective functions are equivalent if we constrain the weight matrix W to be unitary for complex ML as well. Since $\det(\mathbf{W}\mathbf{W}^H) = \lvert\det(\mathbf{W})\rvert^2$ [40], when W is unitary we have $\lvert\det \mathbf{W}\rvert = 1$, and the second term in (1.65) vanishes.
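For concreteness, a small numerical check of this point follows; the density model $\log p_S(u) = -|u|$ used here is only a hypothetical placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 4, 1000

# A random unitary W (QR factor of a complex Gaussian matrix) and placeholder data x
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
W, _ = np.linalg.qr(A)
x = rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))

# For unitary W, |det W| = 1, so the second term of (1.65) is zero
assert np.isclose(np.abs(np.linalg.det(W)), 1.0)

# With the hypothetical model log p_S(u) = -|u| (constants dropped),
# (1.65) then equals T times the sample-average form of (1.66)
u = W @ x
L_ml = -np.abs(u).sum() + T * np.log(np.abs(np.linalg.det(W)))   # (1.65)
J_mn = -np.abs(u).sum() / T                                      # (1.66), sample average
assert np.isclose(L_ml, T * J_mn)
```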
Similar to the real case given in [21], for the complex case, we can satisfy the unitary constraint for the weight matrix by projecting ΔW onto the space of skew-Hermitian matrices. The resulting update equation is then given by
$$
\Delta\mathbf{W} = \bigl(\mathbf{I} - \mathbf{u}\mathbf{u}^H - \boldsymbol{\psi}(\mathbf{u})\mathbf{u}^H + \mathbf{u}\boldsymbol{\psi}^H(\mathbf{u})\bigr)\mathbf{W} \qquad (1.67)
$$
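A minimal sketch of one update of (1.67), using the sample average over T observations; the score function is again a placeholder for (1.51), and the final symmetric re-orthonormalization step is an assumption added to correct the drift from unitarity caused by a finite step size, not part of the text's derivation.

```python
import numpy as np

def unitary_constrained_step(W, x, score, mu=0.01):
    """One update of W following (1.67), keeping W (approximately) unitary."""
    u = W @ x
    T = x.shape[1]
    I = np.eye(W.shape[0])
    psi_u = score(u)
    # Sample-average version of (1.67):
    #   dW = (I - u u^H - psi(u) u^H + u psi^H(u)) W
    dW = (I
          - (u @ u.conj().T) / T
          - (psi_u @ u.conj().T) / T
          + (u @ psi_u.conj().T) / T) @ W
    W = W + mu * dW
    # Assumed extra step: symmetric re-orthonormalization W <- (W W^H)^(-1/2) W
    d, V = np.linalg.eigh(W @ W.conj().T)
    return V @ np.diag(d ** -0.5) @ V.conj().T @ W
```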
On the other hand, for the MN criterion, the weight matrix can be estimated in symmetric mode, or the individual rows of the weight matrix W can be estimated sequentially in a deflationary mode as in [52]. The latter procedure provides a more flexible formulation for individual source density matching than ML, where each element of the score function ψ(u) given in (1.51) needs to be matched individually.
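A rough sketch of such a deflationary scheme follows: each row is updated with its own (per-source) single-unit rule, then orthogonalized against the previously extracted rows and renormalized, consistent with the normalization step mentioned at the start of this section. The `row_update` argument is a hypothetical placeholder for whichever single-unit MN update is used; it is not the specific procedure of [52].

```python
import numpy as np

def deflationary_ica(x, row_update, n_sources, n_iter=100):
    """Estimate the rows of W one at a time (deflationary mode).

    row_update(w, x) stands in for a single-unit update of one row w,
    e.g. a gradient step on E{log p_Sn(w x)}; each source may use its
    own density model, which is the flexibility noted in the text.
    """
    N = x.shape[0]
    W = np.zeros((n_sources, N), dtype=complex)
    rng = np.random.default_rng(0)
    for n in range(n_sources):
        w = rng.standard_normal(N) + 1j * rng.standard_normal(N)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            w = row_update(w, x)
            # Deflation: remove components along previously extracted rows ...
            w = w - W[:n].T @ (W[:n].conj() @ w)
            # ... and renormalize so that ||w|| = 1
            w /= np.linalg.norm(w)
        W[n] = w
    return W
```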
As in the real case, the two criteria are intimately linked to mutual information.
Written as the Kullback-Leibler distance between the joint and factored marginal