Recently, several authors have shown the feasibility of BSS when N is less than M for the IM case [4-5, 7]. This is achieved by transforming the sensor (mixed) signals to the time-frequency domain and exploiting the sparseness of the signals in the transformed domain to estimate the mixing matrix. Once the mixing matrix has been estimated, it is used to estimate the sources, which are assumed to be independent and to exhibit a Laplacian density in the sparse transformed domain. Note that all these methods employ probabilistic techniques in which the a posteriori log probability is maximized. Under the assumptions of independent sources with a Laplacian density in the sparse transformed domain and additive white Gaussian noise, this maximization reduces to the minimization of L2 and L1 norms. In [4], the mixing matrix is first estimated as described above, and the source signals are then separated using this mixing matrix by minimizing the L1 norm. In contrast, [7] uses what its authors call a "dual update" approach that iteratively refines the estimates of the sources and the mixing matrix jointly by minimizing the L1 and L2 norms. We have extended this approach to the convolutive mixture case in [3], which is reviewed in the following section.
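To make the L1-minimization step concrete, the following is a minimal sketch, not taken from [4] or [7]: with the mixing matrix A assumed known and real-valued coefficients assumed in the sparse transformed domain, the sources at each time-frequency point can be recovered by basis pursuit, i.e. minimizing ||s||_1 subject to A s = x, posed as a linear program. The function name l1_source_estimate and the toy numbers are purely illustrative.

import numpy as np
from scipy.optimize import linprog

def l1_source_estimate(A, x):
    """Solve min ||s||_1 subject to A s = x via the standard split
    s = u - v with u, v >= 0 (basis pursuit as a linear program)."""
    N, M = A.shape
    c = np.ones(2 * M)                    # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])             # equality constraint A (u - v) = x
    res = linprog(c, A_eq=A_eq, b_eq=x, bounds=[(0, None)] * (2 * M))
    u, v = res.x[:M], res.x[M:]
    return u - v

# Toy example: N = 2 mixtures of M = 3 sources at one time-frequency
# point; only one source is active there, so the sparsest feasible
# solution recovers it exactly.
A = np.array([[1.0, 0.5, 0.2],
              [0.3, 1.0, 0.8]])
s_true = np.array([0.0, 2.0, 0.0])
x = A @ s_true
print(l1_source_estimate(A, x))           # approximately [0, 2, 0]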
2.1 Probabilistic BSS for underdetermined IM
This section summarizes our previous algorithm described in [7] and generalizes it with some modifications: (a) to handle more than two mixtures, (b) to estimate the initial mixing matrix robustly, and (c) to speed up the iterative "dual update" algorithm.
2.1.1 Review of "dual update" algorithm
Consider the observed signal x given in Eq. (1). The most efficient techniques for source separation in the underdetermined IM case are based on a probabilistic approach. These approaches mainly correspond to minimizing the negative log of the a posteriori likelihood function with respect to s; note that maximizing the log a posteriori probability is equivalent to minimizing the negative log a posteriori probability.
This likelihood function can further be written as

  P(a, s | x) ∝ P(x | a, s) P(a) P(s)

by applying the Bayes theorem and assuming statistical independence between a and s. Here, P(a) and P(s) correspond to the prior probabilities of a and s, respectively. By applying the negative log operation to P(a, s | x) we get:

  -log P(a, s | x) = -log P(x | a, s) - log P(a) - log P(s) + const.
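As an illustration of how these terms yield a combined L2 + L1 objective, the following is a minimal sketch assuming additive white Gaussian noise of variance sigma^2, an i.i.d. Laplacian prior on s with weight lam, and a flat prior on a (so -log P(a) is constant and can be dropped). The names neg_log_posterior and dual_update, and the alternating proximal-gradient/least-squares scheme, are illustrative stand-ins for the "dual update" iteration, not the exact algorithm of [7]. Here x holds the sparse-domain mixture coefficients, one column per time-frequency point.

import numpy as np

def neg_log_posterior(A, s, x, sigma=0.1, lam=1.0):
    """-log P(a, s | x) up to an additive constant: an L2 data-fit term
    from the Gaussian noise model plus an L1 term from the Laplacian
    source prior (flat prior on the mixing matrix assumed)."""
    return np.sum((x - A @ s) ** 2) / (2 * sigma**2) + lam * np.abs(s).sum()

def dual_update(A, s, x, sigma=0.1, lam=1.0, n_iter=200):
    """Jointly refine s and A by alternately reducing neg_log_posterior."""
    for _ in range(n_iter):
        # s-step: one proximal-gradient (ISTA) iteration on the L2 + L1
        # objective: gradient step on the data term, then soft threshold.
        step = sigma**2 / (np.linalg.norm(A, 2) ** 2 + 1e-12)
        g = s - step * (A.T @ (A @ s - x)) / sigma**2
        s = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
        # A-step: least-squares refit of the mixing matrix given s, with
        # columns renormalized to fix the usual BSS scale ambiguity.
        A = x @ np.linalg.pinv(s)
        A /= np.linalg.norm(A, axis=0, keepdims=True) + 1e-12
    return A, s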