9.4.4.1 Choice of the New Columns of A
In the aforementioned algorithm, step 1 amounts to adding a column vector to the current mixing matrix $A$. The simplest choice would be to pick this vector at random. Wiser choices can also be made based on additional prior information:
Decorrelation: If the mixing matrix is assumed to be orthogonal, the new column vector can be chosen as being orthogonal to the subspace spanned by the columns of $A$, with $\operatorname{rank}(A) = n_s - 1$.
Known spectra: If a library of spectra is known a priori, the new column can be chosen from among the set of unused ones. The new spectrum can be chosen based on its correlation with the residual. Let $\mathcal{A} = \{a_i\}_{i=1,\ldots,\operatorname{Card}(\mathcal{A})}$ denote a library of spectra, and let $\mathcal{A}_{n_s}$ denote the set of spectra that have not been chosen yet; then the $n_s$-th new column of $A$ is chosen such that

$$a_i = \operatorname*{argmax}_{a_i \in \mathcal{A}_{n_s}} \frac{1}{\|a_i\|_2} \sum_{l=1}^{N} \left| a_i^{\mathrm{T}} (Y - AS)[\cdot, l] \right| . \tag{9.32}$$
Any other prior information can be taken into account to guide the choice of a new column vector of $A$; the two selection rules above are illustrated in the sketch below.
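To make these selection rules concrete, here is a minimal NumPy sketch. The function names are illustrative (not from the chapter), and the library-based rule follows the reconstruction of equation (9.32) given above; treat this as a sketch under those assumptions, not a definitive implementation.

```python
import numpy as np

def new_column_orthogonal(A, rng=None):
    """Decorrelation rule: draw a random vector and project it onto
    the orthogonal complement of the subspace spanned by A's columns."""
    rng = np.random.default_rng() if rng is None else rng
    Q, _ = np.linalg.qr(A)                 # orthonormal basis of range(A)
    v = rng.standard_normal(A.shape[0])
    v -= Q @ (Q.T @ v)                     # remove the component in range(A)
    return v / np.linalg.norm(v)

def new_column_from_library(A, S, Y, unused_spectra):
    """Known-spectra rule: among the spectra not chosen yet, pick the
    one most correlated with the residual Y - AS, as in (9.32)."""
    R = Y - A @ S                          # current residual
    scores = [np.sum(np.abs(a @ R)) / np.linalg.norm(a)
              for a in unused_spectra]
    return unused_spectra[int(np.argmax(scores))]
```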
9.4.4.2 The Noisy Case
In the noiseless case, step 2 of Algorithm 36 amounts to running the GMCA algorithm to estimate $A$ and $S$ for a fixed $n_s$ with a final threshold $\lambda_{\min} = 0$. In the noisy case, $\lambda_{\min}$ in $(\mathcal{P}_{n_s})$ can be closely related to the noise level. For instance, if the noise $E$ is additive Gaussian with variance $\sigma_E^2$, then $\lambda_{\min} = \tau \sigma_E$ with $\tau = 3$-$4$, as suggested throughout the chapter. If the GMCA algorithm recovers the correct sources and mixing matrix, this ensures that the residual mean squares are bounded by $\tau^2 \sigma_E^2$ with probability higher than $1 - \exp(-\tau^2/2)$.
Illustrative Example.
In this experiment, 1-D channels are generated following the instantaneous linear mixture model (9.2) with $N_s$ sources, where $N_s$ varies from 2 to 20. The number of channels is $N_c = 64$, each having $N = 256$ samples. The dictionary is chosen as the Dirac basis, and the entries of $S$ have been independently drawn from a Laplacian pdf with unit scale parameter (i.e., $p = 1$ and $\lambda = 1$ in equation (9.25)). The entries of the mixing matrix are independent and identically distributed $\sim \mathcal{N}(0, 1)$.
The observations are not contaminated by noise.
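This setup can be reproduced in a few lines of NumPy; the sketch below follows the stated assumptions (the seed and the particular value of $N_s$ are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)
N_s, N_c, N = 5, 64, 256   # sources (2..20 in the experiment), channels, samples

A = rng.standard_normal((N_c, N_s))                  # mixing matrix, i.i.d. N(0, 1)
S = rng.laplace(loc=0.0, scale=1.0, size=(N_s, N))   # Laplacian sources, unit scale
Y = A @ S                                            # noiseless observations, model (9.2)
```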
This experiment focuses on comparing classical principal component analysis (PCA), the popular subspace selection method, with the GMCA algorithm, assuming that $N_s$ is unknown. In the absence of noise, only the $N_s$ largest eigenvalues provided by PCA, which coincide with the Frobenius norms of the rank-1 matrices $(a_i s_i)_{i=1,\ldots,N_s}$, are nonzero. PCA therefore provides the true number of sources.
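In the noiseless setting this eigenvalue count is straightforward to compute; a minimal sketch is given below (the tolerance is a numerical assumption, since floating-point eigenvalues are only approximately zero).

```python
import numpy as np

def count_sources_pca(Y, tol=1e-10):
    """Estimate N_s as the number of (numerically) nonzero singular
    values of Y, i.e., nonzero eigenvalues of the data covariance."""
    sv = np.linalg.svd(Y, compute_uv=False)
    return int(np.sum(sv > tol * sv[0]))
```

For the noiseless data generated above, `count_sources_pca(Y)` returns the true $N_s$.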
The GMCA-based selection procedure in Algorithm 36 has been applied to the same data to estimate the number of sources $N_s$. Figure 9.1 depicts the mean number of sources estimated by GMCA. Each point has been averaged over 25 random realizations of $S$ and $A$. The estimation variance was zero, indicating that for each of the 25 trials, GMCA provided exactly the true number of sources.