N (an image or higher-dimensional data are treated by reordering samples in a one-dimensional (1-D) vector of length N); each measurement is the linear mixture of N_s vectors (s_1, ..., s_{N_s}) called sources, each one having the same length N. In the noisy case, this reads

$$y_i[l] = \sum_{j=1}^{N_s} A[i, j]\, s_j[l] + \varepsilon_i[l], \quad i \in \{1, \ldots, N_c\}, \quad l \in \{1, \ldots, N\}, \qquad (9.1)$$
where A is the N_c × N_s mixing matrix whose columns will be denoted a_i, and ε_i is the noise vector in channel i, supposed to be bounded. A defines the contribution of each source to each measurement. As the measurements are N_c different mixtures, source separation techniques aim at recovering the original sources (s_i)_{i = 1, ..., N_s} by taking advantage of some information contained in the way the signals are mixed in the observed channels. This mixing model is conveniently rewritten in matrix form:

$$Y = AS + E, \qquad (9.2)$$
where Y is the N_c × N measurement matrix whose rows are y_i, i = 1, ..., N_c (i.e., observed data), and S is the N_s × N source matrix with rows s_i, i = 1, ..., N_s. The N_c × N matrix E, with rows ε_i^T, is added to account for instrumental noise and/or model imperfections. In this chapter, we will discuss the overdetermined case, which corresponds to N_c ≥ N_s (i.e., we have more channels than sources). The converse underdetermined case (N_c < N_s) is an even more difficult problem; see Jourjine et al. (2000) or Georgiev et al. (2005) for further details.
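As a concrete illustration of the mixing model, the following NumPy sketch builds a synthetic overdetermined instance of Y = AS + E and checks that the matrix form (9.2) agrees entrywise with the sample-wise sum (9.1). The sizes, toy sources, and noise level are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: N_c = 4 channels, N_s = 2 sources, N = 1000 samples.
n_c, n_s, n = 4, 2, 1000

t = np.linspace(0, 1, n)
# Two toy sources, one per row of S: a sinusoid and a square wave.
S = np.vstack([np.sin(2 * np.pi * 5 * t),
               np.sign(np.sin(2 * np.pi * 3 * t))])

A = rng.standard_normal((n_c, n_s))        # N_c x N_s mixing matrix
E = 0.01 * rng.standard_normal((n_c, n))   # noise matrix, rows eps_i

Y = A @ S + E                              # matrix form, equation (9.2)

# Entrywise, Y[i, l] = sum_j A[i, j] * S[j, l] + E[i, l] -- equation (9.1).
i, l = 2, 500
assert np.isclose(Y[i, l], A[i] @ S[:, l] + E[i, l])
print(Y.shape)  # (4, 1000): N_c rows y_i, each of length N
```

Each row y_i of Y is one observed channel; since n_c > n_s, this is the overdetermined case discussed above.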
In the BSS problem, both the mixing matrix A and the sources S are unknown
and must be estimated jointly. In general, without further a priori knowledge, de-
composing a rectangular matrix Y into a linear combination of N s rank-one matrices
is clearly ill posed. The goal of BSS is to understand the different cases in which this
or that additional prior constraint allows us to reach the land of well-posed inverse
problems and to devise separation methods that can handle the resulting models.
Source separation is overwhelmingly a question of contrast and diversity to dis-
entangle the sources. Depending on the way the sources are distinguished, most BSS
techniques can be categorized into two main classes:
Statistical approaches (ICA): The well-known independent component analysis
(ICA) methods assume that the sources ( s i ) i = 1 ,..., N s (modeled as random pro-
cesses) are statistically independent and non-Gaussian. These methods (e.g.,
joint approximate diagonalization of eigen-matrices (JADE) (Cardoso 1999);
FastICA and its derivatives (Hyvärinen et al. 2001); and InfoMax (Koldovsky
et al. 2006)) already provided successful results in a wide range of applications.
Moreover, even if the independence assumption is strong, it is, in many cases,
physically plausible. Theoretically, Lee et al. (2000) focus on the equivalence of
most ICA techniques with mutual information minimization processes. Then, in
practice, ICA algorithms are about devising adequate contrast functions, which
are related to approximations of mutual information. In terms of discernibility,
statistical independence is a “source of diversity” between the sources.
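To make the ICA route tangible, here is a minimal sketch using scikit-learn's FastICA (mentioned above) to unmix a noiseless overdetermined mixture of two independent, non-Gaussian toy sources. The sources, mixing matrix, and sizes are illustrative assumptions; recovered sources are only defined up to scaling and permutation, so we check recovery by correlation.

```python
import numpy as np
from sklearn.decomposition import FastICA  # assumes scikit-learn is installed

n = 2000
t = np.linspace(0, 1, n)

# Two independent, non-Gaussian toy sources (rows of S).
S = np.vstack([np.sin(2 * np.pi * 7 * t),
               np.sign(np.sin(2 * np.pi * 3 * t))])

A = np.array([[1.0, 0.5],
              [0.3, 1.0],
              [0.7, 0.2]])          # N_c = 3 channels, N_s = 2 sources
Y = A @ S                           # noiseless mixture, Y = AS

# FastICA expects (samples x channels) arrays, hence the transposes.
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(Y.T).T    # estimated sources, up to scale/permutation

# Each estimated source should correlate strongly with one true source.
corr = np.abs(np.corrcoef(np.vstack([S, S_hat]))[:2, 2:])
print(corr)
```

The off-diagonal structure of `corr` reveals the arbitrary permutation: ICA cannot decide which recovered component corresponds to which original source, only that each matches one of them.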
Sparsity and morphological diversity: Zibulevsky and Pearlmutter (2001) intro-
duced a BSS method that focuses on sparsity to distinguish the sources. They
assumed that the sources are sparse in a particular basis
(e.g., the wavelet