Digital Signal Processing Reference
Definition 4.3 BSS: Let s : Ω → R^n be an independent random vector, and let μ : R^n → R^m be a measurable mapping. An ICA of x := μ(s) is called a BSS of (s, μ). Given a full-rank matrix A ∈ Mat(m × n; R), called a mixing matrix, a linear ICA of x := As is called a linear BSS of (s, A).
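The linear model x := As is easy to sketch numerically. The following is a minimal Python illustration (the matrix entries, sample size, and uniform source distribution are all assumptions made for the example, not taken from the text): two independent sources are mixed by a full-rank 2 × 2 matrix, and the empirical covariance shows that the mixtures, unlike the sources, are correlated.

```python
import random

random.seed(0)

# Hypothetical full-rank 2x2 mixing matrix A (square case, m = n = 2).
A = [[2.0, 1.0],
     [1.0, 3.0]]

# Independent sources: each component drawn independently, uniform on [-1, 1].
N = 2000
s1 = [random.uniform(-1, 1) for _ in range(N)]
s2 = [random.uniform(-1, 1) for _ in range(N)]

# Mixtures x = A s, computed componentwise per sample.
x1 = [A[0][0]*a + A[0][1]*b for a, b in zip(s1, s2)]
x2 = [A[1][0]*a + A[1][1]*b for a, b in zip(s1, s2)]

def cov(u, v):
    """Sample covariance of two equal-length sequences."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

# The sources are (empirically) uncorrelated; the mixtures are not,
# which is why an unmixing step is needed to recover s from x.
print(cov(s1, s2), cov(x1, x2))
```

The mixing destroys the independence of the components, and BSS is the task of undoing it from x alone.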
Again, we speak of square BSS if m = n. In the linear case this means that the mixing matrix A is invertible: A ∈ Gl(n).
If m > n, the model above is called overdetermined or undercomplete. In the case m < n (i.e. in the case of fewer mixtures than sources) we speak of underdetermined or overcomplete BSS.
Given an independent random vector s : Ω → R^n and an invertible matrix A ∈ Gl(n), denote by BSS(s, A) the set of all invertible matrices B ∈ Gl(n) such that BAs is independent (i.e. the set of all square linear BSSs of As).
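As a concrete illustration of this set (a sketch with an assumed 2 × 2 matrix, so n = 2): B := A^{-1} always lies in BSS(s, A), since B is invertible and BAs = s is independent by assumption. A quick numerical check that BA is the identity:

```python
# Membership sketch for BSS(s, A) with n = 2: B := A^{-1} is invertible
# and B A s = s, which is independent by assumption, so B is in BSS(s, A).
# The matrix entries are illustrative, not from the text.
A = [[2.0, 1.0],
     [1.0, 3.0]]

det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
B = [[ A[1][1]/det, -A[0][1]/det],
     [-A[1][0]/det,  A[0][0]/det]]          # B = A^{-1}

# B A equals the 2x2 identity, so (B A) s = s.
BA = [[sum(B[i][k]*A[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(BA)
```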
Properties

In the following we will mostly deal only with the linear case. So the goal of BSS, one of the main applications of ICA, is to find the unknown mixing matrix A, given only the observations/mixtures x. Using theorem 4.2, we see that in the linear case this is indeed possible, except for the usual indeterminacies of scaling and permutation.
Theorem 4.2 Indeterminacies of linear BSS: Let s : Ω → R^n be an independent random vector with existing covariance having at most one Gaussian component, and let A ∈ Gl(n). If W is a BSS of (s, A), then W^{-1} ∼ A.
Proof: This follows directly from theorem 4.2, because both A^{-1} and W are ICAs of x := As.
So in this case BSS(s, A) = Π(n) A^{-1}, where Π(n) denotes the group of products of n × n scaling and permutation matrices.
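The identity BSS(s, A) = Π(n) A^{-1} can be checked numerically for a single element (a sketch with assumed 2 × 2 matrices): for any scaling-and-permutation matrix P ∈ Π(2), the matrix B = P A^{-1} satisfies BA = P, so BAs is merely a scaled permutation of s and hence independent again.

```python
def inv2(M):
    """Inverse of a 2x2 matrix (assumes nonzero determinant)."""
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[ M[1][1]/det, -M[0][1]/det],
            [-M[1][0]/det,  M[0][0]/det]]

def matmul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Hypothetical mixing matrix A in Gl(2) and an element P of Pi(2):
# P swaps the two components and rescales them (scaling times permutation).
A = [[2.0, 1.0],
     [1.0, 3.0]]
P = [[ 0.0, 5.0],
     [-1.0, 0.0]]

B = matmul(P, inv2(A))   # an element of Pi(2) * A^{-1}
BA = matmul(B, A)        # equals P, so B*A*s permutes and rescales s
print(BA)
```

This is exactly the scaling and permutation indeterminacy of theorem 4.2: every unmixing matrix recovers the sources only up to an element of Π(n).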