Let x be a random vector with existing covariance, and let W, V ∈ Gl(m) be two linear ICAs of x such that Wx has at most one Gaussian component. Then their inverses are equivalent, i.e. there exists a permutation matrix P and a scaling matrix L with

PLW = V.
Proof. This follows directly from theorem 3.9: Wx is independent, and by assumption (VW^{-1})(Wx) = Vx is independent as well, so VW^{-1} is the product of a scaling and a permutation matrix, and therefore W^{-1} equals V^{-1} except for right-multiplication by a scaling and permutation matrix.
Note that this theorem also obviously holds for the case m > n, which can easily be shown using projections.
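To make the equivalence concrete, the following Python/NumPy sketch (an illustration added here, not from the text; the function name and tolerance are arbitrary) tests whether two unmixing matrices W and V are equal up to scaling and permutation by checking that VW^{-1} has exactly one nonzero entry in each row and column:

import numpy as np

def scaled_permutation_equivalent(W, V, tol=1e-8):
    # The theorem says V = PLW for a permutation P and a scaling L,
    # i.e. C = V W^{-1} is a scaled permutation matrix: exactly one
    # nonzero entry per row and per column.
    C = V @ np.linalg.inv(W)
    mask = np.abs(C) > tol
    return bool((mask.sum(axis=0) == 1).all() and (mask.sum(axis=1) == 1).all())

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))
P = np.eye(3)[[2, 0, 1]]          # a permutation matrix
L = np.diag([2.0, -0.5, 3.0])     # an invertible scaling matrix
print(scaled_permutation_equivalent(W, P @ L @ W))               # True
print(scaled_permutation_equivalent(W, rng.normal(size=(3, 3)))) # False (almost surely)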
In order to solve linear ICA, we could again use the MMI algorithm from above,

W_0 = argmin_W I(Wx),

because elements in Gl(n) ⊂ R^{n²} are easily parameterizable. Still, the mutual information has to be approximated.
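As a rough illustration of this approach (a sketch, not the book's algorithm: it uses the identity I(Wx) = Σ_i H((Wx)_i) − H(x) − log |det W|, drops H(x), which is constant in W, and approximates the marginal entropies by histograms; NumPy/SciPy, the Laplacian sources, the bin count, and the Nelder-Mead optimizer are all choices made here):

import numpy as np
from scipy.optimize import minimize

def marginal_entropy(samples, bins=32):
    # crude histogram estimate of the differential entropy of a 1-d sample
    density, edges = np.histogram(samples, bins=bins, density=True)
    mass = density * np.diff(edges)        # probability mass per bin
    nz = mass > 0
    return -np.sum(mass[nz] * np.log(density[nz]))

def mi_contrast(w_flat, x):
    # I(Wx) up to the additive constant H(x): sum_i H((Wx)_i) - log|det W|
    n = x.shape[0]
    W = w_flat.reshape(n, n)
    sign, logdet = np.linalg.slogdet(W)
    if sign == 0:                          # W is singular, hence not in Gl(n)
        return np.inf
    return sum(marginal_entropy(yi) for yi in W @ x) - logdet

rng = np.random.default_rng(0)
n, T = 2, 5000
s = rng.laplace(size=(n, T))               # independent non-Gaussian sources
A = rng.normal(size=(n, n))                # unknown mixing matrix
x = A @ s

res = minimize(mi_contrast, np.eye(n).ravel(), args=(x,), method="Nelder-Mead")
W0 = res.x.reshape(n, n)
print(W0 @ A)   # close to a scaled permutation matrix if separation succeeded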
4.3 Blind Source Separation
In blind source separation, a random vector x : Ω → R^m, called a mixed vector, is given; it comes from an independent random vector s : Ω → R^n, which will be called a source vector, by mixing with a mixing function μ : R^n → R^m (i.e. x = μ(s)). Only the mixed vector is known, and the task is to recover μ and then s. If we find an ICA of x, some kind of inversion thereof could possibly give μ.
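As a toy instance of this setup (the values and the mixing function are chosen arbitrarily for illustration):

import numpy as np

rng = np.random.default_rng(1)
T = 1000
s = rng.uniform(-1, 1, size=(2, T))   # independent source vector, n = 2

# a hypothetical mixing function mu: R^2 -> R^3 (so m = 3), here a
# linear map followed by a componentwise nonlinearity
A = rng.normal(size=(3, 2))
x = np.tanh(A @ s)                    # the observed mixed vector

# only x is handed to the separation algorithm; mu, A and s stay hidden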
In the square case (m = n), μ is usually assumed to be invertible, so reconstruction of μ directly gives s via s = μ^{-1}(x). This means that if we assume that the inverse of the mixing function already lies in the transformation space, then we know that the global minimum of the contrast function (usually the mutual information) has value 0, so a global minimizer will indeed give us an independent random vector. Of course we cannot hope that μ^{-1} will be found, because uniqueness cannot be achieved in this general setting, in contrast to the linear case, as shown in section 4.2. This will usually impose restrictions on the used model.
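The ideal square-case situation can be checked numerically; in this sketch (with an arbitrarily chosen invertible linear mixing), applying μ^{-1} recovers the sources exactly, so the contrast attains its global minimum 0 there:

import numpy as np

rng = np.random.default_rng(2)
s = rng.laplace(size=(2, 5000))       # independent sources, m = n = 2
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])            # an invertible linear mixing mu
x = A @ s                             # mixed vector x = mu(s)

s_rec = np.linalg.inv(A) @ x          # s = mu^{-1}(x)
print(np.allclose(s_rec, s))          # True: exact reconstruction
# at W = A^{-1} the contrast I(Wx) equals I(s) = 0, its global minimum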