particular underlying distributions of the mixed signals (any information about the sources is completely unknown), see for instance [45]; the second consists of including the maximum number of priors available in the cost function in order to guide the algorithm toward particular sources (blind source extraction, semi-blind source separation, etc.), see for instance [46, 47]. Some new methods that use non-parametric (NP) density estimation have recently been developed along the first direction of ICA research.
The new non-parametric ICA methods use techniques such as: minimization of a kernel canonical correlation or a kernel generalized variance among recovered sources (the so-called Kernel-ICA) [48]; maximum likelihood estimation (MLE) by using spline-based density approximations [49]; MLE by using Gaussian kernel density estimates (the so-called Npica) [45]; and minimization of the entropy of the marginals by estimating their order statistics (the so-called Radical) [50].
These methods have shown good performance in simulations, but there are no reports of their performance in real applications. Theoretical analyses (convergence, consistency, and other issues) of non-parametric density estimation in the framework of ICA can be found in [26, 29, 51]. We include a review of the Npica, Radical, and Kernel-ICA algorithms in the following sections.
2.3.1 Npica
The Npica algorithm [45] is a maximum log-likelihood ICA method that solves Eq. (2.9). It uses a non-parametric estimate of the probability density function $p_i$, which is obtained directly from the data using a kernel density estimation technique [52].
Given a batch of sample data of size N, the marginal distribution of an arbitrary reconstructed signal is approximated as follows:

$$
\hat{p}_i(s) = \frac{1}{Nh} \sum_{l=1}^{N} \kappa\!\left(\frac{s - s_{il}}{h}\right), \qquad i = 1, \ldots, M, \qquad (2.24)
$$

where $h$ is the kernel bandwidth and $\kappa$ is the Gaussian kernel $\kappa(u) = \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}$.
The kernel centroids $s_{il}$ are equal to $s_{il} = w_i\, x(l) = \sum_{j=1}^{M} w_{ij}\, x_{jl}$, where $x(l)$ is the $l$th column of the mixture matrix $X$.
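The kernel density estimate of a marginal can be sketched in a few lines of NumPy. This is an illustrative sketch of the estimator in Eq. (2.24), not the Npica reference implementation; the function and variable names are assumptions:

```python
import numpy as np

def kde_marginal(s, centroids, h):
    """Gaussian kernel density estimate of a marginal, as in Eq. (2.24):
    p_i(s) = (1/(N*h)) * sum_l kappa((s - s_il)/h), evaluated at the
    points in `s`, with kernel centroids s_il and bandwidth h.
    (Illustrative sketch; names are not the book's notation.)"""
    s = np.atleast_1d(np.asarray(s, dtype=float))
    centroids = np.asarray(centroids, dtype=float)
    # Pairwise scaled distances (s - s_il) / h via broadcasting.
    u = (s[:, None] - centroids[None, :]) / h
    # Gaussian kernel kappa(u) = exp(-u^2/2) / sqrt(2*pi).
    kappa = np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)
    # Average of the N scaled kernels.
    return kappa.sum(axis=1) / (centroids.size * h)
```

Evaluated on a fine grid, the estimate integrates to approximately one, as a density should.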
The expectation of the maximum log-likelihood solution is approximated by the following cost function:

$$
L(W) = L_0(W) - \log(\det W), \qquad (2.25)
$$
where $L_0(W)$ is obtained by replacing the marginal pdfs $p_i$ with their kernel density estimates.
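Combining the kernel estimate (2.24) with the cost (2.25) can be sketched as follows. This is a minimal NumPy illustration under my own assumptions (leave-in evaluation of the density at the sample points, empirical averaging for the expectation, and all variable names), not the book's implementation:

```python
import numpy as np

def npica_cost(W, X, h):
    """Sketch of the cost L(W) = L0(W) - log|det(W)| from Eq. (2.25),
    with each marginal pdf p_i replaced by its Gaussian kernel density
    estimate from Eq. (2.24). X holds one mixture sample per column.
    (Illustrative; not the Npica reference implementation.)"""
    S = W @ X                      # reconstructed signals; row i holds s_il
    M, N = S.shape
    L0 = 0.0
    for i in range(M):
        # Pairwise scaled distances (s_il - s_il')/h via broadcasting.
        u = (S[i][:, None] - S[i][None, :]) / h
        kappa = np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)  # Gaussian kernel
        p_hat = kappa.sum(axis=1) / (N * h)               # KDE at each sample
        L0 -= np.log(p_hat).mean()                        # -E[log p_i], empirical
    _, logdet = np.linalg.slogdet(W)                      # log|det(W)|, stable
    return L0 - logdet
```

For a mixture of independent sources, the cost evaluated at the true unmixing matrix is lower than at the identity, which is what an Npica-style optimizer exploits when minimizing over $W$.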