[Figure: five panels titled #1 Uniform, #2 Laplacian, #3 Normal, #4 Rayleigh, #5 K-noise; y-axis from 0 to 45, x-axis N from 0 to 5000]
Fig. 3.3 Second simulation experiment: accuracy in BSS for the sources of Table 3.4 (Sources #4 and #5) and Gaussian sources (Source #3). Among the standard ICA algorithms, JADE and FastICA are more stable across all the distributions, whereas InfoMax and Extended InfoMax perform well only for larger observation vector sizes. For Laplacian sources (Source #2), all the algorithms show a similar performance.
3.4.2 Classification of ICA Mixtures
The proposed procedure Mixca was tested with several ICA mixture datasets, varying the following parameters: (i) supervision ratio = 0, 0.1, 0.3, 0.5, 0.7, 1 (number of training observation vectors/N); (ii) ICA algorithm for updating the model parameters = JADE, non-parametric Mixca; (iii) number of ICA mixtures = 2, 3, 4 with N = 500; (iv) number of components = 2, 3, 4. The classes were generated randomly by mixtures of uniform and Laplacian distributed sources (the latter with a sharp peak at the bias and heavy tails), and 400 observation vectors were used for pdf estimation. The parameters were randomly initialized, and the algorithm normally converged after 100-150 iterations, depending on the initial conditions. The learning mixture algorithm was trained using 30% of the data, obtaining the parameters W_k and b_k.
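As an illustration of how the learned parameters W_k and b_k can then be used for classification, the following sketch evaluates the class posterior with a Gaussian-kernel (non-parametric) estimate of each source pdf. The variable names, kernel bandwidth, and toy data are assumptions for the example, not part of the original experiment; equal class priors are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

def kde_pdf(value, samples, h=0.3):
    """Non-parametric (Gaussian-kernel) estimate of a 1-D source pdf."""
    z = (value - samples) / h
    return np.exp(-0.5 * z ** 2).mean() / (h * np.sqrt(2.0 * np.pi))

def log_likelihood(x, W, b, source_samples):
    """log p(x | class) for one ICA class: s = W (x - b), sources independent."""
    s = W @ (x - b)
    log_det = np.log(abs(np.linalg.det(W)))          # Jacobian of the transform
    log_p = sum(np.log(kde_pdf(s_i, samp) + 1e-300)  # floor avoids log(0)
                for s_i, samp in zip(s, source_samples))
    return log_det + log_p

def classify(x, params, priors):
    """Pick the class with the highest posterior p(C_k | x)."""
    scores = [np.log(pi) + log_likelihood(x, W, b, samp)
              for (W, b, samp), pi in zip(params, priors)]
    return int(np.argmax(scores))

# Toy demo (hypothetical data): two 2-D classes with different mixing and bias.
A0, b0 = np.array([[1.0, 0.4], [0.2, 1.0]]), np.array([0.0, 0.0])
A1, b1 = np.array([[1.0, -0.6], [0.5, 1.0]]), np.array([4.0, 4.0])
S0 = rng.uniform(-1, 1, size=(2, 400))    # uniform sources, class 0
S1 = rng.laplace(0, 0.5, size=(2, 400))   # Laplacian sources, class 1
params = [(np.linalg.inv(A0), b0, S0), (np.linalg.inv(A1), b1, S1)]

x_test = A1 @ np.array([0.2, -0.1]) + b1  # a point generated by class 1
print("predicted class:", classify(x_test, params, priors=[0.5, 0.5]))
```

The source samples from training double as the support of the kernel density estimate, so no parametric family is assumed for the source pdfs, which mirrors the non-parametric variant described in the text.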
Classification was performed by estimating the posterior probabilities p(C_k|x) for each testing observation vector using Eq. (3.10) and applying the learned parameters and the non-parametric source pdf estimator of Eq. (3.11). The class of