of m ≥ 1 components φ(x | μ_m, Σ_m). Here φ is a general multivariate Gaussian density function, expressed as follows [32]:

φ(x | μ_m, Σ_m) = (1 / ((2π)^(d/2) |Σ_m|^(1/2))) exp( −(1/2) (x − μ_m)^T Σ_m^(−1) (x − μ_m) ),   (6.1)
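As an illustration, the density in Eq. (6.1) can be evaluated directly; the sketch below fixes d = 2 so that the determinant and inverse of Σ can be written out by hand. The function name and interface are illustrative choices, not part of the original text:

```python
import math

def gaussian_density_2d(x, mu, sigma):
    """Evaluate Eq. (6.1) for d = 2 with an explicit 2x2 inverse.

    x, mu: pairs (x1, x2); sigma: 2x2 covariance matrix as nested tuples.
    """
    (a, b), (c, d) = sigma
    det = a * d - b * c                         # |Sigma|
    inv = ((d / det, -b / det), (-c / det, a / det))  # Sigma^{-1}
    dx = (x[0] - mu[0], x[1] - mu[1])           # x - mu
    # quadratic form (x - mu)^T Sigma^{-1} (x - mu)
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    # normalising constant (2*pi)^(d/2) |Sigma|^(1/2) with d = 2
    norm = 2.0 * math.pi * math.sqrt(det)
    return math.exp(-0.5 * q) / norm
```

At x = μ with the identity covariance, the quadratic form vanishes and the density reduces to 1/(2π), which is a quick sanity check on the implementation.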
where x is the d-dimensional data vector, and μ_m and Σ_m are the mean vector and the covariance matrix of the mth component, respectively. A variety of approaches to the problem of mixture decomposition have been proposed, many of which focus on maximum likelihood methods such as the EM algorithm [33].
The EM algorithm is a method for finding maximum likelihood estimates of parameters in a statistical model. EM alternates between an expectation step (E-step), which computes the expectation of the log-likelihood using the current parameter estimates, and a maximization step (M-step), which computes the parameters that maximize the expected log-likelihood obtained in the E-step. These estimated parameters are then used to determine the distribution of the latent variables in the next E-step [31].
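The E-step/M-step alternation described above can be sketched for the simplest case, a two-component one-dimensional Gaussian mixture. The function name, initialisation, and variance floor below are illustrative choices, not the authors' implementation:

```python
import math
import random

def norm_pdf(x, mu, var):
    """One-dimensional Gaussian density (Eq. (6.1) with d = 1)."""
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def em_gmm_1d(data, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture (illustrative sketch)."""
    # crude initialisation from the data range (an assumption of this sketch)
    pi1, mu1, mu2, var1, var2 = 0.5, min(data), max(data), 1.0, 1.0
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point
        r = []
        for x in data:
            p1 = pi1 * norm_pdf(x, mu1, var1)
            p2 = (1.0 - pi1) * norm_pdf(x, mu2, var2)
            r.append(p1 / (p1 + p2))
        # M-step: re-estimate parameters to maximise the expected log-likelihood
        n1 = sum(r)
        n2 = len(data) - n1
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1.0 - ri) * x for ri, x in zip(r, data)) / n2
        var1 = sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1 + 1e-6
        var2 = sum((1.0 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2 + 1e-6
        pi1 = n1 / len(data)
    return pi1, mu1, mu2
```

On well-separated data the estimated means converge to the two cluster centres; note that EM only guarantees a local maximum of the likelihood, so the result depends on the initialisation.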
6.2.2 Neural Network
A neural network (NN) is a mathematical model or computational model that is
inspired by the functional aspects of biological neural networks [ 37 ]. A simple NN
consists of an input layer, a hidden layer, and an output layer, interconnected by
modifiable weights, represented by links between layers. Our interest is to extend the use of such networks to pattern recognition, where the network input vector (with elements x_i) consists of features extracted from the breathing dataset, and the intermediate results generated by the network outputs are used for classification with discriminant criteria based on the degree of clustering. Each input vector is presented to the neurons of the input layer, and the output of each input unit equals the corresponding element of the vector. Each hidden neuron j computes the weighted sum of its inputs to produce its net activation (simply denoted as net_j), and emits a nonlinear function of that activation, Φ(·), i.e., Φ(net_j) = Φ( Σ_{i=1..N} x_i w_ji + w_j0 ), as in Eq. (6.2). The process of the output neuron k is the same as that of the hidden neuron: each output neuron k calculates the weighted sum of the hidden neuron outputs Φ(net_j) as follows [38]:
net_k = Σ_{j=1..H} w_kj Φ( Σ_{i=1..N} x_i w_ji + w_j0 ) + w_k0,   (6.2)
where N and H denote the numbers of neurons in the input layer and the hidden layer, respectively. The subscripts i, j, and k indicate elements of the input, hidden, and output layers, respectively. Here, the subscript 0 represents the bias weight with the unit input