19.3 Synchronization Measures
19.3.1 Mutual Information
Mutual information and nonlinear interdependence measures were applied to the EEG recordings to identify the effect of treatment on the coupling strength between different brain cortical regions [26, 27, 7, 21, 25].
In this section, we first describe the approach for estimating mutual information [15]. Let us denote the time series of the two observed variables as $X = \{x_i\}_{i=1}^{N}$ and $Y = \{y_j\}_{j=1}^{N}$, where N is the fixed length of the discrete time series and the time between consecutive observations (i.e., the sampling period) is fixed. Then the mutual information is given by

I(X;Y) = \sum_{i,j} P_{x,y}(x_i, y_j) \log \frac{P_{x,y}(x_i, y_j)}{P_x(x_i)\, P_y(y_j)} .    (19.1)
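As a rough illustration of Eq. (19.1), the following sketch estimates the mutual information of two sampled signals by binning them into a joint histogram and evaluating the sum directly. This is not the estimator used in the study; the function name, the bins parameter, and the histogram plug-in approach are our own illustrative choices. The natural logarithm is used, so the result is in nats.

import numpy as np

def mutual_information_hist(x, y, bins=16):
    # Plug-in estimate of Eq. (19.1) from a joint histogram.
    # x, y: 1-D arrays of equal length N (e.g., two EEG channels).
    # Returns I(X;Y) in nats (natural logarithm).
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal P_x(x_i), column vector
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal P_y(y_j), row vector
    nz = p_xy > 0                           # sum only over nonzero joint cells
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])))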
One can obtain the mutual information between X and Y using the following
equation [5]:
I(X;Y) = H(X) + H(Y) - H(X,Y) ,    (19.2)
where H(X), H(Y) are the entropies of X, Y and H(X,Y) is the joint entropy of X and Y. The entropy of X is defined by

H(X) = -\sum_{i} p(x_i) \log p(x_i) .    (19.3)
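For intuition, Eq. (19.2) can be checked numerically against the same histogram: computing H(X), H(Y), and H(X,Y) with Eq. (19.3) and combining them reproduces the direct sum of Eq. (19.1). The sketch below is our own illustrative code, not the study's implementation.

import numpy as np

def entropy(p):
    # Shannon entropy, Eq. (19.3), of a probability array, in nats.
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def mutual_information_entropies(x, y, bins=16):
    # I(X;Y) = H(X) + H(Y) - H(X,Y), Eq. (19.2), from one joint histogram.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint.ravel() / joint.sum()
    p_x = joint.sum(axis=1) / joint.sum()
    p_y = joint.sum(axis=0) / joint.sum()
    return entropy(p_x) + entropy(p_y) - entropy(p_xy)

Up to floating-point error, this agrees with the direct evaluation of Eq. (19.1) above.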
The unit of the mutual information depends on the choice of the base of the logarithm. The natural logarithm (base e) is used in this study; therefore, the unit of the mutual information is the nat. For the time series X and Y we define $d^{(x)}_{ij} = \|x_i - x_j\|$ and $d^{(y)}_{ij} = \|y_i - y_j\|$ as the distances between $x_i$ ($y_i$) and every other point in the metric spaces X and Y. One can rank these distances and find the k-nearest neighbor (knn) for every $x_i$ and $y_i$. In the space spanned by Z = (X, Y), a similar distance-ranking method can be applied: one can compute the distances $d^{(z)}_{ij}$ for every $z_i = (x_i, y_i)$ and determine the knn according to some distance measure. The maximum norm is used in this study:
d^{(z)}_{ij} = \|z_i - z_j\| = \max\{\|x_i - x_j\|, \|y_i - y_j\|\} , \qquad d^{(x)}_{ij} = |x_i - x_j| .    (19.4)
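The distance-ranking step of Eq. (19.4) can be sketched as follows; this brute-force O(N^2) version is only illustrative (the function name and the parameter k are ours), and the probability argument built on these distances continues in the text.

import numpy as np

def kth_neighbor_maxnorm(x, y, k=3):
    # For each z_i = (x_i, y_i), return the max-norm distance (Eq. 19.4)
    # to its k-th nearest neighbor in the joint space Z = (X, Y);
    # this is the quantity denoted eps(i)/2 in the text.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dx = np.abs(x[:, None] - x[None, :])   # d(x)_ij = |x_i - x_j|
    dy = np.abs(y[:, None] - y[None, :])   # d(y)_ij = |y_i - y_j|
    dz = np.maximum(dx, dy)                # max-norm: d(z)_ij
    np.fill_diagonal(dz, np.inf)           # exclude each point from its own neighbors
    return np.sort(dz, axis=1)[:, k - 1]   # k-th smallest distance per point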
Next, let $\varepsilon(i)/2$ be the distance between $z_i$ and its k-th neighbor. In order to estimate the joint probability density function (p.d.f.), we consider the probability $P_k(\varepsilon)$, which is the probability that for each $z_i$ the k-th nearest neighbor is at a distance $\varepsilon(i)/2 \pm d\varepsilon/2$ from $z_i$. This $P_k(\varepsilon)$ thus represents the probability that k - 1 points have distance less than $\varepsilon(i)/2$ and N - k - 1 points have distance greater than $\varepsilon(i)/2$. $P_k(\varepsilon)$ is obtained using the multinomial distribution: