3.4 Non-linear estimators of dependencies between signals
3.4.1 Non-linear correlation
The general idea of the non-linear correlation h_{y|x} [Lopes da Silva et al., 1989] is that, if the value of y is considered to be a function of x, the value of y can be predicted according to a non-linear regression curve. The formula describing the non-linear correlation coefficient is given by the equation:
\[
h_{y|x}^{2} = \frac{\sum_{k=1}^{N} y(k)^{2} - \sum_{k=1}^{N} \left( y(k) - f(x(k)) \right)^{2}}{\sum_{k=1}^{N} y(k)^{2}} \qquad (3.39)
\]
where y(k) are the samples of the N-point signal y, and f is the piecewise linear approximation of the non-linear regression curve. In practice, to determine f, a scatter plot of y versus x is studied. Namely, the values of signal x are divided into bins; for each bin, the midpoint value of x (r_i) and the average value of y (q_i) are calculated. The regression curve is approximated by connecting the resulting points (r_i, q_i) by straight lines. By means of the h_{y|x} estimator, the directedness from x to y can be determined. The estimator h_{x|y}, defined in an analogous way, shows the influence of y on x.
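To make the binning procedure concrete, here is a minimal NumPy sketch of Eq. (3.39); the function name h2, the uniform bin spacing, and the bin count are illustrative choices not fixed by the text.

```python
import numpy as np

def h2(y, x, n_bins=10):
    """Sketch of the non-linear correlation coefficient h^2_{y|x}, Eq. (3.39)."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    r, q = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (x >= lo) & (x <= hi)
        if in_bin.any():                    # skip empty bins
            r.append(0.5 * (lo + hi))       # bin midpoint r_i
            q.append(y[in_bin].mean())      # average of y within the bin, q_i
    # connecting the points (r_i, q_i) by straight lines gives the
    # piecewise linear regression curve f, evaluated here at every x(k)
    f = np.interp(x, r, q)
    return (np.sum(y**2) - np.sum((y - f)**2)) / np.sum(y**2)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 2000)
y = x**2 + 0.1 * rng.standard_normal(2000)  # y is a non-linear function of x
print(h2(y, x))   # close to 1: y is well predicted from x
print(h2(x, y))   # near zero: x is not a function of y
```

Running the estimator in both directions, as above, illustrates the directedness: h^2_{y|x} is close to 1 while h^2_{x|y} stays near zero, since x is not a function of y.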
3.4.2 Kullback-Leibler entropy, mutual information and transfer entropy
Kullback-Leibler (KL) entropy, introduced in [Kullback and Leibler, 1951], is a non-symmetric measure of the difference between two probability distributions P and Q. KL measures the expected number of extra bits required to code samples from P when using a code based on Q, rather than a code based on P. For probability distributions P(i) and Q(i) of a discrete random variable i, their KL divergence is defined as the average of the logarithmic difference between the probability distributions P(i) and Q(i), where the average is taken using the probabilities P(i):
\[
D_{KL}(P \| Q) = \sum_{i} P(i) \log \frac{P(i)}{Q(i)} \qquad (3.40)
\]
If the quantity 0 log 0 appears in the formula, it is interpreted as zero.
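As an illustration of Eq. (3.40), the following is a minimal NumPy sketch of the discrete KL divergence; the function name kl_divergence and the example distributions are hypothetical. Base-2 logarithms are used so that the result is in bits, matching the coding interpretation given above.

```python
import numpy as np

def kl_divergence(p, q):
    """Discrete KL divergence D_KL(P||Q) of Eq. (3.40), in bits."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    nz = p > 0   # the 0 log 0 = 0 convention: drop zero-probability terms
    return np.sum(p[nz] * np.log2(p[nz] / q[nz]))

p = np.array([0.5, 0.25, 0.25])   # "data" distribution P
q = np.array([1, 1, 1]) / 3       # "model" distribution Q
print(kl_divergence(p, q))  # extra bits per sample when coding P with a Q-based code
print(kl_divergence(q, p))  # generally different: the measure is non-symmetric
```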
For a continuous random variable x, the KL divergence is defined by the integral:

\[
D_{KL}(P \| Q) = \int p(x) \log \frac{p(x)}{q(x)} \, dx \qquad (3.41)
\]
where p and q denote the densities of P and Q. Typically, P represents the distribution of data, observations, or a precisely calculated theoretical distribution, while Q represents a theory, model, description, or approximation of P. The KL divergence from P to Q is not necessarily the same as the KL divergence from Q to P.
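This asymmetry, and the integral of Eq. (3.41), can be checked numerically. The sketch below assumes two Gaussian densities for p and q with arbitrary illustrative parameters; the known closed-form KL divergence between two Gaussians serves as a cross-check of the grid integration.

```python
import numpy as np

def gauss(x, mu, sigma):
    """Gaussian probability density function."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

mu1, s1 = 0.0, 1.0   # parameters of density p
mu2, s2 = 1.0, 2.0   # parameters of density q

# Evaluate Eq. (3.41) by a Riemann sum on a dense grid (natural log -> nats)
x = np.linspace(-10.0, 12.0, 20001)
dx = x[1] - x[0]
p, q = gauss(x, mu1, s1), gauss(x, mu2, s2)
kl_pq = np.sum(p * np.log(p / q)) * dx
kl_qp = np.sum(q * np.log(q / p)) * dx

# Known closed form for two Gaussian densities, used only as a cross-check
closed = np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5
print(kl_pq, closed)   # the two values agree (about 0.443 nats)
print(kl_qp)           # differs from kl_pq: D_KL(P||Q) != D_KL(Q||P)
```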