Using the same derivation mentioned in Sect. C.9, the maximum value is equal to
the largest eigenvalue of the matrix
$$\Sigma_{yy}^{-1}\,\Sigma_{xy}^{H}\,aa^{H}\,\Sigma_{xy}.$$
The matrix above has rank one, so it has only a single nonzero eigenvalue. According to Sect. C.8 (Property No. 10), this eigenvalue is equal to

$$a^{H}\Sigma_{xy}\Sigma_{yy}^{-1}\Sigma_{xy}^{H}a, \qquad (7.63)$$

which is a scalar.
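As a quick numerical check of this rank-one property, the single nonzero eigenvalue of $\Sigma_{yy}^{-1}\Sigma_{xy}^{H}aa^{H}\Sigma_{xy}$ can be compared against the scalar (7.63). This is a minimal sketch; the dimensions and covariance matrices below are illustrative assumptions, not quantities from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 4, 3  # illustrative dimensions (assumed for this sketch)

# Random Hermitian positive-definite Sigma_yy, random Sigma_xy, random a.
A = rng.standard_normal((q, q)) + 1j * rng.standard_normal((q, q))
Sigma_yy = A @ A.conj().T + q * np.eye(q)
Sigma_xy = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
a = rng.standard_normal((p, 1)) + 1j * rng.standard_normal((p, 1))

# Rank-one matrix Sigma_yy^{-1} Sigma_xy^H a a^H Sigma_xy.
M = np.linalg.inv(Sigma_yy) @ Sigma_xy.conj().T @ a @ a.conj().T @ Sigma_xy

# Its single nonzero eigenvalue equals the scalar (7.63),
# a^H Sigma_xy Sigma_yy^{-1} Sigma_xy^H a (it is the trace of M).
scalar = (a.conj().T @ Sigma_xy @ np.linalg.inv(Sigma_yy)
          @ Sigma_xy.conj().T @ a).item()
eigs = np.linalg.eigvals(M)
print(np.isclose(sorted(eigs, key=abs)[-1], scalar))  # prints True
```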
Thus, the optimization
$$|\gamma|^{2} = \max_{a,\,b}\; b^{H}\Sigma_{xy}^{H}\,aa^{H}\,\Sigma_{xy}\,b \quad \text{subject to } a^{H}\Sigma_{xx}a = 1 \text{ and } b^{H}\Sigma_{yy}b = 1, \qquad (7.64)$$
is now rewritten as
$$|\gamma|^{2} = \max_{a}\; a^{H}\Sigma_{xy}\Sigma_{yy}^{-1}\Sigma_{xy}^{H}a \quad \text{subject to } a^{H}\Sigma_{xx}a = 1. \qquad (7.65)$$
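The step from this constrained maximization to an eigenvalue problem can be sketched with a Lagrange multiplier (a standard argument; the text defers the details to Sect. C.9):

```latex
% Lagrangian of (7.65) with multiplier \lambda:
% L(a,\lambda) = a^{H}\Sigma_{xy}\Sigma_{yy}^{-1}\Sigma_{xy}^{H}a
%              - \lambda\,(a^{H}\Sigma_{xx}a - 1)
\frac{\partial L}{\partial a^{H}}
  = \Sigma_{xy}\Sigma_{yy}^{-1}\Sigma_{xy}^{H}a - \lambda\,\Sigma_{xx}a = 0
\quad\Longrightarrow\quad
\Sigma_{xx}^{-1}\Sigma_{xy}\Sigma_{yy}^{-1}\Sigma_{xy}^{H}a = \lambda\,a.
```

Premultiplying the stationarity condition by $a^{H}$ and using the constraint $a^{H}\Sigma_{xx}a = 1$ shows that the attained objective value equals $\lambda$ itself, so the maximum is reached at the largest eigenvalue.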
Using again the derivation described in Sect. C.9, the solution of this maximization
is obtained as the maximum eigenvalue of the matrix
$$\Sigma_{xx}^{-1}\Sigma_{xy}\Sigma_{yy}^{-1}\Sigma_{xy}^{H}.$$
That is, denoting the eigenvalues of this matrix as $\lambda_{j}$, where $j = 1,\ldots,d$ and $d = \min\{p, q\}$, the canonical squared magnitude coherence is derived as

$$|\gamma|^{2} = S_{\max}\{\Sigma_{xx}^{-1}\Sigma_{xy}\Sigma_{yy}^{-1}\Sigma_{xy}^{H}\} = \lambda_{1}, \qquad (7.66)$$
where the notation $S_{\max}\{\cdot\}$ indicates the maximum eigenvalue of the matrix between the parentheses, as defined in Sect. C.9.
This canonical squared magnitude coherence is considered the best overall magnitude coherence measure between the two sets of multiple spectra $x_{1},\ldots,x_{p}$ and $y_{1},\ldots,y_{q}$, and it is equal to the maximum eigenvalue $\lambda_{1}$ in Eq. (7.66). However, the other eigenvalues may carry information complementary to $\lambda_{1}$, and therefore a metric that uses all the eigenvalues may be preferable. Let us assume the random vectors $x$ and $y$ to be complex Gaussian. According to Eq. (C.52), we can then define the mutual information between $x$ and $y$ such that
$$I(x, y) = \sum_{j=1}^{d} \log\frac{1}{1-\lambda_{j}}, \qquad (7.67)$$
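A minimal sketch, in Python with NumPy, of computing the canonical squared magnitude coherence of Eq. (7.66) and the mutual information of Eq. (7.67) from the eigenvalues $\lambda_{j}$. The partitioned covariance below is a synthetic assumption chosen only so the code is self-contained:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 4, 3          # illustrative dimensions (assumed)
d = min(p, q)

# Draw a joint (p+q) x (p+q) Hermitian positive-definite covariance and
# partition it into Sigma_xx, Sigma_xy, Sigma_yy.
B = rng.standard_normal((p + q, p + q)) + 1j * rng.standard_normal((p + q, p + q))
Sigma = B @ B.conj().T + (p + q) * np.eye(p + q)
Sigma_xx, Sigma_xy, Sigma_yy = Sigma[:p, :p], Sigma[:p, p:], Sigma[p:, p:]

# Eigenvalues lambda_1 >= ... >= lambda_d of
# Sigma_xx^{-1} Sigma_xy Sigma_yy^{-1} Sigma_xy^H (real, in [0, 1)).
M = np.linalg.inv(Sigma_xx) @ Sigma_xy @ np.linalg.inv(Sigma_yy) @ Sigma_xy.conj().T
lam = np.sort(np.linalg.eigvals(M).real)[::-1][:d]

coh2 = lam[0]                              # |gamma|^2 = S_max{...} = lambda_1, Eq. (7.66)
I_xy = np.sum(np.log(1.0 / (1.0 - lam)))   # mutual information, Eq. (7.67)
print(coh2, I_xy)
```

Because the matrix is similar to a Hermitian positive semidefinite one, its eigenvalues are real; for a valid positive-definite joint covariance they lie in $[0, 1)$, so each term of the sum in (7.67) is nonnegative.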