6.1.1 Mean as a Variational Optimization
To define a mean $M$ of a finite set of SPD matrices $\{M_1, \ldots, M_n\}$, we model it using the following optimization framework for a distance function $D$:

$$M = \arg\min_{M \in \mathrm{Sym}^+(d)} \frac{1}{n} \sum_{i=1}^{n} D(M_i, M). \qquad (6.1)$$
Taking $d = 1$ and choosing the squared Euclidean distance $D(P, Q) = (P - Q)^2$ for positive numbers $P, Q > 0$, we get the center of mass $M = \frac{1}{n} \sum_{i=1}^{n} M_i$, the arithmetic mean (minimizing the variance). The squared Euclidean distance is derived from the inner product $\langle P, Q \rangle = P^T Q$ of the underlying vector space:

$$D^2(P, Q) = \langle P - Q, P - Q \rangle. \qquad (6.2)$$
Thus, to define the mean of square matrices $P, Q \in \mathcal{M}(d, d)$, we can choose the Fröbenius matrix norm $\|M\|_F = \sqrt{\mathrm{tr}(M M^T)}$ and find the arithmetic matrix mean $M = \frac{1}{n} \sum_{i=1}^{n} M_i$ as the minimizer of (6.1) for $D(P, Q) = \|P - Q\|_F^2$. Although trivial to compute, this arithmetic matrix mean has several drawbacks in practice. For example, in DT-MRI [9], the Euclidean matrix mean may have a determinant bigger than those of the inputs, which is not physically plausible since the matrices encode water diffusion properties.
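This determinant swelling is easy to observe numerically. A minimal sketch, assuming NumPy; the two test matrices are our own toy example, not taken from the text:

```python
import numpy as np

# Two SPD matrices with unit determinant.
A = np.diag([10.0, 0.1])   # det(A) = 1
B = np.diag([0.1, 10.0])   # det(B) = 1

# Arithmetic matrix mean: minimizer of (6.1) for the squared
# Frobenius distance.
M = (A + B) / 2

# The mean's determinant far exceeds that of every input:
# det(M) = 5.05 * 5.05 = 25.5025, the "swelling" effect.
print(np.linalg.det(M))
```

The effect is not an artifact of the example: averaging entrywise mixes large and small eigenvalues, so the product of eigenvalues (the determinant) can grow well beyond those of the inputs.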
6.1.2 Log-Euclidean Mean
The Log-Euclidean distance [9] is defined as $D(P, Q) = \|\log Q - \log P\|$, where $\log M$ is the principal logarithm of matrix $M$. The logarithm of an SPD matrix is defined as the inverse of the matrix exponential $\exp M = \sum_{i=0}^{\infty} \frac{1}{i!} M^i$. For SPD matrices $M$, we compute the eigendecomposition

$$M = R \,\mathrm{diag}(\lambda_1, \ldots, \lambda_d)\, R^T \qquad (6.3)$$
and deduce the log/exp matrices as

$$\log M = R \,\mathrm{diag}(\log \lambda_1, \ldots, \log \lambda_d)\, R^T \qquad (6.4)$$

and

$$\exp M = R \,\mathrm{diag}(\exp \lambda_1, \ldots, \exp \lambda_d)\, R^T. \qquad (6.5)$$
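Equations (6.3)-(6.5) translate directly into code. A minimal sketch, assuming NumPy; the function names `spd_log` and `spd_exp` are our own:

```python
import numpy as np

def spd_log(M):
    """Principal logarithm of an SPD matrix, per (6.3)-(6.4)."""
    lam, R = np.linalg.eigh(M)            # M = R diag(lam) R^T, lam > 0
    return R @ np.diag(np.log(lam)) @ R.T

def spd_exp(S):
    """Matrix exponential of a symmetric matrix, per (6.5)."""
    lam, R = np.linalg.eigh(S)
    return R @ np.diag(np.exp(lam)) @ R.T

# Round trip: exp(log M) recovers M (up to floating-point error).
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])
print(np.allclose(spd_exp(spd_log(M)), M))
```

Because `np.linalg.eigh` is specialized to symmetric matrices, the eigenvalues are real and the eigenvector matrix $R$ is orthogonal, exactly the structure the formulas above rely on.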
Note that in general $\log(MN) \neq \log M + \log N$ and $\exp(M + N) \neq \exp M \exp N$; these identities hold only when the matrices commute, that is, when $MN - NM = 0$. Symmetric matrices commute if and only if they share the same eigenspaces. The Log-Euclidean mean [9] inherits a vector space structure, and has a closed-form solution:

$$\mathrm{LE}(M_1, \ldots, M_n) = \exp\left(\frac{1}{n} \sum_{i=1}^{n} \log M_i\right).$$
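The closed-form Log-Euclidean mean, namely exponentiating the arithmetic mean of the matrix logarithms, also avoids the determinant swelling of the Euclidean mean. A minimal sketch, assuming NumPy; the function name `le_mean` and the test matrices are our own:

```python
import numpy as np

def le_mean(mats):
    """Log-Euclidean mean: exp of the average of the matrix logarithms."""
    def spd_log(M):
        lam, R = np.linalg.eigh(M)
        return R @ np.diag(np.log(lam)) @ R.T
    S = sum(spd_log(M) for M in mats) / len(mats)
    lam, R = np.linalg.eigh(S)
    return R @ np.diag(np.exp(lam)) @ R.T

A = np.diag([10.0, 0.1])   # det(A) = 1
B = np.diag([0.1, 10.0])   # det(B) = 1
G = le_mean([A, B])

# det(exp S) = exp(tr S), so the LE mean's determinant is the geometric
# mean of the input determinants: here it stays exactly 1.
print(np.linalg.det(G))
```

Here the logarithms of `A` and `B` cancel, so the Log-Euclidean mean is the identity matrix, in contrast to the arithmetic mean, whose determinant swells to 25.5025.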