Fig. 15.4 Premia for A, with a = 100, r = 1, q = 4, η = 100, premium divergence = logdet (Table 15.1). See text for details. [Plot omitted: premium (0 to 9e+09) as a function of T (0 to 600).]
15.6 Discussion
In this section, our objective is twofold. First, we drill down into the properties of our
divergences (15.2) and compare them to the properties of other matrix divergences
based on Bregman divergences published elsewhere. Second, we exploit these prop-
erties to refine our analysis of the risk premium of our mean-divergence model.
Accordingly, for the first goal, the matrix arguments of the divergences are no longer
assumed to be symmetric.
Reference [13] previously defined a particular case of matrix-based divergence,
which corresponds to computing the usual p-norm vector divergence between
spec(L) and spec(N). It is not hard to check that this corresponds to a particular
case of Bregman-Schatten p-divergences in the case where one assumes that L and
N share the same transition matrix. The qualitative gap between the definitions is
significant: in the case of general Bregman matrix divergences, such an assumption
would make the divergence separable, that is, a sum of coordinate-wise divergences
[11]. This is what the following Theorem shows. We adapt notation (15.4) to vectors
and define $\tilde{u}$ as the vector with coordinates $\tilde{\psi}(u_i)$. We also make use of the Hadamard
product $\cdot$ previously used in Table 15.1.
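The spectral construction of [13] can be sketched numerically as follows. This is a hypothetical illustration in NumPy, not the reference's implementation: the function name is ours, and we restrict to symmetric matrices so that the spectra are real and can be sorted before the coordinate-wise comparison.

```python
import numpy as np

def spectral_p_divergence(L, N, p=2):
    """p-norm vector divergence between spec(L) and spec(N).

    Illustrative sketch: assumes symmetric L and N (real spectra),
    and matches eigenvalues after sorting in ascending order.
    """
    spec_L = np.sort(np.linalg.eigvalsh(L))
    spec_N = np.sort(np.linalg.eigvalsh(N))
    return np.sum(np.abs(spec_L - spec_N) ** p)

# Example: spectra {1, 2} vs. {1, 3} differ in one eigenvalue by 1.
M1 = np.diag([1.0, 2.0])
M2 = np.diag([1.0, 3.0])
print(spectral_p_divergence(M1, M2, p=2))
```

Note that this vector divergence depends on the matrices only through their eigenvalues, whereas a general Bregman matrix divergence also depends on the eigenbasis, which is precisely the gap discussed above.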
Theorem 3. Assume diagonalizable square matrices L and N, with their diago-
nalizations respectively denoted $L = P_L D_L P_L^{-1}$ and $N = P_N D_N P_N^{-1}$.