bounded risk premia over a sequence of market iterations [ 14 , 26 ]. The setting of
these works represents the most direct lineage to our second contribution: the design
and analysis, in our mean-divergence model, of an on-line learning algorithm to track
shifting portfolios of bounded risk premia, which relies upon our Bregman-Schatten
p -divergences. Our algorithm is inspired by the popular p -norm algorithms [ 15 ].
Given reals r, ρ > 0, the algorithm updates symmetric positive definite (SPD) allocation matrices whose r-norm is bounded above by ρ.
. The analysis of the algorithm
exploits tools from matrix perturbation theory and new properties of Bregman matrix
divergences that may be of independent interest. We then provide experiments and
comparisons of this algorithm on twelve years of S&P 500 stock data,
displaying the ability of the algorithm to track efficient portfolios, and the capacity
of the mean-divergence model to spot important events at the market scale, events
that would be comparatively dampened in the mean-variance model. Finally, we
drill down into a theoretical analysis of our premia, first including a qualitative and
quantitative comparison of the matrix divergences we use to others that have been
proposed elsewhere [ 12 , 13 , 16 ], and then analyzing the interactions of the two key
components of the risk premium: the investor's and the natural market allocations.
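To make the norm constraint on the allocation matrices concrete, the following is a minimal sketch (our own naming, and not the chapter's actual update rule) of computing the Schatten r-norm of a symmetric matrix, i.e. the ordinary r-norm of its eigenvalue vector, and of radially rescaling a matrix back into the ball of radius ρ when the bound is exceeded:

```python
import numpy as np

def schatten_norm(A: np.ndarray, r: float) -> float:
    """Schatten r-norm of a symmetric matrix: the ordinary
    r-norm of its eigenvalue vector (all positive when A is SPD)."""
    eigvals = np.linalg.eigvalsh(A)
    return float(np.sum(np.abs(eigvals) ** r) ** (1.0 / r))

def rescale_to_ball(A: np.ndarray, r: float, rho: float) -> np.ndarray:
    """Radially rescale A so that its Schatten r-norm is at most rho.
    (A simple renormalization, not an exact Bregman projection.)"""
    norm = schatten_norm(A, r)
    return A if norm <= rho else (rho / norm) * A
```

For an SPD matrix the eigenvalues are positive, so r = 1 recovers the trace norm and r = 2 the Frobenius norm.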
The remainder of the paper is organized as follows: Sect. 15.2 presents Breg-
man matrix divergences and some of their useful properties; Sect. 15.3 presents our
generalization of the mean-variance model; Sect. 15.4 analyzes our on-line learning
algorithm in our mean-divergence model; Sect. 15.5 presents some experiments; the
last two sections respectively discuss further our Bregman matrix divergences in relation to other matrix divergences introduced elsewhere, discuss further the mean-divergence model, and conclude the paper with avenues for future research.
15.2 Bregman Matrix Divergences
We begin with some definitions. Following [25], capitalized bold letters like M denote matrices, and italicized bold letters like v denote vectors. Blackboard notations like S denote subsets of (tuples of, matrices of) reals, and |S| their cardinality. Calligraphic letters are reserved for algorithms. To make clear notations that rely on economic concepts, we shall use small capitals for them: for example, utility functions are denoted A. The following particular matrices are defined: I, the identity matrix; Z, the all-zero matrix. An allocation matrix A is SPD; a density matrix is an allocation matrix of unit trace. Unless otherwise explicitly stated in this section and the following ones (Sects. 15.3 and 15.4), matrices are symmetric.
We briefly summarize the extension of Bregman divergences to matrix divergences by using the diagonalization of linear operators [16, 21, 25]. Let ψ be some strictly convex differentiable function whose domain satisfies dom(ψ) ⊆ ℝ. For any symmetric d × d matrix N ∈ ℝ^{d×d} whose spectrum satisfies spec(N) ⊆ dom(ψ), we let

    ψ(N) ≐ Tr(Ψ(N)) ,   Ψ(N) ≐ Σ_{k ≥ 0} ψ_k N^k ,   (15.1)

where the ψ_k are the coefficients of the power series expansion ψ(t) = Σ_{k ≥ 0} ψ_k t^k.
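Since ψ is applied to N through its power series, Ψ acts directly on the spectrum: if N = U diag(λ_1, …, λ_d) U^⊤, then Ψ(N) = U diag(ψ(λ_1), …, ψ(λ_d)) U^⊤, and so ψ(N) = Tr(Ψ(N)) = Σ_i ψ(λ_i). A minimal NumPy sketch of this computation (function names are ours):

```python
import numpy as np

def matrix_psi(N: np.ndarray, psi) -> np.ndarray:
    """Lift a scalar function psi to a symmetric matrix via
    diagonalization: Psi(N) = U diag(psi(lambda_i)) U^T."""
    lam, U = np.linalg.eigh(N)
    return (U * psi(lam)) @ U.T  # columns of U scaled by psi(lam)

def trace_psi(N: np.ndarray, psi) -> float:
    """psi(N) = Tr(Psi(N)) = sum_i psi(lambda_i)."""
    return float(np.sum(psi(np.linalg.eigvalsh(N))))
```

For example, ψ(x) = x² gives Ψ(N) = N² and ψ(N) = Tr(N²), while ψ(x) = x log x applied to a density matrix yields the negative von Neumann entropy.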