symmetric and convex, then it is also Schur-convex. The reverse conclusion is not
valid.
Example A3.3. Consider again the function g(x) from Example A3.2, which is of the form (A3.13) with h(x_i) = 1 − … as given there. It can be checked that h(x_i) is convex by showing that its second derivative is nonnegative. Therefore, g(x) is Schur-convex. This is exactly what we have done in Example A3.2, thus implicitly deriving the special rule for functions of the form (A3.13).
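The Schur-convexity claim is easy to check numerically: whenever x majorizes y, a Schur-convex g must satisfy g(x) ≥ g(y). The sketch below uses the illustrative choice h(t) = t², a convex function standing in for the h of Example A3.2 (an assumption made only for this demonstration), together with a direct implementation of the majorization test:

```python
def majorizes(x, y, tol=1e-12):
    """Return True if x majorizes y: equal totals and dominating
    partial sums of the components sorted in decreasing order."""
    xs = sorted(x, reverse=True)
    ys = sorted(y, reverse=True)
    if abs(sum(xs) - sum(ys)) > tol:
        return False
    px = py = 0.0
    for a, b in zip(xs, ys):
        px += a
        py += b
        if px < py - tol:
            return False
    return True

def g(x):
    """Symmetric, convex, separable function of the form (A3.13),
    with the illustrative (assumed) choice h(t) = t**2."""
    return sum(t ** 2 for t in x)

x = [3.0, 1.0, 0.0]   # x majorizes y: same sum, more "spread out"
y = [2.0, 1.0, 1.0]

assert majorizes(x, y)
assert g(x) >= g(y)   # g(x) = 10.0, g(y) = 6.0
```

The same check can be repeated for any other majorization pair; a single violation would disprove Schur-convexity, while agreement over many pairs is only supporting evidence.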
Example A3.4. Consider a discrete random variable X that takes on n values with probabilities p_1, p_2, …, p_n, which we collect in the vector p = [p_1, p_2, …, p_n]^T. The entropy of X is given by

H(X) = − Σ_{i=1}^{n} p_i log p_i.

Since h(x) = x log x is strictly convex, entropy is strictly Schur-concave. Using the result from Example A3.1, it follows that minimum entropy is achieved when the probabilities are "most unequal", i.e., p = [1, 0, …, 0]^T or any permutation thereof. Maximum entropy is achieved when all probabilities are equal, i.e., p = [1/n, 1/n, …, 1/n]^T.
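The conclusions of Example A3.4 are easy to verify numerically. The sketch below (natural logarithm assumed; the base only rescales H) compares the entropy of a degenerate, an intermediate, and a uniform distribution:

```python
import math

def entropy(p):
    """Shannon entropy H(X) = -sum_i p_i log p_i, with 0 log 0 taken as 0."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

n = 4
degenerate = [1.0] + [0.0] * (n - 1)   # "most unequal": all mass on one value
uniform = [1.0 / n] * n                # all probabilities equal
other = [0.4, 0.3, 0.2, 0.1]           # some distribution in between

# Schur-concavity in action: entropy is smallest (zero) for the
# degenerate distribution and largest (log n) for the uniform one.
assert entropy(degenerate) == 0.0
assert abs(entropy(uniform) - math.log(n)) < 1e-12
assert entropy(degenerate) <= entropy(other) <= entropy(uniform)
```

Since the uniform vector is majorized by every probability vector, and every probability vector is majorized by [1, 0, …, 0]^T, these two extremes bracket the entropy of any distribution.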
A3.2.2 Functions defined on D

We know from Result A3.3 that Schur-convex functions are necessarily symmetric if they are defined on IR^n. However, functions defined on the set of ordered n-tuples

D = {x : x_1 ≥ x_2 ≥ ··· ≥ x_n}

are not required to be symmetric in order to be Schur-convex.
Result A3.4. Let g : D → IR be continuous on D and continuously differentiable on the interior of D. Then g is Schur-convex if and only if

∂g(x)/∂x_i ≥ ∂g(x)/∂x_{i+1},   i = 1, …, n − 1,   (A3.16)

for all x in the interior of D. For functions of the form

g(x) = Σ_{i=1}^{n} h_i(x_i),   (A3.17)

where each h_i : IR → IR is differentiable, this condition simplifies to

h_i′(x_i) ≥ h_{i+1}′(x_{i+1}),   ∀ x_i ≥ x_{i+1},   i = 1, …, n − 1.   (A3.18)
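Condition (A3.16) can be tested numerically on concrete functions. The sketch below uses the assumed example g(x) = Σ_i a_i x_i with decreasing weights a_i (so h_i(t) = a_i t and h_i′(t) = a_i, and (A3.18) reduces to a_i ≥ a_{i+1}): g is Schur-convex on D even though it is not symmetric. The partial derivatives are approximated by central finite differences:

```python
# Decreasing weights: a_1 >= a_2 >= a_3, an illustrative assumption.
a = [4.0, 2.0, 1.0]

def g(x):
    """Non-symmetric function of the form (A3.17): h_i(t) = a_i * t."""
    return sum(ai * xi for ai, xi in zip(a, x))

def partial(f, x, i, eps=1e-6):
    """Central finite-difference approximation of df/dx_i at x."""
    xp = list(x); xp[i] += eps
    xm = list(x); xm[i] -= eps
    return (f(xp) - f(xm)) / (2 * eps)

x = [5.0, 3.0, 1.0]   # a point in the interior of D (x1 > x2 > x3)
grads = [partial(g, x, i) for i in range(len(x))]

# (A3.16): the partial derivatives must be nonincreasing in i.
assert all(grads[i] >= grads[i + 1] - 1e-6 for i in range(len(grads) - 1))
```

Swapping a to increasing weights makes the assertion fail at any interior point, which is consistent with the "only if" direction of Result A3.4.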