Proof Suppose Δ (R₁† · R₂†) Φ. Then there is some Θ such that Δ R₁† Θ R₂† Φ. By Lemma 6.1 we have the decomposition Δ = Σ_{i∈I} p_i · s̄_i and Θ = Σ_{i∈I} p_i · Θ_i with s_i R₁ Θ_i for each i ∈ I. By Proposition 6.2, we obtain Φ = Σ_{i∈I} p_i · Φ_i and for each i ∈ I we have Θ_i R₂† Φ_i. It follows that s_i (R₁ · R₂†) Φ_i, and thus Δ (R₁ · R₂†)† Φ. So we have shown that R₁† · R₂† ⊆ (R₁ · R₂†)†. The other direction can be proved similarly.
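As a small illustration (the state space, relations and numbers here are chosen for this sketch and are not taken from the text), let S = {s, t}, R₁ = {(s, ½·s̄ + ½·t̄), (t, t̄)} and R₂ = {(s, t̄), (t, s̄)}. For Δ = ½·s̄ + ½·t̄ we have Δ R₁† Θ with Θ = ½·(½·s̄ + ½·t̄) + ½·t̄ = ¼·s̄ + ¾·t̄, and Θ R₂† Φ with Φ = ¼·t̄ + ¾·s̄ = ¾·s̄ + ¼·t̄. On the other side, s (R₁ · R₂†) (½·s̄ + ½·t̄) and t (R₁ · R₂†) s̄, so lifting with the weights of Δ gives ½·(½·s̄ + ½·t̄) + ½·s̄ = ¾·s̄ + ¼·t̄ = Φ; that is, Δ (R₁ · R₂†)† Φ, as the inclusion just proved predicts.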
Notice that in the second part of Definition 6.2 we have required the index set I to be finite. In fact, this constraint is unnecessary if the state space S is finite; in that case we say that the lifting operation enjoys a property of infinite linearity. To show this property we need two technical lemmas.
Let us write X̄ for the set of subdistributions of the form Σ_{i≥0} p_i · Δ_i, where Δ_i ∈ X for each i ≥ 0 and Σ_{i≥0} p_i = 1, and let ⌈X⌉ denote the convex closure of X.
Lemma 6.4 If the set S is finite then ⌈X⌉ = X̄ for any subset X of D_sub(S).
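A quick sanity check (an example constructed here, not taken from the text): take S = {s, t} and X = {s̄, t̄}. The countable combination Σ_{i≥1} 2^{-i} · Δ_i with Δ_i = s̄ for odd i and Δ_i = t̄ for even i belongs to X̄; since the odd-indexed weights sum to 2/3 and the even-indexed ones to 1/3, it equals the finite convex combination (2/3)·s̄ + (1/3)·t̄ and hence also belongs to ⌈X⌉, as the lemma guarantees whenever S is finite.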
Proof The basic idea is to view a subdistribution over S as a point in Euclidean space of dimension |S| and to give a geometric proof, by induction on the size of S. It is clear that ⌈X⌉ ⊆ X̄, so we prove the inverse inclusion, X̄ ⊆ ⌈X⌉. More specifically, we prove, by induction on k, that if X is a subset of a space of dimension k, then ⌈X⌉ = X̄. The base case, when |S| = 1, is trivial. Let us consider the inductive case, where the dimension is (k + 1).
Suppose there is a point x ∈ X̄ but x ∉ ⌈X⌉. Then, by the Separation theorem (cf. Theorem 2.7), there exists a hyperplane H that separates x from ⌈X⌉. If h is the normal of H, we can assume without loss of generality that there is a constant c satisfying h · x ≥ c and h · y ≤ c for all y ∈ ⌈X⌉, where with a slight abuse of notation we write · for the dot product of two vectors of dimension (k + 1). Note that the separation here may not be strict because ⌈X⌉ is convex but not necessarily Cauchy closed.
Since x ∈ X̄, there is a sequence of probabilities p_i with Σ_{i≥0} p_i = 1 and a sequence of points x_i ∈ X such that x = Σ_{i≥0} p_i · x_i. We then have
(i) c ≤ h · x = Σ_{i≥0} p_i · (h · x_i), and
(ii) h · x_i ≤ c for all i ≥ 0.
It follows from (i) and (ii) that in fact h · x_i = c for all i ≥ 0, which means that all the points x_i lie in H; in other words, the separation of x from ⌈X⌉ cannot be strict. Therefore, writing Y for X ∩ H, we have x ∈ Ȳ, since x = Σ_{i≥0} p_i · x_i with every x_i ∈ Y. On the other hand, since x ∉ ⌈X⌉ we also have x ∉ ⌈Y⌉. However, Y can be described as a subset of a space of one dimension lower than X, that is of dimension k. We have now contradicted the induction hypothesis, which asserts that ⌈Y⌉ = Ȳ.
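The geometric reading above also lends itself to a computation. The following sketch (not part of the text; the function name in_convex_closure and the use of numpy and scipy.optimize.linprog are assumptions made here) represents subdistributions over a finite S as vectors of dimension |S| and decides membership of a point in the convex closure ⌈X⌉ of a finite set X by a linear-programming feasibility check.

# Sketch only, not from the text: convex-closure membership as an LP feasibility problem.
import numpy as np
from scipy.optimize import linprog

def in_convex_closure(x, X, tol=1e-9):
    """True iff x is a (finite) convex combination of the points in X."""
    X = np.asarray(X, dtype=float)               # shape (m, |S|): the candidate generators
    x = np.asarray(x, dtype=float)               # shape (|S|,): the point to test
    m = X.shape[0]
    # Feasibility question: is there p >= 0 with X^T p = x and 1^T p = 1 ?
    A_eq = np.vstack([X.T, np.ones((1, m))])
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(c=np.zeros(m), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * m, method="highs")
    return bool(res.success) and np.allclose(A_eq @ res.x, b_eq, atol=tol)

# Example with S = {s, t}; coordinates are (mass on s, mass on t).
X = [(1.0, 0.0), (0.0, 1.0), (0.25, 0.25)]       # the last point is a proper subdistribution
print(in_convex_closure((0.5, 0.5), X))          # True:  0.5*(1,0) + 0.5*(0,1)
print(in_convex_closure((0.9, 0.2), X))          # False: total mass 1.1 exceeds that of any convex combination

When the check fails, LP duality (Farkas' lemma) produces exactly a separating pair (h, c) of the kind used in the proof; for a finite X the convex closure is closed, so the separation can even be taken strict.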
In order to use the above lemma, we need to rephrase the lifting operation in terms of the closure operator ⌈·⌉. To this end, let us use R(s) to denote the set { Δ ∈ D_sub(S) | s R Δ }, for any R ⊆ S × D_sub(S).
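For instance, for an illustrative relation R = {(s, ½·s̄ + ½·t̄), (t, t̄)} over S = {s, t} (an example of ours, not from the text), R(s) is the singleton {½·s̄ + ½·t̄} and R(t) = {t̄}.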