If instead of $\varepsilon$ we use $1/p$, $p \in \mathbb{N}$, then $\eta_n(\omega) \to 0$ if and only if

$$\forall p \; \exists k \; \forall n > k : \; \omega \in \eta_n^{-1}\big(\big(-\tfrac{1}{p}, \tfrac{1}{p}\big)\big).$$

And $\eta_n \to 0$ almost everywhere, if the set $\{\omega ; \eta_n(\omega) \to 0\}$ has measure 1.
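In set form, and using the continuity of the probability measure $P$ along monotone sequences of sets, the same condition reads

$$\{\omega : \eta_n(\omega) \to 0\} = \bigcap_{p=1}^{\infty} \bigcup_{k=1}^{\infty} \bigcap_{n > k} \eta_n^{-1}\big(\big(-\tfrac{1}{p}, \tfrac{1}{p}\big)\big),$$

hence

$$P\big(\{\omega : \eta_n(\omega) \to 0\}\big) = \lim_{p \to \infty} \lim_{k \to \infty} \lim_{i \to \infty} P\Big(\bigcap_{n=k+1}^{k+i} \eta_n^{-1}\big(\big(-\tfrac{1}{p}, \tfrac{1}{p}\big)\big)\Big).$$

It is this iterated-limit form that carries over to observables in Definition 4.1(iii) below.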
Definition 4.1. A sequence $(y_n)_n$ of observables

(i) converges in distribution to a function $F : \mathbb{R} \to \mathbb{R}$, if
$$\lim_{n \to \infty} m\big(y_n((-\infty, t))\big) = F(t)$$
for every $t \in \mathbb{R}$;

(ii) it converges to 0 in state $m : \mathcal{F} \to [0, 1]$, if
$$\lim_{n \to \infty} m\big(y_n((-\varepsilon, \varepsilon))\big) = 1$$
for every $\varepsilon > 0$;

(iii) it converges to 0 $m$-almost everywhere, if
$$\lim_{p \to \infty} \lim_{k \to \infty} \lim_{i \to \infty} m\Big(\bigwedge_{n=k}^{k+i} y_n\big(\big(-\tfrac{1}{p}, \tfrac{1}{p}\big)\big)\Big) = 1.$$
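To make (ii) concrete, here is a minimal Monte Carlo sketch for the classical Kolmogorov case, in which an observable is induced by a random variable, $y_n(A) = \eta_n^{-1}(A)$, and the state is $m = P$, so that $m(y_n((-\varepsilon, \varepsilon))) = P(|\eta_n| < \varepsilon)$. The particular sequence $\eta_n$ (sample means of i.i.d. Uniform$(-1, 1)$ variables) and all numerical parameters are illustrative choices, not taken from the text.

import numpy as np

rng = np.random.default_rng(0)

def state_of_interval(n, eps, trials=10_000):
    """Estimate m(y_n((-eps, eps))) = P(|eta_n| < eps) by Monte Carlo,
    where eta_n is the mean of n i.i.d. Uniform(-1, 1) variables."""
    xi = rng.uniform(-1.0, 1.0, size=(trials, n))
    eta_n = xi.mean(axis=1)
    return np.mean(np.abs(eta_n) < eps)

eps = 0.1
for n in [1, 10, 100, 1000]:
    print(f"n = {n:5d}   m(y_n((-{eps}, {eps}))) ~ {state_of_interval(n, eps):.3f}")
# The estimates increase towards 1, illustrating convergence of (y_n) to 0 in state m.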
Theorem 4.4. Let $(y_n)_n$ be a sequence of observables and let $(\eta_n)_n$ be the sequence of corresponding random variables. Then

(i) $(y_n)_n$ converges to $F : \mathbb{R} \to \mathbb{R}$ in distribution if and only if $(\eta_n)_n$ converges to $F$;

(ii) $(y_n)_n$ converges to 0 in state $m : \mathcal{F} \to [0, 1]$ if and only if $(\eta_n)_n$ converges to 0 in measure $P : \mathcal{S} \to [0, 1]$;

(iii) if $(\eta_n)_n$ converges $P$-almost everywhere to 0, then $(y_n)_n$ converges $m$-almost everywhere to 0.
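A one-line sketch of part (i), under the assumption that the corresponding random variables represent the observables in the usual sense, i.e. $m(y_n(A)) = P(\eta_n^{-1}(A))$ for every Borel set $A$:
$$m\big(y_n((-\infty, t))\big) = P\big(\eta_n^{-1}((-\infty, t))\big) = F_{\eta_n}(t), \qquad t \in \mathbb{R},$$
so $\lim_{n} m(y_n((-\infty, t))) = F(t)$ for every $t \in \mathbb{R}$ holds exactly when the distribution functions $F_{\eta_n}$ converge to $F$ pointwise, i.e. when $(\eta_n)_n$ converges to $F$ in distribution.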
The details can be found in [66]. Many applications of the method have been described in [25], [31], [35], [37], [39], [52].
6. Conditional probability
The conditional probability (of $A$ with respect to $B$) is the real number $P(A|B)$ such that
$$P(A \cap B) = P(B)\,P(A|B).$$
When $A$, $B$ are independent, then $P(A|B) = P(A)$: the event $A$ does not depend on the occurrence of the event $B$. Another point of view:
$$P(A \cap B) = \int_B P(A|B)\, dP.$$
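As a minimal numerical sketch of the defining identity $P(A \cap B) = P(B)\,P(A|B)$, here is a small Python check on two fair dice; the events $A$ and $B$ are an illustrative choice, not taken from the text.

from fractions import Fraction

# Two fair dice: A = "first die shows 6", B = "sum of the dice is at least 10".
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def prob(event):
    """Probability of an event (a predicate on outcomes) under the uniform measure."""
    return Fraction(sum(1 for w in outcomes if event(w)), len(outcomes))

A = lambda w: w[0] == 6
B = lambda w: w[0] + w[1] >= 10
A_and_B = lambda w: A(w) and B(w)

p_A_given_B = prob(A_and_B) / prob(B)          # P(A|B) = P(A and B) / P(B)
print("P(A|B)        =", p_A_given_B)          # 1/2
print("P(B) * P(A|B) =", prob(B) * p_A_given_B)
print("P(A and B)    =", prob(A_and_B))        # equals the product above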
 