Proof. We shall use the notation from the last two theorems. Then for C ∈ σ(J)

$$m(x_n(C)) = m(h_n(\mathbb{R} \times \dots \times \mathbb{R} \times C)) = P(\mathbb{R} \times \dots \times \mathbb{R} \times C) = P(\pi_n^{-1}(C)) = P(\xi_n^{-1}(C)),$$
hence

$$E(\xi_n) = \int_{-\infty}^{\infty} t \, dP_{\xi_n}(t) = \int_{-\infty}^{\infty} t \, dm_{x_n}(t) = E(x_n) = a,$$
and

$$\sigma^2(\xi_n) = \sigma^2(x_n) = \sigma^2.$$
Moreover,

$$P(\xi_1^{-1}(C_1) \cap \dots \cap \xi_n^{-1}(C_n)) = P(\pi_n^{-1}(C_1 \times \dots \times C_n)) = m(h_n(C_1 \times \dots \times C_n)) = m(x_1(C_1)) \cdots m(x_n(C_n)) = P(\xi_1^{-1}(C_1)) \cdots P(\xi_n^{-1}(C_n)),$$
hence ξ_1, ..., ξ_n are independent for every n. Put

$$g_n(u_1, \dots, u_n) = \frac{1}{\sqrt{n}\,\sigma} \sum_{i=1}^{n} (u_i - a).$$

By Theorem 4.2 we have

$$m_{\frac{1}{\sqrt{n}\,\sigma}\sum_{i=1}^{n}(x_i - a)}((-\infty, t)) = m(h_n(g_n^{-1}((-\infty, t)))) = m(y_n((-\infty, t))) = P(\eta_n^{-1}((-\infty, t))) = P\Big(\Big\{\omega : \frac{1}{\sqrt{n}\,\sigma}\sum_{i=1}^{n}(\xi_i(\omega) - a) < t\Big\}\Big).$$
Therefore by the classical central limit theorem

$$\lim_{n\to\infty} m_{\frac{1}{\sqrt{n}\,\sigma}\sum_{i=1}^{n}(x_i - a)}((-\infty, t)) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-u^2/2} \, du.$$
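The limit above can be illustrated numerically. The following sketch (my own illustration, not part of the text; all function names are mine) draws i.i.d. uniform samples, forms the normalized sums from the formula, and compares the empirical frequency of the event {η_n < t} with the Gaussian integral:

```python
import math
import random

def normal_cdf(t: float) -> float:
    """Standard normal distribution function, i.e. the Gaussian integral above."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def empirical_clt(n: int, trials: int, t: float, seed: int = 0) -> float:
    """Fraction of trials in which the normalized sum falls below t.

    Uses uniform(0, 1) summands, so a = 1/2 and sigma^2 = 1/12.
    """
    rng = random.Random(seed)
    a, sigma = 0.5, math.sqrt(1.0 / 12.0)
    below = 0
    for _ in range(trials):
        s = sum(rng.random() for _ in range(n))
        eta = (s - n * a) / (math.sqrt(n) * sigma)
        if eta < t:
            below += 1
    return below / trials

t = 1.0
# The two printed values should be close for moderate n and many trials.
print(empirical_clt(n=50, trials=20000, t=t), normal_cdf(t))
```

The agreement improves like 1/√trials (sampling error) on top of the CLT error in n, so increasing both parameters tightens the match.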
Let us have a look at the previous theorem from another point of view, say, a categorical one. We had

$$\lim_{n\to\infty} P(\eta_n^{-1}((-\infty, t))) = \varphi(t).$$
We can say that (η_n)_n converges to φ in distribution. Of course, there are other important possibilities of convergence, at least convergence in measure and almost everywhere.
A sequence (η_n)_n of random variables (= measurable functions) converges to 0 in measure μ : S → [0, 1], if

$$\lim_{n\to\infty} \mu(\eta_n^{-1}((-\varepsilon, \varepsilon))) = 1$$

for every ε > 0. And the sequence converges to 0 almost everywhere, if
$$\mu\Big(\bigcap_{p=1}^{\infty} \bigcup_{k=1}^{\infty} \bigcap_{n=k}^{\infty} \eta_n^{-1}\Big(\Big(-\frac{1}{p}, \frac{1}{p}\Big)\Big)\Big) = 1.$$
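The two notions really differ. A standard counterexample (my own illustration, not taken from the text) is the "sliding interval" sequence on [0, 1] with Lebesgue measure: η_n is the indicator of the n-th dyadic interval sweeping across [0, 1]. The measure of the set where η_n ≠ 0 tends to 0, so η_n → 0 in measure, yet at every point ω the value 1 recurs infinitely often, so the sequence converges at no ω:

```python
from fractions import Fraction

def interval(n: int):
    """n-th sliding interval: writing n = 2^k + j with 0 <= j < 2^k,
    return [j / 2^k, (j + 1) / 2^k), an interval of length 2^(-k)."""
    k = n.bit_length() - 1
    j = n - (1 << k)
    return Fraction(j, 1 << k), Fraction(j + 1, 1 << k)

def eta(n: int, omega: Fraction) -> int:
    """Indicator of the n-th sliding interval, evaluated at omega."""
    lo, hi = interval(n)
    return 1 if lo <= omega < hi else 0

# Convergence in measure: mu({eta_n != 0}) = 2^(-k) -> 0.
lengths = [interval(n)[1] - interval(n)[0] for n in (1, 2, 4, 8, 16)]
print(lengths)  # 1, 1/2, 1/4, 1/8, 1/16

# No pointwise convergence: at omega = 1/3 the value 1 recurs at every dyadic level.
omega = Fraction(1, 3)
hits = [n for n in range(1, 200) if eta(n, omega) == 1]
print(hits[:5])
```

Exact rational arithmetic (`Fraction`) is used so that the membership tests are free of rounding error.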
Certainly, if η_n(ω) → 0, then

$$\forall \varepsilon > 0 \ \exists k \ \forall n > k : -\varepsilon < \eta_n(\omega) < \varepsilon.$$
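This implication can be checked mechanically for a concrete sequence. The sketch below (a hypothetical example of mine, taking η_n(ω) = 1/n at a fixed ω) finds, for each p, the smallest k beyond which every η_n(ω) lies in (−1/p, 1/p), which is exactly membership of ω in the triple intersection set above:

```python
def eta(n: int) -> float:
    """Assumed example sequence at a fixed omega: eta_n(omega) = 1/n, which tends to 0."""
    return 1.0 / n

def witness_k(p: int, horizon: int = 10_000) -> int:
    """Smallest k with -1/p < eta_n < 1/p for all n >= k (verified up to horizon)."""
    for k in range(1, horizon):
        if all(abs(eta(n)) < 1.0 / p for n in range(k, horizon)):
            return k
    raise ValueError("no witness found below horizon")

# For every p a witness k exists; here k = p + 1 works, since 1/n < 1/p for n > p.
for p in (1, 2, 5, 10):
    print(p, witness_k(p))
```
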