Proof. Let $C_1, C_2, \ldots, C_n \in \mathcal{B}(R)$. Then by Definition 3.3 and Definition 3.1

$$\mu_{n+1}(C_1 \times C_2 \times \cdots \times C_n \times R) = m(x_1(C_1) \cdot x_2(C_2) \cdot \ldots \cdot x_n(C_n) \cdot x_{n+1}(R)) =$$
$$= m(x_1(C_1) \cdot x_2(C_2) \cdot \ldots \cdot x_n(C_n) \cdot (1, 0)) =$$
$$= m(x_1(C_1) \cdot x_2(C_2) \cdot \ldots \cdot x_n(C_n)) = \mu_n(C_1 \times C_2 \times \cdots \times C_n),$$

hence $\mu_{n+1}|(\mathcal{J}_n \times R) = \mu_n|\mathcal{J}_n$. Of course, if two measures coincide on $\mathcal{J}_n$, then they coincide on $\sigma(\mathcal{J}_n)$, too.
Now we shall formulate a translation formula between sequences of observables in $(\mathcal{F}, m)$ and corresponding random variables in $(R^N, \sigma(\mathcal{C}), P)$ ([67]).
Theorem 4.2. Let the assumptions of Theorem 4.1 be satisfied. Let $\mathcal{C}$ be the family of all cylinders in $R^N$, let $g_n : R^n \to R$ be Borel measurable functions ($n = 1, 2, \ldots$), let $\xi_n : R^N \to R$ be defined by the formula $\xi_n((t_i)_i) = t_n$, and put $\eta_n : R^N \to R$, $\eta_n = g_n \circ (\xi_1, \ldots, \xi_n)$, and $y_n : \mathcal{B}(R) \to \mathcal{F}$, $y_n = h_n \circ g_n^{-1}$. Then

$$P(\eta_n^{-1}(B)) = m(y_n(B))$$

for any $B \in \mathcal{B}(R)$.
Proof. Put $A = g_n^{-1}(B)$. Since $\eta_n = g_n \circ \pi_n$, Theorem 4.1 applied to $A$ gives

$$m(y_n(B)) = m(h_n(g_n^{-1}(B))) = P(\pi_n^{-1}(g_n^{-1}(B))) = P((g_n \circ \pi_n)^{-1}(B)) = P(\eta_n^{-1}(B)).$$
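The content of the translation formula can be illustrated numerically for ordinary random variables: the probability that a Borel function of the coordinates falls in a set $B$ is exactly the product measure of the preimage $g_n^{-1}(B)$ in $R^n$. The following Monte Carlo sketch is illustrative only (the choice of $g_2$, the interval, and the uniform coordinates are assumptions, not from the source):

```python
import random

# Illustration of the pushforward identity behind the translation formula:
# for eta = g o (xi_1, xi_2), the value P(eta^{-1}(B)) is the product
# measure of the preimage g^{-1}(B) in R^2.
random.seed(1)

def g(u1, u2):
    """A Borel measurable function g_2 : R^2 -> R (chosen for illustration)."""
    return u1 + u2

trials = 200_000
# Monte Carlo estimate of P(eta^{-1}((-inf, 1))) for independent
# uniform(0, 1) coordinates xi_1, xi_2.
hits = sum(g(random.random(), random.random()) < 1.0 for _ in range(trials))
estimate = hits / trials

# The preimage {(u1, u2) : u1 + u2 < 1} is a triangle of Lebesgue
# measure 1/2 in the unit square, so the exact value is 0.5.
print(estimate)  # approximately 0.5
```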
As an easy corollary of Theorem 4.2 we obtain a variant of the central limit theorem. In the classical case

$$\lim_{n \to \infty} P\left(\left\{\omega \, ; \, \frac{1}{\sigma\sqrt{n}} \sum_{i=1}^{n} (\xi_i(\omega) - a) < t \right\}\right) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-u^2/2} \, du.$$

Of course, we must define for observables the element

$$\left(\frac{1}{\sigma\sqrt{n}} \sum_{i=1}^{n} (x_i - a)\right)((-\infty, t)).$$

It is sufficient to put

$$g_n(u_1, \ldots, u_n) = \frac{1}{\sigma\sqrt{n}} \sum_{i=1}^{n} (u_i - a).$$
Theorem 4.3. Let $(x_n)_n$ be a sequence of square integrable, equally distributed, independent observables, $E(x_n) = a$, $\sigma^2(x_n) = \sigma^2$ ($n = 1, 2, \ldots$). Then

$$\lim_{n \to \infty} m\left(\left(\frac{1}{\sigma\sqrt{n}} \sum_{i=1}^{n} (x_i - a)\right)((-\infty, t))\right) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-u^2/2} \, du.$$
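As a purely numerical aside (not part of the original text), the classical limit that Theorem 4.3 transfers to observables can be checked by simulation. The sketch below uses i.i.d. uniform variables; the function names and parameters are hypothetical choices for the illustration:

```python
import math
import random

# Monte Carlo sketch of the classical central limit theorem:
# P({omega : (1/(sigma*sqrt(n))) * sum_{i=1}^n (xi_i(omega) - a) < t})
# tends to the standard normal distribution function as n grows.
random.seed(2)

def empirical_cdf(t, n=200, trials=10_000):
    """Estimate the left-hand side for i.i.d. uniform(0, 1) variables,
    which have mean a = 1/2 and standard deviation sigma = 1/sqrt(12)."""
    a, sigma = 0.5, 1.0 / math.sqrt(12.0)
    hits = 0
    for _ in range(trials):
        s = sum(random.random() - a for _ in range(n))
        if s / (sigma * math.sqrt(n)) < t:
            hits += 1
    return hits / trials

def phi(t):
    """Standard normal CDF, (1/sqrt(2*pi)) * integral of e^{-u^2/2} du."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

print(empirical_cdf(1.0), phi(1.0))  # both values are close to Phi(1) ~ 0.8413
```

Already for moderate $n$ the empirical distribution function of the normalized sum is close to $\Phi(t)$, which is what the theorem asserts in the limit.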