hence
$$h_\flat(C \times D) = x_\flat(C) \cdot y_\flat(D).$$
Analogously
$$h_\sharp(C \times D) = x_\sharp(C) \cdot y_\sharp(D).$$
If we define
$$h(A) = \big(h_\flat(A),\, 1 - h_\sharp(A)\big), \quad A \in \sigma(J_2),$$
then
$$h(C \times D) = \big(x_\flat(C) \cdot y_\flat(D),\, 1 - x_\sharp(C) \cdot y_\sharp(D)\big) = x(C) \cdot y(D).$$
Now we shall present two applications of the notion of the joint observable. The first is the definition of a function of a finite sequence of observables, e.g. their sum. In the classical case
$$\xi + \eta = g \circ T : \Omega \to R,$$
where g(u, v) = u + v, T(ω) = (ξ(ω), η(ω)). Hence ξ + η can be defined with the help of pre-images:
$$(\xi + \eta)^{-1} = T^{-1} \circ g^{-1} : B(R) \to S.$$
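On a finite Ω this composition can be checked directly. The following Python sketch is only an illustration with assumed toy values (Ω, ξ, η and B are not taken from the text); it verifies that (ξ + η)^{-1}(B) = T^{-1}(g^{-1}(B)).

```python
# A minimal finite sketch (assumed toy data, not from the text): Omega has
# three points, xi and eta are ordinary random variables, and we check
# (xi + eta)^{-1}(B) = T^{-1}(g^{-1}(B)).
Omega = ["w1", "w2", "w3"]
xi  = {"w1": 0, "w2": 1, "w3": 2}
eta = {"w1": 1, "w2": 1, "w3": 0}

g = lambda u, v: u + v                 # g(u, v) = u + v
T = lambda w: (xi[w], eta[w])          # T(w) = (xi(w), eta(w))

def preimage_sum(B):
    """(xi + eta)^{-1}(B), computed directly from the sum."""
    return {w for w in Omega if xi[w] + eta[w] in B}

def preimage_composed(B):
    """T^{-1}(g^{-1}(B)): pull B back through g, then through T."""
    return {w for w in Omega if g(*T(w)) in B}

B = {1, 2}                             # a Borel set, here just a finite set
assert preimage_sum(B) == preimage_composed(B) == {"w1", "w2", "w3"}
```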
Definition 3.4. Let x_1, ..., x_n : B(R) → F be observables, g : R^n → R be a measurable function. Then we define g(x_1, ..., x_n) : B(R) → F by the formula
$$g(x_1, \dots, x_n)(C) = h\big(g^{-1}(C)\big), \quad C \in B(R),$$
where h : B(R^n) → F is the joint observable of the observables x_1, ..., x_n.
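As an illustration of Definition 3.4, here is a minimal Python sketch under an assumed toy model (not taken from the text): F consists of fuzzy subsets of a finite Ω with pointwise product, and the observables x, y have finite support in R. It checks h(C × D) = x(C) · y(D) on a rectangle and evaluates g(x, y)(C) = h(g^{-1}(C)) for g(u, v) = u + v; all names and numerical values are assumptions of the sketch.

```python
from itertools import product

# Assumed toy model: F = fuzzy subsets of a finite Omega, represented as
# tuples of membership degrees; the product of events is pointwise.
Omega = range(3)

def prod_event(a, b):
    return tuple(ai * bi for ai, bi in zip(a, b))

def add_event(a, b):
    return tuple(ai + bi for ai, bi in zip(a, b))

def close(a, b, eps=1e-12):
    return all(abs(ai - bi) < eps for ai, bi in zip(a, b))

# Observables given by their atoms t -> x({t}); for each observable the
# atoms sum pointwise to the indicator of Omega.
x = {0: (0.25, 0.5, 1.0), 1: (0.75, 0.5, 0.0)}
y = {0: (0.5, 0.125, 0.25), 2: (0.5, 0.875, 0.75)}

def joint(A):
    """h(A) for a finite A in R^2: sum of x({u}).y({v}) over (u, v) in A."""
    out = (0.0,) * len(Omega)
    for (u, v) in A:
        if u in x and v in y:
            out = add_event(out, prod_event(x[u], y[v]))
    return out

# h(C x D) = x(C) . y(D) on rectangles:
C, D = {0}, {0, 2}
assert close(joint(set(product(C, D))),
             prod_event(x[0], add_event(y[0], y[2])))

# Definition 3.4 with g(u, v) = u + v:  g(x, y)(C) = h(g^{-1}(C)).
def g_of_xy(Cset):
    A = {(u, v) for u in x for v in y if u + v in Cset}
    return joint(A)

print(g_of_xy({2}))   # the fuzzy event "x + y lies in {2}"
```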
Example 3.1. x_1 + ... + x_n : B(R) → F is the observable defined by the formula
$$(x_1 + \dots + x_n)(C) = h\big(g^{-1}(C)\big),$$
where h : B(R^n) → F is the joint observable of x_1, ..., x_n, and g : R^n → R is defined by the equality g(u_1, ..., u_n) = u_1 + ... + u_n.
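Spelling out the n = 2 case of Example 3.1 for a half-line C = (−∞, t):
$$(x_1 + x_2)\big((-\infty, t)\big) = h\big(g^{-1}((-\infty, t))\big) = h\big(\{(u_1, u_2) \in R^2 : u_1 + u_2 < t\}\big).$$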
The second application of the joint observable is in the formulation of independence.

Definition 3.5. Let m : F → [0, 1] be a state, (x_n)_{n=1}^∞ be a sequence of observables, h_n : σ(J_n) → F be the joint observable of x_1, ..., x_n (n = 1, 2, ...). Then (x_n)_{n=1}^∞ is called independent, if
$$m\big(h_n(C_1 \times C_2 \times \dots \times C_n)\big) = m\big(x_1(C_1)\big) \cdot m\big(x_2(C_2)\big) \cdot \ldots \cdot m\big(x_n(C_n)\big)$$
for any n ∈ N and any C_1, ..., C_n ∈ σ(J).
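To see what the condition says in a familiar situation, the following Python sketch uses an assumed classical toy example (not from the text): Ω is the space of two independent fair dice, m is the uniform probability, x and y are the coordinate observables, and the joint observable acts on rectangles as h(C_1 × C_2) = x(C_1) · y(C_2), here the intersection of the two events; the product formula of Definition 3.5 then holds for n = 2.

```python
from fractions import Fraction
from itertools import product

# Assumed classical toy instance: two independent fair dice.
Omega = list(product(range(1, 7), repeat=2))
m = lambda event: Fraction(len(event), len(Omega))   # uniform state

x = lambda C1: {w for w in Omega if w[0] in C1}      # x(C1)
y = lambda C2: {w for w in Omega if w[1] in C2}      # y(C2)
h = lambda C1, C2: x(C1) & y(C2)                     # joint observable on rectangles

C1, C2 = {1, 2}, {3, 4, 5}
assert m(h(C1, C2)) == m(x(C1)) * m(y(C2))           # Definition 3.5 for n = 2
```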
Now let us return to the notion of mean value of an observable. In the classical case
$$E\big(g(\xi)\big) = \int_\Omega g \circ \xi \, dP = \int_R g \, dF,$$
where F is the distribution function of ξ.
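A discrete instance of this identity can be checked directly; the data in the following Python sketch are assumptions made for illustration only.

```python
from fractions import Fraction

# Assumed toy data: the Omega-side integral of g o xi against P equals the
# R-side integral of g against the distribution of xi.
P  = {"w1": Fraction(1, 2), "w2": Fraction(1, 4), "w3": Fraction(1, 4)}
xi = {"w1": 0, "w2": 1, "w3": 1}
g  = lambda t: t * t + 1

lhs = sum(g(xi[w]) * P[w] for w in P)                      # integral over Omega
dist = {}                                                  # law of xi: t -> P(xi = t)
for w in P:
    dist[xi[w]] = dist.get(xi[w], Fraction(0)) + P[w]
rhs = sum(g(t) * p for t, p in dist.items())               # integral over R (dF)
assert lhs == rhs == Fraction(3, 2)
```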
Definition 3.6. Let x : B(R) → F be an observable, m : F → [0, 1] be a state, g : R → R be a measurable function, F be the distribution function of x (F(t) = m(x((−∞, t)))). Then we