as the probability distribution. This means that we assume two normally distributed,
independent real parameters that will determine our uncertain field. In this simple
case, the two random variables
$$W_1 : \Omega \to \mathbb{R}, \quad W_1(\omega) = \omega_1, \qquad W_2 : \Omega \to \mathbb{R}, \quad W_2(\omega) = \omega_2$$
determine the values at the two given positions x 1 and x 2 , respectively. It is natural
to define the linearly interpolated uncertain field f on the real line by
$$f : \Omega \to \mathbb{R}^D, \qquad (f(\omega))_x := \omega_1 (1 - x) + \omega_2 x.$$
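Such a field is straightforward to sample. The following sketch draws realisations of $f_\omega$ on a grid; the concrete values of $\mu_1, \mu_2, \sigma_1, \sigma_2$ are illustrative assumptions, as the text does not fix them:

```python
# Monte Carlo sketch of the linearly interpolated uncertain field
# (f(omega))_x = omega_1 * (1 - x) + omega_2 * x.
# The parameter values below are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2 = 0.0, 1.0        # assumed means of omega_1, omega_2
sigma1, sigma2 = 0.5, 0.3  # assumed standard deviations

xs = np.linspace(0.0, 1.0, 5)  # positions in D = [0, 1]
omega = rng.normal([mu1, mu2], [sigma1, sigma2], size=(10_000, 2))

# Each row is one realisation f_omega evaluated on the grid xs.
fields = omega[:, [0]] * (1.0 - xs) + omega[:, [1]] * xs
print(fields.shape)  # (10000, 5)
```

At $x = 0$ every realisation equals $\omega_1$ and at $x = 1$ it equals $\omega_2$, matching the interpolation property noted below.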
With the notation
$$f_\omega(x) = \omega_1 (1 - x) + \omega_2 x,$$
it becomes pretty clear that we are really defining a linear interpolation of the values
at 0 and 1 on the real line for each given $\omega$. However, the whole point of the chapter
is that we are really defining a Gaussian process! The short argument is that this
follows from slightly more abstract arguments of Adler and Taylor [ 3 , pp. 17-19].
However, some basic computations might improve understanding of this point: At
every position $x \in D$, we have the random variable
$$f_x(\omega) = \omega_1 (1 - x) + \omega_2 x.$$
As $\omega_1, \omega_2$ are independent Gaussian variables, this is a Gaussian variable with
expectation
$$\mu(x) = E(f_x(\omega)) = \mu_1 (1 - x) + \mu_2 x$$
and variance
$$\sigma^2(x) = E\big((f_x(\omega) - \mu(x))^2\big) = \sigma_1^2 (1 - x)^2 + \sigma_2^2 x^2.$$
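These two formulas can be checked empirically. The sketch below compares sample mean and variance of $f_x$ against the predictions, again with assumed illustrative parameters:

```python
# Empirical check (sketch) of mu(x) = mu1*(1-x) + mu2*x and
# sigma^2(x) = sigma1^2*(1-x)^2 + sigma2^2*x^2.
# Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
mu1, mu2, sigma1, sigma2 = 0.0, 1.0, 0.5, 0.3
x = 0.25
n = 200_000

w1 = rng.normal(mu1, sigma1, n)
w2 = rng.normal(mu2, sigma2, n)
fx = w1 * (1 - x) + w2 * x  # samples of the random variable f_x

mean_pred = mu1 * (1 - x) + mu2 * x
var_pred = sigma1**2 * (1 - x) ** 2 + sigma2**2 * x**2
print(abs(fx.mean() - mean_pred) < 0.01)  # True
print(abs(fx.var() - var_pred) < 0.01)    # True
```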
For the covariance function $C : D \times D \to \mathbb{R}$, we have
$$\begin{aligned}
C(x, y) &= E\big((f_x(\omega) - \mu(x))(f_y(\omega) - \mu(y))\big) \\
&= E\big(((\omega_1 - \mu_1)(1 - x) + (\omega_2 - \mu_2) x)\,((\omega_1 - \mu_1)(1 - y) + (\omega_2 - \mu_2) y)\big) \\
&= (1 - x)(1 - y)\, E\big((\omega_1 - \mu_1)^2\big) + x y \, E\big((\omega_2 - \mu_2)^2\big) \\
&= (1 - x)(1 - y)\, \sigma_1^2 + x y \, \sigma_2^2
\end{aligned}$$
because of the independence of $\omega_1, \omega_2$, i.e. $E((\omega_1 - \mu_1)(\omega_2 - \mu_2)) = 0$. For $\sigma_1 = \sigma_2$,
this coincides with the construction by Pöthkow and Hege [ 5 ].
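The covariance formula can also be verified by Monte Carlo. The sketch below, with assumed illustrative parameters, compares the sample covariance of $f_x$ and $f_y$ with $(1-x)(1-y)\sigma_1^2 + xy\,\sigma_2^2$:

```python
# Monte Carlo check (sketch) of the covariance formula
# C(x, y) = (1-x)(1-y)*sigma1^2 + x*y*sigma2^2.
# Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
mu1, mu2, sigma1, sigma2 = 0.0, 1.0, 0.5, 0.3
x, y = 0.2, 0.7
n = 500_000

# The same draw (w1, w2) feeds both positions: f_x and f_y are
# correlated because they share the endpoint values.
w1 = rng.normal(mu1, sigma1, n)
w2 = rng.normal(mu2, sigma2, n)
fx = w1 * (1 - x) + w2 * x
fy = w1 * (1 - y) + w2 * y

cov_emp = np.cov(fx, fy)[0, 1]
cov_pred = (1 - x) * (1 - y) * sigma1**2 + x * y * sigma2**2
print(abs(cov_emp - cov_pred) < 0.01)  # True
```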