and Williams [7]. This section and the rest of the article will focus on Gaussian processes.
As before, let $(\Omega, \mathcal{S}, P)$ be a known probability space. Let $D \subset \mathbb{R}^d$, $d = 1, 2,$ or $3$, be the known domain of our field and let $\mathbb{R}^v$ be the set of potential values of our field, i.e. $v = 1$ means a scalar field, $v = d$ means a vector field, and $v = d \times d$ means a tensor field of second order. A measurable, separable map $f\colon \Omega \to (\mathbb{R}^v)^D$ is called a Gaussian random field on $D$ if for all finite tuples $(x_1, \ldots, x_n)$ of points in $D$ the random variable $(f_{x_1}, \ldots, f_{x_n})$ is a $v \times n$-dimensional Gaussian random variable.
The function $\mu\colon D \to \mathbb{R}^v$, $\mu(x) = E(f_x)$, with expectation $E$, is called the expectation function. The map $C\colon D \times D \to \mathbb{R}^{v \times v}$,

$$C(x, y) := E\big((f_x - E(f_x))(f_y - E(f_y))^\top\big),$$

is called the covariance function.
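To make these two objects concrete, consider the toy scalar field ($v = 1$) $f_x(\omega) = \omega\, x$ on $D = \mathbb{R}$ with $\omega \sim N(0, 1)$, for which $\mu(x) = 0$ and $C(x, y) = x y$. The following Python sketch is not from the source; it is a minimal Monte Carlo check of these two formulas, with all names our own:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar Gaussian field on D = R: f_x(omega) = omega * x, omega ~ N(0, 1).
# Analytically: mu(x) = E(f_x) = 0 and C(x, y) = x * y.
omega = rng.standard_normal(100_000)  # samples of omega

def field(x):
    """Evaluate f_x(omega) at one point x, for all sampled omegas."""
    return omega * x

x, y = 0.5, 2.0
fx, fy = field(x), field(y)

mu_hat = fx.mean()                                    # estimate of mu(x), ~ 0
c_hat = ((fx - fx.mean()) * (fy - fy.mean())).mean()  # estimate of C(x, y), ~ 1.0

print(f"mu({x}) ~ {mu_hat:.3f}, C({x}, {y}) ~ {c_hat:.3f}")
```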
For any function $\mu\colon D \to \mathbb{R}^v$ and any non-negative definite function $C\colon D \times D \to \mathbb{R}^{v \times v}$, there is a unique Gaussian process with expectation function $\mu$ and covariance function $C$; see Adler and Taylor [2, p. 5]. This statement is the basis behind the design and use of Gaussian processes in machine learning as described by Rasmussen and Williams [7]. However, we think that an approach starting with interpolation is more appropriate to visualization, as this is the usual way of defining continuous fields from discrete data in our discipline.
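On a finite tuple of points, this existence statement is directly constructive: evaluating $\mu$ and $C$ there gives the mean vector and covariance matrix of an ordinary multivariate Gaussian, which can be sampled. A minimal Python sketch for a scalar field, where the expectation function and the squared-exponential covariance are our own illustrative choices (the source prescribes no particular $\mu$ or $C$):

```python
import numpy as np

rng = np.random.default_rng(1)

def mu(x):
    """An arbitrary expectation function (illustrative choice)."""
    return np.sin(x)

def C(x, y):
    """Squared-exponential covariance, a standard non-negative definite kernel."""
    return np.exp(-0.5 * (x - y) ** 2)

# Finite tuple (x_1, ..., x_n) in D: the process marginal there is the
# n-dimensional Gaussian with mean mu(x_i) and covariance C(x_i, x_j).
xs = np.linspace(0.0, 5.0, 50)
mean = mu(xs)
cov = C(xs[:, None], xs[None, :]) + 1e-10 * np.eye(len(xs))  # jitter for numerics

samples = rng.multivariate_normal(mean, cov, size=3)  # three field realizations
print(samples.shape)  # (3, 50)
```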
9.4 Linear Interpolation on the Line as a Gaussian Process
This section considers a very simple example. We take the real line as domain, i.e. $D = \mathbb{R}$. We assume that we are given two uncorrelated Gaussian distributions of scalar values $W_1 \sim N(\mu_1, \sigma_1)$ and $W_2 \sim N(\mu_2, \sigma_2)$ at the points $x_1 = 0$ and $x_2 = 1$ as data. We want to describe a simple linear interpolation. Since the two values are uncorrelated, we take $\Omega = \mathbb{R}^2$ as parameter space, the Borel algebra $\mathcal{B}(\mathbb{R}^2)$ as $\sigma$-algebra and the 2-dimensional normal distribution $P = N(\mu, C)$ with

$$\mu = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \qquad C = \begin{pmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{pmatrix}.$$
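The excerpt stops at this choice of $\mu$ and $C$; the natural continuation, which we assume here, is to define the field itself by linear interpolation of the two data values, $f_x(\omega) = (1 - x)\,\omega_1 + x\,\omega_2$ for $x \in [0, 1]$. Under that assumption (and reading $\sigma_i$ as variances, matching the diagonal of $C$), a short Python sketch checks the induced expectation and variance:

```python
import numpy as np

rng = np.random.default_rng(2)

# Data: uncorrelated W1 ~ N(mu1, s1) at x1 = 0 and W2 ~ N(mu2, s2) at x2 = 1.
mu1, mu2, s1, s2 = 1.0, 3.0, 0.5, 0.2  # illustrative values; s_i read as variances
mean = np.array([mu1, mu2])
Cov = np.diag([s1, s2])

omega = rng.multivariate_normal(mean, Cov, size=100_000)  # draws from P = N(mu, C)

def f(x, omega):
    """Assumed interpolating field: f_x(omega) = (1 - x) * w1 + x * w2."""
    return (1.0 - x) * omega[:, 0] + x * omega[:, 1]

x = 0.25
vals = f(x, omega)
# Expectation interpolates linearly; variance is (1 - x)^2 * s1 + x^2 * s2.
print(vals.mean(), (1 - x) * mu1 + x * mu2)         # both ~ 1.5
print(vals.var(), (1 - x) ** 2 * s1 + x ** 2 * s2)  # both ~ 0.294
```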