$$\mu : D \to \mathbb{R}^v, \quad \mu(x) = \sum_{j=1}^{N} \mu_j\, \varphi_j(x)$$

and the covariance function

$$C : D \times D \to \mathbb{R}^{v \times v}, \quad C_{kl}(x, y) = \begin{cases} \displaystyle\sum_{i=1}^{N} \sum_{j=1}^{N} C_{ij}\, \varphi_i(x)\, \varphi_j(y) & k = l \\ 0 & k \neq l \end{cases}$$
because of the independence of $\omega_k$, $\omega_l$ for $k \neq l$. It should be noted that the definition of an interpolation as above and the definition of a covariance function as usually done in machine learning, see Rasmussen and Williams [7], are actually equivalent, see Adler and Taylor [3, pp. 17-19].
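The interpolated mean and covariance functions can be sketched numerically. The following is a minimal illustration, not code from the text: it assumes a 1-D domain with piecewise-linear "hat" weight functions and made-up nodal data (`nodes`, `mu_nodes`, `C_nodes`, and `phi` are all hypothetical names).

```python
import numpy as np

# Hypothetical 1-D setting: N nodes on [0, 1] with piecewise-linear
# "hat" weight functions phi_i satisfying phi_i(p_j) = delta_ij.
nodes = np.linspace(0.0, 1.0, 5)            # positions p_1, ..., p_N
mu_nodes = np.sin(2 * np.pi * nodes)        # nodal means mu_i (synthetic)
C_nodes = 0.1 * np.exp(-10 * (nodes[:, None] - nodes[None, :]) ** 2)  # nodal covariances C_ij

def phi(x):
    """All N hat functions evaluated at the point x."""
    return np.clip(1.0 - np.abs(x - nodes) / (nodes[1] - nodes[0]), 0.0, None)

def mean(x):
    # mu(x) = sum_j mu_j phi_j(x)
    return phi(x) @ mu_nodes

def cov(x, y):
    # C(x, y) = sum_i sum_j C_ij phi_i(x) phi_j(y)
    return phi(x) @ C_nodes @ phi(y)
```

Because $\varphi_i(p_j) = \delta_{ij}$, the interpolation reproduces the given nodal means and covariances exactly at the positions themselves, and `cov` inherits the symmetry of the nodal covariance matrix.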
Finally, we describe the case of dependent data at given N positions. To simplify
notation, we formulate only the scalar case. We consider a closed domain $D \subset \mathbb{R}^d$ and $N$ positions $p_1, \ldots, p_N \in D$. At these positions, we are given $N$ uncertain scalar values with normal distributions

$$W_i \sim N(\mu_i, C_{ii}), \quad i = 1, \ldots, N,$$

with covariances^8

$$C_{ij} = E\bigl((W_i - E(W_i))(W_j - E(W_j))\bigr).$$
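When the covariances are not given, the expectation above can be replaced by its empirical estimate over sample fields (footnote 8 discusses the pitfalls). A minimal sketch with synthetic data; the variable names and sample sizes are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: S sample fields, each giving values at N positions.
S, N = 200, 6
true_mean = np.linspace(-1.0, 1.0, N)
samples = true_mean + rng.normal(size=(S, N))   # row s = one sample field

# Empirical version of C_ij = E((W_i - E(W_i)) (W_j - E(W_j))),
# with the usual 1/(S-1) normalization:
centered = samples - samples.mean(axis=0)
C = centered.T @ centered / (S - 1)

# np.cov implements the same estimator (columns as variables here).
assert np.allclose(C, np.cov(samples, rowvar=False))
```

Note that with far fewer sample fields than positions ($S < N$), this estimate is rank-deficient, which connects directly to the dimension reduction discussed next.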
The interpolation is again given by $N$ deterministic weight functions $\varphi_i : D \to \mathbb{R}$, $i = 1, \ldots, N$, with $\varphi_i(p_j) = \delta_{ij}$ with the Kronecker $\delta$. The interesting point is that the dependence of the uncertain values typically reduces the number of independent uncertain parameters. Mathematically, this means that the (symmetric) covariance matrix $C$ has only $M \le N$ independent rows. One can find them by principal component analysis.^9 Let $\lambda_1, \ldots, \lambda_M \in \mathbb{R}$ be the non-zero eigenvalues of $C$ and $e_1, \ldots, e_M \in \mathbb{R}^N$ the corresponding eigenvectors. Let $\Lambda \in \mathbb{R}^{M \times M}$ be the diagonal matrix of the non-zero eigenvalues $\lambda_1, \ldots, \lambda_M$. We model our probability space via $\Omega = \mathbb{R}^M$, the Borel algebra $\mathcal{B}(\Omega)$, and $P \sim N(0, \Lambda)$ as probability measure. This probability space consists of $M$ independent normally distributed scalar parameters with mean 0. The uncertain field $f$ is defined as
8 In practice, the covariances are either given or have to be estimated from several given sample fields. Obviously, this estimation might be a challenge in its own right, as the number of positions is almost certainly larger than the number of sample fields. Pöthkow et al. [6] made some comments in this direction.
9 In practice, there will be eigenvalues very close to zero in the estimated covariance matrix, which one might want to set to zero. Again, this is an obvious challenge outside the scope of this article.