Fig. 9.4. The characteristic function (red) plotted with Gaussians optimally approximating it using the norms $L^1$ (blue), $L^2$ (cyan), $L^3$ (green), and $L^\infty$ (black). The bandwidth is $B = \pi$.
$$
g(x) = a \exp\left(-\frac{1}{2}\, x^T C^{-1} x\right)
$$

where $C$ is an $N \times N$ symmetric, positive definite$^5$ (covariance) matrix, and $x$ is the $N$-dimensional coordinate vector. The one-dimensional Gaussian is a special case of this function, with the major difference being that we now have more variance parameters (one $\sigma$ for each dimension and also covariances), which are encoded in $C$.
Two properties in particular speak in favor of Gaussians: separability and directional indifference (isotropy).
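As a minimal sketch of the definition above (the function name, matrix, and test values are illustrative, not from the text), the $N$-dimensional Gaussian can be evaluated directly with NumPy:

```python
import numpy as np

def gaussian_nd(x, C, a=1.0):
    """Evaluate g(x) = a * exp(-0.5 * x^T C^{-1} x).

    C is an N x N symmetric, positive definite (covariance) matrix,
    and x is the N-dimensional coordinate vector.
    """
    x = np.asarray(x, dtype=float)
    Cinv = np.linalg.inv(C)
    return a * np.exp(-0.5 * x @ Cinv @ x)

# A 2-D example with correlated dimensions (illustrative values):
C = np.array([[2.0, 0.5],
              [0.5, 1.0]])
print(gaussian_nd([0.0, 0.0], C))  # peak value a = 1 at the origin
```

The off-diagonal entries of `C` are the covariances the text mentions; setting them to zero gives an axis-aligned Gaussian with one variance per dimension.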
Separability
The separability property stems from the fact that $N$-D Gaussians can always be factored into $N$ one-dimensional Gaussians, in the following manner:

$$
g(x) = a \exp\left(-\sum_{i=1}^{N} \frac{(x^T v_i)^2}{2\sigma_i}\right)
     = a \prod_{i=1}^{N} \exp\left(-\frac{(y_i)^2}{2\sigma_i}\right) \tag{9.25}
$$
where $v_i$ is the $i$th unit-length eigenvector of $C$, and $\sigma_i$ is the $i$th eigenvalue of $C$.
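The factorization in Eq. (9.25) can be checked numerically; the covariance matrix and test point below are illustrative. With NumPy's `eigh`, the eigenvectors $v_i$ sit in the columns of $Q$, so the projections $y_i = x^T v_i$ are computed as `Q.T @ x`:

```python
import numpy as np

# Verify Eq. (9.25) for a 2-D example. As in the text, sigma_i
# denotes the i-th eigenvalue of C.
C = np.array([[2.0, 0.5],
              [0.5, 1.0]])
sigmas, Q = np.linalg.eigh(C)   # eigenvalues sigma_i; eigenvectors v_i in columns of Q

x = np.array([0.7, -1.3])       # an arbitrary test point

# Left-hand side: the joint N-D Gaussian (with a = 1)
lhs = np.exp(-0.5 * x @ np.linalg.inv(C) @ x)

# Right-hand side: product of N one-dimensional Gaussians in y_i = x^T v_i
y = Q.T @ x
rhs = np.prod(np.exp(-y**2 / (2.0 * sigmas)))

assert np.isclose(lhs, rhs)
```

The check works because $x^T C^{-1} x = \sum_i (x^T v_i)^2 / \sigma_i$ once $C$ is diagonalized, so the exponential of the sum splits into a product of one-dimensional factors.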
The vector $y = (y_1, y_2, \cdots, y_N)^T$ is given by $y = Qx$. Convolving an $N$-D image $f(x)$ with the $N$-D Gaussian can therefore be achieved by first rotating the image:
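A hedged sketch of this idea in 2-D, using SciPy (the angle and widths are illustrative, and `gaussian_filter1d` takes standard deviations rather than eigenvalues): rotate the image so the Gaussian's principal axes align with the pixel grid, blur each axis with a one-dimensional Gaussian, then rotate back.

```python
import numpy as np
from scipy import ndimage

def rotated_separable_blur(image, angle_deg, sigma1, sigma2):
    """Anisotropic Gaussian blur via separability (2-D sketch).

    Rotate so the principal axes align with the grid, apply one
    1-D Gaussian per axis, then rotate back.
    """
    rotated = ndimage.rotate(image, angle_deg, reshape=False, order=1)
    blurred = ndimage.gaussian_filter1d(rotated, sigma1, axis=0)
    blurred = ndimage.gaussian_filter1d(blurred, sigma2, axis=1)
    return ndimage.rotate(blurred, -angle_deg, reshape=False, order=1)

# Blur an impulse image to visualize the resulting oriented kernel.
image = np.zeros((64, 64))
image[32, 32] = 1.0
out = rotated_separable_blur(image, 30.0, 3.0, 1.0)
```

The payoff of separability is cost: $N$ one-dimensional convolutions of length $k$ replace one $N$-D convolution with $k^N$ taps.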
$^5$ Positive definite symmetric matrices can always be decomposed as $C = Q\Sigma Q^T$, where $Q$ is an orthogonal matrix containing the unit-length eigenvectors of $C$ in its columns, and $\Sigma$ is the diagonal matrix containing the eigenvalues, which are all positive. Orthogonal matrices fulfill $Q^T Q = I$, which expresses the orthogonality of the eigenvectors.