Image Processing Reference
In-Depth Information
$$
\mathcal{B}_M = \{\psi_1, \psi_2, \cdots, \psi_M\}
\tag{15.2}
$$
yet to be specified, as
$$
\mathbf{f}_1 = \begin{pmatrix} f_1(1)\\ f_1(2)\\ f_1(3)\\ \vdots\\ f_1(M) \end{pmatrix},\quad
\mathbf{f}_2 = \begin{pmatrix} f_2(1)\\ f_2(2)\\ f_2(3)\\ \vdots\\ f_2(M) \end{pmatrix},\ \cdots,\quad
\mathbf{f}_k = \begin{pmatrix} f_k(1)\\ f_k(2)\\ f_k(3)\\ \vdots\\ f_k(M) \end{pmatrix},\ \cdots,\quad
\mathbf{f}_K = \begin{pmatrix} f_K(1)\\ f_K(2)\\ f_K(3)\\ \vdots\\ f_K(M) \end{pmatrix}
\tag{15.3}
$$
where $f_k(m)$ is the $m$th component of the vector $\mathbf{f}_k$. Each vector $\mathbf{f}_k$ can then be written as
$$
\mathbf{f}_k = \sum_{m=1}^{M} f_k(m)\,\psi_m
\tag{15.4}
$$
By using all $M$ basis vectors, we can thus represent any of the observed $\mathbf{f}_k$ without error. This remains true even if we choose another basis set containing $M$ orthogonal vectors, as long as we include all $M$ basis vectors in the expansion in Eq. (15.4).
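As a concrete check of Eq. (15.4), the sketch below (our own illustration, assuming NumPy; none of the names come from the text) builds an arbitrary complete orthonormal basis of $\mathbb{R}^M$ from a QR factorization and verifies that expanding a random vector in all $M$ basis vectors reproduces it exactly, up to floating-point round-off:

```python
import numpy as np  # assumed dependency; any linear-algebra library would do

rng = np.random.default_rng(0)
M = 8  # dimension of the observation vectors (illustrative value)

# An arbitrary complete orthonormal basis of R^M: the Q factor of the QR
# factorization of a random matrix. Row psi[m] plays the role of psi_{m+1}.
Q, _ = np.linalg.qr(rng.standard_normal((M, M)))
psi = Q.T

f = rng.standard_normal(M)   # an observed vector f_k
coeffs = psi @ f             # f_k(m) = <f_k, psi_m> for an orthonormal basis
f_rec = coeffs @ psi         # Eq. (15.4): sum of f_k(m) * psi_m over all M terms

# With the full basis, the expansion is exact up to round-off.
print(np.max(np.abs(f - f_rec)))
```

Any other orthonormal basis (e.g. the canonical one, or a DCT basis) would reconstruct `f` equally exactly, which is the point of the paragraph above.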
Does the basis we choose really matter? Yes, it does, because in applications we cannot always afford to choose complete bases of $M$ vectors for a variety of reasons, including that $M$ can be too large. One must then expand each of the observed $\mathbf{f}_k$ by using fewer vectors:
$$
\hat{\mathbf{f}}_k = \sum_{m=1}^{N} f_k(m)\,\psi_m, \quad \text{where } N < M
\tag{15.5}
$$
Note that the only difference between Eq. (15.5) and Eq. (15.4) is the number of terms in the summation, $N$ and $M$, respectively. All terms in Eq. (15.5) exist in Eq. (15.4), but not vice versa. The vectors

$$
\hat{\mathbf{f}}_1, \hat{\mathbf{f}}_2, \cdots, \hat{\mathbf{f}}_k, \cdots, \hat{\mathbf{f}}_K
\tag{15.6}
$$
only approximate the corresponding observation, because the approximation error

$$
\mathbf{f}_k - \hat{\mathbf{f}}_k
\tag{15.7}
$$

is usually not zero.
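A minimal sketch of the truncated expansion in Eq. (15.5) and the resulting error of Eq. (15.7), again assuming NumPy and an arbitrary orthonormal basis of our own choosing:

```python
import numpy as np  # assumed dependency

rng = np.random.default_rng(1)
M, N = 8, 3          # keep only N < M basis vectors (illustrative values)

# Arbitrary orthonormal basis; row psi[m] plays the role of psi_{m+1}.
Q, _ = np.linalg.qr(rng.standard_normal((M, M)))
psi = Q.T

f = rng.standard_normal(M)   # an observed vector f_k
coeffs = psi[:N] @ f         # only the first N expansion coefficients f_k(m)
f_hat = coeffs @ psi[:N]     # Eq. (15.5): truncated expansion

err = f - f_hat              # approximation error, Eq. (15.7)
print(np.linalg.norm(err))   # generically nonzero for N < M
```

For a generic observation, the error norm is strictly positive: the truncation discards the components of `f` along the $M - N$ omitted basis vectors.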
Here, we are interested in finding an orthonormal (ON) basis $\mathcal{B}_N$:

$$
\mathcal{B}_N = \{\psi_1, \cdots, \psi_N\}, \quad \text{with } \langle \psi_i, \psi_j \rangle = \delta_{ij},
\tag{15.8}
$$
that is “most economical” among all possible ON basis sets. Note that $\mathcal{B}_N$ is a “truncated” $\mathcal{B}_M$ in that it has fewer basis vectors. Economical means that, despite the fact that the basis $\mathcal{B}_N$ has fewer basis vectors than the full set, it should still represent $\mathcal{O}$ with a smaller basis truncation error,
$$
\frac{1}{K} \sum_{k=1}^{K} \left\| \mathbf{f}_k - \hat{\mathbf{f}}_k \right\|^2
\tag{15.9}
$$
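To make “economical” concrete, the sketch below evaluates the mean truncation error of Eq. (15.9) for two different ON bases of the same observations: an arbitrary one, and the eigenvectors of the scatter matrix $\mathbf{F}^\top\mathbf{F}$ sorted by decreasing eigenvalue. The data, the basis choices, and all names are our own assumptions for illustration, not the text's construction:

```python
import numpy as np  # assumed dependency

rng = np.random.default_rng(2)
K, M, N = 100, 8, 3   # illustrative sizes

# K observed vectors whose components are strongly correlated
# (approximately rank 2, plus a little noise).
F = rng.standard_normal((K, 2)) @ rng.standard_normal((2, M))
F += 0.05 * rng.standard_normal((K, M))

def mean_truncation_error(F, psi, N):
    """Eq. (15.9): average ||f_k - f_hat_k||^2 over the K observations,
    where f_hat_k keeps only the first N vectors of the ON basis psi."""
    coeffs = F @ psi[:N].T       # expansion coefficients f_k(m), m <= N
    F_hat = coeffs @ psi[:N]     # truncated reconstructions, Eq. (15.5)
    return np.mean(np.sum((F - F_hat) ** 2, axis=1))

# Basis 1: an arbitrary orthonormal basis.
Q, _ = np.linalg.qr(rng.standard_normal((M, M)))
e_random = mean_truncation_error(F, Q.T, N)

# Basis 2: eigenvectors of the scatter matrix F^T F, sorted by
# decreasing eigenvalue (eigh returns them in ascending order).
w, V = np.linalg.eigh(F.T @ F)
e_eig = mean_truncation_error(F, V[:, ::-1].T, N)

print(e_eig, e_random)   # the eigenbasis yields the smaller error
```

The eigenbasis is adapted to the data, so its first $N$ vectors capture far more of each $\mathbf{f}_k$ than $N$ arbitrary orthonormal vectors do; quantifying and exploiting exactly this is what a “most economical” basis is about.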