Image Processing Reference
In-Depth Information
We now define the total cost function as a linear combination of the two cost functions:

$$J = J_1 + \alpha J_2 \qquad (6.82)$$
where $\alpha \ge 0$. This function measures both the smoothness and the goodness of the fit. By changing the value of the parameter $\alpha$, the relative importance of the smoothness and the goodness of fit can be adjusted. The vector $\hat{f}$ that minimizes $J$ will be the best fit for a given $\alpha$ and is obtained by setting the gradient of $J$ with respect to $\hat{f}$ equal to zero:
$$\frac{\partial J_1}{\partial \hat{f}} = 2(\hat{f} - f), \qquad \frac{\partial J_2}{\partial \hat{f}} = 2Q\hat{f} \qquad (6.83)$$

$$\frac{\partial J}{\partial \hat{f}} = 2(\hat{f} - f) + 2\alpha Q\hat{f} = 0$$
The solution is

$$\hat{f} = (I_n + \alpha Q)^{-1} f \qquad (6.84)$$

where $I_n$ is an $n \times n$ identity matrix.
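As a minimal sketch of Equation 6.84, the snippet below solves the 1-D smoothing problem in NumPy. It assumes $Q = D^T D$, where $D$ is the $(n-2) \times n$ second-difference matrix, so that $\hat{f}^T Q \hat{f}$ is the sum of squared second differences of $\hat{f}$ (the usual form of the smoothness cost $J_2$); the function name `smooth_1d` is illustrative, not from the text.

```python
import numpy as np

def smooth_1d(f, alpha):
    """Solve f_hat = (I_n + alpha * Q)^(-1) f  (Equation 6.84).

    Assumes Q = D^T D, with D the (n-2) x n second-difference
    matrix, so f_hat^T Q f_hat is the smoothness cost J2.
    """
    n = len(f)
    # Second-difference operator: (D f)[i] = f[i] - 2 f[i+1] + f[i+2]
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    Q = D.T @ D
    # Solve (I_n + alpha * Q) f_hat = f rather than forming the inverse
    return np.linalg.solve(np.eye(n) + alpha * Q, f)

# A noisy ramp: larger alpha pulls the fit toward a straight line.
f = np.array([0.0, 1.2, 1.9, 3.1, 3.9, 5.2])
f_hat = smooth_1d(f, alpha=1.0)
```

With $\alpha = 0$ the solve returns the data unchanged, and increasing $\alpha$ trades goodness of fit for smoothness, exactly as described above.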
6.6.2.2 Two-Dimensional Smoothing Algorithm
In the 2-D case, we are trying to approximate a function of two variables, x and y. The values of this function at evenly spaced points in the x-y plane are stored in the $n \times m$ matrix $f$. Let $\hat{f}$ be the $n \times m$ matrix that is the smooth approximation of $f$ that we are trying to find. In this case, $J_1$ is a measure of the distance between $\hat{f}$ and $f$ and is given by
$$J_1 = \sum_{i=1}^{n} \sum_{j=1}^{m} \left( \hat{f}_{ij} - f_{ij} \right)^2 = \| \hat{f} - f \|_F^2 = \mathrm{Tr}\!\left[ (\hat{f} - f)(\hat{f} - f)^T \right] \qquad (6.85)$$
where F stands for the Frobenius norm (Equation 3.153). The second cost function $J_2$, which is a measure of the smoothness of $\hat{f}$, is given as
$$J_2 = \sum_{i=1}^{n-2} \sum_{j=1}^{m} \left( \hat{f}_{ij} - 2\hat{f}_{(i+1)j} + \hat{f}_{(i+2)j} \right)^2 + \sum_{i=1}^{n} \sum_{j=1}^{m-2} \left( \hat{f}_{ij} - 2\hat{f}_{i(j+1)} + \hat{f}_{i(j+2)} \right)^2 \qquad (6.86)$$
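The two 2-D cost functions can be sketched directly with array slicing; the helper names `j1` and `j2` below are illustrative, not from the text, and the ramp test data is an assumption for demonstration.

```python
import numpy as np

def j1(f_hat, f):
    # Squared Frobenius distance between fit and data (Equation 6.85)
    return np.sum((f_hat - f) ** 2)

def j2(f_hat):
    # Sum of squared second differences down the columns (index i)
    # and across the rows (index j), as in Equation 6.86
    d_i = f_hat[:-2, :] - 2 * f_hat[1:-1, :] + f_hat[2:, :]
    d_j = f_hat[:, :-2] - 2 * f_hat[:, 1:-1] + f_hat[:, 2:]
    return np.sum(d_i ** 2) + np.sum(d_j ** 2)

# A planar ramp is perfectly smooth: all second differences vanish,
# so its smoothness cost is zero.
ramp = np.arange(12.0).reshape(3, 4)
```

A perfect fit drives $J_1$ to zero, and any matrix that is linear in both $i$ and $j$ drives $J_2$ to zero, so minimizing $J = J_1 + \alpha J_2$ again balances fidelity against curvature.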