edge rather than across it. This last point is indeed the motivation behind the steering KR framework [14], which we will review in Section 2.2.
Returning to the optimization problem (4), regardless of the regression order and
the dimensionality of the regression function, we can rewrite it as a weighted least
squares problem:
\[
\hat{\mathbf{b}} = \arg\min_{\mathbf{b}} \; (\mathbf{y} - \mathbf{X}\mathbf{b})^{T} \mathbf{K} \, (\mathbf{y} - \mathbf{X}\mathbf{b}), \tag{7}
\]
where
\[
\mathbf{y} = \left[ y_{1}, y_{2}, \cdots, y_{P} \right]^{T}, \qquad
\mathbf{b} = \left[ \beta_{0}, \boldsymbol{\beta}_{1}^{T}, \boldsymbol{\beta}_{2}^{T}, \cdots, \boldsymbol{\beta}_{N}^{T} \right]^{T}, \tag{8}
\]
\[
\mathbf{K} = \operatorname{diag} \left[ K_{\mathbf{H}}(\mathbf{x}_{1} - \mathbf{x}), \, K_{\mathbf{H}}(\mathbf{x}_{2} - \mathbf{x}), \, \cdots, \, K_{\mathbf{H}}(\mathbf{x}_{P} - \mathbf{x}) \right], \tag{9}
\]
and
\[
\mathbf{X} =
\begin{bmatrix}
1 & (\mathbf{x}_{1} - \mathbf{x})^{T} & \operatorname{vech}^{T}\!\left\{ (\mathbf{x}_{1} - \mathbf{x})(\mathbf{x}_{1} - \mathbf{x})^{T} \right\} & \cdots \\
1 & (\mathbf{x}_{2} - \mathbf{x})^{T} & \operatorname{vech}^{T}\!\left\{ (\mathbf{x}_{2} - \mathbf{x})(\mathbf{x}_{2} - \mathbf{x})^{T} \right\} & \cdots \\
\vdots & \vdots & \vdots & \vdots \\
1 & (\mathbf{x}_{P} - \mathbf{x})^{T} & \operatorname{vech}^{T}\!\left\{ (\mathbf{x}_{P} - \mathbf{x})(\mathbf{x}_{P} - \mathbf{x})^{T} \right\} & \cdots
\end{bmatrix}, \tag{10}
\]
with “diag” defining a diagonal matrix. Using the notation above, the optimization (4) provides the weighted least squares estimator
\[
\hat{\mathbf{b}} = \left( \mathbf{X}^{T} \mathbf{K} \mathbf{X} \right)^{-1} \mathbf{X}^{T} \mathbf{K} \mathbf{y}
= \begin{bmatrix} \mathbf{W}_{N} \\ \mathbf{W}_{N, x_{1}} \\ \mathbf{W}_{N, x_{2}} \\ \vdots \end{bmatrix} \mathbf{y}, \tag{11}
\]
where \(\mathbf{W}_{N}\) is a \(1 \times P\) vector that contains filter coefficients, which we call the equivalent kernel weights, and \(\mathbf{W}_{N, x_{1}}\) and \(\mathbf{W}_{N, x_{2}}\) are also \(1 \times P\) vectors that compute the gradients along the \(x_{1}\)- and \(x_{2}\)-directions at the position of interest \(\mathbf{x}\). The estimate of the signal (i.e. pixel) value of interest \(\beta_{0}\) is given by a weighted linear combination of the nearby samples:
\[
\hat{z}(\mathbf{x}) = \hat{\beta}_{0} = \mathbf{e}_{1}^{T} \hat{\mathbf{b}} = \mathbf{W}_{N} \mathbf{y}
= \sum_{i=1}^{P} W_{i}(K, \mathbf{H}, N, \mathbf{x}_{i} - \mathbf{x}) \, y_{i},
\qquad \sum_{i=1}^{P} W_{i}(\cdot) = 1, \tag{12}
\]
where \(\mathbf{e}_{1}\) is a column vector with the first element equal to one and the rest equal to zero, and we call \(W_{i}\) the equivalent kernel weight function for \(y_{i}\) (q.v. [14] or [21] for more detail).
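To make the construction in (9)-(12) concrete, the short NumPy sketch below computes the equivalent kernel weights \(\mathbf{W}_{N}\) of (11) and the estimate (12) for a second-order (N = 2) regression of 2-D samples. It is only an illustration of the formulas, not an implementation from [14]: the Gaussian choice for \(K_{\mathbf{H}}\), the global smoothing matrix H, and the function names are assumptions made here for the example.

import numpy as np

def vech(A):
    # Half-vectorization: stack the lower-triangular entries of a symmetric matrix.
    return A[np.tril_indices(A.shape[0])]

def equivalent_kernel_weights(xs, x, H):
    # Build the rows of X (eq. (10)) and the diagonal of K (eq. (9)) for every
    # sample position x_i, then return the first row W_N of (X^T K X)^{-1} X^T K
    # as in eq. (11).  Assumes a Gaussian kernel K_H; its normalization constant
    # cancels inside (11), so its exact value does not affect the weights.
    H_inv = np.linalg.inv(H)
    det_H = np.linalg.det(H)
    rows, weights = [], []
    for xi in xs:
        d = xi - x                                        # x_i - x
        u = H_inv @ d
        weights.append(np.exp(-0.5 * (u @ u)) / (2.0 * np.pi * det_H))
        # i-th row of X: [1, (x_i - x)^T, vech^T{(x_i - x)(x_i - x)^T}]
        rows.append(np.concatenate(([1.0], d, vech(np.outer(d, d)))))
    X = np.asarray(rows)                                  # P x 6 for N = 2 in 2-D
    K = np.diag(weights)
    W = np.linalg.solve(X.T @ K @ X, X.T @ K)             # all rows stacked in (11)
    return W[0]                                           # W_N: weights producing beta_0

def classic_kernel_regression(xs, ys, x, H):
    # Estimate z(x) = beta_0 as the weighted combination of nearby samples, eq. (12).
    return equivalent_kernel_weights(xs, x, H) @ np.asarray(ys)

Note that the 6-by-6 system in the sketch is solvable only when enough samples (at least six, in general position) receive nonzero kernel weight; in practice the window implied by H is chosen accordingly.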
For example, for zeroth-order regression (i.e. \(N = 0\)), the estimator (12) becomes
\[
\hat{z}(\mathbf{x}) = \hat{\beta}_{0}
= \frac{\sum_{i=1}^{P} K_{\mathbf{H}}(\mathbf{x}_{i} - \mathbf{x}) \, y_{i}}
       {\sum_{i=1}^{P} K_{\mathbf{H}}(\mathbf{x}_{i} - \mathbf{x})}, \tag{13}
\]
which is the so-called
Nadaraya-Watson
estimator (NWE) [22].
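The zeroth-order case is simple enough to write down directly: with N = 0 the matrix X in (10) reduces to a column of ones, and the equivalent kernel weights become the normalized kernel values themselves, which therefore sum to one as required in (12). The sketch below assumes, again only for illustration, a Gaussian \(K_{\mathbf{H}}\); its normalization constant cancels between the numerator and denominator of (13) and is omitted.

import numpy as np

def nadaraya_watson(xs, ys, x, H):
    # Zeroth-order kernel regression, eq. (13): a normalized,
    # kernel-weighted average of the samples y_i.
    H_inv = np.linalg.inv(H)
    diffs = np.asarray(xs) - x                  # each row holds x_i - x
    u = diffs @ H_inv.T                         # H^{-1}(x_i - x), one row per sample
    w = np.exp(-0.5 * np.sum(u * u, axis=1))    # unnormalized Gaussian K_H(x_i - x)
    return (w @ np.asarray(ys)) / np.sum(w)

The NWE is thus the simplest member of the family; higher regression orders reuse the same weighted least squares machinery of (11)-(12) with the richer model matrix X of (10).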